By Fahad H

Facebook to WaPo: No, we are not rating the trustworthiness of users

On August 22, The Washington Post reported Facebook has spent the last year building, and is now using, a rating system that assigns users a reputation score as part of its efforts to fight fake news.

According to that report, Facebook has implemented a trustworthiness score, ranging from zero to 1, to measure the credibility of users who report news as false.

The Washington Post report states, “The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.”

But Facebook says the report got it wrong, from the headline — “Facebook is rating the trustworthiness of its users on a scale from zero to 1” — to the notion that it employs a reputation score for users.

From a Facebook spokesperson:

The idea that we have a centralized “reputation” score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading. What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.

Tessa Lyons, Facebook’s News Feed product manager, who is quoted in the Washington Post story, does clarify in the report that users are not assigned a single unified reputation score. Rather, the score is one measurement among thousands of behavioral clues Facebook takes into account when reviewing spam and fake-news reports.

“If someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true,” Lyons told The Washington Post via email.
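In other words, each user’s flag is weighted by how often that user’s past flags were later confirmed by fact-checkers. The sketch below is purely illustrative: the Reporter class, the Laplace-smoothed hit rate, and the weighted sum are assumptions made for this article, not anything Facebook has published. It simply shows how a score between zero and one could let a few historically accurate reporters outweigh many indiscriminate ones.

```python
# Hypothetical sketch of a reporter-reliability weight. None of these
# names or formulas come from Facebook; they are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Reporter:
    """Tracks one user's history of false-news flags."""
    confirmed_flags: int = 0  # flags a fact-checker later confirmed as false
    rejected_flags: int = 0   # flags on articles later rated true

    @property
    def reliability(self) -> float:
        """Score in [0, 1]: a smoothed hit rate of past flags.

        Laplace smoothing (the +1 / +2) keeps a brand-new reporter at a
        neutral 0.5 instead of jumping to 0 or 1 after a single flag.
        """
        total = self.confirmed_flags + self.rejected_flags
        return (self.confirmed_flags + 1) / (total + 2)


def weighted_flag_score(reporters: list[Reporter]) -> float:
    """Sum of flags on one article, each weighted by the flagger's reliability.

    A handful of historically accurate reporters can outweigh many
    indiscriminate ones, which is the anti-gaming property the
    article describes.
    """
    return sum(r.reliability for r in reporters)


if __name__ == "__main__":
    accurate = Reporter(confirmed_flags=9, rejected_flags=1)   # ~0.83
    spammer = Reporter(confirmed_flags=1, rejected_flags=19)   # ~0.09
    print(f"accurate reporter weight: {accurate.reliability:.2f}")
    print(f"spammer weight:           {spammer.reliability:.2f}")
    print(f"2 accurate flags: {weighted_flag_score([accurate, accurate]):.2f}")
    print(f"5 spam flags:     {weighted_flag_score([spammer] * 5):.2f}")
```

Under these assumptions, two flags from accurate reporters produce a higher article score than five flags from an indiscriminate one, so mass false flagging alone cannot push an article into review.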

In recent weeks, Facebook has published multiple posts addressing the work it is doing to fight misinformation and make its platform safer for users, including an overview of its content review policies.

The company has repeatedly explained its efforts to identify malicious content, including in its first transparency report, released in May. But the process The Washington Post describes, in which Facebook weighs the accuracy of users who flag posts, had not been publicly reported before yesterday. And while Facebook disputes The Washington Post’s framing of the story, Lyons’ own comments confirm that Facebook is paying attention to the users who report posts as false and is measuring their accuracy against third-party fact-checker results.

Facebook’s statement to Marketing Land, that it does have a process to protect against people indiscriminately flagging news as fake and attempting to game the system, confirms that more is being done beyond using machine learning to identify malicious content and fact-checkers to review reported posts. The statement is evidence that Facebook evaluates, in some form, the accuracy of users who report posts, in order to deter indiscriminate flagging of content as false.
