Following Facebook CEO Mark Zuckerberg’s appearance before the EU Parliament on Tuesday to address the company’s mishandling of user data, Facebook published three separate announcements covering its current efforts to stop the spread of fake news, fight misinformation campaigns and protect user safety.
The overlap between these three issues has been a focus for Facebook since the company announced in March that it had suspended Cambridge Analytica for exploiting user data.
In its latest “Hard Questions” post outlining the company’s initiative to stop fake news, Facebook reiterated its policy to remove accounts and content that violate Community Standards or ad policies. It also mentioned its latest efforts to give users more context around news posts in their News Feed via the “Related Stories” feature it launched last month.
Facebook notes in the post that although fake news doesn’t necessarily violate its new Community Standards guidelines, it does often fall into other categories that could cause it to be penalized. For example, some fake news content may be identified as spam, hate speech or shared by a fake account — all reasons for removal.
The company says its new policies to fight “coordinated inauthentic activity” — a label that could apply to both fake news and misinformation campaigns — include machine learning technology that helps internal teams detect fraud and spam.
“We now block millions of fake accounts every day when they try to register,” writes Facebook product manager Tessa Lyons in the “Hard Questions” post.
Facebook also distinguished between misinformation on the platform that had a political agenda and misinformation that was financially motivated — explaining the course of action it was taking to curb those distributing fake news and misinformation for financial gain.
“If spammers can get enough people to click on fake news stories and visit their sites, they’ll make money off the ads they show,” writes Lyons. She says Facebook is taking actions like penalizing click-bait, links to low-quality web pages and ad farms to make such scams unprofitable, thus removing the incentive to spread fake news. Facebook has also removed these publishers’ ability to monetize their pages by not allowing them to run ads.
As part of its initiative to fight fake news, Facebook is partnering with independent third-party fact-checkers in some countries (although it doesn’t list where). Facebook says its fact-checkers are certified through the non-partisan International Fact-Checking Network. The result so far has been an 80 percent decrease, on average, in future views of content the fact-checkers identify as false.
In a separate post addressing Facebook’s fight against misinformation, the company shared a film (linked below), which it produced to highlight the nuances and complicated nature of misinformation campaigns and the need to fight them from multiple angles.
[Video: “Facing Facts,” posted by Facebook on Friday, May 18, 2018]
It also announced that the group it put together to conduct an independent study into how social media impacts political elections was requesting proposals from scholars to measure the volume and effects of misinformation on the site.
In addition to addressing its fake news and misinformation problems, Facebook also announced it is now showing users an alert within their News Feed, encouraging them to review details around Facebook’s advertising policies, face recognition and user data.
Facebook says in the coming weeks, users will get a customized message that shows how Facebook uses data from partners to display relevant ads; the political, religious and relationship information the user has opted to share via their profile; how Facebook is using face recognition; and recent updates to its terms of service and data policies.
The timing of the user alerts aligns with GDPR, which takes effect tomorrow. Facebook says it introduced a “similar experience” in Europe as part of its GDPR preparations and is now making it available everywhere.