The challenges YouTube has long faced in policing hateful, extremist and inflammatory content came into full view this spring when advertisements from major brands were found running alongside extremist propaganda videos. Advertisers on both sides of the Atlantic pulled or threatened to pull their ads from the platform.
Google has announced several measures to address advertiser concerns. On Sunday, Kent Walker, Google’s general counsel, outlined four steps the company is taking to address extremist-related content on YouTube in a blog post that also ran as an op-ed in the Financial Times.
“There should be no place for terrorist content on our services,” wrote Walker, while acknowledging that Google, and the industry as a whole, needs to accelerate efforts to address it. “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.”
YouTube will be applying more machine learning technology, more people and more discretion to its policing of extremist content going forward. The four steps Walker put forth are as follows:
Devote more engineering resources. Machine learning models are being used to train new “content classifiers” that help identify and remove extremist and terrorism-related content faster. Google says these models have been applied to more than half of the terrorism-related content removed over the past six months to determine, for example, whether a video was posted by an extremist group or was a news broadcast about a terrorist attack. That is the type of nuance that makes policing YouTube’s huge and ever-growing body of content so challenging. (A minimal sketch of this kind of classifier follows the list of steps below.)
Add more independent human reviewers. Anyone can flag a video for inappropriate content, but Google says those flags are often misapplied. YouTube’s Trusted Flagger program enlists independent experts to flag inappropriate videos, and Google says reports from Trusted Flaggers are accurate 90 percent of the time. The company will add 50 non-governmental organizations (NGOs) that work in the areas of hate speech, self-harm and terrorism to the 63 organizations that already participate; the NGOs receive operational grants for their work in the program. Google says it is also expanding its work with counter-extremist groups to root out videos that aim to radicalize or recruit.
Apply more subtlety in deciding which videos to monetize. This is the biggest change announced. Just because a video does not explicitly violate YouTube’s policies doesn’t mean it will be monetized. For example, videos containing inflammatory religious or supremacist content may not violate the hate speech policy, but once identified they will now appear behind a new interstitial warning. Videos carrying the warning will not be eligible for advertising, user comments or endorsements, and will be harder for other users to discover. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” said Walker. (A sketch of this “limited state” logic also follows the list below.)
Expand counter-radicalization efforts. YouTube is working with Jigsaw, an incubator within Google’s parent company Alphabet, to expand use of the Redirect Method across Europe. The Redirect Method aims to direct people seeking out ISIS-produced content to existing YouTube videos that debunk ISIS recruiting themes.
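The announcement does not describe how the content classifiers in the first step are built, but the general technique is supervised text classification. Below is a minimal, illustrative sketch assuming a classifier trained on human-labeled video transcripts; the toy data, labels and model choice are assumptions made here for illustration, not Google’s actual pipeline.

```python
# Illustrative sketch of a supervised "content classifier" of the kind
# described in step 1. The labels, toy data and model are assumptions,
# not Google's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training set: transcripts labeled by human reviewers as either
# extremist propaganda or news coverage of an attack.
transcripts = [
    "join our fighters and pledge loyalty to the cause",
    "new recruitment video urges viewers to take up arms",
    "officials confirmed the attack and mourned the victims",
    "reporters covered the aftermath of the bombing downtown",
]
labels = ["propaganda", "propaganda", "news", "news"]

classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word + bigram features
    ("model", LogisticRegression()),                 # simple linear classifier
])
classifier.fit(transcripts, labels)

# Score a new transcript; borderline predictions would be routed to
# human reviewers rather than acted on automatically.
print(classifier.predict(["news coverage of the attack and its victims"]))
```

The nuance Walker describes, telling propaganda apart from reporting on the same event, is exactly where a simple model like this struggles, which is why the announcement pairs the engineering work with more human review.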
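The third step’s “limited state” can likewise be restated as a small piece of policy logic. This sketch uses hypothetical names (VideoState, apply_policy) purely to summarize the rules in the announcement; it is not YouTube’s code.

```python
# Hypothetical restatement of the "limited state" rules from step 3.
from dataclasses import dataclass

@dataclass
class VideoState:
    removed: bool               # taken down for violating policy
    behind_interstitial: bool   # warning shown before playback
    monetizable: bool           # eligible for advertising
    comments_enabled: bool      # user comments and endorsements
    recommendable: bool         # easily discoverable by other users

def apply_policy(violates_policy: bool, inflammatory: bool) -> VideoState:
    if violates_policy:
        # Explicit violations are removed outright.
        return VideoState(True, False, False, False, False)
    if inflammatory:
        # Borderline content stays up, but behind a warning and stripped
        # of ads, comments, endorsements and discoverability.
        return VideoState(False, True, False, False, False)
    # Everything else is unaffected.
    return VideoState(False, False, True, True, True)
```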
YouTube, of course, is not the only network to become a breeding ground for extremists and supremacists. These groups and individuals have also found Facebook and Twitter ripe for recruitment and for spreading their messages. All three companies are now coming to terms with the need to strike a better balance between protecting free speech and preventing the spread of violence and extremism.
In a blog post published last week titled “Hard Questions: How We Counter Terrorism,” Facebook laid out the behind-the-scenes steps it is taking to keep terrorist content off its network. Twitter reported that it removed nearly 377,000 accounts in the second half of 2016 for promoting terrorism. Neither has faced the advertiser backlash Google experienced this spring, but Google, Facebook, Microsoft and Twitter have committed to establishing an international forum to share and develop technology and provide greater industry support for addressing terrorism online.