After announcing efforts to restrict violent extremist content in June, YouTube released an update on its brand safety initiatives this week.
The Google-owned company says its teams have manually reviewed more than a million videos to improve the flagging technology used to monitor content. YouTube also says that during the past month, over 83 percent of the videos removed for violent extremist content were taken down without a human having to flag them.
“We’ve always used a mix of human flagging and human review together with technology to address controversial content on YouTube,” writes the YouTube team. “In June, we introduced machine learning to flag violent extremism content and escalate it for human review.”
In addition to enhancing its machine-learning efforts around brand safety, YouTube has added 35 new non-governmental organizations (NGOs) to its Trusted Flagger program, confirming the site has reached 70 percent of its goal for the program.
Other initiatives include supporting programs to counter extremist messaging and applying tougher standards around videos that do not violate guidelines but contain controversial religious or supremacist content.
“These videos remain on YouTube, but they are behind a warning interstitial, aren’t recommended or monetized, and don’t have key features including comments, suggested videos, and likes,” says YouTube.
On top of YouTube’s direct efforts to fight violent extremist content, Google.org announced in September that it was starting a $5 million innovation fund to counter hate and extremism: “This funding will support technology-driven solutions, as well as grassroots efforts like community youth projects that help build communities and promote resistance to radicalization.”
See our timeline of Google’s efforts to address brand safety concerns since ads from major brands were found running alongside extremist videos on YouTube this spring.