Google’s programmatic advertising network is coming under fire for supporting extremist sites and content at the expense of the brand safety of its advertisers.
On Thursday, The Guardian reported it has pulled its advertising from Google and YouTube after learning its ads were appearing next to extremist content. It was not alone. The British government also found its ads running next to inappropriate content and has summoned Google to the Cabinet Office to address the issue. British advertising trade group ISBA has called on Google to address brand safety concerns on its programmatic exchanges.
A subsidiary of global marketing giant Havas said it pulled its UK ad spend with Google on Friday over brand safety concerns for its clients. The Guardian reported that Havas UK was not satisfied with Google's response, saying the company was "unable to provide specific reassurances, policy and guarantees that their video or display content is classified either quickly enough or with the correct filters." Update: Within hours, the Paris-based parent company reversed the decision, calling it "extreme." Yannick Bolloré, Havas CEO, said, "We will continue to negotiate with Google to find solutions."
The Guardian says its ads were purchased through Google’s DoubleClick AdX, a programmatic ad exchange that encompasses millions of sites, to promote Guardian subscriptions. The ads appeared alongside videos of “American white nationalists, a hate preacher banned in the UK and a controversial Islamist preacher” on YouTube. Guardian chief executive David Pemsel wrote to Google’s EMEA president, Matt Brittin:
The decision by the Guardian to blacklist YouTube will have financial implications for the Guardian in terms of the recruitment of members to fund our journalism. … Given the dominance of Google, DoubleClick and YouTube in the digital economy, many brands feel that it is essential to place advertising on your platform. It is therefore vital that Google, DoubleClick and YouTube uphold the highest standards in terms of openness, transparency, and measures to avoid advertising fraud and misplacement in the future. It is very clear that this is not the case at the moment.
Pemsel said the Guardian would not resume its ad buying until Google can guarantee such placements will not continue. Pemsel also encouraged other brands and advertisers to stop running ads through Google exchanges until Google provides "guarantees that advertising placed on YouTube will not sit next to extremist content in the future."
A spokesperson for the British government told The Guardian, “Google is responsible for ensuring the high standards applied to government advertising are adhered to and that adverts do not appear alongside inappropriate content. We have placed a temporary restriction on our YouTube advertising pending reassurances from Google that government messages can be delivered in a safe and appropriate way. … Google has been summoned for discussions at the Cabinet Office to explain how it will deliver the high quality of service government demands on behalf of the taxpayer.”
Google’s response
A Google spokesperson told The Guardian, “We have strict guidelines that define where Google ads should appear, and in the vast majority of cases, our policies work as intended, protecting users and advertisers from harmful or inappropriate content. … We accept that we don’t always get it right, and that sometimes, ads appear where they should not. We’re committed to doing better, and will make changes to our policies and brand controls for advertisers.”
Ronan Harris, Google UK managing director, reiterated in a blog post Friday that the company's policies work as intended in most cases, stressed that Google invests millions of dollars every year and employs thousands of people to stop bad advertising practices, and cited the results of its annual bad ads report. But Harris also said Google "will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network" and "do a better job of addressing the small number of inappropriately monetized videos and content."
Addressing ‘inappropriate content’
Marketing Land has reported on numerous examples of brand advertisements bought and sold through Google appearing on extremist and hyper-partisan sites that are part of Google’s ad networks, both through audience targeting and retargeting efforts in the US.
"Inappropriate" can be subjective, but it is becoming clear that Google, Facebook and others can no longer credibly claim they're doing their best. A snowball of advertiser blacklisting and government pressure to draw clearer policy lines and improve enforcement may finally lead to decisive action.
Google's ad networks include millions of sites, apps and YouTube videos, a daunting amount of content to oversee and police. But the controls available to keep ads from appearing alongside extremist content or partisan-driven hoaxes and lies are relatively weak and put the onus on advertisers, essentially requiring them to manually block sites they don't want to appear on. Harris mentioned topic exclusions and site category exclusion tools in his blog post, but these fall short in their current forms, as we've detailed.
Sites are often clever about walking right up to the line of Google’s hate speech policy, and there is no policy that specifically addresses whether ads can appear alongside misinformation and hyper-partisan content, as Marketing Land reported last month.
For all its benefits, the rise of programmatic ad buying has created an environment in which reach and expediency have come at the cost of brand safety. That imbalance may have reached its tipping point in the UK. Whether the backlash will extend to the US in a meaningful way remains to be seen.