A recent study by Carnegie Mellon University, initially conceived to test how online user behaviors and interactions with Google’s Ad Settings affect the ads displayed, found potential gender discrimination in the employment ads shown to men and women. According to the study (embedded below), “setting the [user profile] gender to female resulted in getting fewer instances of an ad related to high paying jobs than setting it to male.”
However, Carnegie Mellon is quick to qualify that finding with the following caveat: “We cannot determine who caused these findings due to our limited visibility into the ad ecosystem, which includes Google, advertisers, websites, and users.” It’s thus possible that any “discrimination” stems from advertisers’ targeting preferences rather than from Google’s algorithms. Regardless, the findings are provocative and merit further inquiry.
To conduct the study, Carnegie Mellon developed a sophisticated tool it calls AdFisher:
AdFisher [is] a tool for automating randomized, controlled experiments for studying online tracking. Our tool offers a combination of automation, statistical rigor, scalability, and explanation for determining the use of information by web advertising algorithms and by personalized ad settings, such as Google Ad Settings. The tool can simulate having a particular interest or attribute by visiting webpages associated with that interest or by altering the ad settings provided by Google. It collects ads served by Google and also the settings that Google provides to the simulated users. It automatically analyzes the data to determine whether statistically significant differences between groups of agents exist. AdFisher uses machine learning to automatically detect differences and then executes a test of significance specialized for the difference it found.
AdFisher ran various experiments involving simulated user profiles. The gender experiment used 1,000 simulated users, half with profiles set to male and half to female. Each profile reportedly visited the top 100 job sites, and more than 600,000 ads were reviewed.
The simulated male profiles received ads associated with $200,000 jobs six times more often than the simulated female profiles did. The study reported other findings as well; however, the gender-based results have captured the most attention and discussion.
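The significance testing AdFisher performs can be illustrated with a simple two-sample permutation test: pool the per-agent ad counts from both groups, repeatedly shuffle the group labels, and check how often a random split produces a difference as large as the one observed. This is only a minimal sketch of the general technique, not the actual AdFisher code, and the counts below are invented for illustration rather than taken from the study.

```python
import random

# Hypothetical per-agent counts of a high-paying-job ad (illustrative only;
# not data from the Carnegie Mellon study).
male_counts = [4, 3, 5, 2, 4, 3, 5, 4, 3, 4]
female_counts = [1, 0, 1, 2, 0, 1, 1, 0, 1, 1]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, trials=10_000, seed=0):
    """One-sided two-sample permutation test on the difference of means."""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        # Difference of means under a random relabeling of the agents.
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if diff >= observed:
            hits += 1
    return hits / trials  # fraction of shuffles at least as extreme

p = permutation_test(male_counts, female_counts)
print(f"p-value: {p:.4f}")
```

A small p-value means the gap between the two groups is unlikely to arise from chance assignment alone, which is the sense in which the study's group differences are called statistically significant.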
We asked Google for a statement, but the company had not provided one at the time of this writing. We’ll update this post if Google responds. (Update: see statement below.)
The Carnegie Mellon study concludes with a lengthy qualifier about how the findings should be viewed. The following is an excerpt from that statement, which can be read in full in the document below:
We do not, however, claim that any laws or policies were broken. Indeed, Google’s policies allow it to serve different ads based on gender. Furthermore, we cannot determine whether Google, the advertiser, or complex interactions among them and others caused the discrimination (§4.5). Even if we could, the discrimination might have resulted unintentionally from algorithms optimizing click-through rates or other metrics free of bigotry. Given the pervasive structural nature of gender discrimination in society at large, blaming one party may ignore context and correlations that make avoiding such discrimination difficult.
Postscript: A Google spokesperson offered the following statement:
Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed. We provide transparency to users with ‘Why This Ad’ notices and Ad Settings, as well as the ability to opt out of interest-based ads.