AI Incident Database

Incident 19: Sexist and Racist Google Adsense Advertisements

Description: Advertisements chosen by Google Adsense are reported to have produced sexist and racist results.


Entities

Alleged: an AI system developed and deployed by Google harmed Women and Minority Groups.

Incident Stats

Incident ID
19
Report Count
27
Incident Date
2013-01-23
Editors
Sean McGregor
Applied Taxonomies
CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number
The number of the incident in the AI Incident Database.
19

Special Interest Intangible Harm
An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It does not assess whether an intangible harm occurred in general; it asks only whether a special interest intangible harm occurred.
yes

Date of Incident Year
The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the year, estimate; otherwise, leave blank. Enter in the format YYYY.
2013

Date of Incident Month
The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the month, estimate; otherwise, leave blank. Enter in the format MM.
01

Date of Incident Day
The day on which the incident occurred. If a precise date is unavailable, leave blank. Enter in the format DD.
13

Estimated Date
"Yes" if the date was estimated; "No" otherwise.
No

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
Unknown/unclear

Physical System
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
Software only

Level of Autonomy
The degree to which the AI system(s) functions independently of human intervention. "High" means no human is involved in executing the system's actions; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes the decision and executes the action.
High

Nature of End User
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)' operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
Expert

Public Sector Deployment
"Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).
No

Data Inputs
A brief description of the data that the AI system(s) used or were trained on.
Advertiser's preference, Google user's search history, Google user's purchase history

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
1.1. Unfair discrimination and misrepresentation

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
1. Discrimination and Toxicity

Entity
Which, if any, entity is presented as the main cause of the risk.
AI

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional

Incident Reports

Report Timeline


Discrimination in Online Ad Delivery
arxiv.org · 2013

A Google search for a person's name, such as "Trevon Jones", may yield a personalized ad for public records about Trevon that may be neutral, such as "Looking for Trevon Jones?", or may be suggestive of an arrest record, such as "Trevon Jon…

Discrimination in Online Ad Delivery
dataprivacylab.org · 2013

Frequently Asked Questions

  1. Isn't the arrest rate of blacks higher anyway?

The ads appear regardless of whether the company sponsoring the ad has a criminal record for the name. The appearance of the ads is not related to any arrest stati…

Embedded racism determines online advertising placement
privacyinternational.org · 2013

In 2013, Harvard professor Latanya Sweeney found that racial discrimination pervades online advertising delivery. In a study, she found that searches on black-identifying names such as Revon, Lakisha, and Darnell are 25% more likely to be s…
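
Sweeney's "25% more likely" figure rests on comparing how often arrest-suggestive ad copy appeared alongside searches for black-identifying versus white-identifying names, and testing whether the gap could plausibly be chance. The sketch below is a minimal, hypothetical illustration of that kind of test only: it runs a chi-square test of independence with scipy on made-up counts, not the study's actual data or code.

```python
# Hypothetical illustration only: tests whether ad wording is independent of
# the name group searched, using made-up counts (not data from the study).
from scipy.stats import chi2_contingency

# Rows: black-identifying names, white-identifying names
# Columns: ads containing "arrest", neutral ads
observed = [
    [291, 308],  # hypothetical counts for black-identifying names
    [194, 405],  # hypothetical counts for white-identifying names
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4g}, dof = {dof}")
if p_value < 0.05:
    print("Ad wording does not appear to be independent of name group.")
```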

Google a 'Black' Name, Get an Arrest Ad?
theroot.com · 2013

Pop a name into Google and you're likely to end up with corresponding advertisements alongside your results. Wild guess which types of names are more likely to yield arrest-related ads suggesting that the person searched for has a record.

Y…

Google search results 'show racial bias'
telegraph.co.uk · 2013

Names typically associated with black people are more likely to produce adverts related to criminal activity, according to the Harvard University paper.

A Google search for a name such as Tom Smith may bring up personalised public records,…

Google searches expose racial bias, says study of names
bbc.com · 2013

Image caption: Prof Sweeney said technology could be used to counteract racial intolerance

A study of Google searches has found "significant discrimination" in advert results depending on the perceived race of names searched for.

Harvard pro…

Racism is Poisoning Online Ad Delivery, Says Harvard Professor
technologyreview.com · 2013

“Have you ever been arrested? Imagine the question not appearing in the solitude of your thoughts as you read this paper, but appearing explicitly whenever someone queries your name in a search engine.”

Screenshot of a Google ad.

So begins …

Google accused of racism after black names are 25% more likely to bring up adverts for criminal records checks
dailymail.co.uk · 2013

Google accused of racism after black names are 25% more likely to bring up adverts for criminal records checks

Professor finds 'significant discrimination' in ad results, with black names 25 per cent more likely to be linked to arrest recor…

Harvard professor says 'black' names in Google searches more likely to offer arrest ads
archive.boston.com · 2013

Is Google biasing the ads it serves up based on whether a name sounds "black"?

That's the conclusion of a paper by Harvard professor Latanya Sweeney, who wrote in her paper that searches on names that may be identified as black brought up a…

Study Finds Google Search Ads Are Racially Biased
businessinsider.com · 2013

Ads pegged to Google search results can be racially biased because of how certain names are associated with blacks or whites, according to a new study.

Harvard University professor Latanya Sweeney found "statistically significant discrimina…

Online search ads expose racial bias, study finds
phys.org · 2013

The Google search page appears on a computer screen in Washington on August 30, 2010. Ads pegged to Google search results can be racially biased because of how certain names are associated with blacks or whites, according to a new study.

Ad…

Can Googling be racist?
theguardian.com · 2013

Readers, I hate to break it to you, but according to Harvard the internet is racist. I suggest you stop using it immediately unless you want your patronage of Google et al to blacken your name. Actually, err, maybe wait until you finish …

Google Ad Delivery Can Show 'Racial Bias,' Says Harvard Study
abcnews.go.com · 2013

A Google search for a "racially associated name" is more likely to trigger advertisements suggesting the person has a criminal background, according to a study by a Harvard professor.

Latanya Sweeney, a professor of government and technolog…

Google's Online Ad Results Guilty Of Racial Profiling, According To New Study
huffingtonpost.com · 2013

Every job candidate lives in fear that a Google search could reveal incriminating indiscretions from a distant past. But a new study examining racial bias in the wording of online ads suggests that Google's advertising algorithms may be unf…

Google Ads Reveal Bias Against African-Americans, Harvard University Study Reveals
sitepronews.com · 2013

February 6, 2013

'Arrest' Appears With Greater Frequency in Ads Featuring 'Black' Names

The delivery of Google ads has significant racial bias, according to a study by a Harvard University professor.

Professor Latanya Sweeney says names tha…

Harvard professor spots Web search bias
bostonglobe.com · 2013

Web page results of ads that appeared on-screen when Harvard professor Latanya Sweeney typed her name in a google search. Ads featured services for arrest records. Sweeney conducted a study that concluded searches with "black sounding" name…

Are Google Ad Results Guilty of Racial Profiling?
cio.com · 2013

Is Google’s search algorithm guilty of racism? A study by a Harvard researcher found that it could be.

Professor Latanya Sweeney says she found "statistically significant discrimination" when comparing ads served with results from online se…

Is Google Racist Or Is It The Rest Of Us?
forbes.com · 2013

A lovely little piece of research that shows that the ads served up alongside Google searches could, if you were that way inclined, be seen as somewhat racist:

A recent study of Google searches by Professor Latanya Sweeney has found "signif…

Harvard Researcher: Google-Generated Ads Show Racial Bias
thecrimson.com · 2013

UPDATED: February 20, 2013, at 10:35 a.m.

A Harvard researcher has found that typically African-American names are more likely to be linked to a criminal record in Google-generated advertisements on the online search engine and on the news …

Discrimination in Online Ad Delivery
queue.acm.org · 2013


April 2, 2013

Volume 11, issue 3

Discrimination in Online Ad Delivery

Google ads, black names and white names, racial discriminat…

Latanya Sweeney, Racial Discrimination in Online Ad Delivery
sites.law.berkeley.edu · 2013

Latanya Sweeney, Racial Discrimination in Online Ad Delivery

Comment by: Margaret Hu

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240

Workshop draft abstract:

Investigating the appearanc…

How Much Does Your Name Matter?
freakonomics.com · 2013

Season 4, Episode 2

When Harvard professor Latanya Sweeney Googled her name one day, she noticed something strange: an ad for a background check website came up in the results, with the heading: “Latanya Sweeney, Arrested?” But …

Probing the Dark Side of Google’s Ad-Targeting System
technologyreview.com · 2015

That Google and other companies track our movements around the Web to target us with ads is well known. How exactly that information gets used is not—but a research paper presented last week suggests that some of the algorithmic judgments t…

Women less likely to be shown ads for high-paid jobs on Google, study shows
theguardian.com · 2015

Female job seekers are much less likely to be shown adverts on Google for highly paid jobs than men, researchers have found.

The team of researchers from Carnegie Mellon built an automated testing rig called AdFisher that pretended to be a …
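
The AdFisher experiment is a controlled audit: simulated browsing profiles that differ only in one declared attribute are run in parallel, and the ads served to each group are then compared statistically. The sketch below is a hypothetical illustration of that comparison step only, not AdFisher's actual code; the impression counts are invented, and Fisher's exact test stands in for whatever statistics the Carnegie Mellon team used.

```python
# Hypothetical sketch of the comparison step in a controlled ad audit
# (not AdFisher's actual code). Two simulated profile groups differ only
# in a declared gender; we compare how often a high-paying-job ad was
# served to each group using Fisher's exact test.
from scipy.stats import fisher_exact

# Made-up impression counts for illustration only.
shown_to_male, not_shown_to_male = 180, 820      # 1,000 simulated "male" sessions
shown_to_female, not_shown_to_female = 90, 910   # 1,000 simulated "female" sessions

table = [
    [shown_to_male, not_shown_to_male],
    [shown_to_female, not_shown_to_female],
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
# A small p-value suggests the ad was not shown at the same rate to both groups.
```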

Fighting Algorithmic Bias & Homogenous Thinking in AI
topbots.com · 2017

When Timnit Gebru attended a prestigious AI research conference last year, she counted 6 black people in the audience out of an estimated 8,500. And only one black woman: herself.

As a PhD candidate at Stanford University who has published …

Can computers be racist? Big data, inequality, and discrimination
fordfoundation.org · 2018

It seems like everyone is talking about the power of big data and how it is helping companies, governments, and organizations make better and more efficient decisions. But rarely do they mention that big data can actually perpetuate and exa…

Google exposes racial discrimination in online ads delivery - study
rt.com · 2019

Google’s search algorithms expose racial discrimination, a new study by a Harvard professor purports. It claims ads related to criminal records are more likely to pop up when "black-sounding names" are ‘googled’.

Latanya Sweeney, Professor o…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as an existing AI incident. Rather than indexing variants as entirely separate incidents, we list them under the first similar incident submitted to the database as variations of that incident. Unlike other submission types to the Incident Database, variants are not required to have reporting in evidence outside the Incident Database. See this research paper for more details.

Similar Incidents

By textual similarity


Gender Biases of Google Image Search

Google Image Search Has A Gender Bias Problem

Apr 2015 · 11 reports
COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction

A Popular Algorithm Is No Better at Predicting Crimes Than Random People

May 2016 · 22 reports
Biased Google Image Results

'Black teenagers' vs. 'white teenagers': Why Google's algorithm displays racist results

Mar 2016 · 18 reports

