AI Incident Database

Incident 49: AI Beauty Judge Did Not Like Dark Skin

Description: In 2016, after the artificial intelligence software Beauty.AI judged an international beauty contest and declared a majority of the winners to be white, researchers found that Beauty.AI was racially biased in determining beauty.


Entities

Alleged: An AI system developed and deployed by Youth Laboratories harmed People with Dark Skin.

Incident Stats

Incident ID
49
Report Count
10
Incident Date
2016-09-05
Editors
Sean McGregor
Applied Taxonomies
CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classes

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

49

Notes (special interest intangible harm)

Input any notes that may help explain your answers.

Beauty.AI determined mostly white applicants to be the most attractive among all contestants.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred; it asks only whether a special interest intangible harm occurred.

Yes

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format of YYYY.

2016

Date of Incident Month

The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank. Enter in the format of MM.

08

Estimated Date

“Yes” if the date was estimated. “No” otherwise.

No

CSETv0 Taxonomy Classes

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Specification

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

High

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

images of people's faces

MIT Taxonomy Classes

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional
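
The MIT record above is structured data: one of seven risk domains, one of 23 subdomains, and entity, timing, and intent labels. As a minimal sketch of how that record could be handled programmatically (the class and field names here are hypothetical, not an actual AIID or MIT schema), the classification for this incident might be encoded as:

```python
# A minimal sketch encoding the MIT AI Risk taxonomy record shown above.
# Class and field names are hypothetical; only the seven risk domains and
# this incident's labels come from the page itself.
from dataclasses import dataclass
from enum import Enum

class RiskDomain(Enum):
    DISCRIMINATION_AND_TOXICITY = 1
    PRIVACY_AND_SECURITY = 2
    MISINFORMATION = 3
    MALICIOUS_ACTORS_AND_MISUSE = 4
    HUMAN_COMPUTER_INTERACTION = 5
    SOCIOECONOMIC_AND_ENVIRONMENTAL_HARMS = 6
    AI_SYSTEM_SAFETY_FAILURES_AND_LIMITATIONS = 7

@dataclass
class MITClassification:
    risk_domain: RiskDomain
    risk_subdomain: str  # e.g. "1.1. Unfair discrimination and misrepresentation"
    entity: str          # main cause of the risk, e.g. "AI"
    timing: str          # lifecycle stage, e.g. "Post-deployment"
    intent: str          # "Intentional" / "Unintentional"

# Incident 49 as classified on this page.
incident_49 = MITClassification(
    risk_domain=RiskDomain.DISCRIMINATION_AND_TOXICITY,
    risk_subdomain="1.1. Unfair discrimination and misrepresentation",
    entity="AI",
    timing="Post-deployment",
    intent="Unintentional",
)
```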

Incident Reports

Report Timeline


Why An AI-Judged Beauty Contest Picked Nearly All White Winners
motherboard.vice.com · 2016

Image: Flickr/Veronica Jauriqui

Beauty pageants have always been political. After all, what speaks more strongly to how we see each other than which physical traits we reward as beautiful, and which we code as ugly? It wasn't until 1983 tha…

Is AI RACIST? Robot-judged beauty contest picks mostly white winners out of 6,000 contestants
dailymail.co.uk · 2016

Only a few winners were Asian and one had dark skin, most were white

Just months after Microsoft's Tay artificial intelligence sent racist messages on Twitter, another AI seems to have followed suit.

More than 6,000 selfies of individuals w…

A beauty contest was judged by AI and the robots didn't like dark skin
theguardian.com · 2016

The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners

The first international beauty contest judged by “machines” was supposed to use objecti…

A beauty contest was judged by AI and the robots didn't like dark skin
theguardian.com · 2016

The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners

The first international beauty contest judged by “machines” was supposed to use objecti…

The first AI-judged beauty contest taught us one thing: Robots are racist
thenextweb.com · 2016

With more than 6,000 applicants from over 100 countries competing, the first international beauty contest judged entirely by artificial intelligence just came to an end. The results are a bit disheartening.

The team of judges, a five robot …

AI judges of beauty contest branded racist
trustedreviews.com · 2016

It’s not the first time artificial intelligence has been in the spotlight for apparent racism, but Beauty.AI’s recent competition results have caused controversy by clearly favouring light skin.

The competition, which ran online and was ope…

The First Ever Beauty Contest Judged by Artificial Intelligence
gineersnow.com · 2017

If you’re one who joins beauty pageants or merely watches them, what would you feel about a computer algorithm judging a person’s facial attributes? Perhaps we should ask those who actually volunteered to be contestants in a beauty contest …

What Will Happen When Your Company’s Algorithms Go Wrong?
hbr.org · 2017

An AI designed to do X will eventually fail to do X. Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric s…

Artificial Intelligence Has a Racism Issue
innotechtoday.com · 2017

It’s long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans’ emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way int…

Artificial Intelligence Has a Bias Problem, and It's Our Fault
au.pcmag.com · 2018

In 2016, researchers from Boston University and Microsoft were working on artificial intelligence algorithms when they discovered racist and sexist tendencies in the technology underlying some of the most popular and critical services we us…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, they are listed as variations under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting as evidence external to the Incident Database. Learn more from the research paper.
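
As a minimal sketch of the indexing rule just described, with hypothetical names rather than AIID's actual data model, a variant attaches to the earliest similar incident instead of receiving its own incident ID:

```python
# Minimal sketch of the variant-indexing rule: variants are filed under
# the first similar incident in the database rather than indexed as new
# incidents. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Variant:
    description: str  # unlike full reports, no external evidence is required

@dataclass
class Incident:
    incident_id: int
    title: str
    variants: list[Variant] = field(default_factory=list)

database = {49: Incident(49, "AI Beauty Judge Did Not Like Dark Skin")}

def submit_variant(db: dict[int, Incident], parent_id: int, description: str) -> None:
    """File a variant under an existing, similar incident."""
    db[parent_id].variants.append(Variant(description))

submit_variant(database, 49, "Another pageant scored by the same biased model.")
```

The point of the rule is deduplication: the parent incident keeps a single ID and report history, while variations accumulate beneath it.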

Similar Incidents

By textual similarity (one way such similarity can be computed is sketched after the list below)

Did our AI mess up? Flag the unrelated incidents

Female Applicants Down-Ranked by Amazon Recruiting Tool
2018 in Review: 10 AI Failures
Aug 2016 · 33 reports

Gender Biases in Google Translate
Semantics derived automatically from language corpora contain human-like biases
Apr 2017 · 10 reports

TayBot
Danger, danger! 10 alarming examples of AI gone wild
Mar 2016 · 28 reports
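
This page does not document how the textual similarity behind these suggestions is computed. As a minimal sketch of one common approach, TF-IDF vectors compared by cosine similarity, using abbreviated stand-in descriptions and illustrative incident IDs:

```python
# A minimal sketch of "similar incidents by textual similarity" using
# TF-IDF vectors and cosine similarity. AIID's actual method is not
# documented on this page; descriptions and IDs below are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

incidents = {
    49: "Beauty.AI judged an international beauty contest and favored white winners.",
    37: "Amazon recruiting tool down-ranked female applicants.",
    12: "Google Translate produced gender-biased translations of occupations.",
    6:  "Microsoft's TayBot posted racist messages on Twitter.",
}

ids = list(incidents)
matrix = TfidfVectorizer(stop_words="english").fit_transform(incidents.values())

# Compare incident 49 against every incident, then rank the rest.
scores = cosine_similarity(matrix[ids.index(49)], matrix).ravel()
ranked = sorted(((s, i) for i, s in zip(ids, scores) if i != 49), reverse=True)
for score, incident_id in ranked:
    print(f"incident {incident_id}: similarity {score:.2f}")
```

A production system would use the full incident descriptions and report titles, and might prefer embedding-based similarity, but the ranking idea is the same.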
