AI Incident Database

Incident 12: Common Biases of Vector Embeddings

Description: Researchers from Boston University and Microsoft Research New England demonstrated gender bias in the most common techniques used to embed words for natural language processing (NLP).
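The paper's headline finding comes from solving word analogies by vector arithmetic over a pretrained embedding. A minimal sketch of that probe, assuming the gensim package and its downloadable "word2vec-google-news-300" vectors (the Google News embedding the authors studied); exact neighbors will vary with the vectors used:

```python
# Illustrative probe of gendered analogies in a pretrained embedding,
# in the spirit of Bolukbasi et al. (2016). Assumes the gensim package
# and its downloadable "word2vec-google-news-300" vectors (~1.6 GB).
import gensim.downloader as api

kv = api.load("word2vec-google-news-300")  # returns a KeyedVectors object

# Solve "man : computer_programmer :: woman : ?" via the vector offset
# v(computer_programmer) - v(man) + v(woman), ranking the vocabulary
# by cosine similarity to the result.
for word, score in kv.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=5,
):
    print(f"{word:25s} {score:.3f}")

# The paper reports "homemaker" as the top completion for this analogy,
# reflecting gender stereotypes in the Google News training corpus.
```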

Entities

Alleged: Microsoft Research, Boston University, and Google developed an AI system deployed by Microsoft Research and Boston University, which harmed Women and Minority Groups.

Incident Stats

Incident ID: 12
Report Count: 1
Incident Date: 2016-07-21
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Incident Number: 12
(The number of the incident in the AI Incident Database.)

CSETv0 Taxonomy Classifications

Public Sector Deployment: No
("Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).)

Lives Lost: No
(Were human lives lost as a result of the incident?)

Intent: Unclear
(Was the incident an accident, intentional, or is the intent unclear?)

Near Miss: Unclear/unknown
(Was harm caused, or was it a near miss?)

Beginning Date: 2016-01-01
(The date the incident began.)

Ending Date: 2016-01-01
(The date the incident ended.)

MIT Taxonomy Classifications

(Machine-classified)

Risk Subdomain: 1.1. Unfair discrimination and misrepresentation
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Discrimination and Toxicity
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: AI
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Unintentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
arxiv.org · 2016

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning…
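The "debiasing" in the report's title works by identifying a gender direction in the embedding space and measuring how strongly ostensibly neutral words project onto it. A rough, simplified sketch of that measurement; note the paper derives the gender subspace via PCA over ten definitional pairs, whereas a single she-he difference stands in for it here:

```python
# Simplified sketch of measuring word bias along a gender direction,
# loosely following the paper's "direct bias" idea. Hypothetical and
# reduced: a single she-he difference replaces the paper's PCA-derived
# gender subspace. Assumes the same gensim vectors as above.
import numpy as np
import gensim.downloader as api

kv = api.load("word2vec-google-news-300")

def unit(v):
    # Normalize a vector so dot products below are cosine similarities.
    return v / np.linalg.norm(v)

g = unit(kv["she"] - kv["he"])  # crude one-pair gender direction

# Projection onto g: positive values lean toward "she", negative toward "he".
for w in ["nurse", "engineer", "receptionist", "architect"]:
    print(f"{w:14s} {float(unit(kv[w]) @ g):+.3f}")
```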

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Gender Biases in Google Translate
Apr 2017 · 10 reports

Personal voice assistants struggle with black voices, new study shows
Mar 2020 · 2 reports

High-Toxicity Assessed on Text Involving Women and Minority Groups
Feb 2017 · 9 reports