
Incident 469: Automated Adult Content Detection Tools Showed Bias against Women's Bodies

Description: Automated content moderation tools used to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, resulting in suppressed reach for content that did not violate platform policies.

Entities

Alleged: Microsoft, Google, and Amazon developed an AI system deployed by Meta, LinkedIn, Instagram, and Facebook, which harmed LinkedIn users, Instagram users, and Facebook users.

Incident Stats

Incident ID: 469
Report Count: 3
Incident Date: 2006-02-25
Editors: Khoa Lam
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain: 1.1. Unfair discrimination and misrepresentation
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: 1. Discrimination and Toxicity
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Incident Reports


A Google Algorithm Seems To Think Brands Like Boohoo And Missguided Are Pretty ‘Racy’
graziadaily.co.uk · 2019

An investigation into the 'raciest' clothing from different fashion brands has highlighted the fact that Google uses software to rate imagery as part of a 'safe search' tool and scores clothing based on how 'skimpy or sheer' it is.

Google's…
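
The "safe search" scoring described in this report corresponds to the per-image "racy" likelihood exposed by commercial vision APIs. As a minimal sketch of how such a score is obtained, assuming the Google Cloud Vision Python client, configured credentials, and a hypothetical local product photo (this is not the code used in the investigation):

    # Sketch: query Google Cloud Vision's SafeSearch annotation for an image.
    # Assumes the google-cloud-vision package is installed and credentials are
    # configured; "dress_product_photo.jpg" is a hypothetical example file.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("dress_product_photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.safe_search_detection(image=image)
    annotation = response.safe_search_annotation

    # Each category is reported as a likelihood bucket rather than a raw score.
    for category in ("adult", "racy"):
        print(category, vision.Likelihood(getattr(annotation, category)).name)

A bucketed likelihood such as LIKELY or VERY_LIKELY on the "racy" category is the kind of signal the reports describe platforms using to down-rank or suppress an image.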

‘There is no standard’: investigation finds AI algorithms objectify women’s bodies
theguardian.com · 2023

Images posted on social media are analyzed by artificial intelligence (AI) algorithms that decide what to amplify and what to suppress. Many of these algorithms, a Guardian investigation has found, have a gender bias, and may have been cens…

New Investigation Reveals AI Tools Are Sexualizing Women’s Bodies in Photos
thestoryexchange.org · 2023

Many social media platforms such as Instagram and LinkedIn use content moderation systems to suppress images that are sexually explicit or deemed inappropriate for viewers. 

But what happens when these systems block images that are not at a…
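
The moderation systems this report refers to typically return a list of confidence-scored labels per image, which a platform then thresholds before deciding whether to suppress the content. A minimal sketch, assuming Amazon Rekognition's image moderation endpoint via boto3, configured AWS credentials, and a hypothetical local image file (not any platform's actual pipeline):

    # Sketch: request moderation labels for an image from Amazon Rekognition.
    # Assumes boto3 is installed and AWS credentials are configured;
    # "workout_photo.jpg" is a hypothetical example file.
    import boto3

    rekognition = boto3.client("rekognition")

    with open("workout_photo.jpg", "rb") as f:
        response = rekognition.detect_moderation_labels(
            Image={"Bytes": f.read()},
            MinConfidence=50,
        )

    # Each label carries a confidence score; a platform might suppress or
    # down-rank content once a chosen label crosses a threshold.
    for label in response["ModerationLabels"]:
        print(label["Name"], label.get("ParentName", ""), round(label["Confidence"], 1))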

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Sexist and Racist Google Adsense Advertisements · Jan 2013 · 27 reports
Gender Biases of Google Image Search · Apr 2015 · 11 reports
Biased Google Image Results · Mar 2016 · 18 reports
