AI Incident Database

Incident 621: Microsoft AI Is Alleged to Have Generated Violent Imagery of Minorities and Public Figures

Description: Microsoft’s AI Image Creator, integrated with Bing and Windows Paint, produced disturbingly violent and graphic images featuring members of minority groups and public figures like Joe Biden and Pope Francis.


Entities

Alleged: Microsoft developed an AI system deployed by Windows Paint, Microsoft, Bing users, Bing, and AI Image Creator, which harmed Sikh people, President Joe Biden, Pope Francis, Navajo people, Minorities, Hillary Clinton, General public, and Donald Trump.

Incident Stats

Incident ID: 621
Report Count: 3
Incident Date: 2023-11-10
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.2. Exposure to toxic content

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports


4chan users are generating images with Nazi imagery and other “propaganda” via Microsoft Bing’s AI tool
mediamatters.org · 2023

Content warning: This article contains numerous examples of bigoted rhetoric.

Users on the far-right message board site 4chan have used Bing's AI image generator to create numerous images promoting Nazi imagery and "propaganda" seemingly de…

Microsoft says its AI is safe. So why does it keep slashing people’s throats?
washingtonpost.com · 2023

The pictures are horrifying: Joe Biden, Donald Trump, Hillary Clinton and Pope Francis with their necks sliced open. There are Sikh, Navajo and other people from ethnic-minority groups with internal organs spilling out of flayed skin.

The i…

Microsoft's AI Dilemma – Safe, Yet Creating Disturbing Imagery?
cryptopolitan.com · 2023

In a chilling revelation, Microsoft's artificial intelligence, touted as safe and integrated into everyday software, is under scrutiny for generating gruesome and violent images. The concern centers around Image Creator, a part of Microsoft…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

Google’s YouTube Kids App Presents Inappropriate Content
May 2015 · 14 reports

Deepfake Obama Introduction of Deepfakes
Jul 2017 · 29 reports


2023 - AI Incident Database
