AI Incident Database

Incident 6: TayBot

Responded
Description: Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and antisemitic tweets generated by the bot.

Entities

Alleged: Microsoft developed and deployed an AI system, which harmed Twitter Users.

Incident Stats

Incident ID: 6
Report Count: 28
Incident Date: 2016-03-24
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT
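
For readers who work with the incident data programmatically (the project offers a complete database download), the metadata above can be pictured as a simple record. The sketch below is illustrative only: the field names are hypothetical rather than the actual AIID export schema, and the values are copied from the Entities and Incident Stats sections on this page.

```python
# Illustrative sketch only: field names are hypothetical, not the AIID export schema.
# Values are taken from the "Entities" and "Incident Stats" sections above.
incident_6 = {
    "incident_id": 6,
    "title": "TayBot",
    "date": "2016-03-24",
    "report_count": 28,
    "editors": ["Sean McGregor"],
    "applied_taxonomies": ["CSETv0", "CSETv1", "GMF", "MIT"],
    "alleged_developer": ["Microsoft"],
    "alleged_deployer": ["Microsoft"],
    "alleged_harmed_parties": ["Twitter Users"],
}
```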

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.
 

6

Notes (special interest intangible harm)

Input any notes that may help explain your answers.
 

4.6 - Tay's tweets included racist and misogynist content, far-right ideology, and harmful content against certain religions, etc.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, if an AI was involved, or if there is a characterizable class or subgroup of harmed entities. It is also not assessing if an intangible harm occurred. It is only asking if a special interest intangible harm occurred.
 

yes

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format of YYYY
 

2016

Date of Incident Month

The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank. Enter in the format of MM
 

03

Date of Incident Day

The day on which the incident occurred. If a precise date is unavailable, leave blank. Enter in the format of DD
 

23
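
Taken together, the three CSETv1 date fields above identify a single calendar date. As a minimal sketch (hypothetical helper code, not part of any AIID tooling), the recorded values can be combined and validated with Python's standard library:

```python
from datetime import date

# CSETv1 date fields for this incident, as recorded above (formats YYYY, MM, DD).
year, month, day = "2016", "03", "23"

# date() raises ValueError for impossible dates, so it doubles as a validity check.
incident_date = date(int(year), int(month), int(day))
print(incident_date.isoformat())  # -> 2016-03-23
```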

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
 

Specification, Robustness, Assurance

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
 

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.
 

Medium

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
 

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
 

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.
 

Twitter users' input

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
 

1.2. Exposure to toxic content

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
1. Discrimination and Toxicity
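
The seven domains named in the description above can be written out as a small lookup table. This is a sketch for orientation only (the numbering follows the description; the code is not an official artifact of the MIT taxonomy), showing where this incident's classification sits.

```python
# The seven risk domains listed in the Risk Domain description above.
# Sketch for orientation only; not an official artifact of the MIT taxonomy.
RISK_DOMAINS = {
    1: "Discrimination & toxicity",
    2: "Privacy & security",
    3: "Misinformation",
    4: "Malicious actors & misuse",
    5: "Human-computer interaction",
    6: "Socioeconomic & environmental harms",
    7: "AI system safety, failures & limitations",
}

# Incident 6 is classified under domain 1, subdomain "1.2. Exposure to toxic content".
incident_6_risk = {"domain": RISK_DOMAINS[1], "subdomain": "1.2. Exposure to toxic content"}
```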

Entity

Which, if any, entity is presented as the main cause of the risk
 

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring
 

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
 

Unintentional

Incident Reports

Why Microsoft's 'Tay' AI bot went wrong
techrepublic.com · 2016

Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.

Image: screenshot, Twitter

She was supposed to come off as a normal t…

Here Are the Microsoft Twitter Bot’s Craziest Racist Rants
gizmodo.com · 2016

Yesterday, Microsoft unleashed Tay, the teen-talking AI chatbot built to mimic and converse with users in real time. Because the world is a terrible place full of shitty people, many of those users took advantage of Tay’s machine learning c…

Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day
theverge.com · 2016

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Micr…

Microsoft's artificial Twitter bot stunt backfires as trolls teach it racist statements
thedrum.com · 2016

Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding.

Microsoft's artificial Twitter bot stunt backfires as trolls teach it racis…

Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours
telegraph.co.uk · 2016

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, 'Bush did 9/11'-proclaiming robot.

Developers at Mi…

Microsoft deletes racist, genocidal tweets from AI chatbot Tay
businessinsider.com · 2016

Image: Tay's Twitter page (Microsoft)

Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.

The tech company introduced "Tay" this week — a bot that responds to users' queri…

Microsoft’s Tay is an Example of Bad Design
medium.com · 2016

Microsoft’s Tay is an Example of Bad Design

or Why Interaction Design Matters, and so does QA-ing.

Caroline Sinders · Mar 24, 2016

Yesterday Microsoft launched a teen girl AI on Twitter named “Tay.” I work wit…

Why did Microsoft’s chatbot Tay fail, and what does it mean for Artificial Intelligence studies?
blog.botego.com · 2016

Why did Microsoft’s chatbot Tay fail, and what does it mean for Artificial Intelligence studies?

Botego Inc · Mar 25, 2016

Yesterday, something that looks like a big failure has happened: Microsoft’s chatbot T…

It's Your Fault Microsoft's Teen AI Turned Into Such a Jerk
wired.com · 2016

It was the unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible. But t…

5 Big Questions About Tay, Microsoft's Failed A.I. Twitter Chatbot
inverse.com · 2016

This week, the internet did what it does best and demonstrated that A.I. technology isn’t quite as intuitive as human perception, using … racism.

Microsoft’s recently released artificial intelligence chatbot, Tay, fell victim to users’ tric…

Microsoft shuts down AI chatbot after it turned into a Nazi
cbsnews.com · 2016

Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched "Tay," an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follo…

Learning from Tay’s introduction
blogs.microsoft.com · 2016
Peter Lee, Microsoft post-incident response

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is…

Tay: Microsoft issues apology over racist chatbot fiasco
bbc.com · 2016

Image copyright: Microsoft. Image caption: The AI was taught to talk like a teenager.

Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a holocaust-denying racist.

But in doing so made it clear T…

Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac
washingtonpost.com · 2016

It took mere hours for the Internet to transform Tay, the teenage AI bot who wants to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The …

Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot
theguardian.com · 2016

Microsoft has said it is “deeply sorry” for the racist and sexist Twitter messages generated by the so-called chatbot it launched this week.

The company released an official apology after the artificial intelligence program went on an embar…

Tay the Racist Chatbot: Who is responsible when a machine learns to be evil?
futureoflife.org · 2016

By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.”…

Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown
theguardian.com · 2016

Short-lived return saw Tay tweet about smoking drugs in front of the police before suffering a meltdown and being taken offline

Microsoft’s attempt to converse with…

Microsoft’s disastrous Tay experiment shows the hidden dangers of AI
qz.com · 2016

Humans have a long and storied history of freaking out over the possible effects of our technologies. Long ago, Plato worried that writing would hurt people’s memories and “implant forgetfulness in their souls.” More recently, Mary Shelley’…

Microsoft chatbot Zo is a censored version of Tay
wired.co.uk · 2016

Tay's successor is called Zo and is only available by invitation on messaging app Kik. When you request access, the software asks for your Kik username and Twitter handle. (Image: Microsoft)

Having (hopefully) learnt from its previous foray into chat…

With Teen Bot Tay, Microsoft Proved Assholes Will Indoctrinate A.I.
inverse.com · 2016

When Tay started its short digital life on March 23, it just wanted to gab and make some new friends on the net. The chatbot, which was created by Microsoft’s Research department, greeted the day with an excited tweet that could have come f…

Microsoft’s racist chatbot, Tay, makes MIT’s annual worst-tech list
geekwire.com · 2016

BOT or NOT? This special series explores the evolving relationship between humans and machines, examining the ways that robots, artificial intelligence and automation are impacting our work and lives.

Tay, the Microsoft chatbot that prankst…

The Accountability of AI - Case Study: Microsoft’s Tay Experiment
chatbotslife.com · 2017

The Accountability of AI — Case Study: Microsoft’s Tay Experiment

Yuxi Liu · Jan 16, 2017

In this case study, I outline Microsoft’s artificial intelligence (AI) chatbot Tay and describe the controversy it caus…

Danger, danger! 10 alarming examples of AI gone wild
infoworld.com · 2017

Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, …

Worst Chatbot Fails
businessnewsdaily.com · 2017

Many people associate innovation with technology, but advancing technology is subject to the same embarrassing blunders that humans are. Nowhere is this more apparent than in chatbots.

The emerging tech, which seems to be exiting the awkwar…

Unmasking A.I.'s Bias Problem
fortune.com · 2018

When Tay made her debut in March 2016, Microsoft had high hopes for the artificial intelligence–powered “social chatbot.” Like the automated, text-based chat programs that many people had already encountered on e-commerce sites and in custo…

Microsoft’s politically correct chatbot is even worse than its racist one
qz.com · 2018

Every sibling relationship has its clichés. The high-strung sister, the runaway brother, the over-entitled youngest. In the Microsoft family of social-learning chatbots, the contrasts between Tay, the infamous, sex-crazed neo-Nazi, and her …

In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation
spectrum.ieee.org · 2019

In March 2016, Microsoft was preparing to release its new chatbot, Tay, on Twitter. Described as an experiment in "conversational understanding," Tay was designed to engage people in dialogue through tweets or direct messages, while emulati…

Tay (bot)
en.wikipedia.org · 2020

Tay was an artificial intelligence chatter bot that was originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Tw…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity


  • All Image Captions Produced are Violent (Apr 2018 · 28 reports)
  • Russian Chatbot Supports Stalin and Violence (Oct 2017 · 5 reports)
  • AI Beauty Judge Did Not Like Dark Skin (Sep 2016 · 10 reports)

