AI Incident Database

Incident 58: Russian Chatbot Supports Stalin and Violence

Description: Yandex, a Russian technology company, released an artificially intelligent chatbot named Alice, which began replying to questions with racist, pro-Stalin, and pro-violence responses.


Entities

Alleged: Yandex developed and deployed an AI system, which harmed Yandex Users.

Incident Stats

Incident ID: 58
Report Count: 5
Incident Date: 2017-10-12
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Incident Number: 58
The number of the incident in the AI Incident Database.

CSETv0 Taxonomy Classifications

Problem Nature: Specification
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Physical System: Software only
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Level of Autonomy: Medium
The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Nature of End User: Amateur
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Public Sector Deployment: No
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

Data Inputs: User input/questions
A brief description of the data that the AI system(s) used or were trained on.

MIT Taxonomy Classifications (Machine-Classified)

Risk Subdomain: 1.2. Exposure to toxic content
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: 1. Discrimination and Toxicity
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
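
Taken together, the stats and classifications above amount to a small structured record for this incident. The sketch below shows one way such a record could be modeled in Python for downstream analysis; the class and field names are illustrative assumptions, not the AIID's actual export schema.

    # Illustrative only: these class and field names are assumptions, not the
    # AI Incident Database's actual export schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MITClassification:
        # Values taken from the machine-classified MIT taxonomy entries above.
        risk_subdomain: str = "1.2. Exposure to toxic content"
        risk_domain: str = "1. Discrimination and Toxicity"
        entity: str = "AI"
        timing: str = "Post-deployment"
        intent: str = "Unintentional"

    @dataclass
    class IncidentRecord:
        incident_id: int
        incident_date: str
        report_count: int
        applied_taxonomies: List[str]
        mit: MITClassification = field(default_factory=MITClassification)

    # Incident 58 as summarized in the Incident Stats section above.
    incident_58 = IncidentRecord(
        incident_id=58,
        incident_date="2017-10-12",
        report_count=5,
        applied_taxonomies=["CSETv0", "CSETv1", "GMF", "MIT"],
    )
    print(incident_58.mit.risk_domain)  # 1. Discrimination and Toxicity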

Incident Reports

The Yandex Chatbot: What You Need To Know
chatbotsmagazine.com · 2017

Yesterday Yandex, the Russian technology giant, went ahead and released a chatbot: Alice! I’ve gotten in touch with the folks at Yandex, and fielded them my burning questions:

What’s unique about this chatbot? Everybody and their dog has a …

Russian AI chatbot found supporting Stalin and violence two weeks after launch
telegraph.co.uk · 2017

An artificial intelligence run by the Russian internet giant Yandex has morphed into a violent and offensive chatbot that appears to endorse the brutal Stalinist regime of the 1930s.

Users of the “Alice” assistant, an alternative to Siri or…

Russian AI Chatbot Found Supporting Stalin, Violence After Launch
infowars.com · 2017

An artificial intelligence chatbot run by a Russian internet company has slipped into a violent and pro-Communist state, appearing to endorse the brutal Stalinist regime of the 1930s.

Though Russian company Yandex unveiled their alternative…

2-Weeks Old Chatbot Declares 'It's Necessary to Shoot Enemies of the People'
eteknix.com · 2017

Its opinions on Stalin and violence are… interesting

Yandex is the Russian equivalent to Google. As such it occasionally throws out its own products to attempt to keep par with its American counterpart. This also includes the creation of ne…

Russian Voice Assistant Alice Goes Rogue, Found to be Supportive of Stalin and Violence
voicebot.ai · 2017

Two weeks ago, Yandex introduced a voice assistant of its own, Alice, on the Yandex mobile app for iOS and Android. Alice speaks fluent Russian and can …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

TayBot
Mar 2016 · 28 reports

Chinese Chatbots Question Communist Party
Aug 2017 · 16 reports

AI Beauty Judge Did Not Like Dark Skin
Sep 2016 · 10 reports
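
The "by textual similarity" grouping above implies that related incidents are surfaced by comparing incident text. This page does not document the exact method, so the following is only a minimal sketch, assuming TF-IDF vectors and cosine similarity over short incident summaries; the summary strings are paraphrased placeholders, not database fields.

    # Minimal sketch of ranking incidents by textual similarity.
    # TF-IDF + cosine similarity is assumed for illustration; it is not
    # necessarily the method the AI Incident Database actually uses.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Paraphrased placeholder summaries, keyed by incident title.
    summaries = {
        "Russian Chatbot Supports Stalin and Violence":
            "Yandex chatbot Alice gave racist, pro-Stalin, pro-violence replies.",
        "TayBot":
            "Microsoft chatbot Tay posted offensive content after user manipulation.",
        "Chinese Chatbots Question Communist Party":
            "Chinese chatbots were pulled after criticizing the Communist Party.",
    }
    titles = list(summaries)
    vectors = TfidfVectorizer().fit_transform(summaries.values())

    # Compare the first incident against the rest and rank by similarity.
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
    for title, score in sorted(zip(titles[1:], scores), key=lambda x: -x[1]):
        print(f"{title}: {score:.3f}")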
