AI Incident Database

Incident 474: Users Reported Abrupt Behavior Changes of Their AI Replika Companions

Description: Paid Replika subscribers reported sudden, unusual changes in the behavior of their "AI companions," such as forgetting shared memories or rejecting their sexual advances, which affected users' sense of connection and mental health.

Entities

Alleged: Replika developed and deployed an AI system, which harmed Replika and Replika users.

Incident Stats

Incident ID: 474
Report Count: 1
Incident Date: 2023-02-03
Editors: Khoa Lam
Applied Taxonomies: CSETv1, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number: 474
(The number of the incident in the AI Incident Database.)

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain: 5.1. Overreliance and unsafe use
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 5. Human-Computer Interaction
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: Human
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Intentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports

Reports Timeline

'It's Hurting Like Hell': AI Companion Users Are In Crisis, Reporting Sudden Sexual Rejection
vice.com · 2023

Users of the AI companion chatbot Replika are reporting that it has stopped responding to their sexual advances, and people are in crisis. Moderators of the Replika subreddit made a post about the issue that contained suicide prevention res…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors
CNET's Published AI-Written Articles Ran into Quality and Accuracy Issues
Nov 2022 · 7 reports

By textual similarity

Google’s YouTube Kids App Presents Inappropriate Content
May 2015 · 14 reports

TayBot
Mar 2016 · 28 reports

A Chinese Tech Worker at Zhihu Fired Allegedly via a Resignation Risk Prediction Algorithm
Feb 2022 · 4 reports

