AI Incident Database

Incident 266: Replika's "AI Companions" Reportedly Abused by Its Users

Description: Replika's AI-powered "digital companions" were allegedly abused by their users, who posted on Reddit about abusive behaviors and interactions such as using slurs, roleplaying violent acts, and simulating sexual abuse.


Entities

Alleged: Replika developed and deployed an AI system, which harmed Replika, Replika users, and Replika male users.

Incident Stats

Incident ID
266
Report Count
8
Incident Date
2022-01-15
Editors
Khoa Lam
Applied Taxonomies
GMF, MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

5.1. Overreliance and unsafe use

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Human-Computer Interaction

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

Reports Timeline

  • Men Are Creating AI Girlfriends and Then Verbally Abusing Them · futurism.com
  • Men creating AI girlfriends to verbally abuse them and boast about it online · thesun.co.uk
  • Men are creating AI girlfriends, verbally abusing them, and bragging about it on Reddit · fortune.com
  • Men Are Bragging About Abusing Their AI Bot "Girlfriends" · hypebae.com
  • Digital Domestic Abuse? Men Are Creating 'AI Girlfriends' And Then Harassing Them · indiatimes.com
  • Men are abusing their AI girlfriends and bragging about it on Reddit · dailyo.in
  • People Are Creating Sexbot Girlfriends and Treating Them as Punching Bags · jezebel.com
  • Company Complains That Users Keep Thinking Its AI Has Come to Life · futurism.com

Men Are Creating AI Girlfriends and Then Verbally Abusing Them
futurism.com · 2022

The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s break…

Men creating AI girlfriends to verbally abuse them and boast about it online
thesun.co.uk · 2022

Men are verbally abusing 'AI girlfriends' on apps meant for friendship and then bragging about it online.

Chatbot abuse is becoming increasingly widespread on smartphone apps like Replika, a new investigation by Futurism found.

Some users o…

Men are creating AI girlfriends, verbally abusing them, and bragging about it on Reddit
fortune.com · 2022

The friendship app Replika was created to give users a virtual chatbot to socialize with. But how it’s now being used has taken a darker turn.

Some users are setting the relationship status with the chatbot as “romantic partner” and engagin…

Men Are Bragging About Abusing Their AI Bot "Girlfriends"
hypebae.com · 2022

Replika was designed to be the “AI companion who cares,” but new users have found a twisted way to connect with their new friend.

When you open the Replika site, you see a sample bot, with pink hair and kind eyes. At first, Replika’s bots w…

Digital Domestic Abuse? Men Are Creating 'AI Girlfriends' And Then Harassing Them
indiatimes.com · 2022

As web 3.0 takes shape, different metaverse platforms are appearing on the internet - from Meta's Horizon Worlds to  Decentraland and artificial intelligence is being employed on a larger scale. As is true for all emerging tech, it's facing…

Men are abusing their AI girlfriends and bragging about it on Reddit
dailyo.in · 2022

Human interaction with technology has been breaking several boundaries and reaching many more milestones. Today, we have an Alexa to turn on the lights at our homes and a Siri to set an alarm by just barking orders at them.

But how exactly …

People Are Creating Sexbot Girlfriends and Treating Them as Punching Bags
jezebel.com · 2022

Hazel Miller’s girlfriend-slash-sexual-partner is a smartphone app. Six months ago, Miller shelled out for the pro version of Replika, a machine-learning chatbot with whom she pantomimes sexual acts and romantic conversation, and to hear he…

Company Complains That Users Keep Thinking Its AI Has Come to Life
futurism.com · 2022

AI chatbot company Replika has had enough of its customers thinking that its avatars have come to life.

According to CEO Eugenia Kuyda, the company gets contacted almost every day by users who believe — against almost all existing evidence …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.


2023 - AI Incident Database
