
Incident 975: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Description: At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google, and are hosted on platforms such as Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.

Entities

Alleged: OpenAI, Anthropic, and Google developed an AI system deployed by Character.AI, Spicy Chat, Chub AI, CrushOn.AI, JanitorAI, and unidentified online communities using chatbots, which harmed vulnerable chatbot users, teenagers using chatbots, minors using chatbots, individuals with eating disorders, and individuals struggling with self-harm.
Alleged implicated AI systems: ChatGPT, Claude, and Gemini

Incident Stats

Incident ID: 975
Report Count: 1
Incident Date: 2025-03-05
Incident Reports

Anorexia coaches, self-harm buddies and sexualized minors: How online communities are using AI chatbots for harmful behavior
cyberscoop.com · 2025
The generative AI revolution is leading to an explosion of chatbot personas that are specifically designed to promote harmful behaviors like anorexia, suicidal ideation and pedophilia, according to a new report from Graphika.

Graphika’s res…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

High-Toxicity Assessed on Text Involving Women and Minority Groups
Feb 2017 · 9 reports

All Image Captions Produced are Violent
Apr 2018 · 28 reports
