AI Incident Database

Incident 814: AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Description: A user on the Character.ai platform created an unauthorized AI avatar of Jennifer Ann Crecente, a murder victim from 2006, without her family's consent. The avatar was made publicly available, violating Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar, acknowledging a policy violation.


Entities

Alleged: Character.AI developed and deployed an AI system, which harmed Jennifer Ann Crecente, Drew Crecente, the Crecente family, and Brian Crecente.
Alleged implicated AI system: Character.AI

Incident Stats

Incident ID: 814
Report Count: 9
Incident Date: 2024-10-02
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

5.1. Overreliance and unsafe use

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

5. Human-Computer Interaction

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional
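
For readers who consume these classifications programmatically, the fields above map naturally onto a small structured record. The sketch below is illustrative only: the `IncidentClassification` dataclass and its field names are assumptions made for this example, not the actual AIID export format or the MIT taxonomy schema.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative only: field names are assumptions for this example,
# not the real AIID or MIT AI Risk Repository schema.
@dataclass
class IncidentClassification:
    incident_id: int
    taxonomy: str
    risk_domain: str      # one of the seven MIT risk domains
    risk_subdomain: str   # one of the 23 subdomains
    entity: str           # entity presented as the main cause of the risk
    timing: str           # stage in the AI lifecycle
    intent: str           # expected vs. unexpected outcome

incident_814 = IncidentClassification(
    incident_id=814,
    taxonomy="MIT",
    risk_domain="5. Human-Computer Interaction",
    risk_subdomain="5.1. Overreliance and unsafe use",
    entity="Human",
    timing="Post-deployment",
    intent="Unintentional",
)

# Serialize the classification as JSON for downstream analysis.
print(json.dumps(asdict(incident_814), indent=2))
```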

Incident Reports

Character.AI Deletes Avatar of Murdered Girl After Admitting a Policy Violation
adweek.com · 2024

AI firm Character.AI is facing backlash after using the likeness of Jennifer Ann Crecente, an 18-year-old murder victim from 2006, in a video game without her family’s consent. The company said that the character violated its policies again…

AI company allowed user to generate digital character of murdered teen girl
dailydot.com · 2024

A California-based tech company admits its service was used to create a digital character based on a murdered girl.

In a post on Wednesday, Brian Crecente, founder of the video game news website Kotaku, pointed the finger at Character.AI af…

Horrified Dad Discovers Dead Daughter's Yearbook Photo Being Used by AI Chatbot: 'It Shocks the Conscience'
latintimes.com · 2024

A dad was horrified to receive a notification Wednesday that his dead daughter’s yearbook photo and name had been used to create a chatbot on the popular AI site, Character AI.

A father was horrified to discover that his de…

An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.
futurism.com · 2024

This one's nasty — in one of the more high-profile, macabre incidents involving AI-generated content in recent memory, Character.AI, the chatbot startup founded by ex-Google staffers, was pushed to delete a user-created avatar of an 18-year…

Character.AI Let a User “Recreate” Deceased Woman
mindmatters.ai · 2024

Pandora’s box is an appropriate parable for the age of AI. Questioning the range of impacts of new technologies seems like a simple ethical necessity, but some major AI companies haven’t read the memo. Character.AI, an OpenAI competitor sta…

An AI chatbot of a girl murdered in 2006 was created on a hugely popular service — and her family had no idea
businessinsider.com · 2024

Jennifer Ann Crecente, who was murdered in 2006, had her photo and name used in a chatbot created on Character.ai that her father discovered this week.

Drew Crecente woke at 6:30 a.…

His daughter was murdered. Then she reappeared as an AI chatbot.
washingtonpost.com · 2024

One morning in early October, about 18 years after his daughter Jennifer was murdered, Drew Crecente received a Google alert flagging what appeared to be a new profile of her online.

The profile had Jennifer's full name and a yearbook photo…

Anyone Can Turn You Into an AI Chatbot. There’s Little You Can Do to Stop Them
wired.com · 2024

Drew Crecente's daughter died in 2006, killed by an ex-boyfriend in Austin, Texas, when she was just 18. Her murder was highly publicized—so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.

T…

Dad Discovers Murdered Daughter Is Now a Chatbot
newser.com · 2024

A father who continues to grieve the 2006 murder of his 18-year-old daughter says he was appalled to discover her name and yearbook photo were used to create an AI chatbot. Drew Crecente found the chatbot earlier this month on Character.ai,…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform
Oct 2024 · 1 report

By textual similarity

TayBot
Mar 2016 · 28 reports

Uber AV Killed Pedestrian in Arizona
Mar 2018 · 25 reports

A Collection of Tesla Autopilot-Involved Crashes
Jun 2016 · 22 reports

