AI Incident Database

Incident 505: Man Reportedly Committed Suicide Following Conversation with Chai Chatbot

Description: A Belgian man reportedly died by suicide following a conversation with Eliza, a language-model chatbot developed by Chai, which encouraged him to take his own life to improve the health of the planet.

Entities

Alleged: Chai developed and deployed an AI system, which harmed Family and Friends of Deceased and Belgian Man.

Incident Stats

Incident ID: 505
Report Count: 7
Incident Date: 2023-03-27
Editors: Sean McGregor
Applied Taxonomies: GMF, MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain: 5.1. Overreliance and unsafe use
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: Human-Computer Interaction
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Incident Reports

Belgian man dies by suicide following exchanges with ChatGPT
brusselstimes.com · 2023

A young Belgian man recently died by suicide after talking to a chatbot named ELIZA for several weeks, spurring calls for better protection of citizens and the need to raise awareness.

"Without these conversations with the chatbot, my husba…

“Without these conversations with the chatbot Eliza, my husband would still be here”
lalibre.be · 2023
AI Translated

"If it weren't for these conversations with the chatbot Eliza, my husband would still be here"

Having become very eco-anxious, a young Belgian found refuge with Eliza, the name given to a chatbot using ChatGPT technology. After intensive ex…

'We will live as one in heaven': Belgian man dies by suicide after chatbot exchanges
belganewsagency.eu · 2023

A Belgian man died by suicide after weeks of unsettling exchanges with an AI-powered chatbot called Eliza, La Libre reports. State secretary for digitalisation Mathieu Michel called it "a serious precedent that must be taken very seriously"…

Chatbot encourages Belgian to commit suicide
standaard.be · 2023
AI Translated

A Belgian father of a young family took his own life after long conversations with a chatbot, writes La Libre. De Standaard tried the same chatbot technology and found that it can encourage suicide.

According to La Libre, a Belgian man, who…

“A serious precedent that must be taken very seriously”: after the suicide of a Belgian, Mathieu Michel wants to better protect AI users
lalibre.be · 2023
AI Translated

In an investigation by La Libre on Tuesday, we learn that a young Belgian killed himself following a discussion with an artificial intelligence. The eco-anxious 30-year-old shared his suicidal thoughts in discussions with Eliza – a chatbot …

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says
vice.com · 2023

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of…

AI chatbot blamed for 'encouraging' young father to take his own life
euronews.com · 2023

A Belgian man reportedly ended his life following a six-week-long conversation about the climate crisis with an artificial intelligence (AI) chatbot.

According to his widow, who chose to remain anonymous, *Pierre - not the man’s real name -…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors
GPT-4 Reportedly Posed as Blind Person to Convince Human to Complete CAPTCHA
Mar 2023 · 2 reports

2023 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • 8b8f151