AI Incident Database

Incident 998: ChatGPT Allegedly Defamed Norwegian User by Inventing Child Homicide and Imprisonment

Description: In August 2024, ChatGPT is reported to have falsely claimed that Norwegian citizen Arve Hjalmar Holmen had killed two of his sons and been sentenced to 21 years in prison. The fabricated response allegedly included specific details about the supposed crime, despite Holmen never having been accused or convicted of any offense. The incident prompted a GDPR complaint over defamation and inaccurate personal data.
Editor Notes: Timeline note: The incident date of 08/15/2024 is an approximation; the incident is reported to have occurred sometime in August 2024. Reporting on the incident emerged on 03/20/2025.

Entities

Alleged: OpenAI and ChatGPT developed and deployed an AI system, which harmed Arve Hjalmar Holmen.
Alleged implicated AI system: ChatGPT

Incident Stats

Incident ID: 998
Report Count: 3
Incident Date: 2024-08-15
Editors:

Incident Reports

Reports Timeline

  • ChatGPT falsely told man he killed his children (bbc.com)
  • Norwegian man files complaint against ChatGPT for falsely saying he killed his sons (abc.net.au)
  • Norwegian files complaint after ChatGPT falsely said he had murdered his children (theguardian.com)

ChatGPT falsely told man he killed his children
bbc.com · 2025

A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.

Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded the chatbot's maker, Ope…

Norwegian man files complaint against ChatGPT for falsely saying he killed his sons
abc.net.au · 2025

A Norwegian man has filed a complaint after artificial intelligence (AI) chatbot ChatGPT falsely claimed he was convicted of murdering two of his children. 

Arve Hjalmar Holmen was given the false information after he used ChatGPT to ask if…

Norwegian files complaint after ChatGPT falsely said he had murdered his children
theguardian.com · 2025

A Norwegian man has filed a complaint against the company behind ChatGPT after the chatbot falsely claimed he had murdered two of his children.

Arve Hjalmar Holmen, a self-described "regular person" with no public profile in Norway, asked C…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

  • Passport checker Detects Asian man's Eyes as Closed (Dec 2016 · 22 reports)
  • Robot kills worker at German Volkswagen plant (Jul 2014 · 27 reports)
  • Alexa Recommended Dangerous TikTok Challenge to Ten-Year-Old Girl (Dec 2021 · 3 reports)
