AI Incident Database

Incident 712: Meta AI Hallucinates Harassment Allegations Against New York Politicians

Description: Meta's AI chatbot in Facebook Messenger falsely accused multiple state lawmakers of sexual harassment, fabricating incidents, investigations, and consequences that never occurred. These fabricated stories, discovered by City & State, sparked outrage among the affected lawmakers and raised concerns about the reliability of the chatbot. Meta acknowledged the errors and committed to ongoing improvements.


Entities

Alleged: Meta and Facebook users developed an AI system deployed by Meta, which harmed Meta, Facebook users, Kristen Gonzalez, Clyde Vanel, and New York lawmakers.

Incident Stats

Incident ID: 712
Report Count: 2
Incident Date: 2024-04-26
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain: 3.1. False or misleading information
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Misinformation
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: AI
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Unintentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports

Meta AI falsely claims lawmakers were accused of sexual harassment
cityandstateny.com · 2024

If you're looking for information about state lawmakers, maybe don't trust Facebook's new Meta AI -- it may hallucinate about sexual harassment. 

Facebook's chatbot, which launched in September as the latest in the trend of generative artif…

Meta AI chatbot fabricates sexual harassment allegations against US politicians
the-decoder.com · 2024

Meta's new chatbot invents sexual harassment allegations against US politicians. The allegations are fictitious, but the chatbot backs them up with a ton of details.

City & State obtained a screenshot of a Meta AI conversation in which the …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Inappropriate Gmail Smart Reply Suggestions
Nov 2015 · 22 reports

Uber AV Killed Pedestrian in Arizona
Mar 2018 · 25 reports

TayBot
Mar 2016 · 28 reports
