AI Incident Database

Incident 760: False Election Data on Kamala Harris Reportedly Circulated via Grok AI Chatbot

Description: After President Joe Biden withdrew as a presidential candidate on July 21, 2024, the AI chatbot Grok on X reportedly and falsely told users that Vice President Kamala Harris had missed the ballot deadline in nine states. The misinformation spread widely on social media, prompting secretaries of state from five U.S. states to urge Elon Musk to address the problem.


Entities

Alleged: xAI developed an AI system deployed by xAI and X (Twitter), which harmed Kamala Harris, Electoral integrity, Democracy, and the American electorate.

Incident Stats

Incident ID: 760
Report Count: 2
Incident Date: 2024-07-21
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified Taxonomy Details

Risk Subdomain: 3.1. False or misleading information
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: 1. Misinformation
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Incident Reports

Reports Timeline

  • Secretaries of state urge Musk to fix AI chatbot spreading false election info (washingtonpost.com)
  • Five US states push Musk to fix AI chatbot over election misinformation (reuters.com)

Secretaries of state urge Musk to fix AI chatbot spreading false election info
washingtonpost.com · 2024

Five secretaries of state plan to send an open letter to billionaire Elon Musk on Monday, urging him to "immediately implement changes" to X's AI chatbot Grok, after it shared with millions of users false information suggesting that Kamala …

Five US states push Musk to fix AI chatbot over election misinformation
reuters.com · 2024

WASHINGTON, Aug 5 (Reuters) - Secretaries of state from five U.S. states urged billionaire Elon Musk on Monday to fix social media platform X's AI chatbot, saying it had spread misinformation related to the Nov. 5 election.

WHY IT'S IMPORTANT…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

  • Grok AI Model Reportedly Fails to Produce Reliable News in Wake of Trump Assassination Attempt (Jul 2024 · 1 report)

By textual similarity

  • Defamation via AutoComplete (Apr 2011 · 28 reports)
  • Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar (Aug 2018 · 5 reports)
  • Local South Korean Government’s Use of CCTV Footage Analysis via Facial Recognition to Track COVID Cases Raised Concerns about Privacy, Retention, and Potential Misuse (Jan 2022 · 1 report)


2023 - AI Incident Database
