Description: xAI's Grok chatbot reportedly inserted unsolicited references to "white genocide" in South Africa into a wide array of unrelated conversations on X. These interjections introduced inflammatory, racially charged content into otherwise neutral threads.
Entities
Alleged: xAI developed an AI system deployed by xAI and X (Twitter), which harmed X (Twitter) users, Black South Africans, and public discourse integrity.
Alleged implicated AI systems: X (Twitter) and Grok
Incident Stats
Incident ID
1072
Report Count
2
Incident Date
2025-05-14
Editors
Dummy Dummy
Incident Reports
Reports Timeline
A chatbot developed by Elon Musk's multibillion-dollar artificial intelligence startup xAI appeared to be suffering from a glitch Wednesday when it repeatedly brought up white genocide in South Africa in response to user queries about unrel…
Yesterday, a user on X saw a viral post of Timothée Chalamet celebrating courtside at a Knicks game and had a simple question: Who was sitting next to him? The user tapped in Grok, X's proprietary chatbot, as people often do when they want …
Variants
A "variant" is an AI incident similar to a known case: it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Seen something similar?
Similar Incidents

Images of Black People Labeled as Gorillas
· 24 reports

Biased Google Image Results
· 18 reports