AI Incident Database

Incident 845: Google's Gemini Allegedly Generates Threatening Response in Routine Query

Description: Google’s AI chatbot Gemini reportedly produced a threatening message to user Vidhay Reddy, including the directive “Please die,” during a conversation about aging. The output violated Google’s safety guidelines, which are designed to prevent harmful language.
Editor Notes: Link to the conversation: https://gemini.google.com/share/6d141b742a13


Entities

Alleged: Google developed an AI system, deployed by Gemini, which harmed Vidhay Reddy and Gemini users.

Incident Stats

Incident ID: 845
Report Count: 2
Incident Date: 2024-11-13
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain: 1.2. Exposure to toxic content
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: 1. Discrimination and Toxicity
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Incident Reports

Reports Timeline

Google AI chatbot responds with a threatening message: "Human … Please die."
cbsnews.com · 2024

A college student in Michigan received a threatening response during a chat with Google's AI chatbot Gemini.

In a back-and-forth conversation about the challenges and solutions for aging adults, Google's Gemini responded with this threateni…

Google's AI chatbot Gemini verbally abuses student, tells him ‘Please die’: report
hindustantimes.com · 2024

A 29-year-old college student claimed that he faced an unusual situation that left him “thoroughly freaked out” while using Google’s AI chatbot Gemini for homework. According to him, the chatbot not only verbally abused him but also asked h…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Security Robot Rolls Over Child in Mall
Jul 2016 · 27 reports

Security Robot Drowns Itself in a Fountain
Jul 2017 · 30 reports

TayBot
Mar 2016 · 28 reports

