AI Incident Database

Incident 609: Flawed AI in Google Search Reportedly Misinforms about Geography

Description: Google's search AI erroneously claimed that no African country begins with the letter 'K' and gave similarly wrong answers to other geography-and-letter questions, serving the errors to users in a flawed featured snippet. The answers originated in ChatGPT-written posts that Google scraped and presented as fact, highlighting how AI-generated content can feed misinformation into search results and undermine Google's reliability as an information source.
Editor Notes: Incident 609 is occasionally referenced in reports associated with Incident 693.


Entities

Alleged: Google and ChatGPT developed an AI system deployed by Google, which harmed the general public.

Incident Stats

Incident ID: 609
Report Count: 2
Incident Date: 2023-08-16
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
3.1. False or misleading information

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
3. Misinformation

Entity
Which, if any, entity is presented as the main cause of the risk.
AI

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional
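
The machine-classified fields above are structured data. For readers who work with the database programmatically (for example, via a downloaded export), here is a minimal Python sketch of how such a record might be modeled; the class and field names are illustrative assumptions, not the AIID's or the MIT taxonomy's actual schema:

```python
from dataclasses import dataclass
from enum import Enum


class RiskDomain(Enum):
    """The seven AI risk domains of the MIT Domain Taxonomy of AI Risks."""
    DISCRIMINATION_AND_TOXICITY = 1
    PRIVACY_AND_SECURITY = 2
    MISINFORMATION = 3
    MALICIOUS_ACTORS_AND_MISUSE = 4
    HUMAN_COMPUTER_INTERACTION = 5
    SOCIOECONOMIC_AND_ENVIRONMENTAL_HARMS = 6
    AI_SYSTEM_SAFETY_FAILURES_AND_LIMITATIONS = 7


@dataclass
class MITClassification:
    """One machine-classified taxonomy record for an incident (hypothetical schema)."""
    incident_id: int
    risk_domain: RiskDomain
    risk_subdomain: str   # e.g. "3.1. False or misleading information"
    entity: str           # entity presented as the main cause of the risk
    timing: str           # stage in the AI lifecycle, e.g. "Post-deployment"
    intent: str           # "Intentional" or "Unintentional"


# The classification shown on this page, expressed in the sketch's terms.
incident_609 = MITClassification(
    incident_id=609,
    risk_domain=RiskDomain.MISINFORMATION,
    risk_subdomain="3.1. False or misleading information",
    entity="AI",
    timing="Post-deployment",
    intent="Unintentional",
)
```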

Incident Reports


AI Search Is Turning Into the Problem Everyone Worried About
theatlantic.com · 2023

There is no easy way to explain the sum of Google’s knowledge. It is ever-expanding. Endless. A growing web of hundreds of billions of websites, more data than even 100,000 of the most expensive iPhones mashed together could possibly store.…

Google’s Search AI Is Absolutely Horrible at Geography
futurism.com · 2023

Google's AI-powered search doesn't understand geography. Or, apparently, the alphabet. And definitely not both at the same time.

It all started when a Bluesky user declared that Google is now "dead." They included a screenshot of Google's f…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

Google AI Reportedly Delivering Confidently Incorrect and Harmful Information
May 2024 · 7 reports

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

Biased Google Image Results
Mar 2016 · 18 reports

Inappropriate Gmail Smart Reply Suggestions
Nov 2015 · 22 reports
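
The page does not say how the "By textual similarity" ranking above is computed, so the following is purely an illustration of the general technique rather than the AIID's actual method: ranking candidate incidents against this one by cosine similarity of TF-IDF vectors, using scikit-learn, with toy one-line descriptions standing in for full report texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for incident report texts; a real ranking would use far
# richer text than these one-liners.
docs = {
    "incident_609": "Google's search AI claimed no African country begins with 'K'.",
    "wikipedia_bot_loop": "Wikipedia vandalism-prevention bots locked in a loop of reverting each other's edits.",
    "biased_image_results": "Google Image search returned biased results for certain queries.",
    "gmail_smart_reply": "Gmail Smart Reply suggested inappropriate responses to some emails.",
}

ids = list(docs)
matrix = TfidfVectorizer().fit_transform(docs.values())

# Compare the first document (incident 609) against every document, itself included.
scores = cosine_similarity(matrix[0], matrix).ravel()
for incident, score in sorted(zip(ids, scores), key=lambda p: -p[1]):
    print(f"{incident}: {score:.3f}")
```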

2023 - AI Incident Database