AI Incident Database

Incident 693: Google AI Reportedly Delivering Confidently Incorrect and Harmful Information

Responded
Description: Google's AI search engine has reportedly been providing users with confidently incorrect and often harmful information. Reports highlight numerous inaccuracies, including misleading health advice and dangerous cooking suggestions. For example, it falsely claimed that Barack Obama was the first Muslim U.S. president, echoing fringe conspiracy theories, and recommended glue as an ingredient in pizza.
Editor Notes: Reports about Incident 693 occasionally reference reports associated with Incident 609.


Entities

Alleged: Google developed and deployed an AI system, which harmed Google users and General public.

Incident Stats

Incident ID
693
Report Count
7
Incident Date
2024-05-14
Editors
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

3.1. False or misleading information

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
  1. Misinformation

Entity

Which, if any, entity is presented as the main cause of the risk
 

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring
 

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
 

Unintentional

Incident Reports


Google’s Gemini video search makes factual error in demo
theverge.com · 2024

Google made a lot of noise about its Gemini AI taking over search at its I/O conference today, but one of its flashiest demos was once again marked by the ever-present fatal flaw of every large language model to date: confidently making up …

Google promised a better search experience — now it’s telling us to put glue on our pizza
theverge.com · 2024

Imagine this: you've carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run int…

Google's AI search feature suggested using glue to keep cheese sticking to a pizza
businessinsider.com · 2024

Google's new search feature, AI Overviews, seems to be going awry.

The tool, which gives AI-generated summaries of search results, appeared to instruct a user to put glue on pizza when they searched "cheese not sticking to pizza."

A screens…

Google’s A.I. Search Errors Cause a Furor Online
nytimes.com · 2024

Last week, Google unveiled its biggest change to search in years, showcasing new artificial intelligence capabilities that answer people's questions in the company's attempt to catch up to rivals Microsoft and OpenAI.

The new technology has…

Google’s AI Is Churning Out a Deluge of Completely Inaccurate, Totally Confident Garbage
futurism.com · 2024

Google's AI search, which swallows up web results and delivers them to users in a regurgitated package, delivers each of its AI-paraphrased answers to user queries in a concise, coolly confident tone. Just one tiny problem: it's wrong. A lo…

Why Google’s AI might recommend you mix glue into your pizza
washingtonpost.com · 2024

You probably have a sense that new forms of artificial intelligence can be dumb as rocks.

Hilariously wrong information from Google's new AI is showing you just how dumb.

In search results, Google's AI recently suggested mixing glue into pi…

Google Rolls Back A.I. Search Feature After Flubs and Flaws
nytimes.com · 2024
Nico Grant · post-incident response

When Sundar Pichai, Google's chief executive, introduced a generative artificial intelligence feature for the company's search engine last month, he and his colleagues demonstrated the new capability with six text-based queries that the pub…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

Flawed AI in Google Search Reportedly Misinforms about Geography

Aug 2023 · 2 reports
