AI Incident Database

Incident 963: Google Reports Alleged Gemini-Generated Terrorism and Child Exploitation to Australian eSafety Commission

Description: Google reported to Australia's eSafety Commission that it received 258 complaints globally about AI-generated deepfake terrorism content and 86 about child abuse material made with its Gemini AI. The regulator called this a "world-first insight" into AI misuse. While Google uses hash-matching to detect child abuse content, it lacks a similar system for extremist material.
Editor Notes: Google's reporting period for this data was April 2023 to February 2024. The information was widely reported on March 5, 2025.


Entities

View all entities
Alleged: Google and Gemini developed and deployed an AI system, which harmed the General public, General public of Australia, Google Gemini users, Victims of deepfake terrorism content, Victims of deepfake child abuse, and Victims of online radicalization.
Alleged implicated AI system: Gemini

Incident Stats

Incident ID
963
Report Count
1
Incident Date
2025-03-05
Editors

Incident Reports

Reports Timeline

Google reports scale of complaints about AI deepfake terrorism content to Australian regulator
reuters.com · 2025

SYDNEY, March 6 (Reuters) - Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material.

The Alphabet-own…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Australian Automated Debt Assessment System Issued False Notices to Thousands
Jul 2015 · 39 reports

Defamation via AutoComplete
Apr 2011 · 28 reports

Australian Retailers Reportedly Captured Face Prints of Their Customers without Consent
May 2022 · 2 reports

