AI Incident Database

Incident 1010: GenNomis AI Database Reportedly Exposes Nearly 100,000 Deepfake and Nudify Images in Public Breach

Description: In March 2025, cybersecurity researcher Jeremiah Fowler discovered an unprotected database linked to GenNomis by AI-NOMIS, a South Korean company offering face-swapping and "nudify" AI services. The exposed 47.8 GB dataset included nearly 100,000 files, many of them explicit deepfake images, some involving minors or celebrities. No personal data was found, but the exposure represented a serious failure of data security and consent safeguards in AI image-generation platforms.


Entities

Alleged: GenNomis by AI-NOMIS and AI-NOMIS developed an AI system deployed by GenNomis by AI-NOMIS and GenNomis, which harmed individuals whose likenesses were used without consent, public figures and celebrities depicted in explicit AI images, minors, and the general public.
Alleged implicated AI systems: GenNomis and Unnamed cloud database

Incident Stats

Incident ID: 1010
Report Count: 2
Incident Date: 2025-03-31
Editors:

Incident Reports

Thousands of AI & DeepFake Images Exposed on Nudify Service Data Breach
vpnmentor.com · 2025

Cybersecurity researcher Jeremiah Fowler discovered and reported to vpnMentor a non-password-protected database that contained just under 100k records belonging to GenNomis by AI-NOMIS --- an AI company based in South Korea that pro…

GenAI website goes dark after explicit fakes exposed
theregister.com · 2025

Jeremiah Fowler, an Indiana Jones of insecure systems, says he found a trove of sexually explicit AI-generated images exposed to the public internet – all of which disappeared after he tipped off the team seemingly behind the highly questio…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity


Alleged Issues with Proctorio's Remote-Testing AI Prompted Suspension by University
Jan 2020 · 6 reports

Clearview AI Algorithm Built on Photos Scraped from Social Media Profiles without Consent
Jun 2017 · 10 reports

Ever AI Reportedly Deceived Customers about FRT Use in App
Apr 2019 · 7 reports

2023 - AI Incident Database