AI Incident Database

Incident 494: Female Celebrities' Faces Shown in Sexually Suggestive Ads for Deepfake App

Description: Sexually suggestive videos featuring the faces of female celebrities such as Emma Watson and Scarlett Johansson were run as ads on social media for an app that allows users to create deepfakes.


Entities

Alleged: Facemega developed and deployed an AI system, which harmed Scarlett Johansson, Emma Watson, and other female celebrities.

Incident Stats

Incident ID: 494
Report Count: 5
Incident Date: 2023-03-05
Editors: Khoa Lam
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

4.3. Fraud, scams, and targeted manipulation

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Malicious Actors & Misuse

Entity
Which, if any, entity is presented as the main cause of the risk.

Human

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

Hundreds of sexual deepfake ads using Emma Watson’s face ran on Facebook and Instagram in the last two days
nbcnews.com · 2023

In a Facebook ad, a woman with a face identical to actress Emma Watson’s face smiles coyly and bends down in front of the camera, appearing to initiate a sexual act. But the woman isn’t Watson, the “Harry Potter” star. The ad was part of a …

Why the Facemega deepfake app is a slippery slope
thred.com · 2023

Looking at the video of a deep fake’d Emma Watson above, you can see how realistic the technology is.

Seriously, I went to the critically-acclaimed ABBA holographic concert in London recently and the lifelike movements in that clip are stri…

Disturbing App That Advertised Emma Watson Deepfake Was Removed From App Stores
jezebel.com · 2023

Earlier this week, an NBC report unearthed a celebrity face-swapping app, Facemega, with the potential to easily create deepfake porn depicting famous or public-facing women. Deepfake porn refers to fake but highly realistic, often AI-gener…

Demand for deepfake pornography is exploding. We aren’t ready
theguardian.com · 2023

In the ad, a woman in a white lace dress makes suggestive faces at the camera, and then kneels. There’s something a bit uncanny about her; a quiver at the side of her temple, a peculiar stillness of her lip. But if you saw the video in the …

A face-swap app promoted sexually suggestive ads with Emma Watson's face. An attorney says this is how deepfake tech can be used as a 'weapon' against women.
insider.com · 2023

Multiple online stores and Meta have removed a controversial face swap app that promoted a sexually suggestive ad featuring the face of the "Harry Potter" actor Emma Watson imposed onto someone else. 

The app for creating deepfakes, called …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity


Google’s YouTube Kids App Presents Inappropriate Content
May 2015 · 14 reports

Deepfake Obama Introduction of Deepfakes
Jul 2017 · 29 reports

FaceApp Racial Filters
Apr 2017 · 23 reports


2023 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • 30ebe76