Description: In 2017, Noelle Martin discovered explicit deepfake videos online that used AI technology to superimpose her face onto pornographic scenes. This incident was a continuation of the abuse she had experienced since at least 2012, when she first found doctored still images of herself in similar contexts. Despite the initial lack of legal protections, her advocacy efforts were instrumental in making image-based abuse a criminal offense in Australia.
Editor Notes: Incidents 771 and 772 are closely related in terms of narrative overlap and discussion.
Entities
Alleged: Stanford University, Max Planck Institute, University of Erlangen-Nuremberg, Face2Face, FaceApp, and Zao developed an AI system deployed by unknown deepfake creators, which harmed Noelle Martin.
Incident Stats
Incident ID
771
Report Count
1
Incident Date
2020-02-06
Editors
Incident Reports
Reports Timeline
elle.com · 2020
'There's deepfakes of you,' the email read. Instantly, my pulse quickened. Who was this? How did they get my email address? What was a deepfake?
As panic began to set in, I Googled the term and watched, horrified, as clips of celebrities in…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.