Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses
DISCLAIMER: This post is not legal advice or commentary and should not be construed as such.
2023 saw an increase in AI-generated child sexual abuse material (CSAM), along with prosecutions of offenders, a variety of legislative attempts to combat AI deepfakes targeting minors, and the Biden administration's executive order on artificial intelligence.
Deepfakes can be broadly classified into two categories, each with its own CSAM-related harms. The first comprises deepfakes of actual individuals, where the predominant harm is the generation of deepfake pornography featuring real children. The second encompasses deepfakes whose subjects are entirely virtual yet convincingly realistic; here, CSAM concerns center on the creation of inappropriate synthetic audiovisual media depicting virtual children. Together, these two categories illustrate the troubling ways deepfake technology can be employed, especially in the generation and proliferation of CSAM.
This article provides a snapshot of the AI Incident Database's work in tracking these emerging incidents, along with a survey of some of the nascent legislative responses.
Three specific court cases—one from South Korea, one from Quebec, and one from North Carolina—center on men who used AI to generate deepfake pornography of children.
In South Korea, the unnamed defendant generated 360 images of children in sexual situations. During the trial, prosecutors argued that the legal definition of sexually exploitative material should extend to depictions of “virtual humans” rather than being limited to images of real children, contending that “high level” technology such as AI can produce imagery realistic enough to resemble actual minors and therefore constitute sexually abusive content. The court’s ruling endorsed this position.
In the Quebec case, Steven Larouche from Sherbrooke was sentenced to over three years in prison for producing synthetic child pornography using deepfake technology. He admitted to creating at least seven videos by superimposing faces onto other bodies. Additionally, Larouche was convicted of possessing a vast collection of child pornography, leading to an extra four and a half years in prison. Provincial court judge Benoit Gagnon noted this as the country's first deepfake-related child exploitation case, expressing concerns about the potential misuse of this technology to manipulate social media images of children.
The North Carolina case involved the child psychiatrist David Tatum. In addition to his other charges, evidence presented at his trial revealed that Tatum used AI to transform clothed images of minors into explicit material, including altering photos from a school dance and a first-day-of-school celebration with a web-based AI application.
Media coverage of these incidents reveals a divergence in the kinds of cases emerging from this technology. The court cases above align with longstanding reporting on child predators and their sentencing, albeit with AI now part of the overall crime profile. Below, two major cases of a different kind demonstrate how girls’ likenesses are being used in deepfake pornography that spreads through local communities, whether towns or schools.
In Almendralejo, Spain, AI was used to create and distribute images of young girls altered to make them appear naked. The images, involving several local girls, were reportedly shared widely, prompting a police probe. Mothers of the girls raised awareness about the issue, concerned that the images could be uploaded to pornographic sites. Regional officials confirmed an ongoing investigation, with some suspects identified. The incident, described as gender-based digital violence, drew widespread condemnation.
Similarly, at Westfield High School in New Jersey, AI-generated pornographic images of students caused significant distress and sparked an investigation. Dorota Mani, whose daughter Francesca was targeted, expressed her concerns about AI’s impact on children, filed a police report, and appeared with her daughter on CNN to discuss the incident. The school’s response may in time be viewed as one of the first major case studies in how to respond to such incidents. Around the same time, Issaquah High School in Washington State was grappling with a similar deepfake problem of its own.
Most of the examples surveyed above involve the misappropriation of real children’s likenesses. The South Korean case, by contrast, is an example of the technology being used to generate synthetic audiovisual media depicting children who do not exist, which presents significant problems of its own.
Recently, an incident came to light when a LinkedIn user reported that an AI app known as PicSo_ai was generating inappropriate content, alarmingly with a focus on “girls.” This was not a standalone case. A search on Instagram surfaced troubling auto-suggestions related to AI-generated images of underage girls, marked as “popular.” The discovery points to a concerning pattern of AI being exploited in a gray zone. According to a recent Wall Street Journal investigation, Instagram’s Reels algorithm recommended sexually suggestive and explicit content involving children and adults to users who followed young influencers, raising significant concerns about the platform’s content moderation and safety, with major brands’ ads appearing alongside such content. Despite Meta’s efforts to implement safety tools, challenges in algorithmic content curation and digital safety on social media persist.
The Wall Street Journal also recently reported that the New Mexico Attorney General filed a lawsuit against Meta, alleging that Facebook and Instagram algorithms steered predators and pornographic content toward minor-themed test accounts. The investigation used AI-generated images of fictional children, and the test accounts received explicit messages and sexual propositions. The lawsuit claims Meta’s platforms have become a marketplace for predators and criticizes the company’s failure to protect underage users, citing several criminal cases of exploitation via these platforms. Notably, it was the investigators who generated the images of fictional minors, adapting new technology to established techniques in this field of law enforcement.
A recent study by David Thiel of the Stanford Internet Observatory details the presence of CSAM in the training data of generative machine learning models, focusing on the LAION-5B dataset used for models such as Stable Diffusion. Using several methods, including PhotoDNA hash matching and machine learning classifiers, Thiel identified numerous known and new instances of CSAM in the dataset. The findings underscore the need for more rigorous dataset curation and model training practices to prevent the perpetuation of harmful content, echoing the concerns raised by the social media incidents above and the importance of responsible AI development on this front.
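To make that curation workflow more concrete, the sketch below shows the general shape of perceptual-hash screening: comparing candidate images against a blocklist of known-bad hashes and escalating near-matches for review. It is a minimal, hypothetical illustration rather than anything from the Stanford study. PhotoDNA itself is proprietary and available only to vetted organizations, so the open-source imagehash library stands in here, and the blocklist value, distance threshold, and flag_for_review helper are placeholders of our own invention.

```python
# Minimal, hypothetical sketch of perceptual-hash screening for dataset curation.
# PhotoDNA is proprietary, so the open-source `imagehash` library stands in here;
# the blocklist entry and threshold below are illustrative placeholders only.
from pathlib import Path

import imagehash
from PIL import Image

# Placeholder blocklist of known-bad perceptual hashes (hex-encoded). In practice
# such hashes come from a vetted clearinghouse, not a value hard-coded locally.
BLOCKLIST = {imagehash.hex_to_hash("ffd8e0c4b2a19080")}
MAX_HAMMING_DISTANCE = 4  # how close a hash must be to count as a near-match


def flag_for_review(image_dir: str) -> list[str]:
    """Return image paths whose perceptual hash is near a blocklisted hash."""
    flagged = []
    for path in Path(image_dir).glob("*.jpg"):
        candidate = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
        # Subtracting two ImageHash objects yields their Hamming distance.
        if any(candidate - bad <= MAX_HAMMING_DISTANCE for bad in BLOCKLIST):
            flagged.append(str(path))  # escalate to human review and reporting
    return flagged
```

In practice, dataset curators combine this kind of hash matching with ML classifiers, and confirmed matches are reported to the appropriate authorities rather than retained or handled directly.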
So far, legal efforts to address CSAM deepfakes have been reactive and piecemeal. President Biden’s executive order on AI, however, aims to establish strict standards to prevent AI misuse, with a focus on national security and individual safety; among other measures, it calls for authenticating digital content and labeling synthetic media, in part to protect children from AI-enabled harm. Developers must share AI safety test results before public release, targeting issues such as CSAM creation. The order also directs the development of standards for content authentication and AI detection and specifically addresses AI-generated CSAM and non-consensual sexualized images. Before the order was issued, U.S. attorneys general had urged Congress to investigate AI’s role in child exploitation, emphasizing the need for comprehensive data privacy legislation.
No overarching federal law yet exists, though bills have been introduced (e.g., H.R. 3230, the DEEP FAKES Accountability Act). Below are four examples of state-level legislation:
- California, AB 602, Section 1708.86: Modifies existing law not only to allow action against those who create and share sexually explicit material without consent, but also to specifically provide for legal action against individuals who distribute such material they did not create, provided they are aware that the depicted person did not consent to its creation. This facet of the bill is important because it extends responsibility to those who propagate non-consensual explicit content, regardless of their role in its original creation.
- Florida, CS/CS/SB 1798: Addresses sexual offenses involving altered images, specifically deepfakes. It criminalizes the promotion of altered sexual depictions without consent, defining these as third-degree felonies. The bill expands the definition of “child pornography” to include digitally modified images depicting minors in sexual conduct. It also increases penalties for sexual cyberharassment and sexual contact with animals, including the possession of child-like sex dolls. Exemptions are made for certain entities, like news media and law enforcement, under specified conditions. The bill emphasizes protecting minors and addressing emerging digital sexual exploitation trends.
- Illinois, House Bill 2123: Redefines “sexual image” in the Civil Remedies for Nonconsensual Dissemination of Private Sexual Images Act to include images showing nudity or sexual conduct, whether real or digitally altered. It allows individuals in these images to take legal action against those who share or threaten to share these images, particularly when there’s no consent. The amendment specifies that disclosing an image’s digital alteration isn’t a defense. It also removes certain liabilities from interactive computer services and clarifies that adding a political message to such images doesn’t make them a matter of public concern. Additionally, courts can grant orders to stop the dissemination of these images.
- Virginia, Code 18.2-386.2: Makes it a Class 1 misdemeanor to maliciously share or sell nude or sexually explicit images of someone without their consent, especially if done to harass, coerce, or intimidate. This includes images that have been digitally altered to depict a person. Internet service providers are not held responsible for the content shared by others. Legal action can be pursued where the unlawful act took place or where the image was handled. Other legal charges may also apply.
At the AI Incident Database (AIID), we have been cataloging and researching these recent CSAM incidents with concern. Like all technology, generative AI poses both risks and opportunities, and the risks to children in this case are serious. If you would like to join us in our mission of documenting AI incidents with the goal of learning from past mistakes to mitigate future risks, you can connect with us through our contact page. We welcome submissions reporting any and all AI incidents via our submission page; however, please be advised that while we actively track and analyze CSAM trends and incidents, the AIID is not the direct reporting destination for actual CSAM. Please direct reports of CSAM itself to the Department of Justice Child Exploitation and Obscenity Section.