Description: At the end of August 2024, South Korean authorities began investigating a significant surge in the creation and dissemination, often via Telegram, of explicit, non-consensual deepfake pornography built from stolen social media content of female classmates, teachers, and neighbors.
Editor Notes: In one report, seven suspects were arrested, six of whom were teenagers. Another report mentions a graduate of Seoul National University in his 40s. One Telegram channel dedicated to the deepfakes was reported to have 220,000 members. One report indicates that between January 1 and August 25, 781 deepfake victims sought assistance from the state agency handling digital sex crimes; 288 of those victims, or approximately 37%, were minors. This incident ID catalogues the reporting on this pronounced flurry of deepfake pornography incidents in South Korea at the end of summer 2024.
Alleged: Unnamed deepfake technology developers developed an AI system deployed by Unnamed deepfake creators, which harmed South Korean women.
Incident Status
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional