Description: An 18-year-old Argentine student at the Manuel Belgrano pre-university school in Córdoba allegedly used AI tools to generate explicit fake images of at least 22 female classmates by combining their faces with other bodies. The images, posted on pornography websites, included the victims' names, leading to harassment and significant psychological harm. Legal authorities charged the student with serious injuries aggravated by gender violence.
Editor Notes: Reconstructing the timeline of events:
(1) Mid-2024: Two female students at the Manuel Belgrano Institute began receiving suspicious Instagram messages from men aged 40–50.
(2) July 2024: Victims discovered explicit AI-generated images of themselves on pornographic websites, featuring their names and altered appearances.
(3) August 2024: Families of victims reported the student to authorities, who launched an investigation and conducted a search of the accused's home.
(4) October 18, 2024: Charges were filed against the accused for gender-based violence and psychological harm; legal experts advocated for updated laws addressing AI misuse.
This fourth date is being marked as the incident date, even though the initial events are reported to have begun sometime in the middle of 2024.
Alleged: Unknown deepfake technology creators developed an AI system deployed by an unnamed 18-year-old male Manuel Belgrano student, which harmed unnamed female Manuel Belgrano students.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional