AI Incident Database

Incident 812: Deepfake Nudes Targeting Underage Female Students at Collège Béliveau in Winnipeg Shared Online

Description: At Collège Béliveau in Winnipeg, female students in grades 7-12 were targeted with AI-generated deepfake nudes, which were then distributed online. The specific numbers and identities of victims and perpetrators were not released, and no charges were ultimately filed because existing laws did not clearly cover the nature of the incident.
Editor Notes: Reconstructing the timeline of events: Parents were notified of the incident on December 11, 2023. On February 14, 2024, police said no criminal charges would be laid in connection to the incident.


Entities

Alleged: Unknown deepfake technology developers developed an AI system deployed by Unknown deepfake creators, which harmed Collège Béliveau students.

Incident Stats

Incident ID: 812
Report Count: 5
Incident Date: 2023-12-11
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

4.3. Fraud, scams, and targeted manipulation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

(4) Malicious Actors & Misuse

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports


Police investigate ‘explicitly altered’ images of Winnipeg high school students
winnipegfreepress.com · 2023

City police are investigating reports of AI-generated nude photos of underage students circulating at a local high school.

Members of the Winnipeg Police Service attended Collège Béliveau, a Grade 7-12 high school in Windsor Park, on Wednes…

AI-generated fake nude photos of girls from Winnipeg school posted online
cbc.ca · 2023

Collège Béliveau is dealing with the dark side of artificial intelligence after AI-generated nude photos of underage students were discovered being circulated at the Winnipeg school.

An email sent to parents Thursday afternoon said school o…

Canadians have very limited options if fake explicit photos end up on social media
nationalpost.com · 2024

Last December, Collège Béliveau, a school in Winnipeg, sent notices to parents that it was investigating the online release of explicit images of female students. The original pictures had apparently been lifted from social media accounts an…

No criminal charges laid after AI-generated fake nudes of girls from Winnipeg school posted online
cbc.ca · 2024

Police say no charges have been laid after an investigation into AI-generated nude photos of underage girls that circulated at a Winnipeg school late last year.

The doctored photos of female students at Collège Béliveau, a Grade 7-12 French…

No charges laid over explicit, AI-generated photos of Winnipeg students | CTV News
winnipeg.ctvnews.ca · 2024

No charges will be laid after explicit, AI-generated photos of Winnipeg high school students were circulated online.

The Winnipeg Police Service (WPS) wouldn’t give full details on its decision to not lay charges. However, police said, gene…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

Teenager Makes Deepfake Pornography of 50 Girls at Bacchus Marsh Grammar School in Australia (Jun 2024 · 5 reports)

By textual similarity

Alleged Issues with Proctorio's Remote-Testing AI Prompted Suspension by University (Jan 2020 · 6 reports)

False Negatives for Water Quality-Associated Beach Closures (Jun 2022 · 3 reports)

Uber AV Killed Pedestrian in Arizona (Mar 2018 · 25 reports)

