Summary: At Beverly Vista Middle School in Beverly Hills, California, students allegedly used AI to generate fake nude photos with their classmates' faces, prompting investigations by school officials and the police. The incident highlights the increasing misuse of generative AI among minors.
Alleged: An unknown deepfake creator developed and deployed an AI system, which harmed unnamed middle school students.
Incident Status
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
4.3. Fraud, scams, and targeted manipulation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline

Students at a middle school in Beverly Hills, California, used artificial intelligence technology to create fake nude photos of their classmates, according to school administrators. Now, the community is grappling with the fal…

On Tuesday, Kat Tenbarge and Liz Kreutz of NBC News reported that several middle schoolers in Beverly Hills, Calif., were caught making and distributing fake naked photos of their peers: "School officials at Beverly Vista Midd…