AI Incident Database

Incident 913: Yahoo Boys Allegedly Using AI-Generated News Videos to Blackmail Sextortion Victims

Description: Scammers, allegedly linked to the Yahoo Boys, are using AI-generated news videos to blackmail victims in sextortion schemes. The videos impersonate news organizations, featuring fabricated reports that accuse victims of crimes, including explicit content distribution. Tutorials for creating these clips are reportedly shared on Telegram, with scammers leveraging the fake broadcasts to pressure victims into paying.
Editor Notes: The Yahoo Boys are reportedly not so much a centralized group as a loosely connected network of individuals and small clusters engaged in cybercrime schemes. French-language reporting usually refers to them as "brouteurs," while English-language reporting often identifies them as "Yahoo Boys." The Yahoo Boys have reportedly been experimenting with deepfake technology since sometime in 2021 or 2022, with these blackmail incidents reportedly beginning in earnest in 2023 and picking up in 2024. The incident date of 01/27/2025 is based on the publication of a WIRED report on the specific vector of impersonating news organizations in the sextortion schemes. Refer as well to the Network Contagion Research Institute's report of 01/30/2024, "A Digital Pandemic: Uncovering the Role of ‘Yahoo Boys’ in the Surge of Social Media-Enabled Financial Sextortion Targeting Minors," available at https://networkcontagion.us/reports/yahoo-boys/. See Incident 911 for more general reporting on Yahoo Boys using deepfakes in romance scams, and Incident 901 for a specific example in which three Yahoo Boys allegedly defrauded a French woman of $850,000 by posing as Brad Pitt. Refer to Incident 551 for more on the FBI's reported surge of AI-related sextortion cases.


Entities

Alleged: Unknown deepfake technology developers developed an AI system deployed by Yahoo Boys, Scammers from West Africa, Scammers from Nigeria, Scammers from Ghana, and Brouteurs, which harmed Unnamed victims in sextortion schemes, Teenagers targeted in sextortion scams, News organizations impersonated by scammers, and CNN.
Alleged implicated AI system: Unknown deepfake apps

Incident Stats

Incident ID
913
Report Count
1
Incident Date
2025-01-27

Incident Reports

Reports Timeline

wired.com · 2025

When online romance and sextortion scammers sense they've found a victim who may send them money, they'll use all kinds of villainous methods to get paid. They'll frequently stoop to blackmail---and are constantly creating more devious appr…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity


Deepfake Obama Introduction of Deepfakes
Jul 2017 · 29 reports

Content Using Bestiality Thumbnails Allegedly Evaded YouTube’s Thumbnail Monitoring System
Apr 2018 · 2 reports

Google’s YouTube Kids App Presents Inappropriate Content
May 2015 · 14 reports
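The incidents above are surfaced "by textual similarity." As an illustrative sketch only (this is not the database's actual implementation, and all function names here are hypothetical), a simple approach is to compare TF-IDF vectors of incident descriptions with cosine similarity, so that incidents sharing distinctive vocabulary such as "deepfake" rank as closer matches:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split into alphabetic word tokens.
    return re.findall(r"[a-z]+", text.lower())

def tf_idf_vectors(docs):
    """Return one sparse {term: weight} vector per document."""
    tfs = [Counter(tokenize(d)) for d in docs]
    # Document frequency: how many docs contain each term.
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())
    n = len(docs)
    # Weight = term frequency * inverse document frequency.
    return [{t: c * math.log(n / df[t]) for t, c in tf.items()} for tf in tfs]

def cosine(a, b):
    # Cosine similarity between two sparse vectors; 0.0 if either is empty.
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

descriptions = [
    "deepfake news video blackmail scam",   # this incident (hypothetical short text)
    "deepfake romance scam video",          # a related incident
    "youtube kids app content",             # an unrelated incident
]
vecs = tf_idf_vectors(descriptions)
# The deepfake incidents score closer to each other than to the unrelated one.
```

Because IDF down-weights terms that appear in every description, the ranking is driven by distinctive terms rather than common filler words; production systems typically use embeddings or more elaborate weighting, but the ranking principle is the same.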