Fraudsters
Incidents involved as Deployer
Incident 834 · 2 Reports
China Targets AI-Driven Fraud and Deepfake Scandals with New Crackdowns
2024-07-04
Chinese law enforcement has targeted a rise in AI-driven crimes, including deepfake and voice-synthesis technology used for fraud, identity theft, and unauthorized use of personality rights. "AI undressing" scams, fake relationships conducted with synthesized voices, and game-hacking software make up many of these cases. In response, authorities have prosecuted multiple cases and introduced stricter regulations to curb AI misuse.
Incident 721 · 1 Report
Fake AI-Generated Students Are Reportedly Enrolling in Online College Classes
2024-06-04
Reportedly, an adjunct professor at an unspecified community college suspects that some students in his online art history and art appreciation courses are AI-powered spambots. These "students" allegedly submit peculiar assignments, such as analyses of non-existent artworks and descriptions of sculptures using painting terminology. Additionally, their engagement with the college portal is minimal. The professor believes the spambot students aim to fraudulently obtain financial aid by remaining enrolled in courses.
Incident 864 · 1 Report
Generative AI Allegedly Used to Facilitate $255,000 Real Estate Fraud Scheme
2024-08-23
A real estate scam reportedly used AI-generated phishing emails to impersonate a title company lawyer, tricking homebuyer Raegan Bartlo into wiring $255,000 to a fraudulent account. The emails were allegedly convincing, with no grammatical errors or issues of tone. Bartlo recovered part of the funds but ultimately lost $112,000.
Incident 877 · 1 Report
HTML/Nomani Deepfake Phishing Campaigns Allegedly Use AI-Generated Content to Defraud Social Media Users
2024-12-16
AI-generated deepfakes were reportedly used in the "HTML/Nomani" phishing campaign to mimic legitimate platforms, such as booking services, and lure victims into investment scams. The scams allegedly leveraged realistic fake content to deceive users on social media for financial fraud. The campaign was part of a broader rise in the misuse of AI in cybercrime during the second half of 2024.