Incident 840: AI-Generated Media Reportedly Used in Russian Disinformation Campaign in Moldova
Description: Russian-linked entities allegedly deployed AI-generated images and videos to spread disinformation aimed at swaying Moldova’s referendum on E.U. membership. The AI-enhanced media campaign included fabricated stories and doctored visuals reportedly designed to amplify fear and undermine pro-E.U. sentiment in the days leading up to the referendum.
Editor Notes: Reconstructing some of the timeline of events:
(1) September 18, 2024: Microsoft’s Threat Analysis Center (MTAC) publicly acknowledges ongoing monitoring of Russian disinformation efforts in Moldova, mentioning that it has collaborated with the Moldovan government to defend against influence operations. Russia’s tactics reportedly include AI-generated media, cyberattacks, and inauthentic social media accounts aimed at amplifying pro-Kremlin narratives and undermining Moldova’s pro-E.U. sentiment.
(2) October 1, 2024: A manipulated video surfaced online that is reported to have falsely depicted Dumitru Alaiba, Moldova's Minister of Economic Development and Digitalization, in compromising situations. Alaiba denounced the video as a "poor quality fake" and filed a police complaint to address the disinformation. (See Incident 841.)
(3) October 7, 2024: Fake Ministry of Culture letters appeared, circulated by dubious social media accounts, falsely claiming Moldova would host an “LGBT festival” under E.U. influence.
(4) October 17, 2024: Moldovan officials, with support from social media companies, reportedly began taking action against disinformation, with Facebook removing numerous fake accounts, groups, and pages.
(5) October 17, 2024: U.S. Senator Benjamin Cardin urged Meta and Alphabet executives to better enforce anti-disinformation measures in Moldova.
(6) October 20, 2024: Telegram suspended Ilan Shor’s “Stop EU” channel for violating local laws.
(7) Around October 22, 2024: With the referendum nearing, Moldovan police continued monitoring and dismantling disinformation efforts while also citing Russian influence in a last-minute push against pro-E.U. messaging.
Alleged: Unknown AI developers developed an AI system deployed by Russia-backed influencers, Maria Zakharova, Ilan Shor, and Government of Russia, which harmed Pro-EU Moldovans, Moldovan general public, Maia Sandu, Government of Moldova, Electoral integrity, Democracy, and Dumitru Alaiba.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional