Summary: AI tools linked to China were used to disseminate disinformation targeting voters in the U.S. and Taiwan, according to a Microsoft report. The operations, attributed to the APT Storm-1376 (also known as Spamouflage and Dragonbridge), included AI-generated imagery and audio intended to influence political perceptions and election outcomes.
Alleged: AI systems developed and deployed by Storm-1376, Spamouflage, Dragonbridge, and the Chinese Communist Party harmed U.S. voters, Taiwanese voters, the general public, election integrity, and democracy.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline
SAN FRANCISCO — Online actors linked to the Chinese government are increasingly leveraging artificial intelligence to target voters in the U.S., Taiwan and elsewhere with disinformation, according to new cybersecurity research…