Incident 674: Manipulated Media via AI Disinformation and Deepfakes in 2024 Elections Erode Trust Across More Than 50 Countries
Description: AI-driven election disinformation is escalating globally, as easy-to-use generative AI tools enable the creation of convincing deepfakes that mislead voters. This shift has made it simple for individuals to generate fake content, eroding public trust in elections and manipulating voter perceptions. Such incidents have been documented in, for example, the U.S., Moldova, Slovakia, Bangladesh, and Taiwan.
Editor Notes: This incident ID is for collective incident reports that detail and survey worldwide AI disinformation campaigns, rather than individual national, state, or local incidents. Where possible, related incident IDs should be marked as similar incidents so that this ID can be connected to the others.
Entities
View all entities. Alleged: Unknown deepfake creators, OpenAI, and Google developed an AI system deployed by Russian government, Political operatives, Political consultants, and Chinese Communist Party, which harmed Voters, Public trust, Political figures, General public, Electoral integrity, Democracy, and Civic society.
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
4. Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
LONDON (AP) --- Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake -- but convincing -- content aimed …
Variants
A "Variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants need not have been reported outside the AIID. Learn more from the research paper.
Have you seen something similar?
Similar Incidents
Selected by our editors
Manipulated Deepfake Video of Lai Ching-te Endorsing Rivals in Lead-up to January Presidential Elections
· 1 report

Deepfake of Long-Deceased Suharto Circulating in Run-up to February 2024 Indonesian Elections
· 1 report