Incident 650: AI-Generated Images of Trump with Black Voters Spread as Disinformation Before U.S. Primary Elections

Description: In the run-up to the U.S. primary elections, supporters of Donald Trump shared AI-generated images showing him with Black voters in an apparent attempt to sway African-American voters. These deepfakes, some of which carried telltale generation artifacts such as distorted hands, were initially created by satirical accounts but were later misappropriated for political disinformation, misleading millions on social media platforms.

Entities

Alleged: Various social media accounts and Trump supporters developed and deployed an AI system, which harmed Public discourse integrity, General public, Democracy, African-American voters, and Black voters.

Incident Stats

Incident ID: 650
Report Count: 1
Incident Date: 2024-03-04
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
4.1. Disinformation, surveillance, and influence at scale

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
Malicious Actors & Misuse

Entity
Which, if any, entity is presented as the main cause of the risk.
Human

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Intentional
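
Taken together, these five fields form one structured classification record attached to the incident. A minimal sketch of such a record in Python follows; the TaxonomyClassification class and its field names are illustrative assumptions based on the labels above, not AIID's actual schema.

from dataclasses import dataclass

@dataclass
class TaxonomyClassification:
    # Hypothetical container mirroring the labels shown above;
    # not AIID's actual database schema.
    incident_id: int
    risk_domain: str       # one of the seven top-level AI risk domains
    risk_subdomain: str    # one of the 23 finer-grained subdomains
    entity: str            # main presented cause of the risk, e.g. "Human"
    timing: str            # AI lifecycle stage, e.g. "Post-deployment"
    intent: str            # expected vs. unexpected outcome, e.g. "Intentional"
    machine_classified: bool = True

# The values recorded for this incident, as listed above.
incident_650 = TaxonomyClassification(
    incident_id=650,
    risk_domain="Malicious Actors & Misuse",
    risk_subdomain="4.1. Disinformation, surveillance, and influence at scale",
    entity="Human",
    timing="Post-deployment",
    intent="Intentional",
)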

Incident Reports

Reports Timeline

AI images of Donald Trump with black voters spread before election
thetimes.co.uk · 2024

Donald Trump supporters have been sharing AI-generated images of the Republican frontrunner posing with black voters.

In an apparent effort to encourage African-Americans to vote for Trump, dozens of deepfakes have been circulated portrayin…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

AI-Generated Fake News Targets Black Celebrities on YouTube
Jan 2024 · 1 report

GOP Pollster Shares AI-Generated Images to Fabricate Appearance of Black Voter Support
Feb 2024 · 1 report

By textual similarity (a sketch of one plausible matching approach follows this list):

Deepfake Obama Introduction of Deepfakes
Jul 2017 · 29 reports

Biased Google Image Results
Mar 2016 · 18 reports

Facial Recognition Trial Performed Poorly at Notting Hill Carnival
Aug 2017 · 4 reports
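
The textual-similarity matches above are produced automatically. As a minimal sketch of how such matching could work, assuming a plain bag-of-words cosine similarity (the database's actual similarity pipeline is not specified on this page, and the candidate descriptions below are placeholder text):

import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    # Lowercased word counts serve as a crude document vector.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

query = vectorize("AI-generated images of Trump with Black voters spread as disinformation")
candidates = {
    "Deepfake Obama Introduction of Deepfakes":
        "Deepfake video technology used to make Obama appear to say things he never said",
    "Biased Google Image Results":
        "Google image search surfaced biased results for certain queries",
}

# Rank candidate incidents by similarity to the query incident's text.
for title in sorted(candidates, key=lambda t: cosine(query, vectorize(candidates[t])), reverse=True):
    print(title)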

