Twitter Users
Affected by Incidents
Incident 6 · 28 Reports
TayBot
2016-03-24
Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours after generating multiple racist, sexist, and antisemitic tweets.
Incident 543 · 17 Reports
Deepfake of Explosion Near US Military Administration Building Reportedly Causes Stock Dip
2023-05-22
An apparent deepfake image, posted to Twitter by a fake Bloomberg news account, depicted an explosion near the Pentagon office complex outside Washington, D.C.
Incident 499 · 11 Reports
Parody AI Images of Donald Trump Being Arrested Reposted as Misinformation
2023-03-21
AI-generated photorealistic images depicting Donald Trump being detained by police, originally posted on Twitter as parody, were reshared across social media platforms as factual news without their intended context.
Incident 486 · 5 Reports
AI Video-Making Tool Abused to Deploy Pro-China News on Social Media
2022-12-01
Synthesia's AI-generated video-making tool was reportedly used by Spamouflage to disseminate pro-China propaganda news on social media using videos featuring highly realistic fictitious news anchors.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
unknown
Incidents involved as Developer and Deployer
- Incident 543 · 17 Reports
Deepfake of Explosion Near US Military Administration Building Reportedly Causes Stock Dip
- Incident 243 · 2 Reports
Bots Allegedly Made up Roughly Half of Twitter Accounts in Discussions Surrounding COVID-19 Related Issues
Incidents involved as Developer
- Incident 103 · 5 Reports
Twitter’s Image Cropping Tool Allegedly Showed Gender and Racial Bias
- Incident 296 · 3 Reports
Twitter Recommender System Amplified Right-Leaning Tweets