General public
Incidents Harmed By
Incident 616 · 43 Reports
Sports Illustrated Is Alleged to Have Used AI to Invent Fake Authors and Their Articles
2023-11-27
Sports Illustrated, managed by The Arena Group, allegedly used AI-generated authors and content, compromising journalistic integrity. Profiles of these fictitious authors, complete with AI-generated headshots, appeared alongside articles, misleading readers. The issue was exposed when inconsistencies in author identities and writing quality were noticed, leading to the removal of this content from the publication's website.
Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Utilizing AI tools like OpenAI's ChatGPT and DALL-E 3, Dougan created over 160 fake news sites, disseminating false narratives to millions worldwide. His actions align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot drew numerous bias reports upon release, producing a range of problematic outputs including racial inaccuracies and political biases, notably regarding Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt and an apology from Google.
Incident 632 · 31 Reports
Significant Increase in Deepfake Nudes of Taylor Swift Circulating on Social Media
2024-01-24
AI-generated sexually explicit images of Taylor Swift circulated on X, garnering over 45 million views before removal. Originating from a Telegram group, these deepfakes exposed gaps in content moderation, violating X's policies against synthetic media and nonconsensual nudity.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
unknown
Incidents involved as both Developer and Deployer
- Incident 606 · 15 Reports
Deepfaked Advertisements Using the Likenesses of Celebrities Such as Tom Hanks and Gayle King Without Their Consent
- Incident 676 · 2 Reports
Deepfake Audio Falsely Depicts Philippines President Ferdinand Marcos Jr. Ordering Military Action
Incidents involved as Developer
TikTok
Incidents involved as both Developer and Deployer
Incidents Harmed By
Incidents involved as Developer
Incidents involved as Deployer
Meta
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 686 · 2 Reports
Meta AI Image Generator Reportedly Fails to Accurately Represent Interracial Relationships
Incidents involved as Developer
- Incident 968 · 24 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
- Incident 578 · 1 Report
Alleged Exploitation of Meta's Open-Source LLaMA Model for NSFW and Violent Content
Incidents involved as Deployer
Incidents involved as both Developer and Deployer
- Incident 788 · 1 Report
Instagram's Algorithm Reportedly Recommended Sexual Content to Teenagers' Accounts
- Incident 583 · 1 Report
Instagram Algorithms Allegedly Promote Accounts Facilitating Child Sex Abuse Content
Incidents involved as Deployer
Incidents involved as both Developer and Deployer
- Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 693 · 7 Reports
Google AI Reportedly Delivering Confidently Incorrect and Harmful Information
Incidents involved as Developer
ChatGPT
Incidents involved as Developer
- Incident 680 · 3 Reports
Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
- Incident 609 · 2 Reports
Flawed AI in Google Search Reportedly Misinforms about Geography
Incidents involved as Deployer
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
- Incident 678 · 1 Report
ChatGPT Factual Errors Lead to Filing of Complaint of GDPR Privacy Violation
Incidents implicated systems
Microsoft
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 621 · 3 Reports
Microsoft AI Is Alleged to Have Generated Violent Imagery of Minorities and Public Figures
Incidents involved as Developer
Donald Trump
Incidents Harmed By
- Incident 621 · 3 Reports
Microsoft AI Is Alleged to Have Generated Violent Imagery of Minorities and Public Figures
- Incident 742 · 1 Report
Grok AI Model Reportedly Fails to Produce Reliable News in Wake of Trump Assassination Attempt
Incidents involved as Deployer
OpenAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
Incidents Harmed By
Incidents involved as Developer
Unknown voice cloning technology
Incidents involved as Developer
Incidents implicated systems
Unknown deepfake creators
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Incidents involved as Deployer
xAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 763 · 1 Report
Grok AI Chatbot Reportedly Spreads Unfounded Rumors About Trump’s Dentures
Incidents involved as Developer
Perplexity
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News