Incidents involved as Developer and Deployer
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot reportedly exhibited numerous bias issues upon release, producing problematic outputs such as racial inaccuracies and political biases, including on Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting Google to temporarily halt the feature and issue an apology.
Incident 45 · 29 Reports
Defamation via AutoComplete
2011-04-05
Google's autocomplete feature, alongside its image search results, reportedly resulted in the defamation of people and businesses.
Incident 71 · 28 Reports
Google admits its self driving car got it wrong: Bus crash was caused by software
2016-09-26
On February 14, 2016, a Google autonomous test vehicle was partially responsible for a low-speed collision with a bus on El Camino Real in Google's hometown of Mountain View, CA.
Incident 19 · 27 Reports
Sexist and Racist Google Adsense Advertisements
2013-01-23
Advertisements selected by Google AdSense reportedly produced sexist and racist results.
Affected by Incidents
Incident 467 · 14 Reports
Google's Bard Shared Factually Inaccurate Info in Promo Video
2023-02-07
Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which telescope first took pictures of a planet outside the Earth's solar system, reportedly causing Google's share price to temporarily plummet.
Incident 567 · 1 Report
Deepfake Voice Exploit Compromises Retool's Cloud Services
2023-08-27
In August 2023, a hacker reportedly breached Retool, an IT company specializing in business software solutions, affecting 27 of its cloud customers. The attacker appears to have initiated the breach by sending phishing SMS messages to employees and later used an AI-generated deepfake voice in a phone call to obtain multi-factor authentication codes. The breach seems to have exposed a vulnerability in the cloud-syncing function of Google's Authenticator app, which further enabled unauthorized access to internal systems.
Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks
2025-02-28
A dataset used to train large language models allegedly contained 12,000 API keys and authentication credentials, some of which were reportedly still active and allowed unauthorized access. Truffle Security found these secrets in a December 2024 Common Crawl archive spanning 250 billion web pages. The affected credentials could have been exploited for unauthorized data access, service disruptions, financial fraud, and a variety of other malicious uses.
Incident 791 · 1 Report
Google AI Error Prompts Parents to Use Fecal Matter in Child Training Exercise
2024-09-09
Google's AI Overviews feature mistakenly advised parents to use human feces in a potty-training exercise, misinterpreting a method that uses shaving cream or peanut butter as a substitute. The incident is another example of an AI system failing to grasp contextual nuance, which can lead to potentially harmful, and in this case unsanitary, recommendations. Google has acknowledged the error.
Incidents involved as Developer
Incident 968 · 24 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 623 · 12 Reports
Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case
2023-12-12
Michael Cohen, former lawyer for Donald Trump, claims to have used Google Bard, an AI chatbot, to generate legal case citations. These false citations were unknowingly included in a court motion by Cohen's attorney, David M. Schwartz. The AI's misuse highlights emerging risks in legal technology, as AI-generated content increasingly infiltrates professional domains.
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
2006-02-25
Automated content moderation tools used to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.
Incident 934 · 2 Reports
NHK Terminates AI Translation Service After Geopolitical Naming Error
2025-02-10
NHK announced the termination of its AI-powered multilingual subtitle service after an automatic translation error rendered "Senkaku Islands" as "Diaoyu Islands," the Chinese designation. The mistake, discovered during a news segment on February 10, 2025, led NHK to deem the service inappropriate. The AI-based subtitles, powered by Google Translate, had been in use since 2020.
Incidents implicated systems
Incident 839 · 20 Reports
AI-Driven Phishing Scam Uses Spoofed Google Call to Attempt Gmail Breach of Security Expert
2024-10-07
Scammers used an AI-generated voice to impersonate a Google representative in an attempt to steal Gmail account credentials from security expert Sam Mitrovic. The AI-driven phishing call used a spoofed Google phone number and a fabricated email, making the scam appear legitimate. Mitrovic noted that the caller’s professional demeanor, coupled with AI-generated speech and a Google-related number, could easily deceive unsuspecting users.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Delphi Technologies
Incidents involved as Developer and Deployer
Affected by Incidents
Microsoft
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
Affected by Incidents
Incidents involved as Developer
Amazon
Incidents involved as Developer and Deployer
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
- Incident 967 · 1 Report
Amazon and Google AI Allegedly Promote Mein Kampf as ‘a True Work of Art’ in Search Results
Incidents involved as Developer
OpenAI
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 367 · 1 Report
iGPT, SimCLR Learned Biased Associations from Internet Training Data
Incidents involved as Developer
Meta
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
Incidents involved as Developer
Incidents involved as Deployer
Gemini
Incidents involved as Deployer
- Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 845 · 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
Incidents implicated systems
You.com
Incidents involved as Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as Developer and Deployer
Incidents involved as Developer
Perplexity
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Mistral
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Incidents involved as Developer
Inflection
Incidents involved as Developer and Deployer
Incidents involved as Developer
Anthropic
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Incidents involved as Developer
YouTube
Incidents involved as Developer and Deployer
- Incident 873 · 1 Report
YouTube Algorithms Allegedly Amplify Eating Disorder Content to Adolescent Girls