Incident 711: NHTSA Opens New Probe into Tesla’s Autopilot Following More than a Dozen Fatal Accidents

Description: The NHTSA has linked Tesla's Autopilot to over a dozen fatalities and hundreds of crashes, prompting a new investigation into the adequacy of Tesla's December recall of 2 million vehicles. According to the probe, Tesla's driver-assist system contributed to avoidable crashes involving visible hazards, pointing to a critical safety gap between driver expectations and the system's capabilities. The investigation will assess whether Tesla's recall remedies were sufficient to address these safety risks.


Entities

View all entities
Alleged: Tesla developed an AI system deployed by Tesla and Tesla drivers, which harmed Tesla drivers, Drivers, and General public.

Incident Stats

Incident ID: 711
Report Count: 2
Incident Date: 2024-04-26
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

7. AI system safety, failures & limitations

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional
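
For readers working with the downloadable database, a minimal sketch of how this incident's MIT taxonomy labels might be held in a record. The field names (incident_id, risk_domain, and so on) are illustrative assumptions, not the AIID's actual schema; the values are taken from the classification above.

    from dataclasses import dataclass

    @dataclass
    class MITClassification:
        """Hypothetical record for one incident's MIT taxonomy labels.

        Field names are illustrative; they do not mirror the AIID schema.
        """
        incident_id: int
        risk_domain: str     # one of the seven AI risk domains
        risk_subdomain: str  # one of the 23 subdomains
        entity: str          # entity presented as the main cause of the risk
        timing: str          # stage in the AI lifecycle
        intent: str          # expected vs. unexpected outcome

    incident_711 = MITClassification(
        incident_id=711,
        risk_domain="7. AI system safety, failures & limitations",
        risk_subdomain="7.3. Lack of capability or robustness",
        entity="AI",
        timing="Post-deployment",
        intent="Unintentional",
    )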

Incident Reports

U.S. Regulators Tie Tesla’s Autopilot to More than a Dozen Fatalities, Hundreds of Crashes
wsj.com · 2024

Federal auto-safety regulators have opened an investigation into the adequacy of Tesla's December recall of 2 million vehicles equipped with Autopilot software, tying the technology to at least 14 fatalities, several dozen injuries and hund…

Tesla Autopilot feature was involved in 13 fatal crashes, US regulator says
theguardian.com · 2024

US auto-safety regulators said on Friday that their investigation into Tesla's Autopilot had identified at least 13 fatal crashes in which the feature had been involved. The investigation also found the electric carmaker's claims did not ma…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

A Collection of Tesla Autopilot-Involved Crashes
Jun 2016 · 22 reports

Uber AV Killed Pedestrian in Arizona
Mar 2018 · 25 reports

Google admits its self driving car got it wrong: Bus crash was caused by software
Sep 2016 · 28 reports
