AI Incident Database

Incident 292: Apple’s AVs Reportedly Struggled to Navigate Streets in Silicon Valley Test Drives

Description: Apple’s autonomous cars were reported to have bumped into curbs and struggled to stay in their lanes after crossing intersections during on-road test drives near the company’s Silicon Valley headquarters.

Entities

Alleged: Apple developed and deployed an AI system, which harmed Silicon Valley traffic participants and Silicon Valley residents.

Incident Stats

Incident ID: 292
Report Count: 3
Incident Date: 2021-09-01
Editors: Khoa Lam
Applied Taxonomies: GMF, MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

7. AI system safety, failures & limitations

Entity

Which, if any, entity is presented as the main cause of the risk

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring

Pre-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal

Unintentional

Incident Reports

Apple’s self-driving cars ‘smacked into curbs, veered out of lanes’
nypost.com · 2022

Apple’s self-driving cars had trouble navigating streets, frequently bumped into curbs and veered out of lanes in the middle of intersections during test drives near the company’s Silicon Valley headquarters, according to a report.

Apple ha…

Inside Apple’s Eight-Year Struggle to Build a Self-Driving Car
theinformation.com · 2022

Last August, Apple sent several of its prototype self-driving cars on a roughly 40-mile trek through Montana. Aerial drones filmed the drive, from Bozeman to the ski resort town of Big Sky, so that Apple managers could produce a polished fi…

The Apple Car's First Test Drive Was A Total Disaster
slashgear.com · 2022

Several of the automotive and technology industry's biggest names are racing to get the first self-driving car on the road, so it's no surprise Apple, the world's most valuable technology company, has thrown its hat into the ring. Although …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Uber Autonomous Cars Running Red Lights
Aug 2014 · 10 reports

Google admits its self driving car got it wrong: Bus crash was caused by software
Sep 2016 · 28 reports

A Collection of Tesla Autopilot-Involved Crashes
Jun 2016 · 22 reports
