AI Incident Database

Incident 187: YouTuber Tested Tesla on Self Driving Mode, Colliding with Street Pylons

Description: A YouTuber who was a Tesla employee conducted an on-road review of Tesla's Full Self Driving (FSD) Beta, showing its navigation in various road environments in San Jose and a collision with bollards while on Autopilot, allegedly causing his dismissal from the company.


Entities

Alleged: Tesla developed an AI system deployed by AI Addict, which harmed John Bernal and the San Jose public.

Incident Stats

Incident ID: 187
Report Count: 3
Incident Date: 2022-02-04
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: GMF, MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

6.5. Governance failure

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Socioeconomic & Environmental Harms

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

Tesla Full Self Driving Crash
youtu.be · 2022

Hey YouTube, AI Addict here, welcome to another FSD Beta video. Today we are on 10.10 in downtown San Jose for another downtown stress test. I'm so sorry you guys haven't seen us here in a while; it's been maybe a month since we made a video …

Tesla fired an employee after he posted driverless tech reviews on YouTube
cnbc.com · 2022

Tesla has fired a former Autopilot employee named John Bernal after he shared candid video reviews on his YouTube channel, AI Addict, showing how the company’s Full Self Driving Beta system worked in different locations around Silicon Valle…

Tesla fired employee who reviewed its driver assist features on YouTube
theverge.com · 2022

John Bernal shared clips of close calls and crashes on his channel, AI Addict

Tesla has a complicated relationship with customers who pay to test the beta version of its “Full Self Driving” software. Often, these people are diehard fans, ke…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

2023 - AI Incident Database