AI Incident Database

Incident 5: Collection of Robotic Surgery Malfunctions

Description: A study of FDA database reports of robotic surgery malfunctions (8,061), including those resulting in injury (1,391) and death (144), between 2000 and 2013.

Entities

Alleged: Intuitive Surgical developed an AI system deployed by Hospitals and Doctors, which harmed patients.

Incident Stats

Incident ID
5
Report Count
12
Incident Date
2015-07-13
Editors
Sean McGregor
Applied Taxonomies
CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

5

AI Tangible Harm Level Notes

Notes about the AI tangible harm level assessment.

No evidence that the robots used AI; they are guided by surgeons to make precise cuts.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It does not assess whether an intangible harm occurred in general; it asks only whether a special interest intangible harm occurred.

no

Notes (AI special interest intangible harm)

If for 5.5 you select unclear or leave it blank, please provide a brief description of why. You can also add notes if you want to provide justification for a level.

Only tangible harm, not intangible harm, was reported.

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the year, estimate; otherwise, leave blank. Enter in the format YYYY.

2000

Estimated Date

“Yes” if the date was estimated; “No” otherwise.

Yes

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e., the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e., the system operated unsafely because of features or changes in its environment or in the inputs the system received; "Assurance," i.e., the system could not be adequately monitored or controlled during operation.

Robustness, Assurance

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Other: Medical system

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system's action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Low

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Expert

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

Surgeon's directions, medical procedures
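For readers who want to work with these classifications programmatically, here is a minimal sketch of how this incident's CSETv0 record could be represented. The field names and values are copied from the entries above, but the dataclass and enum themselves are illustrative assumptions, not an official AIID or CSET data structure.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class LevelOfAutonomy(Enum):
    """CSETv0 autonomy levels, per the definitions above."""
    HIGH = "High"      # no human involved in action execution
    MEDIUM = "Medium"  # system decides, a human oversees the resulting action
    LOW = "Low"        # system gives decision support, a human decides and acts


@dataclass
class CSETv0Classification:
    """Illustrative container for the CSETv0 fields above (not an official schema)."""
    problem_nature: List[str]
    physical_system: str
    level_of_autonomy: LevelOfAutonomy
    nature_of_end_user: str
    public_sector_deployment: bool
    data_inputs: List[str]


# Incident 5 as classified in this section.
incident_5_csetv0 = CSETv0Classification(
    problem_nature=["Robustness", "Assurance"],
    physical_system="Other: Medical system",
    level_of_autonomy=LevelOfAutonomy.LOW,
    nature_of_end_user="Expert",
    public_sector_deployment=False,
    data_inputs=["Surgeon's directions", "medical procedures"],
)
```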

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

7. AI system safety, failures, and limitations

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional
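Because the Risk Domain field enumerates exactly seven domains, it maps naturally onto a small enum. The sketch below is illustrative only: the domain names come from the description above, and the record for Incident 5 simply restates the machine-classified values in this section; none of this is code published by the MIT taxonomy or the AIID.

```python
from enum import IntEnum


class MITRiskDomain(IntEnum):
    """The seven domains of the MIT Domain Taxonomy of AI Risks, as described above."""
    DISCRIMINATION_AND_TOXICITY = 1
    PRIVACY_AND_SECURITY = 2
    MISINFORMATION = 3
    MALICIOUS_ACTORS_AND_MISUSE = 4
    HUMAN_COMPUTER_INTERACTION = 5
    SOCIOECONOMIC_AND_ENVIRONMENTAL_HARMS = 6
    AI_SYSTEM_SAFETY_FAILURES_AND_LIMITATIONS = 7


# Incident 5's machine-classified values, restated from this section.
incident_5_mit = {
    "risk_domain": MITRiskDomain.AI_SYSTEM_SAFETY_FAILURES_AND_LIMITATIONS,
    "risk_subdomain": "7.3. Lack of capability or robustness",
    "entity": "AI",
    "timing": "Post-deployment",
    "intent": "Unintentional",
}
```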

Incident Reports

Adverse Events in Robotic Surgery: A Retrospective Study of 14 Years of FDA Data
researchgate.net · 2015

Copyright © 2015: Authors.

Appendix

Underreporting

The underreporting in data collection is a fairly common problem in social sciences, public health, criminology, and microeconomics. It occurs when the counting of s…

Adverse Events in Robotic Surgery: A Retrospective Study of 14 Years of FDA Data
arxiv.org · 2015

Importance: Understanding the causes and patient impacts of surgical adverse events will help improve systems and operational practices to avoid incidents in the future.

Objective: To determine the frequency, causes, and patient impact of a…

Robotic Surgery Linked To 144 Deaths Since 2000
technologyreview.com · 2015

Robotic surgeons were involved in the deaths of 144 people between 2000 and 2013, according to records kept by the U.S. Food and Drug Administration. And some forms of robotic surgery are much riskier than others: the death rate for head, n…

Robotic Surgery Has Been Connected to 144 U.S. Deaths Since 2000
gizmodo.com · 2015

All surgery carries risk, and that’s also true when it involves robots. A new study of U.S. Food and Drug Administration data reveals that a variety of malfunctions have been linked to 144 deaths during robotic surgery in the last 14 years.…

Study Finds 'Nonnegligible' Number Of Complications During Robotic Surgery, 144 Deaths Since 2000
techtimes.com · 2015

The use of robotic systems for some forms of surgery is still a relatively new area, but they have been in use long enough for researchers from MIT, the University of Illinois at Urbana-Champaign, and Rush University Medical Center in…

Study looks at problems experienced in robotic surgery
physicstoday.scitation.org · 2015

MIT Technology Review: According to a recent study, most of the robotic surgical procedures performed over the past 14 years have gone smoothly. However, a significant number have suffered some sort of adverse event, even if it did not resu…

Robotic Surgery Involved in 144 Deaths in 14 Years
nbcnews.com · 2015

July 21, 2015, 4:04 PM GMT / Updated July 21, 2015, 7:53 PM GMT By Keith Wagstaff

Robotic surgery is on the ris…

Botched Robotic Surgeries Have Been Linked to 144 Patient Deaths
io9.gizmodo.com · 2015

An independent analysis of reports gathered by the U.S. Food and Drug Administration since 2000 shows that robotic surgery isn’t as safe as some people might assume.

Surgery involving robots, where a surgeon guides the steady and precise mo…

Robotic surgery may be the future, but right now it’s consistently janky
splinternews.com · 2015

The Food and Drug Administration keeps meticulous records concerning instances of medical devices, including robots, malfunctioning or acting in ways that they aren’t supposed to.

Those records are stored in the Manufacturer and User Facili…

Robotic surgery linked to 144 deaths in the US
bbc.com · 2015

[Image caption: Surgical robots allow doctors to improve recovery time and minimise scarring. Credit: Science Photo Library]

A study into the safety of surgical robots has linked the machines' use to at least 144 deaths and more than 1,…

Robot surgeons kill 144 patients, hurt 1,391, malfunction 8,061 times
theregister.co.uk · 2015

Surgery on humans using robots has been touted by some as a safer way to get your innards repaired – and now the figures are in for you to judge.

A team of university eggheads have counted up the number of medical cockups in America reporte…

Robotic surgeries: Really safe?
christiantoday.com · 2015

A series of reports submitted to the U.S. Food and Drug Administration since 2000 were analyzed and it was found that robotic surgeries are not safe after all.

In recent years, the use of surgical robots in the medical community has increas…
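Several of the reports above note that the study's figures were drawn from the FDA's Manufacturer and User Facility Device Experience (MAUDE) database. As a rough illustration of how comparable counts can be pulled today, the sketch below queries the public openFDA device adverse event endpoint. The endpoint URL, the search/count parameters, and the device.manufacturer_d_name, date_received, and event_type fields are assumptions based on openFDA's documentation rather than anything described in the study, and the raw counts will not reproduce the paper's numbers, which involved additional cleaning and manual review.

```python
"""Rough sketch: tally MAUDE device adverse event reports by event type via openFDA.

The endpoint and field names below are assumptions based on openFDA's public
documentation (https://open.fda.gov/apis/device/event/), not on the study itself.
"""
import requests

OPENFDA_DEVICE_EVENT = "https://api.fda.gov/device/event.json"

params = {
    # Hypothetical filter: reports naming the manufacturer listed in this incident,
    # limited to the study's 2000-2013 window.
    "search": 'device.manufacturer_d_name:"INTUITIVE SURGICAL"'
              " AND date_received:[20000101 TO 20131231]",
    # Tally matching reports by event type (e.g. Death, Injury, Malfunction).
    "count": "event_type",
}

resp = requests.get(OPENFDA_DEVICE_EVENT, params=params, timeout=30)
resp.raise_for_status()

for bucket in resp.json().get("results", []):
    print(f'{bucket["term"]}: {bucket["count"]}')
```

Replacing the count parameter with a limit would return individual report records instead of aggregate counts, if a closer look at specific events is needed.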

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
May 2016 · 22 reports

Racist AI behaviour is not a new problem
Mar 1998 · 4 reports

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports
