AI Incident Database

Incident 287: OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm

Description: Nabla, a French digital care company, researched GPT-3’s capabilities for medical documentation, diagnosis support, and treatment recommendation, and found that its inconsistency and lack of scientific and medical expertise made it unviable and risky for healthcare applications. This incident has been downgraded to an issue as it does not meet current ingestion criteria.

Entities

Alleged: OpenAI and Nabla developed an AI system deployed by none, which harmed Nabla customers.

Incident Stats

Incident ID: 287
Report Count: 1
Incident Date: 2020-10-27
Editors: Khoa Lam
Applied Taxonomies: GMF, MIT

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal Snippets

One or more snippets that justify the classification.

Snippet Text: “Our unique multidisciplinary team of doctors and machine learning engineers at Nabla had the chance to test this new model to tease apart what’s real and what’s hype by exploring different healthcare use cases.”
Related Classifications: Question Answering

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

7. AI system safety, failures, and limitations

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Pre-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional
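
Read together, these fields form a small structured record. As a rough sketch only (this is not AIID’s or MIT’s actual schema; the class and field names below are hypothetical), the classification of this incident could be captured in code as:

    from dataclasses import dataclass

    @dataclass
    class MITRiskClassification:
        """Hypothetical record mirroring the MIT taxonomy fields above."""
        risk_domain: str     # one of the seven top-level risk domains
        risk_subdomain: str  # one of the 23 finer-grained subdomains
        entity: str          # which entity is presented as the main cause
        timing: str          # stage in the AI lifecycle
        intent: str          # expected vs. unexpected outcome

    incident_287 = MITRiskClassification(
        risk_domain="7. AI system safety, failures, and limitations",
        risk_subdomain="7.3. Lack of capability or robustness",
        entity="AI",
        timing="Pre-deployment",
        intent="Unintentional",
    )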

Incident Reports

Reports Timeline

AI Incident Database Incidents Converted to Issues
github.com · 2022

The following former incidents have been converted to "issues" following an update to the incident definition and ingestion criteria.

21: Tougher Turing Test Exposes Chatbots’ Stupidity

Description: The 2016 Winograd Schema Challenge highli…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity (a sketch of the general technique follows the list)

All Image Captions Produced are Violent
Apr 2018 · 28 reports

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

Female Applicants Down-Ranked by Amazon Recruiting Tool
Aug 2016 · 33 reports
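The ranking above is generated automatically from the text of incident descriptions. As a rough illustration of the general idea, and not AIID’s actual similarity pipeline, incidents could be ranked by cosine similarity of TF-IDF vectors over their descriptions (the descriptions below are abbreviated stand-ins, not the database’s full records):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Abbreviated stand-in descriptions, not the database's full records.
    current = ("Nabla found GPT-3 inconsistent and lacking medical expertise, "
               "making it unviable and risky for healthcare applications.")
    candidates = {
        "All Image Captions Produced are Violent":
            "An image captioning model produced violent captions for its inputs.",
        "Wikipedia Vandalism Prevention Bot Loop":
            "Vandalism-prevention bots on Wikipedia fell into a reverting loop.",
        "Female Applicants Down-Ranked by Amazon Recruiting Tool":
            "Amazon's recruiting model systematically down-ranked female applicants.",
    }

    # Vectorize the current incident together with the candidates, then
    # score each candidate against the current incident.
    texts = [current, *candidates.values()]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

    for title, score in sorted(zip(candidates, scores),
                               key=lambda pair: pair[1], reverse=True):
        print(f"{score:.3f}  {title}")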

