AI Incident Database

Incident 91: Frontline workers protest at Stanford after hospital distributed vaccine to administrators

Description: In 2020, Stanford Medical Center's vaccine distribution algorithm designated only 7 of its first 5,000 COVID-19 vaccine doses for medical residents, frontline workers regularly exposed to COVID-19.

Entities

Alleged: Stanford Medical Center developed and deployed an AI system, which harmed Stanford Medical frontline workers and Stanford Medical residents.

Incident Stats

Incident ID: 91
Report Count: 5
Incident Date: 2020-12-18
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

91

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Specification

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Low

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Expert

Public Sector Deployment

"Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

names, age, location, position, job, COVID-19 tests
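
The scoring formula itself is not reproduced on this page; only the input fields above are documented. Purely as a hypothetical sketch of how a rule-based priority score over inputs like these might be computed (all field names, weights, and thresholds below are invented for illustration and are not Stanford's actual formula):

```python
# Hypothetical sketch only: Stanford's actual formula has not been
# published here. Field names mirror the documented data inputs
# (age, location, position, COVID-19 tests); weights are invented.

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    age: int
    location: str             # assigned unit/department, if any
    position: str             # e.g. "resident", "administrator"
    unit_positive_tests: int  # COVID-19 positives recorded for the unit

def priority_score(e: Employee) -> float:
    """Toy rule-based score: higher means earlier vaccination."""
    score = 0.0
    # An age term rewards older employees; residents tend to be young,
    # so a term like this works against them.
    score += max(0, e.age - 40) * 0.5
    # An exposure term tied to an assigned unit; residents who rotate
    # across units may have no fixed location and earn no points here.
    if e.location:
        score += e.unit_positive_tests * 1.0
    return score

resident = Employee("A. Resident", 29, "", "resident", 0)
admin = Employee("B. Admin", 55, "oncology", "administrator", 12)
print(priority_score(resident), priority_score(admin))  # 0.0 vs 19.5
```

The reporting linked below suggests a structurally similar failure mode: rotating residents reportedly lacked a fixed assigned location, so location-dependent exposure terms could silently zero out for exactly the staff with the most patient contact.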

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination & Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports


Stanford apologizes for coronavirus vaccine plan that left out many front-line doctors
washingtonpost.com · 2020

Stanford Health Care apologized Friday for a plan that left nearly all of its young front-line doctors out of the first round of coronavirus vaccinations. The Palo Alto, Calif., medical center promised an immediate fix that would move the p…

Only Seven of Stanford’s First 5,000 Vaccines Were Designated for Medical Residents
propublica.org · 2020

Update, Dec. 18, 2020: This story has been updated to add comments from Stanford Medicine.

Stanford Medicine residents who work in close contact with COVID-19 patients were left out of the first wave of staff members for the new Pfizer vacc…

Frontline workers protest at Stanford after hospital distributed vaccine to administrators
independent.co.uk · 2020

Medical residents and nurses from Stanford Medical Center held a protest on Friday following the hospital choosing to vaccinate some staff members who don’t interact with coronavirus patients over other frontline workers.

Video footage from…

Stanford algorithm decided to vaccinate only seven of its frontline COVID-19 workers, out of 5,000 doses
theverge.com · 2020

An algorithm determining which Stanford Medicine employees would receive its 5,000 initial doses of the COVID-19 vaccine included just seven medical residents / fellows on the list, according to a December 17th letter sent from Stanford Med…

This is the Stanford vaccine algorithm that left out frontline doctors
technologyreview.com · 2020

When resident physicians at Stanford Medical Center—many of whom work on the front lines of the covid-19 pandemic—found out that only seven out of over 1,300 of them had been prioritized for the first 5,000 doses of the covid vaccine, they …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Predictive Policing Program by Florida Sheriff’s Office Allegedly Violated Residents’ Rights and Targeted Children of Vulnerable Groups (Sep 2015 · 12 reports)
Tesla Phantom Braking Complaints Surged, Allegedly Linked to Tesla Vision Rollout (May 2021 · 8 reports)
Ever AI Reportedly Deceived Customers about FRT Use in App (Apr 2019 · 7 reports)
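
The list above is ranked by textual similarity between incident descriptions. As a generic illustration of how such a ranking can work (this is not the database's actual similarity model; the snippets, placeholder IDs, and library choice are stand-ins), a TF-IDF cosine-similarity sketch:

```python
# Generic textual-similarity ranking sketch using TF-IDF vectors and
# cosine similarity. Not the AI Incident Database's actual model; the
# incident snippets are abbreviated and IDs other than 91 are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

incident_texts = {
    91: "Stanford vaccine algorithm left frontline medical residents out",
    1: "Predictive policing program allegedly targeted vulnerable residents",
    2: "Tesla phantom braking complaints surged after camera-only rollout",
}

ids = list(incident_texts)
matrix = TfidfVectorizer().fit_transform(list(incident_texts.values()))

# Similarity of the first incident (91) to all the others.
sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
ranked = sorted(zip(ids[1:], sims), key=lambda pair: pair[1], reverse=True)
print(ranked)  # most textually similar incident IDs first
```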

