Incident 124: Algorithmic Health Risk Scores Underestimated Black Patients’ Needs

Summary: Optum's algorithm, deployed by a large academic hospital, was found by researchers to under-predict the health needs of Black patients, effectively de-prioritizing them for extra-care programs relative to white patients with the same health burden.
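
The mechanism behind the disparity, as described in the Science study the reports below cover, was a proxy-label problem: the algorithm predicted future healthcare costs as a stand-in for health needs, and because less money has historically been spent on Black patients at a given level of illness, equal-need patients received unequal risk scores. The following is a minimal, hypothetical Python sketch of that failure mode; the scoring function, dollar figures, and enrollment threshold are invented for illustration and are not Optum's actual model.

# Hypothetical sketch of the cost-as-proxy failure mode; not Optum's model.
# A risk score derived from predicted spending looks neutral, but if
# historical spending differs across groups at equal illness burden,
# equal-need patients land on opposite sides of the enrollment cutoff.

def risk_score(predicted_annual_cost: float) -> float:
    """Toy score: scale predicted annual cost (USD) to a 0-100 range."""
    return min(predicted_annual_cost / 100.0, 100.0)

ENROLL_THRESHOLD = 45.0  # invented cutoff for extra-care enrollment

# Two hypothetical patients with the same chronic-illness burden but
# different historical spending patterns (all figures are made up).
patients = [
    {"id": "patient_a", "predicted_cost": 5200.0},
    {"id": "patient_b", "predicted_cost": 3100.0},
]

for p in patients:
    score = risk_score(p["predicted_cost"])
    enrolled = score >= ENROLL_THRESHOLD
    print(f"{p['id']}: score={score:.1f}, enrolled={enrolled}")

# patient_a clears the cutoff (52.0) and patient_b does not (31.0),
# despite identical underlying need.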

Entities

Alleged: Optum developed an AI system deployed by an unnamed large academic hospital, which harmed Black patients.

Incident Status

Incident ID: 124
Report Count: 7
Incident Date: 2019-10-24
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: CSETv1, GMF, MIT

CSETv1 Taxonomy Classes

Taxonomy Details

Incident Number: 124
(The number of the incident in the AI Incident Database.)

MIT Taxonomy Classes

Machine-Classified

Taxonomy Details

Risk Subdomain: 1.3. Unequal performance across groups
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Discrimination and Toxicity
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: AI
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Unintentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports


A Health Care Algorithm Offered Less Care to Black Patients
wired.com · 2019

Care for some of the sickest Americans is decided in part by algorithm. New research shows that software guiding care for tens of millions of people systematically privileges white patients over black patients. Analysis of records from a ma…

Racial bias in a medical algorithm favors white patients over sicker black patients
washingtonpost.com · 2019

A widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine, researchers have …

Millions of black people affected by racial bias in health-care algorithms
nature.com · 2019

An algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against black people, a sweeping analysis has found.

The study, published in Science on 24 October, concluded that the algor…

New York Insurance Regulator to Probe Optum Algorithm for Racial Bias
fiercehealthcare.com · 2019

New York's Financial Services and Health departments sent a letter to UnitedHealth Group’s CEO David Wichmann Friday regarding an algorithm developed by Optum, The Wall Street Journal reported. The investigation is in response to a study pu…

These Algorithms Look at X-Rays-and Somehow Detect Your Race
wired.com · 2021

Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these …

'Racism is America’s oldest algorithm': How bias creeps into health care AI
statnews.com · 2022

Artificial intelligence and medical algorithms are deeply intertwined with our modern health care system. These technologies mimic the thought processes of doctors to make medical decisions and are designed to help providers determine who n…

Algorithms Are Making Decisions About Health Care, Which May Only Worsen Medical Racism
aclu.org · 2022

Artificial intelligence (AI) and algorithmic decision-making systems — algorithms that analyze massive amounts of data and make predictions about the future — are increasingly affecting Americans’ daily lives. People are compelled to includ…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent system as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting in evidence external to the Incident Database. For more details, see this research paper.

Similar Incidents

By textual similarity

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
A Popular Algorithm Is No Better at Predicting Crimes Than Random People
May 2016 · 22 reports

Kidney Testing Method Allegedly Underestimated Risk of Black Patients
How an Algorithm Blocked Kidney Transplants to Black Patients
Mar 1999 · 3 reports

Northpointe Risk Models
Machine Bias - ProPublica
May 2016 · 15 reports

