AI Incident Database

Incident 101: Dutch Families Wrongfully Accused of Tax Fraud Due to Discriminatory Algorithm

Summary: A childcare benefits system in the Netherlands falsely accused thousands of families of fraud, in part due to an algorithm that treated having a second nationality as a risk factor.
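
To make the mechanism concrete, here is a minimal, hypothetical sketch of how a risk-scoring model that includes a protected attribute such as second nationality as a positively weighted input feature produces discriminatory flags. The internals of the actual Dutch system were never published; the feature names, weights, and threshold below are invented purely for illustration.

```python
# Hypothetical illustration only: the real Dutch system's features and
# weights were never published. This sketch shows how a protected
# attribute (second nationality), used as a positively weighted feature
# in a linear risk score, flags otherwise identical applicants differently.

# Invented weights and flagging threshold.
WEIGHTS = {"low_income": 0.3, "second_nationality": 0.4}
FLAG_THRESHOLD = 0.5


def risk_score(applicant: dict) -> float:
    """Compute a toy linear fraud-risk score from applicant attributes."""
    score = 0.0
    if applicant["income"] < 30_000:
        score += WEIGHTS["low_income"]
    if applicant["second_nationality"]:
        score += WEIGHTS["second_nationality"]
    return score


# Two applicants identical in every respect except the protected attribute.
applicants = [
    {"name": "A", "income": 24_000, "second_nationality": True},
    {"name": "B", "income": 24_000, "second_nationality": False},
]

for a in applicants:
    flagged = risk_score(a) >= FLAG_THRESHOLD
    status = "flagged for fraud review" if flagged else "not flagged"
    print(f"{a['name']}: score={risk_score(a):.1f}, {status}")
```

In this toy model, applicant A (score 0.7) crosses the threshold solely because of the nationality feature, while the otherwise identical applicant B (score 0.3) does not, mirroring the reported behavior of treating a second nationality as a risk factor.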


Entities

Alleged: unknown developed an AI system deployed by Dutch Tax Authority, which harmed Dutch Tax Authority and Dutch families.

Incident Stats

Incident ID: 101
Report Count: 6
Incident Date: 2018-09-01
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classes

Taxonomy Details

Incident Number
The number of the incident in the AI Incident Database.
101

AI Tangible Harm Level Notes
Notes about the AI tangible harm level assessment.
Financial harm and intangible harm. The fraud detection model was described as a self-learning black-box algorithm.

MIT Taxonomy Classes

Machine-Classified

Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
1.1. Unfair discrimination and misrepresentation

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
1. Discrimination and Toxicity

Entity
Which, if any, entity is presented as the main cause of the risk.
Other

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional

Incident Reports


How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud
vice.com · 2021

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud parti…

The Dutch benefits scandal: a cautionary tale for algorithmic enforcement
eulawenforcement.com · 2021

On January 15, the Dutch government was forced to resign amidst a scandal around its child-care benefits scheme. Systems that were meant to detect misuse of the benefits scheme mistakenly labelled over 20,000 parents as fraudsters. More cr…

Dutch scandal serves as a warning for Europe over risks of using algorithms
politico.eu · 2022

Chermaine Leysner’s life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the…

The Dutch Tax Authority Was Felled by AI—What Comes Next?
spectrum-ieee-org.cdn.ampproject.org · 2022

Until recently, it wasn’t possible to say that AI had a hand in forcing a government to resign. But that’s precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoesla…

This Algorithm Could Ruin Your Life
wired.com · 2023

From the outside, Rotterdam’s welfare algorithm appears complex. The system, which was originally developed by consulting firm Accenture before the city took over development in 2018, is trained on data collected by Rotterdam’s welfare depa…

Inside the Suspicion Machine
wired.com · 2023

Every year, the city of Rotterdam in the Netherlands gives some 30,000 people welfare benefits to help them make rent, buy food, and pay essential bills. And every year, thousands of those people are investigated under suspicion of committi…

Variants

A “variant” is an incident that shares the same causative factors as an existing AI incident, produces similar harms, and involves the same intelligent systems. Rather than indexing variants as entirely separate incidents, we list them as variations under the first similar incident submitted to the database. Unlike other submission types in the incident database, variants are not required to have reporting in evidence external to the Incident Database. See this research paper for details.

Similar Incidents

By textual similarity

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
A Popular Algorithm Is No Better at Predicting Crimes Than Random People
May 2016 · 22 reports

Predictive Policing Biases of PredPol
Policing the Future
Nov 2015 · 17 reports

Facebook’s Hate Speech Detection Algorithms Allegedly Disproportionately Failed to Remove Racist Content towards Minority Groups
Facebook’s race-blind practices around hate speech came at the expense of Black users, new documents show
Nov 2021 · 2 reports
