AI Incident Database

Incident 546: Algorithm to Distribute Social Welfare Reported for Oversimplifying Economic Vulnerability

Description: The Takaful cash transfer program's algorithm, which ranks families by their level of economic vulnerability to determine financial assistance, reportedly oversimplified people's economic situations, fueling social tension and perceptions of unfairness.
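The description and the reports below sketch an indicator-based ranking: families are scored on socioeconomic proxies, sorted by vulnerability, and aid flows to the highest-ranked. The following is a minimal, hypothetical Python sketch of how such a proxy score can misorder families; every indicator, weight, and threshold here is an invented assumption for illustration, not the actual Takaful model.

```python
# Hypothetical sketch of an indicator-based vulnerability ranking.
# All indicators, weights, and thresholds are invented for illustration;
# the actual Takaful scoring model is not described in the reports below.
from dataclasses import dataclass


@dataclass
class Household:
    name: str
    monthly_income: float    # reported income in JOD
    household_size: int
    owns_car: bool           # asset ownership used as a wealth proxy
    electricity_kwh: float   # monthly utility consumption


def vulnerability_score(h: Household) -> float:
    """Higher score = ranked as more vulnerable. Weights are illustrative."""
    score = 0.0
    # Income per capita, normalized against an assumed 100 JOD threshold.
    score += max(0.0, 1.0 - h.monthly_income / (100.0 * h.household_size))
    # Not owning a car adds vulnerability; owning one lowers the rank,
    # even when the car is a work necessity rather than a sign of wealth.
    if not h.owns_car:
        score += 0.2
    # Low electricity use is read as poverty, ignoring why usage is low.
    score += max(0.0, 1.0 - h.electricity_kwh / 300.0)
    return score


households = [
    Household("A", monthly_income=120, household_size=5,
              owns_car=False, electricity_kwh=150),
    # Family B is poorer by income but owns an aging car it depends on,
    # so the proxy pushes it down the ranking.
    Household("B", monthly_income=110, household_size=5,
              owns_car=True, electricity_kwh=150),
]

for h in sorted(households, key=vulnerability_score, reverse=True):
    print(f"{h.name}: {vulnerability_score(h):.2f}")
```

In this toy example, family B has the lower income yet ranks as less vulnerable because car ownership is treated purely as wealth, mirroring the kind of oversimplification the reports allege.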


Entities

Alleged: The World Bank, UNICEF, and World Food Programme developed an AI system deployed by the National Aid Fund, which harmed Jordanians in poverty.

Incident Stats

Incident ID: 546
Report Count: 3
Incident Date: 2019-05-31
Editors: Khoa Lam
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain: 1.1. Unfair discrimination and misrepresentation
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: 1. Discrimination and Toxicity
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Incident Reports


An algorithm intended to reduce poverty might disqualify people in need
technologyreview.com · 2023

An algorithm funded by the World Bank to determine which families should get financial assistance in Jordan likely excludes people who should qualify, according to an investigation published this morning by Human Rights Watch. 

The algorith…

Automated Neglect
hrw.org · 2023

Summary

Governments worldwide are turning to automation to help them deliver essential public services, such as food, housing, and cash assistance. But some forms of automation are excluding people from services and singling them out for in…

An Algorithm Aimed To Help Jordan's Poor. It Excluded Some In Need, Report Finds
forbesafrica.com · 2023

The World Bank is increasingly incentivizing countries to develop technologies that can find and rank people in poverty so they can be provided with cash transfer and social assistance programs, according to a report by the Human Rights Wat…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Northpointe Risk Models
May 2016 · 15 reports

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
May 2016 · 22 reports

Gender Biases in Google Translate
Apr 2017 · 10 reports

