AI Incident Database

Incident 11: Northpointe Risk Models

Description: An algorithm developed by Northpointe and used in the penal system is twice as likely to incorrectly label a black person as a high-risk re-offender, and twice as likely to incorrectly label a white person as low risk for reoffending, according to a ProPublica review.
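The disparity described here is a difference in error rates between racial groups rather than in overall accuracy. Below is a minimal sketch of that kind of comparison in the style of ProPublica's published analysis; the file name, column names, and the score threshold are assumptions modeled on ProPublica's released data and methodology, not taken from this page.

# Sketch of the error-rate comparison behind the finding above.
# The file name, column names (race, decile_score, two_year_recid), and the
# ">= 5 counts as higher risk" threshold are assumptions modeled on
# ProPublica's published COMPAS analysis, not guaranteed to match any file.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")  # hypothetical local copy
df["predicted_high_risk"] = df["decile_score"] >= 5

for race in ["African-American", "Caucasian"]:
    group = df[df["race"] == race]
    no_reoffense = group[group["two_year_recid"] == 0]
    reoffense = group[group["two_year_recid"] == 1]

    # False positive rate: labeled higher risk but did not re-offend within two years.
    fpr = no_reoffense["predicted_high_risk"].mean()
    # False negative rate: labeled lower risk but did re-offend within two years.
    fnr = (~reoffense["predicted_high_risk"]).mean()
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")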


Entities

Alleged: Northpointe developed and deployed an AI system, which harmed Accused People.

Incident Stats

Incident ID: 11
Report Count: 15
Incident Date: 2016-05-23
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Incident Number: 11
The number of the incident in the AI Incident Database.
CSETv0 Taxonomy Classifications

Problem Nature: Unknown/unclear
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Physical System: Software only
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Level of Autonomy: Medium
The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Nature of End User: Expert
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Public Sector Deployment: Yes
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

Data Inputs: 137-question survey
A brief description of the data that the AI system(s) used or were trained on.

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain: 1.1. Unfair discrimination and misrepresentation
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

Risk Domain: 1. Discrimination and Toxicity
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
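Read together, these taxonomy fields form a structured record attached to the incident. Below is a minimal sketch of how that record could be represented in code, using the values shown above; the field names and nesting are illustrative, not the database's actual schema or export format.

# Illustrative representation of the classifications shown above.
# Field names and structure are assumptions for this sketch, not the
# AI Incident Database's actual schema.
incident_11 = {
    "incident_id": 11,
    "title": "Northpointe Risk Models",
    "date": "2016-05-23",
    "classifications": {
        "CSETv0": {
            "Problem Nature": "Unknown/unclear",
            "Physical System": "Software only",
            "Level of Autonomy": "Medium",
            "Nature of End User": "Expert",
            "Public Sector Deployment": "Yes",
            "Data Inputs": "137-question survey",
        },
        "MIT": {
            "Risk Subdomain": "1.1. Unfair discrimination and misrepresentation",
            "Risk Domain": "1. Discrimination and Toxicity",
            "Entity": "AI",
            "Timing": "Post-deployment",
            "Intent": "Unintentional",
        },
    },
}

# Example lookup: which risk domain did the MIT taxonomy assign?
print(incident_11["classifications"]["MIT"]["Risk Domain"])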

Incident Reports


ProPublica analysis finds bias in COMPAS criminal justice risk scoring system
privacyinternational.org · 2016

Computer programs that perform risk assessments of crime suspects are increasingly common in American courtrooms, and are used at every stage of the criminal justice systems to determine who may be set free or granted parole, and the size o…

How We Analyzed the COMPAS Recidivism Algorithm
propublica.org · 2016

Across the nation, judges, probation and parole officers are increasingly using algorithms to assess a criminal defendant’s likelihood of becoming a recidivist – a term used to describe criminals who re-offend. There are dozens of these ris…

U.S. Courts Are Using Algorithms Riddled With Racism to Hand Out Sentences
mic.com · 2016

For years, the criminal justice community has been worried. Courts across the country are assigning bond amounts and sentencing the accused based on algorithms, and both lawyers and data scientists warn that these algorithms could be poisoned b…

Machine Bias - ProPublica
propublica.org · 2016

On a spring afternoon in 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid’s blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried…

The Hidden Discrimination In Criminal Risk-Assessment Scores
npr.org · 2016

Courtrooms across the country are increasingly using a defendant's "risk assessment score" to help make decisions about bond, parole and sentencing. The companies behind these sco…

Even algorithms are biased against black men
theguardian.com · 2016

One of my most treasured possessions is The Art of Computer Programming by Donald Knuth, a computer scientist for whom the word “legendary” might have been coined. In a way, one could think of his magnum opus as an attempt to do for compute…

Are criminal risk assessment scores racist?
brookings.edu · 2016

Imagine you were found guilty of a crime and were waiting to learn your sentence. Would you rather have your sentence determined by a computer algorithm, which dispassionately weights factors that predict your future risk of crime (such as …

A New Program Judges If You’re a Criminal From Your Facial Features
vice.com · 2016

Like a more crooked version of the Voight-Kampff test from Blade Runner, a new machine learning paper from a pair of Chinese researchers has delved into the controversial task of letting a computer decide on your innocence. Can a computer k…

ProPublica Is Wrong In Charging Racial Bias In An Algorithm
acsh.org · 2018

Predicting the future is not only the provenance of fortune tellers or media pundits. Predictive algorithms, based on extensive datasets and statistics, have overtaken wholesale and retail operations as any online shopper knows. And in the l…

A Popular Algorithm Is No Better at Predicting Crimes Than Random People
theatlantic.com · 2018

Caution is indeed warranted, according to Julia Dressel and Hany Farid from Dartmouth College. In a new study, they have shown that COMPAS is no better at predicting an individual’s risk of recidivism than random volunteers recruited from t…

Algorithmic Injustice
thenewatlantis.com · 2018

Don’t blame the algorithm — as long as there are racial disparities in the justice system, sentencing software can never be entirely fair.

For generations, the Maasai people of eastern Africa have passed down the story of a tireless old man…

Sentence by Numbers: The Scary Truth Behind Risk Assessment Algorithms
digitalethics.org · 2018

Although crime rates have fallen steadily since the 1990s, rates of recidivism remain a factor in the areas of both public safety and prisoner management. The National Institute of Justice defines recidivism as “criminal acts that resulted …

New York City Takes on Algorithmic Discrimination
aclu.org · 2018

Invisible algorithms increasingly shape the world we live in, and not always for the better. Unfortunately, few mechanisms are in place to ensure they’re not causing more harm than good.

That might finally be changing: A first-in-the-nation…

Yes, artificial intelligence can be racist
vox.com · 2019

Open up the photo app on your phone and search “dog,” and all the pictures you have of dogs will come up. This was no easy feat. Your phone knows what a dog “looks” like.

This modern-day marvel is the result of machine learning, a form of a…

Can you make AI fairer than a judge? Play our courtroom algorithm game
technologyreview.com · 2019

As a child, you develop a sense of what “fairness” means. It’s a concept that you learn early on as you come to terms with the world around you. Something either feels fair or it doesn’t.

But increasingly, algorithms have begun to arbitrate…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity (see the sketch after this list)

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction · May 2016 · 22 reports
Predictive Policing Biases of PredPol · Nov 2015 · 17 reports
AI Beauty Judge Did Not Like Dark Skin · Sep 2016 · 10 reports
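The incidents above are suggested by textual similarity. This page does not specify how that similarity is computed; the sketch below shows one common approach, TF-IDF vectors compared with cosine similarity, using made-up description snippets rather than actual database records.

# Illustrative sketch of ranking incidents by textual similarity.
# The AI Incident Database's actual method is not specified on this page;
# TF-IDF plus cosine similarity is shown as one common approach, and the
# example descriptions are paraphrases for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "Recidivism risk scores produced racially disparate error rates in criminal courts."
candidates = [
    "COMPAS algorithm performs poorly in crime recidivism prediction.",
    "Predictive policing system directed patrols disproportionately to minority neighborhoods.",
    "AI beauty contest judge rated photos of people with dark skin lower.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query] + candidates)

# Compare the query (row 0) against every candidate and sort by similarity score.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for text, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")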