AI Incident Database

Incident 815: Police Use of Facial Recognition Software Causes Wrongful Arrests Without Defendant Knowledge

Description: Police departments across the U.S. have used facial recognition software to identify suspects in criminal investigations, leading to multiple false arrests and wrongful detentions. The software's unreliability, especially in identifying people of color, has produced misidentifications that were not disclosed to defendants. In some cases, individuals were never told that facial recognition played a role in their arrest, a lack of disclosure that violated their legal rights and contributed to unjust detentions.
Editor Notes: This collective incident ID, based on a Washington Post investigation, gathers many harm events under one overarching theme: police departments across the United States making arrests with the assistance of facial recognition technology while failing to disclose the technology's role in those arrests. Some of the documented incidents in the Washington Post's investigation are as follows:

  1. 2019: Facial recognition technology misidentifies Francisco Arteaga in New Jersey, leading to his wrongful detention for four years (see Incident 816).
  2. 2020-2024: The Miami Police Department conducts 2,500 facial recognition searches, leading to at least 186 arrests and 50 convictions. Fewer than 7% of defendants were informed of the technology's use.
  3. 2022: Quran Reid is wrongfully arrested in Louisiana due to a facial recognition match, despite never having visited the state (see Incident 515).
  4. June 2023: A New Jersey appeals court rules that a defendant has the right to information regarding the use of facial recognition technology in their case.
  5. July 2023: The Miami Police Department acknowledges that it may not have informed prosecutors about the use of facial recognition in many cases.
  6. October 6, 2024: The Washington Post publishes its investigation into these incidents and practices.


Entities

Alleged: Clearview AI developed an AI system deployed by Police departments, Evansville PD, Pflugerville PD, Jefferson Parish Sheriff's Office, Miami PD, West New York PD, NYPD, Coral Springs PD, and Arvada PD, which harmed Quran Reid, Francisco Arteaga, and Defendants wrongfully accused by facial recognition.

Incident Stats

Incident ID: 815
Report Count: 1
Incident Date: 2024-10-06
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
7.4. Lack of transparency or interpretability

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
AI system safety, failures, and limitations

Entity
Which, if any, entity is presented as the main cause of the risk.
Human

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Intentional
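
For readers who work with exported incident data, the classification block above amounts to a flat set of attribute-value pairs attached to the incident record. The following is a minimal, hypothetical sketch in Python; field names such as risk_subdomain are illustrative assumptions, not the AIID's actual export schema.

    # Hypothetical representation of Incident 815's MIT taxonomy
    # classification as a plain Python dictionary. Field names are
    # illustrative assumptions, not the AIID export schema.
    incident_815_mit_classification = {
        "incident_id": 815,
        "taxonomy": "MIT",
        "machine_classified": True,
        "risk_domain": "AI system safety, failures, and limitations",
        "risk_subdomain": "7.4. Lack of transparency or interpretability",
        "entity": "Human",            # entity presented as the main cause
        "timing": "Post-deployment",  # stage in the AI lifecycle
        "intent": "Intentional",      # expected vs. unexpected outcome
    }

    # Example use: select records whose classification flags a
    # post-deployment transparency failure.
    records = [incident_815_mit_classification]
    transparency_failures = [
        r for r in records
        if r["timing"] == "Post-deployment"
        and r["risk_subdomain"].startswith("7.4")
    ]
    print(transparency_failures)

A flat structure like this makes it straightforward to filter or aggregate incidents across taxonomy dimensions, which is how such machine-applied classifications are typically consumed.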

Incident Reports

Reports Timeline

Police seldom disclose use of facial recognition despite false arrests
washingtonpost.com · 2024

Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial techno…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
