Incident 816: Cross-Jurisdictional Facial Recognition Misidentification by NYPD Leads to Wrongful Arrest and Four-Year Jail Time in New Jersey
Description: In 2019, facial recognition technology misidentified Francisco Arteaga as a suspect in an armed robbery in New Jersey. The incident led to nearly four years of pretrial incarceration. Despite having an alibi, Arteaga was charged based on the flawed identification. The legal battle that followed resulted in a court ruling requiring police to reveal details about the algorithms used in facial recognition. The process exposed significant gaps in transparency and accountability.
Editor Notes: See Incident 815 for a broader overview of these specific kinds of harms. Reconstructing the timeline of events: (1) November 29, 2019: An armed robbery occurs at the Buenavista Multiservices store in West New York, New Jersey. Police submit surveillance footage for facial recognition analysis. (2) December 2019: The West New York Police Department sends surveillance footage to the NYPD's Real Time Crime Center, which identifies Francisco Arteaga as a possible match using facial recognition technology. (3) 2019-2022: Arteaga spends nearly four years in pretrial detention while fighting the charges, despite having an alibi. (4) May 13, 2022: A trial judge denies Arteaga’s motion for discovery on details of the facial recognition technology used in his case. (5) June 7, 2023: A New Jersey appellate court rules that Arteaga is entitled to information on the facial recognition technology used in his case, including the algorithm, error rates, and other relevant details.
Entities
Alleged: Clearview AI developed an AI system deployed by West New York PD, NYPD, and Real Time Crime Center, which harmed Francisco Arteaga.
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline

Francisco Arteaga was incarcerated, waiting to appear at a court hearing last fall, when he saw a huge guy staring at him from across the courthouse holding cell.
"This guy's arms are like this, right?" he…
Variants
A "Variant" is an AI incident similar to a known case—it shares the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants need not have been reported outside the AIID. Learn more from the research paper.
Have you seen something similar?
Similar Incidents

Employee Automatically Terminated by Computer Program
· 20 reports