Incident 110: Arkansas's Opaque Algorithm to Allocate Health Care Excessively Cut Down Hours for Beneficiaries
Description: Beneficiaries of the Arkansas Department of Human Services (DHS)'s Medicaid waiver program were allocated drastically fewer hours of caretaker visits via an algorithm deployed to boost efficiency, which reportedly contained errors and whose outputs varied wildly despite small input changes.
Entities
Alleged: InterRAI developed an AI system deployed by the Arkansas Department of Human Services, which harmed Arkansas Medicaid waiver program beneficiaries and Arkansas healthcare workers.
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
110
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline
For most of her life, Tammy Dobbs, who has cerebral palsy, relied on her family in Missouri for care. But in 2008, she moved to Arkansas, where she enrolled in a state program that provided a caretaker…

Artificial intelligence (AI) and algorithmic decision-making systems, algorithms that analyze massive amounts of data and make predictions about the future, are increasingly affecting the daily lives of Americans…
Variants
Una "Variante" es un incidente de IA similar a un caso conocido—tiene los mismos causantes, daños y sistema de IA. En lugar de enumerarlo por separado, lo agrupamos bajo el primer incidente informado. A diferencia de otros incidentes, las variantes no necesitan haber sido informadas fuera de la AIID. Obtenga más información del trabajo de investigación.
Have you seen something similar?
Similar Incidents
Did our AI mess up? Flag the unrelated incidents

Northpointe Risk Models
· 15 reports

Predictive Policing Biases of PredPol
· 17 reports