AI Incident Database

Incident 43: Racist AI behaviour is not a new problem

Description: From 1982 to 1986, St George's Hospital Medical School used a program to automate a portion of its admissions process, which resulted in discrimination against women and members of ethnic minorities.


Entities

Alleged: Dr. Geoffrey Franglen developed an AI system deployed by St George's Hospital Medical School, which harmed Women and Minority Groups.

Incident Stats

ID
43
Report Count
4
Incident Date
1998-03-05
Editors
Sean McGregor
Applied Taxonomies
CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

43

Notes (special interest intangible harm)

Input any notes that may help explain your answers.

The Commission for Racial Equality found St. George's Hospital Medical School guilty of discrimination against women and members of ethnic minorities.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred. It is only asking whether a special interest intangible harm occurred.

yes

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format of YYYY.

1979

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Specification

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Medium

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

Standardized university admission form, previous admission and rejection decisions
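
To illustrate the failure mode these inputs invite, the following is a minimal sketch in Python, assuming synthetic data and scikit-learn; it is not the original Franglen program, whose rules are not reproduced here. A model fit to historical admit/reject decisions that penalized women and ethnic-minority applicants recovers those penalties as learned weights. The feature names (score, is_female, is_minority) are hypothetical.

# Illustrative only: synthetic data, not St George's actual program.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features a screener could read off a standardized admission form:
# an academic score plus two demographic attributes (hypothetical).
score = rng.normal(0.0, 1.0, n)
is_female = rng.integers(0, 2, n)
is_minority = rng.integers(0, 2, n)

# Simulated historical decisions: driven mainly by the score, but with
# an explicit penalty against women and ethnic-minority applicants.
logit = 1.5 * score - 1.0 * is_female - 1.5 * is_minority
admitted = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# "Automating" the process: fit a model to the past human decisions.
X = np.column_stack([score, is_female, is_minority])
model = LogisticRegression().fit(X, admitted)

# The learned coefficients approximate the human penalties, so the
# automated screener discriminates just as the past decisions did.
print(dict(zip(["score", "is_female", "is_minority"],
               model.coef_[0].round(2))))

Trained this way, no demographic column even needs to be explicit: correlated proxies on the form, such as names or place of birth, can carry the same signal, which is reportedly how the St George's program encoded its bias.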

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Report Timeline


A blot on the profession
europepmc.org · 1998

Discrimination in medicine against women and members of ethnic minorities has long been suspected, but now it has been proven. St George's Hospital Medical School has been…

Computers magnifying our prejudices
marginalrevolution.com · 2013

As AI spreads, this will become an increasingly important and controversial issue:

For one British university, what began as a time-saving exercise ended in disgrace when a computer model…

Racist AI behaviour is not a new problem
natbuckley.co.uk · 2016

Professor Margaret Boden, a researcher in artificial intelligence and cognitive science, took the time to talk to me in 2010 about computers, artificial intelligence, morality and the future. One of the stories she told me…

Racist in the Machine
read.dukeupress.edu · 2016

Companies and governments need to pay attention to the unconscious and institutional biases that seep into their algorithms, argues cybersecurity expert Megan García. Distorted data can skew results in…

Variants

A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting as evidence external to the incident database. Learn more from the research paper.

Similar Incidents

By textual similarity

Female Applicants Down-Ranked by Amazon Recruiting Tool

2018 in Review: 10 AI Failures

Aug 2016 · 33 reports
AI Beauty Judge Did Not Like Dark Skin

A beauty contest was judged by AI and the robots didn't like dark skin

Sep 2016 · 10 reports
Sexist and Racist Google Adsense Advertisements

Discrimination in Online Ad Delivery

Jan 2013 · 27 reports

