CSETv1 Charts

The CSET AI Harm Taxonomy for AIID is the second edition of the CSET incident taxonomy. It characterizes the harms, entities, and technologies involved in AI incidents and the circumstances of their occurrence. The charts below show select fields from the CSET AI Harm Taxonomy for AIID. Details about each field can be found here; a brief description of each field is also provided above the corresponding chart.

The taxonomy provides the CSET definition for AI harm.

AI harm has four elements which, once appropriately defined, enable the identification of AI harm. These key components serve to distinguish harm from non-harm and AI harm from non-AI harm. To be an AI harm, there must be:

  • 1) an entity that experienced
  • 2) a harm event or harm issue that
  • 3) can be directly linked to a consequence of the behavior of
  • 4) an AI system.

All four elements need to be present in order for there to be AI harm.
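To make the conjunctive nature of the definition concrete, here is a minimal Python sketch of the four-element check. The class and field names are illustrative assumptions for this page, not CSET's actual annotation schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentAnnotation:
    """Illustrative container for the four elements of the CSET AI harm definition."""
    harmed_entity: Optional[str]        # 1) an entity that experienced the harm
    harm_event_or_issue: Optional[str]  # 2) a harm event or harm issue
    linked_to_ai_behavior: bool         # 3) directly linked to a consequence of the system's behavior
    is_ai_system: bool                  # 4) the system meets the CSET definition of an AI system

def meets_cset_ai_harm_definition(annotation: IncidentAnnotation) -> bool:
    # All four elements must be present; if any one is missing,
    # the incident is not AI harm under this definition.
    return (
        annotation.harmed_entity is not None
        and annotation.harm_event_or_issue is not None
        and annotation.linked_to_ai_behavior
        and annotation.is_ai_system
    )
```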

Not every incident in AIID meets this definition of AI harm. The bar charts below show the annotated results both for all AIID incidents and for the subset of incidents that meet the CSET definition of AI harm.

CSET has developed specific definitions for the key phrases in this definition that may differ from other organizations’ definitions. As a result, other organizations may assess differently whether any particular AI incident is (or is not) AI harm. Details about CSET’s definitions for AI harm can be found here.

Every incident is independently classified by two CSET annotators. Annotations are peer-reviewed and then randomly sampled for quality control ahead of publication. Despite this rigorous process, mistakes do happen, and readers are invited to report any errors they discover while browsing.

Does the incident involve a system that meets the CSET definition for an AI system?

AI System (by Incident Count)

If there was differential treatment, on what basis?

Differential treatment based upon a protected characteristic: This special interest intangible harm covers bias and fairness issues concerning AI. However, the bias must be associated with a group having a protected characteristic.

Basis for differential treatment (by Incident Count)

All AIID Incidents

Category | Count
race | 43
sex | 21
nation of origin, citizenship, immigrant status | 12
disability | 11
sexual orientation or gender identity | 11
religion | 10
financial means | 9
age | 8
geography | 8
ideology | 2
none | 2
familial status (e.g., having or not having children) or pregnancy | 1
other |
unclear |

CSET AI Harm Definition

Category | Count
race | 37
sex | 18
nation of origin, citizenship, immigrant status | 10
disability | 10
religion | 10
sexual orientation or gender identity | 7
age | 7
financial means | 6
geography | 5
ideology | 2
none | 1
familial status (e.g., having or not having children) or pregnancy | 1
other |
unclear |

In which sector did the incident occur?

Sector of Deployment (by Incident Count)

All AIID Incidents

Category | Count
information and communication | 81
arts, entertainment and recreation | 35
transportation and storage | 28
wholesale and retail trade | 20
law enforcement | 16
education | 15
human health and social work activities | 15
public administration | 13
administrative and support service activities | 11
professional, scientific and technical activities | 8
financial and insurance activities | 7
accommodation and food service activities | 6
manufacturing | 3
other | 3
defense | 2
real estate activities | 2
other service activities | 1
unclear | 1

CSET AI Harm Definition

Category | Count
information and communication | 58
transportation and storage | 21
arts, entertainment and recreation | 19
law enforcement | 14
wholesale and retail trade | 13
public administration | 9
human health and social work activities | 7
administrative and support service activities | 7
education | 6
accommodation and food service activities | 5
professional, scientific and technical activities | 4
financial and insurance activities | 4
other | 2
defense | 1
real estate activities | 1
other service activities | 1
unclear | 1
manufacturing |
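For readers who want to reproduce counts like these from an export of the classification data, the sketch below shows the kind of aggregation involved. The file name and the columns "sector_of_deployment" and "meets_ai_harm_definition" are illustrative assumptions, not the actual AIID export schema; the same pattern would apply to other categorical fields such as the basis for differential treatment.

```python
import pandas as pd

# Hypothetical export of CSETv1 classifications; file name and column names
# are illustrative assumptions, not the real AIID schema.
df = pd.read_csv("classifications.csv")

# Incident counts per sector across all AIID incidents.
all_counts = df["sector_of_deployment"].value_counts()

# Incident counts per sector for incidents meeting the CSET AI harm definition.
harm_counts = df.loc[df["meets_ai_harm_definition"], "sector_of_deployment"].value_counts()

print(all_counts)
print(harm_counts)
```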

How autonomously did the technology operate at the time of the incident?

Autonomy is an AI's capability to operate independently. Levels of autonomy differ based on whether or not the AI makes independent decisions and the degree of human oversight. The level of autonomy does not depend on the type of input the AI receives, whether it is human- or machine-generated.
Currently, CSET is annotating three levels of autonomy.
  • Level 1: the system operates independently with no simultaneous human oversight.
  • Level 2: the system operates independently but with human oversight, where the system makes a decision or takes an action, but a human actively observes the behavior and can override the system in real-time.
  • Level 3: the system provides inputs and suggested decisions or actions to a human that actively chooses to proceed with the AI's direction.
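As a minimal illustration, the three levels could be encoded as a small enumeration. The member names follow the labels used in the annotation questions below; the encoding itself is an assumption for this page, not CSET's field values.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative encoding of the three autonomy levels CSET annotates."""
    FULLY_AUTONOMOUS = 1  # operates independently with no simultaneous human oversight
    HUMAN_ON_LOOP = 2     # operates independently; a human observes and can override in real time
    HUMAN_IN_LOOP = 3     # provides inputs and suggested actions; a human chooses whether to proceed
```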

Autonomy Level (by Incident Count)
  • Autonomy1 (fully autonomous): Does the system operate independently, without simultaneous human oversight, interaction or intervention?
  • Autonomy2 (human-on-loop): Does the system operate independently but with human oversight, where the system makes decisions or takes actions but a human actively observes the behavior and can override the system in real time?
  • Autonomy3 (human-in-the-loop): Does the system provide inputs and suggested decisions or actions to a human who actively chooses to proceed with the AI's direction?

Did the incident occur in a domain with physical objects?

Incidents that involve physical objects are more likely to result in damage or injury. However, AI systems that do not operate in a physical domain can still lead to harm.

Domain questions – Physical Objects (by Incident Count)

Did the incident occur in the entertainment industry?

AI systems used for entertainment are less likely to involve physical objects and hence unlikely to be associated with damage, injury, or loss. Additionally, there is a lower expectation for truthful information from entertainment, making detrimental content less likely (but still possible).

Domain questions – Entertainment Industry (by Incident Count)

Was the incident about a report, test, or study of training data instead of the AI itself?

The quality of AI training and deployment data can potentially create harm or risks in AI systems. However, an issue in the data does not necessarily mean the AI will cause harm or increase the risk for harm. It is possible that developers or users apply techniques and processes to mitigate issues with data.

Domain questions – Report, Test, or Study of data (by Incident Count)

Was the reported system (even if AI involvement is unknown) deployed or sold to users?

Domain questions – Deployed (by Incident Count)

Was this a test or demonstration of an AI system done by developers, producers, or researchers (versus users) in controlled conditions?

AI tests or demonstrations by developers, producers, or researchers in controlled environments are less likely to expose people, organizations, property, institutions, or the natural environment to harm. Controlled environments may include situations such as an isolated compute system, a regulatory sandbox, or an autonomous vehicle testing range.

Domain questions – Producer Test in Controlled Conditions (by Incident Count)

Was this a test or demonstration of an AI system done by developers, producers, or researchers (versus users) in operational conditions?

Some AI systems undergo testing or demonstration in an operational environment. Testing in operational environments still occurs before the system is deployed by end-users. However, relative to controlled environments, operational environments try to closely represent real-world conditions that affect use of the AI system.

Domain questions – Producer Test in Operational Conditions (by Incident Count)

Was this a test or demonstration of an AI system done by users in controlled conditions?

Sometimes, prior to deployment, the users will perform a test or demonstration of the AI system. The involvement of a user (versus a developer, producer, or researcher) increases the likelihood that harm can occur even if the AI system is being tested in controlled environments because a user may not be as familiar with the functionality or operation of the AI system.

Domain questions – User Test in Controlled Conditions (by Incident Count)

Was this a test or demonstration of an AI system done by users in operational conditions?

The involvement of a user (versus a developer, producer, or researcher) increases the likelihood that harm can occur even if the AI system is being tested. Relative to controlled environments, operational environments try to closely represent real-world conditions and end-users that affect use of the AI system. Therefore, testing in an operational environment typically poses a heightened risk of harm to people, organizations, property, institutions, or the environment.

Domain questions – User Test in Operational Conditions (by Incident Count)
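Taken together, the domain questions above amount to a set of yes/no fields recorded per incident. The sketch below shows one way such annotations might be represented; the field names are paraphrases of the questions, not the taxonomy's exact identifiers.

```python
from dataclasses import dataclass

@dataclass
class DomainQuestions:
    """Illustrative yes/no fields mirroring the domain questions above."""
    physical_objects: bool               # occurred in a domain with physical objects
    entertainment_industry: bool         # occurred in the entertainment industry
    report_test_or_study_of_data: bool   # about a report, test, or study of training data
    deployed_or_sold_to_users: bool      # reported system was deployed or sold to users
    producer_test_in_controlled_conditions: bool   # producer/developer test or demo, controlled conditions
    producer_test_in_operational_conditions: bool  # producer/developer test or demo, operational conditions
    user_test_in_controlled_conditions: bool       # user test or demo, controlled conditions
    user_test_in_operational_conditions: bool      # user test or demo, operational conditions
```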
