
About

Why "AI Incidents"?

Intelligent systems are currently prone to unforeseen and often dangerous failures when they are deployed in the real world. Much like the transportation sector before them (e.g., the FAA and FARS) and, more recently, computer systems, intelligent systems require a repository of problems experienced in the real world so that future researchers and developers can mitigate or avoid repeated bad outcomes.

What is an Incident?

The initial set of more than 1,000 incident reports is intentionally broad in nature. Current examples include:

  • An autonomous car kills a pedestrian
  • A trading algorithm causes a market "flash crash" where billions of dollars transfer between parties
  • A facial recognition system causes an innocent person to be arrested

You are invited to explore the incidents collected to date, view the complete listing, and submit additional incident reports. Researchers are invited to review our working definition of AI incidents.

Current and Future Users

The database is a constantly evolving data product and collection of applications.

  • Current Users include system architects, industrial product developers, public relations managers, researchers, public policy researchers, and the general public. These users are invited to use the Discover application to learn how recently deployed intelligent systems have produced unexpected outcomes in the real world, so that they may avoid making similar mistakes in the future (a sketch of working with the downloadable data follows this list).
  • Future Uses will evolve through the code contributions of the open source community, including additional database summaries and taxonomies.
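
Both groups can also work with the data directly, since the complete database can be downloaded from the site. The following is a minimal sketch, assuming the export is a JSON file named incidents.json containing an incidents array whose records carry an ISO-formatted date field; the filename and schema here are illustrative assumptions, not the documented export format:

    import json
    from collections import Counter

    # Load a local snapshot of the database. The filename and the
    # "incidents"/"date" fields are illustrative assumptions; the
    # actual export schema may differ.
    with open("incidents.json", encoding="utf-8") as f:
        incidents = json.load(f)["incidents"]

    # Tally incidents per year, assuming ISO "YYYY-MM-DD" date strings.
    per_year = Counter(incident["date"][:4] for incident in incidents)

    for year, count in sorted(per_year.items()):
        print(f"{year}: {count} incidents")

Summaries like this annual tally are one example of the database views that community code contributions could add.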

When Should You Report an Incident?

When in doubt about whether an event qualifies as an incident, please submit it! This project is intended to converge on shared criteria for ingesting incidents through exploration of the candidate incidents submitted by the broader community.

Board of Directors

The incident database is managed in a participatory manner by persons and organizations contributing code, research, and broader impacts. If you would like to participate in the governance of the project, please contact us and include your intended contribution to the AI Incident Database.

Voting Members

  • Patrick Hall: Patrick Hall is an Assistant Professor at the George Washington University School of Business, where he teaches applied and responsible machine learning. He conducts research in support of the National Institute of Standards and Technology (NIST) AI Risk Management Framework and has earned a Department of Commerce Gold Medal for his contributions. Patrick brought some of the first responsible AI solutions to market with H2O.ai and is affiliated with leading fair lending and AI risk management consultancies, where he advances practices at the intersection of technology and regulation. He has been invited to speak on AI and machine learning at the National Academies, the Association for Computing Machinery’s KDD Conference, and the American Statistical Association’s Joint Statistical Meetings. Patrick’s expertise has been featured in The New York Times and on NPR, and his writing has appeared in Frontiers in AI, Information, McKinsey.com, O’Reilly Media, and Thomson Reuters Regulatory Intelligence. His technical contributions have also been profiled in Fortune, WIRED, InfoWorld, TechCrunch, and other leading outlets.
    Contributions: Patrick is a leading contributor of incident reports to the AI Incident Database and provides strategic leadership for the board.

  • Heather Frase: Heather Frase, PhD, is the head of Veraitech and a Senior Advisor for Testing & Evaluation of AI at Virginia Tech’s National Security Institute. She advises on, contributes to, and teaches the testing and evaluation of AI-enabled systems, including AI benchmarks, agentic AI, generative AI, and red-teaming. Her career has spanned significant roles in defense, intelligence, and policy, with projects ranging from AI evaluation and reliability to drug trafficking prevention and financial crime analysis. She also serves as a member of the Organisation for Economic Co-operation and Development (OECD) Network of Experts on AI.
    Contributions: Heather has contributed to AI incident research and frameworks for AI incident reporting.

  • Kristian J. Hammond: Kris Hammond is the Bill and Cathy Osborn Professor of Computer Science at Northwestern University and the co-founder of the artificial intelligence company Narrative Science, recently acquired by Salesforce. He is also the faculty lead of Northwestern’s CS + X initiative, which explores how computational thinking can transform fields such as law, medicine, education, and business, and the director of Northwestern’s Master of Science in Artificial Intelligence (MSAI) program. Most recently, Dr. Hammond founded the Center for Advancing Safety in Machine Intelligence (CASMI), a research hub funded by Underwriters Laboratories. CASMI is focused on operationalizing the design and evaluation of AI systems from the perspective of their impact on human life and safety.
    Contributions: Kris is developing a collaborative project centered on case studies of incidents.

Emeritus Board

Emeritus board members are those who have particularly distinguished themselves in their service to the Responsible AI Collaborative. They hold no governance position within the organization.

  • Sean McGregor: Sean McGregor is a machine learning safety researcher whose efforts have included starting up the Digital Safety Research Institute at the UL Research Institutes, launching the AI Incident Database, and training edge neural network models for the neural accelerator startup Syntiant. Sean's open source development work has earned media attention in The Atlantic, Der Spiegel, WIRED, VentureBeat, and Vice, among others, while his technical publications have appeared in a variety of machine learning, human-computer interaction, ethics, and application-centered proceedings. Dr. McGregor currently serves as executive director of the AI Incident Database, lead of the MLCommons Agentic Workstream, and co-founder of a stealth AI safety non-profit. These efforts thematically align with an interest in "AI risk" and how we might understand those risks by building the capacity to insure them.

  • Helen Toner: Helen Toner is Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Centre for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.

Collaborators

Responsible AI Collaborative: People who serve the organization behind the AI Incident Database.

  • Daniel Atherton: Danny Atherton is lead editor with the AI Incident Database (AIID), where he works to expand and curate its documentation of real-world AI failures and risks. In this role, he helps build public knowledge about the societal impacts of AI systems, collaborating with practitioners, researchers, policymakers, educators, journalists, and volunteer contributors to document incidents and develop shared frameworks. In addition to his regular writing for the AIID, which includes roundup reports and articles like “Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses” (2024), he is co-author of “Lessons for Editors of AI Incidents from the AI Incident Database” (2025), co-author of “AI incident response plans: Not just for security anymore” (2023), and lead co-author of The Language of Trustworthy AI: An In-Depth Glossary of Terms, developed with the National Institute of Standards and Technology (NIST) as part of its Trustworthy & Responsible AI Resource Center. The glossary includes more than 500 terms and is designed to promote a shared vocabulary for responsible AI practice. Through Hall Research, he is also the co-editor of Awesome Machine Learning Interpretability, a GitHub repository of responsible machine learning resources. He brings to his work an interest in classification logic and the role of genre and narrative in framing, with attention to how knowledge structures influence governance and public understanding. He holds a Ph.D. in English from George Washington University and teaches in the Department of English at Georgetown University.

  • Board members include Patrick Hall, Heather Frase, and Kristian J. Hammond. Their biographies appear above.

  • Sean McGregor is the Executive Director.

  • Daniel Atherton is the lead AI incidents editor.

Many additional people and organizations without formal Responsible AI Collaborative affiliations contribute to the project.

Digital Safety Research Institute (DSRI): People affiliated with DSRI, which provides substantial support to the AIID program.

  • Kevin Paeth is a lead with DSRI
  • César Varela is a Full Stack Engineer
  • Luna McNulty is a UX Engineer
  • Pablo Costa is a Full Stack Engineer
  • Clara Youdale Pinelli is a Front End Engineer
  • Sean McGregor is a director with DSRI

Incident Editors: People who resolve incident submissions to the database and maintain them.

  • Daniel Atherton
  • Sean McGregor

Additionally, Zachary Arnold made significant contributions to the incident criteria.

Taxonomy Editors: Organizations or people that have contributed taxonomies to the database.

  • Center for Security and Emerging Technology (CSET)
  • Nikiforos Pittaras (GMF)

Open Source Contributors: People who have contributed more than one pull request, graphic, piece of site copy, or bug report to the AI Incident Database.

  • Neama Dadkhahnikoo: Served as the volunteer executive director and board observer for the Responsible AI Collaborative.
  • Jingying Yang and Dr. Christine Custis: Contributed significantly to the early stages of the AIID through their roles with the Partnership on AI.
  • Khoa Lam: Served as a data editor.
  • Kate Perkins: Served as a data editor.
  • Scott Allen Cambo: Previously served as executive director for the Responsible AI Collaborative.
  • Janet Boutilier Schwartz: Previously consulted on operations and strategy with the Responsible AI Collaborative.
  • Kit Harris: Served as a board observer and provided strategic advice from his position as grant advisor.
  • Alex Muscă
  • Chloe Kam: Developed the AIID logo.
  • JT McHorse
  • Seth Reid

Incident Contributors: People who have contributed a large number of incidents to the database.

  • Roman Lutz (Max Planck Institute for Intelligent Systems, formerly Microsoft)
  • Patrick Hall (Burt and Hall LLP)
  • Catherine Olsson (Google)
  • Roman Yampolskiy (University of Louisville)
  • Sam Yoon (as contractor to PAI, then with Deloitte Consulting, then with the Kennedy School of Government)

The following people have collected a large number of incidents that are pending ingestion.

  • Zachary Arnold, Helen Toner, Ingrid Dickinson, Thomas Giallella, and Nicolina Demakos (Center for Security and Emerging Technology, Georgetown)
  • Lawrence Lee, Darlena Phuong Quyen Nguyen, Iftekhar Ahmed (UC Irvine)

There is a growing community of people concerned with the collection and characterization of AI incidents, and we encourage everyone to contribute to the development of this system.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton (481)
  • 🥈 Anonymous (144)
  • 🥉 Khoa Lam (93)

Reports Added to Existing Incidents
  • 🥇 Daniel Atherton (559)
  • 🥈 Khoa Lam (230)
  • 🥉 Anonymous (204)

Total Report Contributions
  • 🥇 Daniel Atherton (2364)
  • 🥈 Anonymous (924)
  • 🥉 Khoa Lam (456)
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. Governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

(Sponsor logos displayed on the site: Organization Founding Sponsor; Database Founding Sponsor; Sponsors and Grants; In-Kind Sponsors.)
