AI Incident Database

About

Why "AI Incidents"?

Intelligent systems are currently prone to unforeseen and often dangerous failures when they are deployed in the real world. Much like the transportation sector before them (e.g., the FAA and FARS) and, more recently, computer systems, intelligent systems require a repository of problems experienced in the real world so that future researchers and developers may mitigate or avoid repeated bad outcomes.

What is an Incident?

The initial set of more than 1,000 incident reports has been intentionally broad in nature. Current examples include:

  • An autonomous car kills a pedestrian
  • A trading algorithm causes a market "flash crash" where billions of dollars transfer between parties
  • A facial recognition system causes an innocent person to be arrested

You are invited to explore the incidents collected to date, view the complete listing, and submit additional incident reports. Researchers are invited to review our working definition of AI incidents.

Current and Future Users

The database is a constantly evolving data product and collection of applications.

  • Current Users include system architects, industrial product developers, public relations managers, researchers, and public policy researchers. These users are invited to use the Discover application to proactively discover how recently deployed intelligent systems have produced unexpected outcomes in the real world. In so doing, they may avoid making similar mistakes in their development.
  • Future Uses will evolve through the code contributions of the open source community, including additional database summaries and taxonomies.

When Should You Report an Incident?

When in doubt of whether an event qualifies as an incident, please submit it! This project is intended to converge on a shared definition of "AI Incident" through exploration of the candidate incidents submitted by the broader community.

Board of Directors

The incident database is managed in a participatory manner by persons and organizations contributing code, research, and broader impacts. If you would like to participate in the governance of the project, please contact us and include your intended contribution to the AI Incident Database.

Voting Members

  • Helen Toner: Helen Toner is Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.
    Contributions: AI incident research and oversight of the CSET taxonomy.
  • Patrick Hall: Patrick is principal scientist at bnh.ai, a D.C.-based law firm specializing in AI and data analytics. Patrick also serves as visiting faculty at the George Washington University School of Business. Before co-founding bnh.ai, Patrick led responsible AI efforts at the machine learning software firm H2O.ai, where his work resulted in one of the world’s first commercial solutions for explainable and fair machine learning. Among other academic and technology media writing, Patrick is the primary author of popular e-books on explainable and responsible machine learning. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
    Contributions: Patrick is the leading contributor of incident reports to the AI Incident Database Project.
  • Sean McGregor: Sean McGregor founded the AI Incident Database project and recently left a position as machine learning architect at the neural accelerator startup Syntiant so he could focus on the assurance of intelligent systems full time. Dr. McGregor's work spans neural accelerators for energy efficient inference, deep learning for speech and heliophysics, and reinforcement learning for wildfire suppression policy. Outside his paid work, Sean organized a series of workshops at major academic AI conferences on the topic of "AI for Good" and is currently developing an incentives-based approach to making AI safer through audits and insurance.
    Contributions: Sean volunteers as a project maintainer and editor of the AI Incident Database (AIID) project.

Non-Voting Members

  • Neama Dadkhahnikoo: Neama Dadkhahnikoo is an expert in artificial intelligence and entrepreneurship, with over 15 years of experience in technology development at startups, nonprofits, and large companies. He currently serves as a Product Manager for Vertex AI, Google Cloud’s unified platform for end-to-end machine learning. Previously, Mr. Dadkhahnikoo was the Director of AI and Data Operations at the XPRIZE Foundation, CTO of CaregiversDirect (AI startup for home care), co-founder and COO of Textpert (AI startup for mental health), and a startup consultant. He started his career as a Software Developer for The Boeing Company. Mr. Dadkhahnikoo holds an MBA from UCLA Anderson; an MS in Project Management from Stevens Institute of Technology; and a BA in Applied Mathematics and Computer Science, with a minor in Physics, from UC Berkeley.
    Contributions: Neama serves as Executive Director to the Responsible AI Collaborative outside his role as product manager at Google.
  • Kit Harris: Kit leads on grant investigations and researches promising fields at Longview Philanthropy. He also writes about Longview recommendations and research for Longview client philanthropists. Prior to focusing on high-impact philanthropic work, Kit worked as a credit derivatives trader with J.P. Morgan. During that time, he donated the majority of his income to high-impact charities. Kit holds a first class degree in mathematics from the University of Oxford.
    Contributions: Kit serves as board observer, provides strategic advice, and is RAIC's point of contact at Longview.

Collaborators

Open Source Contributors: People that have contributed more than one pull request, graphics, site copy, or bug report to the AI Incident Database.

  • César Varela is a Full Stack Engineer with the Responsible AI Collaborative
  • Luna McNulty is a UX Engineer with the Responsible AI Collaborative
  • Pablo Costa is a Full Stack Engineer with the Responsible AI Collaborative
  • Clara Youdale Pinelli is a Front End Engineer with the Responsible AI Collaborative
  • Alex Muscă
  • Chloe Kam developed the AIID logo
  • JT McHorse
  • Seth Reid

Responsible AI Collaborative: People that serve the organization behind the AI Incident Database.

  • Janet Boutilier Schwartz

Incident Editors: People that resolve incident submissions to the database and maintain them.

  • Sean McGregor
  • Khoa Lam
  • Kate Perkins
  • Janet Boutilier Schwartz
  • Daniel Atherton

Additionally, Zachary Arnold made significant contributions to the incident criteria.

Taxonomy Editors: Organizations or people that have contributed taxonomies to the database.

  • Center for Security and Emerging Technology (CSET)

Partnership on AI staff members:
Jingying Yang and Dr. Christine Custis contributed significantly to the early stages of the AIID.

Incident Contributors: People that have contributed a large number of incidents to the database.

  • Roman Lutz (Max Planck Institute for Intelligent Systems, formerly Microsoft)
  • Patrick Hall (Burt and Hall LLP)
  • Catherine Olsson (Google)
  • Roman Yampolskiy (University of Louisville)
  • Sam Yoon (as contractor to PAI, then with Deloitte Consulting, then with the Kennedy School of Government)

The following people have collected a large number of incidents that are pending ingestion.

  • Zachary Arnold, Helen Toner, Ingrid Dickinson, Thomas Giallella, and Nicolina Demakos (Center for Security and Emerging Technology, Georgetown)
  • Charlie Pownall via AI, algorithmic and automation incident and controversy repository (AIAAIC)
  • Lawrence Lee, Darlena Phuong Quyen Nguyen, Iftekhar Ahmed (UC Irvine)

There is a growing community of people concerned with the collection and characterization of AI incidents, and we encourage everyone to contribute to the development of this system.

Incident Report Submission Leaderboard

These are the people and organizations that have contributed to creating and submitting incident reports. See the leaderboard page for more details.

New Incident Submissions
  • 🥇 Daniel Atherton: 424
  • 🥈 Anonymous: 142
  • 🥉 Khoa Lam: 93
Reports Added to Existing Incidents
  • 🥇 Daniel Atherton: 501
  • 🥈 Khoa Lam: 230
  • 🥉 Anonymous: 200
Total Report Contributions
  • 🥇 Daniel Atherton: 2176
  • 🥈 Anonymous: 918
  • 🥉 Khoa Lam: 456
Responsible AI Collaborative

The incident database is a project of the Responsible AI Collaborative, an organization founded to advance the AI Incident Database. Governance of the Collaborative is structured around the participants in this important program. For more details, please read the founding report.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

Organization Founding Sponsors
Database Founding Sponsors
Sponsors and Grants
In-Kind Sponsors
