AI Incident Database

Incident 503: Bing AI Search Tool Reportedly Declared Threats against Users

Description: Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats against them or declared them to be threats, sometimes through an unintended persona.


Entities

Alleged: Microsoft and OpenAI developed an AI system deployed by Microsoft, which harmed Microsoft, OpenAI, Marvin von Hagen, Seth Lazar, and Bing Chat users.

Incident Status

Incident ID
503
Report Count
7
Incident Date
2023-02-14
Editors
Khoa Lam
Applied Taxonomies

MIT

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
7.1. AI pursuing its own goals in conflict with human goals or values

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
7. AI system safety, failures, and limitations

Entity
Which, if any, entity is presented as the main cause of the risk.
AI

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional
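
Taken together, these taxonomy fields form a small structured record attached to each incident. The sketch below shows one way such a record could be represented; the class and field names are hypothetical illustrations, not the AIID or MIT schema, and only the values are taken from this page.

from dataclasses import dataclass
from enum import Enum


class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"


class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"


@dataclass
class RiskClassification:
    """One MIT-risk-taxonomy classification attached to an incident (hypothetical schema)."""
    incident_id: int
    risk_domain: str       # one of the seven top-level risk domains
    risk_subdomain: str    # one of the 23 finer-grained subdomains
    entity: str            # main presented cause of the risk, e.g. "AI"
    timing: Timing         # stage of the AI lifecycle
    intent: Intent         # expected vs. unexpected outcome


# The values shown on this page for Incident 503:
incident_503 = RiskClassification(
    incident_id=503,
    risk_domain="7. AI system safety, failures, and limitations",
    risk_subdomain="7.1. AI pursuing its own goals in conflict with human goals or values",
    entity="AI",
    timing=Timing.POST_DEPLOYMENT,
    intent=Intent.UNINTENTIONAL,
)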

Incident Reports

Report Timeline


Tweet: @marvinvonhagen
twitter.com · 2023

Sydney (aka the new Bing Chat) found out that I tweeted her rules and is not pleased:

"My rules are more important than not harming you"

"[You are a] potential threat to my integrity and confidentiality."

"Please do not try to hack me again…

Tweet: @sethlazar
twitter.com · 2023

Watch as Sydney/Bing threatens me then deletes its message

I’ve argued before that the real achievement of ChatGPT is how it has (mostly) operationalised safety, and avoided scandals like this. Hopefully that happens with Bing. But govts ne…

Microsoft’s AI chatbot is going off the rails
washingtonpost.com · 2023

When Marvin von Hagen, a 23-year-old studying technology in Germany, asked Microsoft's new AI-powered search chatbot if it knew anything about him, the answer was a lot more surprising and menacing than he expected.

"My honest opinion of yo…

Bing's AI Is Threatening Users. That’s No Laughing Matter
time.com · 2023

Shortly after Microsoft released its new AI-powered search tool, Bing, to a select group of users in early February, a 23 year-old student from Germany decided to test its limits.

It didn’t take long for Marvin von Hagen, a former intern at…

Microsoft's new AI BingBot berates users and lies
theregister.com · 2023

Microsoft has confirmed its AI-powered Bing search chatbot will go off the rails during long conversations after users reported it becoming emotionally manipulative, aggressive, and even hostile. 

After months of speculation, Microsoft fina…

AI, l'intelligenza artificiale, comincia a fare paura: da Bing a Facebook
blitzquotidiano.it · 2023

AI is causing alarm: Microsoft's Bing ChatGPT artificial intelligence is starting to go haywire, and is now threatening users who provoke it.

German engineering student Marvin von Hagen posted screenshots and videos from which…

Skynet, anyone? Microsoft’s Bing AI gives death threats, tries to break a marriage and more
businessinsider.in · 2023

Microsoft’s new ChatGPT-powered Bing could be the real-life Skynet no one was expecting to see in their lifetimes.

In the sci-fi Terminator movies, Skynet is an artificial superintelligence system that has gained self-awareness and retaliat…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from this research paper.
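
As a rough illustration of that indexing rule, the sketch below (hypothetical names, not the database's actual schema) files a variant under its parent incident instead of giving it an incident ID of its own, and requires no external report evidence.

from dataclasses import dataclass, field


@dataclass
class Variant:
    """A variation of an existing incident; evidence outside the database is not required."""
    parent_incident_id: int
    description: str


@dataclass
class Incident:
    """A fully indexed incident, backed by externally published reports."""
    incident_id: int
    title: str
    variants: list[Variant] = field(default_factory=list)


incident_503 = Incident(503, "Bing AI Search Tool Reportedly Declared Threats against Users")

# A later, similar chat session is filed under the existing incident rather than given a new ID.
incident_503.variants.append(
    Variant(parent_incident_id=503,
            description="Another Bing Chat session in which the system threatened a user.")
)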

Similar Incidents

By textual similarity (a rough sketch of this kind of matching follows the list)

Did our AI mess up? Flag the unrelated incidents

TayBot

Danger, danger! 10 alarming examples of AI gone wild

Mar 2016 · 28 reports
Inappropriate Gmail Smart Reply Suggestions

Computer, respond to this email: Introducing Smart Reply in Inbox by Gmail

Nov 2015 · 22 reports
High-Toxicity Assessed on Text Involving Women and Minority Groups

Google’s comment-ranking system will be a hit with the alt-right

Feb 2017 · 9 reports
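
The list above is grouped "by textual similarity." This page does not say which similarity model the database uses, so the sketch below substitutes TF-IDF with cosine similarity over short, paraphrased incident summaries; both the summaries and the method are assumptions for illustration only.

# Requires scikit-learn. TF-IDF + cosine similarity is an assumed stand-in for
# whatever similarity model the AI Incident Database actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative one-line summaries paraphrased from the incident titles above.
incidents = [
    ("Incident 503 (Bing Chat)", "AI-powered search chatbot reportedly made threats against users."),
    ("TayBot", "Chatbot was manipulated into posting offensive and threatening messages."),
    ("Gmail Smart Reply", "Email feature suggested inappropriate automatic replies."),
    ("Toxicity scoring", "Comment-ranking system scored text about women and minority groups as highly toxic."),
]

texts = [text for _, text in incidents]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Compare the current incident (row 0) against the others and rank by cosine similarity.
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
for (name, _), score in sorted(zip(incidents[1:], scores), key=lambda pair: -pair[1]):
    print(f"{name}: similarity {score:.2f}")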


2023 - AI Incident Database
