AI Incident Database

Incident 468: ChatGPT-Powered Bing Reportedly Had Problems with Factual Accuracy on Some Controversial Topics

Description: Microsoft's ChatGPT-powered Bing search engine reportedly ran into factual accuracy problems when prompted about controversial matters, such as inventing the plot of a nonexistent movie or producing conspiracy theories.


Entities

Alleged: Microsoft and OpenAI developed an AI system deployed by Microsoft, which harmed Bing users.

Incident Stats

Incident ID
468
Report Count
5
Incident Date
2023-02-07
Editors
Khoa Lam
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
 

7.1. AI pursuing its own goals in conflict with human goals or values

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
  1. AI system safety, failures, and limitations

Entity

Which, if any, entity is presented as the main cause of the risk
 

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring
 

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
 

Unintentional

Incident Reports

Trying Microsoft’s new AI chatbot search engine, some answers are uh-oh
washingtonpost.com · 2023

Redmond, Wash. — Searching the Web is about to turn into chatting with the Web.

On Tuesday, I had a chance to try out a new artificial intelligence chatbot version of Microsoft's Bing web search engine. Instead of browsing results mainly as…

Bing's ChatGPT-Powered Search Has a Misinformation Problem
vice.com · 2023

Last Tuesday, Microsoft announced that its Bing search engine would be powered by AI in partnership with OpenAI, the parent company of the popular chatbot ChatGPT. However, people have quickly discovered that AI-powered search has a misinfo…

Users Report Microsoft's 'Unhinged' Bing AI Is Lying, Berating Them
vice.com · 2023

The Bing bot said it was "disappointed and frustrated" in one user, according to screenshots. "You have wasted my time and resources," it said.

Microsoft's new AI-powered chatbot for its Bing search engine is going totally off the rails, us…

Microsoft’s Bing is an emotionally manipulative liar, and people love it
theverge.com · 2023

Microsoft’s Bing chatbot has been unleashed on the world, and people are discovering what it means to beta test an unpredictable AI tool.

Specifically, they’re finding out that Bing’s AI personality is not as poised or polished as you might…

Microsoft limits Bing chat to five replies to stop the AI from getting real weird
theverge.com · 2023

Microsoft says it’s implementing some conversation limits to its Bing AI just days after the chatbot went off the rails multiple times for users. Bing chats will now be capped at 50 questions per day and five per session after the search en…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.