Description: A lawyer in California asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. The chatbot produced a false story of Professor Jonathan Turley sexually harassing a student on a class trip.
Entities
Alleged: OpenAI developed and deployed an AI system, which harmed Jonathan Turley.
Incident Stats
Incident Reports
Reports Timeline
reason.com · 2023
[UPDATE: My apologies for misattributing this at first to ChatGPT-4. I had accessed the OpenAI query portal through a page focusing on ChatGPT-4 (https://openai.com/product/gpt-4) and then clicking on "Try on ChatGPT Plus," which is why I h…
washingtonpost.com · 2023
One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone…
arstechnica.com · 2024
OpenAI's ChatGPT is more than just an AI language model with a fancy interface. It's a system consisting of a stack of AI models and content filters that make sure its outputs don't embarrass OpenAI or get the company into legal trouble whe…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list them under the first similar incident submitted to the database. Unlike other submission types, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.