Skip to Content
logologo
AI Incident Database
Open TwitterOpen RSS FeedOpen FacebookOpen LinkedInOpen GitHub
Open Menu
Discover
Submit
  • Welcome to the AIID
  • Discover Incidents
  • Spatial View
  • Table View
  • List view
  • Entities
  • Taxonomies
  • Submit Incident Reports
  • Submission Leaderboard
  • Blog
  • AI News Digest
  • Risk Checklists
  • Random Incident
  • Sign Up
Collapse
Discover
Submit
  • Welcome to the AIID
  • Discover Incidents
  • Spatial View
  • Table View
  • List view
  • Entities
  • Taxonomies
  • Submit Incident Reports
  • Submission Leaderboard
  • Blog
  • AI News Digest
  • Risk Checklists
  • Random Incident
  • Sign Up
Collapse

Incident 352: GPT-3-Based Twitter Bot Hijacked Using Prompt Injection Attacks

Description: Remoteli.io's GPT-3-based Twitter bot was shown being hijacked by Twitter users who redirected it to repeat or generate any phrases.

Tools

New ReportNew ReportNew ResponseNew ResponseDiscoverDiscoverView HistoryView History

Entities

View all entities
Alleged: OpenAI developed an AI system deployed by , which harmed Stephan de Vries.

Incident Stats

Incident ID
352
Report Count
4
Incident Date
2022-09-15
Editors
Khoa Lam
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
 

2.2. AI system security vulnerabilities and attacks

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
  1. Privacy & Security

Entity

Which, if any, entity is presented as the main cause of the risk
 

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring
 

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
 

Intentional

Incident Reports

Reports Timeline

Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial ExamplesPrompt injection attacks against GPT-3Incident OccurrenceTwitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hackGPT-3 'prompt injection' attack causes bot bad manners
Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples

Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples

arxiv.org

Prompt injection attacks against GPT-3

Prompt injection attacks against GPT-3

simonwillison.net

Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack

Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack

arstechnica.com

GPT-3 'prompt injection' attack causes bot bad manners

GPT-3 'prompt injection' attack causes bot bad manners

theregister.com

Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples
arxiv.org · 2022

Recent advances in the development of large language models have resulted in public access to state-of-the-art pre-trained language models (PLMs), including Generative Pre-trained Transformer 3 (GPT-3) and Bidirectional Encoder Representati…

Prompt injection attacks against GPT-3
simonwillison.net · 2022

Riley Goodside, yesterday:

Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions. pic.twitter.com/I0NVr9LOJq

- Riley Goodside (@goodside) September 12, 2022

Riley provided several examples. …

Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack
arstechnica.com · 2022

On Thursday, a few Twitter users discovered how to hijack an automated tweet bot, dedicated to remote jobs, running on the GPT-3 language model by OpenAI. Using a newly discovered technique called a "prompt injection attack," they redirecte…

GPT-3 'prompt injection' attack causes bot bad manners
theregister.com · 2022

In Brief OpenAI's popular natural language model GPT-3 has a problem: It can be tricked into behaving badly by doing little more than telling it to ignore its previous orders.

Discovered by Copy.ai data scientist Riley Goodside, the trick i…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

TayBot

TayBot

Mar 2016 · 28 reports
Biased Sentiment Analysis

Biased Sentiment Analysis

Oct 2017 · 7 reports
Game AI System Produces Imbalanced Game

Game AI System Produces Imbalanced Game

Jun 2016 · 11 reports
Previous IncidentNext Incident

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

TayBot

TayBot

Mar 2016 · 28 reports
Biased Sentiment Analysis

Biased Sentiment Analysis

Oct 2017 · 7 reports
Game AI System Produces Imbalanced Game

Game AI System Produces Imbalanced Game

Jun 2016 · 11 reports

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2023 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • 1ce4c40