AI Incident Database

Incident 259: YouTuber Built, Made Publicly Available, and Released Model Trained on Toxic 4chan Posts as Prank

Description: A YouTuber built GPT-4chan, a model based on EleutherAI’s GPT-J and fine-tuned on posts containing racism, misogyny, and antisemitism collected from 4chan’s “politically incorrect” board. He made the model publicly available and deployed it as multiple bots that posted thousands of messages on the same 4chan board as a prank.


Entities

Alleged: Yannic Kilcher developed and deployed an AI system, which harmed internet social platform users.

Incident Stats

Incident ID: 259
Report Count: 2
Incident Date: 2022-06-03
Editors: Khoa Lam
Applied Taxonomies: GMF, MIT

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal Snippets

One or more snippets that justify the classification.

Snippet Text: The bot, which Kilcher called GPT-4chan, “the most horrible model on the internet”—a reference to GPT-3, a language model developed by OpenAI that uses deep learning to produce text—was shockingly effective and replicated the tone and feel of 4chan posts.

Related Classifications: Social Media Content Generation

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain: 1.2. Exposure to toxic content
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: 1. Discrimination and Toxicity
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: Human
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Intentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)

Incident Reports

AI Trained on 4Chan Becomes ‘Hate Speech Machine’
vice.com · 2022

AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan’s infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results—the AI was just as vile as…

YouTuber trains AI bot on 4chan’s pile o’ bile with entirely predictable results
theverge.com · 2022

A YouTuber named Yannic Kilcher has sparked controversy in the AI world after training a bot on posts collected from 4chan’s Politically Incorrect board (otherwise known as /pol/).

The board is 4chan’s most popular and well-known for its to…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.