Incident 133: Online Trolls Allegedly Abused TikTok’s Automated Content Reporting System to Discriminate against Marginalized Creators

Description: TikTok's automated content reporting system was allegedly abused by online trolls, who intentionally filed false reports against content created by users from marginalized groups.

Entities

Alleged: TikTok developed and deployed an AI system, which harmed TikTok content creators from marginalized groups.

Incident Stats

Incident ID: 133
Report Count: 1
Incident Date: 2020-12-15
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: GMF, CSETv1, MIT

CSETv1 Taxonomy Classifications

Incident Number: 133
The number of the incident in the AI Incident Database.

GMF Taxonomy Classifications

Known AI Goal Snippets
One or more snippets that justify the classification.

"Myself, alongside many other creators, especially BIPOC, LGBTQPIA+, and those living with disabilities, are being targeted by trolls who are intentionally falsely reporting our content with the goal to delete our videos from the app." (Related classification: Automated Content Curation)

Known AI Goal Classification Discussion
Free text justifying the chosen classification (e.g., based on the selected snippets and technical analysis), if needed.

Automated Content Curation: applies if a content-analysis component exists in the moderation pipeline.

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain: 1.1. Unfair discrimination and misrepresentation
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.

Risk Domain: 1. Discrimination and Toxicity
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

Entity: AI
Which, if any, entity is presented as the main cause of the risk.

Timing: Post-deployment
The stage in the AI lifecycle at which the risk is presented as occurring.

Intent: Unintentional
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
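
Taken together, these taxonomies are structured metadata attached to the incident record. A minimal sketch of how such a record could be modeled, using field names that are illustrative assumptions rather than the database's actual schema:

```python
# Illustrative only: field names are assumptions based on this page's
# layout, not the AI Incident Database's actual schema.
from dataclasses import dataclass, field


@dataclass
class MITClassification:
    risk_subdomain: str  # e.g. "1.1. Unfair discrimination and misrepresentation"
    risk_domain: str     # e.g. "1. Discrimination and Toxicity"
    entity: str          # presented main cause of the risk, e.g. "AI"
    timing: str          # lifecycle stage, e.g. "Post-deployment"
    intent: str          # expected vs. unexpected outcome, e.g. "Unintentional"


@dataclass
class Incident:
    incident_id: int
    title: str
    date: str
    report_count: int
    applied_taxonomies: list[str] = field(default_factory=list)
    mit: MITClassification | None = None


incident_133 = Incident(
    incident_id=133,
    title="Online Trolls Allegedly Abused TikTok's Automated Content "
          "Reporting System to Discriminate against Marginalized Creators",
    date="2020-12-15",
    report_count=1,
    applied_taxonomies=["GMF", "CSETv1", "MIT"],
    mit=MITClassification(
        risk_subdomain="1.1. Unfair discrimination and misrepresentation",
        risk_domain="1. Discrimination and Toxicity",
        entity="AI",
        timing="Post-deployment",
        intent="Unintentional",
    ),
)
```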

Incident Reports

Reports Timeline

TikTok Deleted My Account Because I’m a Latina Trans Woman
losangelesblade.com · 2020

My name is Rosalynne (Rose) Montoya, I am a Latina, bisexual, transgender woman. I am a social media content creator and before Monday December 14th, 2020, I had grown my audience to 300K+ followers on my TikTok account. (@RosalynneMontoya)…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity. Did our AI mess up? Flag the unrelated incidents.

  • Korean Chatbot Luda Made Offensive Remarks towards Minority Groups (Dec 2020 · 13 reports)
  • AI-Generated Faces Used by Scammers to Pose as a Law Firm in Boston (Apr 2022 · 1 report)
  • Alleged Issues with Proctorio's Remote-Testing AI Prompted Suspension by University (Jan 2020 · 6 reports)
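
These matches are surfaced automatically by textual similarity. As a rough illustration, here is a minimal sketch of one common approach, TF-IDF vectors compared by cosine similarity; this is an assumption for illustration, not necessarily the database's actual model, and the incident IDs other than 133 are hypothetical:

```python
# Minimal sketch of similar-incident ranking by textual similarity.
# TF-IDF + cosine similarity is an assumption for illustration; the
# AI Incident Database's actual similarity model may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: incident ID -> short description text.
incidents = {
    133: "Online trolls allegedly abused TikTok's automated content "
         "reporting system to misreport content by marginalized creators.",
    999: "A Korean chatbot made offensive remarks towards minority groups.",
    998: "AI-generated faces were used by scammers posing as a law firm.",
}

ids = list(incidents)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(incidents.values())

# Compare incident 133 against every incident in the corpus.
query_row = tfidf[ids.index(133)]
scores = cosine_similarity(query_row, tfidf).ravel()

# Rank every other incident by similarity to incident 133.
ranked = sorted(
    ((i, s) for i, s in zip(ids, scores) if i != 133),
    key=lambda pair: pair[1],
    reverse=True,
)
for incident_id, score in ranked:
    print(incident_id, round(float(score), 3))
```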