Incidents involved as Developer and Deployer
Incident 141 · 2 Reports
California Police Turned on Music to Allegedly Trigger Instagram's DMCA Takedown to Avoid Being Live-Streamed
2021-02-05
A police officer in Beverly Hills played copyrighted music on his phone upon realizing that his interactions were being recorded on a livestream, allegedly hoping that Instagram's automated copyright detection system would end or mute the stream.
Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
2021-07-11
Facebook's, Instagram's, and Twitter's automated content moderation reportedly failed to proactively remove racist remarks and posts directed at Black football players after a finals loss, allegedly relying largely on user reports of harassment.
Incident 394 · 2 Reports
Social Media's Automated Word-Flagging without Context Shifted Content Creators' Language Use
2017-03-15
TikTok's, YouTube's, Instagram's, and Twitch's use of algorithms to flag certain words devoid of context changed how content creators use everyday language or discuss certain topics, for fear of their content being mistakenly flagged or auto-demonetized.
Incident 447 · 2 Reports
Footballer's "X-Rated" Comment Created by Instagram's Mistranslation
2022-12-19
Instagram's English translation of a footballer's comment on his wife's post in Spanish made the message seem "racy" and "X-rated," which some fans found amusing.
Incidents involved as Deployer
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006-02-25
Automated content moderation tools intended to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.
Incident 723 · 2 Reports
Instagram Algorithms Reportedly Directed Children's Merchandise Ad Campaign to Adult Men and Sex Offenders
2024-05-13
An Instagram ad campaign for children's merchandise was intended to reach adult women but was instead predominantly shown to adult men, including convicted sex offenders, due to Instagram's algorithmic targeting. This failure reportedly led to direct solicitations for sex with the 5-year-old model featured in the ads.
Incident 576 · 1 Report
Alleged Misuse of PicSo AI for Generating Inappropriate Content Emphasizing "Girls"
2023-10-24
PicSo AI, which reportedly has been advertised by Meta on Instagram, is allegedly being used to generate inappropriate content with an emphasis on "girls." This raises concerns about the misuse of generative AI for creating offensive and potentially sexually explicit material that could serve nefarious and criminal purposes.
Incident 758 · 1 Report
Teen's Overdose Reportedly Linked to Meta's AI Systems Failing to Block Ads for Illegal Drugs
2023-09-11
Meta's AI moderation systems reportedly failed to block ads for illegal drugs on Facebook and Instagram, allowing users to access dangerous substances. The system's failure is linked to the overdose death of Elijah Ott, a 15-year-old boy who sought drugs through Instagram.
Incidents implicated systems
Incident 885 · 2 Reports
Meta AI Characters Allegedly Exhibited Racism, Fabricated Identities, and Exploited User Trust
2025-01-03
Meta deployed AI-generated profiles on its platforms, including Instagram and Facebook, as part of an experiment. The profiles, such as "Liv" and "Grandpa Brian," allegedly featured fabricated identities and misleading diversity claims. These accounts also allegedly manipulated user emotions for engagement and profit. Reportedly, backlash over offensive and deceptive content led Meta to delete the profiles on January 3rd, 2025, citing a blocking-related bug.
Related Entities
Other entities related to the same incident. For example, if an incident's developer is this entity but its deployer is another entity, they are marked as related entities.
Incidents involved as Developer and Deployer
- Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
- Incident 359 · 1 Report
Facebook, Instagram, and Twitter Cited Errors in Automated Systems as Cause for Blocking pro-Palestinian Content on Israeli-Palestinian Conflict
Incidents involved as Deployer
- Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
- Incident 758 · 1 Report
Teen's Overdose Reportedly Linked to Meta's AI Systems Failing to Block Ads for Illegal Drugs
Incidents implicated systems
Meta
Incidents involved as Developer and Deployer
- Incident 723 · 2 Reports
Instagram Algorithms Reportedly Directed Children's Merchandise Ad Campaign to Adult Men and Sex Offenders
- Incident 885 · 2 Reports
Meta AI Characters Allegedly Exhibited Racism, Fabricated Identities, and Exploited User Trust