Incident 833: Polish Radio Station Replaces Human Hosts with AI-Generated Presenters to Simulate Interviewing Deceased Poet Wisława Szymborska

Description: Polish radio station Off Radio Krakow replaced human presenters with AI-generated ones and aired a simulated interview with the deceased poet Wisława Szymborska. The AI-driven experiment aimed to attract younger listeners but led to job losses for former hosts. In response to the public backlash, the station ended its use of AI presenters.
Editor Notes: Michal Rusinek, who runs the foundation that manages Wisława Szymborska's estate, reportedly gave permission for her voice to be used, but he later expressed disappointment with the outcome, describing the AI-generated interview as "horrible" and saying it made Szymborska sound "bland" and "naïve." Rusinek noted that, while Szymborska herself might have found humor in the experiment, the interview misrepresented her tone and personality. Regardless of Rusinek's role and response, Incident 833 still raises difficult questions about the ethics of using AI to recreate the voices of deceased individuals, as well as the potential and actual harms that follow from eroding transparency and truth in media. See also Incident 627: Unauthorized AI Impersonation of George Carlin Used in Comedy Special.

Entities

Alleged: OpenAI, ElevenLabs, and Leonardo AI developed an AI system deployed by Off Radio Krakow and Mariusz Marcin Pulit, which harmed Lukasz Zaleski, Mateusz Demski, Wisława Szymborska, the Off Radio Krakow audience, and Off Radio Krakow employees.

Incident Stats

Incident ID: 833
Report Count: 2
Incident Date: 2024-10-21
Editors:
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain: 6.2. Increased inequality and decline in employment quality
(A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.)

Risk Domain: Socioeconomic & Environmental Harms
(The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.)

Entity: Human
(Which, if any, entity is presented as the main cause of the risk.)

Timing: Post-deployment
(The stage in the AI lifecycle at which the risk is presented as occurring.)

Intent: Intentional
(Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.)
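
For readers who work with exported classification data, the fields above can be thought of as one structured record per incident. The sketch below is illustrative only, assuming a made-up record shape; the field names and the MITClassification type are hypothetical and do not reflect the actual AIID export or MIT taxonomy schema.

# Hypothetical sketch only: field names and record shape are invented for
# illustration; this is not the AIID's or the MIT taxonomy's actual schema.
from dataclasses import dataclass

@dataclass
class MITClassification:
    incident_id: int
    risk_domain: str       # one of the seven top-level risk domains
    risk_subdomain: str    # one of the 23 subdomains
    entity: str            # main cause of the risk, e.g. "Human" or "AI"
    timing: str            # e.g. "Pre-deployment" or "Post-deployment"
    intent: str            # "Intentional" or "Unintentional"
    machine_classified: bool = True

incident_833 = MITClassification(
    incident_id=833,
    risk_domain="Socioeconomic & Environmental Harms",
    risk_subdomain="6.2. Increased inequality and decline in employment quality",
    entity="Human",
    timing="Post-deployment",
    intent="Intentional",
)

print(incident_833)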

Incident Reports

Polish radio station abandons use of AI 'presenters' following outcry
abcnews.go.com · 2024

WARSAW, Poland -- A Polish radio station said Monday that it has ended an “experiment” that involved using AI-generated "presenters" instead of real journalists after the move sparked an outcry.

Weeks after dismissing its journalists, OFF R…

An ‘Interview’ With a Dead Luminary Exposes the Pitfalls of A.I.
nytimes.com · 2024

When a state-funded Polish radio station canceled a weekly show featuring interviews with theater directors and writers, the host of the program went quietly, resigned to media industry realities of cost-cutting and shifting tastes away fro…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

Selected by our editors

Unauthorized AI Impersonation of George Carlin Used in Comedy Special
Jan 2024 · 8 reports

By textual similarity

Robot kills worker at German Volkswagen plant
Jul 2014 · 27 reports

Nuclear False Alarm
Sep 1983 · 27 reports

Amazon Censors Gay Books
May 2008 · 24 reports