AI Incident Database

Incident 807: ChatGPT Introduces Errors in Critical Child Protection Court Report

Description: A child protection worker in Victoria used ChatGPT to draft a report submitted to the Children's Court. The AI-generated report contained inaccuracies and downplayed risks to the child; entering sensitive case details into ChatGPT also constituted a privacy breach, because that information was disclosed to OpenAI.
Editor Notes: Reconstructing the timeline of events: Between July and December 2023, according to reporting, nearly 900 employees of Victoria's Department of Families, Fairness and Housing (DFFH), representing 13% of the workforce, accessed ChatGPT. In early 2024, a case worker used ChatGPT to draft a child protection report submitted to the Children's Court. This report contained significant inaccuracies, including the misrepresentation of personal details and a downplaying of risks to the child, whose parents had been charged with sexual offenses. Following this incident, an internal review of the case worker's unit revealed that over 100 other cases showed signs of potential AI involvement in drafting child protection documents. On September 24, 2024, the department was instructed to ban the use of public generative AI tools and to notify staff accordingly, but the Office of the Victorian Information Commissioner (OVIC) found this directive had not been fully implemented. The next day, on September 25, 2024, OVIC released its investigation findings, confirming the inaccuracies in the ChatGPT-generated report and outlining the risks associated with AI use in child protection cases. OVIC issued a compliance notice requiring DFFH to block access to generative AI tools by November 5, 2024.


Entities

Alleged: OpenAI developed an AI system deployed by Department of Families Fairness and Housing, Government of Victoria, and Employee of Department of Families Fairness and Housing, which harmed Unnamed child and Unnamed family of child.

Incident Stats

Incident ID
807
Report Count
7
Incident Date
2024-09-25
Editors
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
  1. Privacy & Security

Entity

Which, if any, entity is presented as the main cause of the risk.

Human

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Intentional

Incident Reports

Reports Timeline

  • Investigation into the use of ChatGPT by a Child Protection worker (ovic.vic.gov.au)
  • Vic case worker used ChatGPT to draft child protection report (itnews.com.au)
  • Victorian welfare agency banned from GenAI after child protection debacle (themandarin.com.au)
  • AI ban ordered after child protection worker used ChatGPT in Victorian court case (theguardian.com)
  • Victorian child protection worker uses ChatGPT for protection report (cyberdaily.au)
  • Victoria's child protection agency bans AI use after report debacle (newsbytesapp.com)
  • Australian Information Commissioner Halts GenAI Use for Child Protection Agency as ChatGPT Downplays Risk (medianama.com)

Investigation into the use of ChatGPT by a Child Protection worker
ovic.vic.gov.au · 2024

The following is a copy of the executive summary of the report. To view the report in full, please download the PDF provided by OVIC.

Executive summary

Background

In December 2023, the Department of Families, Fairness and Housing (DFFH) rep…

Vic case worker used ChatGPT to draft child protection report
itnews.com.au · 2024

Victoria's Department of Families, Fairness and Housing (DFFH) has been directed to ban and block access to a range of generative AI tools after a child protection worker used ChatGPT to draft a report submitted to the Children's Court.

The…

Victorian welfare agency banned from GenAI after child protection debacle
themandarin.com.au · 2024

Victoria’s Department of Families, Fairness and Housing (DFFH) child protection service has been banned from using generative artificial intelligence in the workplace for at least a year.

The ban comes after an investigation into a case wor…

AI ban ordered after child protection worker used ChatGPT in Victorian court case
theguardian.com · 2024

Victoria's child protection agency has been ordered to ban staff from using generative AI services after a worker was found to have entered significant amounts of personal information, including the name of an at-risk child, into ChatGPT.

T…

Victorian child protection worker uses ChatGPT for protection report
cyberdaily.au · 2024

Victoria’s child protection agency has been ordered to ban the use of AI tools after a case worker used ChatGPT to write a child’s protection report, resulting in sensitive data being submitted and a number of inaccuracies being generated.

…
Victoria's child protection agency bans AI use after report debacle
newsbytesapp.com · 2024

What's the story

Victoria's child protection agency has imposed a ban on its staff using generative artificial intelligence (AI) services. This decision comes after an employee was discovered to have inputted substantial personal informatio…

Australian Information Commissioner Halts GenAI Use for Child Protection Agency as ChatGPT Downplays Risk
medianama.com · 2024

A state-level Australian Information Commissioner has ordered Victoria state’s child protection agency to stop using generative AI services. According to the Information Commissioner, the agency staff entered a significant amount of persona…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Australian Automated Debt Assessment System Issued False Notices to Thousands
Jul 2015 · 39 reports

Australian Retailers Reportedly Captured Face Prints of Their Customers without Consent
May 2022 · 2 reports

Airbnb's Trustworthiness Algorithm Allegedly Banned Users without Explanation, and Discriminated against Sex Workers
Jul 2017 · 6 reports

