Incident 807: ChatGPT Introduces Errors in Critical Child Protection Court Report
Summary: A child protection worker in Victoria used ChatGPT to draft a report submitted to the Children's Court. The AI-generated report contained inaccuracies and downplayed risks to the child, and its preparation resulted in a privacy breach when sensitive information was shared with OpenAI.
Editor Notes: Reconstructing the timeline of events: Between July and December 2023, according to reporting, nearly 900 employees of Victoria's Department of Families, Fairness and Housing (DFFH), representing 13% of the workforce, accessed ChatGPT. In early 2024, a case worker used ChatGPT to draft a child protection report submitted to the Children's Court. The report contained significant inaccuracies, including misrepresented personal details and a downplaying of risks to the child, whose parents had been charged with sexual offenses. Following this incident, an internal review of the case worker's unit found that over 100 other cases showed signs of potential AI involvement in drafting child protection documents. On September 24, 2024, the department was instructed to ban the use of public generative AI tools and to notify staff accordingly, but the Office of the Victorian Information Commissioner (OVIC) found that this directive had not been fully implemented. The next day, September 25, 2024, OVIC released its investigation findings, confirming the inaccuracies in the ChatGPT-generated report and outlining the risks associated with AI use in child protection cases. OVIC issued a compliance notice requiring DFFH to block access to generative AI tools by November 5, 2024.
Alleged: OpenAI developed an AI system deployed by Department of Families Fairness and Housing, Government of Victoria and Employee of Department of Families Fairness and Housing, which harmed Unnamed child and Unnamed family of child.
Incident Status
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline