AI Incident Database

Incident 1073: $31,000 Sanction in Lacey v. State Farm Tied to Purportedly Undisclosed Use of LLMs and Erroneous Citations

Description: In the case of Lacey v. State Farm, two law firms were sanctioned $31,000 after submitting a legal brief containing reportedly erroneous citations generated using AI tools. The court reportedly found that the lawyers failed to disclose the use of AI, neglected to verify its output, and refiled a revised brief with additional inaccuracies. Judge Michael Wilner deemed the conduct reckless and issued sanctions for what he described as "improper" and "misleading" legal filings.

Entities

Alleged: Unnamed large language model developer developed an AI system deployed by K&L Gates LLP and Ellis George LLP, which harmed K&L Gates LLP, Ellis George LLP, Michael Wilner, Judicial process integrity, and Defense counsel in Lacey v. State Farm.
Alleged implicated AI system: Unknown large language model

Incident Stats

Incident ID: 1073
Report Count: 2
Incident Date: 2025-04-15
Editors: Dummy Dummy

Incident Reports


AI Hallucination in Filings Involving 14th-Largest U.S. Law Firm Lead to $31K in Sanctions
reason.com · 2025

I should note up front that both of the firms involved (the massive 1700-lawyer national one and the smaller 45-lawyer predominantly California one) have, to my knowledge, excellent reputations, and the error is not at all characteristic of…

Judge Sanctions Law Firms $31,000 for Error-Filled AI-Generated Brief
webpronews.com · 2025

A judge in California has imposed sanctions to law firms that relied on AI for case research, resulting in an error-filled brief.

In the case of Lacey v. State Farm, Judge Michael Wilner (serving as Special Master in the case) took the two …

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity


Defamation via AutoComplete
Apr 2011 · 28 reports

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction
May 2016 · 22 reports

Cruise’s Self-Driving Car Involved in a Multiple-Injury Collision at a San Francisco Intersection
Jun 2022 · 7 reports


2023 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • e4ae132