Incident 843: Generative AI Plagiarism Incident at Hingham High School Reportedly Tied to Inaccurate Citation Outputs from Grammarly AI
Description: In December 2023, two Hingham High School students ("RNH" and an unnamed student) reportedly used Grammarly to create a script for an AP U.S. History project. The AI-generated text included fabricated citations to nonexistent books, which the students copied and pasted without verification or acknowledgment of AI use. This violated the school's academic integrity policies, leading to disciplinary action. RNH's parents later sued the school district, but a federal court ruled in favor of the school.
Editor Notes: The incident itself occurred sometime in December 2023. The court ruling was published on November 20, 2024. It can be read here: https://fingfx.thomsonreuters.com/gfx/legaldocs/lbvgjjqnkpq/11212024ai_ma.pdf.
Entities
Alleged: Grammarly developed an AI system deployed by Hingham High School students and Hingham High School student RNH, which harmed Hingham High School students, Hingham High School student RNH, Hingham High School, and Academic integrity.
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
5.1. Overreliance and unsafe use
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Human-Computer Interaction
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline

The parents of a Massachusetts high school senior who used artificial intelligence for a social studies project filed a lawsuit against his teachers and the school after their son was punished…

A federal court ruled yesterday against the parents who sued a Massachusetts school district for punishing their son, who used an artificial intelligence tool to complete an assignment.
Dale and Jennifer Harris sued…
Variants
A "Variant" is an AI incident similar to a known case: it shares the same causative factors, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Have you seen something similar?
Similar Incidents
Defamation via AutoComplete
· 28 reports