Description: Large language models are reportedly hallucinating software package names, some of which have been uploaded to public repositories and integrated into real code. One such package, huggingface-cli, was downloaded over 15,000 times. This behavior enables "slopsquatting," a term coined by Seth Michael Larson of the Python Software Foundation for attacks in which adversaries register fake packages under AI-invented names, putting software supply chains at serious risk.
Editor Notes: See Bar Lanyado's report at https://www.lasso.security/blog/ai-package-hallucinations. See Spracklen et al.'s preprint, "We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs," at https://arxiv.org/abs/2406.10279. See Zhou et al.'s study, "Larger and more instructable language models become less reliable," at https://doi.org/10.1038/s41586-024-07930-y.
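Purely as an illustration of the defense this incident suggests, and not something described in the reports above: a minimal sketch of a pre-install check that queries PyPI's public JSON metadata endpoint (https://pypi.org/pypi/<name>/json) to confirm each LLM-suggested dependency actually exists, then applies a few illustrative heuristics (release count, project URLs, summary text) to flag thin-looking packages. The script and its heuristics are assumptions for demonstration, not a vetting standard.

```python
import json
import sys
import urllib.request
from urllib.error import HTTPError

# Real PyPI metadata endpoint; everything else in this script is illustrative.
PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"


def check_package(name: str) -> None:
    """Report whether `name` exists on PyPI and flag thin-looking metadata."""
    url = PYPI_JSON_API.format(name=name)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            # A name an LLM suggested but PyPI has never seen: exactly the
            # gap a slopsquatter can fill by registering it.
            print(f"{name}: NOT on PyPI (unregistered name; do not install blindly)")
            return
        raise

    info = meta["info"]
    releases = meta.get("releases", {})

    # Heuristics only (assumptions, not a vetting standard): a legitimate new
    # package can look thin, and a squatted package can fake these fields.
    warnings = []
    if len(releases) <= 1:
        warnings.append("only one release")
    if not info.get("home_page") and not info.get("project_urls"):
        warnings.append("no project URLs")
    if not (info.get("summary") or "").strip():
        warnings.append("empty summary")

    verdict = f"exists but looks thin ({', '.join(warnings)})" if warnings else "exists"
    print(f"{name}: {verdict}")


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        check_package(pkg)
```

Run as, e.g., `python check_deps.py huggingface-cli flask` (the filename is hypothetical). Note the limitation this incident itself demonstrates: once someone registers a hallucinated name, as Lanyado did with huggingface-cli as a proof of concept, the package "exists," so an existence check alone cannot distinguish a squatted package from a legitimate one.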
Entities
Alleged: OpenAI, Google, Cohere, Meta, DeepSeek AI, and BigScience developed an AI system deployed by Developers using AI-generated suggestions and Bar Lanyado, which harmed Developers and businesses incorporating AI-suggested packages, Alibaba, Organizations that incorporated fake dependencies, Software ecosystems, Users downstream of software contaminated by hallucinated packages, and Trust in open-source repositories and AI-assisted coding tools.
Alleged AI systems involved: LLM-powered coding assistants, ChatGPT 3.5, ChatGPT 4, Gemini Pro, Command, LLaMA, CodeLlama, DeepSeek Coder, BLOOM, Python Package Index (PyPI), npm (Node.js), GitHub, and Google Search / AI Overview
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
2.2. AI system security vulnerabilities and attacks
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional