Software Supply Chain at Risk from AI Hallucinations

A newly uncovered vulnerability in generative AI tools is introducing a dangerous twist to software supply chain security—one that experts warn could be easily exploited by attackers. Known as “package hallucination,” this flaw occurs when Large Language Models (LLMs) invent software packages that don’t actually exist. Developers using these tools may unknowingly copy the hallucinated names into their code, making room for bad actors to slip in malware.

Researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma have labeled this emerging threat “slopsquatting.” Their study shows that threat actors can register the fake packages invented by LLMs and weaponize them. If even one developer installs the malicious version of a hallucinated package, their project—and potentially their entire organization—could be compromised.

While package confusion attacks have been around for years, this AI-driven variation opens a new front in the battle for supply chain security. The researchers’ findings are alarming: not one of the 16 LLMs tested was free of hallucinations. Across 576,000 AI-generated code samples, a staggering 19.7% of all suggested packages were hallucinated, amounting to more than 205,000 unique package names that simply do not exist.

Commercial LLMs showed hallucination rates of around 5.2%, while open-source models reached as high as 21.7%. Even more troubling, the issue appears to be systemic: more than half of the hallucinated packages recurred when the same prompts were repeated ten times, revealing persistent, repeatable errors rather than random noise.

These hallucinated package names act as breadcrumbs for attackers. All they have to do is publish a malicious version of a non-existent library with the AI-generated name. Once released, these poisoned packages can spread rapidly—especially as more developers rely on LLMs to automate routine coding tasks. And because these AI tools sound authoritative, users often trust the outputs without question.

Interestingly, the study also found a silver lining: most LLMs could identify their own hallucinated packages. This opens the door to potential safeguards. The researchers suggest solutions like Retrieval Augmented Generation (RAG), prompt tuning, and self-refinement, which can help steer models toward factual and validated recommendations. At the development level, approaches like decoding strategy adjustments and supervised fine-tuning might reduce hallucination rates even further.
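To make the self-refinement idea concrete, the sketch below shows one simplistic way a team might apply it: feed the model’s own dependency suggestions back to it and discard any name it cannot vouch for. This is an illustration of the concept rather than the researchers’ implementation, and `ask_llm` is a hypothetical stand-in for whatever LLM client a team actually uses; because the check relies on the same model, it is only a partial safeguard.

```python
"""
Illustrative sketch of self-refinement: after an LLM suggests dependencies,
ask the model to flag names it cannot confirm exist, and drop those before
they ever reach requirements.txt. `ask_llm` is a hypothetical stand-in, not
a real client API.
"""
from typing import Callable, List


def self_check_packages(suggested: List[str], ask_llm: Callable[[str], str]) -> List[str]:
    """Return only the packages the model itself is willing to vouch for."""
    vetted = []
    for name in suggested:
        answer = ask_llm(
            f"Is '{name}' a real, published Python package on PyPI? "
            "Answer strictly YES or NO."
        )
        if answer.strip().upper().startswith("YES"):
            vetted.append(name)
        else:
            print(f"Dropping '{name}': model could not confirm it exists")
    return vetted


if __name__ == "__main__":
    # Dummy model that only "knows" two packages; the third name is a
    # hypothetical hallucination used purely for illustration.
    known = {"requests", "numpy"}

    def dummy_llm(prompt: str) -> str:
        return "YES" if any(k in prompt for k in known) else "NO"

    print(self_check_packages(["requests", "graphql-auto-schema", "numpy"], dummy_llm))
```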

But the implications are clear: as generative AI continues to shape modern development workflows, its flaws are being absorbed into the supply chain. And with GenAI tools growing in popularity, the speed at which threats evolve is accelerating too.

For developers, this means extra vigilance is now critical. Cross-checking recommended packages against the official registry, using package verification tools, and staying up to date with model updates and security advisories may be the only line of defense between clean code and compromise.
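As a minimal sketch of that kind of cross-check (not one of the verification tools mentioned above), the snippet below queries PyPI’s public JSON API to confirm a suggested package actually exists before anyone runs an install command; the second package name in the example is a hypothetical hallucination.

```python
"""
Minimal pre-install sanity check: confirm LLM-suggested package names exist
on PyPI before installing them. Illustrative only; the names passed in at the
bottom are hypothetical examples of LLM output.
"""
import requests

PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint


def check_package(name: str) -> None:
    """Query PyPI and report whether the package exists and how mature it looks."""
    resp = requests.get(PYPI_JSON_URL.format(name=name), timeout=10)
    if resp.status_code == 404:
        print(f"[MISSING] '{name}' is not on PyPI -- possible hallucination, do not install")
        return
    resp.raise_for_status()
    data = resp.json()
    releases = data.get("releases", {})
    print(f"[FOUND]   '{name}': {len(releases)} release(s), "
          f"summary: {data['info'].get('summary', 'n/a')!r}")


if __name__ == "__main__":
    # One real package, one invented name of the kind an LLM might hallucinate.
    for suggested in ["requests", "fastjson-parser-utils"]:
        check_package(suggested)
```

A real pre-install gate would also weigh release history, maintainers, and download counts, since an attacker can register a hallucinated name the moment it starts circulating.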
