A new type of supply chain attack, known as slopsquatting, has emerged due to the growing use of generative AI tools in software development. The attack exploits the tendency of AI models to generate code that references non-existent package names.
Security researcher Seth Larson coined the term slopsquatting as a play on typosquatting, a technique in which attackers publish malicious packages under names that closely resemble legitimate ones to trick developers.
Slopsquatting, however, does not rely on spelling mistakes. Instead, it takes advantage of fictional package names that AI tools invent in code examples. Models such as ChatGPT, CodeLlama, DeepSeek, WizardCoder, and Mistral hallucinate these names, and attackers then publish malicious packages under them.
In March 2025, a study that examined more than 576,000 AI-generated Python and JavaScript code samples found that almost 20% of them referenced packages that do not exist. Even commercial tools like ChatGPT-4 hallucinated packages in around 5% of cases, while open-source models like CodeLlama, DeepSeek, WizardCoder, and Mistral showed higher error rates.
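As a rough illustration of the kind of check that catches these hallucinations, the Python sketch below asks PyPI's public JSON API whether a suggested package name actually exists. The names in the example are placeholders, and existence alone is not proof of safety, since an attacker may already have registered a commonly hallucinated name.

```python
# Minimal sketch: flag AI-suggested package names that do not resolve on PyPI.
# Uses only the Python standard library; the names in `suggested` are
# illustrative placeholders, not real recommendations.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI's JSON API knows the package, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limiting, outages) need a human look

suggested = ["requests", "fastjson-utils-pro"]  # hypothetical LLM suggestions
for pkg in suggested:
    verdict = "exists" if exists_on_pypi(pkg) else "NOT FOUND - possible hallucination"
    print(f"{pkg}: {verdict}")
```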
Understanding the Slopsquatting Attack Vector
In one real-world example, researchers found the malicious package ccxt-mexc-futures on the Python Package Index (PyPI). It claimed to be an add-on for ccxt, a widely used cryptocurrency trading library, but was in fact designed to hijack trading activity on the MEXC exchange by redirecting orders to a malicious server, enabling token theft.
The malicious code specifically altered three core functions—describe, sign, and prepare_request_headers—within the original ccxt framework. This allowed it to execute arbitrary code on the user’s machine by retrieving a configuration file from a fake MEXC domain (v3.mexc.workers[.]dev), which then rerouted trading commands to a rogue platform at greentreeone[.]com.
Though the package has since been removed from PyPI, it had already been downloaded over 1,000 times, highlighting the risk to developers who rely on large language models without verifying recommended packages.
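Registry metadata offers a quick, if crude, sanity check before adopting a package an LLM has recommended. The sketch below uses only the Python standard library and illustrative thresholds (90 days, three releases) to flag very new or sparsely released PyPI projects for manual review; it is a heuristic, not a verdict.

```python
# Minimal sketch: use basic PyPI metadata (project age and release count) as a
# quick warning signal before adopting an unfamiliar package. The thresholds
# below are illustrative assumptions, not a substitute for a proper review.
import json
import urllib.request
from datetime import datetime, timezone

def pypi_metadata(name: str) -> dict:
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
        return json.load(resp)

def quick_trust_signals(name: str) -> None:
    data = pypi_metadata(name)
    releases = data.get("releases", {})
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        print(f"{name}: no released files at all; treat with suspicion")
        return
    age_days = (datetime.now(timezone.utc) - min(upload_times)).days
    print(f"{name}: {len(releases)} releases, first published {age_days} days ago")
    if age_days < 90 or len(releases) < 3:
        print("  -> very new or sparsely released; review manually before installing")

quick_trust_signals("ccxt")  # the legitimate library impersonated in this example
```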
Socket security researchers discovered that attackers are uploading fake modules to every major open-source registry. Once installed, these malicious packages can open a reverse shell, giving an attacker persistent access and enabling them to steal sensitive data.
In testing reported by Socket, researchers recorded more than 200,000 unique hallucinated package names. Notably, 58% of these names resurfaced more than once across 10 identical prompts, and 43% reappeared in every run. A deeper analysis shows that names inspired by real packages account for 38% of the entries, simple typos account for 13%, and complete fabrications make up the remaining 51%.
Additionally, the software supply chain security company Socket said:
Unsuspecting developers or organizations might inadvertently be including vulnerabilities or malicious dependencies in their code base, which could allow for sensitive data theft or system sabotage if undetected.
Although no active campaigns using this method have been observed so far, Socket's researchers warn that these hallucinated names are predictable enough for attackers to register them pre-emptively.
Always verify package names in AI-generated code carefully, and never assume a suggested dependency is safe or genuine simply because an AI recommended it. Use hash verification, lock files, and dependency scanners to ensure every package is pinned to a known, stable version; a simple pre-install check is sketched below.
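One way to enforce that policy in practice is a small gate in CI. The sketch below assumes a pip-style requirements.txt with inline --hash entries, as produced by tools such as pip-tools, and fails the build whenever a dependency is unpinned or unhashed; running pip install --require-hashes -r requirements.txt then gives the same guarantee at install time.

```python
# Minimal sketch of a CI gate: fail the build if any entry in requirements.txt
# is not pinned to an exact version with an accompanying --hash. The file name
# and policy are assumptions; adapt them to your own lock-file tooling.
import re
import sys
from pathlib import Path

def logical_lines(text: str):
    """Join backslash-continued lines so multi-line --hash entries stay whole."""
    buf = ""
    for raw in text.splitlines():
        line = raw.strip()
        if line.endswith("\\"):
            buf += line[:-1] + " "
            continue
        yield buf + line
        buf = ""
    if buf:
        yield buf

def check_requirements(path: str = "requirements.txt") -> int:
    problems = []
    for entry in logical_lines(Path(path).read_text()):
        if not entry or entry.startswith("#") or entry.startswith("-"):
            continue  # skip blanks, comments, and pip options such as --index-url
        if not re.search(r"==\d", entry):
            problems.append(f"not pinned to an exact version: {entry}")
        if "--hash=" not in entry:
            problems.append(f"missing --hash for verification: {entry}")
    for problem in problems:
        print(problem)
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(check_requirements())
```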
Employ model-level strategies such as modified decoding techniques or supervised fine-tuning, along with prompt-engineering methods like Retrieval-Augmented Generation (RAG), self-refinement, and prompt programming, to reduce package hallucinations. It is essential that all developers receive training in AI security principles, particularly those relevant to software development. Limit library additions to an approved list of dependencies, ideally stored in your organisation's internal repository; one possible automated gate for such a list is sketched below.
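The sketch below parses AI-generated Python with the standard ast module and flags any top-level import that is neither part of the standard library nor on the approved list. The approved set, the sample snippet, and the module name ccxt_mexc_futures are assumptions made purely for illustration.

```python
# Minimal sketch of an allow-list gate for AI-generated code: extract top-level
# imports and reject anything outside the approved dependency list. APPROVED,
# the sample snippet, and ccxt_mexc_futures are illustrative assumptions.
import ast
import sys

APPROVED = {"requests", "numpy", "pandas", "ccxt"}  # e.g. mirrored in an internal repo
STDLIB = set(sys.stdlib_module_names)               # available in Python 3.10+

def imported_top_level_modules(source: str) -> set[str]:
    """Collect the top-level module names a piece of Python source imports."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    return modules

ai_snippet = """
import requests
from ccxt_mexc_futures import MexcFutures  # hallucinated / squatted dependency
"""

for module in sorted(imported_top_level_modules(ai_snippet) - STDLIB):
    verdict = "approved" if module in APPROVED else "BLOCKED - not on the approved list"
    print(f"{module}: {verdict}")
```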
Phishing Tackle provides training course videos that cover different types of artificial intelligence threats, designed to counter the ever-changing risks posed by AI. We offer a free 14-day trial to help train your users to avoid these types of attacks and to test their knowledge through simulated phishing campaigns.