AI-driven programming is revolutionizing how developers work, but it is also opening doors to new security threats. One such risk is slopsquatting, a technique in which attackers exploit incorrect package suggestions from AI models. These models sometimes recommend packages that don't exist, and attackers then register those names on repositories like PyPI or npm, embedding malicious code. The tactic mirrors typosquatting, where attackers register misspelled variants of popular library names to catch users' typing mistakes; slopsquatting instead leverages AI "hallucinations".
A recent experiment by Socket Dev tested 16 AI systems, including GPT-4, DeepSeek, and Mistral, analyzing over 576,000 code snippets. Shockingly, 19.7% of the suggested packages didn't exist, a perfect opportunity for attackers. The rise of vibe coding, where developers rely heavily on AI suggestions with minimal manual validation, only amplifies this vulnerability.
As AI tools become integral to coding, developers must stay vigilant. Double-checking that an AI-suggested package actually exists on the official registry, and that it is the well-known project it appears to be, can help mitigate the risks of slopsquatting and keep malicious code at bay.
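One lightweight way to put that double-checking into practice is to vet AI-suggested dependencies against a reviewed allowlist (for example, names already present in an audited requirements file) before running any install command. The sketch below is illustrative, not a real tool: the `APPROVED` set, the `vet_suggestions` helper, and the package name `fastparse-utils` are all hypothetical, with the made-up name standing in for a hallucinated suggestion.

```python
# Illustrative sketch: flag AI-suggested package names that are not on a
# reviewed allowlist, so they get manual scrutiny before `pip install`.

# Hypothetical allowlist, e.g. parsed from an audited requirements.txt.
APPROVED = {"requests", "numpy", "flask"}

def vet_suggestions(suggested: list[str]) -> dict[str, list[str]]:
    """Split suggested package names into known-good and needs-review.

    An allowlisted name is no guarantee of safety on its own, but an
    unknown name is exactly the signal slopsquatting relies on
    developers ignoring.
    """
    known = [p for p in suggested if p.lower() in APPROVED]
    unknown = [p for p in suggested if p.lower() not in APPROVED]
    return {"approved": known, "needs_review": unknown}

# "fastparse-utils" is a made-up name standing in for a hallucination.
result = vet_suggestions(["requests", "fastparse-utils"])
print(result)  # anything under "needs_review" should be checked by hand
```

A natural next step, left out here for brevity, is to look up each flagged name on the registry itself (PyPI exposes per-project metadata over HTTP) and inspect its age, download counts, and maintainers before trusting it.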


