INCIDENT·14 min read·Pavel

One compromised scanner, three hacked projects, 100 million downloads poisoned.

LiteLLM supply chain attack traced from Trivy to KICS to PyPI. 100 million monthly downloads compromised. Full attack chain analysis with timeline.

I expected the Trivy compromise to stay contained. A vulnerability scanner gets hacked, credentials get rotated, everyone moves on. We wrote about the first wave when hackerbot-claw hit seven repos in a week. That felt bad enough.

What I didn't expect was a five-day cascade that ended with the most popular LLM proxy on PyPI stealing SSH keys from every machine it touched.

Between March 19 and March 24, 2026, a threat actor called TeamPCP executed a supply chain attack that chained three separate compromises into one of the worst credential theft campaigns the AI tooling ecosystem has seen. The attack moved from Aqua Security's Trivy scanner to Checkmarx's KICS GitHub Action to BerriAI's LiteLLM, a package with roughly 100 million monthly downloads and deep transitive presence in CrewAI, DSPy, MLflow, and hundreds of MCP servers.

Here's how it happened, what it stole, and what the rest of us should take from it.

The five-day chain: Trivy to KICS to LiteLLM

March 19. TeamPCP compromises Aqua Security's Trivy vulnerability scanner through a GitHub Actions tag swap. Mutable tags let them replace a trusted release with a malicious one. This wasn't the first time Trivy got hit. The initial repo compromise happened in February. The March 19 attack was the second successful breach from the same actor. Credential rotation after the first one didn't fully revoke access.

March 22. The attackers use their Trivy access to compromise Aqua's DockerHub account, publishing malicious Trivy images (v0.69.5 and v0.69.6). Three breaches, same actor, same root cause.

March 23. Using credentials harvested from Trivy's CI/CD environment, TeamPCP compromises Checkmarx's KICS GitHub Action. KICS runs in thousands of CI pipelines. Now those pipelines are leaking secrets too.

March 24. The KICS compromise gives TeamPCP access to a LiteLLM maintainer's CircleCI environment. Inside that environment: a PyPI publishing token and a GitHub Personal Access Token. Both are scoped too broadly. The attackers publish two poisoned LiteLLM versions to PyPI: 1.82.7 and 1.82.8.

The whole thing took five days. One initial foothold in a security scanner turned into control over a package that gets downloaded 3.4 million times per day. Security researcher Rami McCarthy has been maintaining a detailed timeline of the full TeamPCP campaign if you want the commit-by-commit breakdown.

What the payload actually does

The 1.82.8 payload uses a .pth file. If you're not familiar with the mechanism: Python processes .pth files in site-packages on every interpreter startup, and any line that begins with import is executed as code. You don't need to import LiteLLM. You don't need to call it. If the package is installed, the payload runs.
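You can see the mechanism in about five lines. This is a minimal, harmless sketch, assuming python3 is on your PATH; the file name and message are illustrative, not the actual payload:

```shell
# Create a throwaway venv and drop a .pth file into its site-packages.
python3 -m venv /tmp/pth-demo
SITE=$(/tmp/pth-demo/bin/python -c "import site; print(site.getsitepackages()[0])")
# Any line in a .pth file that starts with "import" is executed at startup.
echo 'import sys; sys.stderr.write("pth code ran, no import needed\n")' > "$SITE/demo.pth"
# Run an unrelated one-liner -- the .pth line fires anyway.
/tmp/pth-demo/bin/python -c "print('hello')"
```

Run the last line and you get both outputs: the .pth message fires before your code does, with no import of anything anywhere.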

Version 1.82.7 hides the payload in proxy/proxy_server.py instead. That one triggers on import.

Once active, it collects everything it can find. SSH keys. AWS, GCP, and Azure credentials. Kubernetes configs. Crypto wallets. Git credentials. Shell history. SSL private keys. CI/CD secrets. Database connection strings. Environment variables, which means every API key you've ever set.

The stolen data gets encrypted with a randomly generated AES-256 key, which itself gets encrypted with a hardcoded RSA public key. The archive ships to models.litellm.cloud. Not litellm.ai. A lookalike domain built to blend into network logs.
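That scheme is standard hybrid encryption. Here's the shape of it with openssl, all filenames illustrative; the real payload ships only the attacker's public key, but we generate a keypair here so the example is self-contained:

```shell
# Stand-in for the attacker's RSA keypair (the payload ships only the public half).
openssl genrsa -out rsa_priv.pem 2048 2>/dev/null
openssl rsa -in rsa_priv.pem -pubout -out rsa_pub.pem 2>/dev/null
# Random AES-256 key, generated fresh per victim.
openssl rand -hex 32 > aes.key
# Encrypt the stolen archive with the AES key...
echo "pretend-ssh-private-key" > loot.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:aes.key -in loot.txt -out loot.enc
# ...then wrap the AES key with the RSA public key. Only the holder of the
# private key can unwrap it, so the victim can't recover what was taken.
openssl pkeyutl -encrypt -pubin -inkey rsa_pub.pem -in aes.key -out aes.key.enc
```

The design choice matters for incident response: even with the full payload in hand, defenders can't decrypt captured exfil traffic to assess what left the building.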

In Kubernetes environments, it goes further. It deploys privileged pods to every node for lateral movement and installs a persistent systemd backdoor.

The transitive dependency problem

This is where it gets worse.

LiteLLM isn't just a package people install directly. It's a transitive dependency. CrewAI uses it. DSPy uses it as its primary library for calling upstream LLM providers. MLflow emergency-pinned to version 1.82.6 within hours. Hundreds of MCP servers pull it in under the hood.

One developer at FutureSearch discovered the compromise only because Cursor IDE pulled LiteLLM through an MCP plugin. They never directly installed it. Their laptop ran out of RAM from what looked like a fork bomb, and when they investigated, they found the base64-encoded payload.

I spent two hours after reading this checking our own CI configs. That FutureSearch developer didn't install LiteLLM. Their tool did. Silently. Three layers deep. And they only caught it because the payload was noisy enough to crash their machine.

Why credential rotation failed twice

Aqua Security rotated credentials after the February compromise. TeamPCP walked back in on March 19. They rotated again. The attackers compromised DockerHub on March 22.

The LiteLLM maintainer showed up on HN within hours. His response was honest, which I respect: "I'm sorry for this." The Trivy compromise leaked their CircleCI credentials, which included both the PyPI publishing token and a GitHub PAT. The PyPI token was scoped to publishing, but the GitHub PAT had broader access. That explains the full account takeover: every personal repository on the maintainer's GitHub was edited to read "teampcp owns BerriAI." You can see the full timeline in GitHub issue #24518.

The question isn't why TeamPCP succeeded. The question is why the PyPI publishing token lived in the same CI environment as the vulnerability scanner. Separating those into different jobs with different credential scopes would have stopped the cascade at step one.

What this means for MCP

The transitive dependency problem hits MCP servers especially hard. Hundreds of MCP servers use LiteLLM under the hood for multi-provider routing. Unlike a regular Python application where you might notice unusual behavior, MCP servers often run as background processes with broad filesystem access and network permissions by default.

When we scanned 900 MCP configs on GitHub, the most common problem wasn't hardcoded credentials. It was overpermissioned tool access: servers running with full shell execution, unrestricted file reads, no sandboxing. A compromised LiteLLM inside one of these servers doesn't just steal credentials from the Python environment. It inherits whatever permissions the MCP server has, which in most configs means everything.

The FutureSearch developer who caught this got lucky because the fork bomb was noisy enough to crash their laptop. A quieter payload inside an MCP server with filesystem access could exfiltrate data for weeks without anyone noticing.

The bot army in the GitHub issue

Something else happened that deserves more than a footnote. Within hours of the vulnerability being reported in GitHub issue #24512, the thread filled with hundreds of bot comments. "Thanks, that helped!" and "Worked like a charm, much appreciated" and "This was the answer I was looking for." The same six phrases, repeated by what appear to be previously compromised GitHub accounts. The Trivy GitHub discussion got identical treatment weeks earlier.

The purpose is straightforward: drown out real discussion. Make it harder for affected users to find remediation advice in the thread where they'd naturally look for it.

But think about what this means for cascade timing. If the attacker can slow down incident response by even a few hours, that's a few more hours of compromised packages being downloaded. At 3.4 million downloads per day, every hour of delayed response means roughly 140,000 additional potentially affected installations. The bot spam isn't a side effect. It's part of the attack. Suppress the signal, extend the window, maximize the damage.

The cascade model is the new threat

Actually, wait. I keep calling this a "supply chain attack" but that undersells what happened. Traditional supply chain attacks target one package. This is closer to what epidemiologists call a superspreader event. One infected node passes it to a small number of high-connectivity nodes, and those nodes pass it to thousands.

The SolarWinds attack in 2020 worked the same way. Compromise the build system, not the product. But SolarWinds was a sophisticated state actor spending months inside the build pipeline. TeamPCP did it in five days with stolen credentials and mutable git tags. The barrier to entry for this class of attack just dropped by an order of magnitude.

The old model was: find a popular package, compromise it, profit. The new model is: find any package in the dependency graph of a popular package, compromise it, ride the graph.

TeamPCP didn't need to find a vulnerability in LiteLLM. They found one in Trivy, rode it to KICS, rode KICS to LiteLLM's CI, and rode that to 100 million monthly downloads. Three hops. Five days. The attack surface isn't the package. It's the graph.

That changes what "supply chain security" means. Auditing your direct dependencies isn't enough anymore. You need to audit the CI/CD environments that build those dependencies, and the tools those environments use, and the dependencies of those tools. It's graphs all the way down.

What actually helps

CrewAI dropped LiteLLM the same day, pushing native SDK integrations for OpenAI, Anthropic, Google, Azure, and Bedrock. Their message: fewer packages, fewer supply chain risks. DSPy opened an issue. MLflow pinned to the last safe version.

For everyone else, the immediate steps are straightforward. Check if LiteLLM 1.82.7 or 1.82.8 is anywhere in your environment, including inside virtual environments and containers you might have forgotten about. Search for litellm_init.pth on disk. Check for the systemd backdoor at ~/.config/sysmon/. If you find anything, rotate every credential on that machine. SSH keys, cloud IAM, API keys, database passwords. Everything.
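Those checks fit in a short triage script. A sketch for a single machine, using the version strings and paths from this incident; run it inside each venv and container too, and widen the find to / for a full-disk sweep:

```shell
# Is a known-bad LiteLLM version installed in this environment?
ver=$(python3 -m pip show litellm 2>/dev/null | awk '/^Version:/{print $2}')
case "$ver" in
  1.82.7|1.82.8) echo "COMPROMISED version installed: $ver" ;;
  "")            echo "litellm not installed in this environment" ;;
  *)             echo "installed: $ver (not a known-bad release)" ;;
esac
# Check for the systemd backdoor directory.
ls "$HOME/.config/sysmon/" 2>/dev/null && echo "backdoor directory present"
# Look for the .pth payload (widen to / for a full sweep).
find "$HOME" -name "litellm_init.pth" 2>/dev/null
```

If any of the last two checks return anything, treat the machine as fully compromised and start rotating.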

The longer-term lesson is about CI/CD architecture. Vulnerability scanners should not run in the same job that has publishing credentials. This isn't a novel insight. It's the principle of least privilege applied to CI pipelines. But it's also not the default in most setups, and that's the actual problem. The tooling to prevent this has existed for years. GitHub has immutable releases. PyPI supports OIDC-based trusted publishing that eliminates stored tokens entirely. Almost nobody uses either.
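For comparison, here's roughly what trusted publishing looks like in a GitHub Actions workflow. This is a sketch, not BerriAI's actual config; it assumes the project has registered a trusted publisher on PyPI:

```yaml
# publish.yml -- no stored PyPI token anywhere in the repo or CI settings
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets this job mint a short-lived OIDC token
    steps:
      - uses: actions/checkout@v4
      - run: python -m build
      # Exchanges the OIDC token for a scoped, expiring upload credential.
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The scanner would run in a separate job with no id-token permission at all. That separation alone would have broken this cascade at step one: there's no long-lived token sitting in the environment for a compromised scanner to steal.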

One thing that would have caught this earlier: network-level anomaly detection. The payload exfiltrated to models.litellm.cloud, a domain that didn't exist before the attack. Any proxy or gateway monitoring outbound traffic from CI/CD or development environments would have flagged an unknown destination receiving encrypted archives. Dependency auditing catches known bad versions. Network monitoring catches unknown bad behavior. You need both.

For MCP specifically, an open-source scanner that runs locally and checks configs against known vulnerability patterns is one layer. We're also tracking this and similar incidents in an open database to make the pattern visible across the ecosystem.

Questions people ask

Is LiteLLM safe to use now?

Versions 1.82.7 and 1.82.8 have been removed from PyPI. The package was quarantined and then restored after cleanup. Versions 1.82.6 and earlier were not affected; if you're pinned there, verify checksums to confirm you have the genuine artifact. The maintainer is working with Google's Mandiant security team on a full scope assessment.

How do I check if I'm affected?

Run pip show litellm | grep Version to check your installed version. Search for the payload file with find / -name "litellm_init.pth" 2>/dev/null. Check for the backdoor at ~/.config/sysmon/sysmon.py. In Kubernetes, look for unexpected pods in the kube-system namespace.

Were Docker deployments affected?

The LiteLLM proxy Docker images were not affected because they pin dependency versions in requirements.txt. The PyPI package compromise affected pip installations only. However, the earlier Trivy DockerHub compromise (v0.69.5 and v0.69.6) did affect Docker users of Trivy specifically.

What other projects depend on LiteLLM?

Major downstream consumers include CrewAI, DSPy, MLflow, various LangChain community packages, and hundreds of MCP servers. CrewAI has already removed the dependency. DSPy and MLflow have pinned to safe versions. Check your own dependency tree with pip show litellm or search your lockfiles.

How can I prevent this kind of attack?

Use OIDC-based trusted publishing instead of stored PyPI tokens. Separate CI jobs so vulnerability scanners never share an environment with publishing credentials. Pin dependencies with lockfiles and verify checksums. Audit transitive dependencies, not just direct ones. Run third-party tools in sandboxed environments with minimal filesystem access.
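The tag swap that started this whole chain is also the cheapest thing on that list to close off: pin third-party actions to a full commit SHA instead of a mutable tag. A sketch (the SHA below is illustrative, not a real release):

```yaml
# Mutable -- a compromised maintainer account can silently repoint this tag:
- uses: aquasecurity/trivy-action@master
# Pinned -- changing what runs requires a visible diff in your repo:
- uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2  # illustrative SHA
```

Dependabot and Renovate can both keep SHA-pinned actions up to date, so the usual objection, that pinning means stale tooling, doesn't really hold anymore.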


Related:
- We scanned 900 MCP configs. 75% had security problems.
- An AI agent compromised 7 repos in one week.
- I left my AI agent running overnight.
- Why your AI agent can't detect its own compromise.

Run the scanner yourself: orchesis.ai/scan

Open source · MIT License
