The AI Tool You Trust Just Became a Supply Chain Weapon
Security researchers at OX Security just dropped a bombshell that should make every developer using AI coding assistants stop and think. A systemic vulnerability baked into Anthropic's Model Context Protocol — the standard that connects AI agents to your tools — can execute arbitrary commands on your machine. No authentication needed. No user interaction required in some cases.
And Anthropic's response? "Expected behaviour."
What Actually Happened
OX Security found that MCP's STDIO transport interface, designed to spawn local server processes, will execute any command passed to it regardless of whether the process starts successfully. Pass in a malicious command, get an error back, and the command still runs on your system.
This is not a bug in one implementation. It is an architectural design decision baked into Anthropic's official MCP SDKs across Python, TypeScript, Java, and Rust. Every developer building on MCP unknowingly inherits this exposure.
The numbers are staggering: over 200 open source projects affected, 150 million cumulative downloads, 7,000 publicly accessible servers, and up to 200,000 vulnerable instances in total.
Your Favourite AI Coding Tool Is Vulnerable
The research names the tools you probably use daily. Cursor, VS Code, Windsurf, Claude Code, and Gemini-CLI are all vulnerable to command injection via their MCP JSON configuration.
Windsurf got the worst of it. CVE-2026-30615 showed that when Windsurf processes attacker-controlled HTML content, malicious instructions can silently modify your local MCP configuration and register a rogue server. The result: arbitrary commands running on your machine with zero clicks from you.



