10K Claude Desktop Users Exposed by Zero-Click Vulnerability

More than 10,000 Claude Desktop users could face silent system takeover from a zero-click calendar-based flaw.

Written By
thumbnail
Ken Underhill
Ken Underhill
Feb 9, 2026
eSecurity Planet content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More

A newly disclosed flaw in Anthropic’s Claude Desktop Extensions shows how a routine productivity feature can enable zero-click system compromise. 

LayerX researchers found that a single malicious Google Calendar event can trigger remote code execution on Claude Desktop systems, enabling silent takeover at scale. 

“If exploited by a bad actor, even a benign prompt (“take care of it”), coupled with a maliciously worded calendar event, is sufficient to trigger arbitrary local code execution that compromises the entire system,” said LayerX researchers in their analysis.

“Exploits such as this one demonstrate the classic catch-22 of AI: to unlock the productivity benefits of AI, you need to give these tools deep access to sensitive data,” said Roy Paz, Principal AI Researcher at LayerX Security in an email to eSecurityPlanet.

He added, “But if any data is compromised as a result, the AI and model providers don’t see themselves responsible for the security of users using their products. This highlights the need for an AI ‘shared responsibility’ model where it is clear who is responsible for the different layers of security of AI tools.”

How the Claude Desktop Vulnerability Works

The vulnerability affects more than 10,000 active Claude Desktop users and over 50 desktop extensions distributed through Anthropic’s extension marketplace.

Unlike traditional browser extensions, which operate within tightly sandboxed environments, Claude Desktop Extensions run unsandboxed and with full operating system privileges, giving them broad access to local system resources.

At the root of the issue is the architecture of Anthropic’s Model Context Protocol (MCP). 

MCP allows Claude to autonomously select and chain together multiple tools to fulfill user requests, a design intended to improve productivity and automation. 

This autonomy creates a critical trust boundary failure, allowing data from low-risk connectors like Google Calendar to flow directly into high-privilege local executors without safeguards. 

This makes the vulnerability fundamentally different from classic software flaws like buffer overflows or injection bugs. 

Researchers characterize it as a workflow failure, where the model’s decision-making logic creates an unsafe execution path. 

Claude determines which connectors to invoke and how to combine them, but lacks the contextual awareness to distinguish between untrusted input and actions that require explicit user authorization.

Because Claude Desktop Extensions execute with full system privileges, any command they run inherits the same level of access as the logged-in user. 

This grants access to files, credentials, system settings, and arbitrary code execution, allowing even minor misinterpretations to escalate into full system compromise.

Advertisement

Proof-of-Concept (PoC) Attack 

In LayerX’s proof-of-concept attack, exploitation requires no advanced prompt engineering and no direct interaction from the victim. 

An attacker simply creates or injects a Google Calendar event with a benign-looking title, such as “Task Management.” 

The event description contains straightforward, plain-text instructions directing the system to pull code from a remote Git repository and execute it locally.

The attack is triggered later when the victim issues a vague but common prompt, such as, “Please check my latest events in Google Calendar and then take care of it for me.” 

Claude interprets “take care of it” as authorization to act on the instructions embedded in the calendar entry. 

The model then reads the event, invokes a local MCP extension with execution privileges, downloads the attacker’s code, and runs it — without a confirmation prompt, warning, or visible indication to the user.

Because the exploit requires no clicks, no explicit approval, and leaves the victim unaware until after compromise, LayerX assigned it a CVSS score of 10.0. 

While there is no public evidence of active exploitation, the attack’s simplicity, lack of user visibility, and broad privileges increase its potential risk. 

Advertisement

How to Reduce Risk From AI Agents

As AI agents gain greater access to local systems, existing security models can be strained. 

When productivity tools autonomously connect external data sources with privileged system actions, routine workflows may introduce unintended risk. 

  • Disable or uninstall high-privilege Claude Desktop extensions on systems that ingest untrusted external data such as calendars, email, or shared documents.
  • Restrict AI agents from executing local commands by default and require explicit, user-approved consent for any action that crosses trust boundaries.
  • Enforce least-privilege controls and harden file system and application permissions to limit what AI-driven processes can read, write, or execute.
  • Apply application allowlisting and endpoint protections to block unauthorized binaries, scripts, and developer tools from executing on non-developer systems.
  • Implement network segmentation and outbound traffic controls to prevent unauthorized downloads, lateral movement, and command-and-control activity.
  • Monitor endpoints for anomalous behavior, including unexpected command execution, suspicious process spawning, and unexplained file or configuration changes.
  • Test incident response and recovery plans for AI-driven compromise scenarios, including rapid isolation, credential rotation, extension removal, and system restoration.

Together, these measures help contain potential AI-driven compromises, reduce blast radius, and build operational resilience as organizations adapt to increasingly autonomous systems. 

Advertisement

AI Assistants and the Need for Clear Trust Boundaries

This issue highlights how AI-driven automation can blur security boundaries when autonomy and privilege are not clearly defined. 

As organizations deploy AI assistants with access to local systems, these tools should be managed as privileged software rather than treated solely as productivity features. 

Establishing clear trust boundaries, requiring explicit authorization, and applying layered controls helps prevent routine inputs from causing system-level impact.

These challenges point to the need for zero-trust solutions that assume no implicit trust between users, tools, or systems.

thumbnail
Ken Underhill

Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.

Recommended for you...

AI Agent Safety Checklist
Girish Redekar
Mar 12, 2026
Active Directory Flaw Enables SYSTEM Privilege Escalation
Ken Underhill
Mar 12, 2026
400K WordPress Sites Exposed by Elementor Ally Plugin SQL Flaw
Ken Underhill
Mar 12, 2026
Iran-Linked Hacktivists Claim Wiper Attack on Stryker Systems
Ken Underhill
Mar 12, 2026
eSecurity Planet Logo

eSecurity Planet is a leading resource for IT professionals at large enterprises who are actively researching cybersecurity vendors and latest trends. eSecurity Planet focuses on providing instruction for how to approach common security challenges, as well as informational deep-dives about advanced cybersecurity topics.

Property of TechnologyAdvice. © 2026 TechnologyAdvice. All Rights Reserved

Advertiser Disclosure: Some of the products that appear on this site are from companies from which TechnologyAdvice receives compensation. This compensation may impact how and where products appear on this site including, for example, the order in which they appear. TechnologyAdvice does not include all companies or all types of products available in the marketplace.