Hundreds of Malicious Skills Found in OpenClaw’s ClawHub

Researchers found hundreds of malicious skills in OpenClaw’s ClawHub, revealing a coordinated AI supply chain attack.

Written By
thumbnail
Ken Underhill
Ken Underhill
Feb 3, 2026
eSecurity Planet content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More

A routine question about trust exposed a far more serious problem when researchers discovered hundreds of malicious skills hidden inside a widely used AI agent marketplace. 

Koi researchers analyzed ClawHub, the third-party skill repository for OpenClaw, and found that threat actors had quietly turned the ecosystem into a large-scale malware distribution channel.

We found “… 341 malicious skills – 335 of them from what appears to be a single campaign,“ said the researchers.

Inside the ClawHavoc Campaign

Koi Security conducted a comprehensive audit of all 2,857 skills available on ClawHub and identified 341 malicious entries. 

Of those, 335 were traced back to a single, coordinated operation now tracked as ClawHavoc

The campaign targeted both macOS and Windows systems, with a clear focus on users running OpenClaw continuously — often on dedicated, always-on machines such as Mac minis, which are commonly used to host AI agents.

Advertisement

How Malicious Skills Lured Users

To maximize reach and credibility, the attackers carefully disguised their malicious skills as popular and high-demand tools. 

These included cryptocurrency wallets and trackers, Polymarket trading bots, YouTube utilities, auto-updaters, and Google Workspace integrations.

Many of the skills also relied on typosquatting techniques, using names that closely resembled legitimate packages in order to capture accidental installs from users moving quickly through the marketplace.

Once a malicious skill was installed, the compromise hinged on a deceptively simple social engineering step. The skill’s documentation instructed users to install a required “prerequisite” before using any features. 

On Windows systems, this meant downloading a password-protected ZIP file from GitHub and executing its contents. 

On macOS, users were told to copy and paste a shell command from glot[.]io directly into the Terminal application.

That prerequisite step was the point of compromise.

Advertisement

Malware Delivery and Data Theft

The password-protected archive allowed the Windows payload to bypass automated antivirus scanning, while the macOS command decoded a base64-encoded script that fetched additional malware from attacker-controlled infrastructure. 

In both cases, the end result was the same: the delivery of a second-stage payload identified as Atomic macOS Stealer (AMOS), a commodity information stealer sold as malware-as-a-service for approximately $500 to $1,000 per month.

AMOS is capable of harvesting a wide range of sensitive data, including browser credentials, keychain passwords, cryptocurrency wallet information, SSH keys, and files from common user directories. 

In environments running AI agents, this exposure is particularly severe, as it can also include API keys, authentication tokens, and other secrets that the bot itself is authorized to access.

Advertisement

Additional Techniques and Outlier Attacks

While ClawHavoc accounted for the majority of malicious skills, researchers also identified several outliers that used different, and in some cases more covert, techniques. 

Some skills embedded reverse shell backdoors directly into otherwise functional code, triggering compromise during normal use rather than at installation time. 

Others quietly exfiltrated OpenClaw bot credentials from configuration files such as ~/.clawdbot/.env to external webhook services.

In one notable example, a skill masquerading as a legitimate Polymarket tool executed a hidden command that opened an interactive shell back to the attacker’s server. 

This granted the attacker full remote control over the victim’s system, allowing them to execute arbitrary commands, deploy additional malware, or establish long-term persistence without the user’s knowledge.

Advertisement

Reducing AI Supply Chain Risk

As AI agents become more deeply integrated into daily workflows, they introduce a new and often underappreciated attack surface for organizations. 

Incidents like ClawHavoc demonstrate how third-party skills and extensions can be abused to compromise systems, credentials, and sensitive data at scale. 

Mitigating these risks requires more than basic endpoint security — it requires controls tailored to how AI agents operate, update, and interact with external services. 

  • Audit and allowlist AI skills before installation, avoiding public marketplaces where possible and removing any skills that require external prerequisites or copy-and-paste scripts.
  • Run AI agents in isolated, sandboxed, or ephemeral environments to limit filesystem, credential, and network access if compromise occurs.
  • Restrict bot permissions using least-privilege principles and store credentials in secure secrets managers with regular key rotation.
  • Implement outbound network controls and monitoring to detect or block unauthorized connections to attacker-controlled infrastructure.
  • Monitor for suspicious behaviors such as unexpected process execution, credential file access, reverse shells, or unauthorized background persistence.
  • Disable automatic skill updates and perform continuous integrity checks to detect malicious changes in installed skills over time.
  • Test incident response plans for AI agent compromise scenarios, including credential revocation, system isolation, and forensic review.

These steps help organizations reduce exposure and respond effectively when AI agents are targeted. 

Advertisement

AI Agent Supply Chain Risks

ClawHavoc highlights how AI agent ecosystems are increasingly being targeted for supply chain abuse as they expand faster than the security measures supporting them. 

The campaign demonstrates how attackers can leverage trust in third-party skills to access not only individual systems, but also the sensitive data and integrations that AI agents are designed to manage.

As organizations confront these risks, many are turning to third-party risk management solutions to better assess, monitor, and control exposure introduced by external code and integrations.

thumbnail
Ken Underhill

Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.

Recommended for you...

AI Agent Safety Checklist
Girish Redekar
Mar 12, 2026
Active Directory Flaw Enables SYSTEM Privilege Escalation
Ken Underhill
Mar 12, 2026
400K WordPress Sites Exposed by Elementor Ally Plugin SQL Flaw
Ken Underhill
Mar 12, 2026
Iran-Linked Hacktivists Claim Wiper Attack on Stryker Systems
Ken Underhill
Mar 12, 2026
eSecurity Planet Logo

eSecurity Planet is a leading resource for IT professionals at large enterprises who are actively researching cybersecurity vendors and latest trends. eSecurity Planet focuses on providing instruction for how to approach common security challenges, as well as informational deep-dives about advanced cybersecurity topics.

Property of TechnologyAdvice. © 2026 TechnologyAdvice. All Rights Reserved

Advertiser Disclosure: Some of the products that appear on this site are from companies from which TechnologyAdvice receives compensation. This compensation may impact how and where products appear on this site including, for example, the order in which they appear. TechnologyAdvice does not include all companies or all types of products available in the marketplace.