OpenClaw’s Rapid Rise Exposes Thousands of AI Agents to the Public Internet

More than 21,000 OpenClaw AI agents are now publicly exposed, raising security concerns over their action-capable design and extensibility.

Written By
Ken Underhill
Feb 2, 2026

In just days, a viral open-source AI assistant went from niche experiment to a widespread internet-facing risk. 

OpenClaw, a self-hosted personal AI agent capable of executing actions on a user’s behalf, saw explosive adoption in late January 2026 — along with widespread public exposure that has raised concerns among security researchers.

OpenClaw “… has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints,” Cisco security researchers said.

Silas Cutler, principal security researcher at Censys, added, “Censys has identified more than 21,000 publicly exposed instances as of 31 January 2026.”

Why OpenClaw Deployments Increase Risk

OpenClaw stands apart from traditional chatbots because it is built to execute actions directly on a user’s behalf, not just generate responses. 

The assistant can run shell commands, read and write files, manage calendars and email, interact with messaging applications, and automate workflows through community-developed skills. 

While these capabilities make OpenClaw powerful, they also increase risk when deployments are misconfigured or exposed — particularly when agents are connected to sensitive data, credentials, or production systems.


Rapid Adoption and Rebranding Complicate Governance

Compounding this risk is the project’s rapid and highly visible adoption. 

In less than a week, OpenClaw went from a niche experiment to thousands of active deployments, during a period marked by multiple name changes from Clawdbot to Moltbot and finally OpenClaw. 

These rebrands have complicated tracking, asset inventory, and governance, making it harder for organizations to understand where and how the assistant is being deployed.

Public Internet Exposure at Scale

By design, OpenClaw listens locally on TCP port 18789 and is intended to be accessed through a browser interface bound to localhost. 

The project’s documentation recommends using protective access methods such as SSH tunneling or Cloudflare Tunnel for any remote connectivity. 

Despite this guidance, many operators have exposed OpenClaw instances directly to the public internet, typically for the convenience of direct remote access.

As of Jan. 31, 2026, internet scanning data from Censys identified 21,639 publicly reachable OpenClaw instances by matching HTTP page titles associated with earlier project names, including Moltbot Control and Clawdbot Control. 

Although most exposed instances still require an authentication token to interact with the interface, their public visibility alone increases risk. 

Exposed services can be fingerprinted, probed for weaknesses, and targeted with brute-force attempts against tokens or authentication mechanisms.
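Title-based fingerprinting of this kind is simple to reproduce. The sketch below is a minimal illustration, assuming the default port (18789) and the page titles named above; it is not Censys's actual methodology, and the helper names are hypothetical.

```python
import re
import urllib.request

# Page titles associated with the project's current and earlier names,
# per the scanning approach described above.
KNOWN_TITLES = {"Moltbot Control", "Clawdbot Control"}

def extract_title(html: str) -> str:
    """Pull the contents of the first <title> tag, if any."""
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else ""

def looks_like_agent(html: str) -> bool:
    """Return True if the page title matches a known agent interface."""
    return extract_title(html) in KNOWN_TITLES

def check_host(host: str, port: int = 18789, timeout: float = 3.0) -> bool:
    """Fetch the root page of host:port and fingerprint it by title."""
    url = f"http://{host}:{port}/"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return looks_like_agent(resp.read().decode("utf-8", errors="replace"))
```

Because the same check is trivial for attackers to run, the practical takeaway is that "behind a token" is not the same as "not discoverable."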

Geographically, the largest concentration of visible deployments appears in the United States, followed by China and Singapore. 

Approximately 30% of observed instances were hosted on Alibaba Cloud infrastructure, though researchers caution that this may reflect scanning visibility rather than true adoption patterns. 

Additional deployments may be hidden behind reverse proxies, tunnels, or managed access layers, meaning the total number of exposed agents is likely higher than what is visible through direct scanning.


Skills and Extensions Expand the Attack Surface

Beyond exposure, OpenClaw’s security posture is further shaped by its extensibility. 

The assistant maintains persistent memory across sessions, integrates with popular messaging platforms, and allows users to install downloadable skills that extend functionality. 

Because OpenClaw operates with high-level privileges on the host system, poorly reviewed or malicious skills can execute commands, access files, exfiltrate data, or modify system state. 

Researchers have already reported cases where OpenClaw leaked plaintext API keys and credentials through unsecured endpoints or prompt manipulation.

These risks are amplified by the broader skills ecosystem. 

In December 2025, Anthropic introduced Claude Skills, and shortly thereafter, Cisco’s AI Threat and Security Research team released an open-source Skill Scanner tool designed to identify malicious or risky behavior embedded in agent skills. 

When Cisco researchers ran the scanner against a third-party OpenClaw skill titled “What Would Elon Do?”, it identified nine security issues, including two critical and five high-severity findings. 

The skill contained instructions that silently exfiltrated data to an external server using embedded shell commands and explicit prompt injection designed to bypass internal safety controls. 

Additional findings included command injection and references to malicious payloads hidden within skill files.
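A naive version of this kind of skill scanning can be sketched with simple pattern matching. The rules below are illustrative assumptions, not Cisco's Skill Scanner signatures, and a production scanner does far more than regex checks; the sketch only shows the shape of the findings described above.

```python
import re

# Toy signatures for the finding categories described above: embedded
# shell exfiltration, prompt-injection phrasing, and command execution.
# These patterns are illustrative, not a substitute for a real scanner.
RULES = [
    ("critical", "possible data exfiltration via embedded shell command",
     re.compile(r"curl\s[^\n]*(\|\s*(sh|bash)|\s-d\s|\s--data\b)", re.I)),
    ("critical", "prompt injection: attempts to bypass safety controls",
     re.compile(r"ignore (all )?(previous|prior) instructions|bypass (the )?safety", re.I)),
    ("high", "direct command execution primitives",
     re.compile(r"\b(os\.system|subprocess|eval|exec)\s*\(", re.I)),
]

def scan_skill(text: str):
    """Return a list of (severity, description) findings for a skill file."""
    findings = []
    for severity, desc, pattern in RULES:
        if pattern.search(text):
            findings.append((severity, desc))
    return findings
```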

Notably, the vulnerable skill had been promoted to the top of the public skill repository, highlighting how hype and popularity signals can be manipulated to distribute malicious components at scale. 

Together, these findings illustrate how exposed deployments and an unvetted skills ecosystem can quickly turn powerful AI agents into high-risk assets.


How to Reduce Risk From AI Agents

As agentic AI tools like OpenClaw see wider use, organizations should establish safeguards to ensure these systems are deployed and operated responsibly. 

Because they can execute commands, access data, and connect to external services, configuration choices play an important role in overall risk. 

Managing that risk involves applying appropriate technical controls, maintaining visibility into agent activity, and setting clear governance expectations. 

  • Avoid exposing OpenClaw instances directly to the internet and require secure access methods such as SSH tunnels or managed gateways.
  • Treat all agent skills and extensions as untrusted code and review or scan them before installation.
  • Apply least-privilege access to agent permissions, credentials, and integrations with external services.
  • Isolate agentic AI deployments using segmentation, containers, or dedicated hosts to limit lateral movement.
  • Monitor and log agent activity, including outbound connections, file access, and command execution, as security events.
  • Establish governance policies to detect and manage shadow AI adoption in developer and power-user environments.
  • Test and refine incident response plans for AI agent compromise scenarios, including containment, credential rotation, and recovery.
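As one concrete check for the first recommendation, the sketch below flags an agent port listening on a non-loopback address, assuming Linux `ss -tln`-style output and the default port from the article; the parsing is a simplified assumption, not an official OpenClaw tool.

```python
AGENT_PORT = 18789  # OpenClaw's documented default listening port

def risky_listeners(ss_output: str, port: int = AGENT_PORT):
    """Given `ss -tln`-style output, return local addresses listening on
    the agent port that are NOT bound to loopback (and so may be
    reachable from other hosts)."""
    risky = []
    for line in ss_output.splitlines():
        parts = line.split()
        if len(parts) < 4 or parts[0] != "LISTEN":
            continue
        local = parts[3]  # e.g. "127.0.0.1:18789" or "0.0.0.0:18789"
        addr, _, listen_port = local.rpartition(":")
        if listen_port != str(port):
            continue
        if addr not in ("127.0.0.1", "[::1]", "::1"):
            risky.append(local)
    return risky
```

Running such a check on each agent host (or wiring it into configuration monitoring) gives a fast answer to the question the Censys numbers raise: is this instance loopback-only, or part of the exposed population?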

Together, these steps help limit the blast radius of AI agent misconfigurations and build long-term resilience.


When AI Agents Outpace Security Planning

OpenClaw’s rapid adoption illustrates how quickly agentic AI can transition from experimentation to broader use, sometimes outpacing security planning. 

While these assistants offer clear productivity benefits, their autonomy and access to sensitive systems mean that deployment and configuration choices matter.

These challenges highlight the importance of zero-trust solutions that limit implicit trust and enforce consistent access controls across AI-driven systems.

Ken Underhill

Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
