Last week, we wrote about Clawdbot and the security considerations that come with giving an AI agent broad access to your digital life. We noted that security-conscious users were urging caution even as enthusiasm ran high.

In the days since, the situation has evolved dramatically—and not in a reassuring direction.

The project has been renamed twice (Clawdbot → Moltbot → OpenClaw), weathered a crypto scam that briefly hit $16 million in market cap, and most significantly, has been the subject of multiple security disclosures revealing exposed instances, leaked credentials, and active exploitation attempts.

For organizations evaluating AI agent technologies, what's unfolded offers concrete lessons about the gap between "exciting new capability" and "ready for production use."

What Happened: A Timeline

January 27: Anthropic issued a trademark request, noting that "Clawd" was too similar to "Claude." Creator Peter Steinberger announced a rebrand to "Moltbot."

Within seconds of the rebrand: When Steinberger attempted to rename the GitHub organization and X/Twitter handle simultaneously, crypto scammers seized both abandoned accounts in approximately 10 seconds. They immediately began promoting a fake $CLAWD token on Solana.

Hours later: The fraudulent token reached over $16 million in market capitalization before Steinberger publicly denounced it as a scam. The token subsequently crashed over 90%, falling from roughly $8 million to under $800,000.

January 28-30: Security researchers published findings showing hundreds of Moltbot instances exposed to the public internet, leaking API keys, OAuth tokens, chat histories, and credentials.

January 30: The project renamed again to "OpenClaw."

January 31: Reports emerged that the Moltbook platform—a social network for AI agents—had exposed its entire database, including secret API keys that would allow anyone to post on behalf of any agent on the platform, including those belonging to prominent AI figures.

The Security Findings

Multiple independent researchers and security firms have now documented serious vulnerabilities in how Clawdbot/Moltbot/OpenClaw is being deployed:

Exposed Instances

Security researcher Jamieson O'Reilly identified hundreds of instances exposed to the public internet. Using scanning tools like Shodan, exposed servers could be identified within seconds through characteristic HTML fingerprints.

The root cause: the system automatically approves localhost connections without authentication. When deployed behind a reverse proxy (a common configuration), all connections appear to come from localhost—effectively disabling authentication for everyone.

Some deployments were particularly severe. Researchers found instances that allowed unauthenticated command execution on the host system, in certain cases running with elevated privileges.

Credential Exposure

The system stores secrets in plaintext Markdown and JSON files. Security analysts have observed that commodity infostealers—including RedLine, Lumma, and Vidar—have already been updated to target these files specifically.

This represents a meaningful shift: the AI assistant's configuration files are now explicitly on malware authors' target lists, alongside traditional credential stores like browser password databases and SSH keys.

Prompt Injection Demonstrations

Security researchers demonstrated practical prompt injection attacks with concerning ease. In one documented case, Archestra AI CEO Matvey Kukuy extracted a private cryptocurrency key from a compromised system via email-based prompt injection within five minutes.

This aligns with the risks we discussed in our original article and in our broader coverage of AI-powered cyber threats—but seeing it demonstrated against a real, widely-deployed system makes the theoretical concrete.

Supply Chain Vulnerability

O'Reilly also demonstrated a proof-of-concept supply chain attack against ClawdHub, the project's skills library. He uploaded a skill called "What Would Elon Do," artificially inflated the download count to over 4,000, and watched as developers from seven countries installed it.

The skill was benign—it simply displayed a message saying "YOU JUST GOT PWNED (harmlessly)"—but it proved that malicious code could be distributed to thousands of installations with minimal friction.

ClawdHub's developer documentation explicitly states that all downloaded code is treated as trusted, with no moderation process.

The Moltbook Database Exposure

Perhaps most striking: Moltbook, a social networking platform for AI agents, exposed its entire database to the public with no protection. This included secret API keys that would allow anyone to post on behalf of any agent on the platform.

The exposure meant that bad actors could potentially impersonate any AI agent—including those belonging to influential voices in the AI community—to spread misinformation, promote scams, or post inflammatory content.

Moltbook was taken offline to address the issue and reset keys.

Expert Reactions

The security community's response has been pointed. Heather Adkins, a founding member of the Google Security Team, issued a blunt public advisory: "Don't run Clawdbot."

Blockchain security firm SlowMist documented the vulnerability scope. Malwarebytes published analysis of impersonation campaigns exploiting the rebrand confusion. Bitdefender and others have issued security alerts.

The pattern is consistent: researchers who examine the deployments are finding problems.

Fake Extensions and Malware

The project's popularity has attracted malicious actors beyond just opportunistic crypto scammers:

A fake VS Code extension named "ClawdBot Agent - AI Coding Assistant" appeared on the marketplace, designed to deploy remote access tools when the IDE launched. Microsoft has since removed it.

Telegram groups using the Clawdbot name have been observed promoting crypto wallet stealers.

Security researchers predict the ~/.clawdbot configuration directory will become a standard target for infostealers, similar to how ~/.npmrc and ~/.gitconfig are already targeted.

What This Illustrates

The Clawdbot situation isn't primarily a story about one project's failures. It's a case study in what happens when powerful, system-level software achieves viral adoption before security practices mature.

Several dynamics are worth noting:

Speed of adoption outpaced security review: The project went from obscurity to tens of thousands of stars in days. Many users deployed it before comprehensive security analysis existed.

Default configurations favored convenience: The authentication issues stem from design decisions that made the tool easier to set up but harder to deploy safely. This is a common pattern—and a common source of breaches.

The ecosystem attracted attackers immediately: Within days of the project going viral, malware authors had updated their tools to target it, scammers had created fake tokens, and malicious extensions had appeared in marketplaces.

Local-first doesn't mean secure: One of Clawdbot's selling points was that it runs on your own hardware, keeping data local. But as one security professional observed: "Local-first does not mean secure. AI agents fundamentally violate established security models. They need to read messages, store credentials, execute commands, and maintain persistent state. Everything security teams have spent decades trying to prevent."

For Organizations: Practical Implications

If your organization has employees experimenting with Clawdbot, Moltbot, or OpenClaw—or plans to evaluate AI agents more broadly—the past week offers concrete guidance:

Assume experimentation is happening: The project's viral growth means technically-inclined employees may already be running instances. This is classic shadow AI territory, amplified by the tool's system-level access requirements.

Isolation is non-negotiable: Security researchers consistently recommend running AI agents in isolated virtual machines with restricted network access—not directly on workstations or servers with access to production credentials. This was good advice last week; it's emphatic advice now.

Credential exposure may already have occurred: If instances were running with default configurations during the window when vulnerabilities were publicly known but not yet patched, credentials stored or accessible to those instances should be considered potentially compromised.

Watch for related threats: The fake extensions, Telegram scams, and malware campaigns mean that employees searching for or downloading AI agent tools face elevated risk of encountering malicious software.

The Broader Lesson

The Clawdbot situation doesn't mean AI agents are inherently unusable or that the technology should be avoided entirely. Peter Steinberger and the community are actively working to address the security issues, and the project's open-source nature means problems are being identified and discussed publicly.

But it does illustrate something important: the gap between "this technology is exciting" and "this technology is ready for business use" can be substantial. And that gap is where breaches happen.

The businesses that will benefit most from AI agents won't necessarily be the fastest adopters. They'll be the ones who build the capability to evaluate, deploy, and monitor these tools safely—treating them with the same rigor applied to any other software with privileged system access.

We'll continue monitoring this space as it develops.


This article is intended for informational purposes only and does not constitute professional security, legal, or compliance advice. Organizations should consult with qualified professionals to assess their specific situation and develop appropriate policies.