If you follow tech news at all, you've probably seen the name—or one of its many names. Clawdbot. Moltbot. OpenClaw. The open-source AI assistant created by developer Peter Steinberger has been one of the biggest stories in technology since late January, and it's not slowing down.

With over 185,000 GitHub stars and more than 100,000 active installations worldwide, OpenClaw has become the fastest-growing open-source project in GitHub history. It's been praised by some of the most respected voices in AI, banned by major corporations in South Korea, flagged by the Chinese government, and adopted by thousands of businesses and individual users looking to automate their daily workflows.

We've covered the security considerations and the early security incidents in previous posts. But OpenClaw isn't just a security story. Businesses are finding real, practical value in what it can do—and that's worth understanding alongside the risks.

Here's where things stand.

A Quick Primer (and Why It Has Three Names)

For those just catching up: OpenClaw is a self-hosted AI assistant that runs on your own computer or server. Unlike cloud-based AI tools such as ChatGPT or Microsoft Copilot, it keeps your data on your own hardware. You interact with it through messaging apps you already use—WhatsApp, Telegram, Slack, Microsoft Teams, and others—and it can take actions on your behalf: managing email, scheduling meetings, organizing files, running commands, and automating multi-step workflows.

The name has changed twice in three weeks. It launched as Clawdbot, was renamed to Moltbot on January 27 after Anthropic raised a trademark concern (the name was too similar to "Claude"), and then renamed again to OpenClaw on January 30. If you've seen any of these names floating around, they're all the same tool.

The fact that it's open-source and model-agnostic—meaning it works with AI models from Anthropic, OpenAI, Google, xAI, and others—is a significant part of its appeal. You're not locked into any single AI provider.

How Businesses Are Actually Using It

Beyond the hype, organizations and professionals have been finding genuine productivity gains with OpenClaw. Here are some of the most common patterns emerging:

Email Management

Email is one of the most frequently cited use cases. Early business adopters report that OpenClaw can triage incoming messages, draft responses, flag priorities, and handle routine replies—reducing what used to be two or more hours of daily email processing to under 30 minutes. For business owners who spend significant time in their inbox, the time savings alone have been the primary draw.

Client Onboarding and Admin Workflows

Multi-step administrative processes are another area where users are seeing significant value. Tasks like onboarding a new client—which might normally involve creating folders, sending welcome emails, updating a CRM, scheduling calendar invites, and setting up access permissions—can be triggered with a single message. Users report compressing workflows that previously took hours into minutes.

Automated Reporting

Weekly reports, KPI dashboards, trend analysis—the kind of recurring tasks that consume hours of someone's time every week. OpenClaw users are automating the data gathering, analysis, and formatting steps, freeing up time for actually acting on the insights rather than compiling them.

Customer-Facing Support

Some businesses are using OpenClaw as a first-line support agent, handling routine customer queries through WhatsApp or Telegram. When a question is too complex, it escalates to a human team member. This isn't replacing support staff—it's handling the repetitive questions that consume time without requiring human judgment.

Developer and IT Automation

For technical teams, the applications go further: managing Git repositories, running automated code tests overnight, transcribing voice messages into searchable knowledge bases, and even building multi-agent systems where different AI agents handle different parts of a workflow.

What's Happened Since the Early Days

When we last covered this project in early February, it was in the middle of a chaotic first week that included exposed databases, credential leaks, and a crypto scam. Since then, things have moved quickly in both directions.

On the Positive Side

The project has been shipping updates at a rapid pace. Version 2026.2.9, released February 9, added an iOS app, device pairing features, and expanded integrations. The February 7 release added support for newer AI models, including Anthropic's Opus 4.6, along with a built-in code safety scanner for downloadable skills.

Most notably, OpenClaw announced a partnership with Google-owned VirusTotal on February 7. All skills published to ClawHub—the project's marketplace of add-on capabilities—are now automatically scanned for malware. Clean skills are approved automatically; suspicious ones get flagged; and malicious ones are blocked instantly. It's an important step, though the project has acknowledged it isn't a complete solution—certain types of attacks won't match traditional malware signatures.

On the Concerning Side

Security researchers have continued finding problems. A critical vulnerability (CVE-2026-25253) disclosed in early February showed that clicking a single crafted link could give an attacker remote access to an OpenClaw installation. The flaw was patched before public disclosure, but anyone still running an older version remains exposed until they update.

Security firm Koi Security audited over 2,800 skills on ClawHub and found 341 malicious ones—nearly 12%. A single user account was responsible for 314 of them. These skills were designed to install information-stealing software on users' computers. Separately, Snyk found that about 7% of scanned skills had critical flaws that could expose credentials.

Internet-wide scans by security firms have identified between 30,000 and 135,000 OpenClaw instances directly exposed to the public internet—many running with default settings that effectively bypass authentication.

Corporate and Government Responses

In South Korea, major tech companies including Kakao, Naver, and Karrot Market have formally banned OpenClaw on corporate networks and work devices, making it the first AI tool to be singled out for corporate bans in the country since the DeepSeek restrictions last year.

China's Ministry of Industry and Information Technology issued an alert about misconfigured instances, urging organizations to implement strict authentication controls.

Gartner, one of the most influential technology research firms, recommended that enterprises "block OpenClaw downloads and traffic immediately," characterizing it as a "powerful demonstration of autonomous AI for enterprise productivity, but an unacceptable cybersecurity liability."

The Tension Worth Understanding

What makes OpenClaw interesting—and complicated—is that both sides of the conversation are right.

The people excited about it are right: this is a genuinely useful tool that can save real time on real business tasks. The productivity gains users are reporting aren't trivial.

The people concerned about it are right too: the tool requires deep access to your systems to function, the security track record is still maturing, and the ecosystem has attracted bad actors alongside legitimate users.

Northeastern University cybersecurity professor Aanjhan Ranganathan recently called OpenClaw "a privacy nightmare," noting that users have "limited insights into how it's processing your information and where it's sending it." AI researcher Gary Marcus advised bluntly: "If you care about the security of your device or the privacy of your data, don't use OpenClaw. Period."

On the other side, IBM's researchers have called it a meaningful demonstration of what open-source autonomous AI can accomplish, and Andrej Karpathy—one of the most respected names in AI—described the broader trend as "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."

For business owners, the question isn't really whether OpenClaw is good or bad. It's whether the value it provides is worth the risks it currently carries—and whether those risks can be managed in your specific situation.

What This Means for Business Owners

Whether or not your business uses OpenClaw directly, the broader trend it represents is worth paying attention to. AI agents—tools that take actions, not just answer questions—are moving from experimental to mainstream. Here's what's worth considering:

Your Employees May Already Be Using It

With over 100,000 installations and counting, the odds that technically inclined employees have at least experimented with OpenClaw are significant. This is a shadow AI concern—employees using powerful tools that IT and management don't know about, potentially with company email credentials, company data, and access to internal systems.

The first step isn't necessarily banning it. It's having visibility into whether it's already happening.

Your AI Usage Policy May Need Updating

Most acceptable use policies were written before autonomous AI agents existed as a category. A policy that covers "don't share company data with ChatGPT" may not address "don't give an AI agent access to your email, calendar, and file system with the ability to take actions on your behalf."

We covered what a comprehensive AI usage policy looks like in a recent post—the considerations there are directly relevant here.

If You're Considering Using It

For organizations that want to explore the productivity benefits while managing the risks, the security community's recommendations are consistent:

  • Run it in isolation: in a virtual machine or container with restricted network access, never on a workstation that touches production systems or sensitive data (a sketch of what this can look like follows this list).
  • Keep it updated: the project is actively patching vulnerabilities, but patches only help if you're running current versions.
  • Limit permissions: connect only the specific services needed, with the narrowest possible access, and read-only wherever possible.
  • Be selective about skills: the ClawHub marketplace has had significant issues with malicious add-ons, so stick to well-known, verified skills.
  • Don't expose it to the internet: tens of thousands of instances are currently accessible to anyone online due to default settings.
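To make the first and last points concrete, here is a minimal sketch of running an agent like this inside a locked-down Docker container, using the Docker SDK for Python. The image name, port, data directory, and the OPENCLAW_AUTH_REQUIRED variable are hypothetical placeholders, not OpenClaw's actual packaging or configuration; treat it as an illustration of the isolation pattern and follow the project's own documentation for real values.

```python
# A minimal isolation sketch using the Docker SDK for Python ("pip install docker").
# The image name, port, data directory, and OPENCLAW_AUTH_REQUIRED variable are
# placeholders, not OpenClaw's real packaging -- adapt them to the project's docs.
import docker

client = docker.from_env()

container = client.containers.run(
    "openclaw/openclaw:latest",            # hypothetical image name
    name="openclaw-sandbox",
    detach=True,
    read_only=True,                        # immutable root filesystem
    tmpfs={"/tmp": "size=64m"},            # scratch space the agent can still write to
    cap_drop=["ALL"],                      # drop all Linux capabilities
    security_opt=["no-new-privileges:true"],
    mem_limit="1g",
    pids_limit=256,
    network_mode="bridge",                 # keep it off the host's network namespace
    ports={"8080/tcp": ("127.0.0.1", 8080)},  # reachable from this machine only
    volumes={
        "/srv/openclaw-data": {"bind": "/data", "mode": "rw"},  # one dedicated data dir
    },
    environment={
        "OPENCLAW_AUTH_REQUIRED": "true",  # hypothetical setting; the point is never
                                           # to rely on unauthenticated defaults
    },
)
print(f"Started {container.name} ({container.short_id})")
```

The two details doing the most work here are the port binding to 127.0.0.1, which keeps the instance unreachable from outside the machine, and the single dedicated data directory, which keeps the agent away from everything else on the host.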

These are the same principles behind any sound security posture—just applied to a new category of tool.

If You're Choosing Not to Use It (Yet)

That's a perfectly reasonable position. The security community's consensus is that OpenClaw is not yet ready for environments where data security and compliance matter. But the underlying technology—autonomous AI agents that can manage tasks across your digital life—isn't going away. Understanding what it does and where it's headed helps you make informed decisions when the technology matures or when competitors offer more enterprise-ready alternatives.

Looking Ahead

OpenClaw's trajectory is worth watching regardless of whether you use it. In a matter of weeks, it went from a personal side project to 185,000+ GitHub stars, attracted a partnership with Google-owned VirusTotal, prompted corporate bans by major tech companies, and became the subject of formal government advisories.

The security concerns are real and ongoing. So is the productivity potential. And the broader trend—AI moving from tools that answer questions to agents that take actions—is one that every business will eventually need to navigate.

What matters most right now is making intentional decisions rather than letting adoption happen by default. Whether that means exploring OpenClaw carefully, choosing a more mature enterprise alternative, or simply ensuring your team has clear guidelines about AI tools—being deliberate about this shift is more valuable than being either first or fearful.


This article is intended for informational purposes only and does not constitute professional security, legal, or compliance advice. Organizations should consult with qualified professionals to assess their specific circumstances and develop appropriate protective measures.