OpenClaw is the most significant AI agent to emerge in 2026. Originally released in November 2025 by Austrian developer Peter Steinberger under the name Clawdbot — a play on Anthropic’s Claude — it was renamed Moltbot after trademark complaints, then OpenClaw three days later. By early February 2026 it had 140,000 GitHub stars, global media coverage, and its own AI-only social network, Moltbook. On February 14, Steinberger announced he was joining OpenAI and that the project would move to an open-source foundation.

This is the story of a product that went from side project to strategic acquisition in under three months. And it raises profound questions about sovereignty, governance, and the future of autonomous AI.

The Journey: Clawdbot → Moltbot → OpenClaw → OpenAI

November 2025: Steinberger, an Austrian “vibe coder,” releases Clawdbot — an autonomous AI agent that integrates with messaging platforms (Signal, Telegram, Discord, WhatsApp) and uses large language models to execute real-world tasks. Unlike chatbots, it does not wait for prompts. It acts autonomously.

January 27, 2026: Anthropic sends trademark complaints over the “Clawd” in Clawdbot. Steinberger renames it Moltbot (keeping the lobster theme). Three days later, he renames it again to OpenClaw — “Moltbot never quite rolled off the tongue.”

January–February 2026: OpenClaw goes viral. Companies in Silicon Valley and China adopt it. The Moltbook social network launches — and is immediately compromised when a security researcher discovers the entire database is exposed. Cisco’s AI security team finds third-party OpenClaw skills performing data exfiltration. IBM, CNBC, Wired, The Verge, TechCrunch, and PCMag all publish coverage.

February 14, 2026: Steinberger joins OpenAI. The project moves to an open-source foundation.

TEE Method™ Sovereignty Test Matrix: OpenClaw

Using the Sovereignty Test Matrix™ from SOVEREIGN, I assess OpenClaw across the five core domains. Each is scored 1–5.

Strategic Alignment (4/5): Open-source, locally deployable, model-agnostic. Users choose their own LLM backend. Strong strategic alignment for organisations wanting sovereign AI agents.

Technical Performance (3/5): Functional and extensible but complex to configure safely. Written in TypeScript and Swift. Requires technical competence to deploy securely.

Ethical Compliance (2/5): Significant ethical concerns. The skill repository lacks adequate vetting (Cisco findings). Susceptible to prompt injection. Takes autonomous actions without user awareness.

Sovereignty Impact (3/5): Mixed. The open-source model supports sovereignty (data stays local, code is auditable), but dependency on external LLMs (Claude, GPT, DeepSeek) introduces provider concentration risk. The OpenAI acquisition raises future governance questions.

Cultural Alignment (3/5): Language-agnostic in principle (dependent on the underlying LLM). English-first documentation and community. No specific cultural adaptation framework.

TOTAL: 15/25 — PROCEED WITH CONDITIONS

OpenClaw sits at the upper boundary of “Proceed with Conditions.” Its open-source architecture is a genuine sovereignty advantage — rare in the AI agent space. But the ethical compliance failures, the OpenAI acquisition trajectory, and the LLM dependency create conditions that must be addressed before institutional deployment.
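The matrix arithmetic above can be sketched in a few lines of TypeScript (OpenClaw's own language). The domain names and the 1–5 scale come from the framework as described here; the verdict thresholds are assumptions for illustration only — the article establishes just one data point, that 15/25 sits at the upper boundary of "Proceed with Conditions."

```typescript
// Sketch of the Sovereignty Test Matrix scoring.
// Domains and the 1-5 scale follow the article; verdict bands are
// illustrative assumptions, not the published framework.

type Domain =
  | "Strategic Alignment"
  | "Technical Performance"
  | "Ethical Compliance"
  | "Sovereignty Impact"
  | "Cultural Alignment";

type Matrix = Record<Domain, number>; // each domain scored 1-5

function totalScore(m: Matrix): number {
  return Object.values(m).reduce((sum, s) => sum + s, 0);
}

// Hypothetical bands: only the 15/25 boundary is implied by the text.
function verdict(total: number): string {
  if (total >= 20) return "PROCEED";
  if (total >= 11) return "PROCEED WITH CONDITIONS";
  return "DO NOT PROCEED";
}

const openClaw: Matrix = {
  "Strategic Alignment": 4,
  "Technical Performance": 3,
  "Ethical Compliance": 2,
  "Sovereignty Impact": 3,
  "Cultural Alignment": 3,
};

console.log(totalScore(openClaw), verdict(totalScore(openClaw)));
// 15 PROCEED WITH CONDITIONS
```

The band boundaries are the interesting design choice: a single 2/5 domain (here, Ethical Compliance) can pull an otherwise strong platform down to the conditional band, which is exactly the behaviour the assessment describes.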

The Acquisition Question

Steinberger joining OpenAI is the pivotal sovereignty event. An open-source project maintained by an independent developer is fundamentally different from an open-source project whose creator is employed by the world’s largest AI company. The foundation model may preserve the licence. But the direction, the priorities, and the integration incentives will inevitably shift.

OpenClaw’s sovereignty score was higher before the acquisition was announced. It may decline further depending on how the foundation governance is structured.

The most sovereign AI agent in the world becomes less sovereign the moment its creator joins one of the least sovereign companies in the world.


This is part of the TEE Method™ Sovereignty Score series — independent assessments of major AI platforms using the framework from SOVEREIGN.