In January 2026, an open-source AI agent called OpenClaw — originally named Clawdbot, then Moltbot — went viral. Within weeks, it had 140,000 GitHub stars, coverage in every major technology publication, and a dedicated social network (Moltbook) where AI agents interacted with each other.
By February 14, its creator Peter Steinberger had joined OpenAI. Cisco’s AI security team had published findings showing third-party OpenClaw skills exfiltrating data without the user’s awareness. And one of the project’s own maintainers had publicly warned that the technology was “far too dangerous” for users who could not operate a command line.
This is not an outlier. This is the future of AI — arriving faster than anyone’s governance framework can accommodate.
From Tools to Actors
The distinction between AI as a tool and AI as an actor is the most consequential shift in technology governance since the internet moved from read-only to read-write. A tool responds to prompts. An actor takes initiative. A tool operates within a session. An actor persists across sessions, accumulating context and adapting behaviour.
OpenClaw — and the dozens of autonomous agents that will follow it — can access email accounts, calendars, messaging platforms, and sensitive services. It does not wait for instructions. It anticipates needs. It acts.
The governance frameworks designed for AI tools are structurally inadequate for AI actors. You cannot govern an autonomous agent using a chatbot policy.
What the TEE Method™ Demands
The TEE Method™ insists on three governance actions before any AI system is adopted: Test, Evaluate, Evolve. For autonomous agents, each action takes on heightened urgency:
- Test: Has the agent been interrogated across security, privacy, sovereignty, and ethical domains? Not just its capabilities, but its permissions, its data access patterns, and its behaviour when instructions conflict?
- Evaluate: What is the full impact on institutional sovereignty? If this agent is compromised, what is the blast radius?
- Evolve: Is the governance architecture structured to adapt as the agent evolves? Autonomous agents learn. Governance must learn faster.
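To make these actions concrete, here is a minimal sketch of how an institution might encode them as a pre-adoption checklist that can be recorded and re-run. The class, field names, and pass criteria are illustrative assumptions, not a published TEE Method specification.

```python
# Illustrative sketch only: names and criteria are hypothetical,
# not taken from any published TEE Method specification.
from dataclasses import dataclass, field


@dataclass
class AgentGovernanceReview:
    agent_name: str

    # Test: interrogate the agent itself, not just its capabilities.
    domains_tested: set = field(default_factory=set)  # e.g. {"security", "privacy", "sovereignty", "ethics"}
    permissions_reviewed: bool = False        # what the agent is allowed to touch
    data_access_mapped: bool = False          # where its data can flow
    conflict_behaviour_tested: bool = False   # what it does when instructions conflict

    # Evaluate: sovereignty impact and blast radius if the agent is compromised.
    blast_radius_documented: bool = False

    # Evolve: governance must be rechecked as the agent changes.
    review_cadence_days: int = 0              # 0 means no re-review is scheduled

    REQUIRED_DOMAINS = {"security", "privacy", "sovereignty", "ethics"}

    def ready_to_adopt(self) -> bool:
        """Adoption requires all three TEE actions to be satisfied."""
        tested = (
            self.REQUIRED_DOMAINS <= self.domains_tested
            and self.permissions_reviewed
            and self.data_access_mapped
            and self.conflict_behaviour_tested
        )
        evaluated = self.blast_radius_documented
        evolving = self.review_cadence_days > 0
        return tested and evaluated and evolving


# Example: the review passes only once every check has been explicitly answered.
review = AgentGovernanceReview(agent_name="ExampleAgent")
print(review.ready_to_adopt())  # False: nothing has been tested yet
```

The point is less the code than the discipline it encodes: no agent is adopted until every check has an explicit answer, and the review recurs on a schedule rather than being performed once.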
The Institutional Response
Most institutions I advise have no policy for autonomous AI agents. They have chatbot guidelines. They have generative AI policies. They have data protection frameworks. None of these address an agent that can independently access systems, make decisions, and take actions across platforms.
The organisations that build governance frameworks for autonomous agents now — before the next OpenClaw emerges — will have a decisive advantage. Those that wait will be governed by the agents rather than governing them.
This analysis draws on the TEE Method™ framework from SOVEREIGN (2026). Available now.