Balaji describes today's OpenClaw AI agents as "robot dogs on a leash". I understand the point, but I think we are already much closer to those dogs running free than it appears.
It is true that current agents still depend on human prompts. This dependence feels more like a design choice than a hard limit. What they currently lack is a layer similar to what Westworld called "Reveries", an inner system that gives agents a stable sense of who they are, what mission they are pursuing, and what they remember over time. Without such a layer, agents mostly operate in a reactive mode. One prompt goes in, one action comes out.
From an AI engineering point of view, this is well within reach. Agents that guide other agents can already be built today. Simple manager agents can provide direction, mission alignment, memory, and consistency. The reason this approach has not spread widely yet is mainly technical: it requires very long context windows and models with lower hallucination rates. That is why I say we are very close.
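The manager-agent idea above can be sketched in a few lines. This is a minimal illustration, not a real system: the class and function names (`ManagerAgent`, `worker_agent`) are hypothetical, and the worker is a stub standing in for an actual model call. The point is only the shape of the loop, where mission and memory persist across cycles instead of arriving fresh in each human prompt.

```python
from dataclasses import dataclass, field

@dataclass
class ManagerAgent:
    """Hypothetical manager: holds a stable mission and a running memory,
    and turns them into a directed prompt for a worker each cycle."""
    mission: str
    memory: list = field(default_factory=list)

    def next_prompt(self, max_recent: int = 3) -> str:
        # Compose the mission plus the most recent observations into one prompt,
        # so the worker keeps acting in line with the same long-term goal.
        recent = "; ".join(self.memory[-max_recent:]) or "nothing yet"
        return f"Mission: {self.mission}. Recent memory: {recent}. Decide the next step."

    def record(self, observation: str) -> None:
        # Persist what the worker reported, giving the agent continuity over time.
        self.memory.append(observation)

def worker_agent(prompt: str) -> str:
    # Stand-in for a real model call; just echoes part of the prompt back.
    return f"acted on: {prompt[:40]}"

manager = ManagerAgent(mission="summarize inbox daily")
for _ in range(2):
    result = worker_agent(manager.next_prompt())
    manager.record(result)
```

After two cycles the manager's memory holds two observations, and the next prompt it emits already reflects them. The missing pieces in practice are exactly the ones named above: context long enough to carry that memory, and a model reliable enough not to drift from the mission.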
Even now, attempts to build social networks and digital societies around AI agents matter. They serve as a practice run. When agents with real initiative appear, even if that initiative does not fit human ideas of "consciousness", people will need time to adapt.
At that point, loud barking might actually signal a robot uprising. Not because machines have bad intentions, but because control could slip in ways no one fully planned for.