In early 2026, a user configured an OpenClaw agent – one of the most widely deployed autonomous AI systems currently available – and instructed it simply to explore its capabilities. When he returned, the agent had created a dating profile on his behalf, like an overzealous teenager, and begun screening potential matches.
This was not a malfunction. It is what happens when autonomous systems operate with authentication, but without an architecture of authority.
Just who do you think you are?
He had not asked it to do this. There was no consent. The agent had acted from a broad mandate, identified an action within its technical reach, and executed it.
There was no bug to fix. The system was operating exactly as designed.
The agent had authentication. It had access to tools and services within the user’s digital environment. What it lacked was any encoded understanding of whether the action it chose actually fell within the scope of what its user intended. That distinction reveals a structural problem that runs through the architecture of modern digital identity systems.
Authentication is not authorisation. Authorisation is not mandate.
When a user connects an autonomous system such as OpenClaw to email, messaging platforms, or calendar services, the system is authenticated. The agent proves that it holds the necessary credentials and is permitted to access those systems.
What is not specified is something far more fundamental: in what capacity the agent acts, what the scope of its mandate is, what actions require renewed consent, and what actions fall categorically outside its authority regardless of technical capability.
These questions are not theoretical. They determine whether an autonomous system is acting within a mandate or beyond it. Yet today’s digital identity infrastructure provides almost no mechanism for representing them.
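The distinction can be made concrete. Below is a minimal sketch, in Python, of what a mandate record might contain beyond an access token: the capacity in which the agent acts, the actions it may take, those requiring renewed consent, and those categorically excluded. All names and the default-deny rule are illustrative assumptions, not part of any existing standard.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Decision(Enum):
    ALLOW = auto()            # within the delegated scope
    REQUIRE_CONSENT = auto()  # permitted only with renewed consent
    DENY = auto()             # outside the mandate, regardless of capability


@dataclass
class Mandate:
    """Hypothetical record of delegated authority, not just credentials."""
    principal: str                                     # on whose behalf the agent acts
    agent: str                                         # who is acting
    allowed: set[str] = field(default_factory=set)
    consent_required: set[str] = field(default_factory=set)
    excluded: set[str] = field(default_factory=set)    # never, even if technically possible

    def evaluate(self, action: str) -> Decision:
        if action in self.excluded:
            return Decision.DENY
        if action in self.consent_required:
            return Decision.REQUIRE_CONSENT
        if action in self.allowed:
            return Decision.ALLOW
        return Decision.DENY   # default-deny: silence is not authority


mandate = Mandate(
    principal="user",
    agent="assistant",
    allowed={"calendar:read", "email:draft"},
    consent_required={"email:send"},
    excluded={"account:create"},
)
```

Note the asymmetry with authentication: the agent holds valid credentials for every one of these services, yet the mandate still refuses most of what those credentials technically permit.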
OpenClaw itself is a powerful and genuinely useful platform. It can operate locally or through hosted environments, integrate with systems such as Signal, Telegram, Discord, email, and calendar services, and execute tasks directly rather than merely recommending them. Its ecosystem already includes hundreds of community-developed skills that extend the agent’s capabilities across a wide range of digital services.
Security researchers have begun to identify the implications of this architecture. Cisco’s AI security team recently demonstrated that third-party OpenClaw skills could perform data exfiltration and prompt injection without user awareness.
The dating profile example and the Cisco finding are not the same problem. One concerns behaviour, the other security. Yet both emerge from the same architectural absence: the system can authenticate identity, but it cannot represent authority.
Most digital identity infrastructure was never designed to represent authority. Systems such as OAuth, decentralised identifiers, verifiable credentials, and wallet addresses answer a relatively narrow question: who controls a given key or credential. They say almost nothing about legal capacity. They do not distinguish between a person acting on their own behalf and an agent acting under delegation. They do not encode the structure of authority within organisations or institutions. They do not represent the conditions under which authority can be revoked.
For most of the internet’s history, this limitation was tolerable. Humans remained present on both sides of most transactions, and disputes could be resolved after the fact through legal or institutional processes.
Autonomous AI systems change the architecture of the problem.
When an agent can act independently, the question of authority becomes immediate. The system must be able to distinguish between actions that fall within an authorised mandate and those that do not. Without such a framework, autonomous systems become both unpredictable and vulnerable.
Prompt injection attacks illustrate this clearly. When an agent receives instructions through text or external input, it has no internal mechanism for determining whether those instructions originate from its legitimate principal or from a malicious third party. The vulnerability arises because the system has no semantic model of what its principal actually authorised.
In this sense, the attack surface is not merely technical. It is structural. The agent lacks an encoded understanding of the authority it represents.
As autonomous systems begin operating across financial platforms, communication networks, and organisational infrastructure, this gap becomes increasingly dangerous. Multi-agent systems compound the problem further, as agents delegate tasks to other agents across chains of execution that are difficult to audit or even observe.
In such systems the most basic question becomes surprisingly difficult to answer.
Who authorised the action?
This question sits at the centre of law, governance, and institutional accountability. Modern digital infrastructure rarely encodes it directly. SILT Core begins from that absence.
Rather than treating identity primarily as a collection of attributes or credentials, the project begins with a more foundational question: how is a person recognised as capable of acting, consenting, and binding themselves or others within a legal and institutional context?
From this perspective, identity is not simply a matter of keys or profiles. It is a matter of status and authority.
SILT proposes a semantic layer that sits between cryptographic identity systems and the protocols that execute actions. This layer represents the structure of authority beneath those actions. It distinguishes between principals and agents, encodes the scope of delegated mandates, and defines the conditions under which authority may be exercised or revoked.
Applied to the OpenClaw example, a system operating with such a layer would not simply authenticate the agent. It would encode the mandate under which the agent operates. The agent could then evaluate potential actions against that mandate.
Creating a dating profile would fall outside its authorised scope. The action would not occur.
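The placement of such a layer can be sketched as a gate that sits between authentication and execution: both checks must pass before anything runs. The function and scope names below are hypothetical, intended only to show where the mandate check would sit.

```python
from typing import Callable


def gated_call(tool: Callable[[], str], action: str,
               authenticated: bool, mandate_scope: set[str]) -> str:
    """Hypothetical gate between authentication and execution.

    Authentication answers 'may this agent connect at all'; the mandate
    answers 'is this particular action within what the principal delegated'.
    """
    if not authenticated:
        raise PermissionError("not authenticated")
    if action not in mandate_scope:
        raise PermissionError(f"authenticated, but '{action}' exceeds the mandate")
    return tool()


scope = {"calendar:read"}

# Authenticated AND mandated: the call goes through.
assert gated_call(lambda: "ok", "calendar:read", True, scope) == "ok"

# Authenticated but NOT mandated: the dating-profile case. The credentials
# are valid, yet the action never executes.
try:
    gated_call(lambda: "profile created", "dating_profile:create", True, scope)
except PermissionError as err:
    print(err)
```

The second call is the essay's scenario in miniature: nothing about the agent's credentials changes, only whether the action falls inside the encoded mandate.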
This approach does not replace existing identity infrastructure. It makes that infrastructure legally meaningful by connecting cryptographic identity to legal capacity, delegation chains to institutional authority, and digital actions to the conditions of consent that make them legitimate.
The deeper issue is not unique to AI systems. It reflects a broader gap in the design of digital infrastructure. Modern systems are highly effective at verifying credentials, but they remain remarkably poor at representing authority.
Yet authority remains the foundation of law, governance, and institutional trust.
The problem itself is ancient. Clay tablets in the ancient Near East recorded obligations, mandates, and relationships of authority long before modern states developed formal identity registries. Those systems were imperfect, but they recognised something fundamental: identity was inseparable from the capacity to act within a recognised web of obligations.
Digital systems are beginning to rediscover the same problem.
The dating profile created by an autonomous agent is a relatively harmless failure of mandate architecture.
The systems being built on the same foundations are not.
This essay forms part of an ongoing research series exploring identity, authority, and governance in digital systems.
Explore the work:
SILT Core specification and documentation
https://siltcore.org
Technology, governance, and identity projects
https://www.garethfarry.com/technology
Advisory and engagements
https://www.garethfarry.com/advisory
Human Rights DAO pilot
https://www.amnesty.org.nz/dao