
All of our societal infrastructure runs on some version of trust. Coordination, collaboration, organizations, and governance all depend on people and systems relying on one another's behavior. Today, these systems are under strain. AI is introducing non-human agents into coordination at every level, and the institutional foundations of the existing world order are shifting beneath us. To build coordination infrastructure that can withstand what comes next, we will need a new engineering discipline of trust.
However, trust itself remains poorly defined. "Trustless," "trusted," "trust assumptions," "trust-minimized" — these terms get thrown around constantly in coordination discourse, used loosely and often meaning something different each time. Even scholars can't agree: Williamson (1993) calls calculative trust a contradiction in terms, Gambetta (1988) models trust as subjective probability, and Rousseau and colleagues (1998) conclude that deterrence isn't trust at all.
An engineering discipline requires a clear, actionable model of what trust is. This article introduces just that.
I propose that:
Trust is a prediction — a principal's probabilistic forecast of an agent's behavior, strong enough to guide a decision.
Trust is about resources — it concerns the principal's resources that may be impacted by the agent's behavior.
Trust is contextual — always scoped to an agent, a domain, and an environment.
Trust has two components — informed by what the principal knows about the agent (which I'll call pistis, borrowing from Jordan Hall) and how the environment shapes the agent's behavior (hardness, borrowing from Josh Stark).
Trust is measurable in resource value — quantified as the maximum resource exposure a principal would accept in a given context.
My goal with this model is to make trust not just describable but also engineerable.
Any time you lend a friend your car, grant a team authority over a budget, or enter a partnership, you are placing your resources under someone else's discretionary control. That is delegation: one party (the principal) relying on another (the agent) to act in a way that (hopefully) furthers the principal's interests. Delegation can be hierarchical, like a board appointing a CEO; or reciprocal, like cofounders splitting responsibilities, in which case both parties are simultaneously principal and agent.
Since delegation exposes a principal's resources to risk, they will delegate only if they are sufficiently confident that their agent won't abuse their resources or if the potential payoff is worth the risk. The latter is important and warrants an investigation of its own, but our focus in this article is on the former. The principal's confidence is a forecast: a prediction of the agent's behavior within the context of the delegation. That prediction is trust.
There is precedent for this view. Gambetta, notably, defines trust as "a particular level of the subjective probability with which an agent assesses that another agent will perform a particular action, both before he can monitor such action and in a context in which it affects his own action." Trust matters precisely because it drives the principal's decision about what to delegate, how much, and to whom.
Those decisions — what to delegate, how much, and to whom — are decisions about resources. A principal doesn’t predict an agent’s behavior in the abstract; they care about behavior because of its impact on their interests. To give this precision, I model everything the principal cares about, including everything that may be at stake in a delegation, as a resource. The prediction that constitutes trust is, fundamentally, a prediction about what will happen to the principal’s resources as a result of the delegation.
What counts as a resource? In my model, nearly everything: physical goods like foodstuffs and building materials, digital goods like software and information, conceptual goods like money and intellectual property, and relational goods like reputation and privacy. What makes them all resources is that the principal values them, and the agent's behavior can affect that value.1 And in every case, the principal is asking the same thing: can I trust you with these resources? My car? This data? My child’s safety?
Importantly, the resources at stake extend beyond those the principal explicitly delegates. Delegating access to electronic data may not risk the data itself, but the agent could use that access to damage other resources the principal values. This full footprint is the principal's resource exposure: everything at stake as a result of the delegation, not just the budget, data access, or materials handed over.
As a principal's prediction, trust is necessarily perspectival: it is the principal's perspective about a specific agent within a specific context. Alice's trust in Bob to cook dinner is a different prediction from her trust in Bob to manage a treasury. As Hardin (2002) puts it: A trusts B to do X. Saying "I trust you" is always shorthand — with what? and to do what? The full statement specifies the agent, the domain, and the conditions.
We can break down context into two distinct components: a domain (the subject of the delegation) and an environment (external conditions). The domain is the objectives and tasks the principal wants the agent to complete on their behalf, together with the resources the principal delegates to accomplish them — which, as we established in the previous section, are what the trust prediction is ultimately about.
Abstractly, the domain comprises the set of potential actions the agent may take that are either enabled by the delegated resources or may impact the principal's resources at stake. Our earlier examples of delegation were primarily framed in terms of domain: driving a car, managing a budget, and cooking dinner. Consider entering a partnership: in this reciprocal delegation, two individuals endeavor to collaborate towards a common goal. That goal, and the resources they pool to achieve it, constitutes the domain.
The environment is the set of external conditions that shape which actions an agent can or is likely to take in relation to the domain. The reciprocal delegation of a partnership is forged with legal documents that define the bounds of the partnership itself: what each partner is allowed or not allowed to do, and how those rules are enforced. Those mechanisms are part of the environment.

Thus far, we've established that trust is a prediction, and that this prediction is made in relation to a context composed of both domain and environment. It follows, then, that trust must be informed by both. The principal's read on the agent matters, and so do the environmental conditions that shape the behavior of any agent in that position. This is where my model parts ways with much of the trust literature.2
Williamson argues that "calculative trust" (reliance based on rational incentives) is a contradiction in terms. Rousseau et al. go further: they conclude that "deterrence" (compliance enforced by consequences) is not trust and exclude it from their model. On this view, environmental constraints are a substitute for trust — control rather than confidence. I disagree: if trust is the prediction, and the environment changes the prediction, you cannot exclude the environment's contribution. The prediction is trust; you cannot separate the output from one of its components.
Consider a familiar case: you meet someone and learn they attended your alma mater. You immediately trust them a little more. Most people would call this social trust — a sense of shared identity, common experience, maybe even shared values. And some of that is real. But much of what's actually at work here is the web of mutual connections you likely share: second- and third-degree relationships that create latent reputational accountability. If this person wrongs you, word travels through a network that matters to both of you. At worst, you know how to find them. That influence on their behavior is environmental, not personal; and it is doing more work than many people realize.
This environment — a shared alma mater — is just as important as the actual personal relationship that develops between those two people. An actionable model of trust needs to account for both, separately, so we can reason about how they combine and how to strengthen each one.
The first component of trust draws from the principal's assessment of the agent itself: who is this person? what is their nature? Borrowing from Jordan Hall (2026), I call this pistis.
Pistis corresponds to the delegation domain. Adapting Mayer, Davis, and Schoorman's (1995) well-known framework, a principal's pistis for an agent reflects their assessment of the agent's competence, integrity, and intrinsic alignment (shared values, identity, mission). Track record, judgment under pressure, character: these are all properties of the agent as the principal perceives them, not of the environment. Reputation in this sense serves as an information input to pistis: a compressed record of how the agent has behaved in prior contexts.
Hall's description of pistis is excellent:
[The] capacity to enter into a relationship of calibrated mutual reliance — a relationship grounded neither in naive hope nor pragmatic contract but in demonstrated reliability, transparent action, and progressive deepening. [...] You decide to work together on something small. They deliver. Or they don't. Over months, maybe years, you build a picture of who they are. But that picture is always partial, always mediated by narrative — theirs and yours. People perform reliability. People perform transparency. And the performance is sometimes indistinguishable from the real thing until the moment it isn't.
Once pistis exists between two parties, the marginal cost of creating trust is very low. Teams with strong pistis move fast precisely because they don't burn energy on verification. But pistis is formed through direct experience, and it is expensive to build. As Hall describes, it is slow to form, fragile, and it doesn't scale easily. Dunbar's number is essentially the point at which we run out of bandwidth for building and maintaining pistis.
The second component of trust is derived from the environmental factors that shape an agent's behavior, regardless of who the agent is. Borrowing from Stark (2022), I call this hardness.3 Where pistis asks "who is this agent?", hardness asks "what could/would any agent in this position do?"
Hardness ranges on a continuum from soft to hard: from social norms and reputational accountability, through structural incentives (token stakes, collateral, profit-sharing) and institutional mechanisms (laws, contracts, professional licensing), to physical laws and smart contracts.
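For illustration, that continuum can be represented as an ordered type. The sketch below is in Python; the `HardnessKind` enum and its groupings are my own illustrative arrangement of the mechanisms just listed, not a canonical taxonomy.

```python
# An illustrative ordering of hardness mechanisms from soft to hard.
# The enum is a hypothetical representation for discussion, not a
# canonical taxonomy.

from enum import IntEnum

class HardnessKind(IntEnum):
    SOCIAL_NORM = 1           # reputational accountability, shared networks
    STRUCTURAL_INCENTIVE = 2  # token stakes, collateral, profit-sharing
    INSTITUTIONAL = 3         # laws, contracts, professional licensing
    PHYSICAL_OR_CRYPTO = 4    # physical laws, smart contracts

# The ordering captures only relative "hardness", nothing else:
assert HardnessKind.SOCIAL_NORM < HardnessKind.PHYSICAL_OR_CRYPTO
```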
Some hardness is pre-existing — inherited from physics, geography, existing social networks, or legal jurisdiction. But much of it can be engineered. Stark notes that humanity has progressively learned to create hardness: first by harnessing atoms, then by building institutions, and now by creating blockchains and deploying smart contracts. Hardness is a design lever for engineering trust. You can choose how much hardness to build into an environment, what kind, and at what cost.
The cost structure of hardness varies across the spectrum. Institutional hardness can be expensive to establish and enforce (lawyers are not cheap). Smart contracts, by contrast, can be deployed relatively cheaply and are essentially free to maintain. But since both mechanisms are agent-agnostic, hardness scales where pistis cannot.

This is the core of the model: trust is the aggregate of pistis and hardness, a single prediction composed of both inputs. To see why, consider how the two interact. Within a given delegation, pistis and hardness are economic substitutes. A principal can reach the same level of trust through different compositions: high pistis and low hardness, or the reverse, or any mix in between. For example, a malicious agent who bonds $50K in a slashable contract can still be trusted with up to $50K. Even though pistis is zero, hardness is high, so the principal can still trust the agent in this context.
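A minimal sketch makes the substitution concrete. It assumes trust capacity is simply additive in pistis and hardness, a simplifying assumption for illustration; `trust_capacity` is a hypothetical helper, not an established formula.

```python
# A minimal sketch, assuming trust capacity is additive in pistis and
# hardness, both denominated in resource value (here, dollars).

def trust_capacity(pistis: float, hardness: float) -> float:
    """Maximum resource exposure a principal would accept in this context."""
    return pistis + hardness

# Very different compositions can reach the same capacity:
trusted_friend = trust_capacity(pistis=50_000, hardness=0)
bonded_stranger = trust_capacity(pistis=0, hardness=50_000)  # slashable bond
assert trusted_friend == bonded_stranger == 50_000
```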
But zoom out across time, and the relationship changes: pistis and hardness are economic complements. Return to the fellow alumnus you met earlier. The shared alumni network — the latent reputational accountability that let you trust them a little more — is a form of hardness. That hardness makes it safe enough to collaborate on something small, creating the opportunity for pistis to form out of direct evidence of their competence and character. As pistis grows, you're willing to delegate more, and eventually to build new hardness together: more scalable structures that couldn't have been built without the relationship.
This is roughly how healthy institutions work: the structure creates the conditions for relationships to form, not the other way around. Hall describes this as a progressive ladder — "observe, coordinate, depend, bind" — each step earned and also reversible if proven unwarranted. My model provides a mechanism for that ladder: hardness lowers the cost of the first step, and pistis compounds from there.
Earlier we asked: can I trust you with my car? With this data? With my child's safety? But there's a further question lurking in each of these: how much is at stake? Can I trust you with $100? What about $100K? If trust is a prediction about the principal's resource exposure, then it has a natural unit: the value of those resources. Trust is the maximum resource exposure a principal would accept in a given context — the trust capacity of that context.
The $100 case is easy since money comes in its own units. But the same logic applies to any resource the principal values. What we want to measure is resource value:4 the value of a resource to the principal, in whatever terms the principal can estimate it. It is this valuation — not the resource’s type or natural units — that provides a common denomination. Just like trust, resource value is always perspectival and contextual. Even $100K means something different to a billionaire than to someone living paycheck to paycheck, and your privacy matters more to you in some contexts than others.
Resource value is quantifiable. Not because every resource has natural units, but because principals are always making comparative judgments about resources. You may not be able to put a precise dollar figure on your privacy, but every day you trade it off against other resources, like access to social media or e-commerce convenience.
We can measure both pistis and hardness in terms of resource value. Because of the direct relationship between a principal's resources and an agent's incentives, hardness is correlated with resource value. A $50K slashable bond covers $50K of resource value, while a confidentiality agreement protects the value of the information it covers. If you can value the resource (as principals always do, even if implicitly), you can measure the hardness.
Pistis is less obvious. But consider: you'd trust a close friend to watch your kids for an evening, but you might not extend the same social trust to a neighbor you've only met once or twice. In both scenarios, the same resource (your child's safety) is at stake. The difference, entirely in your assessment of the person, has a clear magnitude; and it's denominated in how much that resource is worth to you.
Both components are denominated in the value of the resources at stake. That commensurability is what makes trust engineerable: we can substitute one for the other — if pistis drops, compensate with more hardness — and we can identify gaps. If a delegation requires a certain trust capacity, how much do we already have from pistis and pre-existing hardness, and where could the rest come from?
As an illustration, consider Alice delegating $100K of treasury authority to Bob. Based on their working history and personal relationship, she estimates that she would be comfortable delegating up to $60K to Bob in the domain of treasury management in the absence of any environmental conditions; this is her pistis for Bob in this domain. Alice also requires that Bob post a $50K slashable bond. Together, $60K of pistis and $50K of hardness create sufficient trust capacity for Alice to make the delegation.
But now replace Bob with an unknown agent: pistis drops near zero, and the $50K bond alone isn't enough. To close the gap, Alice could add more hardness (such as a tighter spending cap or requiring that the new agent get approval from Alice for certain types of transactions), or invest time building pistis with the new agent.
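Under the same additive assumption, this gap analysis can be sketched directly. The figures reproduce the worked example above; `trust_gap` is a hypothetical helper.

```python
# Sketch of the gap analysis, reusing the additive assumption above.

def trust_gap(required: float, pistis: float, hardness: float) -> float:
    """Resource value not yet covered by pistis plus existing hardness."""
    return max(0.0, required - (pistis + hardness))

# Alice delegating $100K to Bob: $60K of pistis + a $50K bond suffices.
assert trust_gap(100_000, pistis=60_000, hardness=50_000) == 0.0

# An unknown agent with the same $50K bond leaves a $50K gap, to be
# closed with more hardness (spending caps, approvals) or new pistis.
assert trust_gap(100_000, pistis=0, hardness=50_000) == 50_000.0
```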
Let's look at one more example: a family choosing between two colleges for their child. At one school, the family has close ties to faculty and staff — people they've known for years and whose judgment they trust deeply. The school sits in a state with weak institutional oversight and limited liability for student welfare. At the other school, the family knows no one, but it operates under strict regulatory oversight, mandatory safety protocols, and serious legal exposure if a student is harmed through negligence. The compositions of pistis and hardness are nearly opposite in these two scenarios, yet both contexts can earn the family's trust.
The five propositions above define an atomic model of trust within a single delegation. That model is a primitive; a building block we can apply in many ways. One of the most revealing is to ask what happens in a delegation chain: Alice delegates to Bob, who delegates to Carol. What are the dynamics of trust across the links?
Each link is its own delegation. The agent becomes a principal themselves and forms a new prediction of their agent's behavior. But the two components of trust behave very differently across those links, in particular with respect to the principal's resource exposure. Pistis gives a principal confidence in their direct agent's behavior. But its relevance degrades with each nested delegation, because the domain gets narrower at each link. Alice's pistis for Bob in one domain does not tell her much about Carol in a narrower domain, even if Bob has pistis for Carol.
Consider a common corporate scenario. The CEO has high pistis for her VP in the broad domain of finance. The VP delegates a narrower sub-domain to a contractor. The CEO's pistis for the VP doesn't cover that narrower domain; it was never scoped to. So the CEO's resource exposure passes right through to the contractor. If the contractor shirks their responsibility, "my VP vouched for him" is cold comfort.
Hardness, on the other hand, does not degrade across delegations. A hard boundary at any point in the chain caps risk for everyone above it, regardless of what happens below. If the VP adds a robust contractor agreement with liability terms and a performance bond, the CEO's resource exposure is capped at that boundary, regardless of what happens downstream, how narrow the domain becomes, or whether the CEO has ever met the contractor. In the AI delegation literature, Tomasev et al. (2026) call these boundaries "liability firebreaks."
In a delegation network, pistis must be assessed at every link. Hardness applied at any link can be relied upon throughout the entire chain. The ability to model both components of trust is what makes trust designable at organizational scale.
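To illustrate the asymmetry, here is a sketch of how exposure might propagate through a chain when some links carry hard caps (liability firebreaks). The `exposure_through_chain` helper and its numbers are hypothetical; the point is only that a cap at any link bounds exposure for every principal above it, while pistis (deliberately absent here) would have to be re-assessed per link.

```python
# Sketch: a hardness cap at any link bounds the exposure of everyone
# above it; pistis is not modeled because it must be re-assessed per link.

def exposure_through_chain(delegated_value: float,
                           caps: list[float | None]) -> float:
    """Top principal's worst-case exposure after applying each link's cap.

    caps[i] is the hardness boundary engineered at link i,
    or None if that link has no firebreak.
    """
    exposure = delegated_value
    for cap in caps:
        if cap is not None:
            exposure = min(exposure, cap)
    return exposure

# CEO -> VP (no firebreak) -> contractor (agreement + bond cap at $20K):
assert exposure_through_chain(100_000, [None, 20_000]) == 20_000
# Without the firebreak, the full exposure passes straight through:
assert exposure_through_chain(100_000, [None, None]) == 100_000
```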

The model defines an atomic primitive of trust: a single delegation, decomposed into measurable, designable components. From that primitive, we can formalize the tradeoffs between pistis and hardness, design environments that produce the right composition for a given delegation, and reason about how trust propagates through networks. For any given system, we can specify what hardness to build, where to invest in pistis, and how much resource exposure each part of the system can bear. We can ask what a given trust composition costs and perhaps even predict its impact on the expected payoff of a delegation. These are design questions, and they are answerable. An actionable, quantitative model is the foundation for an engineering discipline of trust.
The model already reframes one of crypto's most misleading claims. "Trustless" does not mean the absence of trust. Rather, it means a trust composition dominated by hardness rather than pistis. Blockchains combine cryptography and economic incentives to produce trust with minimal pistis requirements. The distributed computing and economic stake involved in that combination entails massive fixed costs, but it yields a platform for cheap, programmable trust in the form of hardness.
The need for this kind of trust engineering is more acute than ever. Humans' shared biology, priors, norms, and evolved social intuitions give us enough common ground that many behaviors can be assumed without being designed for. AI agents do not share this constitution. For non-human agents, the predictability assessments that humans make implicitly must be explicitly designed for.
Society will need entirely new coordination infrastructure. Establishing an actionable model of trust is how we begin.
My thanks to Brennan Mulligan, Sara Norval, Nick Naraghi, Alex Murray, dancingpenguin.eth, Shelby Steidl, and Abram Dawson for their helpful comments on earlier drafts.
1. Resources here are not limited to instrumental goods. Anything the principal values (including ends in themselves like relationships and privacy) counts as a resource in this model, because delegation can put any of them at stake.
2. Tomasev et al. (2026) arrive at a compatible decomposition from the AI delegation literature, defining trust as the delegator's degree of belief in a delegatee's capability to execute a task in alignment with explicit constraints and implicit intent. Their "explicit constraints" map onto hardness; "implicit intent" maps onto pistis.
3. Hall arrives at a similar concept through a different lineage, calling it horkos — the ancient Greek oath-bond, the ritual counterpart to pistis. "Computers are incapable of making promises," Hall writes. "But they are constructed from oaths." I prefer Stark's label: hardness foregrounds the material and institutional spectrum, whereas horkos emphasizes the binding act. Both capture the insight that predictability can be built into the environment, but hardness better serves an engineering frame.
4. Resource value is closely related to utility in the economic sense. Both capture the idea of subjective valuation from a decision-maker's perspective. I use "resource value" to avoid the baggage of expected utility theory, which carries assumptions about rationality, cardinal measurement, and probability weighting that are unnecessary here.
