For those who advocate for Sowellian Governance, the argument for the model is obvious. Today's governance models primarily produce opaque, unaccountable decisions, where both the reasons a decision was made and the consequences of that choice go largely untracked. What determines whether a plan or policy continues is primarily whether it's palatably packaged and whether the right backs are rubbed, not what impact it promises, and certainly not a transparent accounting of the impacts it actually achieves.
The promise of Sowellian Governance is to structure governance explicitly toward the other pole: explicit statement of expected impacts, and ongoing, independent, credibly-neutral evaluation of those impacts.
The pattern most Sowellians adhere to is quite simple, at least in theory:
Establish the outcomes you want to achieve with your policy,
agree on key metrics of performance, covering both success and failure,
and then tie incentives to the achievement of those metrics.
Evaluated in isolation, these are largely unobjectionable properties for any governance system, and of course most governance models do each of these at least implicitly and often explicitly. The challenge, however, is that in an adversarial context each of these can be quite difficult to agree on, for manifold reasons. To name just a few:
Policy authors want to minimize their accountability for the policy's impact, or may be deeply uncertain about it
Governors don't want to impose undue burden through reporting requirements
It's rarely the case that a single simple performance metric is available; instead, a suite of metrics is required to evaluate success with accuracy
Metrics have to be carefully chosen so that they cannot be gamed or manipulated
and so on
It can be difficult enough to meaningfully choose these metrics when incentives are aligned (e.g. a CEO picking KPIs for their own company); in a decentralized and potentially adversarial context, the process becomes even more difficult.
The Negation Game is an experiment in a governance model we call Epistocracy, and through its key mechanic (namely, epistemic leverage, which we'll discuss below) it seeks to achieve the goals of Sowellian Governance even in the context of highly adversarial actors.
As of August 2025, the Negation Game exists as a simple web application that allows representative decision makers (in the context of DAO governance these are called 'delegates', and they're similar to democratic representatives) to construct and publish a graph of their reasoning on a particular topic so that it can be evaluated by other members of the organization or the community.
However, the Negation Game is best thought of not only as a tool for thought, but as an incentive model for implementing Sowellian Governance. You can read technical details here. What's most powerful about the Negation Game is the incentive model that underlies it, and for the sake of this post that's what we'll be referring to when we say 'the Negation Game'.
It's perhaps easiest to understand the mechanics of the Negation Game with a simple, real-life example.
The Security Audit Subsidy Program proposal initially hit the Scroll DAO governance forums in mid-June 2025. Scroll DAO is a decentralized governance body tasked with stewarding and growing the Scroll transaction network. The governance forums are the official home for most of the discussion of what proposals should be passed and what the DAO should be focused on.
To summarize the proposal, it suggests that the DAO should allocate up to half a million USD worth of funds from its treasury to pay for certain services that app makers building on top of the Scroll network frequently need but often cannot afford. In particular, the funding is allocated to security auditing services, which are vital for end user experience (otherwise, users could be at risk of losing funds due to bugs or exploits), but are quite expensive for the app developer. Overall, passing such a proposal should improve access to such services and therefore improve quality of applications for end users on the Scroll network.
From a Sowellian perspective, you would expect such a proposal to have at least these sorts of evaluation criteria:
100% of major applications building on Scroll have undergone a security audit
Major applications audited by the program have less than 1 major and 2 minor incidents
80% of applications funded by the proposal remain actively developed two years after the subsidy is provided.
However, you'll notice that the success metrics of the actual proposal are nowhere near as precise or informative as the above:
From a Sowellian perspective this is somewhat embarrassing. Not only are the success metrics toothless (if they don't hit those metrics, what happens? Nothing.) but they're also uninformative (50% of the projects generate revenue in 6 months? So, is $1 enough?). In the existing governance paradigm this is entirely unsurprising: there are no incentives for either delegates or proposal authors to think about such metrics.
The model you see above is what we'll call the staged voting model:
The proposal author (often someone with a financial interest in a proposal being passed) publishes a proposal to the forum
Delegates comment on the proposal and suggest edits
Delegates endorse the proposal; if enough of them endorse, it goes to a vote in the next voting cycle
Delegates and token holders vote on the proposal, passing it or not
The Negation Game model differs substantially:
The proposal author publishes a proposal, which is immediately "up for vote"
Delegates either endorse the proposal (staking their tokens), or they negate
When they negate, they offer arguments as to why they disagree, which they also stake, bringing down the score of that proposal
To garner more influence, the proposal author can "restake": use the tokens they have staked on the proposal passing to declare which success criteria they are willing to be constrained to; in exchange, they receive a higher score
Delegates can stake to indicate which of the restaked criteria they find informative; doing so also gives the proposal a higher score
When the time-weighted average score reaches a minimum threshold, the proposal executes and staked tokens convert to conditional shares in the DAO's future performance.
As criteria are resolved, tokenholders receive payouts proportional to how well the proposal meets its commitments — profiting from success or losing their stake if criteria fail.
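To make these steps concrete, here's a minimal sketch of the scoring loop in Python. The class, the additive arithmetic, and the threshold value are illustrative assumptions for this post, not the actual implementation (which, as noted, checks a time-weighted average rather than a single snapshot).

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    endorsed: float = 0.0       # tokens staked in favor (step 2)
    negated: float = 0.0        # tokens staked on negating arguments (step 3)
    restake_bonus: float = 0.0  # extra score from restaked success criteria (steps 4-5)

    def score(self) -> float:
        # Endorsements raise the score, negations lower it, and restaked
        # criteria add a bonus on top.
        return self.endorsed - self.negated + self.restake_bonus

EXECUTION_THRESHOLD = 100.0  # hypothetical minimum score

p = Proposal(endorsed=120.0, negated=60.0, restake_bonus=45.0)
if p.score() >= EXECUTION_THRESHOLD:
    print("proposal executes; stakes convert to conditional shares")
```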
Unlike the accountability-free staged voting model, this model creates an incentive for proposal creators and delegates to actively evaluate the likely outcomes of a proposal.
The key step to notice in this model is step 4, where the proposal author "restakes" the criteria that would change their mind.
This process of restaking is the key mechanism behind the Negation Game which differentiates it substantially from other governance models (including token voting and futarchy). We call this action of restaking epistemic leverage.
The way epistemic leverage works is that it permits a delegate to evaluate the metrics that might be relevant to a proposal, while leaving it up to the proposer to evaluate whether the metrics will be achieved. Essentially what the delegate is saying is, "these things would be really informative to me; if you really think you can achieve them (which you're willing to signal by staking against them), then I'm in favor of this proposal."
What's powerful about the Negation Game is that this need not be done in one single step. For example, rather than a delegate directly offering a criterion like the one above ("80% of applications funded by the proposal remain actively developed two years after the subsidy is provided."), they might start with a much simpler statement, something like, "Most apps given security subsidies will build with Scroll over the long term.", and say that they would increase their support of the proposal once some reasonable criterion is found for that statement. A subsequent process (detailed below) then finds that criterion and seeks their consent to it.
This radically simplifies the experience of evaluating a proposal as a delegate. Rather than evaluating the details of the proposal, I'm evaluating the outcomes I'd like to see, leaving it to the authors to determine whether their proposal is likely to achieve them, and leaving it to other participants to find a mutually agreeable definition of each criterion.
What's happening behind the scenes is that governance participants are staking three kinds of targets:
proposals themselves (e.g. "Pass the Security Subsidy Proposal")
the veracity of "criteria", known in the Negation Game as negations (e.g. the criterion in this case says, "Apps given security subsidies will build with Scroll over the long term")
the relevance of criteria, statements that indicate whether a criterion matters
On any and all of these, it's possible for a participant to restake. By restaking, they identify a statement that would change their mind if it were true, and they precommit to the amount by which it would change their mind. In exchange for doing this, their proposal receives a higher score.
So, faced with a statement like "Don't Enact the Security Subsidy Proposal", a good statement for the proposal author to restake might be "Apps given security subsidies will build with Scroll over the long term", because it's important to the network that apps stick around after receiving these funds. You can even imagine apps themselves staking a target like this, effectively offering to give up their staked funds if they were to leave the network early.
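Behind the scenes, each of these targets is just a node in a stake graph. Here's a minimal sketch of that structure, assuming a naive in-memory representation; the schema and field names are illustrative, not the application's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Target:
    text: str
    # participant -> amount staked on this target
    stakes: dict[str, float] = field(default_factory=dict)
    # participant -> (mind-changing statement, amount precommitted)
    restakes: dict[str, tuple[str, float]] = field(default_factory=dict)

# The three kinds of stakeable targets:
proposal  = Target("Pass the Security Subsidy Proposal")
criterion = Target("Apps given security subsidies will build with Scroll over the long term")
relevance = Target("The retention criterion is relevant to the Security Subsidy Proposal")

# The proposal author restakes: "this statement would change my mind, by this much."
proposal.restakes["author"] = (criterion.text, 50.0)
```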
The magnitude of the additional score they receive for restaking is affected by three factors:
The inferred probability of the restaked statement: highly probable statements grant more influence
The relevance score of the criterion: a high relevance score means more influence per amount restaked
The amount of doubt placed on their restake: to doubt is to bet that, even though a participant says they will change their mind, they actually won't
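One simple way to read these three factors is as a multiplicative composition. The sketch below assumes exactly that, since this post doesn't pin down the actual formula; treat it as a reading aid, not the implementation.

```python
def restake_leverage(amount: float,
                     p_statement: float,  # factor 1: inferred probability of the restaked statement
                     relevance: float,    # factor 2: relevance score of the criterion
                     doubted: float       # factor 3: amount of doubt placed on the restake
                     ) -> float:
    # Illustrative assumption: doubt eats into the effective restake,
    # and probability and relevance scale whatever survives.
    effective = max(amount - doubted, 0.0)
    return effective * p_statement * relevance

# A highly probable, highly relevant, lightly doubted restake earns the most leverage:
print(restake_leverage(100.0, p_statement=0.9, relevance=0.8, doubted=10.0))  # 64.8
```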
The first two components dictating the leverage a participant receives for restaking should feel familiar and unsurprising. The more I, as a proponent of something, tell you about the ways in which I could change my mind, and the more you perceive those as both important and likely, the more you're going to trust me — at least, the more you're going to trust me IF you think I'm actually the kind of person who changes their mind when they receive contradictory evidence. And that's what doubt, the final mechanic in epistemic leverage, is for: it's a kind of context-aware reputation-sensing mechanism for identifying credibly neutral, intellectually honest participants. To radically coarse-grain the explanation of the mechanism: by doubting you, I as a doubter reduce the score you get as a restaker, and I start to get paid out of your restake as long as the probability of the criterion is high (#1) and its relevance is high (#2). In other words, both the doubter and the restaker want a high P(criteria) and P(proposal | criteria); where they differ is that the restaker wants the doubter to fear that they will change their mind, and the doubter wants the restaker to be bull-headed in the face of a high P(criteria).
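Coarse-grained as that explanation is, the payoff structure can be sketched. Assume, purely for illustration, that doubt settles when the criterion resolves: if the criterion comes true but the restaker refuses to honor their precommitment, the doubter is paid out of the restake; otherwise the doubt is forfeited.

```python
def settle_doubt(restake: float, doubt: float,
                 criterion_true: bool, restaker_changed_mind: bool) -> float:
    """Return the doubter's payout, drawn from the restaker's stake.

    Illustrative assumption: the doubter profits only when the restaker
    proves bull-headed, i.e. the criterion resolves true but the restaker
    doesn't change their mind as precommitted.
    """
    if criterion_true and not restaker_changed_mind:
        return min(doubt, restake)  # doubter claims up to their doubt from the restake
    return 0.0                      # an honest restaker keeps their stake

# An intellectually honest restaker makes doubting unprofitable:
print(settle_doubt(restake=50.0, doubt=20.0, criterion_true=True, restaker_changed_mind=True))   # 0.0
print(settle_doubt(restake=50.0, doubt=20.0, criterion_true=True, restaker_changed_mind=False))  # 20.0
```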
This underlying incentive setup offers a number of truly beautiful emergent properties. Foremost among them is that it creates a pervasive incentive to be known as a neutral participant. If you're not a neutral participant, then doubting you can be profitable. This is a radically different arrangement from current DAO structures, where it's merely a norm that one abstains from voting on proposals one benefits from personally. Abstaining on personally beneficial proposals runs contrary to one's personal incentives and is easy to circumvent, because one can always create new wallets that vote pseudonymously on one's behalf. It's even more radically different from the incentives of most political systems, where political candidates are systematically rewarded for dogmatic commitment rather than epistemic sensitivity.
This incentive for neutrality is incredibly powerful, as it means that whereas delegation of power both in DAOs and in the world is currently handled on a per-identity basis (via delegation to professional delegates, via representative democracy), with epistemic leverage delegation of power becomes:
dynamic & context aware — a delegate's neutrality on this particular issue is the operative question that drives doubters, including sensitivity to their overall reputation for neutrality, which could include their historical activity or private information known only to the doubter
position neutral — it does not necessarily favor delegates that happened to be early to the DAO, a new participant could come in for just a single proposal and find themselves with significant epistemic leverage because of their knowledge or prior reputation from other contexts
competence sensitive — what distinguishes an expert from a layperson is the expert's knowledge of relevant considerations; and epistemic leverage directly rewards revelation of relevant considerations by participants, putting experts at an advantage without instituting a credentialing system.
Importantly, because this kind of reputation accrues positively, epistemic leverage has natural Sybil resistance properties for the same reason that expertise is Sybil resistant — you want to be uniquely known as an expert.
In addition to offering this incredibly powerful incentive to elicit neutrality, the incentive architecture offers other useful signals which can be extracted from the graph. Most important is the ability to lift double-crux information directly from the Negation Game's epistemic graph. Whenever a staker of a proposal restakes a criterion, and a staker of the inverse of that proposal restakes the inverse of that same criterion, we can have high confidence that the criterion must be informative, because both sides have indicated they would find it informative.
For example, if a staker of "Pass the Security Subsidy Proposal" says they would change their mind if "Apps given security subsidies won't build with Scroll over the long term", and (just as before) a staker of "Don't Enact the Security Subsidy Proposal" restakes "Apps given security subsidies will build with Scroll over the long term", then we know we should be paying attention to the long-term retention of security-subsidized apps, because both parties agreed it's important. We can use this information to directly inform the relevance between the proposal and the criterion because of the countervailing nature of the information, much in the same way that Community Notes uses surprising agreement to decide which notes to show: a note is shown when people who usually disagree both find it helpful.
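Detecting that agreement can be entirely mechanical. The sketch below models each statement as a (claim, polarity) pair and scans pairs of restakes for the countervailing pattern; this representation is an assumption for illustration, not the actual graph encoding.

```python
from itertools import combinations

# Each restake records (proposal side, criterion side) for one participant.
# Polarity +1 stakes the statement itself, -1 stakes its inverse.
restakes = [
    (("security-subsidy", +1), ("long-term-retention", -1)),  # "Pass" staker: mind changed if apps DON'T stick around
    (("security-subsidy", -1), ("long-term-retention", +1)),  # "Don't Enact" staker: mind changed if apps DO stick around
]

def double_cruxes(restakes):
    """Find criteria that opposing sides both restaked, in opposite directions."""
    found = set()
    for (prop_a, crit_a), (prop_b, crit_b) in combinations(restakes, 2):
        opposing_sides   = prop_a[0] == prop_b[0] and prop_a[1] == -prop_b[1]
        inverse_criteria = crit_a[0] == crit_b[0] and crit_a[1] == -crit_b[1]
        if opposing_sides and inverse_criteria:
            found.add(crit_a[0])  # both sides find this criterion informative
    return found

print(double_cruxes(restakes))  # {'long-term-retention'}
```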
With this context, the answer to a common question asked about the Negation Game should be clear. Because the relevance between proposals and criteria is the primary driver of decision making in this governance model, the natural question is, "How is relevance determined?" Now, the three (plus one) sources of relevance information should be clear:
Participants (including delegates and proposal creators) can directly stake the negation edges to increase the relevance between two points
Stakers of negation edges can restake, using epistemic leverage to increase the relevance score and giving more neutral parties more voice in the determination of relevance
Double-cruxes evident in countervailing restakes can automatically signal to the system to increase the score of the relevance, much in the same way that Community Notes uses surprising agreement
Bonus: this pattern repeats recursively on meta-relevance edges, accruing stakes, restakes, and double-cruxes as needed until convergence is reached.
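Put together, the relevance score of a negation edge could be assembled roughly as follows. The additive base and the crux multiplier are illustrative assumptions; the bonus recursion would simply apply this same function to stakes on the relevance edge itself.

```python
def relevance_score(direct_stake: float,      # source 1: tokens staked directly on the edge
                    restake_leverage: float,  # source 2: leverage from restakes on the edge
                    has_double_crux: bool,    # source 3: countervailing restakes detected
                    crux_multiplier: float = 1.5) -> float:
    # Sources 1 and 2 accumulate; a detected double-crux amplifies the result.
    base = direct_stake + restake_leverage
    return base * crux_multiplier if has_double_crux else base

print(relevance_score(30.0, 64.8, has_double_crux=True))  # 142.2
```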
And if we return to our original discussion of Sowellian Governance, evaluating the relevance of various metrics and then tying them to participants' incentives was the primary challenge the model faced. Now, with the Negation Game, it's possible to address these fundamental challenges through epistemic leverage.
The Negation Game provides a systematic solution to the core problems that make Sowellian Governance difficult to implement in adversarial contexts:
Accountability Without Gaming: Rather than policy authors minimizing their accountability, the restaking mechanism incentivizes them to commit to measurable outcomes in exchange for increased score. The more specific and challenging their commitments, the more influence they gain—but only if other participants believe these commitments are both relevant and achievable.
Distributed Metric Selection: Instead of governors imposing top-down reporting requirements, the relevance staking system allows the community to organically identify which metrics matter most. Double-crux detection automatically surfaces the criteria that opposing sides both consider informative, eliminating much of the political friction around metric selection.
Gaming Resistance Through Neutrality Incentives: The doubt mechanism creates ongoing incentives for intellectual honesty. Participants who consistently game metrics or fail to honor their restaked commitments become profitable targets for doubters, naturally filtering for genuine epistemic engagement over time.
Dynamic Expertise Recognition: Unlike traditional credentialing systems, epistemic leverage rewards domain knowledge contextually. Experts gain influence by demonstrating their understanding of relevant considerations, while maintaining the Sybil resistance properties that prevent manipulation through multiple identities.
The Negation Game represents more than just a novel voting mechanism—it's a comprehensive answer to the implementation challenges that have historically prevented Sowellian-style Governance from scaling beyond aligned contexts. By creating economic incentives for accountability, intellectual honesty, and expertise, it transforms adversarial governance from a coordination problem into a market for truth.
The Security Audit Subsidy Program example demonstrates this transformation in practice. Where traditional governance produced vague, unenforceable success metrics, the Negation Game's incentive structure would naturally drive toward the specific, measurable outcomes Sowell advocated: in this case, concrete audit completion rates, measurable security improvements, and verifiable long-term engagement metrics.
As decentralized organizations continue to grapple with the challenges of effective governance at scale, the Negation Game offers a path forward that doesn't require perfect actors or aligned incentives—only a willingness to put one's tokens where one's convictions lie. In doing so, it may finally deliver on the promise of governance systems that reward results over rhetoric, and evidence over ideology.
Connor McCormick