
Every iteration of builders thinks they've solved digital freedom. Every iteration recreates the same power structures they tried to escape.
February 1996: John Perry Barlow declares cyberspace naturally free from governmental tyranny. Within months, 5,000 websites host his manifesto. He becomes "the Thomas Jefferson of Cyberspace." He's wrong about almost everything.
But here's what nobody tells you: while Barlow was writing poetry in Davos, Eric Hughes had already been writing code for three years. The Cypherpunk Manifesto came first, March 1993. Hughes understood what Barlow didn't: you defeat power with infrastructure, not declarations.
Then in 1999, Lawrence Lessig watched both approaches fail and wrote Code and Other Laws of Cyberspace. He correctly predicted the internet would become "the most regulable space of human activity." He was diagnostically perfect, but it didn't change anything.
Twenty-five years later, in 2024, Chris Dixon published Read Write Own, explaining why platforms inevitably betray users through "attract then extract." He's right about the economics, but most token projects still centralize anyway.
And then in November 2025, Vitalik Buterin, Yoav Weiss, and Marissa Posner published the Trustless Manifesto. It says something none of the previous iterations said: You're all measuring the wrong thing.
Five iterations. Thirty years. Billions of dollars. And we keep making the same mistake. Here's what each iteration got right, what they missed, and what it means for anyone trying to build something that won't re-centralize.
Barlow declared that cyberspace was naturally independent of governmental tyranny because sovereignty requires physical coercion, and "our identities have no bodies." His Declaration was intoxicating—within three months, 5,000 websites hosted it; within nine months, 40,000. He perfectly captured the early internet's promise: a genuinely new kind of space where power couldn't follow, where geography didn't matter, where information wanted to be free.
The early internet was temporarily ungovernable. The architecture made identification hard, surveillance expensive, and borders meaningless. For a brief moment, it felt like freedom was natural to the medium itself. But Barlow only understood power as physical coercion. He didn't see that architecture itself is governance, that code shapes what's possible before you even make a choice. He confused a temporary condition—the early internet's accidental architecture—for a permanent truth.
By 2002, only 20,000 sites still hosted the Declaration. Reality had caught up. The internet got regulated—just not by governments at first. Corporate platforms, spam filters, DNS control, payment rails, app stores. Even Barlow admitted in 2006 that he should have been clearer about cyberspace's "mind/body interdependence."
The lesson: Freedom isn't natural. It's designed. And what's designed can be redesigned.
Eric Hughes published his manifesto three years before Barlow's Declaration. While Barlow was theorizing about naturally free cyberspace, Hughes was building the infrastructure that might actually make it free. "We the Cypherpunks are dedicated to building anonymous systems," he wrote. "We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money. Cypherpunks write code."
Hughes understood something crucial: you defeat power with infrastructure, not poetry. He correctly identified privacy as a collective action problem—"Privacy only extends so far as the cooperation of one's fellows in society." You need critical mass adoption, not just ideological purity. And technically, he was right about everything. PGP works. Tor works. The cryptographic toolkit for privacy is mathematically sound.
But technical possibility doesn't equal social adoption. PGP was cryptographically perfect and socially dead. Privacy tools have network effects but brutal cold starts—nobody benefits from being the only one using encrypted email. The UX tax compounds the coordination problem. Signal is more private than WhatsApp. WhatsApp has 2 billion users. That's not a marketing problem; it's a coordination problem Hughes never solved.
The lesson: Building alternatives is necessary but insufficient. You need mechanisms that solve the cold-start problem. Satoshi understood this—Bitcoin provides immediate individual value without requiring counterparty adoption. That's why Bitcoin succeeded where PGP failed.
By 1999, both approaches had been tried. Barlow's ideological declaration. Hughes' technical infrastructure. The internet was still centralizing. Lawrence Lessig wrote Code and Other Laws of Cyberspace to explain why.
His "pathetic dot theory" showed four forces that regulate behavior: law, norms, market, and architecture. He warned that commerce would build "architectures of identification" to enable transactions, and those same architectures would make the internet more regulable, not less. His prediction that the internet would become "the most regulable space of human activity" was perfect. Facebook's real-name policy, Apple's app store chokepoint, Google's tracking ecosystem, China's Great Firewall—Lessig saw it all coming.
The crucial insight: when cyber-libertarians say "keep government out," they're not choosing freedom—they're choosing private sector regulation over public sector regulation. Commerce needs identification systems, identification systems enable surveillance, and surveillance enables control. The architecture was always going to be regulated. The question was by whom.
But Lessig's prescription—democratic governance of code through law and regulation—assumed competent, benevolent regulators who could move at technology's speed without regulatory capture. In practice, the CAN-SPAM Act legalized spam instead of stopping it. The Patriot Act expanded surveillance. Tech lobbying crushed net neutrality. Knowing why platforms centralize doesn't stop them from centralizing if you can't build working alternatives.
The lesson: Diagnosis without viable treatment is sophisticated pessimism. You can't regulate your way to freedom when the regulatory system itself is captured.
Twenty-five years after Lessig diagnosed the problem, Chris Dixon finally brought economic thinking to the solution. He doesn't appeal to ideology (Barlow), technology alone (Hughes), or regulation (Lessig). He designs incentive mechanisms.
Platforms follow a predictable "attract then extract" lifecycle. First, they subsidize to build network effects, then they jack up take rates and change rules once you're locked in. Dixon's diagnosis is perfect—YouTube changed creator revenue splits, Twitter closed its API, and Facebook pivoted to pay-to-reach. Every major platform followed this exact pattern because it's how equity-backed platforms are structurally designed to work.
His solution: give users ownership through tokens so the community controls the platform. If users own governance tokens, they can vote to keep the platform open. If users own value tokens, they benefit from platform growth instead of VCs alone. Ownership aligns incentives in ways previous generations missed.
But ownership distributes economic upside; it doesn't guarantee good governance. Look at the reality: voter turnout typically ranges from 5% to 17% of token holders, and many DAOs struggle to hit even 10%. Whale concentration is extreme: in some major DAOs, less than 1% of holders control 90% of the voting power. Real decisions happen in Discord; on-chain voting is theater. Teams keep "temporary" admin keys indefinitely. Sometimes ownership just means you own a piece of something that still acts like a centralized company.
The lesson: Ownership is necessary but insufficient. You can't govern your way out of bad architecture.
The Trustless Manifesto says: you're all optimizing the wrong variable. Don't minimize governance. Minimize trust. Measure success not by transactions per second, but by trust eliminated per transaction.
It synthesizes all previous iterations. It's not about whether cyberspace is "naturally" free (Barlow), or just about building tools (Hughes), or regulating architecture (Lessig), or only about token ownership (Dixon). It's about architectural constraints that make centralization structurally impossible—not just economically irrational.
Let me tell you the most devastating case study in the history of open protocols: email. It was once fully open—anyone could run their own mail server. The protocol was permissionless. In theory, you can still run your own server today.
Try it. Set up a mail server. Send mail to Gmail addresses. Watch what happens. Gmail will flag you as spam. Your IP gets blacklisted. Reputation systems make it nearly impossible for self-hosted email to reach inboxes. The protocol stayed open. Participation became impractical.
This is what the manifesto calls "de facto centralization." The protocol doesn't get captured—the access layer gets captured. Email is technically permissionless. Practically, it's gatekept.
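The gatekeeping is even scriptable. Before self-hosted mail ever leaves the queue, operators routinely check whether their IP sits on a DNS-based blocklist (DNSBL). A minimal Python sketch, using the Spamhaus ZEN zone as the example blocklist (the zone name is illustrative; any DNSBL is queried the same way):

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build a DNSBL query name: reverse the IPv4 octets, append the zone.
    e.g. 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org"""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """An A record in the blocklist zone means the IP is listed;
    a resolver error (NXDOMAIN) means it is not."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

The query is just DNS: an answer back means you're listed, NXDOMAIN means you're clean. And none of it changes the deeper problem, because each large provider also runs private reputation systems you can't query at all.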
This is happening to Web3 right now. Ethereum is decentralized, but if your dapp only works with Infura or Alchemy, centralized access layer. If AWS, GCP, or Cloudflare went dark, most "decentralized" apps would too. Hosted RPCs are the default. Few users run nodes. "Decentralized" frontends hosted on Vercel pointing to centralized backends.
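One partial mitigation is refusing a single default: if a client rotates across several RPC endpoints, no one provider is indispensable. A minimal sketch with placeholder endpoint URLs (a real client would also cross-check responses rather than trust whichever endpoint answers first):

```python
def call_with_fallback(endpoints, request):
    """Try each RPC endpoint in order and return the first successful
    response. `request` performs the actual JSON-RPC call for a URL;
    any exception (outage, rate limit, bad gateway) moves to the next."""
    last_error = None
    for url in endpoints:
        try:
            return request(url)
        except Exception as err:
            last_error = err
    raise RuntimeError(f"all {len(endpoints)} endpoints failed") from last_error

# Hypothetical endpoint list: your own node first, hosted providers as backup.
ENDPOINTS = [
    "http://localhost:8545",            # self-hosted node
    "https://rpc.provider-a.example",   # hosted fallback (placeholder URL)
    "https://rpc.provider-b.example",
]
```

The ordering is the point: the self-hosted node is the default and the hosted providers are the fallback, not the other way around.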
The drift isn't theoretical. It's here. The manifesto nails it: "Trust does not return all at once. It returns through defaults slowly. Each choice feels harmless, temporary—not like centralization. No capture, no coup—just comfort. Help becomes habit; habit becomes dependence."
Systems drift toward centralization through convenience, not because the protocol changed, but because practical participation became impossible for normal users.
The fifth iteration gives us architectural constraints—laws that make centralization structurally impossible.
Law 1: No Critical Secrets
No step of a protocol should depend on private information held by a single actor—except the user themselves. The test: if this entity disappeared tomorrow with its secrets, would the system still work?
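One established way to satisfy this law is threshold secret sharing: split a key so that no single actor ever holds it, and any k of n participants can reconstruct it. A toy Shamir sketch over a small prime field (illustrative only; production systems use vetted libraries and far larger parameters):

```python
import random

P = 2**61 - 1  # small Mersenne prime; real deployments use larger fields

def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares such that any k reconstruct it:
    evaluate a random degree-(k-1) polynomial whose constant term
    is the secret at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine_shares(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any k shareholders together pass the disappearance test; any k-1 learn nothing about the secret. No single actor is critical.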
Law 2: No Indispensable Intermediaries
Anyone who forwards, executes, or attests must be replaceable by any other participant following the same rules. The crucial addition: "'Anyone can run one' is not enough—participation must be practically open, not reserved for those with servers, funding, and DevOps skills."
The test: Can a random developer with $100 and basic skills actually replace this intermediary? This is the difference between theoretical and practical openness. Email passes the first test (anyone can run a mail server) but fails the second (good luck getting Gmail to accept your mail).
Law 3: No Unverifiable Outcomes
Every effect on state must be reproducible and checkable from public data. The test: Can I independently verify this outcome without trusting anyone?
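A concrete instance of a verifiable outcome is a Merkle inclusion proof: from public data alone (the root, the leaf, and a sibling path), anyone can recheck that a leaf is in the set. A minimal SHA-256 sketch (the duplicate-last-node and left/right conventions here are one choice among several in use):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root, duplicating the last node on odd levels."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root, each with an is-left flag."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from public data alone -- no trusted party."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root
```

The verifier needs nothing but the proof and the published root; whoever produced the proof cannot fake membership without breaking SHA-256.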
These aren't principles—they're architectural constraints. They make centralization structurally impossible, not just economically irrational. The manifesto is honest about the cost: "Trustlessness is expensive. It requires redundancy, openness, and complexity. It requires mempools that anyone can use, even if that invites spam. It requires clients that anyone can run, even if few will."
Most teams won't pay this cost. VCs pressure for growth. Users want convenience. Trustless design is slower, more complex, lower throughput. The manifesto says: "If simplicity comes from trust, it is not simplicity. It is surrender."
This is why most "Web3" projects are really Web2.5. They want the marketing of decentralization with the convenience of centralization.
Five iterations. One progression.
Barlow thought freedom was natural. It's not—it's designed. Hughes thought code was enough. It's not—you need adoption mechanisms. Lessig thought democratic governance could constrain private power. It can't, not reliably. Dixon thought ownership would prevent betrayal. It helps, but governance is hard.
The Trustless Manifesto says: minimize the need for governance in the first place. Build systems where trust is architecturally unnecessary.
That's the evolution. From ideology to infrastructure to regulation to mechanism design to trust minimization. The only way to break a coordination monopoly is to build infrastructure that makes alternatives cheaper than the incumbent. Not just possible. Cheaper. Easier. More reliable.
That's what trustlessness actually means. Not just "don't trust, verify." But "trust is architecturally unnecessary."
The question isn't "who governs?" The question is: "What can we make ungovernable?" Not ungovernable by removing control—ungovernable because control itself is structurally impossible.
That's what almost nobody is building. And that's what determines whether your project exists in 10 years or becomes the next email—technically open, practically gatekept.
You can't break a coordination monopoly by making it slightly more expensive to coordinate. You break it by making coordination impossible to monopolize.
You've seen five attempts to break coordination monopolies. The Collapse of the Coordination Monopoly explains what they're fighting—how platforms capture markets by controlling coordination, and why trustless systems finally change the rules. https://paragraph.com/@jonathancolton.eth/five-iterations-of-digital-freedom
Referenced Works:
A Declaration of the Independence of Cyberspace - John Perry Barlow (1996)
A Cypherpunk's Manifesto - Eric Hughes (1993)
Code and Other Laws of Cyberspace - Lawrence Lessig (1999)
Read Write Own - Chris Dixon (2024)
The Trustless Manifesto - Vitalik Buterin, Yoav Weiss, Marissa Posner (2025)
Jonathan Colton