
The myth always begins with the breakthrough technology—the faster chip, HTTP and the World Wide Web, the blockchain, the LLM. This is the story of the tech cycle: "This time, the technology will be so advanced, so sophisticated, the entire game will change."
In some sense, the game does change, but not in the way people expect. While the technology does impress and valuations boom, value soon begins to accrue elsewhere: not to the shiniest data centers, not to the best algorithms, but to whoever reaches the end user, the last mile.
The combined market capitalization of Cisco, Motorola Solutions, Nokia, and Ericsson, four of the largest telecommunications equipment providers in the world, is roughly 20% of Meta's alone. Meta is worth five times as much, even though without these telecom providers its services quite literally would not reach the vast majority of its user base.
TSMC, despite holding roughly 65% of the global chip foundry market and a 90%+ share of the most advanced chips, is worth about 25% of Microsoft's $3.7 trillion valuation, or 45% of Alphabet's $2.2 trillion price tag.
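The arithmetic behind these comparisons is easy to sanity-check. A minimal sketch in Python; the market-cap figures below are our approximate assumptions at the time of writing and will drift daily:

```python
# Rough sanity check of the market-cap ratios cited above. All figures
# are approximate USD market caps in billions (assumptions for
# illustration only; they change daily).
caps_billion = {
    "Cisco": 270,
    "Motorola Solutions": 70,
    "Nokia": 30,
    "Ericsson": 25,
    "Meta": 1900,
    "TSMC": 950,
    "Microsoft": 3700,
    "Alphabet": 2200,
}

telecom = sum(caps_billion[name] for name in
              ("Cisco", "Motorola Solutions", "Nokia", "Ericsson"))
print(f"Telecom equipment combined: ${telecom}B")
print(f"Share of Meta alone:  {telecom / caps_billion['Meta']:.0%}")        # ~21%
print(f"TSMC vs Microsoft:    {caps_billion['TSMC'] / caps_billion['Microsoft']:.0%}")  # ~26%
print(f"TSMC vs Alphabet:     {caps_billion['TSMC'] / caps_billion['Alphabet']:.0%}")   # ~43%
```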
The undisputed tech leaders of the last decade—Apple, Meta, Microsoft, Alphabet, Netflix, and Amazon—all had one thing in common: proximity to the final user.
These winners weren’t necessarily the most innovative protocols or the most advanced codebases. They were the ones that embedded themselves at the point of contact—where humans made choices, moved money, or searched for meaning. It wasn’t Google’s algorithm that created its dominance—it was the browser default. It wasn’t Facebook’s tech stack that made it powerful—it was the social graph, rendered legible and addictive.
While the earnings potential of the Web2 giants was unquestionable, often their methods weren't. Monetization of personal data and complete user lock-in led many to look for a better way forward. Blockchain and the transition to Web3 promised (and still promise) a future of interoperability and user sovereignty.
The emergence of open social media protocols like Farcaster and the still-powerful trend of self-custody for crypto assets may lead us to ask: "If last-mile providers can't lock users in or sell their data, where will value accrue in Web3?"
In 2016, Joel Monegro introduced us to the seminal Fat Protocol Thesis. Now a partner at Placeholder VC, Monegro brilliantly outlined many of the key, and very real, architectural differences between Web2 and Web3. Reduced to its core, the insight is that in Web3, user authorization, data storage, and even much of the compute can be carried out by the shared protocol, while in Web2 only data transmission occurred on shared rails.

Fundamentally, more work is done by shared infrastructure in Web3. Monegro and many others then conclude that because more work is done there, more value can accrue there. If we look at CoinGecko, or another market-cap aggregator, it seems as though the thesis holds! Of the top 20 token valuations, 15 belong to layer 1 or layer 2 protocols, indisputably part of the shared protocol layer with no direct connection to a final user interface.
But now let's peek behind the curtain: are these prices indicative of value captured, or of simple speculative momentum? Money piles in after earlier money and excitement, once again believing that this time the technology is so advanced that everything will be different.
If instead of looking at market cap we look at revenue, we see a picture that's almost perfectly inverted. Of the top 20 Web3 projects by revenue, only 4 are actual blockchains, and the highest-earning chain, Hyperliquid, in fact maintains its own end-user interface.

There is admittedly much nuance to the distinction between a protocol and an application, especially in Web3. While it can be argued that decentralized exchanges, lending protocols, and the like have protocol-like qualities, being permissionless and accessible to other service providers, a more important factor is at play.
They all have direct connections with end users. They each maintain interfaces that are accessed directly by the ultimate source of capital, the final user.
No matter how impressive a technology may be, it exists on a curve. Today it dazzles; in a year it's replicated; in five years it's a commodity. Proprietary advantage fades as open-source architectures and abstraction layers emerge. This isn't a critique of any technology or of technologists; we're both technologists ourselves. It's a recognition of destiny.
At the end of it all, access to the user and their trust is what dictates the potential for value capture. The amount of work to be done for any particular business model is little more than incidental.
November 2022, the ChatGPT moment! Never mind that the first AI winter came in the 1970s (and we've weathered others since), or that the seminal transformer paper was published back in 2017; this was the point from which technology would never be the same.
There would be no more need for people to learn software engineering (though software engineering has never really been about memorizing how to write code anyway). We'd start to see unicorns with a headcount of one; after all, Telegram already runs on fewer than 50 employees, and Tether on roughly 100.
Large language models represent an undeniable technology breakthrough, but so did dozens and dozens of other new technologies over the past 75 years. Technologies, like empires, always aspire to permanence. But most are destined to give way to the next. They get standardized, modularized, priced into invisibility. Storage was once a differentiator. So was compute. Bandwidth. The compiler... For a brief moment, such layers held the illusion of permanence. Then they receded into utility.
OpenAI was recently valued at $300 billion and Anthropic at $61 billion, while Meta just bought a 49% stake in Scale AI at a $29 billion valuation.
These eye-popping numbers, though, hide two important facts about the companies at the base layer of the AI revolution. First, it's been more than 30 years since we've seen costs this high for the hottest startups. Whether it's hundreds of millions for new GPU data centers or ten million to hire a single engineer, the costs of AI model and infrastructure development have no real parallel in the tech sector today. Second, the models are already converging, and as anyone who builds AI agents can tell you, good data matters much more than a good model. Properly setting up the data you share with your agent, and the environment you construct around it, will almost always affect results far more than switching from one model to another.
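To make that concrete, here is a minimal, hypothetical sketch of an agent harness; `build_context`, `run_agent`, and the `call_llm` stub are illustrative names of our own, not any specific vendor's API. The point is structural: the model is a single swappable string, while assembling the data and environment is where the engineering actually lives.

```python
def call_llm(model: str, prompt: str) -> str:
    # Stand-in for a real provider call so this sketch runs offline;
    # swap in your actual client here.
    return f"[{model}] would respond to:\n{prompt}"

def build_context(task: str, documents: list[str]) -> str:
    """Assemble the data and environment the agent reasons over.
    In practice this is retrieval, cleaning, and framing; it is
    where most of the final answer's quality is determined."""
    words = set(task.lower().split())
    relevant = [d for d in documents if words & set(d.lower().split())]
    return (
        "You are operating on the following verified records:\n"
        + "\n".join(f"- {d}" for d in relevant)
        + f"\n\nTask: {task}\nCite only the records above."
    )

def run_agent(task: str, documents: list[str], model: str = "model-a") -> str:
    prompt = build_context(task, documents)
    # Changing `model` here usually moves results far less than
    # improving build_context above.
    return call_llm(model=model, prompt=prompt)

print(run_agent("summarize Q3 revenue",
                ["Q3 revenue was $12M", "Office plants were watered"]))
```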
You can copy weights. You can fork the code. You can download yesterday’s state-of-the-art and run it on consumer hardware. But you can’t fork timing. You can’t copy trust. You can’t clone a network’s willingness to act in concert. That layer—just above cognition, just below action—is where the moat begins.
Coordination is messy. It isn't unanimity achieved by vote or by contract. It's the choreography of partially aligned actors navigating shared purpose, each with their own incentives, languages, and thresholds for risk. It's the domain where reputation holds weight, where brand matters, and where the feeling your users get means more than the most elegant codebase.
This is where margin survives. Not in how brilliant the technology is, but in how it’s used—when, by whom, and toward what end. And the systems that quietly structure this flow, without shouting for attention, are the ones that accumulate over time. They don’t need to be the smartest. They just need to be the ones everyone uses when it matters.
"The language changes, the medium updates, the hype cycles evolve—but the topology of value remains stubbornly familiar."
Chris Georgen and Dhivesh Govender