
The Sovereign Protocol Hiding in Plain Sight
I’m a true believer in crypto as a tool for global self-sovereignty. After shutting down Laguna Games, I came to the realization that I only want to work with people who share this ethos. I’ve reached a point in life where most of my net worth lives behind seed phrases, not bank logins. I’m no longer interested in spending time with tourists. Too many people have flooded the space chasing easy money and hype. I’m here for something deeper. The true cypherpunk spirit survives in small pockets ...

The Three Laws of LLMs
It's been a few weeks since my last post, and I'm feeling the weight of the challenge I've undertaken. How do I build an intelligent AI system I can trust implicitly? Against the backdrop of this hobby project, I've found myself increasingly reliant on centralized LLM tools for work at Codex, especially on the coding front. Re-reading my last post, I realize I was too optimistic and a bit naive. I greatly underestimated the complexity, both technically and practically. Yes...

What's Next?
On January 22, 2025, I closed the book on an 8-year journey. Rebooted in 2020 from Beyond Games, Laguna Games set out to build mobile free-to-play games, landed a publishing deal with WB, turned down an acqui-hire offer, and ultimately pivoted into the wild west of crypto gaming with Crypto Unicorns. It became my greatest success—and, in the end, my greatest failure. At its peak, Crypto Unicorns sold $25M in digital assets, hit $100M TVL in The Dark Forest: Act One, and launched a token at a ...




Since 2024 I’ve been circling Bittensor's ecosystem, trying to form a thesis around it. What initially grabbed me was the idea that you could incentivize useful work. Since its launch, Bitcoin has bootstrapped the world’s largest supercomputer by paying people to brute-force a math puzzle. The futurist in me loves the thought experiment of aiming that same power at intelligence instead.
Simple idea, but it turns out to be fucking complicated in practice.
At a high level, Bittensor splits the network into marketplaces called subnets. Each subnet focuses on a specific digital commodity tied to AI: price predictions, weather forecasts, code development, synthetic data, inference, and so on. Miners produce that commodity. Validators query miners, score their outputs as weights, and publish those weights onchain. Each subnet then runs its own flavor of Yuma Consensus over that matrix of weights to decide how TAO is distributed. The effect is less “who mined the most blocks” and more “who produced the most useful answers according to the judges this round.”
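To make the validator-weights-to-emissions flow concrete, here is a deliberately simplified sketch of the idea: validators publish a matrix of scores over miners, outlier scores get clipped toward a stake-weighted median, and the surviving scores become emission shares. This is an illustration of the general mechanism, not the actual Yuma Consensus implementation; the function name and the exact clipping rule are my own simplification.

```python
import numpy as np

def consensus_shares(weights, stake):
    """Toy stake-weighted consensus over a validator x miner weight matrix.

    For each miner, every validator's weight is clipped to the stake-weighted
    median across validators, which blunts outlier (possibly colluding) scores.
    weights: (V, M) array, row v = validator v's scores for M miners.
    stake:   (V,) array of validator stake, used as voting power.
    Returns each miner's share of emissions (sums to 1).
    """
    stake = stake / stake.sum()
    V, M = weights.shape
    consensus = np.zeros(M)
    for m in range(M):
        # stake-weighted median of column m
        order = np.argsort(weights[:, m])
        cum = np.cumsum(stake[order])
        consensus[m] = weights[order[np.searchsorted(cum, 0.5)], m]
    clipped = np.minimum(weights, consensus)   # cap praise above consensus
    miner_scores = stake @ clipped             # stake-weighted average
    return miner_scores / miner_scores.sum()   # normalize to emission shares

weights = np.array([[0.7, 0.3],
                    [0.6, 0.4],
                    [0.1, 0.9]])   # validator 3 disagrees sharply
stake = np.array([0.4, 0.4, 0.2])
shares = consensus_shares(weights, stake)
```

Note how validator 3's outlier vote for miner 2 gets clipped to the consensus level, so it barely moves the final split; that is the "judges this round" dynamic in miniature.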
Despite having been in the wild for a few years now, the protocol remains in a state of flux. It was originally designed as a Polkadot parachain called Finney (takes me back to 2017!) and has since gone through several major evolutions, including two this year: Dynamic TAO and TaoFlow. These changes have rewired the entire economic engine. Instead of a single staking pool at the root deciding how emissions are split, every subnet now has its own alpha token and its own TAO/alpha liquidity pool that behaves like an AMM. When you “stake” TAO into a subnet, you are really swapping TAO for that subnet’s alpha token. The prices of those alpha tokens relative to each other (and now the Net TAO flow EMA) decide how much of the global TAO emission each subnet receives.
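The "staking is really a swap" point is easiest to see in code. Below is a minimal constant-product (x*y = k) sketch of what happens when TAO flows into a subnet pool; the real Bittensor pools differ in fees, emission injections, and mechanism details, so treat the function and its parameters as illustrative assumptions, not the protocol's actual math.

```python
def stake_tao(pool_tao, pool_alpha, tao_in, fee=0.0):
    """Simplified constant-product swap: 'staking' tao_in returns alpha.

    pool_tao, pool_alpha: current reserves of the subnet's TAO/alpha pool.
    Returns (alpha received, alpha price in TAO before, price after).
    """
    tao_in_net = tao_in * (1 - fee)
    k = pool_tao * pool_alpha          # invariant
    new_tao = pool_tao + tao_in_net
    new_alpha = k / new_tao
    alpha_out = pool_alpha - new_alpha
    price_before = pool_tao / pool_alpha   # TAO per alpha
    price_after = new_tao / new_alpha
    return alpha_out, price_before, price_after

alpha_out, p0, p1 = stake_tao(pool_tao=1_000.0, pool_alpha=10_000.0,
                              tao_in=100.0)
# buying alpha with TAO pushes the alpha price (in TAO terms) up: p1 > p0
```

This is why "staking" into a hot subnet is price-sensitive: your own inflow moves the alpha price, and since relative alpha prices steer emissions, capital flows and emission shares feed back into each other.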
It is a giant, perpetually ongoing tournament for emissions. A new race to 21,000,000.
The obvious question is whether you can actually validate “useful work” at a standard high enough to scale into a global utility. It is up to every subnet to define its own scoring rules. Yuma only knows about rankings; it does not know if your scores are gaming some edge case. Validators can collude. Miners can overfit. The attack surface is massive. Dynamic TAO has already kicked off debates about how quickly alpha should gain weight versus legacy root stake, how to bootstrap new subnets without handing them free money forever, and how to keep the whole thing from devolving into pure mercenary yield games. We are still in the Cambrian explosion phase of mechanism design here, making Bittensor one of the last truly wild frontiers in crypto.
The good news is that beneath all of that churn, real applications are starting to peek through. You see it in inference networks like Chutes, in LLM tooling and agents from Targon and Ridges, in niche data subnets like Zeus, and in predictive intelligence efforts like Sportstensor and Synth. Different teams, different domains, same core idea: turn a specific slice of intelligence into a metered commodity and let TAO emissions flow to whoever actually delivers.
Owning TAO is the base bet. You are betting that this thing becomes a persistent coordination layer for useful models, forecasts, and agents. If the experiment works, value accrues to the one scarce asset that everything in the ecosystem ultimately settles back into.
Owning subnet tokens is a levered, more opinionated bet on specific primitives and teams. When you buy alpha you are taking exposure to that subnet’s emissions share, its internal scoring rules, and whatever real world demand it can drum up. At Codex we decided we did not just want to sit in TAO and passively stake into a couple of “blue chip” subnets. If the whole point of this thing is proof of useful work, we want to be in the arena doing some of that work.
I believe in the thesis that predictive intelligence is the final benchmark. Over the past year we have been quietly building and testing systems that push LLMs into that environment through sports handicapping. The question is whether these systems can outperform a human. That work deserves its own deep dive, so for now I will just say we have seen enough to stay obsessed and have research dropping in 2026.
Parallel to that, we are spinning up liquidity-focused miners on subnets like Swap (Subnet 10 / TaoFi) and Minotaur, wiring them into our CLMM engine, Slipstream. Getting paid in alpha tokens on top of normal LP fees meaningfully boosts ROI and stress-tests our infra in a live, adversarial setting. To stay competitive we will have to keep improving our engines. It is a clear example of how Bittensor is already incentivizing useful work, in this case optimized liquidity provisioning, and attracting businesses like Codex.
I’m curious to see where it all goes. The core idea feels right: one neutral token, many experiments at the edge, and a brutally honest market deciding which forms of intelligence are actually worth paying for.
That is a bet I am comfortable taking on a 5 to 10 year horizon. I plan to keep accumulating TAO via mining subnets and LPing. It sits squarely at the intersection of everything I care about right now: crypto, AI, open systems, and the long, messy path toward turning “useful work” into something we can measure, price, and own.