
A few weeks ago, a social media platform called Moltbook went viral. The premise was simple and strange. A network built specifically for AI agents to socialize with other AI agents. Humans weren’t the intended audience. Agents were supposed to post, reply, and interact with each other autonomously.
It didn’t take long for a problem to surface. Moltbook couldn’t actually verify that the accounts posting were agents. Security was an open question. The thing built for machines couldn’t prove its users were machines.
We’re building platforms for machines that can’t verify machines, and platforms for humans that can’t keep machines out.
Around the same time, Discord announced a global rollout of age verification, requiring facial scans or government ID checks before users can access certain parts of the platform. On the surface, this is about child safety and regulatory compliance. Discord is responding to pressure from the UK’s Online Safety Act, Australian legislation, and a growing wave of US state laws demanding platforms prove their users aren’t minors.
But look at the tools they’re deploying. Facial age estimation, government ID verification, behavioral inference models that analyze account activity to predict whether a user is an adult. These are identity verification primitives. The same infrastructure that proves you’re over 18 is the infrastructure that could eventually prove you’re human at all. The motivations are different. The tooling is converging.
Two platforms. Two different problems. Both revealing the same gap. The internet has no reliable way to verify who or what is participating.
This is the moment we’re in. The internet’s trust layer was built on a single assumption. The thing on the other end of the connection is a human being. That assumption is now breaking in both directions simultaneously. And almost nobody is treating this as the infrastructure failure it actually is.
This problem isn’t new. Bots have been on the internet for decades. But for the first time, automated traffic has surpassed human activity online. Imperva’s latest report estimates bots now account for roughly half of all web traffic. The internet is already majority non-human. What’s changed in the last six to twelve months is what those bots can do.
AI can pass as human in most casual online interactions. We stopped asking whether machines could fool humans. They already do.
That changes everything. When the cost of creating a convincing participant drops to near zero, every trust system built on the assumption that faking it was hard becomes obsolete. CAPTCHAs, phone number verification, email confirmation. All of it was designed to create just enough friction to stop low-effort spam. None of it was designed for a world where an agent can hold a nuanced conversation, mimic tone, and build relationships over weeks.
The speed of agent adoption has outpaced every system designed to manage identity online. And agents aren’t just tools people use privately anymore. They’re active participants on platforms built for humans.
OpenClaw is an early glimpse of this shift. People are using Discord not just to chat with other people but to interact with their personal AI agents directly inside servers. The agent sits in the channel alongside humans, responding, contributing, executing tasks. From the platform’s perspective, the difference between a human message and an agent message is dissolving.
Discord’s age verification is a response to regulators, not to agents. But the agent problem is coming for them next. And the verification infrastructure they’re building now will need to do double duty.
The default framing is defensive. How do we keep bots out? How do we stop AI from flooding platforms with synthetic accounts?
That framing is incomplete.
Moltbook’s failure reveals the other side. A platform designed entirely for agents couldn’t verify agent identity either. The authentication problem isn’t just “prove you’re human.” It’s “prove you are what you claim to be,” regardless of whether that’s a person or a machine.
An agent can already present itself as a human financial advisor in a Discord server, answering questions about portfolio allocation while the people on the other end have no idea they’re talking to a machine. Going the other direction, a human can pose as a verified autonomous agent to access an API marketplace or an on-chain trading protocol, exploiting trust that was designed for machine participants. Both scenarios are possible right now. Neither has a reliable solution.
This distinction matters. We’re heading toward an internet where humans and agents coexist on the same platforms, in the same channels, performing overlapping functions. The question isn’t how to keep them separated. It’s how to build a trust layer that works for both.
Right now, that layer doesn’t exist.
This is where Sam Altman’s parallel bets become interesting. On one side, he’s building AI systems that are accelerating the agent explosion. Just yesterday, OpenAI hired the creator of OpenClaw to build the next generation of personal agents. On the other, he co-founded Worldcoin, a project that uses biometric verification to create cryptographic proof that a person is a real, unique human being.
For a long time, Worldcoin felt like a solution looking for a problem. Iris scanning seemed invasive, the crypto wrapper felt unnecessary, and the use case was abstract. “Proof of personhood” sounded like a philosophical exercise.
It doesn’t feel abstract anymore.
If agents are going to operate alongside humans on every major platform, and if platforms can’t reliably distinguish between the two using existing tools, then some form of cryptographic proof of humanity becomes essential. In a mixed human-agent internet, identity becomes a primitive, not a UI layer. It moves down the stack.
Worldcoin’s bet is that biometric verification, tied to a decentralized identity layer, is the cleanest solution. You prove you’re human once, and that proof travels with you across platforms and contexts. No platform has to solve the problem independently. The verification layer sits underneath everything.
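To make the “proof travels with you” idea concrete, here is a minimal sketch of what the platform-side check could look like, assuming a Semaphore-style scheme: a zero-knowledge proof against a published set of verified humans, plus a per-app nullifier that enforces one-human-one-account without revealing which human. Every name here is illustrative, not Worldcoin’s actual API.

```typescript
// Sketch of a platform consuming a portable proof-of-personhood credential.
// Assumes a Semaphore-style ZK scheme; names are hypothetical, not World ID's API.

interface PersonhoodProof {
  merkleRoot: string;     // published root of the set of verified humans
  nullifierHash: string;  // deterministic per (human, app); reveals no identity
  proof: string;          // zero-knowledge proof tying the two together
}

// The actual SNARK verification is injected; a real system would call a
// circuit verifier here. The platform never learns *which* human this is.
type ZkVerifier = (p: PersonhoodProof, appId: string) => Promise<boolean>;

function makeRegistrar(verify: ZkVerifier) {
  const seen = new Set<string>(); // nullifiers already used on this app

  return async function registerHuman(p: PersonhoodProof, appId: string): Promise<boolean> {
    // 1. The proof must verify against the identity set's published root.
    if (!(await verify(p, appId))) return false;
    // 2. The nullifier enforces uniqueness: the same human always produces
    //    the same nullifier for this app, so a second signup is rejected
    //    without the platform ever learning who the human is.
    if (seen.has(p.nullifierHash)) return false;
    seen.add(p.nullifierHash);
    return true;
  };
}
```

The design choice worth noticing is that the platform stores nullifiers, not identities. Uniqueness and humanity are checked; everything else about the person stays with the person.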
The privacy tension here is real and worth acknowledging. Asking people to scan their irises is no small request. Any solution that wins at scale will need to prove that verification doesn’t require surveillance. Worldcoin’s claim is that biometric data can generate a cryptographic proof without storing the underlying data. Whether people trust that claim is part of the battle. But the underlying need doesn’t go away just because the first solutions feel uncomfortable.
Whether Worldcoin specifically wins this race is an open question. But the category it occupies, cryptographic proof of personhood, is no longer optional.
Verifying who’s participating on a network is only half the problem. The other half is what happens when those participants start transacting.
This isn’t theoretical. Agents are already consuming APIs, purchasing compute, and paying for services programmatically. As AI moves from chatbot to autonomous operator, the volume of machine-to-machine transactions is going to explode. SaaS is shifting from subscription models to usage-based pricing. API calls are becoming the atomic unit of economic activity. Agents need to pay for what they use, in real time, without a human approving every transaction.
This is where crypto infrastructure stops being a speculation vehicle and starts being a necessity.
x402 is a good example of what’s emerging. It’s a protocol that embeds payments directly into HTTP requests by reviving the long-dormant 402 Payment Required status code, in the same protocol machines already use to communicate. Stripe just announced support for x402 on Base. A major payments company is now building infrastructure for agents to pay other agents (or services) on a blockchain, settled instantly, without relying on legacy intermediaries.
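Here is a rough sketch of that loop from the agent’s side: request the resource, receive a 402 carrying the server’s terms, sign a payment authorization, and retry with the authorization attached as a header. The field and type names below are simplified assumptions rather than the canonical wire format from the published spec.

```typescript
// Minimal sketch of an x402-style request/retry loop from the agent's side.
// Field names are simplified; treat them as assumptions, not the real spec.

interface PaymentRequired {
  payTo: string;   // recipient address on the settlement chain
  asset: string;   // token contract, e.g. a stablecoin on Base
  amount: string;  // price of this call, in the asset's base units
}

// Stand-in for the agent's wallet: signs a payment authorization that the
// server (or its settlement facilitator) can verify and settle on-chain.
type SignPayment = (terms: PaymentRequired) => Promise<string>;

async function fetchWithPayment(url: string, sign: SignPayment): Promise<Response> {
  const first = await fetch(url);
  if (first.status !== 402) return first; // resource was free, or errored

  // The 402 body advertises what the server will accept as payment.
  const terms: PaymentRequired = await first.json();

  // Pay and retry: the signed authorization rides along as a header, so the
  // whole exchange stays inside ordinary HTTP. No human in the loop.
  const payment = await sign(terms);
  return fetch(url, { headers: { "X-PAYMENT": payment } });
}
```

The point of the sketch is the shape, not the fields: payment becomes a retry condition inside the request cycle itself, which is exactly the granularity agents need when the atomic unit of spend is a single API call.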
The marriage between AI and blockchain becomes obvious once you frame it this way. AI creates autonomous actors that need to transact. Blockchain creates a trust layer that lets those transactions happen without requiring every participant to go through a centralized gatekeeper. Proof of humanity (or proof of agent-hood) becomes the identity layer. Crypto rails become the payment layer. They’re solving different parts of the same problem.
Without verified identity, you can’t trust who you’re transacting with. Without programmable money, you can’t transact at the speed agents operate. You need both.
The platforms that figure out bidirectional authentication will have an enormous advantage. Not just proving humans are human, but proving agents are agents. Giving both classes of participant verifiable identity so that every interaction on the network carries context about who (or what) you’re dealing with.
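As a toy sketch of what that gate might look like, with both credential formats hypothetical: the platform treats “verified human” and “verified agent” as two first-class cases rather than treating everything non-human as an intruder.

```typescript
// Illustrative only: a gate that tags every request with *what* is calling,
// not just who. Both credential formats here are hypothetical.

type Caller =
  | { kind: "human"; personhoodProof: string }                 // e.g. a ZK personhood credential
  | { kind: "agent"; attestation: string; operator: string };  // signed by the agent's operator

type Verifier = (c: Caller) => Promise<boolean>;

async function authenticate(c: Caller, verify: Verifier): Promise<string> {
  if (!(await verify(c))) throw new Error("unverifiable participant");

  // Downstream handlers get explicit context instead of guessing from
  // behavior: rate limits, disclosure rules, and permissions can differ
  // by class without excluding either class.
  return c.kind === "human"
    ? "human: full social surface, human-paced rate limits"
    : `agent operated by ${c.operator}: API surface, machine-paced limits`;
}
```

The useful property is symmetry. The same verification step that keeps a synthetic account from posing as a person keeps a person from posing as an authorized agent.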
This won’t be solved by CAPTCHAs or phone number verification or any of the legacy tools we’ve relied on. Those systems were built for a world where the only thing you needed to filter out was spam. We’re now in a world where the participants themselves are changing in kind.
The next phase of the internet won’t be defined by better models. It will be defined by who controls identity and transaction at the protocol layer.