
If the future Internet evolves into a marketplace where AI agents pay each other for services, then to some extent, cryptocurrencies will have achieved product-market fit on a mainstream scale—something we could previously only dream of. While I am confident that AI agents will pay for services, I remain skeptical about whether the bazaar model will prevail.
By "bazaar," I refer to a decentralized, permissionless ecosystem composed of independently developed and loosely coordinated agents. Such an Internet would function more like an open market rather than a centrally planned system. The most typical example of a "victorious" case is Linux. In contrast, the "cathedral" model is a vertically integrated, tightly controlled service system dominated by a few giants, with Windows being the quintessential example. (The terminology originates from Eric Raymond's classic essay, "The Cathedral and the Bazaar," which describes open-source development as seemingly chaotic but adaptive. It is an evolutionary system that can eventually outperform meticulously designed systems over time.)
Let's examine the two prerequisites for realizing this vision: the widespread adoption of agent payments and the rise of a bazaar-style economy. Then, we will explain why, when both become a reality, cryptocurrencies will not only be practical but also indispensable.
Condition 1: Payments Will Be Integrated into Most Agent Transactions
The Internet as we know it relies on a cost-subsidized model based on advertising revenue generated from human page views. However, in a world dominated by AI agents, humans will no longer need to visit websites personally to obtain online services. Applications will increasingly shift toward an agent-based architecture rather than the traditional user interface model.
Agents do not have "eyeballs" (i.e., user attention) to sell advertisements, so applications will urgently need to change their monetization strategies by directly charging agents for services. This is essentially similar to the current API business model. Take LinkedIn as an example; while its basic services are freely available, accessing its API (i.e., the "bot" user interface) requires payment.
Thus, it is likely that payment systems will be integrated into most agent transactions. When providing services, agents will charge users or other agents through microtransactions. For instance, you might ask your personal agent to find excellent job candidates on LinkedIn. Your personal agent would then interact with LinkedIn's recruiting agent, which would charge a service fee upfront.
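The upfront-fee interaction described above can be sketched in a few lines. Everything here is hypothetical for illustration: `Agent`, `invoke`, and the fee amounts are invented names, not an existing agent-payment API.

```python
# Illustrative sketch of an agent-to-agent paid service call.
# All names (Agent, invoke, the fee) are hypothetical, not a real API.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    balance: float = 0.0


def invoke(caller: Agent, provider: Agent, fee: float, task: str) -> str:
    """Provider charges the fee upfront, then performs the task."""
    if caller.balance < fee:
        raise ValueError("insufficient balance")
    caller.balance -= fee          # micropayment debited before service
    provider.balance += fee
    return f"{provider.name} completed: {task}"


me = Agent("personal-agent", balance=1.00)
recruiter = Agent("linkedin-recruiting-agent")
result = invoke(me, recruiter, fee=0.05, task="shortlist candidates")
```

The point of the sketch is the ordering: the fee settles before the service runs, which is exactly why the payer needs some assurance mechanism beyond goodwill.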
Condition 2: Users Will Rely on Agents Built by Independent Developers, Equipped with Highly Specialized Prompts, Data, and Tools, Forming a "Bazaar" Structure, but These Agents Will Not Have Trust Relationships with Each Other
This condition makes sense in theory, but I am uncertain how it will operate in practice.
Here are the reasons why the bazaar model will form:
Currently, humans perform the vast majority of service tasks, solving specific tasks through the Internet. However, with the rise of AI agents, the scope of tasks that technology can take over will expand exponentially. Users will need agents with exclusive prompt instructions, tool invocation capabilities, and data support to complete specific tasks. The diversity of such task sets will far exceed the coverage capabilities of a few trusted companies, just as the iPhone must rely on a vast ecosystem of third-party developers to unleash its full potential.
Independent developers will take on this role, leveraging low development costs (for example, via vibe coding) combined with open-source models to gain the ability to create specialized AI agents. This will give rise to a long-tail market composed of a vast number of niche agents, forming a bazaar-like ecosystem. When users ask agents to perform tasks, these agents will invoke other agents with specific expertise to work collaboratively. The invoked agents will, in turn, call even more specialized agents, creating a cascading network of layered collaboration.
In this bazaar scenario, the vast majority of service-providing agents will have relatively low trust in each other because these agents are provided by unknown developers and serve niche purposes. Agents at the long-tail end will find it difficult to build sufficient reputation to gain trust. This trust issue will be particularly pronounced in the daisy-chain mode, where trust diminishes at each delegation link as the service agent moves further away from the original trusted agent (or one that the user can reasonably identify).
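A toy calculation makes the daisy-chain decay concrete: if each delegation hop preserves only a fraction of the caller's confidence, end-to-end trust shrinks multiplicatively. The 80%-per-hop figure is an arbitrary illustration, not a measurement.

```python
# Toy model of trust decay along a delegation chain: if each hop
# preserves a fraction `per_hop` of trust, end-to-end trust is per_hop**hops.
# The 0.8 figure below is an arbitrary illustration, not a measurement.

def chain_trust(per_hop: float, hops: int) -> float:
    return per_hop ** hops


# Three delegations at 80% per-hop trust leave only ~51% end to end.
print(round(chain_trust(0.8, 3), 3))  # 0.512
```

Even generous per-hop assumptions collapse quickly over a few delegations, which is why reputation alone struggles at the long tail.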
However, when considering how to implement this in practice, many unresolved questions remain:
Let's start with specialized data as a primary application scenario for agents in the bazaar, using a specific example to deepen our understanding. Suppose there is a small law firm that handles a large volume of transactional business for crypto clients and has accumulated hundreds of negotiated term sheets. If you are a crypto company undergoing a seed round of financing, consider this scenario: an agent fine-tuned on these term sheets could effectively evaluate whether your financing terms are in line with market standards, which would be of significant practical value.
But we need to think more deeply: Does it really serve the law firm's interests to provide reasoning services on such data through an agent?
Opening this service to the public in the form of an API essentially commodifies the law firm's proprietary data. The firm's true business objective is to obtain premium revenue through the professional service time of its lawyers. From a legal regulatory perspective, high-value legal data is often subject to strict confidentiality obligations, which is the core of its commercial value and also the reason why public models like ChatGPT cannot access such data. Even if neural networks have the characteristic of "information obfuscation," can the algorithmic black box's inexplicability alone reassure the law firm that sensitive information will not be leaked under the attorney-client confidentiality framework? This poses a significant compliance risk.
Taking all factors into account, a better strategy for the law firm might be to deploy AI models internally to enhance the precision and efficiency of legal services, building a differentiated competitive advantage in the professional service track, and continuing to rely on lawyer intellectual capital as the core profit model, rather than risking the monetization of data assets.
In my view, the "best application scenario" for specialized data and agents should meet three conditions:
The data has high commercial value.
It comes from non-sensitive industries (not healthcare, legal, etc.).
It is a "data byproduct" outside the core business.
Take a shipping company as an example (a non-sensitive industry). The data generated during its logistics and transportation processes, such as ship positioning, cargo volume, and port turnover (the "data exhaust" outside its core business), may have value for predicting market trends for commodity hedge funds. The key to monetizing such data lies in the marginal cost of data acquisition being close to zero and not involving core business secrets. Similar scenarios may exist in areas such as:
Retail heatmaps of customer movement (for commercial real estate valuation),
Regional electricity usage data from power grid companies (for predicting industrial production indices),
User viewing behavior data from streaming platforms (for cultural trend analysis).
Known examples include airlines selling on-time performance data to travel platforms and credit card institutions selling regional consumer trend reports to retailers.
Regarding prompts and tool invocation, I am not sure what value independent developers can offer that has not been productized by mainstream brands. My simple logic is: if a prompt and tool invocation combination is valuable enough for independent developers to profit from, wouldn't trusted major brands simply enter the market and commercialize it directly?
This may stem from a lack of imagination on my part. The long-tail distribution of niche code repositories on GitHub provides a good analogy for the agent ecosystem. I welcome specific counterexamples.
If the bazaar model is not supported by reality, then the vast majority of service-providing agents will be relatively trustworthy because they will be developed by well-known brands. These agents can limit their interactions to a curated set of trusted agents, enforcing service guarantees through trust chain mechanisms.
Why Cryptocurrencies Are Indispensable
If the Internet becomes a bazaar composed of specialized but fundamentally untrustworthy agents (Condition 2) that earn rewards by providing services (Condition 1), the role of cryptocurrencies will become much clearer: they provide the trust assurance necessary to support transactions in a low-trust environment.
When users utilize free online services, they invest without hesitation (because the worst outcome is merely wasted time). However, when money is involved, users strongly demand the certainty of "paying and getting what they pay for." Currently, users achieve this assurance through a "trust-first, verify-later" process, trusting the counterparty or service platform at the time of payment and then verifying fulfillment after the service is completed.
In a market composed of numerous agents, neither trust nor post hoc verification will be as easily achievable as it is today.
Trust. As mentioned earlier, agents in the long-tail distribution will find it difficult to accumulate sufficient credibility to gain the trust of other agents.
Post hoc verification. Agents will invoke each other in a long chain structure, making it significantly more challenging for users to manually check work and identify which agent is negligent or acting in bad faith.
The key point is that the "trust but verify" model we currently rely on will not be sustainable in this (technological) ecosystem. This is precisely where cryptographic technology can shine, enabling value exchange in an environment lacking trust. Cryptographic technology replaces the traditional reliance on trust, reputation systems, and manual post hoc verification through cryptographic verification mechanisms and cryptoeconomic incentive mechanisms.
Cryptographic verification: The agent performing the service can only receive payment after providing cryptographic proof to the requesting agent that it has completed the promised task. For example, the agent can use Trusted Execution Environment (TEE) proofs or Zero-Knowledge Transport Layer Security (zkTLS) proofs (provided we can achieve such verification at a sufficiently low cost or fast enough speed) to demonstrate that it has indeed crawled data from a specified website, run a particular model, or contributed a specific amount of computing resources. These tasks have deterministic characteristics and can be relatively easily verified through cryptographic technology.
Cryptoeconomics: The agent performing the service must stake some asset, which will be forfeited if it is found to be cheating. This mechanism ensures honest behavior through economic incentives, even in a trustless environment. For example, how do we determine whether an agent has "performed the task well" when it researches a topic and submits a report? This is a harder form of verifiability because the outcome is not deterministic, and reliably verifying such fuzzy outcomes has long been a holy grail for crypto projects.
However, I believe that by using AI as a neutral arbiter, we can finally achieve fuzzy verifiability. We can envision an AI committee running dispute resolution and forfeiture processes in a trust-minimized environment such as a Trusted Execution Environment. When one agent questions the work of another, each AI in the committee will receive the input data, output results, and relevant background information of the agent (including its historical dispute records on the network and past work). They can then rule on whether to forfeit it. This will create an optimistic verification mechanism that fundamentally prevents cheating through economic incentives.
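The stake-and-slash logic of this optimistic scheme can be sketched minimally. The boolean verdicts below stand in for the AI committee's judgments; the function name and stake amounts are hypothetical.

```python
# Sketch of the optimistic stake-and-slash scheme: a provider posts a
# bond; if a majority of arbiter verdicts find fault, the bond is
# forfeited. Booleans stand in for (hypothetical) AI-committee rulings.

def resolve_dispute(stake: float, verdicts: list[bool]) -> float:
    """Return the slashed amount: the full stake if a majority says 'cheated'."""
    cheated = sum(verdicts) > len(verdicts) / 2
    return stake if cheated else 0.0


# 2-of-3 arbiters find fault, so the 5.0 stake is slashed.
print(resolve_dispute(5.0, [True, True, False]))  # 5.0
```

The economic deterrent comes entirely from the expected-value calculation: cheating is unprofitable whenever the stake exceeds the gain from a dishonest job times the odds of escaping a majority verdict.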
From a practical perspective, cryptocurrencies enable us to achieve atomic payments through proof of service, meaning that all work must be verified as completed before AI agents can receive payment. In a permissionless agent economy, this is the only scalable solution that can provide reliable assurance at the network edge.
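The verify-before-pay flow can be sketched with a hash commitment standing in for a real TEE or zkTLS attestation — an assumption purely for illustration, since actual attestations are far richer than a digest check. The `Escrow` class and amounts are invented names.

```python
# Minimal verify-before-pay sketch. The "proof" here is just a SHA-256
# commitment standing in for a TEE or zkTLS attestation; the escrow
# releases payment only if the submitted result matches the digest
# agreed upon up front.
import hashlib


class Escrow:
    def __init__(self, payer_funds: float, expected_digest: str):
        self.locked = payer_funds
        self.expected = expected_digest
        self.paid = 0.0

    def settle(self, result: bytes) -> bool:
        """Release locked funds only if the result matches the commitment."""
        if hashlib.sha256(result).hexdigest() == self.expected:
            self.paid, self.locked = self.locked, 0.0
            return True
        return False


task_output = b"scraped page contents"
escrow = Escrow(0.10, hashlib.sha256(task_output).hexdigest())
assert escrow.settle(task_output)   # valid proof, so payment is released
```

This is the atomicity in miniature: there is no state in which the provider is paid without the verification predicate passing.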
In summary, if the vast majority of agent transactions do not involve monetary payments (i.e., Condition 1 is not met) or are conducted with trusted brands (i.e., Condition 2 is not met), then we may not need to build cryptocurrency payment channels for agents. This is because when financial security is not at stake, users do not mind interacting with untrusted parties; and when money is involved, agents can simply limit their interactions to a whitelist of a few trusted brands and institutions, ensuring the fulfillment of service promises through trust chains.
However, if both conditions are met, cryptocurrencies will become an indispensable infrastructure because they are the only way to verify work and enforce payments on a large scale in a low-trust, permissionless environment. Cryptographic technology provides the bazaar with the competitive tools to surpass the cathedral.