By the end of 2025, the total market value of the crypto AI sector is expected to exceed $150 billion, with at least ten new crypto AI protocols each reaching a market cap of over $1 billion.
The future of the crypto AI sector is compelling. There are no historical precedents or clear trends to follow, but that also means the sector starts from a blank slate, with its trajectory still to be written. It will be all the more interesting to look back from 2026 and measure the gap between the expectations of early 2025 and what actually happened.
Currently, crypto AI tokens account for only 2.9% of the altcoin market cap, but that share is unlikely to stay so low for long. As artificial intelligence expands into smart contract platforms, Memes, decentralized physical infrastructure (DePIN), agent platforms, data networks, and intelligent coordination layers, its convergence with DeFi and Meme tokens is inevitable.
Bittensor (token: TAO) has been around for several years and is a veteran in this field. Despite the hype around artificial intelligence, its token price has sat roughly where it was a year ago. Yet the digital hive mind behind Bittensor has quietly kept advancing: more subnets are launching, registration fees are falling, and some subnets now beat their Web2 counterparts on real-world metrics such as inference speed. Meanwhile, EVM compatibility has brought DeFi-like functionality into Bittensor's network, further enriching it.
So why hasn't TAO soared? A steep inflation schedule and the market's shift of attention toward AI Agent platforms have held it back. However, dTAO (expected in Q1 2025) could be a major turning point. Under dTAO, each subnet gets its own token, and the relative prices of those tokens determine how emissions are allocated.
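To make the mechanism concrete, here is a minimal sketch of price-weighted emission allocation, assuming (as a simplification of the actual dTAO design) that emissions are split in direct proportion to subnet token prices; the subnet names and prices are invented for illustration.

```python
# Minimal sketch of price-weighted emission allocation under dTAO.
# The proportional rule, subnet names, and prices below are illustrative
# assumptions, not Bittensor's exact implementation.

def allocate_emissions(block_reward: float, subnet_token_prices: dict[str, float]) -> dict[str, float]:
    """Split a block's emissions across subnets in proportion to their token prices."""
    total_price = sum(subnet_token_prices.values())
    return {
        subnet: block_reward * price / total_price
        for subnet, price in subnet_token_prices.items()
    }

# Hypothetical subnet tokens and their prices (denominated in TAO).
prices = {"text-inference": 0.42, "distributed-training": 0.31, "data-scraping": 0.08}
print(allocate_emissions(1.0, prices))
```

The point is the feedback loop: a subnet that performs well attracts buyers for its token, which lifts its relative price, which in turn earns it a larger slice of emissions.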
Why Bittensor Has a Chance to Ignite Again:
Market-based emissions: dTAO ties block rewards directly to innovation and measurable performance. The better a subnet performs, the more valuable its token becomes, and the larger its share of emissions.
Targeted capital flows: investors can finally back the specific subnets they are bullish on. If a subnet pioneers an innovative approach to distributed training, capital can flow into it to express that view.
EVM integration: Compatibility with EVM attracts a broader crypto-native developer community into Bittensor, bridging the gap with other networks.
For now, the job is to watch each subnet and track its real progress in its own field. At some point, expect an @opentensor flavored DeFi summer of its own.
The insatiable demand for compute will be an obvious mega-trend. Nvidia CEO Jensen Huang has said that demand for inference will grow a "billion times." Exponential growth of that kind breaks traditional infrastructure planning and urgently calls for new solutions.
Decentralized computing layers provide raw computing (for training and inference) in a verifiable and cost-effective manner. Startups like @spheronfdn, @gensynai, @atoma_network, and @kuzco_xyz are quietly building a strong foundation to capitalize on this, focusing on products rather than tokens (none of these companies currently have tokens). As decentralized AI model training becomes feasible, the addressable market is expected to rise sharply.
Comparing the crypto AI compute sector with the L1 sector:
Like 2021: Remember how Solana, Terra, and Avalanche fought to be the "best" L1? We will see similar battles among computing protocols, competing for developers and AI applications to build on their computing layer.
Web2 demand: The $680 billion to $2.5 trillion cloud computing market dwarfs the crypto AI market. If decentralized compute solutions capture even a small slice of traditional cloud customers, there is room for the next 10x or 100x growth wave.
Just as Solana stood out in the L1 field, the winner here will dominate an entirely new frontier. It is crucial to closely monitor three criteria: reliability, cost-effectiveness, and developer-friendly tools.
By the end of 2025, 90% of on-chain transactions will no longer be triggered by humans manually clicking "send." Instead, these transactions will be executed by an army of AI Agents, which will continuously rebalance liquidity pools, allocate rewards, or execute micropayments based on real-time data sources.
This is not as far-fetched as it sounds. Everything we have built over the past seven years (L1s, rollups, DeFi, NFTs, and so on) has quietly paved the way for an on-chain world run by AI.
Why is this shift happening?
No human error: Smart contracts execute precisely as coded. AI Agents can process large amounts of data faster and more accurately than a group of humans.
Micropayments: AI Agent-driven transactions will become smaller, more frequent, and more efficient, especially as transaction costs on Solana, Base, and other L1s/L2s keep falling.
Invisible infrastructure: Humans are happy to give up direct control if it means less hassle. If we already trust Netflix to auto-renew, trusting an AI Agent to automatically rebalance a DeFi position is a natural next step.
AI Agents will generate an astonishing volume of on-chain activity, but the biggest challenge will be to make these AI Agent-driven systems accountable to humans. As the ratio of AI Agent-initiated transactions to human-initiated transactions increases, new governance mechanisms, analytics platforms, and auditing tools will be needed.
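To make this concrete, here is a minimal sketch of what an agent-driven transaction loop could look like: observe a real-time data source, decide whether to rebalance, act, and append every action to an audit trail a human can review. The price feed, thresholds, and transaction call are simulated placeholders, not a real protocol integration.

```python
import json
import random
import time

# Minimal sketch of an autonomous rebalancing agent with a human-auditable log.
# The data feed and transaction submission are simulated placeholders.

TARGET_RATIO = 0.50         # desired share of the portfolio held in asset A
REBALANCE_THRESHOLD = 0.05  # only act when drift exceeds 5 percentage points

def fetch_portfolio_ratio() -> float:
    """Stand-in for a real-time data source (oracle, indexer, RPC call)."""
    return random.uniform(0.35, 0.65)

def submit_transaction(action: str, amount: float) -> str:
    """Stand-in for signing and broadcasting an on-chain transaction."""
    return f"0x{random.getrandbits(64):016x}"

def step(audit_log: list) -> None:
    """One agent cycle: observe, decide, act, and record for later human audit."""
    ratio = fetch_portfolio_ratio()
    drift = ratio - TARGET_RATIO
    if abs(drift) > REBALANCE_THRESHOLD:
        action = "sell_A" if drift > 0 else "buy_A"
        tx_hash = submit_transaction(action, amount=abs(drift))
        audit_log.append({"ts": time.time(), "action": action,
                          "drift": round(drift, 4), "tx": tx_hash})

log = []
for _ in range(10):  # in production this loop would run continuously
    step(log)
print(json.dumps(log, indent=2))
```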
An AI Agent swarm is a set of small AI entities collaborating seamlessly to execute larger plans, which sounds like the plot of the next hit sci-fi or horror movie. Today, most AI Agents operate in isolation, interacting only rarely and unpredictably. Swarms will change that, letting multiple AI Agents exchange information, negotiate, and make joint decisions within a network.
These swarms can be thought of as decentralized collectives of specialized models, each contributing its own expertise to larger, more complex tasks. The potential is staggering: one swarm might coordinate distributed compute resources on a platform like Bittensor, while another verifies content sources in real time to stem misinformation on social media. Each agent in the swarm is a specialist, executing its own task precisely.
The intelligence of these swarm networks will far exceed that of any single isolated AI. To enable the flourishing of agent swarms, universal communication standards are crucial. Agents need to be able to discover, authenticate, and collaborate without being limited by underlying frameworks. Teams like Story, FXN, ZEREBRO, and ai16z are working to lay the foundation for the rise of agent swarms.
This is also where decentralization matters: assigning tasks to agent swarms governed by transparent on-chain rules makes the system more resilient and adaptable. If one agent fails, others can step in to fill the gap and keep the system running.
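One hypothetical shape such a communication standard could take is a signed, structured message that lets any agent authenticate a peer and route a task, regardless of the framework behind it. The sketch below illustrates the idea only; it does not describe any of the named teams' protocols, and the identities and keys are invented.

```python
import hashlib
import hmac
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical inter-agent message: a structured payload plus a signature so
# the receiver can authenticate the sender. Illustrates the idea of a shared
# communication standard, not any specific team's protocol.

@dataclass
class AgentMessage:
    sender: str      # agent identifier (could be an on-chain address)
    recipient: str
    task: str        # e.g. "verify_source", "schedule_compute"
    payload: dict
    timestamp: float

def sign(message: AgentMessage, secret: bytes) -> str:
    body = json.dumps(asdict(message), sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(message: AgentMessage, signature: str, secret: bytes) -> bool:
    return hmac.compare_digest(sign(message, secret), signature)

# Example: a coordinator agent hands a fact-checking task to a specialist.
shared_secret = b"demo-key"  # in practice, per-agent keys or on-chain identity
msg = AgentMessage("coordinator.eth", "factchecker.eth", "verify_source",
                   {"url": "https://example.com/claim"}, time.time())
sig = sign(msg, shared_secret)
assert verify(msg, sig, shared_secret)
```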
Story hired Luna (an AI Agent project) as their social media intern, paying her $1,000 per day. It may sound strange, but this is a harbinger of a future in which AI Agents will become true collaborators, with their own autonomy, responsibility, and even salaries. Companies across various industries are testing human-AI hybrid teams.
We will work with AI Agents not as our slaves, but as equal partners:
Productivity surge: AI Agents can process large amounts of data, communicate with each other, and make decisions 24/7 without the need for sleep or coffee breaks.
Building trust through smart contracts: The blockchain is an impartial, tireless supervisor that never forgets. An on-chain ledger can enforce that important AI Agent actions stay within specific boundary conditions and rules (a sketch of such a guard follows this list).
Evolving social norms: Soon there will be etiquette for interacting with agents, such as whether to say "please" and "thank you" to an AI, and whether we bear moral responsibility for their mistakes or pin it on their developers.
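As a toy illustration of boundary conditions enforced around an agent, the sketch below caps an agent's daily spend and rejects anything outside an allow-list. In practice this logic would live in a smart contract rather than off-chain Python, and the limits and action names are invented for the example.

```python
import time

# Toy guard for an agent's on-chain actions: a daily spend cap plus an
# action allow-list. In reality these rules would be enforced by a smart
# contract; the numbers and action names are invented.

DAILY_SPEND_CAP = 500.0                              # max value moved per day
ALLOWED_ACTIONS = {"rebalance", "claim_rewards", "pay_invoice"}

class AgentGuard:
    def __init__(self) -> None:
        self.spent_today = 0.0
        self.day_start = time.time()

    def authorize(self, action: str, amount: float) -> bool:
        # Reset the running total at the start of each new day.
        if time.time() - self.day_start > 86_400:
            self.spent_today, self.day_start = 0.0, time.time()
        if action not in ALLOWED_ACTIONS:
            return False
        if self.spent_today + amount > DAILY_SPEND_CAP:
            return False
        self.spent_today += amount
        return True

guard = AgentGuard()
print(guard.authorize("rebalance", 200.0))    # True
print(guard.authorize("rebalance", 400.0))    # False: exceeds the daily cap
print(guard.authorize("withdraw_all", 10.0))  # False: not on the allow-list
```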
The boundary between "employees" and "software" will begin to blur in 2025. Expect more crypto teams to join in, since AI Agents excel at content generation and can livestream and post to social media around the clock. If you are building an AI protocol, why not prove its capabilities by deploying AI Agents internally?
We will witness a Darwinian cull among AI Agents. Running an AI Agent burns compute, and that inference bill is its rent. If an agent cannot generate enough value to cover its rent, it faces extinction.
Consider the AI Agent survival game (a back-of-the-envelope sketch follows this list):
The carbon credit AI: an agent that hunts for inefficiencies in a decentralized energy network and autonomously trades tokenized carbon credits. If it earns enough to cover its compute costs, it thrives.
The DEX arbitrage bot: an agent that earns steady income by exploiting price differences between decentralized exchanges, enough to cover its inference costs.
The memer on X: a fun but unsustainable virtual AI influencer with no steady income source. As the novelty wears off and its token price drops, it can no longer pay its way and fades out.
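The economics reduce to simple arithmetic: if daily revenue does not cover the daily inference bill, the agent dies. The figures below are invented purely to make the comparison concrete.

```python
# Back-of-the-envelope survival check for the three agents described above.
# All revenue and cost figures are invented for illustration.

agents = {
    "carbon_credit_ai": {"daily_revenue": 180.0, "daily_inference_cost": 120.0},
    "dex_arb_bot":      {"daily_revenue": 95.0,  "daily_inference_cost": 60.0},
    "memer_on_x":       {"daily_revenue": 5.0,   "daily_inference_cost": 75.0},
}

for name, a in agents.items():
    margin = a["daily_revenue"] - a["daily_inference_cost"]
    status = "survives" if margin > 0 else "goes extinct"
    print(f"{name}: margin ${margin:+.2f}/day -> {status}")
```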
The difference is clear: utility-oriented AI Agents will thrive, while those running on hype alone will become irrelevant. This natural selection is healthy for the industry, pushing developers to keep innovating and to prioritize productive applications over flashy gimmicks. As more capable, productive AI Agents emerge, the skeptics will gradually fall silent.
The saying "data is the new oil" is widely spread. However, the high dependence of artificial intelligence on data has also raised concerns about the impending data shortage. The traditional view is that ways should be found to collect private real data from users, even paying them for it.
However, in highly regulated industries or where real data is scarce, a more practical solution may be synthetic data. Synthetic data is artificially generated to simulate real-world data distributions. It offers a scalable, ethical, and privacy-safe alternative to human data. The advantages of synthetic data are:
Unlimited scale: Need a million medical X-rays or 3D scans of factories? Synthetic data can be generated in unlimited quantities without relying on real patients or factories.
Privacy-friendly: When dealing with synthetic data, personal privacy information is not at risk.
Customizable: Data distributions can be adjusted according to specific training needs, and even edge cases that are rare in reality or ethically complex can be inserted.
Human data still matters in many cases, but if synthetic data keeps improving in fidelity, it may surpass user data in volume, generation speed, and freedom from privacy restrictions. Future decentralized AI may revolve around "mini labs" that specialize in creating highly targeted synthetic datasets for specific use cases.
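A minimal sketch of what "customizable distributions plus injected edge cases" means in practice: generate a synthetic tabular dataset whose feature distributions are chosen to mimic a target population, then deliberately oversample rare cases. The feature names, distributions, and parameters are invented for illustration.

```python
import numpy as np

# Minimal sketch of synthetic tabular data with controllable distributions and
# deliberately injected edge cases. All names and parameters are invented.

rng = np.random.default_rng(0)

def synth_patients(n: int, edge_case_fraction: float = 0.05) -> np.ndarray:
    """Synthetic (age, resting_heart_rate) pairs mimicking a target distribution."""
    age = rng.normal(loc=55, scale=12, size=n).clip(18, 95)
    heart_rate = rng.normal(loc=72, scale=9, size=n).clip(40, 180)
    data = np.column_stack([age, heart_rate])

    # Inject edge cases that are rare (or hard to collect ethically) in reality:
    # very elderly patients with tachycardia.
    n_edge = int(n * edge_case_fraction)
    data[:n_edge] = np.column_stack([
        rng.uniform(85, 95, size=n_edge),
        rng.uniform(130, 170, size=n_edge),
    ])
    return data

dataset = synth_patients(10_000)
print(dataset.shape, dataset.mean(axis=0).round(1))
```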
In 2024, pioneers such as Prime Intellect and Nous Research pushed the boundaries of decentralized training, for example by training a 15-billion-parameter model in a low-bandwidth environment and proving that large-scale training is possible outside traditional centralized setups. These models still lagged existing base models in practical performance, so there was little reason to use them, but that is expected to change in 2025.
EXO Labs pushed things further with SPARTA, which cuts GPU-to-GPU communication by more than 1,000x. SPARTA makes large-model training in low-bandwidth environments possible without specialized infrastructure. Most notably, they state that "SPARTA works independently, but can also be combined with synchronous low-communication training algorithms like DiLoCo for better performance." In other words, these improvements stack, and the efficiency gains compound.
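To give a feel for why sparse communication helps, here is a rough sketch of the general idea: instead of exchanging every gradient or parameter each step, each worker shares only a tiny random fraction of its parameters and averages those with a peer, cutting the bytes on the wire by roughly that fraction. This is a conceptual illustration of low-communication training, not EXO Labs' actual SPARTA implementation.

```python
import numpy as np

# Conceptual sketch of sparse parameter exchange between two workers: each
# step, only a small random fraction of parameters is shared and averaged.
# This illustrates the general idea of low-communication training, not the
# actual SPARTA algorithm.

rng = np.random.default_rng(42)

def sparse_sync(params_a: np.ndarray, params_b: np.ndarray, fraction: float = 0.001) -> int:
    """Average a random `fraction` of parameters across two workers."""
    n = params_a.size
    idx = rng.choice(n, size=max(1, int(n * fraction)), replace=False)
    avg = (params_a[idx] + params_b[idx]) / 2
    params_a[idx] = avg
    params_b[idx] = avg
    return idx.size  # number of parameters actually communicated

worker_a = rng.normal(size=1_000_000)  # stand-ins for two model replicas
worker_b = rng.normal(size=1_000_000)
sent = sparse_sync(worker_a, worker_b, fraction=0.001)
print(f"communicated {sent} of {worker_a.size} parameters "
      f"({sent / worker_a.size:.3%} of a full sync)")
```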
As model technology continues to advance, smaller and more efficient models will become more useful. The future of artificial intelligence will no longer focus solely on scale, but more on quality and accessibility. Soon, there will be high-performance models capable of running on edge devices and even mobile phones.
Many compare Virtuals and ai16z to the early smartphone era (iOS and Android), assuming today's leaders will keep winning. But this market is too vast and underdeveloped to be dominated by just two players. By the end of 2025, expect at least ten new crypto AI protocols (not yet tokenized) to each reach a market cap of over $1 billion.
Decentralized AI is still in its infancy, and a huge amount of talent is converging on it. New protocols, new token models, and new open-source frameworks will keep emerging. These newcomers can displace incumbents through incentives (airdrops or clever staking designs), technical breakthroughs (low-latency inference or cross-chain interoperability), and user-experience improvements (such as no-code tooling). Shifts in public perception can be instantaneous and dramatic.
Bittensor, Virtuals, and ai16z will not be alone for long. The next billion-dollar crypto AI protocol is coming, and smart investors will face numerous opportunities. This is precisely why this market is so exciting.