The AI Agent sector continues to accelerate its correction today. Aside from a few DeepSeek-related meme tokens, major Framework/Launchpad/Agent tokens such as VIRTUAL, AI16Z, AIXBT, and ARC, along with other smaller tokens, have all recorded double-digit declines.
While the overall market weakness is a contributing factor, some investors attribute the collective downturn in the AI Agent sector to the sudden popularity of DeepSeek. They believe that DeepSeek, as a new-generation, more cost-effective LLM (Large Language Model), could disrupt or even reshape the current Crypto AI landscape.
Personally, I do not support this view.
The core of the skepticism is whether the iterative development of LLMs will overturn existing Agents. Setting aside the debate over whether DeepSeek's current reasoning quality surpasses that of ChatGPT or Claude, and granting that DeepSeek has more promising room for iteration from the standpoint of economic sustainability, will the next iteration of DeepSeek spawn a new batch of Agents and render the current ones obsolete?
To answer this question, we first need to understand the relationship between LLMs and Agents. Let’s use DeepSeek itself and another "domestic star," Kimi, to explore this issue (honestly, Kimi seems to provide better answers on this topic).
DeepSeek:
AI Agents (Artificial Intelligence Agents) and LLMs are related in that LLMs can serve as a core component of Agents, especially in tasks involving natural language understanding and generation. An AI Agent is an intelligent system capable of autonomously executing tasks and interacting with its environment, while LLMs provide Agents with powerful language processing capabilities, enabling them to better understand and generate human language. This makes LLMs crucial for intelligent language interactions in scenarios like dialogue, recommendation, and analysis. In short, LLMs are a vital technological foundation for AI Agents to achieve intelligent language interactions.
Kimi:
Agents typically rely on LLMs to understand and generate natural language. LLMs provide robust language processing capabilities, allowing Agents to interact with users in natural language, understand user needs, and generate appropriate responses.
However, Agents do not solely depend on LLMs for language processing. They also integrate decision-making, problem-solving, interaction with external environments, and execution of actions. LLMs provide the foundational ability to understand and generate language, while Agents build on this to plan and execute tasks.
Agents can also extend their functionality by integrating other tools and APIs. For example, Agents can call search engines, database query tools, file read/write tools, etc., to accomplish more complex tasks. In this process, LLMs provide the language understanding and generation capabilities, enabling AI Agents to better interact with these tools.
In summary:
The operation of Agents relies on LLMs, but the quality of their expression and interaction is not entirely determined by LLMs. In fact, what differentiates Agents are the capabilities beyond LLMs.
For example, aixbt "outperforms" other similar Agents in output quality essentially because of its superior design in prompt engineering, post-processing mechanisms, context management, fine-tuning strategies, randomness control, external tool integration, and user feedback mechanisms. This allows it to generate industry-specific expressions more effectively. Whether you call it a first-mover advantage or a moat, this is aixbt's current strength.
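To make this concrete, here is a minimal sketch (in TypeScript, and not aixbt's or any framework's actual code; all names are hypothetical) of an agent pipeline in which the LLM is just one pluggable stage, while the prompt construction, tool calls, and post-processing around it are the layers that actually differentiate one Agent from another.

```typescript
// Minimal sketch of an agent pipeline. All names are hypothetical.

type LLM = (prompt: string) => Promise<string>;

interface Tool {
  name: string;
  run: (query: string) => Promise<string>;
}

async function runAgent(llm: LLM, tools: Tool[], userInput: string): Promise<string> {
  // Context management / prompt engineering: gather domain data and build the prompt.
  const toolOutputs = await Promise.all(tools.map((t) => t.run(userInput)));
  const prompt = [
    "You are a crypto-market analyst agent.",
    ...toolOutputs.map((out, i) => `[${tools[i].name}] ${out}`),
    `User: ${userInput}`,
  ].join("\n");

  // Language generation: delegated entirely to whichever LLM is plugged in.
  const raw = await llm(prompt);

  // Post-processing: enforce the agent's own style, length, and formatting rules.
  return raw.trim().slice(0, 280);
}
```

Swapping one model for another only changes the `llm` argument; everything else, the part that makes the Agent distinctive, stays in place.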
With this understanding, let’s revisit the core question: Will the iterative development of LLMs overturn existing Agents?
The answer is no, because Agents can readily integrate the capabilities of next-generation LLMs through APIs, allowing them to keep evolving: better interaction quality, higher efficiency, and broader application scenarios. This is especially true given that DeepSeek itself exposes an OpenAI-compatible API.
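As a rough illustration of how small that switch is, here is a hedged sketch using the official OpenAI Node SDK pointed at DeepSeek's OpenAI-compatible endpoint; the base URL and model name follow DeepSeek's public documentation at the time of writing and should be verified before use.

```typescript
import OpenAI from "openai";

// Same SDK an Agent would already use for OpenAI; only the endpoint,
// API key, and model name change. Values assumed from DeepSeek's docs.
const deepseek = new OpenAI({
  baseURL: "https://api.deepseek.com",
  apiKey: process.env.DEEPSEEK_API_KEY,
});

async function main() {
  const completion = await deepseek.chat.completions.create({
    model: "deepseek-chat",
    messages: [
      { role: "user", content: "Summarize today's AI Agent market in one sentence." },
    ],
  });
  console.log(completion.choices[0].message.content);
}

main();
```

For an existing Agent, this is essentially a configuration change rather than a rebuild, which is the whole point.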
In fact, some quick-reacting Agents have already integrated DeepSeek. Shaw, the founder of ai16z, mentioned this morning that Eliza, the AI Agent development framework created by ai16z DAO, had already added support for DeepSeek two weeks ago.
Under the current trend, we can reasonably assume that following ai16z's Eliza, other major frameworks and Agents will also quickly integrate DeepSeek. Even if there is short-term competition from new DeepSeek-based Agents, in the long run, the competition among Agents will still depend on the external capabilities mentioned earlier. At that point, the accumulated development advantages from being first-movers will once again come into play.
Finally, let’s share some comments from industry leaders to boost the confidence of those still holding positions in the AI Agent sector.
Frank, the founder of DeGods, said yesterday: "People are wrong about this (DeepSeek disrupting the old market). Current AI projects will benefit from new models like DeepSeek. They just need to replace OpenAI API calls with DeepSeek, and their output will improve overnight. New models won’t disrupt Agents; they will accelerate their development."
Daniele, a trader focused on the AI sector, added: "If you're selling AI tokens because DeepSeek models are cheap and open-source, you need to understand that DeepSeek actually helps scale AI applications to millions of users with low barriers to entry. This could be the best thing to happen to the industry so far."
Shaw also posted a lengthy response this morning addressing the impact of DeepSeek. The opening sentence reads: "A more powerful model is always a good thing for Agents. Over the years, major AI labs have been leapfrogging each other. Sometimes Google leads, sometimes OpenAI, sometimes Claude, and today it’s DeepSeek’s turn…"
