The AI + crypto trend is unfolding rapidly, but not in the way many had imagined. This time, it is playing out as one side crashing the other: AI first disrupted traditional capital markets, and then it took down the crypto market.
On January 27, the fast-rising Chinese AI model DeepSeek surpassed ChatGPT in downloads for the first time, topping the U.S. App Store charts and drawing widespread attention and coverage from the global tech, investment, and media communities.
DeepSeek Goes Viral, Crypto Market Crashes?
Behind this event lies a possible shift in the future landscape of U.S.-China tech competition, and it sent a wave of short-term panic through U.S. capital markets. Nvidia fell 5.3%, ARM 5.5%, Broadcom 4.9%, and TSMC 4.5%, while Micron, AMD, and Intel also saw significant declines. Nasdaq 100 futures dropped 400 points, potentially marking the largest single-day drop since December 18. By some estimates, the U.S. stock market stood to lose over $1 trillion in market value during Monday's session, equivalent to roughly one-third of the total market capitalization of the entire crypto market.
Following the U.S. stock market, the crypto market also fell sharply, seemingly triggered by DeepSeek. Bitcoin fell below $100,500, down 4.48% over 24 hours; ETH dropped below $3,200, down 3.83%. Many are still puzzled by how quickly the crypto market crashed; reduced expectations of Fed rate cuts or other macroeconomic factors may also have played a role.
So where is the market panic coming from? DeepSeek did not grow by amassing vast capital and GPUs the way OpenAI, Meta, or Google did. OpenAI was founded 10 years ago, employs about 4,500 people, and has raised $6.6 billion to date; Meta spent $60 billion building an AI data center nearly the size of Manhattan. DeepSeek, by contrast, was founded less than two years ago, has around 200 employees, and reportedly cost less than $10 million to develop, without heavy spending on Nvidia GPUs.
Some can't help but ask: how can the incumbents compete with DeepSeek?
DeepSeek has not only undercut the cost advantages conferred by capital and technology but also challenged entrenched mindsets and orthodoxies.
The VP of Product at Dropbox observed on X that DeepSeek is a classic disruption story: incumbents optimize existing processes, while disruptors rethink the fundamental approach. DeepSeek asked: what if we just did this smarter, instead of throwing more hardware at it?
Training top-tier AI models is currently extremely expensive. Companies like OpenAI and Anthropic spend over $100 million on compute alone, requiring huge data centers filled with thousands of GPUs at roughly $40,000 each, akin to needing an entire power plant to run a factory.
Then DeepSeek came along and asked, "What if we did this for $5 million?" And they didn't just talk; they actually did it. Their models match or even surpass GPT-4 and Claude on many tasks. How? By rethinking everything from scratch. Traditional AI is like writing every number with 32 decimal places; DeepSeek asked, "What if 8 decimal places is still accurate enough?" That alone cuts memory requirements by 75%.
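The "32 versus 8 decimal places" line is an analogy for the numeric precision of a model's weights (for example, 32-bit versus 8-bit formats). A back-of-the-envelope sketch in Python, using an assumed 70-billion-parameter model purely for illustration (not DeepSeek's actual architecture), shows where the 75% figure comes from:

```python
# Back-of-the-envelope sketch (not DeepSeek's actual code): memory needed to
# hold a model's weights at different numeric precisions. Moving from 32-bit
# to 8-bit values cuts the weight footprint by exactly 75%.
PARAMS = 70e9  # assumed 70B-parameter model, for illustration only

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Bytes = params * bits / 8; convert to gigabytes (1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

for bits in (32, 16, 8):
    print(f"{bits:>2}-bit weights: {weight_memory_gb(PARAMS, bits):8.1f} GB")

# Output:
# 32-bit weights:    280.0 GB
# 16-bit weights:    140.0 GB
#  8-bit weights:     70.0 GB
```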
The Dropbox VP called the results staggering: training costs dropped from $100 million to $5 million, the number of GPUs required fell from 100,000 to 2,000, API costs fell by 95%, and the models can run on gaming GPUs rather than data-center hardware. Most importantly, they are open source. This isn't magic, just exceptionally clever engineering.
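To make "runs on a gaming GPU" concrete, here is a hedged sketch of loading an open-weights model in 8-bit on consumer hardware using Hugging Face transformers with bitsandbytes quantization. The checkpoint name is an assumption for illustration, and nothing here reflects DeepSeek's own serving stack:

```python
# Hedged sketch: load an open-weights model in 8-bit so it fits on a consumer
# GPU. The model id is an assumed example; substitute any open checkpoint
# that fits your card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # ~1 byte/param
    device_map="auto",  # place layers on whatever GPU(s) are available
)

prompt = "Explain the Jevons paradox in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```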
Others have pointed out that DeepSeek has upended long-held assumptions in the AI field:
China only produces closed-source/proprietary technology.
Silicon Valley is the center of global AI development, with a massive lead.
OpenAI has an unparalleled moat.
You need to spend billions or even tens of billions of dollars to develop state-of-the-art (SOTA) models.
Model value will continue to accumulate (the “fat model” hypothesis).
The scaling hypothesis: model performance improves predictably with training inputs (compute, data, GPUs); the canonical power-law form is sketched after this list.
All of these received views, if not completely overturned overnight, have at least been shaken.
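For context on the scaling item: in the literature, the hypothesis is usually stated as a power law rather than a straight line. The canonical compute-scaling form from Kaplan et al. (2020) is, roughly:

```latex
% Test loss L as a function of training compute C (Kaplan et al., 2020).
% C_c and \alpha_C are empirically fitted constants (\alpha_C \approx 0.05
% in the original paper).
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

What DeepSeek's efficiency gains call into question is not the curve itself so much as how much compute is actually needed to reach a given point on it.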
Archerman Capital, a well-known U.S. equity investment firm, commented on DeepSeek in a briefing. First, DeepSeek represents a victory for open source over closed source: its contributions to the community will quickly translate into prosperity for the entire open-source ecosystem, and open-source players, Meta included, are likely to build further on this foundation. Open source is a case of "many hands make light work."
Second, OpenAI's "brute force produces miracles" approach may look crude for now, but it is not impossible that a new qualitative leap emerges at some larger scale, widening the gap between closed and open source again; this remains uncertain. Judging from roughly 70 years of AI history, computing power has been crucial and will likely remain so.
Third, DeepSeek has made open-source models as good as, and even more efficient than, closed-source ones. The need to buy OpenAI's API has diminished, and private deployment and fine-tuning will give downstream applications far more room to maneuver (a minimal fine-tuning sketch follows below). Over the next year or two, we are likely to see a richer variety of inference chips and a more vibrant LLM application ecosystem.
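As an illustration of that "private deployment and fine-tuning" point, here is a minimal LoRA sketch using the peft library. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not a recommended recipe:

```python
# Hedged sketch: attach small LoRA adapters to an open-weights model so it can
# be fine-tuned privately on modest hardware. Checkpoint name and
# hyperparameters are assumptions for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-llm-7b-chat")

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections (LLaMA-style naming)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of all weights

# From here, train with the usual transformers Trainer on private data; only
# the adapter weights update, so the frozen base model never leaves your machine.
```

The design point is that only the tiny adapters train, which is what makes fine-tuning on privately deployed hardware practical.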
Finally, demand for computing power will not decline. The Jevons paradox, observed during the first Industrial Revolution, holds that as steam engines became more efficient, total coal consumption rose rather than fell. Likewise, mobile phones went from brick-sized luxuries to ubiquitous Nokia handsets precisely because they became cheaper; because they became popular, total market consumption increased.