
In the past week, the DeepSeek R1 model from China has stirred up the entire overseas AI community.
On one hand, it achieves performance comparable to OpenAI's o1 at a relatively low training cost, demonstrating China's strengths in engineering and scale-driven innovation. On the other, it embraces the open-source spirit and freely shares its technical details.
Recently, a research team led by Jiayi Pan, a PhD student at the University of California, Berkeley, reproduced the key "Eureka moment" technique of DeepSeek R1-Zero at extremely low cost (under $30).
Little wonder that Meta CEO Mark Zuckerberg, Turing Award winner Yann LeCun, DeepMind CEO Demis Hassabis, and others have publicly praised DeepSeek.
As DeepSeek R1's popularity keeps climbing, the DeepSeek app was briefly overwhelmed this afternoon by a surge of traffic, returning server-busy errors and even going down for a while.
OpenAI CEO Sam Altman, meanwhile, tried to reclaim the international headlines by disclosing the usage quota for o3-mini: ChatGPT Plus members get 100 queries per day.
Yet few people know that DeepSeek's parent company, Huanshi Quant, was one of the leading firms in China's quantitative private-fund industry long before DeepSeek became famous.
On December 26, 2024, DeepSeek officially released the DeepSeek-V3 large model.
The model performs strongly across multiple benchmarks, surpassing mainstream frontier models, particularly in knowledge-based question answering, long-text processing, code generation, and mathematics. On knowledge tasks such as MMLU and GPQA, for example, DeepSeek-V3 approaches the performance of Claude-3.5-Sonnet.
In mathematics, it set new records on AIME 2024 and CNMO 2024, surpassing all known open-source and closed-source models. Its generation speed also increased by 200% over the previous generation, reaching 60 tokens per second (TPS) and markedly improving the user experience.
According to the independent evaluation site Artificial Analysis, DeepSeek-V3 surpasses other open-source models on multiple key metrics and is on par with the leading closed-source models GPT-4o and Claude-3.5-Sonnet.
The core technical advantages of DeepSeek-V3 include:
Mixture-of-Experts (MoE) architecture: DeepSeek-V3 has 671 billion parameters, but only 37 billion are activated for each token. This selective activation greatly reduces compute cost while maintaining high performance.
Multi-Head Latent Attention (MLA): an architecture already validated in DeepSeek-V2 that enables efficient training and inference.
Auxiliary-loss-free load balancing: a strategy that minimizes the performance penalty usually incurred by keeping expert load balanced.
Multi-token prediction training objective: predicting several future tokens per position, which improves overall model performance.
Efficient training framework: the HAI-LLM framework supports 16-way pipeline parallelism (PP), 64-way expert parallelism (EP), and ZeRO-1 data parallelism (DP), and cuts training cost through a range of optimizations.
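To make the sparse-activation idea behind MoE concrete, here is a toy top-k gating layer in Python. This is a minimal sketch, not DeepSeek's actual router; all sizes, weights, and names are invented for illustration.

```python
import numpy as np

# Toy mixture-of-experts layer: only the top-k experts run for each token,
# so compute scales with k rather than with the total expert count.
# All dimensions here are illustrative, not DeepSeek-V3's real sizes.
rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, D = 8, 2, 16

W_gate = rng.normal(size=(D, NUM_EXPERTS))  # router weights
experts = [rng.normal(size=(D, D)) for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    """Route one token vector x (shape (D,)) through its top-k experts."""
    logits = x @ W_gate                      # score every expert
    top = np.argsort(logits)[-TOP_K:]        # indices of the k highest-scoring experts
    w = np.exp(logits[top])
    w /= w.sum()                             # softmax over the chosen experts only
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.normal(size=D))
print(y.shape)  # (16,)
```

With 8 experts and k=2, each token pays for 2 expert matmuls instead of 8; scaled up, this is how a 671B-parameter model can run with only 37B parameters active per token.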
More importantly, DeepSeek-V3's training cost was only $5.58 million, far below GPT-4's reported cost of roughly $78 million. Its API pricing also continues the company's low-cost approach:
0.5 yuan per million input tokens on a cache hit (2 yuan on a miss), and 8 yuan per million output tokens.
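At those listed rates, the cost of a call is simple per-million-token arithmetic; the helper below is our own illustrative function, not part of any DeepSeek SDK.

```python
# Listed DeepSeek-V3 API rates, in yuan per million tokens
# (0.5 for input on a cache hit, 2 on a miss, 8 for output).
INPUT_HIT, INPUT_MISS, OUTPUT = 0.5, 2.0, 8.0

def request_cost_yuan(in_tokens, out_tokens, cache_hit=False):
    """Estimated cost of one API call at the listed rates."""
    in_rate = INPUT_HIT if cache_hit else INPUT_MISS
    return (in_tokens * in_rate + out_tokens * OUTPUT) / 1_000_000

# Example: 100k input tokens on a cache miss plus 10k output tokens.
print(request_cost_yuan(100_000, 10_000))  # 0.28
```

A fairly large request thus costs well under one yuan, which is the point of the "affordable approach" noted above.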
The Financial Times called it a "dark horse that shocked the international tech community," judging its performance comparable to models from well-funded American rivals such as OpenAI. Chris McKay, founder of Maginative, went further, arguing that DeepSeek-V3's success may redefine the established playbook for AI model development.
In other words, DeepSeek-V3's success is also read as a direct answer to U.S. restrictions on computing-power exports: the external pressure has, if anything, spurred innovation in China.
DeepSeek's rise has left Silicon Valley restless. Liang Wenfeng, the founder behind the model now stirring the global AI industry, embodies the classic Chinese image of the prodigy: early success, and sustained innovation thereafter.
A good leader of an AI company must understand both technology and business, combine vision with pragmatism, and pair the courage to innovate with engineering discipline. Such well-rounded talent is inherently scarce.
Liang Wenfeng was admitted to the Department of Information and Electronic Engineering at Zhejiang University at the age of 17. At the age of 30, he founded Huanshi Quant (Hquant) and began leading the team to explore fully automated quantitative trading. Liang Wenfeng's story proves that a genius always does the right thing at the right time.
2010: With the launch of CSI 300 stock-index futures, quantitative investing entered a period of opportunity; Liang's team seized it, and its self-managed funds grew rapidly.
2015: Liang Wenfeng co-founded Huanshi Quant with fellow alumni. The following year they launched their first AI model, with trading positions generated by deep learning going live.
2017: Huanshi Quant claimed to have fully AI - enabled its investment strategies.
2018: AI was established as the company's main development direction.
2019: The fund management scale exceeded 10 billion yuan, making it one of the "big four" domestic quantitative private equity firms.
2021: Huanshi Quant became the first domestic quantitative private equity giant to exceed 100 billion yuan in scale.
Now that the company is successful, it is easy to forget its years in the doldrums. And a quantitative trading firm pivoting into AI may look surprising, but it is actually logical: both are data-driven, technology-intensive businesses.
Jensen Huang just wanted to sell gaming graphics cards and make a little money off us gamers; he never expected to end up running the world's largest AI "arsenal." Huanshi's move into AI is similar, and this kind of organic evolution is more viable than many industries' forced adoption of large AI models.
Through quantitative investing, Huanshi Quant accumulated deep experience in data processing and algorithm optimization, and it holds a large stock of A100 chips, giving it strong hardware support for AI training. Since 2017 it has invested heavily in AI computing power, building high-performance clusters such as "Firefly One" and "Firefly Two."
In 2023, Huanshi Quant formally established DeepSeek to focus on research and development of large AI models. DeepSeek inherited Huanshi Quant's technology, talent, and resources, and quickly emerged in the AI field.
In an in-depth interview with "Dark Current," DeepSeek founder Liang Wenfeng also displayed a distinctive strategic vision.
Unlike most Chinese companies that choose to replicate the Llama architecture, DeepSeek starts directly from the model structure, aiming at the grand goal of AGI.
Liang Wenfeng does not shy away from the current gap: Chinese AI still trails the international frontier, and the combined deficit in model structure, training dynamics, and data efficiency means roughly four times the compute is needed to achieve the same result.
This willingness to face challenges head-on comes from his years of experience at Huanshi.
He stresses that open source is not just technical sharing but a cultural statement, and that the real moat is a team's capacity for continuous innovation. DeepSeek's organizational culture encourages bottom-up innovation, de-emphasizes hierarchy, and prizes the passion and creativity of its people.
The team consists mainly of young graduates of top universities and runs on a natural division of labor, letting people explore and collaborate on their own initiative. In hiring, it weighs passion and curiosity over conventional experience and credentials.
On the industry's outlook, Liang believes AI is in a period of explosive technological innovation, not yet an application explosion. China, he argues, needs more original technological innovation and cannot remain stuck at the imitation stage; someone has to stand at the technological frontier.
Even though companies like OpenAI are currently leading, there are still opportunities for innovation.