
Cloud Computing in 2025: AI-Fueled Growth and New Challenges
Cloud computing hits $2 trillion by 2030. AI drives data center growth, power demand, sustainability challenges, and new regulations.

In March 2024, reports emerged that Microsoft and OpenAI were planning a data center project called “Stargate” that would cost $100 billion and require up to 5 gigawatts of power, more electricity than many cities consume. The facility would house millions of AI chips and be 100 times more expensive than the largest data centers operating today [1]. Microsoft was reportedly considering dedicated nuclear power plants as the energy source.
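To make the 5-gigawatt figure concrete, here is a rough back-of-envelope comparison; the average household draw below is an illustrative assumption, not a figure from this article:

```python
# Back-of-envelope: how many average US households could 5 GW supply?
# Assumption (illustrative): a US household averages roughly 1.2 kW
# of continuous electrical demand.
STARGATE_POWER_W = 5e9     # 5 gigawatts, per the reported plans
AVG_HOUSEHOLD_W = 1.2e3    # ~1.2 kW average household draw (assumed)

households = STARGATE_POWER_W / AVG_HOUSEHOLD_W
print(f"~{households / 1e6:.1f} million households")  # ~4.2 million
```

Under those assumptions, a single 5 GW facility would draw as much power as roughly four million homes, which is why dedicated generation becomes part of the conversation.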
China’s response was systematic and scaled. Throughout 2023 and 2024, the country accelerated investment in national computing hubs and regional data-center clusters. Official reports noted billions of dollars directed toward expanding core AI infrastructure, including major computing hubs backed by state-owned enterprises and government funds [2]. The message was clear: whoever builds the infrastructure to train the most powerful AI models wins everything. “Everything” is not just wealth; it is setting the rules for the global economy, redefining military power, and steering the direction of human progress.
This is today’s reality, and only two countries are in the lead: the United States and China. The US has its tech champions like Google and OpenAI, fueled by private money and global brainpower. China has its national AI strategy, backed by a vast market and tightly controlled data.
America bets on its tech giants and star researchers, while China leverages its huge population and coordinated government plans.
But both need the same fundamental resource: immense computing power. This is where the battle gets physical, taking the form of massive, power-hungry data centers that serve as the engine rooms of the AI race. Who builds them faster and smarter might just decide who wins.
To understand the AI battle between the United States and China, it helps to break the competition into layers. Each layer exposes a different strategic advantage, vulnerability, or pressure point. To build AI applications, you need Large Language Models (LLMs); to train those LLMs, you need massive infrastructure; that infrastructure depends on a steady supply of advanced chips, which brings us to semiconductor production. And at the very bottom, governing it all, is the design of the most critical component: the semiconductor itself. At every layer, these two countries fight for dominance.
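The layered dependency just described can be sketched as a simple ordered structure, top of the stack first; the layer names and one-line summaries are only a restatement of the paragraph above:

```python
# The five-layer competition, top of stack first.
# Each layer rests on every layer below it.
AI_STACK = [
    ("Applications",    "consumer and enterprise AI tools"),
    ("LLMs",            "foundation models trained on massive data"),
    ("Infrastructure",  "data centers, cloud platforms, GPU clusters"),
    ("Chip production", "semiconductor fabrication"),
    ("Chip design",     "architectures, EDA tools, core IP"),
]

def dependencies_of(layer_name: str) -> list[str]:
    """Return every layer a given layer ultimately rests on."""
    names = [name for name, _ in AI_STACK]
    return names[names.index(layer_name) + 1:]

print(dependencies_of("LLMs"))
# ['Infrastructure', 'Chip production', 'Chip design']
```

The point of the structure is the one the article makes: pressure applied at any lower layer propagates upward to everything built on top of it.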
The very first layer is the one you and I interact with, and it shapes how businesses and governments engage with technology every day.
In the United States, AI applications have grown through a culture of open experimentation and private sector innovation. Tools like ChatGPT, Gemini, Meta AI, and Claude power everything from text generation to image creation, coding assistants, educational tutors, productivity copilots, research companions, and health or science related discovery engines. This creates enormous global influence because users in many countries experience American AI by default.
China, on the other hand, approaches AI applications through national direction and scale. Platforms like Baidu, Alibaba, Tencent, ByteDance, and iFlytek build AI tools that integrate deeply with commerce, payments, transportation, and social life. They also operate within a unified ecosystem of data, regulation, and state oversight that gives them unparalleled reach inside China. Although geopolitical constraints limit their global expansion, they dominate one of the largest digital markets on earth.
What makes the application layer strategically important is its feedback loop: the more widely AI is deployed, the more data is generated, and the more powerful the underlying models become. China’s digital ecosystem produces enormous real-time behavioral data, fueling companies such as ByteDance and Baidu. Meanwhile, American firms dominate in global SaaS penetration and developer tooling, creating the default platforms on which international startups build. Stanford HAI’s 2024 AI Index Report indicates that organizations in North America, including the US, remain the leading region in corporate‑level AI adoption [3].
Beneath these AI Applications are LLMs trained on massive amounts of data. These models demonstrate not only advanced language understanding but also capabilities in reasoning, coding, and content generation. The competition here is not just about model size, but also about training data diversity, alignment with human values, and real-world adaptability.
America’s approach is to push the raw frontier of capability, often prioritizing power over practicality in the initial sprint. Models like GPT-4 and GPT-4o (OpenAI), Claude 3 (Anthropic), and Gemini (Google) are the current gold standard for general reasoning and fluency. They are massive, trained on a significant portion of the global internet, and designed to be versatile “generalists”.
But training a frontier model is extremely expensive. Analysts at SemiAnalysis estimate that training a model like GPT‑4 could cost on the order of tens of millions to low hundreds of millions of dollars, consuming massive compute power in advanced data centers [4]. This creates a “brute force” ceiling that only a few well-funded US players can currently reach. Furthermore, the concentration of top AI researchers in US labs and universities creates an innovation moat that is hard to breach quickly.
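A rough cost model shows why frontier training lands in that range. Every input below (cluster size, training duration, per-GPU-hour cost) is an illustrative assumption, not a figure from the cited analysis:

```python
# Rough frontier-training cost estimate. All inputs are assumed,
# illustrative values chosen to land in a plausible range.
gpus = 25_000            # GPUs in the training cluster (assumed)
days = 90                # wall-clock training time (assumed)
cost_per_gpu_hour = 2.0  # USD per GPU-hour, amortized hardware + power (assumed)

gpu_hours = gpus * days * 24
cost_usd = gpu_hours * cost_per_gpu_hour
print(f"{gpu_hours / 1e6:.0f}M GPU-hours, ~${cost_usd / 1e6:.0f}M")
```

Even with these conservative placeholder numbers, the bill reaches nine figures, which is consistent with the "tens of millions to low hundreds of millions" estimate and explains why so few players can compete at the frontier.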
China, facing some constraints on accessing the most advanced chips and, to a degree, the highest-quality global data, is pioneering a path of efficiency and specialization. Companies like Baidu (Ernie), Alibaba (Qwen), Tencent (Hunyuan), and 01.AI (Yi) have rapidly developed powerful models. While they may not always beat GPT-4 on every broad benchmark, they are increasingly competitive, as shown by models like Alibaba’s Qwen 2.5 topping certain global performance rankings [4]. Crucially, Chinese models are often fine-tuned from the start for Chinese language, culture, and compliance.
China’s key strength is creating models super-aligned with its specific ecosystem. This isn’t just about language; it’s about deeply understanding Chinese regulatory requirements, socialist values, and unique commercial practices. Their barrier is domestic relevance and vertical integration. They are building brains that are perfectly adapted to run within China’s “walled garden”, making them more immediately useful for domestic applications than a generalized Western model could be. Their progress also demonstrates a formidable ability to maximize results from constrained compute resources through algorithmic ingenuity.
It’s a race of scale versus specialization. The US currently holds the crown for raw, general-purpose capability, protected by its resource advantage. China is rapidly closing the gap by building smarter, more focused, ecosystem-native models. However, creating and running these massive digital brains requires an enormous physical foundation: networks of computing power spanning entire regions. This brings the competition down to the most tangible layer yet: the infrastructure that turns electricity into intelligence.
If applications and LLMs are the visible face of the AI race, infrastructure is the skeleton holding everything upright. This is the layer where raw computational force, data pipelines, networking fabrics, cloud platforms, and chip ecosystems converge into the horsepower that determines how fast a country can iterate, train, deploy, and scale AI systems.
The US Play: The Scale and Silicon Nexus
The United States possesses a formidable, integrated advantage: it designs the world’s most advanced AI chips and operates the world’s largest cloud computing platforms.
Core Infrastructure: Dominance is anchored by hyperscale cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. These giants operate global networks of data centers and have designed their own custom AI chips (like Google’s TPU and AWS’s Trainium) to complement purchases from leaders like NVIDIA. Meta planned a GPU cluster of up to 350,000 NVIDIA H100 GPUs to train its recent LLaMA generation, a scale made possible because the United States sits at the center of the global AI hardware ecosystem and maintains a deep relationship with Nvidia hardware [5].
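For a sense of what a 350,000-GPU cluster means, a hedged peak-throughput estimate follows; the per-GPU figure of roughly one petaFLOP per second of dense BF16 compute is an approximation introduced here, not a specification quoted by the article:

```python
# Aggregate peak compute of a 350,000-GPU H100 cluster.
# Per-GPU throughput is approximate (~1 PFLOP/s dense BF16, assumed);
# real sustained training throughput is well below peak.
GPUS = 350_000
PEAK_FLOPS_PER_GPU = 1e15  # ~1 petaFLOP/s per H100 (approximate)

cluster_flops = GPUS * PEAK_FLOPS_PER_GPU
print(f"~{cluster_flops / 1e18:.0f} exaFLOP/s peak")  # ~350 exaFLOP/s
```

Hundreds of exaFLOPs of peak capacity in a single operator's hands is the scale at which the "silicon to service" pipeline described below operates.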
The Barrier to Entry: The US barrier is a self-reinforcing cycle of scale, innovation, and capital. Building a competitive global cloud AI infrastructure requires hundreds of billions in investment over a decade – a barrier only a few can match. More critically, there’s the close symbiotic relationship between US chip designers (NVIDIA), cloud giants (AWS, Microsoft Azure, Google Cloud) and AI model builders (OpenAI, Anthropic) which creates a high-performance feedback loop. New chip architectures are tested and deployed at scale immediately, directly fueling the next leap in AI capabilities. This creates an ecosystem moat that is about more than just hardware; it’s about a tightly integrated pipeline from silicon to service.

China’s Play: The Sovereign Stack
Facing stringent restrictions on importing advanced AI chips, China’s strategy has crystallized around building a sovereign, domestic AI infrastructure stack, a concept now central to its national policy.
Core Infrastructure: This effort is led by “National Computing Hubs” and domestic cloud champions like Alibaba Cloud and Huawei Cloud. Export bans prevent China from accessing H100 chips at the scale its US competitors do, which led companies like DeepSeek to adopt the less powerful H800 GPU and reportedly train on a cluster of only 2,048 units. Despite these constraints, they claim to have produced competitive results via aggressive optimization, distributed training, and software/hardware co‑design [6].
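The raw scale gap between that reported 2,048-GPU cluster and a hyperscale US deployment like Meta's planned 350,000 H100s can be put in numbers; note that the H800's reduced interconnect bandwidth makes the effective gap larger than this simple GPU count suggests:

```python
# Illustrative ratio of reported cluster sizes: DeepSeek's 2,048 H800s
# versus Meta's planned 350,000 H100s. A count comparison only; it
# ignores per-chip performance and interconnect differences.
deepseek_gpus = 2_048
hyperscaler_gpus = 350_000

ratio = hyperscaler_gpus / deepseek_gpus
print(f"~{ratio:.0f}x more GPUs")  # ~171x
```

Closing a gap of well over two orders of magnitude through software alone is what makes the optimization claims notable, whatever their ultimate accuracy.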
The Barrier to Entry: China’s barrier to entry is centered on restricted access to cutting-edge chips and the enormous capital required to build a fully domestic hardware ecosystem. While firms like Huawei and Biren are advancing chip design, scaling to match US compute capacity remains costly and technically complex. As a result, China compensates with software-level efficiencies, model compression, and distributed training, but the gap in unrestricted access to top-tier silicon still shapes the pace and scale of its AI progress.
Before this layer, it was all software, strategy and electricity. Here, it is pure physics, chemistry, and geopolitics. Production & Design refers to the creation of the most critical hardware: the semiconductors (chips) that power every data center, run every LLM, and enable every application.
US dominance lies in controlling the critical choke points. American companies like NVIDIA, AMD, and Apple design the world’s most advanced AI chip architectures. Even more crucially, the software (EDA tools) used to design them and the core intellectual property (IP) are American-dominated [7], and as of 2025, US export-control policy remains in force. For manufacturing, the US retains a lead in the most sophisticated production through its close relationship with Taiwan Semiconductor Manufacturing Company (TSMC), whose security it effectively underwrites, and it is heavily subsidizing the revival of its own cutting-edge production via the CHIPS Act.
To understand the US advantage clearly, it rests on a cluster of tightly controlled assets:
The world’s leading chip designers (NVIDIA, AMD, Apple) and the EDA software required to design advanced semiconductors.
Strategic alliances that secure global fabrication capacity, especially TSMC’s most advanced nodes.
Nonetheless, it’s not just about making chips; it’s about controlling the entire ecosystem required to produce the most advanced AI hardware. The U.S. and its allies (notably the Netherlands and Japan) dominate critical technologies such as extreme ultraviolet (EUV) lithography machines, supplied by ASML, which are required to fabricate cutting-edge transistors. Because these capabilities are concentrated in a small set of companies and countries, replicating the full ecosystem is extremely difficult. Furthermore, because much of the hardware, design software, and IP originates in the U.S., export-control regulations effectively block their sale or re-export to China unless licensed, creating a structural barrier to China’s ability to build a fully competitive chip-design and manufacturing stack [8].
China’s strategy focuses on building a self‑sufficient chip ecosystem that reduces reliance on foreign technology and gradually narrows the gap in advanced design and manufacturing. Major Chinese firms such as Huawei, HiSilicon, SMIC, and others are part of a broader push toward domestically designed AI accelerators and increasingly advanced manufacturing nodes [9].
Because China is restricted from accessing much of the most advanced lithography and chip‑making equipment, it has focused on expanding its deep‑ultraviolet (DUV) capacity and relying on aggressive process optimization and alternative manufacturing strategies [10]. Supported by massive state-led investment and industrial-policy programs, the country is attempting to climb the design and fabrication ladder simultaneously. It is building an alternative semiconductor stack that could, over time, reduce its vulnerability to export controls and supply-chain pressure.
This combination of scale, optimization, and state orchestration forms the backbone of China’s long-run strategy: to build a semiconductor system that may not match the US on absolute frontier performance today, but could rival it in resilience, independence, and production capacity over time.
These layers add up to more than a technological competition; together they form a contest to define the next world order. Whether one side wins decisively or the race ends in a protracted stalemate, the outcome will shape the lives of billions. If neither side pulls ahead, we may instead enter an era defined by parallel technological spheres and rising geopolitical tension.
Key Consequences to Watch
If one side wins: The dominant country sets the standards, controls the supply chains, shapes the global AI safety framework, and secures disproportionate economic and military advantages.
If neither side wins: The world may fragment into competing technological blocs, with duplicated supply chains, incompatible standards, and escalating competition in both commercial and military AI.
The final question, therefore, is not merely “Who wins?” but “To what end, and under what safeguards?” The true measure of success in the AI race may ultimately be whether its winner can channel this civilization-altering power toward collective human advancement, or whether it becomes an instrument of unchecked control.
References
[1] Griffith, E. “Microsoft and OpenAI Plot $100 Billion Stargate A.I. Supercomputer.” Reuters, March 29, 2024. https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/
[2] Xinhua News Agency. “China invests over 6.1 billion USD in major computing hubs: official.” english.news.cn, 2024. https://english.news.cn/20240829/b1ef10d7cfce43039005a94a021a07bb/c.html
[3] Stanford HAI. AI Index Report 2024, Chapter 4: AI Adoption and Use. Stanford Institute for Human-Centered Artificial Intelligence, 2024. https://hai.stanford.edu/assets/files/hai_ai-index-report-2024_chapter4.pdf
[4] Patel, D. “GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE.” SemiAnalysis, April 5, 2023.
[5] Meta. “Building Meta’s GenAI Infrastructure.” Engineering at Meta blog, 2024. https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/
[6] Center for Strategic and International Studies (CSIS). “DeepSeek, Huawei, Export Controls, and the Future of the US–China AI Race.” 2025. https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race
[7] Harithas, B., and A. Schumacher. “Where the Chips Fall: U.S. Export Controls Under the Biden Administration 2022–2024.” CSIS, December 12, 2024. https://www.csis.org/analysis/where-chips-fall-us-export-controls-under-biden-administration-2022-2024
[8] Center for Strategic and International Studies (CSIS). “U.S. Export Controls on AI and Semiconductors: Global Implications.” 2024. laweconcenter.org/resources/us-export-controls-on-ai-and-semiconductors-two-divergent-visions.
[9] U.S.–China Economic and Security Review Commission. “Made in China 2025: Evaluating China’s Performance.” 2024. https://www.uscc.gov/research/made-china-2025-evaluating-chinas-performance
[10] Center for Strategic and International Studies (CSIS). “China’s New Strategy for Waging the Microchip Tech War.” 2024.