
The Energy Constraint
How AI, electrification, and grid bottlenecks are colliding faster than infrastructure can adapt

The rapid expansion of artificial intelligence has shifted the locus of competition from purely algorithmic innovation to physical infrastructure. While attention frequently focuses on GPU shortages, chip fabrication, and model architectures, the underlying constraint for many large-scale AI data centers is access to electrical power. In many regions, high-capacity facilities cannot connect to the grid on usable timelines, limiting deployment despite available land, capital, or software capability. Interconnection queues, the administrative processes that govern connections to the electrical grid, are now a critical factor in determining which AI facilities can operate. These queues, managed by Independent System Operators (ISOs) and utilities, require any large new load or generator to submit requests, undergo feasibility and system impact studies, finance necessary grid upgrades, and receive formal permission to connect. Timelines have grown from one to two years historically to five to seven years, sometimes longer, creating systemic challenges for AI infrastructure deployment.
This article examines the interconnection queue crisis, analyzing its historical evolution, factors driving growth, implications for AI data centers, the mismatch between capital and physical infrastructure, economic filtering effects, and relevant policy considerations. The analysis demonstrates that access to reliable and timely electrical interconnection is as decisive as algorithmic or computational innovations for AI deployment.
Interconnection queues are formal administrative mechanisms managed by transmission operators and regulated by federal authorities to control access to the high-voltage electricity grid. These queues exist to ensure that proposed connections are technically feasible, will not compromise system reliability, and that developers cover costs associated with necessary network upgrades. Any large load or generator must submit an interconnection request, undergo feasibility and system impact studies, allocate costs for required grid modifications, and obtain formal approval prior to energization [1]. Historically, interconnection queues processed requests within a timeframe of approximately one to two years, reflecting the moderate growth in generation and load that characterized the U.S. electricity system in the early 2000s [1].
In recent years, the volume of interconnection requests has grown significantly. According to Lawrence Berkeley National Laboratory, as of the end of 2024, approximately 10,300 projects were actively seeking transmission interconnection across the United States. These projects represented roughly 1,400 gigawatts of generation capacity and 890 gigawatts of energy storage capacity [1]. These figures span all major ISOs/RTOs and non-ISO balancing authorities, covering nearly the entire U.S. electricity grid. Extended queue durations have become common due to several structural factors, including high volumes of interconnection requests, underbuilt transmission infrastructure, and the lengthy processes required for feasibility studies and system impact analyses [1]. Median project durations from request submission to commercial operation have increased, with many projects now waiting four years or more before obtaining final approval to connect [1].
These developments create operational and financial uncertainty for high-capacity electricity users. Facilities that require reliable, high-continuity power, such as large-scale AI data centers, face significant risk when interconnection timelines are uncertain. Delays in grid access can affect site selection, investment planning, and the timing of project deployment [1]. The expansion of interconnection queues illustrates a structural limitation in U.S. grid design: the system was not originally built to accommodate simultaneous rapid growth in generation and large electricity loads. This limitation has become a critical constraint for infrastructure-intensive industries, including those driving the AI economy.
The expansion of interconnection queues in the United States is driven by multiple structural factors, which collectively extend the timeline for new generation and load projects. These factors have significant implications for high-capacity electricity consumers, such as AI data centers, that require reliable and continuous power.
One of the primary contributors to queue growth is the rapid expansion of renewable energy resources, particularly utility-scale solar and wind projects. According to Lawrence Berkeley National Laboratory, a significant portion of the 10,300 projects in the U.S. interconnection queue at the end of 2024 consists of renewable generation proposals, collectively representing over 800 gigawatts of potential capacity [1]. Renewable energy projects often require high-capacity transmission access and extensive system impact studies due to variability and integration requirements. The Federal Energy Regulatory Commission confirms that the volume of renewable interconnection requests has increased sharply over the past decade, driven by state clean energy mandates and declining costs of renewable technologies [2]. This surge has contributed to systemic delays across most regional transmission organizations (RTOs) and independent system operators (ISOs).
Increasing electrification of transport and heating is another significant factor. The adoption of electric vehicles and electrification of building heating systems increases peak loads and overall electricity demand. While individual electrified loads are typically smaller than utility-scale generators, the cumulative effect on local distribution networks and transmission interconnections has contributed to longer queue times [2]. LBNL reports that some regions experience delays not only from generation projects but also from large industrial and commercial loads, amplifying congestion within interconnection queues [1].
Modern AI data centers have emerged as a distinct driver of queue growth. Facilities supporting AI workloads often require hundreds of megawatts per site, operating continuously with high reliability. These facilities compete for limited interconnection points and frequently necessitate substantial transmission upgrades or dedicated substation construction [1]. The inclusion of AI data center projects has further lengthened queue timelines, especially in regions already experiencing high renewable interconnection demand.
The U.S. transmission system was designed for slower rates of growth in generation and load. LBNL identifies network constraints, including limited line capacity, substation congestion, and constrained corridors, as key factors delaying interconnection approvals [1]. FERC’s 2023 State of the Markets Report similarly notes that the existing transmission network has not kept pace with the rapid expansion of generation and load, particularly in regions with high renewable deployment [2]. Consequently, even financially viable projects may experience multi-year delays due to insufficient transmission infrastructure.
Queue study processes were historically structured for gradual, sequential growth. Many RTOs and ISOs continue to evaluate requests using first-come, first-served batch-study models, requiring repeated re-studies as new projects enter the queue [1]. These processes, while ensuring reliability, were not designed to accommodate the simultaneous surge of renewable generation, electrification, and large industrial loads, resulting in further systemic delays.
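The difference between serial and cluster study processing can be sketched with a toy model. All durations and the cluster size below are illustrative assumptions, not figures from the cited reports; the point is only that batching requests amortizes study work across a cluster instead of serializing it per project.

```python
# Toy comparison of interconnection study-queue models. All durations and
# sizes are illustrative assumptions, not figures from the cited reports.

def serial_months(n_projects, study_months=6):
    # First-come, first-served: the last request waits for every study
    # ahead of it. Restudies triggered by withdrawals would add more delay.
    return n_projects * study_months

def cluster_months(n_projects, cluster_size=20, cluster_study_months=12):
    # Cluster model: requests are batched and each batch is studied as a
    # group, so study time is shared across cluster members.
    n_clusters = -(-n_projects // cluster_size)  # ceiling division
    return n_clusters * cluster_study_months

n = 100
print(f"serial:  ~{serial_months(n)} months until the last of {n} requests clears")
print(f"cluster: ~{cluster_months(n)} months until the last of {n} requests clears")
```

Even this crude sketch shows why FERC's Order No. 2023 pushed operators from serial toward cluster studies: in the serial model the last entrant's wait grows linearly with queue depth, while in the cluster model it grows only with the number of batches.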
The combined effect of these drivers is a fundamental mismatch between the speed of capital deployment and the physical realities of grid expansion. Interconnection queues now act as economic and operational filters, determining which projects can connect promptly and which face prolonged delays [1]. For energy-intensive industries, including AI, this creates a strategic constraint: project feasibility depends not only on financing and site selection but also on the pace at which transmission infrastructure can be expanded.
Interconnection queues have become a binding constraint for artificial intelligence data center deployment because modern AI workloads impose sustained, high-density electricity demand that must be supported by reliable grid access. As AI adoption accelerates, delays in securing interconnection approval increasingly determine where data centers can be built and how quickly new compute capacity can come online.
AI data centers require substantially higher power density than traditional enterprise or cloud facilities. AI-focused data centers increasingly operate at rack densities exceeding 30 kilowatts, compared with historical averages of 5 to 10 kilowatts for conventional data centers [3]. These facilities are designed for continuous operation, with utilization levels significantly higher than legacy workloads, increasing total electricity consumption per site. Projections indicate that global data center electricity demand could more than double by 2030, with AI workloads accounting for a disproportionate share of incremental growth [3]. Power demand growth is driven by both large-scale model training and the expansion of inference workloads, which require persistent availability rather than intermittent compute cycles. Large industrial loads, including data centers, now enter transmission interconnection queues alongside new generation projects, competing for limited grid capacity [1]. This competition increases queue congestion and extends approval timelines for both load and generation projects.
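To see how rack density translates into site-level power requirements, a back-of-the-envelope calculation helps. The rack count and PUE value below are hypothetical assumptions chosen for illustration; only the per-rack densities mirror the comparison above.

```python
# Back-of-the-envelope facility power estimate. The rack count and PUE are
# hypothetical assumptions; per-rack densities mirror the figures above.

def facility_power_mw(racks, kw_per_rack, pue=1.3):
    # Total draw = IT load scaled by power usage effectiveness (PUE),
    # which accounts for cooling and electrical distribution overhead.
    it_load_mw = racks * kw_per_rack / 1000
    return it_load_mw * pue

conventional = facility_power_mw(racks=2000, kw_per_rack=8)   # legacy density
ai_class     = facility_power_mw(racks=2000, kw_per_rack=30)  # AI-class density
print(f"conventional: {conventional:.1f} MW, AI-class: {ai_class:.1f} MW")
```

At the same footprint, moving from 8 kW to 30 kW racks nearly quadruples the grid connection the site must secure, which is why density, not floor space, drives interconnection requests.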
In several major data center markets, the time required to secure grid connection and deliver power now exceeds 36 months [3]. These timelines represent a significant increase from earlier periods when power delivery could often be achieved within 12 to 18 months in unconstrained regions. Power availability has become a primary determinant of site selection. Locations with faster access to grid capacity are increasingly favored, even when land costs, construction costs, or operating expenses are higher [3]. Sites without timely interconnection approval cannot support AI data center deployment regardless of other economic advantages.
Median interconnection queue durations for large projects have increased substantially over the past decade, with many projects remaining in queue for multiple years before receiving final approval or withdrawing [1]. Extended queue exposure introduces uncertainty into project timelines and capital deployment schedules.
Grid congestion directly constrains the pace at which AI compute capacity can scale. AI data centers require predictable power delivery schedules to align with hardware procurement, construction sequencing, and customer demand. When grid access timelines are uncertain, developers face increased risk of delayed commissioning or underutilized assets. Electricity infrastructure expansion has not kept pace with the combined growth of renewable generation, electrification of transport and heating, and large-scale industrial loads [1]. Transmission systems were not designed for simultaneous growth in both generation and load at current rates, creating structural congestion in interconnection queues. As AI workloads scale, power availability rather than compute hardware availability increasingly determines deployment feasibility [3]. In constrained regions, grid access has become the limiting factor for new AI capacity.
Interconnection queues now function as economic filters rather than neutral administrative processes. Projects that can fund required transmission upgrades or absorb extended delays are more likely to reach operation. Projects without these capabilities face higher withdrawal rates. Interconnection queue data shows that a significant share of projects exit the queue before completion due to rising upgrade costs, prolonged study timelines, or uncertainty regarding final approval [1]. These dynamics favor large, well-capitalized operators that can tolerate delay risk and finance grid upgrades. Access to physical grid infrastructure now plays a decisive role in determining which firms can deploy large-scale AI systems. Infrastructure constraints, rather than algorithmic capability alone, increasingly shape competitive outcomes in the AI economy [1][3].
The growing scale of AI-related electricity demand has intensified debate over whether large loads should be permitted to finance grid upgrades to accelerate interconnection. Self-funded upgrades could reduce public expenditure and accelerate capacity additions, but they also risk creating unequal access to critical infrastructure. Without reforms to interconnection processes and transmission planning, grid access constraints are expected to persist as AI-driven electricity demand continues to grow [1][3]. Grid infrastructure capacity is therefore likely to remain a binding constraint on AI deployment in the near to medium term.
Interconnection queues in the United States have grown to unprecedented scale in recent years, with capacity active in queues now measured in the thousands of gigawatts. Nearly 2,290 gigawatts (GW) of generation and storage capacity were actively seeking grid interconnection as of the end of 2024, and the typical project built that year spent about 55 months in the queue from initial request to commercial operation, significantly longer than in previous decades [1]. These extended timelines raise the implicit cost of capital for project developers. Firms that can sustain long development periods without immediate returns, typically large corporations, established energy developers, and hyperscalers, are better able to absorb the financial impact of multi‑year delays. They can integrate extended interconnection timelines into broader investment strategies and spread financing risk across diversified portfolios.
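The capital-cost effect of queue time can be illustrated with simple compounding arithmetic. The project size and discount rate below are hypothetical; only the 55-month median queue duration comes from the LBNL figures cited above.

```python
# Illustrative carrying-cost arithmetic. The capital amount and rate are
# hypothetical; the 55-month duration is the LBNL median cited above.

def carrying_cost_musd(capital_musd, annual_rate, months_in_queue):
    # Capital committed up front compounds at the firm's cost of capital
    # for the whole queue period before the project earns any revenue.
    years = months_in_queue / 12
    return capital_musd * ((1 + annual_rate) ** years - 1)

cost = carrying_cost_musd(capital_musd=500, annual_rate=0.08, months_in_queue=55)
print(f"~${cost:.0f}M of carrying cost accrues before first revenue")
```

Under these assumptions a $500M project accrues on the order of $200M in financing cost before energization, which is manageable for a diversified hyperscaler but can be prohibitive for a thinly capitalized developer.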
Conversely, entities with more constrained financial resources often struggle to justify capital deployment when revenue is deferred by years. Long waits increase financing costs, reduce predictability for project timelines, and heighten exposure to market and regulatory uncertainty, thereby limiting participation by developers lacking robust balance sheets or long investment horizons.
Securing grid access often requires developers to finance network upgrades, such as new transmission lines, substations, and system enhancements, before a formal interconnection agreement can be executed. Rising interconnection costs have become a major barrier for many projects. A Department of Energy analysis of interconnection costs in major U.S. grid regions found that average assigned network upgrade costs rose substantially for projects remaining in queues, with some territories experiencing doubling or even 800% increases in upgrade cost estimates over recent cycles [4].
Developers with access to significant capital, including utilities, major energy companies, and large tech firms, are more capable of pre‑funding these upgrades. They can absorb upfront expenditures, negotiate cost allocations with transmission operators, and manage cost escalation risk through financial hedging or internal project financing.
In contrast, smaller developers, particularly those pursuing renewable or storage projects without deep financial reserves, face more severe constraints. High upfront cost obligations can render otherwise viable projects financially infeasible, leading to queue withdrawal, project cancellation, or redirection to less congested (but often less optimal) interconnection points. This dynamic reinforces the competitive advantage of firms that can finance or underwrite transmission upgrades early in the interconnection process, while limiting options for less capitalized participants.
The structural characteristics of current interconnection processes disproportionately disadvantage smaller developers. Active interconnection queues in the U.S. have grown to levels more than twice the total installed generation capacity of the existing power plant fleet, with the majority of queued capacity tied up in solar, wind, and storage projects [1]. However, long wait times and high assigned costs have contributed to high withdrawal rates, indicating that many projects, especially those lacking substantial financial resilience, never reach commercial operation.
Because the queue backlog concentrates study, engineering, and transmission resources on a large volume of competing requests, smaller developers often encounter lengthy study durations and escalating cost assignments that they cannot reliably finance or forecast. Projects that remain in queue for years face shifting cost estimates, revised upgrade obligations, and repeated restudies as new backlog entrants alter system impact assumptions. These conditions elevate the complexity and risk associated with interconnection, limiting the ability of smaller firms to plan and execute projects effectively. In aggregate, this dynamic transforms interconnection queues into economic bottlenecks that favor well‑capitalized players while raising barriers for smaller entrants. Without reforms that address cost allocation, queue structure, and infrastructure planning, smaller developers will continue to face systemic disadvantages in accessing transmission capacity for both generation and large load projects.
Although interconnection queue delays create economic and deployment challenges, there is a technical justification for why current interconnection procedures cannot be eliminated entirely. Grid operators and regulators emphasize that interconnection studies and requirements are fundamentally designed to protect system reliability, and preserving that reliability remains a priority even as policy reforms aim to reduce backlog and friction.
The Federal Energy Regulatory Commission (FERC) has highlighted that previous interconnection processes were insufficiently equipped to accommodate the rapid growth in transmission connection requests without compromising operational reliability. In Order No. 2023, FERC adopted a suite of reforms to improve queue efficiency, but these reforms do not remove the need for structured planning and reliability analysis; instead, they reorganize how studies are conducted (for example, shifting from serial to cluster study processes to better assess reliability and cost impacts across multiple projects) [5]. FERC’s reforms reflect a recognition that interconnection studies must still ensure that new generators or loads do not create overloads, voltage stability issues, or adverse interactions on the bulk power system. The cluster study mechanism and increased readiness requirements, which include enhanced demonstration of site control and financial readiness, are intended to reduce speculative entries and focus reliability analysis on projects that are likely to be built [6].
Moreover, the sheer scale of queued capacity influences reliability considerations. According to analysts, as of 2024 the U.S. had over 2,200 gigawatts (GW) of interconnection requests, around 1.7 times the size of the country’s existing grid‑scale generation fleet, with renewables accounting for over 94% of that backlog [7]. The rapid influx of variable renewable resources, which have different operating characteristics than traditional dispatchable plants, requires careful engineering analysis to ensure stable operation, particularly under contingency conditions such as high demand or equipment failures.
In practice, the reliability perspective asserts that some procedural friction is necessary. Without adequate study windows, cost allocation processes, and system impact analysis, the risk of unanticipated grid stress, such as equipment overloads or voltage issues, increases, with potential consequences for outages or failure to meet demand. FERC’s ongoing authority and annual compliance reviews aim to balance speed with reliability, ensuring that reform efforts reduce unnecessary delays while preserving the grid’s ability to operate securely as it continues to evolve [5][6].
The interconnection queue crisis reveals a structural constraint on AI deployment that is often overlooked. As data center power requirements grow, access to timely and reliable grid connections increasingly determines which projects move forward and which stall. Interconnection processes, once administrative, now function as economic filters shaped by long build timelines, capital intensity, and regulatory limits.
This mismatch between fast-moving digital investment and slow-moving physical infrastructure places a premium on scale, balance-sheet strength, and risk tolerance. While interconnection procedures are essential for maintaining grid reliability, their current form has become a binding constraint on AI infrastructure expansion. Until grid planning, transmission development, and interconnection processes better align with the pace of load growth, power access, not compute capability, will remain a decisive factor in AI deployment.
[1] Queued Up: 2025 Edition, Characteristics of Power Plants Seeking Transmission Interconnection as of the End of 2024 | Lawrence Berkeley National Laboratory, U.S. Department of Energy (2025)
https://emp.lbl.gov/sites/default/files/2024-04/Queued%20Up%202024%20Edition%20-%20Webinar%20Version.pdf
[2] FERC State of the Markets Report 2023 | Federal Energy Regulatory Commission (2023)
https://www.ferc.gov/sites/default/files/2024-03/24_State-of-the-market_0320_1715.pdf
[3] The next big shifts in AI workloads and hyperscaler strategies | McKinsey & Company (2024)
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-next-big-shifts-in-ai-workloads-and-hyperscaler-strategies
[4] Tackling High Costs and Long Delays for Clean Energy Interconnection | U.S. Department of Energy (2025)
https://www.energy.gov/eere/i2x/articles/tackling-high-costs-and-long-delays-clean-energy-interconnection
[5] Order No. 2023: Improvements to Generator Interconnection Procedures and Agreements | Federal Energy Regulatory Commission (2023)
https://www.ferc.gov/explainer-interconnection-final-rule
[6] Explainer on the Interconnection Rule and Cluster Study Process | Federal Energy Regulatory Commission (2023)
https://www.ferc.gov/news-events/news/ferc-proposes-interconnection-reforms-address-queue-backlogs
[7] Interconnection Queues Show Swelling Volume but FERC Reforms Slowly Taking Hold | S&P Global Market Intelligence (2024)
https://www.spglobal.com/market-intelligence/en/news-insights/research/interconnection-queues-show-swelling-volume-but-ferc-reforms-slowly-taking-hold
The rapid expansion of artificial intelligence has shifted the locus of competition from purely algorithmic innovation to physical infrastructure. While attention frequently focuses on GPU shortages, chip fabrication, and model architectures, the underlying constraint for many large-scale AI data centers is access to electrical power. High-capacity facilities cannot connect to the grid on usable timelines, limiting deployment despite available land, capital, or software capability. Interconnection queue, the administrative processes that govern connections to the electrical grid, are now a critical factor in determining which AI facilities can operate. These queues, managed by Independent System Operators (ISOs) and utilities, require any large new load or generator to submit requests, undergo feasibility and system impact studies, finance necessary grid upgrades, and receive formal permission to connect. Timelines have grown from one to two years historically to five to seven years, sometimes longer, creating systemic challenges for AI infrastructure deployment.
This article examines the interconnection queue crisis, analyzing its historical evolution, factors driving growth, implications for AI data centers, the mismatch between capital and physical infrastructure, economic filtering effects, and relevant policy considerations. The analysis demonstrates that access to reliable and timely electrical interconnection is as decisive as algorithmic or computational innovations for AI deployment.
Interconnection queues are formal administrative mechanisms managed by transmission operators and regulated by federal authorities to control access to the high-voltage electricity grid. These queues exist to ensure that proposed connections are technically feasible, will not compromise system reliability, and that developers cover costs associated with necessary network upgrades. Any large load or generator must submit an interconnection request, undergo feasibility and system impact studies, allocate costs for required grid modifications, and obtain formal approval prior to energization [1]. Historically, interconnection queues processed requests within a timeframe of approximately one to two years, reflecting the moderate growth in generation and load that characterized the U.S. electricity system in the early 2000s [1].
In recent years, the volume of interconnection requests has grown significantly. According to Lawrence Berkeley National Laboratory, as of the end of 2024, approximately 10,300 projects were actively seeking transmission interconnection across the United States. These projects represented roughly 1,400 gigawatts of generation capacity and 890 gigawatts of energy storage capacity [1]. These figures span all major ISOs/RTOs and non-ISO balancing authorities, covering nearly the entire U.S. electricity grid. Extended queue durations have become common due to several structural factors, including high volumes of interconnection requests, underbuilt transmission infrastructure, and the lengthy processes required for feasibility studies and system impact analyses [1]. Median project durations from request submission to commercial operation have increased, with many projects now waiting four years or more before obtaining final approval to connect [1].
These developments create operational and financial uncertainty for high-capacity electricity users. Facilities that require reliable, high-continuity power, such as large-scale AI data centers, face significant risk when interconnection timelines are uncertain. Delays in grid access can affect site selection, investment planning, and the timing of project deployment [1]. The expansion of interconnection queues illustrates a structural limitation in U.S. grid design: the system was not originally built to accommodate simultaneous rapid growth in generation and large electricity loads. This limitation has become a critical constraint for infrastructure-intensive industries, including those driving the AI economy.
The expansion of interconnection queues in the United States is driven by multiple structural factors, which collectively extend the timeline for new generation and load projects. These factors have significant implications for high-capacity electricity consumers, such as AI data centers, that require reliable and continuous power.
One of the primary contributors to queue growth is the rapid expansion of renewable energy resources, particularly utility-scale solar and wind projects. According to Lawrence Berkeley National Laboratory, a significant portion of the 10,300 projects in the U.S. interconnection queue at the end of 2024 consists of renewable generation proposals, collectively representing over 800 gigawatts of potential capacity [1]. Renewable energy projects often require high-capacity transmission access and extensive system impact studies due to variability and integration requirements. The Federal Energy Regulatory Commission confirms that the volume of renewable interconnection requests has increased sharply over the past decade, driven by state clean energy mandates and declining costs of renewable technologies [2]. This surge has contributed to systemic delays across most regional transmission organizations (RTOs) and independent system operators (ISOs).
Increasing electrification of transport and heating is another significant factor. The adoption of electric vehicles and electrification of building heating systems increases peak loads and overall electricity demand. While individual electrified loads are typically smaller than utility-scale generators, the cumulative effect on local distribution networks and transmission interconnections has contributed to longer queue times [2]. LBNL reports that some regions experience delays not only from generation projects but also from large industrial and commercial loads, amplifying congestion within interconnection queues [1].
Modern AI data centers have emerged as a distinct driver of queue growth. Facilities supporting AI workloads often require hundreds of megawatts per site, operating continuously with high reliability. These facilities compete for limited interconnection points and frequently necessitate substantial transmission upgrades or dedicated substation construction [1]. The inclusion of AI data center projects has further lengthened queue timelines, especially in regions already experiencing high renewable interconnection demand.
The U.S. transmission system was designed for slower rates of growth in generation and load. LBNL identifies network constraints, including limited line capacity, substation congestion, and constrained corridors, as key factors delaying interconnection approvals [1]. FERC’s 2023 State of the Markets Report similarly notes that the existing transmission network has not kept pace with the rapid expansion of generation and load, particularly in regions with high renewable deployment [2]. Consequently, even financially viable projects may experience multi-year delays due to insufficient transmission infrastructure.
Queue study processes were historically structured for gradual, sequential growth. Many RTOs and ISOs continue to evaluate requests using first-come, first-served batch-study models, requiring repeated re-studies as new projects enter the queue [1]. These processes, while ensuring reliability, were not designed to accommodate the simultaneous surge of renewable generation, electrification, and large industrial loads, resulting in further systemic delays.
The combined effect of these drivers is a fundamental mismatch between the speed of capital deployment and the physical realities of grid expansion. Interconnection queues now act as economic and operational filters, determining which projects can connect promptly and which face prolonged delays [1]. For energy-intensive industries, including AI, this creates a strategic constraint: project feasibility depends not only on financing and site selection but also on the pace at which transmission infrastructure can be expanded.
Interconnection queues have become a binding constraint for artificial intelligence data center deployment because modern AI workloads impose sustained, high-density electricity demand that must be supported by reliable grid access. As AI adoption accelerates, delays in securing interconnection approval increasingly determine where data centers can be built and how quickly new compute capacity can come online.
AI data centers require substantially higher power density than traditional enterprise or cloud facilities. AI-focused data centers increasingly operate at rack densities exceeding 30 kilowatts, compared with historical averages of 5 to 10 kilowatts for conventional data centers [3]. These facilities are designed for continuous operation, with utilization levels significantly higher than legacy workloads, increasing total electricity consumption per site. Projections indicate that global data center electricity demand could more than double by 2030, with AI workloads accounting for a disproportionate share of incremental growth [3]. Power demand growth is driven by both large-scale model training and the expansion of inference workloads, which require persistent availability rather than intermittent compute cycles. Large industrial loads, including data centers, now enter transmission interconnection queues alongside new generation projects, competing for limited grid capacity [1]. This competition increases queue congestion and extends approval timelines for both load and generation projects.
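To put these densities in perspective, a back-of-the-envelope sketch shows how rack density alone turns a single facility into a grid-scale load. The rack count and utilization figure below are illustrative assumptions; only the 30-kilowatt AI density and the 5-to-10-kilowatt historical range come from the figures cited above.

```python
# Back-of-the-envelope sketch of why AI rack densities translate into
# grid-scale loads. Rack count and utilization are illustrative
# assumptions, not figures from the cited reports.

RACKS = 5_000                 # hypothetical large facility
AI_KW_PER_RACK = 30           # AI-focused density cited in the text
LEGACY_KW_PER_RACK = 7.5      # midpoint of the 5-10 kW historical range
HOURS_PER_YEAR = 8_760
UTILIZATION = 0.85            # assumed near-continuous operation

def facility_mw(kw_per_rack: float, racks: int = RACKS) -> float:
    """IT load in megawatts for a given rack density."""
    return kw_per_rack * racks / 1_000

ai_mw = facility_mw(AI_KW_PER_RACK)          # 150 MW class
legacy_mw = facility_mw(LEGACY_KW_PER_RACK)  # under 40 MW

annual_gwh = ai_mw * HOURS_PER_YEAR * UTILIZATION / 1_000
print(f"AI facility: {ai_mw:.0f} MW vs legacy: {legacy_mw:.1f} MW")
print(f"Assumed annual AI energy draw: {annual_gwh:,.0f} GWh")
```

At the assumed scale, the same building shell demands roughly four times the interconnection capacity, which is why such facilities enter transmission-level queues rather than connecting at distribution voltages.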
In several major data center markets, the time required to secure grid connection and deliver power now exceeds 36 months [3]. These timelines represent a significant increase from earlier periods when power delivery could often be achieved within 12 to 18 months in unconstrained regions. Power availability has become a primary determinant of site selection. Locations with faster access to grid capacity are increasingly favored, even when land costs, construction costs, or operating expenses are higher [3]. Sites without timely interconnection approval cannot support AI data center deployment regardless of other economic advantages.
Median interconnection queue durations for large projects have increased substantially over the past decade, with many projects remaining in queue for multiple years before receiving final approval or withdrawing [1]. Extended queue exposure introduces uncertainty into project timelines and capital deployment schedules.
Grid congestion directly constrains the pace at which AI compute capacity can scale. AI data centers require predictable power delivery schedules to align with hardware procurement, construction sequencing, and customer demand. When grid access timelines are uncertain, developers face increased risk of delayed commissioning or underutilized assets. Electricity infrastructure expansion has not kept pace with the combined growth of renewable generation, electrification of transport and heating, and large-scale industrial loads [1]. Transmission systems were not designed for simultaneous growth in both generation and load at current rates, creating structural congestion in interconnection queues. As AI workloads scale, power availability rather than compute hardware availability increasingly determines deployment feasibility [3]. In constrained regions, grid access has become the limiting factor for new AI capacity.
Interconnection queues now function as economic filters rather than neutral administrative processes. Projects that can fund required transmission upgrades or absorb extended delays are more likely to reach operation. Projects without these capabilities face higher withdrawal rates. Interconnection queue data shows that a significant share of projects exit the queue before completion due to rising upgrade costs, prolonged study timelines, or uncertainty regarding final approval [1]. These dynamics favor large, well-capitalized operators that can tolerate delay risk and finance grid upgrades. Access to physical grid infrastructure now plays a decisive role in determining which firms can deploy large-scale AI systems. Infrastructure constraints, rather than algorithmic capability alone, increasingly shape competitive outcomes in the AI economy [1][3].
The growing scale of AI-related electricity demand has intensified debate over whether large loads should be permitted to finance grid upgrades to accelerate interconnection. Self-funded upgrades could reduce public expenditure and accelerate capacity additions, but they also risk creating unequal access to critical infrastructure. Without reforms to interconnection processes and transmission planning, grid access constraints are expected to persist as AI-driven electricity demand continues to grow [1][3]. Grid infrastructure capacity is therefore likely to remain a binding constraint on AI deployment in the near to medium term.
Interconnection queues in the United States have grown to unprecedented scale in recent years, with capacity active in queues now measured in the thousands of gigawatts. Nearly 2,290 gigawatts (GW) of generation and storage capacity were actively seeking grid interconnection as of the end of 2024, and the typical project built that year spent about 55 months in the queue from initial request to commercial operation, significantly longer than in previous decades [1]. These extended timelines raise the implicit cost of capital for project developers. Firms that can sustain long development periods without immediate returns, typically large corporations, established energy developers, and hyperscalers, are better able to absorb the financial impact of multi‑year delays. They can integrate extended interconnection timelines into broader investment strategies and spread financing risk across diversified portfolios.
Conversely, entities with more constrained financial resources often struggle to justify capital deployment when revenue is deferred by years. Long waits increase financing costs, reduce predictability for project timelines, and heighten exposure to market and regulatory uncertainty, thereby limiting participation by developers lacking robust balance sheets or long investment horizons.
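The financing penalty of queue delay can be made concrete with a simple discounting sketch. The annual revenue figure, discount rate, and operating horizon below are illustrative assumptions; only the roughly 55-month queue duration reflects the figures cited above.

```python
# Sketch of how queue delay erodes project value through discounting.
# Revenue, discount rate, and horizon are illustrative assumptions;
# only the ~55-month queue duration comes from the cited data.

def present_value(annual_revenue: float, rate: float,
                  delay_years: float, horizon_years: int) -> float:
    """PV of a flat annual revenue stream that begins after `delay_years`."""
    return sum(
        annual_revenue / (1 + rate) ** (delay_years + t)
        for t in range(1, horizon_years + 1)
    )

REVENUE = 100.0   # $M per year, hypothetical
RATE = 0.10       # assumed cost of capital
HORIZON = 20      # assumed operating years

pv_fast = present_value(REVENUE, RATE, delay_years=1.0,
                        horizon_years=HORIZON)
pv_queued = present_value(REVENUE, RATE, delay_years=55 / 12,
                          horizon_years=HORIZON)

print(f"PV with 12-month interconnection: ${pv_fast:,.0f}M")
print(f"PV with 55-month interconnection: ${pv_queued:,.0f}M")
print(f"Value lost to queue delay: {1 - pv_queued / pv_fast:.0%}")
```

Under these assumptions, moving from a 12-month to a 55-month interconnection timeline erases roughly a quarter to a third of the project's present value, which is the arithmetic behind the filtering effect: only firms that can absorb that loss, or hedge it across a portfolio, remain in the queue.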
Securing grid access often requires developers to finance network upgrades, such as new transmission lines, substations, and system enhancements, before a formal interconnection agreement can be executed. Rising interconnection costs have become a major barrier for many projects. A Department of Energy analysis of interconnection costs in major U.S. grid regions found that average assigned network upgrade costs rose substantially for projects remaining in queues, with some territories experiencing doubling or even 800% increases in upgrade cost estimates over recent cycles [4].
Developers with access to significant capital, including utilities, major energy companies, and large tech firms, are more capable of pre‑funding these upgrades. They can absorb upfront expenditures, negotiate cost allocations with transmission operators, and manage cost escalation risk through financial hedging or internal project financing.
In contrast, smaller developers, particularly those pursuing renewable or storage projects without deep financial reserves, face more severe constraints. High upfront cost obligations can render otherwise viable projects financially infeasible, leading to queue withdrawal, project cancellation, or redirection to less congested (but often less optimal) interconnection points. This dynamic reinforces the competitive advantage of firms that can finance or underwrite transmission upgrades early in the interconnection process, while limiting options for less capitalized participants.
The structural characteristics of current interconnection processes disproportionately disadvantage smaller developers. Active interconnection queues in the U.S. have grown to levels more than twice the total installed generation capacity of the existing power plant fleet, with the majority of queued capacity tied up in solar, wind, and storage projects [1]. However, long wait times and high assigned costs have contributed to high withdrawal rates, indicating that many projects, especially those lacking substantial financial resilience, never reach commercial operation.
Because the queue backlog concentrates study, engineering, and transmission resources on a large volume of competing requests, smaller developers often encounter lengthy study durations and escalating cost assignments that they cannot reliably finance or forecast. Projects that remain in queue for years face shifting cost estimates, revised upgrade obligations, and repeated restudies as new backlog entrants alter system impact assumptions. These conditions elevate the complexity and risk associated with interconnection, limiting the ability of smaller firms to plan and execute projects effectively. In aggregate, this dynamic transforms interconnection queues into economic bottlenecks that favor well‑capitalized players while raising barriers for smaller entrants. Without reforms that address cost allocation, queue structure, and infrastructure planning, smaller developers will continue to face systemic disadvantages in accessing transmission capacity for both generation and large load projects.
Although interconnection queue delays create economic and deployment challenges, there is a technical justification for why current interconnection procedures cannot be eliminated entirely. Grid operators and regulators emphasize that interconnection studies and requirements are fundamentally designed to protect system reliability, and preserving that reliability remains a priority even as policy reforms aim to reduce backlog and friction.
The Federal Energy Regulatory Commission (FERC) has highlighted that previous interconnection processes were insufficiently equipped to accommodate the rapid growth in transmission connection requests without compromising operational reliability. In Order No. 2023, FERC adopted a suite of reforms to improve queue efficiency, but these reforms do not remove the need for structured planning and reliability analysis; instead, they reorganize how studies are conducted (for example, shifting from serial to cluster study processes to better assess reliability and cost impacts across multiple projects) [5]. FERC’s reforms reflect a recognition that interconnection studies must still ensure that new generators or loads do not create overloads, voltage stability issues, or adverse interactions on the bulk power system. The cluster study mechanism and increased readiness requirements, which include enhanced demonstration of site control and financial readiness, are intended to reduce speculative entries and focus reliability analysis on projects that are likely to be built [6].
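The motivation for moving from serial to cluster studies can be illustrated with a toy model. The assumption that each serial entrant triggers a restudy of every project ahead of it is a deliberate simplification for illustration, not a description of any specific ISO's actual procedure.

```python
# Toy comparison of serial vs. cluster study workloads, illustrating
# why batching requests reduces total study effort. Simplifying
# assumption: under serial processing, each new entrant forces a
# restudy of every project already in the queue.

def serial_studies(n_projects: int) -> int:
    """Each arrival is studied once, plus restudies of those already queued:
    1 + 2 + ... + n studies in total."""
    return sum(k for k in range(1, n_projects + 1))

def cluster_studies(n_projects: int, cluster_size: int) -> int:
    """Projects are batched; one joint study per full or partial cluster
    (ceiling division)."""
    return -(-n_projects // cluster_size)

n = 200
print(f"Serial (with restudies): {serial_studies(n)} studies")
print(f"Clustered (size 50):     {cluster_studies(n, 50)} studies")
```

Even in this crude model, serial processing scales quadratically with queue length while clustering scales linearly with the number of batches, which is why batching is central to reducing backlog without abandoning reliability analysis.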
Moreover, the sheer scale of queued capacity influences reliability considerations. According to S&P Global Market Intelligence, as of 2024 the U.S. had over 2,200 gigawatts (GW) of interconnection requests, around 1.7 times the size of the country’s existing grid‑scale generation fleet, with renewables accounting for over 94% of that backlog [7]. The rapid influx of variable renewable resources, which have different operating characteristics than traditional dispatchable plants, requires careful engineering analysis to ensure stable operation, particularly under contingency conditions such as high demand or equipment failures.
In practice, the reliability perspective asserts that some procedural friction is necessary. Without adequate study windows, cost allocation processes, and system impact analysis, the risk of unanticipated grid stress, such as equipment overloads or voltage issues, increases, with potential consequences for outages or failure to meet demand. FERC’s ongoing authority and annual compliance reviews aim to balance speed with reliability, ensuring that reform efforts reduce unnecessary delays while preserving the grid’s ability to operate securely as it continues to evolve [5][6].
The interconnection queue crisis reveals a structural constraint on AI deployment that is often overlooked. As data center power requirements grow, access to timely and reliable grid connections increasingly determines which projects move forward and which stall. Interconnection processes, once administrative, now function as economic filters shaped by long build timelines, capital intensity, and regulatory limits.
This mismatch between fast-moving digital investment and slow-moving physical infrastructure places a premium on scale, balance-sheet strength, and risk tolerance. While interconnection procedures are essential for maintaining grid reliability, their current form has become a binding constraint on AI infrastructure expansion. Until grid planning, transmission development, and interconnection processes better align with the pace of load growth, power access, not compute capability, will remain a decisive factor in AI deployment.
Queued Up: 2025 Edition, Characteristics of Power Plants Seeking Transmission Interconnection as of the End of 2024 | Lawrence Berkeley National Laboratory, U.S. Department of Energy (2025)
https://emp.lbl.gov/sites/default/files/2024-04/Queued%20Up%202024%20Edition%20-%20Webinar%20Version.pdf
FERC State of the Markets Report 2023 | Federal Energy Regulatory Commission (2023)
https://www.ferc.gov/sites/default/files/2024-03/24_State-of-the-market_0320_1715.pdf
The next big shifts in AI workloads and hyperscaler strategies | McKinsey & Company (2024)
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-next-big-shifts-in-ai-workloads-and-hyperscaler-strategies
Tackling High Costs and Long Delays for Clean Energy Interconnection | U.S. Department of Energy (2025)
https://www.energy.gov/eere/i2x/articles/tackling-high-costs-and-long-delays-clean-energy-interconnection
Order No. 2023: Improvements to Generator Interconnection Procedures and Agreements | Federal Energy Regulatory Commission (2023)
https://www.ferc.gov/explainer-interconnection-final-rule
Explainer on the Interconnection Rule and Cluster Study Process | Federal Energy Regulatory Commission (2023)
https://www.ferc.gov/news-events/news/ferc-proposes-interconnection-reforms-address-queue-backlogs
Interconnection Queues Show Swelling Volume but FERC Reforms Slowly Taking Hold | S&P Global Market Intelligence (2024)
https://www.spglobal.com/market-intelligence/en/news-insights/research/interconnection-queues-show-swelling-volume-but-ferc-reforms-slowly-taking-hold