Data centres draw power like industrial facilities. Grids and governments are only now catching up to what that means.
When a new data centre applies for a grid connection, the utility does not see a website. It does not see a streaming service or an AI assistant. It sees a load curve — megawatts, continuous, around the clock, every day of the year. The International Energy Agency has described this directly: a large data centre can draw the same power as an electric arc furnace steel mill.
The steel mill analogy is not a provocation. It is a description. And it matters because steel mills take fifteen to twenty years to site, permit, and connect — during which grid operators plan transmission upgrades, build substations, and model demand. Hyperscale data centres are arriving in three to five years. The grid is absorbing an industrial-scale load on a technology-industry timeline.
For fifteen years, one of the quieter achievements in global energy was the flat electricity demand of advanced economies. From 2009 to 2023, the United States, Australia, the European Union, and most of the developed world grew their economies while barely growing their electricity use. LED lighting replaced incandescent bulbs. Appliances improved. Heavy industry restructured. The story was: we decoupled economic growth from energy growth.
That period is over. Global electricity demand grew 4.3 per cent in 2024 — the largest absolute increase ever recorded outside of recession-recovery years, according to the IEA. Advanced economies, after fifteen years of near-flat consumption, are growing again. Three structural forces broke the trend simultaneously: electric vehicles, heat pumps replacing gas appliances, and AI-driven compute demand from data centres.
The efficiency story inside data centres is real and deserves honest acknowledgement. A modern server performs roughly ten times the computation of one from 2005 for the same energy draw. Average Power Usage Effectiveness across the industry has fallen from around 2.5 in 2007 to approximately 1.5 today. These gains are documented and verifiable. We covered them in Issue 004.
Those gains were absorbed by demand growth. The IEA projects data centre electricity consumption will rise from 460 terawatt-hours in 2022 to around 945 terawatt-hours by 2030 — doubling in eight years. AI workloads have reset the density baseline in ways that efficiency metrics did not anticipate: a GPU rack running an AI training job draws 60 to 100 kilowatts. A standard server rack five years ago drew five to fifteen. The efficiency gains of a decade are being partially unwound at the rack level by the workload shift of the last three years. The industry answers the question of energy per unit of compute. The grid must answer a different question: how much total energy, from where, on what timeline.
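The scale of that projection is easier to feel as a growth rate. A minimal sketch in Python, using only the figures quoted above, of the compound annual growth the IEA numbers imply, alongside the rack-level shift:

```python
# Implied compound annual growth rate (CAGR) of data centre
# electricity demand, from the IEA figures quoted in the text.
start_twh, end_twh, years = 460, 945, 8  # 2022 -> 2030

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 9.4% per year

# The rack-level shift is far steeper: a 60-100 kW AI training rack
# against the 5-15 kW standard rack of five years ago.
ai_rack_kw, legacy_rack_kw = (60, 100), (5, 15)
low_ratio = ai_rack_kw[0] / legacy_rack_kw[1]   # best case: 4x
high_ratio = ai_rack_kw[1] / legacy_rack_kw[0]  # worst case: 20x
print(f"Per-rack draw is {low_ratio:.0f}x to {high_ratio:.0f}x higher")
```

A steady nine-plus per cent a year is the arithmetic behind "doubling in eight years" — modest as a rate, enormous as absolute terawatt-hours.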
The IEA’s steel mill framing is useful precisely because it locates data centres accurately in the industrial energy landscape — not as exceptional or uniquely alarming, but as a class of heavy industrial load that grid operators have a specific vocabulary for, and a specific planning process around. A traditional colocation facility draws around 9 megawatts — the continuous load profile of a large regional hospital. A hyperscale cloud campus draws 47 megawatts or more — in the range of an electric arc furnace steel mill. A purpose-built AI training cluster draws 100 to 150 megawatts — approaching the consumption of a semiconductor fabrication plant. TSMC’s first phase in Arizona draws approximately 200 megawatts.
The cooling system changes how much of that draw is productive. Cooling is overhead: the grid sees the full facility load, but only the share that reaches the IT equipment does computational work. Air cooling carries the largest overhead; liquid cooling shrinks it; immersion cooling shrinks it further. The grid draw does not fall as the cooling approach improves — the share of it doing useful work rises.
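Power Usage Effectiveness makes that relationship concrete: it is the ratio of total facility power to power reaching the IT equipment, so the productive share of the grid draw is simply 1/PUE. A minimal sketch in Python using the hyperscale figure from this piece; the per-approach PUE values are illustrative assumptions, not measured data:

```python
def it_load_mw(facility_mw: float, pue: float) -> float:
    """Power actually reaching IT equipment: facility draw / PUE."""
    return facility_mw / pue

# Illustrative PUE assumptions per cooling approach (not measured
# figures): air near the industry average, liquid and immersion lower.
cooling_pue = {"air": 1.5, "liquid": 1.2, "immersion": 1.05}

hyperscale_mw = 47  # the hyperscale campus figure used in the text
for approach, pue in cooling_pue.items():
    productive = it_load_mw(hyperscale_mw, pue)
    print(f"{approach:>9}: grid sees {hyperscale_mw} MW, "
          f"IT gets {productive:.1f} MW ({1 / pue:.0%} productive)")
```

Under these assumptions, the same 47-megawatt grid connection delivers roughly 31 megawatts of compute with air cooling and nearly 45 with immersion — which is why Singapore's DC-CFA2 treats a PUE ceiling as a grid-capacity instrument, not just an efficiency target.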
The crucial difference between data centres and the industries they resemble in power draw is time. An electric arc furnace or a chip fabrication plant takes fifteen to twenty years from planning to connection, during which grid operators plan accordingly. Singapore’s transmission network cannot currently handle the 400 kV connections that facilities above 100 megawatts require — upgrading takes five years and approximately two billion dollars for fifty kilometres of underground cable. Data centres at hyperscale are arriving faster than that. The load is industrial. The timeline is not.
The regulatory responses across the region range from sophisticated to absent. The differences are not primarily technical. They are choices about whether grid capacity is treated as a controlled resource or an assumed right.
Singapore has moved further and faster than any other government in the region. After imposing a full moratorium in 2019 — when the sector already consumed seven per cent of national electricity — it resumed development with hard conditions. In December 2025, Singapore launched DC-CFA2: a competitive application process requiring operators to achieve a PUE of 1.25 at full load, the most stringent target in Asia-Pacific, and to source at least fifty per cent of power from actual energy supply, not financial certificates. Traditional renewable energy certificates that do not represent real power flow do not qualify. Grid access is an allocation, not an entitlement. The scarcity is the instrument of enforcement.
Australia has moved partially. Data centres serving federal government must meet a NABERS five-star energy rating — approximately PUE 1.4. This applies to government procurement, not the sector. New South Wales published a consultation paper in March 2026. The sector’s grid share is expected to triple by 2030. The framework governing that growth remains a consultation paper. Australia is presently choosing between two available stories: Singapore’s, or Ireland’s.
Ireland reached twenty-two per cent of national electricity from data centres before a framework arrived. The Commission for Regulation of Utilities now requires grid impact assessments for new applications. The Climate Action Plan 2024 introduced sector-specific provisions. The instruments are the right ones. They arrived after the problem had compounded for a decade.
Malaysia’s data centre market is the fastest-growing in Asia-Pacific, absorbing significant overflow from Singapore. It has no binding energy efficiency standards, no mandatory PUE threshold, and no green energy sourcing requirement. When Johor deferred new evaporative cooling connections in 2025, it was an emergency measure — not a planned policy response. The energy equivalent of that instrument does not exist.
India has no data centre-specific energy regulations. Investment is flowing at scale — 604 megawatts of capacity additions projected by 2026 — against a grid that remains predominantly thermal and subject to reliability gaps. Andhra Pradesh demonstrated with the Vizag approval, as described in Issue 005, that government can structurally separate facility demand from domestic supply when it chooses to. That instrument has not been adopted as standard practice.
The renewable energy commitments of the major hyperscalers are substantial. AWS is the largest corporate purchaser of renewable energy globally. Microsoft, Google, and Meta have made commitments at the scale of national utilities. The limitation — established in Issue 004 — is additionality. Matching consumption with certificates does not always mean new capacity was built, or that the specific substation serving a specific facility is drawing from clean sources. Singapore’s DC-CFA2 closes this gap by requiring actual supply rather than financial offset. At the local grid level, the distinction matters even when the global figures are clean.
The more significant development is nuclear. Microsoft has signed a twenty-year agreement to restart Three Mile Island — 835 megawatts from an existing reactor, targeting 2028. This is the only near-term credible deal in the group because it involves an existing facility with known engineering. Google has a 500-megawatt agreement with Kairos Power for small modular reactors, with the first unit targeting 2030. Amazon has invested $500 million in X-energy and committed to purchasing 320 megawatts via Energy Northwest in the early 2030s. Meta has secured power from an existing Illinois nuclear plant and issued a request for proposals for between one and four gigawatts of new nuclear generation. Deloitte estimates nuclear could cover approximately ten per cent of data centre electricity demand by 2035.
The honest caveat is that no small modular reactor currently operates commercially in the United States or Europe. Every SMR deal is a contract for electricity that does not yet exist. Microsoft’s Three Mile Island is the exception — an existing large reactor, not a new technology — and it remains two years away. In the interim, gas is filling the gap. Meta’s AI training campus in Ohio is partly served by a new 200-megawatt gas-fired plant. The commitment to nuclear is genuine and strategic. Much of the capacity behind it is a decade away. What bridges that gap is fossil fuel, and the bridge deserves to be named.
PJM — the grid operator serving Northern Virginia, home to the world’s largest concentration of data centre capacity — saw its capacity auction prices rise by 833 per cent in a single year. Those charges are not levied only on data centres. They are distributed across all ratepayers connected to the grid: households, manufacturers, hospitals, schools. The infrastructure cost of serving hyperscale AI demand is being socialised. The facilities that drive the demand do not bear the full cost of the grid investment required to serve them. The remaining cost lands on electricity bills across the system.
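The 833 per cent figure is just a price ratio, and the arithmetic shows how steep it is. A minimal sketch in Python; the dollar figures are the publicly reported PJM capacity auction clearing prices consistent with that percentage, included here for illustration only:

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old price to new price."""
    return (new - old) / old * 100

# Clearing prices ($/MW-day) from public reporting of PJM's
# 2025/26 capacity auction -- illustrative figures consistent
# with the 833 per cent increase cited in the text.
old_price, new_price = 28.92, 269.92
print(f"{pct_increase(old_price, new_price):.0f}% in a single year")

# A flat 100 MW load incurs the capacity price every day of the
# year; the resulting revenue requirement is recovered across all
# ratepayers on the system, not from the facility alone.
annual_cost = 100 * new_price * 365  # $/year for a 100 MW load
```

The point of the second calculation is the mechanism: capacity costs scale with the price and the load, but the recovery is spread across every bill on the system.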
The instrument that would address this already exists, demonstrated in Vizag: structural separation of facility demand from domestic supply networks, built into the approval condition rather than managed after connection. Singapore’s capacity allocation model achieves a version of this at the city-state level — making grid capacity a controlled resource rather than an assumed right, and embedding the cost of connection within the terms of approval rather than distributing it across the system.
Not everyone who has looked at this problem has landed on tighter pre-approval control as the primary instrument. A credible alternative case runs like this: the grid conflict is fundamentally a supply-side problem, not an approval problem. Faster interconnection reform — the US Federal Energy Regulatory Commission’s Order 2023 is the most significant attempt to address queue backlogs in two decades, shifting from a serial to a cluster-based process — can bring new generation online faster than licensing restrictions can moderate demand. Data centres are technically interruptible loads: large facilities can, in principle, reduce consumption during peak events under the right contractual structure, functioning as a grid management tool rather than a grid burden. And transmission buildout, particularly high-voltage direct current links, can move stranded renewable generation to where the demand is, rather than constraining the demand to where the grid currently has headroom.
These are not hypothetical positions. They represent a legitimate reading of the same evidence. The distinction is whether you believe the binding constraint is governance and cost recovery — in which case Singapore’s model is instructive — or infrastructure and market design — in which case supply-side expansion is the faster path. The two approaches are not mutually exclusive. The honest answer is that most grids will need both, and the sequencing will depend on local market structure in ways that no single framework fully anticipates.
Every APAC government still working on its framework is choosing whether to arrive at that instrument before its grid is under pressure or after. Ireland arrived after. Virginia’s ratepayers arrived after. Singapore arrived before — because the moratorium gave it the leverage to set terms, and it used that leverage deliberately. The framework is not complicated. The political will to treat grid capacity as finite is.