The AI buildout is creating two completely different industries. Most policy, investment, and media coverage still treats them as one.
There is a number that stops most people when they hear it for the first time.
A traditional server rack — the floor-to-ceiling cabinet that holds computing hardware, the kind that has powered the internet for the last two decades, holding your email, your bank’s overnight statements, your company’s files — draws around 5 to 10 kilowatts of power. Roughly what five to ten suburban homes consume at any given moment.
A single rack built for AI today draws 120 to 140 kilowatts.
Not five times more. Not ten times more. Somewhere between twelve and twenty-eight times more power, from a cabinet the same physical size as the one next to it.
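The bounds are simple division across the two ranges: the low multiple pairs the lightest AI rack with the densest traditional one, and the high multiple pairs the heaviest with the lightest.

```latex
\frac{120\ \text{kW}}{10\ \text{kW}} = 12\times
\qquad
\frac{140\ \text{kW}}{5\ \text{kW}} = 28\times
```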
That number is not a footnote. It is the entire story. Because once you understand what produces that gap, and what it demands from the building around it, you understand why the term “data centre” has quietly become two different things, and why almost every policy, investment thesis, and planning framework that treats them as one is already working with the wrong map.
In Issue 001 — What Is a Data Centre? — we explained the core physics: every watt of compute produces a watt of heat, and the engineering challenge of a data centre is moving that heat out as fast as electricity comes in. The cold-aisle and hot-aisle system that emerged from that challenge — cold air pushed in from one side, pulled hot from the other, air conditioning units lining the walls — works reliably up to moderate densities.
It does not work at 120 kilowatts per rack.
At that density, air cannot move heat away fast enough. The physics fails before the hardware does. AI data centres are engineered around liquid cooling from the ground up — not as an upgrade for a few high-density rows, but as the only method that works at this scale. Pipes carry chilled fluid directly to the chip, removing heat a thousand times more efficiently than air. In the most advanced facilities, servers are submerged in tanks of non-conductive fluid. The building that results looks nothing like a traditional data centre. It has different plumbing, different power infrastructure, a fundamentally different relationship with the electrical grid.
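The advantage of liquid is visible in one textbook relation: the heat a coolant stream can carry away scales with its mass flow, its specific heat, and the temperature rise it is allowed. A minimal sketch using standard room-temperature property values (textbook figures, not numbers from this piece):

```latex
\dot{Q} = \dot{m}\,c_p\,\Delta T,
\qquad
\frac{(\rho c_p)_{\text{water}}}{(\rho c_p)_{\text{air}}}
\approx \frac{998 \times 4186}{1.2 \times 1005}
\approx 3{,}500
```

Per unit volume and per degree of temperature rise, water carries a few thousand times more heat than air, which is why “a thousand times more efficiently” is the right order of magnitude before a single pump or pipe run is counted.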
To understand why AI needs this much more power, consider what happens when you type a question into Claude or ChatGPT.
In Issue 002, we followed a WhatsApp message from Mumbai to Singapore — 170 milliseconds, six physical hops, traditional infrastructure retrieving indexed data at every step. That is what traditional data centres are built for: predictable requests, pre-existing answers, steady and manageable compute.
When you ask Claude or ChatGPT a question, the model runs your words through tens of billions of mathematical operations — every single time, for every single query, generating the answer from scratch. That process requires roughly ten times more energy than a standard search query. Now multiply that by hundreds of millions of queries a day, across every AI product running on the planet. That is why a single AI data centre can draw as much electricity as a mid-sized city.
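A back-of-envelope makes the multiplication concrete. The per-query figures below are commonly cited estimates and the daily volume is an illustrative assumption, not numbers from this piece: roughly 3 watt-hours per AI query, about ten times the ~0.3 Wh often quoted for a standard search, across 300 million queries a day for one popular product.

```latex
3\ \text{Wh} \times 3\times10^{8}\ \text{queries/day}
= 900\ \text{MWh/day}
\approx 37.5\ \text{MW continuous}
```

And that is inference for a single product, before any training runs are counted.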
The result is what Google’s infrastructure engineering team recently described as a bimodal environment — traditional compute on a gradual growth curve, AI systems on a much steeper one. Both must coexist in the same region simultaneously. But they are not the same infrastructure, and they do not have the same requirements.
The bifurcation is not theoretical. It is visible right now across the three markets Arc Brief covers most closely.
Australia is where the scale of what is happening is hardest to ignore. Microsoft has committed A$25 billion through the end of 2029 to expand its Azure AI supercomputing and cloud infrastructure across Australia, its largest investment in the country in four decades. AWS announced A$20 billion between 2025 and 2029, framed explicitly around AI capability and supercomputing infrastructure. In December 2025, NEXTDC signed a Memorandum of Understanding (a non-binding planning agreement) with OpenAI to develop a 550 megawatt AI campus at Eastern Creek in western Sydney, with OpenAI as the anchor tenant. Three announcements, all within eighteen months, all explicitly framed around AI infrastructure: not general cloud, not enterprise storage, not conventional compute.
Running alongside these is AirTrunk, founded in Australia with campuses across Sydney and Melbourne, which expanded through Singapore, Japan, Malaysia, and Hong Kong before recently entering India through the acquisition of Lumina CloudInfra, gaining around 600 megawatts of planned capacity across Mumbai, Chennai, and Hyderabad. An Australian operator following AI-grade demand across APAC, now moving into India’s fastest-growing cities.
In Issue 003, we examined the gap between what governments promise when they win the data centre race and what actually arrives — the jobs figures, the sovereignty claims, the GDP projections. That gap becomes harder to measure, and easier to widen, when the type of facility being approved is never clearly defined. A planning approval written for a 30 megawatt colocation site — a facility where multiple companies rent shared server space — is the wrong instrument for a 550 megawatt AI campus with liquid cooling infrastructure built into the foundations.
India is the volume story, but the layers matter. The country is on track to add 850 megawatts of new data centre capacity between 2024 and 2026, outpacing every APAC market outside China. The majority of that pipeline is traditional enterprise colocation in Mumbai, Hyderabad, and Chennai, serving banks, government agencies, and large corporates bringing workloads onshore under tightening data localisation rules.
The AI-native layer sits above it: smaller in megawatt terms right now, higher-stakes, and hungry for something India’s grid currently struggles to guarantee at scale — reliable, always-on electricity in blocks of hundreds of megawatts. Two distinct markets, one headline number. Most coverage does not make the distinction.
Japan has made the most deliberate choice. Tokyo and Osaka continue to grow as enterprise and cloud hubs, driven by AI adoption in the manufacturing and automotive sectors. At the same time, Japan’s AI Promotion Act, passed in May 2025 as the country’s first legislation expressly directed at AI, channels investment and development support specifically toward AI-grade compute infrastructure, separate from the broader data centre sector. Japan is not waiting for the market to bifurcate on its own. It is building two parallel tracks by design.
For most of the last decade, data centre policy was written as if one size fit all. That is changing, and the pace is accelerating.
The clearest example is the United States. In July 2025, the Trump administration issued an executive order specifically targeting data centres requiring more than 100 megawatts of power: streamlining permitting, opening federal land, and directing financial support to qualifying projects. The 100 megawatt threshold is not arbitrary. It is the line that separates AI-native infrastructure from the rest. The US did not just acknowledge the difference; it wrote the difference into federal policy, with a specific number.
Malaysia moved differently but just as decisively. The government has imposed an informal pause on new non-AI data centre approvals, citing pressure on energy and water supplies. Prime Minister Anwar Ibrahim has been explicit: resources will be directed toward AI infrastructure that delivers sovereign capability, meaning compute capacity owned and operated within Malaysia’s borders, not generic builds.
The European Union is working through a formal classification. The European Commission has proposed exempting data centres and AI gigafactories from mandatory environmental impact assessments — a move that explicitly treats AI infrastructure as a separate strategic category. The EU’s AI Continent Action Plan establishes “AI Factories” and “AI Gigafactories” as distinct infrastructure designations with their own funding and governance frameworks. It is contested — environmental groups have raised concerns — but the direction of classification is clear.
Japan’s AI Promotion Act, covered above, completes the set. The policy frameworks are beginning to match the infrastructure reality.
The direction of travel is consistent across jurisdictions that otherwise agree on very little: AI data centres are a different policy problem, and the frameworks governing them need to reflect that. This is the question Issue 009 will take up directly — what APAC governments are actually legislating, what the gaps are, and what it is costing the markets that have not moved yet.
The bifurcation is not difficult to identify once you know what to look for. Any data centre announcement — a new campus, a government approval, an investment headline — can be sorted in seconds with four questions.
What is the rack density target? Anything above 30 kilowatts per rack is moving toward AI territory. Above 100 kilowatts, the facility was designed from the ground up for AI workloads.
What is the cooling method? Air cooling as the primary system signals a traditional build. A facility engineered around liquid cooling throughout — pipes to chips, not air to rooms — signals AI-native infrastructure.
Who is the anchor tenant? Enterprise IT departments and colocation customers are a different demand profile from hyperscale operators — the large technology platforms running AI model training and inference at scale.
What is the power commitment in megawatts? Below 50 megawatts sits in conventional territory. Above 100 megawatts, you are in the category the US government now treats as a separate infrastructure class with separate rules.
These four questions do not require a technical background. They require only that you ask them — before the planning application is approved, before the investment is made, before the jobs promise is accepted at face value.
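For readers who want the rubric in executable form, here is a minimal sketch of the four questions as a scoring function. It is hypothetical: the thresholds are the ones named above, while the weights and category labels are illustrative choices, nothing more.

```python
# A sketch of the four-question test. The thresholds (30 kW, 100 kW,
# 50 MW, 100 MW) come from the questions above; the scoring weights
# and category names are illustrative, not an industry standard.

def classify(rack_density_kw: float, liquid_cooled: bool,
             hyperscale_anchor: bool, power_mw: float) -> str:
    """Sort a data centre announcement into one of three buckets."""
    signals = 0
    if rack_density_kw > 100:        # designed ground-up for AI workloads
        signals += 2
    elif rack_density_kw > 30:       # moving toward AI territory
        signals += 1
    if liquid_cooled:                # pipes to chips, not air to rooms
        signals += 1
    if hyperscale_anchor:            # AI training/inference demand profile
        signals += 1
    if power_mw > 100:               # the US executive order's threshold
        signals += 1
    elif power_mw < 50:              # conventional territory
        signals -= 1

    if signals >= 3:
        return "AI-native"
    if signals >= 1:
        return "transitional"
    return "traditional"

# Eastern Creek, scored on the public numbers in this issue:
print(classify(rack_density_kw=120, liquid_cooled=True,
               hyperscale_anchor=True, power_mw=550))  # -> AI-native
```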
The data centre industry bifurcated quietly, over the course of about three years, as AI chips — specialised processors known as GPUs, designed for the parallel calculations that AI requires — scaled from research tools to the dominant driver of new infrastructure investment globally. The investment community noticed first. Then the operators. Now, gradually, the governments.
The planning frameworks, the grid investment plans, the community engagement processes — they are largely still catching up.
APAC is building at speed. The capital is moving faster than the grids can be upgraded and faster than the planning frameworks can be rewritten. Decisions made with the wrong framework — approvals modelled on traditional data centre job numbers, grid connections sized for enterprise colocation, community promises based on a different class of building — get locked in. They are expensive to undo.
The critics we examined in Issue 004 were largely responding to the traditional version of this industry — its energy claims, its water use, its jobs promises. The AI-native version presents a different set of tradeoffs. The balance sheet looks different when the building is different.
The question is not whether the two categories are real. They are, and they have been for some time. The question is whether the decisions being made today are being made with the right map or the wrong one.
Issue 009 will look at the policy race in detail — which APAC governments are moving, which are stalling, and what the gap is costing.
Enjoying this? Arc Brief in your inbox every Friday.
APAC tech, read once a week.
No ads, no noise.