The term 'data centre' covers three ownership models, three cooling approaches, and now two entirely new construction formats. Here is what each actually means.
Mark Zuckerberg did not want to spend four years building concrete.
So his infrastructure team built fabric tents instead. Long rectangular structures, puncture-resistant and weatherproof, stretched over aluminium frames with a distinctive mushroom-pitched roof. Inside: GPU clusters — the specialised processors that power AI — liquid cooling pipes, network infrastructure. By mid-2025, five of these structures at Meta’s Prometheus campus in New Albany, Ohio — each around 125,000 square feet — had been permitted, constructed, and confirmed operational from satellite imagery. What had taken Meta two to three years to build in concrete the first time around had been done in months.
Almost a year later, Amazon’s engineers arrived at a different answer to the same question. Rather than changing the building, they changed what arrives inside it. Prefabricated data hall sections — known internally as skids — are assembled in factories in Topeka, Houston, and Salt Lake City. Each unit is around 45 feet long. It arrives at a construction site with server racks, power distribution, cabling, lighting, and fire suppression already installed. Drop it in. Connect it. Servers can be running in two to three weeks. The same process, built the traditional way on-site, takes fifteen.
The two approaches are different. The diagnosis is identical. And neither of them would make sense without first understanding what a data centre actually is — which turns out to be three different things, depending on who built it, who owns it, and what runs inside.
When a government announces a new data centre, or an investor backs one, or a community opposes one, they are often talking about completely different things without knowing it. There are three distinct ownership and operating models in the industry today, and the jobs, the tax revenue, the construction timeline, and the community impact look different under each.
Enterprise-owned means the company builds, owns, and operates the facility entirely for its own workloads. Full control over design, security, and capacity decisions. Full capital exposure — no landlord, no lease, no exit. When Meta builds Prometheus in New Albany, it owns the site, pays the construction bills, employs the engineers, and runs every server itself. The upside is sovereignty. The downside is that the capital commitment is enormous and the risk sits entirely on the operator’s balance sheet.
Build-to-suit means a developer constructs a facility to a specific customer’s exact requirements and continues to operate the physical infrastructure — power, cooling, security — on an ongoing basis. There are two distinct layers here: the developer holds the facility layer, while the hyperscaler tenant holds the compute layer and operates its own servers within it. The hyperscaler gets infrastructure it effectively controls without the capital on its own books. AirTrunk and Yondr both describe themselves explicitly as developers and operators. AirTrunk’s campuses across Australia, Singapore, Japan, Malaysia, and now India operate on this model.
Colocation is the model most enterprise customers in APAC actually use. You rent space — a cage, a suite, a row of racks — inside someone else’s building. The colocation operator provides the power, the cooling, the physical security, and the network interconnection. You bring your own servers. NEXTDC, Equinix, and ST Telemedia run colocation businesses across Australia, Singapore, India, and Japan. It is the dominant model for banks, government agencies, retailers, and the thousands of companies that need data centre infrastructure but do not want to build or own it.
In Issue 003, we traced the gap between what governments promise when they attract a data centre and what actually arrives — the jobs figures, the sovereignty claims, the GDP projections. Part of that gap starts here. A build-to-suit for a hyperscaler and an enterprise-owned campus generate different numbers of local jobs, different tax structures, different supply chain effects, and different community relationships. They are frequently announced in the same language and assessed against the same expectations.
Every data centre, regardless of who owns it or how it was built, faces the same physics problem we described in Issue 001: every watt of computing power produces a watt of heat, and that heat has to go somewhere or the hardware fails. The industry has developed three approaches, each suited to different density levels.
Air cooling remains the dominant technology across the global installed base. Cold air is pushed through rows of servers, absorbs heat, and is returned to cooling units that reject it to the atmosphere. The engineering is well understood, the equipment is widely available, and it works reliably at moderate rack densities. The majority of enterprise data centres running today — holding payroll systems, email servers, and corporate databases — are air-cooled.
Liquid cooling has moved from niche to mainstream in the last two years, driven directly by AI workload density. Instead of circulating air through a room, chilled fluid runs through pipes directly to the chip — or to a cold plate mounted on the processor. It removes heat far more efficiently than air and allows rack densities that air cannot support. Most new AI data centre construction specifies liquid cooling from the design stage. It cannot be retrofitted cleanly into air-cooled facilities, which is one reason you cannot tell from the outside whether a facility was designed for AI workloads.
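A rough worked figure makes that efficiency gap concrete. The numbers below are generic textbook assumptions rather than measurements from any facility mentioned here: one megawatt of IT load becomes one megawatt of heat, and the sensible-heat relation gives the airflow required to carry it away at a typical supply-to-return temperature rise.

```latex
% Airflow needed to remove Q of heat at a supply-to-return temperature rise of \Delta T.
% Illustrative assumptions only: Q = 1 MW of IT load, c_p(air) ≈ 1.005 kJ/(kg·K),
% \Delta T ≈ 12 K, air density ≈ 1.2 kg/m^3.
\[
\dot{m} \;=\; \frac{Q}{c_p\,\Delta T}
        \;=\; \frac{1000\ \mathrm{kW}}{1.005\ \mathrm{kJ/(kg\,K)} \times 12\ \mathrm{K}}
        \;\approx\; 83\ \mathrm{kg/s}
        \;\approx\; 69\ \mathrm{m^3/s}\ \text{of air, per megawatt}.
\]
```

Water holds roughly four times as much heat per kilogram as air and is around eight hundred times denser, which is why piping fluid to a cold plate keeps working at rack densities where moving air stops being practical.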
Immersion cooling is the frontier. Servers are submerged entirely in tanks of non-conductive dielectric fluid. There are no fans, no raised floors, no complex air management — just compute and coolant. Cooling overhead falls sharply, achievable density rises sharply, and noise drops to near zero. The highest-density AI training clusters — the facilities at the very edge of what is physically possible today — are moving toward full or partial immersion.
The cooling method determines more than temperature. It shapes water consumption, site selection, structural requirements, and the building’s entire relationship with local infrastructure. Two campuses with identical megawatt commitments but different cooling approaches will have fundamentally different footprints in a community.
For most of the industry’s history, the question of how quickly a data centre could be built was an engineering consideration. Over the past two years it has become a competitive emergency.
The AI arms race has compressed acceptable lead times from years to months. Every week that racks sit uninstalled in an incomplete data hall is a week of compute capacity not generating revenue, not training models, not serving the customers who signed contracts expecting capacity to be available. Hyperscalers are committing to build at a pace that traditional construction methods cannot support.
Securing land, connecting to the grid, pulling permits, and pouring the concrete shell of a data centre take years — and that timeline has not compressed meaningfully. What has compressed is the expectation of what happens after the shell is ready. Fitting out the interior — running conduit, installing racks, pulling cable, wiring everything on-site — can take another fifteen weeks before a single server is installed. In a market moving at the speed of the current AI buildout, fifteen weeks is a long time.
Both Amazon and Meta arrived at the same diagnosis independently, twelve months apart. Their solutions are different.
Meta’s rapid deployment structures came first. By mid-2025, five fabric buildings at Prometheus had been built and confirmed operational — GPU clusters and liquid cooling systems housed inside weatherproof aluminium-framed structures that from the outside look like very large and very permanent tents. Construction monitoring suggests these structures can be operational in four to seven months, compared to fourteen to twenty months for a traditional build. Zuckerberg described the reasoning directly: he did not want his infrastructure team spending four years building concrete. The trade-off is explicit — the rapid deployment structures carry no diesel backup generation, reducing redundancy in exchange for speed. It is a calculated bet that getting compute online fast matters more, for this class of workload, than traditional availability guarantees.
Amazon’s Project Houdini surfaced in April 2026 through internal documents reviewed by Business Insider. The approach is different: rather than changing the building, change what arrives inside it. Prefabricated skids — each around 45 feet long, assembled in factories across Topeka, Houston, and Salt Lake City — arrive at construction sites with racks, power distribution, cabling, lighting, and fire suppression pre-installed. Server installation drops from fifteen weeks to two to three weeks per facility, eliminating up to 50,000 hours of on-site electrical work per site. Amazon confirmed to Business Insider that it is innovating in data centre construction to deliver AI infrastructure faster and at lower cost. Houdini does not solve the land, permitting, or grid connection problem. It attacks a specific window: the gap between when a building shell is ready and when compute can generate revenue.
The modular skid and the fabric tent were both born in the American Midwest. The question for APAC is whether the same approaches translate — and the answer, from Johor, is that they already are.
BrightRay completed its MY-01 data centre at Sedenak Tech Park in Johor in eight months using its Full Prefab Modular Building Data Center Solution — 90% of components manufactured at a factory, shipped to Malaysia, and assembled on-site like a precision kit. The facility is three storeys and fully operational. BrightRay’s executive vice president has said a six-month timeframe for a 30 to 60 megawatt facility is achievable using the same method — among the fastest delivery timelines for a facility of that scale anywhere in the world. At the same park, Yondr Group handed over the first 25 megawatt phase of its 98 megawatt Johor AI campus six months ahead of schedule in June 2025 — an AI and machine learning facility with direct-to-chip liquid cooling, delivered fully fitted and rack-ready.
Industry data shows that highly modularised data centre projects now achieve schedule reductions of 30 to 50 percent compared to conventional builds. A delivery timeline that once ran 24 to 36 months now commonly falls between 16 and 20 months when modular strategies are applied consistently.
The scale of capital arriving in APAC makes the construction question urgent. Microsoft has committed A$25 billion in Australia and $10 billion in Japan. On April 28, Google broke ground on its $15 billion AI hub in Visakhapatnam in India — three gigawatt-scale campuses developed with AdaniConneX and Nxtra by Airtel on India’s eastern coast. Each of these commitments was made against a specific timeline. Each depends on construction keeping pace with that timeline.
The constraint that has not compressed — in APAC as everywhere — is power. Grid connections, utility approvals, and permitting processes have not followed the same curve as construction methods. In parts of Southeast Asia, permitting and utility approvals can add six to twelve months to overall timelines. The building can be ready in months. The electricity connection may take years. The operators moving fastest in the region — AirTrunk now in India, NEXTDC’s 550 megawatt campus in western Sydney, Malaysia redirecting its entire approval pipeline toward AI infrastructure — are all working against the same asymmetry: construction is getting faster, but the grid is not.
For most of the last two decades, a data centre was a building that held servers. The building itself was not particularly interesting — it was infrastructure, background, prerequisite.
That has changed. The ownership model determines who holds the risk, who captures the benefit, and who bears the cost when something goes wrong. The cooling method shapes the relationship with water, with the local grid, and with the communities that share both. And now the construction format — factory-built skid or fabric tent or concrete shell — determines whether a campus can be live in weeks or years.
The critics examined in Issue 004 and the case made against them were both largely about one version of this industry — traditional builds, conventional ownership, air-cooled assumptions. The three ownership models, the three cooling approaches, and the two new construction formats each carry different trade-offs. The balance sheet looks different depending on which building is actually being discussed.
Every data centre announcement in APAC — every government approval, every investment headline, every community consultation — carries these questions inside it. Which ownership model? Which cooling approach? Which construction format? Who operates the facility layer and who operates the compute layer? They are rarely asked. They should be.
Issue 008 examines what happens when you place two types of facility side by side — the traditional data centre and the AI-native data centre — and ask what the same name is actually covering.