According to TheRegister.com, Turner & Townsend’s 2025-2026 Datacenter Construction Cost Index reveals that nearly half of industry professionals cite power access as their biggest scheduling constraint, with US grid connection wait times stretching up to seven years. The report surveyed 300+ projects across 20+ countries with input from 280 experts, finding that OpenAI’s disclosed projects alone would consume 55.2 gigawatts – enough to power 44.2 million households. Deloitte warned in June that AI datacenter power requirements in the US may be 30 times greater within a decade, with 5 GW facilities already planned. Meanwhile, 83 percent of professionals believe local supply chains can’t support the advanced cooling technology needed for high-density AI deployments, and AI-optimized liquid-cooled facilities cost 7-10 percent more than air-cooled designs.
Power grid reality check
Here’s the thing that really jumps out: we’re talking about seven-year wait times just to get electricity hooked up. That’s longer than a full US presidential term. Basically, if you started planning an AI datacenter today, your kids might be in middle school before it gets power. And OpenAI’s disclosed projects needing enough power for 44.2 million households – roughly triple California’s entire housing stock? That’s absolutely wild when you think about it.
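To sanity-check those numbers, here’s a quick back-of-the-envelope sketch in Python. The 55.2 GW and 44.2 million household figures come from the report; the ~14.5 million California housing units is an outside ballpark roughly in line with Census estimates, not something the report states.

```python
# Back-of-envelope check on the report's headline numbers. The 55.2 GW and
# 44.2M-household figures are from the report; the California housing count
# is an outside ballpark assumption (roughly per US Census estimates).
openai_disclosed_gw = 55.2    # GW across OpenAI's disclosed projects
households_powered = 44.2e6   # households the report says that would power

# Implied average draw per household
watts_per_household = openai_disclosed_gw * 1e9 / households_powered
print(f"Implied draw per household: {watts_per_household:,.0f} W")  # ~1,249 W

# Comparison against California's housing stock (assumed ~14.5M units)
california_housing_units = 14.5e6
ratio = households_powered / california_housing_units
print(f"Multiple of California's housing stock: {ratio:.1f}x")  # ~3.0x
```

The implied ~1.25 kW per household roughly matches the commonly used ~1.2 kW average draw for a US home, so the report’s household conversion looks like a straightforward one.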
The scale problem is becoming impossible to ignore. We’re not talking about adding a few extra servers here – we’re talking about infrastructure that competes directly with housing and manufacturing for limited grid capacity across the US, UK, and Europe. So what happens when communities have to choose between powering homes and powering AI models? That’s going to get messy fast.
Cooling supply chain crunch
But wait, there’s more! Even if you somehow solve the power problem, you’ve got the cooling dilemma. Traditional air-cooled systems just don’t cut it for these AI beasts, and 83% of the industry says supply chains can’t handle the advanced cooling tech needed. Liquid-cooled facilities already cost 7-10% more, and that gap will probably widen as demand outstrips supply.
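To put that premium in concrete terms, here’s a minimal sketch. Only the 7-10 percent range comes from the report; the $1 billion air-cooled baseline is a hypothetical round number for illustration.

```python
# Illustration of the 7-10% liquid-cooling cost premium the report cites.
# The $1B air-cooled baseline is a hypothetical figure, not from the report.
air_cooled_budget = 1_000_000_000  # USD, assumed baseline build cost

for premium in (0.07, 0.10):
    liquid_cooled_cost = air_cooled_budget * (1 + premium)
    extra = liquid_cooled_cost - air_cooled_budget
    print(f"{premium:.0%} premium: ${extra / 1e6:,.0f}M extra "
          f"(total ${liquid_cooled_cost / 1e9:.2f}B)")
```

At that scale, the premium works out to tens of millions of dollars per facility – before any widening from the supply squeeze kicks in.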
Paul Barry from Turner & Townsend nailed it when he said AI datacenters are “more advanced, and by extension, costlier.” We’re basically building Ferraris when the industry has been making Toyotas for decades. The skills, materials, and manufacturing capacity just aren’t there yet.
Solutions or band-aids?
The report suggests on-site generation and energy storage, but let’s be real – when a report mentions “renewables” while acknowledging that gas-powered turbines will likely do the heavy lifting, you know we’re looking at temporary fixes rather than sustainable solutions. The full report recommends reviewing procurement models, but is that enough when we’re dealing with fundamental infrastructure limitations?
And don’t even get me started on the hardware side. Chip makers struggling to keep up? That’s another bottleneck waiting to happen. We’re building the equivalent of entire cities’ worth of computational power, and the foundation just isn’t there to support it.
What comes next
So where does this leave us? Basically, the AI gold rush is hitting the infrastructure wall hard. Companies betting everything on AI expansion might need to seriously reconsider their timelines. Governments are prioritizing these projects, but you can’t just wish power grids into existence.
The next couple years will be fascinating to watch. Either we see massive infrastructure investment that fundamentally changes our energy landscape, or we watch AI growth slow to a crawl while the physical world catches up. My money’s on a messy combination of both.
