What 100+ Years of Grid Build-Out Can’t Deliver to AI Data Centers

Webinar Series Blog, Part 1: Reliability

Across North America, we’re watching something the grid simply wasn’t built for: power demand that took 100+ years to develop is now set to double within just a few years. Data centers, especially AI training and inference, are at the heart of this surge.

For decades, many regions saw flat or even declining power demand. Now we’re seeing multi-hundred-megawatt, even multi-gigawatt, projects landing in places where the wires, substations, and generation capacity just can’t keep up.

The result is predictable: long interconnection queues, congestion, escalating prices, or, in some cases, developers being told “come back in eight years.”

This is why behind-the-meter (BTM) natural gas generation is suddenly on every data center developer’s radar.

This blog kicks off a multi-part series based on our recent webinar. Today we’ll set the stage and dig into Comparison #1: Reliability, because if you don’t get power reliability right, nothing else matters.

Why Behind-the-Meter Natural Gas Is a Serious Contender

Traditionally, data centers connected to the grid, added a UPS, paired it with diesel backup generation, and called it a day! Grid power has always been attractive:

  • High reliability,
  • Excellent ramp rate handling (ability to handle large load swings),
  • Competitive pricing, and
  • Lower carbon intensity compared to standalone generation.

But that standard playbook is buckling under today’s demand. In many regions, developers are facing two problems:

  1. Transmission congestion – You might have plenty of generation, but no way to move it.
  2. Insufficient generation – The wires are there, but the power isn’t.

Either way, the outcome is the same: you can’t get enough power, or you can’t get it in time.

Behind-the-meter natural gas generation steps in because it’s:

  • Fast to deploy
  • Modular
  • Scalable
  • Competitive on cost
  • Capable of high reliability with the right architecture

Which brings us to the comparison.

The Case Study: 250MW, No Grid, with High Reliability (>99.9%)

In our recent webinar, we presented an analysis of a 250MW data center with zero grid connection.

The requirements:

  • Full power within two years
  • 99.9% uptime
  • Power price at or below USD 0.09/kWh
  • New equipment only

We compared three solutions sized to hit the 250MW base load:

  1. 50MW simple-cycle gas turbines
  2. 20MW medium-speed engines
  3. mtu 20V4000 high-speed engines (2.5MW each)

Conventional power generation design favours bigger units: fewer machines, simpler layout. But reliability for a data center isn't about the reliability of a single unit; it's about system reliability. Can the entire plant still deliver full power even when individual units are down?

Reliability: Where Many Smaller Units Beat a Few Large Ones

A single unit’s reliability is straightforward: total hours minus downtime, divided by total hours. Say a machine needs 500 hours of maintenance and unplanned downtime per year. You’re looking at around 94% single-unit reliability.
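That back-of-envelope calculation can be written out directly (a sketch assuming an 8,760-hour year and the 500-hour downtime figure from the example):

```python
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

# Assumed figure from the example: 500 hours/year of planned
# maintenance plus unplanned downtime for one machine.
downtime_hours = 500

# Reliability = (total hours - downtime) / total hours
single_unit_reliability = (HOURS_PER_YEAR - downtime_hours) / HOURS_PER_YEAR
print(f"{single_unit_reliability:.1%}")  # → 94.3%
```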

The problem? In a 250MW islanded plant, a single unit being offline can’t be allowed to interrupt the full load. That’s where redundant units come in.

When you only have a handful of large units, every outage is statistically painful. When you have many smaller units, the system becomes far more resilient.

Here’s what the numbers showed:

  • The large turbine solution needed two extra 50MW turbines, a full 100MW of overbuild, to hit 99.9% system reliability.
  • The medium-speed engine solution needed four to five extra units.
  • The mtu 20V4000 solution reached the same reliability target with roughly 15 extra 2.5MW engines, equating to about 40MW of overbuild.

Less overbuild means less capital expense, and in this 250MW case study, that’s up to $100 million in Capex savings compared to the turbine option.

The reason is simple: reliability grows rapidly when risk is spread across many smaller units. You can service them without sacrificing plant output, and your redundancy is far more efficient.
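One way to sketch this effect is a simple binomial availability model: assume every unit has the same availability and fails independently, then count how many spare units a configuration needs before the plant serves its full base load 99.9% of the time. The 94% per-unit figure and the unit counts below are illustrative; a real reliability study assigns different forced-outage and maintenance rates to each technology, so the exact spare counts in the case study differ from this toy model.

```python
from math import comb


def system_reliability(n_needed: int, n_total: int, p: float) -> float:
    """Probability that at least n_needed of n_total independent
    units are available, each with availability p (binomial model)."""
    return sum(
        comb(n_total, k) * p**k * (1 - p) ** (n_total - k)
        for k in range(n_needed, n_total + 1)
    )


def redundant_units_needed(n_base: int, p: float, target: float) -> int:
    """Smallest spare count k such that an (n_base + k)-unit plant
    still covers the base load with probability >= target."""
    k = 0
    while system_reliability(n_base, n_base + k, p) < target:
        k += 1
    return k


p, target = 0.94, 0.999  # illustrative per-unit availability and uptime target
for label, n_base in [("50MW turbines", 5), ("2.5MW engines", 100)]:
    k = redundant_units_needed(n_base, p, target)
    print(f"{label}: {k} spares -> {k / n_base:.0%} overbuild")
```

Even in this simplified model, the relative overbuild shrinks sharply as unit count rises, which is the mechanism behind the capex gap in the case study.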

What This Means for Developers Today

For AI data centers, especially inference data centers with wild load swings, modularity isn’t a luxury. It’s survival.

The mtu 20V4000 platform brings additional advantages for behind-the-meter deployment:

  • High tolerance for start/stop frequency (500+ annually)
  • Minimal derating at high temperature or elevation
  • Compact footprint and fast deployment
  • Closed-loop cooling with minimal water use

In markets where grid access is delayed, restricted, or unaffordable, these engines offer a practical, reliable path forward without the oversized redundancy that big-unit solutions demand.

Coming Up Next

This was the first post in our webinar series. Next, we'll break down the comparison that's keeping developers up at night, Ramp Rate Resiliency: why volatile AI loads expose weaknesses in traditional generation technologies.



November 27, 2025