What does NVIDIA Rubin really mean for data centre cooling?

Eric Williams featured in Tech Capital.

Data centre tech leaders and liquid cooling vendors explore the reality behind the hype of Jensen Huang’s “no chillers required” soundbite from CES.

By Jack Haddon

Deputy Editor, The Tech Capital

Earlier this month, NVIDIA CEO Jensen Huang told the audience at the Consumer Electronics Show (CES) in Las Vegas that the AI chip-maker’s latest architecture, the Vera Rubin design, could operate in data centres without water chillers.

Equity markets reacted swiftly, with manufacturers of crucial data centre cooling components, such as Modine Manufacturing, Johnson Controls and Trane Technologies, all suffering sell-offs of more than 5% shortly after the presentation.

While the initial reaction framed Rubin as a step-change in data centre cooling efficiency, the reality, according to operators, engineers, and cooling specialists speaking to The Tech Capital in its aftermath, is far more nuanced.

Among these more measured commentators, Rubin has been received not as an overnight transformation of the cooling market, but as a continuation along the path the industry was already travelling: higher operating temperatures and wider adoption of liquid cooling solutions, but with traditional systems still an important part of the mix.

What’s changed with Rubin?

The comment that caught the attention of media, analysts and equity markets related to the inlet water temperatures for the direct-to-chip liquid cooling systems that Rubin deployments will require.

Huang noted that Rubin servers could be cooled with water at 45 degrees Celsius. While noteworthy, it is not entirely groundbreaking.

As Rich Whitmore, CEO of Motivair (now a part of Schneider Electric) notes, chip-level thermal boundaries have remained broadly consistent for the past three NVIDIA GPU generations.

This view was shared by Joe Capes, the CEO of liquid cooling specialist LiquidStack.

“Blackwell was using lead-in water temperatures of up to 40 degrees Celsius (°C). Rubin is pushing it to 45°C so it’s not an earth-shattering development,” he told The Tech Capital during an interview at PTC’26 last week.

In fact, Whitmore, Capes and Maciek Szadkowski, the CTO of DCX Liquid Cooling Systems, all agree that the industry has been moving towards higher water temperatures for some time.

“We have deployments with the NVIDIA GB200 platform that were built with 44°C in mind,” Szadkowski reveals.

So if this is a continuation of a trend that leading vendors were all well aware of, why did the Rubin announcement garner so much attention?

Capes believes that NVIDIA and its ecosystem partners are under increasing pressure to address concerns around the impact and sustainability of AI as a whole.

Against this backdrop, the unveiling of Rubin could have been an attempt to reframe a conversation about operational efficiency as one about sustainability.

“I think what NVIDIA has put out there is an ideal state; if we could do everything exactly the way we wanted to do it,” Whitmore adds.

Optimal vs realistic temperatures

This raises an important point. Just because NVIDIA's Rubin and Blackwell platforms can be cooled at materially higher inlet temperatures than previous generations doesn't mean they always will be, or even should be.

Eric Williams, senior vice president of solutions and engineering at Compass Data Centers, tells The Tech Capital that while GPUs may be able to tolerate higher temperatures in a lab, real-world facilities need to support diverse workloads, with storage, networking and lower-density deployments sitting alongside the high-density racks.

“We consistently see hyperscale customers choosing to run certain workloads at lower temperatures, for performance, consistency, and additional risk management reasons,” he explains.

Simply put, designing a facility purely around a vendor’s upper thermal limits ignores how data centres are actually run.

“To me, it’s really more about reshaping how cooling systems need to operate across a wider range of conditions, customer preferences and workloads,” Williams says.

Compass is now designing for 80% liquid and 20% air cooling at the facility level, with the option to shift that mix further over time.

That flexibility, rather than maximum efficiency, is what he says hyperscale customers are buying:

“There certainly is an eye on whether we can operate the data centre at higher temperatures, but we just haven’t seen that hit the ground yet and be a consistent ask”.

Chillers are here to stay

One of the key benefits of using higher water temperatures is the ability to leverage free cooling, whereby ambient outside air, rather than mechanical refrigeration, cools the water to the required supply temperature.

For systems that require water at much lower temperatures, markets such as the Nordic states in Europe have been touted as offering excellent conditions for this technique.

According to CTO Tate Cantrell, data centre operator Verne has been “doing chiller-less cooling in the Nordics for some time … even for air cooling”.

Minimising the number of mechanical components in a data centre is an optimal outcome for a developer.

It keeps initial capex down, lowers complexity, reduces the intensity of maintenance and monitoring, and removes a link from an already lengthy and entangled supply chain.

And there are secondary benefits as well. As noted by DCX’s Szadkowski, in moderate climates such as Northern Virginia or New Jersey, a 10°C increase in acceptable supply water temperature, to 45°C, could translate to a one-third reduction in expenditure on heat rejection components, including large fans, aluminium, copper, and construction materials.

The savings are particularly significant for hyperscale deployments where heat rejection infrastructure represents a substantial portion of total cooling system costs.

But while Rubin is the latest step on the path towards chiller-less operation in warmer geographies, Cantrell doesn’t anticipate a huge change immediately.

Both he and Compass’s Williams acknowledge that NVIDIA is not the only chip provider in town.

When hyperscalers’ custom silicon or hardware from other vendors is deployed, it may not be designed to the same thermal specifications as Rubin, so the same approach to cooling cannot be applied immediately.

Furthermore, as noted by Motivair’s Whitmore, liquid cooling can become very expensive for certain components, such as memory in servers.

As a result, the need for chillers is not going away as Rubin is rolled out in H2 2026 and beyond. But the circumstances in which they are used may well change.
