
    Let’s Get Wet!: The Changing Face of Data Centre Cooling

    • ICM
    • Jul 22
    • 9 min read

    Updated: Jul 23

    Liquid cooling is going mainstream—and Nvidia is leading the charge



    Written By Real People (all the typos and poor grammar are ours. Don’t blame the robots).


By Matthew Gould, ICM HPQC Fund Portfolio Manager

At ICM HPQC Fund (ICM HPQC), we do spend a modicum of time fussing over the future of cooling in high-performance data centres. Alongside chip design and photonics, more early-stage venture cash has probably flooded into cooling startups than into any other part of the architecture over the last ten years – at least up until Nvidia’s Vertiv announcement in 2024 (which really spoiled the party).


While not high on our shopping list for investments, we still keep an eye on cooling tech – these days more for the supporting technologies that the mainstream cooling system providers will need from bright tech labs over the next decade (graphene filters, anyone?). Just why it is so hard to scale in the cooling space – and why we think there will be some chunky M&A action over the next five years – requires a bit of a romp through the current state of play in the market.


If you are one of our LPs who is already in the DC game and has the scars to show from wrestling the crocs of cooling, or you were brilliant in your last next-gen data centre design but just got the unit economics wrong, you can skip this one – you know the pain points already.

Casual readers may find the cheat sheet below useful to burnish their credentials at the next pre-conference canapé session. You see, it’s all about…


    Pipes and Promises

    When Amazon Web Services (AWS) needed to keep its new AI servers from melting down, it didn’t call a vendor. Instead, it built a custom liquid cooling system in-house—in just eleven months. Here’s a picture they published to prove it:


With AI workloads pushing thermal envelopes to once-unthinkable levels, the world’s largest hyperscalers are rewriting the rulebook on data centre cooling. They are doing it fast, and often with novel, single-install architectures. That is fine while we are all racing to build AI training centres without a care for cost, but commercial reality will bite at some point and standardised cooling systems will dominate once more. Which cooling system providers will be acquired, and which will leave data centre owners with legacy kit that depreciates faster than an ice cube melts in Austin, Texas, is a fun party game to play.


Over the next five years, direct-to-chip liquid cooling is expected to become the norm in AI data centres. These systems pump coolant through cold plates mounted on processors, whisking heat away far more efficiently than air can. AWS’s closed-loop warm-water design, for example, circulates fluid without evaporating a drop. Not one! Only the hottest racks at AWS are cooled this way, leaving the rest to fans – an energy-savvy hybrid approach.
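
To get a feel for why liquid wins, here is a back-of-the-envelope sketch. The rack load and temperature rise are numbers we have assumed for illustration, not AWS design figures – the physics is just heat = flow × heat capacity × temperature rise.

```python
# Back-of-the-envelope: why liquid beats air. Illustrative numbers only --
# the rack load and temperature rise are our assumptions, not AWS figures.
RACK_KW = 100.0          # assumed heat load of one hot AI rack, kW
DELTA_T = 10.0           # assumed coolant temperature rise through the rack, K

WATER_J_PER_L_K = 4186.0 # water holds ~4.19 kJ per litre per kelvin
AIR_J_PER_L_K = 1.2      # air holds ~1.2 J per litre per kelvin

heat_w = RACK_KW * 1000.0
water_l_per_s = heat_w / (WATER_J_PER_L_K * DELTA_T)   # ~2.4 L/s
air_l_per_s = heat_w / (AIR_J_PER_L_K * DELTA_T)       # ~8,300 L/s

print(f"Water: {water_l_per_s:.1f} L/s (~{water_l_per_s * 60:.0f} L/min)")
print(f"Air:   {air_l_per_s:,.0f} L/s (~{air_l_per_s * 2.119:,.0f} CFM)")
print(f"Air must move ~{air_l_per_s / water_l_per_s:,.0f}x the volume of water")
```

Roughly a garden tap’s worth of water does the job of a small gale of air, which is why the fans lose the argument at these densities.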


These are NOT garden hoses pouring water directly into server racks. DO NOT TRY THIS AT HOME (you know who you are…): Source: AWS News: June 2025: https://www.aboutamazon.com/news/aws/aws-liquid-cooling-data-centers

    Alphabet was an early adopter: its third-generation Tensor Processing Units (TPUs) relied on liquid cooling as far back as 2018. Tesla overhauled its infrastructure to cool its Dojo supercomputer, a machine so demanding it once tripped an entire substation (who hasn’t?). Across the hyperscale landscape, liquid cooling has moved from the fringe to the foundation.


But a single approach is unlikely to prevail. Immersion cooling – where servers are submerged in non-conductive fluid – offers even greater efficiency and density, but requires a radical reworking of hardware and operations. We see it stalling. At best, expect a split: direct liquid for mainstream AI deployments and perhaps immersion for ultra-dense installations – or engineers abandoning that line of development entirely in favour of a proven method of close-to-processor cooling. Even with photonics at every level of the future stack, rack heat will still require a liquid to carry it away.


    Upstarts in the Flow

    While hyperscalers engineer solutions in-house, startups like ZutaCore and Iceotope are pushing novel ideas: two-phase systems that boil fluid directly on chips and condense it again in sealed loops. Others, such as Nexalus, use microjet sprays of warm water to capture waste heat for district heating. LiquidStack and Submer offer immersion tanks designed to cut cooling energy by up to 50%.
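
The appeal of “boiling on the chip” is that the latent heat of vaporisation soaks up energy at an essentially constant temperature, instead of the coolant warming up as it travels around the loop. A toy comparison per kilogram of fluid – the property values below are assumed, order-of-magnitude figures, not any particular vendor’s datasheet:

```python
# Toy comparison: heat absorbed per kilogram of coolant.
# Property values are assumed, order-of-magnitude figures -- check a real
# fluid's datasheet before sizing anything.
WATER_CP_KJ_PER_KG_K = 4.186        # sensible heat of water
DELTA_T_K = 10.0                    # assumed allowable rise in a single-phase loop
DIELECTRIC_H_VAP_KJ_PER_KG = 100.0  # assumed latent heat of an engineered dielectric

single_phase_kj_per_kg = WATER_CP_KJ_PER_KG_K * DELTA_T_K  # ~42 kJ/kg, coolant warms up
two_phase_kj_per_kg = DIELECTRIC_H_VAP_KJ_PER_KG           # ~100 kJ/kg, at the boiling point

print(f"Single-phase water: {single_phase_kj_per_kg:.0f} kJ/kg (coolant rises {DELTA_T_K:.0f} K)")
print(f"Two-phase boiling:  {two_phase_kj_per_kg:.0f} kJ/kg (absorbed at constant temperature)")
```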


Yet scale is the crux. Unless these technologies gain backing from major original equipment manufacturers (OEMs) or are subsumed into corporate portfolios, few CIOs will take the plunge. Thus, innovations are only likely to become mainstream when adopted – or acquired – by the likes of Schneider Electric or Vertiv.


Of course, it is not a hyperscaler but a chipmaker – Nvidia – that is rewriting the data-centre rulebook. Its Grace Hopper GH200 superchips can each draw up to 1,000 W, while its upcoming Rubin Ultra NVL576 rack threatens to pull a staggering 600 kW per rack, entirely liquid cooled. Wait. What??


Jensen Huang at the Nvidia keynote, March 27, 2025. Do you think he actually owns a motorbike?

If you are a data centre builder, architect, or provider of compute services at scale, you need to watch the Nvidia presentation on the Rubin Ultra. The ubiquitous Nvidia CEO Jensen Huang shared the whole technical roadmap at the end of March. He knows he is asking a lot of you data centre owners. You were probably sitting back in your boardroom chair, patting backs over the prudent preparations for 130 kW you’d all just made. Read that again – 600 kW, baby! Get back out there!
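
What does 600 kW of heat per rack actually mean in plumbing terms? A quick sanity check, using an assumed 10 K supply/return temperature difference rather than Nvidia’s published thermal spec:

```python
# What 600 kW per rack means in plumbing terms. The 10 K supply/return
# difference is our assumption, not Nvidia's published thermal spec.
RACK_W = 600_000.0     # Rubin Ultra class rack heat load, W
CP_WATER = 4186.0      # J/kg/K, specific heat of water
DELTA_T = 10.0         # K, assumed supply/return temperature difference

flow_kg_per_s = RACK_W / (CP_WATER * DELTA_T)    # ~14.3 kg/s
print(f"~{flow_kg_per_s:.1f} kg/s of water, roughly {flow_kg_per_s * 60:.0f} litres per minute, per rack")
```

Call it a fire-hose-class flow of warm water through every single rack, continuously. That is what the facilities teams are being asked to plan for.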


Data centre providers have to think like the laptop makers of old, getting ready for Intel’s next chip. Before you scoff, Jensen said it, not me: "This transition is going to take years of planning, this isn’t like buying a laptop, which is why I am telling you now. We have to plan with the land and the power for data centres with engineering teams two to three years out, which is why I [am showing] the roadmap." Here’s a link to the presentation and NO – you would not look good in the leather jacket Jensen wears. Link. (Or if you need the whole keynote, it’s here).


Under a U.S. Department of Energy initiative, Nvidia is going further, prototyping hybrid-cooling servers. These combine cold-plate loops for GPUs and CPUs with shallow immersion chambers filled with refrigerant – eliminating fans entirely. If they work, these designs could become blueprints for the next generation of AI data centres. We will cover these in another post. The designs are getting fruity.


    What began as a thermal constraint has spiralled into a baroque reimagining of architecture itself, with liquid creeping into the sacred realms once governed by airflow and aluminium fins—now overtaken by the subtle, gurgling quiet of hydrodynamic ingenuity.


    Freezing my bits off: Cryogenic Cooling for Classical Compute


The buzz around cryogenic cooling usually centres on quantum computing – where systems must operate near absolute zero to preserve qubit coherence. But a few visionaries are exploring whether extreme cold could also accelerate conventional computing. At ICM HPQC, we have had a tech crush on Finland’s Bluefors for years – mainly because our investee companies like Diraq and Q-Ctrl use them (as do most in the QC industry). Will they break into mainstream or AI cooling?


    Bluefors is a pioneer in cryogenic systems, and it recently installed 18 of its flagship KIDE platforms at Japan’s G-QuAT quantum–AI research centre—underscoring its expertise in sub-Kelvin environments for quantum research. Yet Bluefors is also marketing compact dilution refrigerators that sit within standard racks, complete with built-in pulse-tube coolers for reliability and low vibration—features that appeal to HPC hardware designers.


Operating at temperatures down to 4 K, these units are being pitched not only to qubit laboratories but also as niche accelerators in HPC environments. Silicon chips at these temperatures can deliver higher performance and use less energy. While still experimental, Bluefors’s compressor-and-dilution-fridge systems are quietly being tested alongside high-performance classical compute nodes. There is no sign of your large-scale, government-leased or defence-department data centre switching to helium or nitrogen to keep the racks chilly anytime soon; but it is well worth infrastructure investors in large-scale compute parks building some deep technical knowledge of such systems for when their customers demand those big fridges. It’s a-coming…


    Academic researchers are exploring cryogenic CMOS—silicon circuits operated below 120 K—for its potential in ultra-low energy computing. Microsoft Research, for example, experimented with control chips like “Gooseberry” at ~100 mK to interface with qubits—an approach that has inspired follow-on work in classical low-temperature electronics.

    The potential is tantalising: chips at <4 K show notably improved switching speeds and lower electrical resistance. For HPC workloads, this could mean greater throughput per watt—if architectural hurdles (and refrigeration costs) can be overcome.
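
Those refrigeration costs deserve a number. Thermodynamics sets a hard floor on how much work it takes to pump heat out of a 4 K cold stage, and real machines sit well below that ideal – the cryocooler efficiency below is an assumed figure for illustration:

```python
# The catch with sub-4 K classical compute: thermodynamics charges rent.
# The cryocooler efficiency is an assumed figure; real systems vary widely.
T_HOT = 300.0    # K, heat rejected at room temperature
T_COLD = 4.0     # K, cold stage

carnot_cop = T_COLD / (T_HOT - T_COLD)     # best-case heat lifted per unit of work
ideal_w_per_w = 1.0 / carnot_cop           # ~74 W of work per W of heat at 4 K

EFFICIENCY_VS_CARNOT = 0.02                # assumed ~2% of Carnot for a real machine
real_w_per_w = ideal_w_per_w / EFFICIENCY_VS_CARNOT

print(f"Carnot floor: {ideal_w_per_w:.0f} W of work per W removed at 4 K")
print(f"At {EFFICIENCY_VS_CARNOT:.0%} of Carnot: ~{real_w_per_w / 1000:.1f} kW per W removed")
```

So every watt dissipated at 4 K can cost kilowatts at the wall, which is why cryo-CMOS only pays off where the per-watt performance gain is dramatic or the workload is small and precious.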


    It is a seductive proposition—imagine a machine so cold its electrons march with near-perfect discipline, its transistors flicker like ghost lights in the vacuum, and its logic gates hum with the mechanical grace of a piano under moonlight—yet seduction alone rarely foots the bill.


Bluefors Gas Handling System Gen 2 (source: Bluefors.com). The Finns even make gas handling look sleek and cool.

So far, none of the hyperscalers – AWS, Tesla, Microsoft (for classical), or Alphabet – has disclosed a gas-based cryogenic cooling pilot programme for classical compute. AWS’s new liquid-cooling racks and upcoming super-dense systems are based on water or dielectric loops, not nitrogen or helium. Microsoft remains focused on quantum cryogenics, not classical servers. Tesla and Alphabet continue to scale liquid cooling through conventional methods. For now, cryogenics remains a specialist tool, not a mainstream solution. But stay alert…


    Vertiv and Nvidia: A Classically Cool Couple



In March 2024, Nvidia and Vertiv formalised a partnership that crystallises the era of liquid cooling for AI infrastructure. Vertiv joined the Nvidia Partner Network as the exclusive physical-infrastructure advisor, bringing its expertise to bear in the densest compute environments yet seen. Nvidia’s CEO Jensen Huang himself emphasised that “rising requirements for absolute power, cooling or power delivery will not pose an issue” thanks to this collaboration – a strategic blow to boutique cooling startups. Vertiv now seems to be your one way into the Nvidia ecosystem if you are a cooling startup. Boooo!


The alliance has yielded a reference design capable of supporting up to 132 kW per rack, combining liquid and air cooling in a hybrid architecture. Built under the U.S. Department of Energy’s COOLERCHIPS grant programme, the system fuses cold-plate liquid loops for GPUs with chilled rear-door heat exchangers, packaged into factory-ready modules that can be deployed up to 50% faster than conventional builds.
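
How a hybrid rack like that splits its heat budget is worth a sketch. The 80/20 split below is purely our assumption for illustration – we are not quoting the Vertiv–Nvidia reference design:

```python
# How a 132 kW hybrid rack might split its heat budget. The 80/20 split is
# purely our assumption, not a figure from the Vertiv-Nvidia reference design.
RACK_KW = 132.0
LIQUID_SHARE = 0.80    # fraction assumed captured by the GPU/CPU cold plates

liquid_kw = RACK_KW * LIQUID_SHARE   # handled by the cold-plate liquid loop
air_kw = RACK_KW - liquid_kw         # residual left for the rear-door heat exchanger

# Airflow the rear door still has to handle, at an assumed 10 K air-side rise
air_l_per_s = air_kw * 1000.0 / (1.2 * 10.0)

print(f"Cold plates: {liquid_kw:.0f} kW; rear door: {air_kw:.0f} kW")
print(f"The rear door still needs ~{air_l_per_s:,.0f} L/s of air (~{air_l_per_s * 2.119:,.0f} CFM)")
```

The point of the hybrid design is that even the air-cooled residue of a dense rack is a serious engineering problem in its own right.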


    The fruits of this collaboration are already visible in real-world deployments, including the iGenius “Colosseum” AI supercomputer in Italy, where the Vertiv–Nvidia reference design enables exascale AI performance in sovereign infrastructure. They have also delivered a 7 MW, 132 kW-per-rack blueprint for Blackwell GPUs, positioning Vertiv as a de facto standard player for hyperscale cooling.


    For startups chasing fresh cooling paradigms—be they two‑phase loops, microjets or even early immersion banks—this poses a stern challenge. Large providers like Vertiv now offer turnkey, software‑tested, globally supported systems explicitly designed for Nvidia’s hottest chips, making it harder for unproven niche technologies to gain ground. As Jensen Huang put it, “New data centres are built for accelerated computing and generative AI with architectures that are significantly more complex… With Vertiv’s world‑class cooling and power technologies, Nvidia can realise our vision.”


BUT – as with actual chip design, the other majors (Tesla, Alphabet) are loath to be locked into single-provider designs – surely Elon cannot give away more of his margin to Jensen? So a few of the mid-stage start-ups promising safe, reliable, plug-and-play cooling solutions for architectures of any design provenance may yet have space to play. Nvidia went with Vertiv, and Musk seems to be placing his chips with Supermicro – but who knows? Nvidia is relying on startups and its ecosystem for its photonics plans. Some of the majors may yet be pushed to make offers for those earlier-stage cooling companies. ICM HPQC will keep an eye out – so you don’t have to.


    That’ll do for today. Time to enjoy a cold one!




    Important Note

    The information in this article should not be considered an offer or solicitation to deal in ICM HPQC Fund (Registration number T22VC0112B-SF003) (the “Sub-fund”). The information is provided on a general basis for informational purposes only and is not to be relied upon as investment, legal, tax, or other advice. It does not take into account the investment objectives, financial situation, or particular needs of any specific investor. The information presented has been obtained from sources believed to be reliable, but no representation or warranty is given or may be implied that it is accurate or complete. The Investment Manager reserves the right to amend the information contained herein at any time, without notice. Investments in the Sub-fund are subject to investment risks, including the possible loss of the principal amount invested. The value of investments and the income derived therefrom may fall or rise. Past performance is not indicative of future performance. Investors should seek relevant professional advice before making any investment decision. This document is intended solely for institutional investors and accredited investors as defined under the Securities and Futures Act (Cap. 289) of Singapore. This document has not been reviewed by the Monetary Authority of Singapore.


    ICM HPQC Fund is a registered Sub-fund of the ICMGF VCC (the VCC), a variable capital company incorporated in the Republic of Singapore. The assets and liabilities of ICM HPQC Fund are segregated from other Sub-funds of the VCC, in accordance with Section 29 of the VCC Act.

     

     



     



    © 2025 by ICMGF VCC. All rights reserved.


