
Looking Back to Look Forward: AI’s Very Real Data Center Cooling Demand

Dec 2024
Data Center | Artificial Intelligence
 Thermographic view of data center servers

When I speak at industry or company events, I frequently start my presentations with a cliché attributed to Mark Twain: “History doesn’t repeat, but it often rhymes.”

The benefit of working in the same industry for a long time is spotting these trends—where something old becomes new again, or some previous technology is reimagined to address new challenges.

This sentiment may seem out of place because, with AI, much of the data center industry is experiencing radical change. However, the cooling technologies being deployed to support AI have deep roots in the past. Indulge me as I jump in the time machine and take a short journey back to 2008.

Efficient Data Center Cooling Methods Begin to Emerge in 2008

At that point, Sun Microsystems (pre-Oracle acquisition) started sharing and evangelizing its data center designs. Some of its concepts, such as eliminating raised floors and routing all power and cabling distribution overhead, became mainstays of hyperscale data centers in the years that followed. In 2008, Sun held a “Chill Off,” an independent contest to identify the most efficient data center cooling technology for high-density deployments. IBM’s Cool Blue Rear Door Heat Exchanger emerged victorious, supporting a 10kW installation in what the test criteria deemed the most energy-efficient manner.

Soon thereafter, Sun entered the commercial space for rear door coolers, unveiling its Glacier rear door cooler at the Supercomputing 2008 trade show. To Sun’s credit, Glacier significantly raised the stakes on cooling capacity for a rear door, supporting 35kW in a similarly passive, no-fan design.

Also in 2008, the National Center for Atmospheric Research (NCAR) deployed a new supercomputer called Bluefire. At the time of deployment, Bluefire ranked #31 on the Top 500 list of supercomputers and had a maximum power demand of 649kW. Power consumption per rack exceeded 30kW, so the system used three levels of data center cooling:

1. Direct-to-chip cooling with “water-chilled copper plates mounted in direct contact with each POWER6 microprocessor chip”
2. The IBM Cool Blue Rear Door Heat Exchanger mounted on each cabinet
3. Traditional raised floor cooling

The events of 2008 foretold, at least partially, the prominence of two technologies for supporting AI deployments: direct-to-chip cooling and rear door heat exchangers, along with the emerging need to couple them. Techniques previously reserved for supercomputing (like Bluefire) may become mainstream if the AI deployment forecasts come true (and NVIDIA’s earnings reports suggest we’re on that path).
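As a rough illustration of how these cooling layers couple, here is a minimal Python sketch that apportions a 30kW rack’s heat load across the three stages Bluefire used. The capture fractions are illustrative assumptions for the sketch, not published Bluefire or IBM figures.

```python
# Minimal sketch: splitting a high-density rack's heat load across the three
# cooling levels described above. The capture fractions are hypothetical
# values for illustration, not figures from Bluefire or IBM documentation.

RACK_LOAD_KW = 30.0  # per-rack load in the Bluefire range (>30 kW per rack)

# Assumed fraction of rack heat each stage removes (hypothetical values).
COLD_PLATE_FRACTION = 0.60  # direct-to-chip cold plates on the processors
REAR_DOOR_FRACTION = 0.30   # rear door heat exchanger on the cabinet
ROOM_AIR_FRACTION = 0.10    # remainder handled by raised floor air cooling


def split_heat_load(rack_kw: float) -> dict[str, float]:
    """Return the heat (kW) absorbed by each cooling stage for one rack."""
    return {
        "direct_to_chip": rack_kw * COLD_PLATE_FRACTION,
        "rear_door": rack_kw * REAR_DOOR_FRACTION,
        "room_air": rack_kw * ROOM_AIR_FRACTION,
    }


if __name__ == "__main__":
    for stage, kw in split_heat_load(RACK_LOAD_KW).items():
        print(f"{stage}: {kw:.1f} kW")
```

Adjusting the assumed fractions shows the design trade-off: the more heat the cold plates capture, the less the rear door and room air must absorb, which is why the rack-level technologies tend to be sized together.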

Data center operators explore different cooling methods on a laptop

Different Data Center Cooling Choices Transform the AI Landscape

There will be discussions among vendors and designers, for example, over whether water or dielectric fluid is the better direct-to-chip medium, or whether immersion cooling offers a better alternative path. But the good news is that despite immense changes in IT, the cooling techniques needed to handle AI have been charted before. That knowledge, historically confined to the realms of national labs and research-intensive industries, will need to reach a wider audience.

Learn more about how Panduit can deliver a seamless integration of products and solutions to enable your AI network by visiting our Artificial Intelligence landing page.

References

  1. IBM Cool Blue Shines in Vendor 'Chill Off'
  2. Sun Debuts New Rack Cooling Door 
  3. https://www.top500.org/system/176390/
  4. https://news.ucar.edu/940/ncar-installs-76-teraflop-supercomputer-critical-research-climate-change-severe-weather

Author:

Justin Blumling

Justin Blumling is a Senior Solutions Architect at Panduit. He has over fifteen years of technical and commercial experience within data centers, including electrical, mechanical, cabinets, and DCIM. He supports Panduit’s Intelligent Infrastructure offering, which comprises rack PDUs, UPS, DCIM software, and intelligent sensors.

He holds a Master of Business Administration (MBA) degree and is a US Department of Energy (DOE) certified Data Center Energy Practitioner – Specialist.