When I speak at industry or company events, I frequently start my presentations with a cliché attributed to Mark Twain: “History doesn’t repeat, but it often rhymes.”
The benefit of working in the same industry for a long time is spotting these trends—where something old becomes new again, or some previous technology is reimagined to address new challenges.
This sentiment may seem out of place because, with AI, much of the data center industry is experiencing radical change. Yet the cooling technologies being deployed to support AI have deep roots in the past. Indulge me as I jump in the time machine and take a short journey back to 2008.
Efficient Data Center Cooling Methods Begin to Emerge in 2008
At that point, Sun Microsystems (pre-Oracle acquisition) had begun sharing and evangelizing its data center designs. Some of its concepts, such as eliminating raised floors and routing all power and cabling distribution overhead, became mainstays of hyperscale data centers in the years that followed. In 2008, Sun held a "Chill Off," an independent contest to identify the most efficient data center cooling technology for high-density deployments. IBM's Cool Blue Rear Door Heat Exchanger emerged victorious in this competition, supporting a 10kW installation in what the test criteria deemed the most energy-efficient manner.
Soon thereafter, Sun entered the commercial rear door cooler market, unveiling its Glacier rear door cooler at the Supercomputing 2008 tradeshow. To its credit, Glacier significantly raised the bar for rear door cooling capacity, supporting 35kW in a similarly passive, fanless design.
Also in 2008, the National Center for Atmospheric Research (NCAR) deployed a new supercomputer called Bluefire. At the time of deployment, Bluefire ranked #31 on the Top 500 list of supercomputers and had a maximum power demand of 649kW. Power consumption per rack exceeded 30kW, so the system used three levels of data center cooling (a rough sketch of the heat-load arithmetic follows the list):
1. Direct-to-chip cooling with “water-chilled copper plates mounted in direct contact with each POWER6 microprocessor chip”
2. The IBM Cool Blue Heat Exchanger mounted on each cabinet
3. Traditional raised floor cooling
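For a rough sense of scale, here is a minimal sketch of the heat-load arithmetic those figures imply. The 649kW total and 30kW-per-rack numbers come from the Bluefire deployment described above; the split of each rack's heat across the three cooling levels is purely an illustrative assumption, not published data.

```python
# Rough heat-load sketch for a Bluefire-class deployment.
# Total and per-rack power figures are from the NCAR numbers above;
# the split across cooling levels is an assumed, illustrative example.

TOTAL_POWER_KW = 649      # maximum power demand of the full system
POWER_PER_RACK_KW = 30    # per-rack consumption (stated as "exceeded 30kW")

# Hypothetical share of each rack's heat removed at each cooling level.
COOLING_SPLIT = {
    "direct-to-chip cold plates": 0.60,
    "rear door heat exchanger":   0.30,
    "raised floor room cooling":  0.10,
}

def estimate_racks(total_kw: float, per_rack_kw: float) -> int:
    """Upper-bound rack count if every rack draws at least per_rack_kw."""
    return int(total_kw // per_rack_kw)

def heat_per_level(per_rack_kw: float, split: dict) -> dict:
    """Apportion one rack's heat load across the cooling levels."""
    return {level: per_rack_kw * share for level, share in split.items()}

if __name__ == "__main__":
    racks = estimate_racks(TOTAL_POWER_KW, POWER_PER_RACK_KW)
    print(f"At >={POWER_PER_RACK_KW} kW per rack, {TOTAL_POWER_KW} kW "
          f"implies at most {racks} racks.")
    for level, kw in heat_per_level(POWER_PER_RACK_KW, COOLING_SPLIT).items():
        print(f"  {level}: ~{kw:.1f} kW per rack (assumed share)")
```

The point of the sketch is simply that at 30kW and above per rack, room-level air cooling alone cannot carry the load, which is why Bluefire layered cold plates and a rear door exchanger on top of it.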
The events of 2008 foretold, at least in part, the prominence of two technologies now supporting AI deployments: direct-to-chip cooling and rear door heat exchangers, along with the emerging need to couple them. Techniques previously reserved for supercomputing (like Bluefire) may become mainstream if the AI deployment forecasts come true (and NVIDIA's earnings reports suggest we're on that path).
Different Data Center Cooling Choices Transform the AI Landscape
There will be discussions among vendors and designers, for example, over whether water or dielectric fluid is the better direct-to-chip medium, or whether immersion cooling offers a better alternative path. But the good news is that, despite immense changes in IT, the cooling techniques needed to handle AI have been charted before. That knowledge, historically confined to national labs and research-intensive industries, will need to reach a wider audience.
Learn more about how Panduit can deliver a seamless integration of products and solutions to enable your AI network by visiting our Artificial Intelligence landing page.