CoolIT Systems Takes Liquid Cooling for HPC Data Centers to the Next Level

Patrick McGinn, CoolIT Systems

In this video from SC15, Patrick McGinn from CoolIT Systems describes the company’s latest advancements in industry-leading liquid cooling solutions for HPC data center systems.

CoolIT Systems showcased its full range of Rack DCLC™ centralized pumping warm water liquid cooling systems for HPC data centers at the conference. HPC data center deployments incorporating CoolIT Systems Rack DCLC (direct contact liquid cooling) technology were featured, including Poznan Supercomputing and Networking Centre and the Center of Biological Sequence Analysis at the Technical University of Denmark. Multiple examples of liquid cooled server configurations from Dell, Lenovo, Intel, Supermicro, Huawei, and Penguin Computing were presented. CoolIT’s cold plate solutions for present and future high-wattage CPUs and GPUs from Intel, Nvidia, and more were on display. Additionally, CoolIT revealed its next-generation CDU, code-named ‘Revelstoke’: a rack-mounted, 4U liquid-to-liquid heat exchanger capable of managing upwards of 80 kW and 120 nodes with warm water cooling.
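For a rough sense of scale, those Revelstoke targets work out to the following per-node heat budget (a back-of-envelope estimate assuming a fully populated rack; actual per-node loads will vary by configuration):

80 kW / 120 nodes ≈ 0.67 kW (about 670 W) per node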

“The adoption from vendors and end users for liquid cooling is growing rapidly with the rising demands in rack density and efficiency requirements,” said Geoff Lyon, CEO/CTO of CoolIT Systems, who also chairs The Green Grid’s Liquid Cooling Work Group. “CoolIT Systems is responding to these demands with our world-leading, enterprise-level liquid cooling solutions.”

Transcript:

insideHPC: What’s new with CoolIT Systems? Your liquid cooling seems to be all over the floor.

Patrick McGinn: It is all over the floor, and there’s lots new. We’re showing off our standard liquid cooling in-server components. These are our passive cold plates. As with all our cold plates, we do things both passive and active. We still have our pumped cold plates, but most of our data center product uses the passive cold plates in the centralized pumping systems.
Some of the other new pieces that we’re showing off are memory cooling, VR cooling, other little bits and pieces that we’re adding in deployments right now.

insideHPC: I see you’ve got a number of vendors here using your stuff.

Patrick McGinn: Yes, we’ve been pretty busy this year. On the booth here you’ll see everything from Inspur, Lenovo, Intel, new Dell projects, and the Huawei CH121 blade, which we’re installing in Poland right now in a 20-rack cluster.

insideHPC: Patrick, looks like we’ve got a heat exchanger here.

Patrick McGinn: That’s right. This is the CoolIT CHx40 module, and we’ve had great success in the past few years with rack-based heat exchangers. So the CHx40 and the AHx20 are two of our most popular systems. Then if we look over to the back here, we have our CHx650, which is for a full cluster row of racks and manages 650 kilowatts of load with warm water.

insideHPC: How many kilowatts can you cool with this?

Patrick McGinn: This is the CHx40, and it can manage 40 kilowatts with warm water in 2U. It’s a very tight package. And we’re graduating, at this show, up from the CHx40 to the Revelstoke. It’s still under a code name. It will be released in Q1 of 2016, and the target for this project is 80 to 100 kilowatts, and 100 to 120 nodes managed per heat exchanger, in 4U of rack space. So it’s a very, very dense heat exchange package.

insideHPC: If I had a 100 kilowatt rack, this could take that on?

Patrick McGinn: It could take that on in 4U of space. That’s it.

insideHPC: Let’s bring this all together. These new chips like Knights Landing are really hot, right? HPC wants increasing density, so it just seems to me that liquid cooling is a no-brainer.

Patrick McGinn: I feel the same way! You see a lot of liquid out there, you really do. There are quite a few vendors here at the show. Clustered deployments and high performance computing are really looking to liquid to solve the problems they’re seeing in terms of density. Not just that, but it’s about data center efficiency as well. Relaxing reliance, maybe, on chiller loops and chilled water coils, and moving to more warm water systems.

See our complete coverage of SC15
Sign up for our insideHPC Newsletter