The accelerating adoption of Asetek® liquid cooling by OEMs at major HPC sites worldwide (LLNL, Los Alamos, Sandia and JCAHPC's Oakforest-PACS) demonstrates the technology's flexibility. Much like the famous Swiss Army Knife, Asetek liquid cooling can be adapted to most cooling problems a data center might encounter.
Asetek distributed pumping places coolers (integrated pumps/cold plates) within the server and blade nodes themselves. These coolers replace the CPU/GPU heat sinks in the server nodes, removing heat with hot water rather than far less efficient air. Asetek has more than 3.5 million of these coolers deployed worldwide.
Asetek manages and removes heat using heat exchangers inside the RackCDU™, which transfer heat, not liquid, to data center facilities water. RackCDUs come in two types to give cluster operators flexibility. InRackCDU™ is mounted in the server rack along with the servers themselves. Occupying the top 3U or 4U of the rack (depending on overall heat load), it connects to Zero-U, PDU-style manifolds in the rack.
Most HPC clusters using Asetek technology today employ VerticalRackCDU™. This consists of a Zero-U rack-level CDU (Cooling Distribution Unit) mounted in a 10.5-inch rack extension that includes space for 3 additional PDUs.
Beyond the rack, hot water cooling in this architecture offers additional advantages in the overall cost of heat removal. Because hot water (up to 40ºC) is used, the data center does not require expensive CRACs and cooling towers; it can instead use inexpensive dry coolers. The system can, of course, also be connected to traditional chilled water systems (often done when capacity is available).
Rather than expelling heat outside the data center, RackCDU D2C™ can be used for heat recovery and reuse. This is being done at NREL in the US and at The University of Tromsø in Norway, where heat from the Stallo cluster is used for building heating.
Back at the server level, the distributed pumping architecture provides further options. Where a data center is moving toward thermal room neutrality, Asetek uses the same distributed pumping coolers in its ISAC™ (In Server Air Conditioning) server cooling solution. In ISAC, a heat exchanger (HEX) is added within the server. This HEX captures the heat in the air that was not removed by the cold plates and adds it to the liquid circuit. ISAC servers are installed just like regular RackCDU D2C servers, integrating with both InRackCDU and VerticalRackCDU.
Lastly, distributed pumping elegantly supports liquid-assisted air cooling with Asetek ServerLSL™ (Server Level Sealed Loop). Like Asetek's other solutions, ServerLSL replaces less efficient air coolers with liquid coolers, but exhausts 100% of the heat into the data center as hot air via a HEX. The overall data center heat load is then handled by existing CRACs and chillers with no changes to the infrastructure. This approach is often chosen when an OEM wishes to incorporate very high wattage components into designs whose air cooling cannot otherwise handle the heat. It is also used in the high frequency trading arena, where server-level overclocking is seen as a must-have.
From a facility management perspective, ServerLSL can be used where budget limitations are paramount or in a transitional data center (a mix of low-density air-cooled and high-wattage liquid-cooled racks).
Whatever the situation, Asetek's distributed pumping architecture is an elegant “Swiss Army Knife”: adaptable to the specific requirements of OEMs for cost-effective liquid cooling that addresses the diverse needs of customer data centers.
Demonstrating Asetek's adaptability to any data center cooling need, HPC installations from around the world are currently on display at SC16 in Salt Lake City, Utah, November 14-17. Servers from these installations featuring Asetek liquid cooling will be shown, including servers installed at Oakforest-PACS, the highest-performance supercomputer in Japan.
Located at booth #2301, Asetek will display cost-effective solutions from OEMs such as CIARA®, Cray®, Format®, Fujitsu®, and Penguin®. Liquid cooling for the NVIDIA® P100 (Pascal) and Intel® Knights Landing will also be on display. Intel Knights Landing server nodes will feature both Asetek RackCDU D2C™ and ServerLSL™ liquid cooling.