Stäubli to Highlight New Cooling Connectors at Virtual SC20

Duncan, SC (Nov. 9, 2020) — At virtual SC20, held online November 17-19, Stäubli, a manufacturer of quick-release coupling systems for IT/liquid cooling, will digitally highlight its new DAG coupling this year. The DAG range has been designed for seamless integration in installations such as data centers and supercomputers, combining three major […]

The Use of High-Performance Polymers in HPC and Data Center Applications

“Polymer components in liquid cooling systems are attractive for several reasons: they are lightweight, typically less expensive than metal counterparts, and are impervious to corrosion that can render parts inoperable or introduce debris into flow paths. The challenges with many polymers used to date, however, are their abilities to handle high temperatures and physical stressors without deforming, cracking or creeping. These shortcomings become significant when leaks occur, leading to downtime or damage to equipment.”

This whitepaper from CPC details the use of high-performance polymers in HPC and data center applications where plastics advance as a viable, reliable option in liquid cooling systems. CPC offers the industry’s first PPSU QD, purpose-built for liquid cooling use in HPC and data centers.

The Right Terminations for Reliable Liquid Cooling in HPC

High performance computing manufacturers are increasingly deploying liquid cooling. To avoid damage to electronic equipment due to leaks, secure drip-free connections are essential. Quick disconnects for HPC applications simplify connector selection. And with expensive electronics at stake, understanding the components in liquid cooling systems is critical. This article details what to look for when seeking the optimal termination for connectors—a way to help ensure leak-free performance.

Managed Implementation of Liquid Cooling

Providing highly reliable cooling for high-wattage nodes without reducing rack density is no easy task. “Asetek’s direct-to-chip liquid cooling provides a distributed cooling architecture to address the full range of heat rejection scenarios. It is based on low pressure, redundant pumps and sealed liquid path cooling within each server node.”
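
As a rough illustration of the redundancy the quote describes, here is a minimal Python sketch of per-node pump failover logic. The class names, flow threshold, and health check are hypothetical assumptions for illustration; they do not reflect Asetek's actual firmware or product behavior.

```python
# Hypothetical sketch of redundant-pump monitoring for one server node.
# All thresholds and names are illustrative assumptions, not vendor logic.

from dataclasses import dataclass

MIN_FLOW_LPM = 0.5  # assumed minimum acceptable coolant flow, liters/minute

@dataclass
class Pump:
    name: str
    flow_lpm: float  # measured flow, liters/minute

    def healthy(self) -> bool:
        return self.flow_lpm >= MIN_FLOW_LPM

def node_flow_ok(pumps: list[Pump]) -> bool:
    """With redundant pumps, the node keeps adequate flow as long as
    at least one pump is still healthy."""
    return any(p.healthy() for p in pumps)

pumps = [Pump("pump_a", 0.0), Pump("pump_b", 0.9)]  # pump_a has failed
print(node_flow_ok(pumps))  # True: pump_b alone sustains cooling
```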

Datacenter Efficiencies Through Innovative Cooling

Datacenters designed for High Performance Computing (HPC) applications are more difficult to design and construct than those built for more basic enterprise applications. Organizations creating these datacenters need to be aware of, and design for, systems that are expected to run at or near maximum performance for the lifecycle of the servers.
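
To make "design for maximum performance" concrete: the sustained heat load of a fully loaded rack dictates the coolant flow the facility must deliver, via the standard heat-transport relation Q = ṁ·c_p·ΔT. A minimal sketch follows; the rack wattage and temperature rise are assumed example values, not figures from the article.

```python
# Back-of-the-envelope coolant flow for a liquid-cooled rack.
# Rack power and allowed temperature rise are assumed example values.

CP_WATER = 4186.0   # specific heat of water, J/(kg*K)
RHO_WATER = 1.0     # density of water, kg/L (approximately)

def flow_lpm(heat_watts: float, delta_t_kelvin: float) -> float:
    """Coolant flow (liters/minute) needed to carry away heat_watts
    with a coolant temperature rise of delta_t_kelvin, from
    Q = m_dot * c_p * dT."""
    kg_per_s = heat_watts / (CP_WATER * delta_t_kelvin)
    return kg_per_s / RHO_WATER * 60.0

# e.g. a hypothetical 30 kW rack with a 10 K coolant temperature rise:
print(f"{flow_lpm(30_000, 10):.1f} L/min")  # ~43.0 L/min
```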

Do you like Water? Do you like Pumps?

The HPC industry faces ever-growing challenges on many fronts, especially a significant increase in cooling requirements. Liquid cooling looks like the solution to meet those requirements. But there is an alternative cooling solution that works without a pump and without water.

Cooling Today’s Hot New Processors

Later in 2016, Intel is expected to release production versions of its Knights Landing (KNL) 72-core coprocessor. These next-generation coprocessors are affecting the physical design of the supercomputers now coming down the pike in a number of ways. One of the most dramatic changes is the significant increase in cooling requirements: these are high-wattage chips that run very hot and present some interesting engineering challenges for systems designers.
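
For a rough feel of why high-wattage parts stress a cooling design: in a simple steady-state model, chip temperature rises linearly with power through the thermal resistance of the cooling path. The sketch below uses that standard model; the resistance, coolant temperature, and wattage are assumed illustrative values, not Intel specifications.

```python
# Simple steady-state thermal model: T_case = T_coolant + P * R_theta.
# R_theta (K/W) lumps cold plate, interface material, and convection.
# All numbers are assumed for illustration, not Intel specifications.

def case_temp_c(power_w: float, r_theta_kw: float, t_coolant_c: float) -> float:
    """Steady-state case temperature for a chip dissipating power_w
    through a cooling path with total thermal resistance r_theta_kw."""
    return t_coolant_c + power_w * r_theta_kw

# A hypothetical ~245 W part, 0.10 K/W cooling path, 30 C inlet coolant:
print(case_temp_c(245, 0.10, 30.0))  # 54.5 C
```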

Invest in Supercomputers, Not Chillers

By using Direct-to-Chip liquid cooling, Mississippi State University minimized the capital spent on cooling the data center and was able to purchase more servers instead. The success of the initial cluster at MSU led to the installation of a second cluster.