Video: Asetek Showcases Growing Adoption of OEM Solutions at SC15

Asetek showcased its full range of RackCDU hot water liquid cooling systems for HPC data centers at SC15 in Austin. On display were early-adopting OEMs such as CIARA, Cray, Fujitsu, FORMAT and Penguin. HPC installations from around the world incorporating Asetek RackCDU D2C (Direct-to-Chip) technology were also featured. In addition, liquid cooling solutions for both current and future high-wattage CPUs and GPUs from Intel, Nvidia and OpenPower were on display.

Rise of Direct-to-Chip Liquid Cooling Highlighted at SC15

Of the varied approaches to liquid cooling, most remain technical curiosities that have yet to see real-world adoption to any significant degree. In contrast, both Asetek RackCDU D2C™ (Direct-to-Chip) and Internal Loop Liquid Cooling are seeing accelerating adoption by both OEMs and end users.

Asetek Enters OEM Purchase Agreement with Penguin Computing

Today Asetek announced an OEM purchase agreement with HPC vendor Penguin Computing. As part of the agreement, Penguin will incorporate Asetek’s RackCDU D2C liquid cooling technology into its Tundra Extreme Scale (ES) HPC server product line. RackCDU direct-to-chip hot water liquid cooling enhances Penguin’s ability to provide HPC solutions with extreme energy efficiency and higher rack cluster densities. The agreement has already resulted in an order from Penguin, announced previously with the OEM partner unnamed.

Asetek Continues Momentum with Largest Server Installation Order to Date

Today Asetek announced its largest purchase order to date for its RackCDU data center liquid cooling system. The order, placed by an undisclosed Original Equipment Manufacturer (OEM) partner, covers 21 RackCDU with Direct-to-Chip cooling loops for an installation at an undisclosed customer of the OEM. Both the OEM and the end user will be named when the information becomes public.

Innovation Keeps Supercomputers Cool

“The range of cooling options now available is testimony to engineering ingenuity. HPC centers can choose between air, oil, dielectric fluid, or water as the heat-transfer medium. Opting for something other than air means that single or two-phase flow could be available, opening up the possibilities of convective or evaporative cooling and thus saving the cost of pumping the fluid round the system.”
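To see why the choice of heat-transfer medium matters so much, a rough back-of-envelope comparison helps: water stores roughly 3,500 times more heat per unit volume than air, so a liquid loop needs far less flow to carry the same load. The short Python sketch below illustrates the arithmetic; the 300 W heat load and 10 K coolant temperature rise are illustrative assumptions, not figures from any of the announcements above.

```python
# Back-of-envelope comparison of air vs. water as a heat-transfer medium.
# Property values are textbook approximations at room temperature.

AIR_DENSITY = 1.2       # kg/m^3
AIR_CP = 1005.0         # J/(kg*K), specific heat of air
WATER_DENSITY = 1000.0  # kg/m^3
WATER_CP = 4186.0       # J/(kg*K), specific heat of water

def volumetric_heat_capacity(density, cp):
    """Heat absorbed per cubic meter of coolant per kelvin of temperature rise."""
    return density * cp  # J/(m^3*K)

air = volumetric_heat_capacity(AIR_DENSITY, AIR_CP)
water = volumetric_heat_capacity(WATER_DENSITY, WATER_CP)

# Hypothetical example: volume flow needed to remove 300 W from a CPU
# with a 10 K coolant temperature rise.
heat_w = 300.0
delta_t_k = 10.0
for name, vhc in (("air", air), ("water", water)):
    flow_m3_per_s = heat_w / (vhc * delta_t_k)
    print(f"{name}: {flow_m3_per_s * 1000:.4f} L/s")

print(f"water carries ~{water / air:.0f}x more heat per unit volume than air")
```

Running the sketch shows air needing on the order of 25 L/s of flow where water needs well under 0.01 L/s, which is the physical reason direct-to-chip water cooling can shed fans and chillers.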

FORMAT in Poland to Deploy RackCDU Liquid Cooling Systems

Today Asetek announced an order for its RackCDU data center liquid cooling system placed by FORMAT Sp. Ltd, an IT solutions provider in Poland. Building on the success of previous smaller orders, FORMAT has ordered 6 RackCDU with cooling loops for a total of 471 compute nodes, to be delivered in Q3. The order will result in approximately $100,000 in revenue for Asetek.

Interview: Asetek Rides Rapid Adoption of Liquid Cooling for HPC

The HPC industry’s expanded use of liquid cooling was evident at the recent ISC 2015 conference in Frankfurt. To learn more, we caught up with Steve Branton from Asetek.

Fujitsu Teams with Asetek for Cool-Central Primergy Systems

Last week at ISC 2015, Fujitsu rolled out new Primergy systems featuring the Cool-Central Liquid Cooling Solution. Based on Asetek technology, Cool-Central reduces cooling costs by up to 50 percent.

Video: Asetek Showcases Liquid Cooling at ISC 2015

In this video from ISC 2015, Steve Branton from Asetek describes a series of high-profile supercomputing upgrades that show the growing momentum of Asetek liquid cooling in the HPC market. “Asetek customers are using the company’s RackCDU Liquid Cooling for increased datacenter efficiency. See for yourself how Asetek successfully addresses datacenter demands at the University of Tromso, Mississippi State University, NREL, and elsewhere, while working with Cray, Fujitsu and other OEMs.”

Invest in Supercomputers, Not Chillers

By using Direct-to-Chip liquid cooling, Mississippi State University was able to purchase more servers by minimizing the capital spent on cooling the data center. The success of the initial cluster at MSU led to the installation of a second cluster.
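The budget arithmetic behind that trade-off is simple: every capital dollar not spent on chillers and air handlers can buy compute instead. The minimal sketch below uses entirely hypothetical figures (the article does not disclose MSU’s budget, node costs, or cooling capex fractions) to show how a smaller cooling share of the budget translates into more nodes.

```python
# Illustrative budget math behind "invest in supercomputers, not chillers".
# All figures are hypothetical assumptions, not numbers from the MSU deployment.

TOTAL_BUDGET = 1_000_000.0  # total capital budget, USD (assumed)
NODE_COST = 5_000.0         # cost per compute node, USD (assumed)

def nodes_affordable(budget, cooling_capex_fraction):
    """Nodes purchasable after setting aside capital for cooling infrastructure."""
    compute_budget = budget * (1.0 - cooling_capex_fraction)
    return int(compute_budget // NODE_COST)

# Assumed fractions: a chilled-air build vs. a hot-water D2C build
# that avoids most chiller capacity.
air_cooled = nodes_affordable(TOTAL_BUDGET, cooling_capex_fraction=0.25)
liquid_cooled = nodes_affordable(TOTAL_BUDGET, cooling_capex_fraction=0.10)

print(f"air-cooled build:    {air_cooled} nodes")
print(f"liquid-cooled build: {liquid_cooled} nodes")
print(f"extra nodes from reduced cooling capex: {liquid_cooled - air_cooled}")
```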