ServerCool CDUs Are Cooling 10 Percent of Top100 Supercomputers

Today Nortek Air Solutions’ ServerCool division announced that its coolant distribution unit (CDU) technology is cooling dozens of the world’s most powerful and energy-efficient supercomputers, according to the 2019 TOP500 and Green500 lists announced recently at ISC 2019 in Frankfurt. “The TOP500 proves that hardware manufacturers are pushing their equipment’s performance envelope at the chip level; however, ServerCool will keep pace with that growth by continuing to develop higher cooling and flow capacities with smaller footprints,” said Stuart Smith, global sales manager, ServerCool division of Nortek Air Solutions.

AI Approach Points to Bright Future for Fusion Energy

Researchers are using deep learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called ‘shots,’ both those that led to disruptions and those that did not, to determine which factors cause disruptions.”
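The architecture described above can be illustrated with a toy sketch (this is not the actual FRNN code, and all names, sizes, and weights here are invented for illustration): a 1-D convolution extracts local features from multichannel plasma signals such as current, temperature, and density, and a simple recurrent cell accumulates those features over the course of a shot before a sigmoid readout scores the disruption risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical "shot": 3 signal channels (e.g. plasma current,
# temperature, density) sampled at 64 time steps.
n_channels, n_steps = 3, 64
shot = rng.normal(size=(n_channels, n_steps))

# Convolutional stage: one random width-5 filter per channel extracts
# local temporal features; channel outputs are summed into one sequence.
kernel = rng.normal(size=(n_channels, 5))
features = np.array([
    np.convolve(shot[c], kernel[c], mode="valid") for c in range(n_channels)
]).sum(axis=0)                      # shape: (n_steps - 4,)

# Recurrent stage: a scalar hidden state carries information forward
# through the shot, step by step.
w_h, w_x = 0.9, 0.1                 # illustrative fixed weights, untrained
h = 0.0
for x in features:
    h = np.tanh(w_h * h + w_x * x)

# Sigmoid readout: a probability-like score that this shot disrupts.
p_disrupt = 1.0 / (1.0 + np.exp(-h))
print(f"disruption score: {p_disrupt:.3f}")
```

In the real system the convolutional and recurrent weights would be trained on thousands of labeled shots; this sketch only shows how the two stages compose over a multivariate time series.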

Converging HPC, Big Data, and AI at the Tokyo Institute of Technology

Satoshi Matsuoka from the Tokyo Institute of Technology gave this talk at the NVIDIA booth at SC17. “TSUBAME3 embodies various BYTES-oriented features to allow for HPC to BD/AI convergence at scale, including significant scalable horizontal bandwidth as well as support for deep memory hierarchy and capacity, along with high flops in low precision arithmetic for deep learning.”

Radio Free HPC Gets the Scoop from Dan’s Daughter in Washington, D.C.

In this podcast, the Radio Free HPC team hosts Dan’s daughter Elizabeth. How did Dan get this way? We’re on a mission to find out, even as Elizabeth complains of the early onset of Curmudgeon’s Syndrome. After that, we take a look at the TSUBAME3.0 supercomputer coming to Tokyo Tech.

DDN and Lustre to Power TSUBAME3.0 Supercomputer

“The IO infrastructure of TSUBAME3.0 combines fast in-node NVMe SSDs and a large, fast, Lustre-based system from DDN. The 15.9PB Lustre parallel file system, composed of three of DDN’s high-end ES14KX storage appliances, is rated at a peak performance of 150GB/s. The TSUBAME collaboration represents an evolutionary branch of HPC that could well develop into the dominant HPC paradigm at about the time the most advanced supercomputing nations and consortia achieve Exascale computing.”

Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech

“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from NVIDIA. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double-precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation and is expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.”