NETL, Cerebras Claim CFD Milestone

A collaboration between DOE’s National Energy Technology Laboratory (NETL) and Cerebras Systems, maker of the CS-1 deep learning compute system, has demonstrated that the CS-1 can perform a key computational fluid dynamics (CFD) workload more than 200 times faster than the same workload on an optimized number of cores of NETL’s Joule 2.0 supercomputer, currently ranked the 81st most powerful system in the world, while consuming a fraction of the power.

The CS-1 is powered by Cerebras’ Wafer-Scale Engine, which the company says is the largest commercial chip ever made, with 1.2 trillion transistors and 400,000 cores.

The research was led by Dirk Van Essendelft, Ph.D., machine learning and data science engineer at NETL, and Michael James, Cerebras chief architect of advanced technologies and a cofounder of the company.

“The CS-1 is a very interesting hardware platform because it eliminates key bottlenecks that strangle performance in conventional HPC applications,” Van Essendelft said. “Current HPCs are fighting physics. There are two big problems — wire volume and wire length. The smaller and shorter the wires are that connect memory to compute, the faster and more energy efficient things can run. On traditional HPCs, the memory is located further away, so the wire lengths can be several inches (or feet between computational nodes) and have quite a large volume.”

Cerebras Wafer Scale Engine

The CS-1 overcomes this issue because all system memory is located on the device, as close to the compute cores as current manufacturing methods allow, and is connected by nanometer-scale wiring. That device is the Cerebras Wafer Scale Engine, the largest chip ever built: it contains 1.2 trillion transistors and 400,000 general-purpose compute cores spread across more than 46,225 square millimeters of silicon.
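To put those figures in perspective, here is a quick back-of-the-envelope calculation in Python using only the numbers quoted above; the ratios it prints are derived values, not specifications from Cerebras.

```python
# Back-of-the-envelope ratios derived only from the figures quoted above.
transistors = 1.2e12   # 1.2 trillion transistors on the Wafer Scale Engine
cores = 400_000        # general-purpose compute cores
area_mm2 = 46_225      # square millimeters of silicon

print(f"~{transistors / cores:,.0f} transistors per core")   # ~3,000,000
print(f"~{area_mm2 / cores:.2f} mm^2 of silicon per core")    # ~0.12 mm^2
```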

The workload under test was the solution of a large, sparse, structured system of linear equations. Such solves underpin the modeling of many physical phenomena, including the CFD calculations in the Lab’s Multiphase Flow with Interphase eXchanges (MFiX) code, which is designed to model reacting multiphase systems found in advanced energy systems like chemical looping reactors and fluidized bed reactors.
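The announcement does not include the benchmark code itself, but a minimal Python sketch conveys the class of problem: assemble a sparse, structured coefficient matrix (here a 2-D finite-difference Laplacian, a stand-in for the discretized operators inside a CFD solver) and solve Ax = b with an iterative Krylov method. The grid size, stencil, and solver choice are illustrative assumptions, not details of the NETL/Cerebras benchmark.

```python
# Minimal sketch of a sparse, structured linear solve (illustrative only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 200                                   # grid points per dimension (assumed size)
# 1-D second-difference stencil [-1, 2, -1] as a tridiagonal sparse matrix
T = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
I = sp.identity(n, format="csr")
# 2-D Laplacian via Kronecker sums: a sparse, banded, structured system matrix
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # (n*n) x (n*n)
b = np.ones(n * n)                            # uniform right-hand side (stand-in source term)

x, info = cg(A, b)                            # conjugate-gradient Krylov solve
print("converged" if info == 0 else f"solver flag {info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```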

Further development of this unique computational architecture could lead to a paradigm shift in NETL’s HPC efforts and help overcome challenges facing researchers as they design and model next-generation energy systems, the two organizations said in their announcement.

“Cerebras is proud to extend its work with NETL and produce extraordinary results on one of the foundational workloads in scientific compute,” said Andrew Feldman, co-founder and CEO, Cerebras Systems. “This work opens the door to major breakthroughs in scientific computing performance, as the CS-1, with its wafer-scale engine, overcomes traditional barriers to achieving high performance, enabling real-time and other use cases precluded by the failure of strong scaling. Because of the radical memory and communication acceleration that wafer-scale integration creates, we have been able to go far beyond what is possible in a discrete, single-chip processor, be it a CPU or a GPU.”
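The “failure of strong scaling” Feldman refers to is the familiar effect that, for a fixed-size problem, adding nodes eventually stops helping: per-node compute shrinks, but per-step communication and latency costs do not. The toy model below illustrates the idea; its constants are arbitrary assumptions, not measurements from Joule 2.0 or the CS-1.

```python
# Toy strong-scaling model (illustrative constants, not measured values):
# per-node compute shrinks as 1/p, but a fixed per-step communication cost does not.
compute = 100.0   # seconds of compute per timestep on a single node (assumed)
comm = 0.5        # seconds of inter-node communication per timestep (assumed)

for p in (1, 10, 100, 1000, 10000):
    t = compute / p + (comm if p > 1 else 0.0)
    print(f"{p:>6} nodes: speedup {compute / t:7.1f}x (ideal {p}x)")
```

In this model the speedup saturates near compute/comm no matter how many nodes are added, which is why shortening memory and communication paths, as wafer-scale integration does, matters for fixed-size, latency-bound problems.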

Source: NETL and Cerebras