HPE and Cerebras to Install AI Supercomputer at Leibniz Supercomputing Centre


The Leibniz Supercomputing Centre (LRZ), an institute of the Bavarian Academy of Sciences and Humanities (BAdW), Cerebras Systems, and Hewlett Packard Enterprise (HPE) today announced the joint development of a system designed to accelerate scientific research and AI innovation at LRZ.

The system is purpose-built for scientific research and comprises the HPE Superdome Flex server and the Cerebras CS-2 system, making it the first solution in Europe to leverage the CS-2, Cerebras said. The HPE Superdome Flex server delivers a modular, scale-out solution to meet growing computing demands and features specialized capabilities for the in-memory processing required by high volumes of data.

Additionally, the HPE Superdome Flex server’s pre- and post-processing capability for AI model training and inference “is ideal to support the Cerebras CS-2 system, which delivers the deep learning performance of 100s of graphics processing units (GPUs), with the programming ease of a single node,” Cerebras said. “Powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than the nearest competitor – the CS-2 delivers greater AI-optimized compute cores, faster memory, and more fabric bandwidth than any other deep learning processor in existence.”

The system will be used by local scientists and engineers for research use cases. Applications include natural language processing (NLP); medical image processing, with innovative algorithms to analyze medical images and computer-aided capabilities to accelerate diagnosis and prognosis; and computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.

“Currently, we observe that AI compute demand is doubling every three to four months with our users. With the high integration of processors, memory and on-board networks on a single chip, Cerebras enables high performance and speed. This promises significantly more efficiency in data processing and thus faster breakthrough of scientific findings,” said Prof. Dr. Dieter Kranzlmüller, Director of the LRZ. “As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science. To ensure optimal use of the system, we will work closely with our users and our partners Cerebras and HPE to identify ideal use cases in the community and to help achieve groundbreaking results.”

Cerebras CS2-HPE Superdome Flex

The new system is funded by the Free State of Bavaria through the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria and fueling the region’s mission to become an international AI hotspot. The new system is also an additional resource for Germany’s national supercomputing center, and part of LRZ’s Future Computing Program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs, and ASICs.

Cerebras said the WSE-2 is 46,225 square millimeters of silicon, housing 2.6 trillion transistors and 850,000 AI-optimized computational cores, as well as evenly distributed memory that holds up to 40 gigabytes of data and fast interconnects that transport data across the chip at 220 petabytes per second. This allows the WSE-2 to keep all the parameters of multi-layered neural networks on one chip during execution, which in turn reduces computation time and data movement. To date, the CS-2 system is in use at a number of U.S. research facilities and enterprises and is proving particularly effective in image and pattern recognition and natural language processing (NLP). Additional efficiency is provided by water cooling, which reduces power consumption.

To support the Cerebras CS-2 system, the HPE Superdome Flex server provides large-memory capabilities and scalability to process the massive, data-intensive machine learning projects that the Cerebras CS-2 system targets, Cerebras said. The HPE Superdome Flex server also manages and schedules jobs according to AI application needs, enables cloud access, and stages larger research datasets. In addition, the HPE Superdome Flex server includes a software stack with programs to build AI procedures and models.

In addition to AI workloads, the combined technologies from HPE and Cerebras will also be considered for more traditional HPC workloads in support of larger, memory-intensive modeling and simulation needs, the companies said.

“The future of computing is becoming more complex, with systems becoming more heterogeneous and tuned to specific applications. We should stop thinking in terms of HPC or AI systems,” said Laura Schulz, Head of Strategy at LRZ. “AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras. We’re working towards a future where the underlying compute is complex, but doesn’t impact the user; that the technology – whether HPC, AI or quantum – is available and approachable for our researchers in pursuit of their scientific discovery.”