OSC to Deploy Pitzer Cluster Built by Dell EMC


OSC’s new Dell EMC-built Pitzer Cluster will serve a wide range of clients, with features that will enhance research involving machine learning and Big Data.

Today the Ohio Supercomputer Center announced plans to deploy the center’s newest, most efficient supercomputer system, the liquid-cooled, Dell EMC-built Pitzer Cluster.

“Ohio continues to make significant investments in the Ohio Supercomputer Center to benefit higher education institutions and industry throughout the state by making additional high performance computing services available,” said John Carey, chancellor of the Ohio Department of Higher Education. “This newest supercomputer system gives researchers yet another powerful tool to accelerate innovation.”

Named for Russell M. Pitzer, a co-founder of the center and emeritus professor of chemistry at The Ohio State University, the Pitzer Cluster is expected to be at full production status and available to clients in November. The new system will serve a wide range of clients and deliver enhanced performance for mixed-precision artificial intelligence workloads.

The Pitzer Cluster will feature 260 nodes, including Dell EMC PowerEdge C6420 servers with CoolIT Systems’ Direct Contact Liquid Cooling (DCLC) coupled with PowerEdge R740 servers. In total, the cluster will include 528 Intel Xeon® Gold 6148 processors and 64 NVIDIA Tesla V100 Tensor Core GPUs, all connected by an EDR InfiniBand network.

“We worked with Dell EMC to create a highly efficient, dense and flexible petaflop-class system,” said Douglas Johnson, chief architect at OSC. “We have designed the Pitzer Cluster with some unique components to complement our existing systems and boost our total center performance to more than 2.8 petaflops.”
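As a rough sanity check on the “petaflop-class” description, the cluster’s theoretical peak can be estimated from the published part counts. The sketch below uses our own assumptions for the per-part figures (Xeon Gold 6148 core count, clock, and AVX-512 throughput; Tesla V100 double-precision peak); none of those numbers come from OSC’s announcement.

```python
# Back-of-the-envelope peak estimate for the Pitzer Cluster.
# Assumptions (not from the announcement): Xeon Gold 6148 has 20 cores,
# a 2.4 GHz base clock, and 32 double-precision FLOPs/cycle via AVX-512
# (2 FMA units x 8 doubles x 2 ops); a Tesla V100 peaks near 7.8 TFLOPS FP64.

CPU_SOCKETS = 528          # from the announcement
CORES_PER_CPU = 20         # assumed: Xeon Gold 6148 spec
CLOCK_GHZ = 2.4            # assumed base clock; AVX-512 clocks run lower
FLOPS_PER_CYCLE = 32       # assumed: AVX-512 double-precision throughput

GPUS = 64                  # from the announcement
TFLOPS_PER_GPU = 7.8       # assumed: V100 FP64 peak

cpu_tflops = CPU_SOCKETS * CORES_PER_CPU * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000
gpu_tflops = GPUS * TFLOPS_PER_GPU

print(f"CPU peak:   {cpu_tflops:,.0f} TFLOPS")                # ~811 TFLOPS
print(f"GPU peak:   {gpu_tflops:,.0f} TFLOPS")                # ~499 TFLOPS
print(f"Total peak: {cpu_tflops + gpu_tflops:,.0f} TFLOPS")   # ~1,310 TFLOPS
```

Under these assumptions the new cluster lands around 1.3 petaflops, consistent both with “petaflop-class” and with pushing OSC’s center-wide total past the quoted 2.8 petaflops.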

The Pitzer Cluster will join existing systems on the OSC data center floor at the State of Ohio Computer Center: the Dell EMC/Intel Owens Cluster (March 2017) and the HP/Intel Ruby Cluster (April 2015). The new system will replace the HP/Intel Oakley Cluster (March 2012).

“Dell EMC is thrilled to continue our great collaboration with OSC with this new dense, efficient and liquid-cooled system,” said Thierry Pellegrino, vice president, Dell EMC High Performance Computing. “The Pitzer Cluster brings to bear a multitude of new technologies to help OSC and its researchers more quickly and efficiently tackle immense challenges, using artificial intelligence and deep learning to ultimately drive human progress.”

The Pitzer Cluster will utilize CoolIT Systems’ DCLC, a modular, low-pressure, rack-based cooling solution that enables a dramatic increase in rack density, component performance and power efficiency. To support the system’s high-performance requirements, CoolIT’s Passive Coldplate Loop for the PowerEdge C6420 servers delivers dedicated liquid cooling to the Intel processors in each of the 256 CPU nodes, managed by a stand-alone, central-pumping CHx650 Coolant Distribution Unit.

To speed up data flow within the Pitzer Cluster, Dell EMC recommended components that improve memory bandwidth on each CPU node and increase network capacity between them. The Intel processors feature six-channel integrated memory controllers, improving bandwidth by 50 percent compared to the cores in the Owens Cluster. The Mellanox EDR InfiniBand fabric, running at 100 gigabits per second, provides high data throughput, low latency and a high message rate of 200 million messages per second. Additionally, the smart In-Network Computing acceleration engine provides higher application performance and overall improved efficiency.
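The 50 percent figure tracks the channel count alone: six DDR4 channels per socket on the new processors versus four on the Broadwell-generation chips in the Owens Cluster. A minimal sketch of the peak-bandwidth arithmetic, assuming DDR4-2666 DIMMs on Pitzer and DDR4-2400 on Owens (our assumptions; the announcement does not state DIMM speeds):

```python
# Theoretical peak memory bandwidth per socket:
# channels x transfer rate (MT/s) x 8 bytes per 64-bit transfer.
# DIMM speeds are assumptions: DDR4-2666 for Pitzer's Xeon Gold 6148,
# DDR4-2400 for Owens' Broadwell-generation processors.

def peak_bw_gbs(channels: int, mts: int, bytes_per_transfer: int = 8) -> float:
    """Peak per-socket memory bandwidth in GB/s."""
    return channels * mts * bytes_per_transfer / 1000

pitzer = peak_bw_gbs(channels=6, mts=2666)   # ~128 GB/s per socket
owens = peak_bw_gbs(channels=4, mts=2400)    # ~77 GB/s per socket

print(f"Pitzer: {pitzer:.0f} GB/s, Owens: {owens:.0f} GB/s")
print(f"Extra channels alone: {(6 - 4) / 4:.0%}")  # 50%
```

Faster DIMMs push the realized gain above 50 percent; the quoted figure is the conservative, channels-only comparison.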

The Pitzer Cluster will provide clients with access to four Large Memory nodes (Dell EMC PowerEdge R940), each with up to three terabytes of memory, especially helpful for data-intensive operations such as DNA sequencing. The cluster’s GPU nodes (Dell EMC PowerEdge R740) feature NVIDIA Tesla V100 Tensor Core GPUs, which are 50 percent more energy efficient than previous-generation GPUs and offer large increases in speed, especially useful for deep learning algorithms and artificial intelligence projects.
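The speedups come largely from the V100’s Tensor Cores, which execute half-precision matrix math at far higher throughput than standard FP32 units. Below is a minimal sketch of the kind of mixed-precision training step they accelerate, written with PyTorch’s automatic mixed precision; the framework choice is our illustration, not a statement about OSC’s software stack:

```python
# Minimal mixed-precision training step of the kind V100 Tensor Cores
# accelerate. PyTorch is an assumed example framework, not OSC's setup.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 1024, device=device)
target = torch.randn(64, 1024, device=device)

for _ in range(10):
    optimizer.zero_grad()
    # autocast runs eligible matmuls in FP16 on Tensor Cores
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()   # scale loss to avoid FP16 underflow
    scaler.step(optimizer)          # unscale gradients, then step
    scaler.update()
```

The pattern keeps model weights in FP32 while running the bandwidth- and compute-heavy matrix products in FP16, which is exactly the mixed-precision workload the announcement highlights.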
