Test Bed Systems Pave the Way for 150 Petaflop Summit Supercomputer

Philip Curtis, a member of the High-Performance Computing Operations group at the OLCF, works with Pike, one of the test systems being used to prepare for Summit.

Over at the Oak Ridge Leadership Computing Facility (OLCF), Jonathan Hines writes that staff are preparing for the upcoming Summit supercomputer with two modest test bed systems called Pike and Crest.

Both systems are small clusters powered by IBM POWER8 CPUs, precursors to the processors that will be used in Summit. Peripheral components differ, however, based on the function of each system, according to Dustin Leverman, a member of the OLCF High-Performance Computing Operations (HPC Ops) Group who helped assemble the early test bed systems.

“Crest is a compute test bed. Each of its four compute nodes contains four GPUs so staff can get a feel for running code with more than one GPU per CPU socket, a key difference between Summit and its predecessors,” Leverman said. “Pike, on the other hand, is a data storage test bed of 14 nodes. Instead of GPUs, it has a non-volatile memory disk to evaluate potential attributes of the high-speed data storage system planned for Summit. These two systems give us a head start on Summit’s next-generation compute and storage systems so we will be better prepared to support users.”
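Running more than one GPU per CPU socket mostly changes how an application discovers devices and divides work among them. The sketch below is illustrative only, not OLCF code: it shows the kind of per-node GPU enumeration and independent kernel launches a test bed like Crest lets staff exercise. The kernel, array size, and file name are assumptions made for the example.

// multi_gpu_probe.cu -- illustrative only: enumerate every GPU visible to a
// node and launch independent work on each one, the layout a system like
// Crest exposes with four GPUs per node.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);                   // Crest exposes four GPUs per node
    std::printf("GPUs visible on this node: %d\n", ngpus);

    const int n = 1 << 20;
    for (int dev = 0; dev < ngpus; ++dev) {
        cudaSetDevice(dev);                       // bind the host thread to one GPU
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("  device %d: %s\n", dev, prop.name);

        float *d_x = nullptr;
        cudaMalloc(&d_x, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);  // independent kernel per device
        cudaDeviceSynchronize();                  // wait before releasing the buffer
        cudaFree(d_x);
    }
    return 0;
}

Compiled with nvcc and run on a multi-GPU node, this simply reports each visible device and runs a trivial kernel on it; production applications layer MPI ranks or host threads on top of the same cudaSetDevice pattern.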

According to Hines, future test systems will incorporate NVIDIA’s high-bandwidth NVLink interconnect.

In this video, OLCF technical leaders describe the Summit supercomputer.

Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture: each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs connected with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high-bandwidth memory plus DDR4) addressable by all CPUs and GPUs, plus 800 GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree topology using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will give researchers in all fields of science unprecedented capability for solving some of the world’s most pressing challenges.
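A coherent address space shared by all of a node’s CPUs and GPUs is one of the bigger programming-model shifts described above. The following sketch, again illustrative rather than Summit-specific code, uses CUDA managed memory as a stand-in to show the style such a design targets: one allocation initialized on the CPU, updated on the GPU, and read back on the CPU without explicit host-to-device copies.

// unified_memory_sketch.cu -- illustrative only: a single managed allocation
// addressable from both the CPU and the GPU, the programming style a
// coherent-memory node design is aimed at.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add_one(double *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0;
}

int main() {
    const int n = 1 << 20;
    double *x = nullptr;
    cudaMallocManaged(&x, n * sizeof(double));   // one pointer, valid on CPU and GPU

    for (int i = 0; i < n; ++i) x[i] = static_cast<double>(i);  // initialized on the CPU

    add_one<<<(n + 255) / 256, 256>>>(x, n);     // updated on the GPU
    cudaDeviceSynchronize();                     // make GPU writes visible to the host

    std::printf("x[42] = %.1f (expected 43.0)\n", x[42]);       // read back on the CPU
    cudaFree(x);
    return 0;
}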

Sign up for our insideHPC Newsletter.