University of Tokyo to Deploy IME14K Burst Buffer on Reedbush Supercomputer

Reedbush Supercomputer

Today DDN Japan announced that the University of Tokyo and the Joint Center for Advanced High Performance Computing (JCAHPC) have selected DDN’s burst buffer solution, the “IME14K,” for their new Reedbush supercomputer.

“Because the performance of parallel file systems has improved only gradually relative to the computing performance of supercomputers, we introduced a high-speed file system cache as a new technology to fill the performance gap,” said Professor Hiroshi Nakamura, director of the Information Technology Center, The University of Tokyo. “The Reedbush system is the first supercomputer in Japan to adopt this technology. DDN’s “IME14K” provides a good balance of network and storage performance, and can deliver 1.5TB/sec on Oakforest-PACS with 25 systems (50 servers), which is more than enough for computational science and big data analysis to take advantage of this new technology, and we expect its contribution to extend to further developments such as machine learning.”

The “IME14K” appliance sits between the compute node group and the parallel file system, connected over an EDR InfiniBand network. The high-speed file cache has a capacity of 209TB with a data transfer rate of 436.2GB/sec, while the parallel file system provides 5.04PB of capacity at 75GB/sec. The solution will be provided as an expansion of the computing resources of the University of Tokyo’s supercomputing systems and will strongly support emerging areas of demand, such as big data analysis and machine learning, that extend beyond the traditional science and technical computing markets.
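As a rough illustration of why the two tiers are arranged this way, here is a back-of-the-envelope sketch in Python using only the bandwidth figures above; the 10TB checkpoint size is a hypothetical example, not a published workload:

    # Back-of-the-envelope comparison of the two Reedbush storage tiers.
    # Bandwidth figures are from the article; the checkpoint size is a
    # hypothetical example, not a published workload.
    CACHE_BW_GBS = 436.2   # IME14K high-speed file cache, GB/sec
    PFS_BW_GBS = 75.0      # parallel file system, GB/sec

    checkpoint_gb = 10_000  # hypothetical 10TB application checkpoint

    cache_seconds = checkpoint_gb / CACHE_BW_GBS
    pfs_seconds = checkpoint_gb / PFS_BW_GBS

    print(f"cache absorbs the burst in {cache_seconds:.0f}s and drains to "
          f"the file system later; a direct write would take {pfs_seconds:.0f}s")

The point of the burst buffer is visible in the arithmetic: the compute nodes are held up for roughly 23 seconds instead of more than two minutes, and the cache drains to the parallel file system in the background.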

New System of Joint Center for Advanced High Performance Computing (JCAHPC): “Oakforest-PACS”

The Oakforest-PACS system has also added the IME14K (Infinite Memory Engine) to its present system. The cache has a capacity of 940TB across 50 nodes with a data transfer rate of 1.56TB/sec. In addition, the “SFA14KE” parallel file system has a capacity of 26PB and a data transfer rate of 484GB/sec. The “IME14K” and “SFA14KE” work together over the Intel Omni-Path Architecture to achieve high-speed access and wideband data transfer between the 8,208 compute nodes, which are connected in a Fat Tree topology.

The complete solution delivers about 2.2 times the total peak computing performance of the K computer, achieving a theoretical aggregate performance of 25 petaflops (PFLOPS). Production is scheduled to start in December 2016, at which point it is expected to be the fastest supercomputer system in Japan. As a result, it will support collaboration between computer scientists and computational scientists, and will strengthen the infrastructure that provides large-scale, ultra-high-speed processing capability for academic research across the universities.
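The 2.2x figure is easy to sanity-check against the K computer’s theoretical peak of roughly 11.28 PFLOPS (a public figure, not one stated in this article):

    # Sanity check of the "about 2.2 times the K computer" claim.
    # The K computer's theoretical peak (~11.28 PFLOPS) is public
    # knowledge, not a figure stated in the article itself.
    K_COMPUTER_PFLOPS = 11.28
    OAKFOREST_PACS_PFLOPS = 25.0

    ratio = OAKFOREST_PACS_PFLOPS / K_COMPUTER_PFLOPS
    print(f"Oakforest-PACS / K computer = {ratio:.1f}x")  # prints 2.2x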

DDN’s “IME14K” provides two nodes per chassis, each with 50GB/sec of read and write performance. Users can combine multiple systems into a distributed shared file cache. A single “IME14K” with NVMe SSDs from Toshiba Corporation can be configured with a maximum of 48 drives and can scale up to a 10-chassis configuration per rack, with 500GB/sec of throughput and 55 million random write IOPS.
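Read literally, the per-node and per-rack figures do not quite line up; treating 50GB/sec as per-chassis throughput reproduces the quoted rack-level number. A minimal sanity check under that assumption:

    # Linear-scaling check of the IME14K figures quoted above. Treating
    # "50GB/sec" as per-chassis throughput (an assumption; the article's
    # phrasing is ambiguous between per-node and per-chassis) reproduces
    # the quoted 500GB/sec per rack.
    GBS_PER_CHASSIS = 50.0
    CHASSIS_PER_RACK = 10

    rack_throughput = GBS_PER_CHASSIS * CHASSIS_PER_RACK
    print(f"{rack_throughput:.0f}GB/sec per 10-chassis rack")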

DDN’s “SFA14KE” is a 4U high-performance block storage platform with built-in storage application options. It can be extended as needed and can be configured as a single system scaling up to 8.4PB in 44 rack units. It reduces footprint, power usage, and cooling costs, as well as management complexity.

“The University of Tokyo and the University of Tsukuba have been global leaders in designing and utilizing HPC systems for the past two decades. The Reedbush and Oakforest-PACS initiatives reaffirm and expand this leadership position for a new generation of HPC systems,” said Alex Bouzari, CEO of DDN. “Many problems in science and research today sit at the intersection of HPC and Big Data, and storage and I/O are increasingly important components of any large compute infrastructure. We look forward to working closely with both universities, as well as with our partners Fujitsu and SGI, on these exciting new efforts.”
