Mellanox inside Chinese GPU-based PFLOPS super


This week Mellanox announced that it is providing the adapters, switches, and cables that glue together Mole-8.5, which, according to the company, is the first Petaflop GPGPU supercomputer in China.

Mellanox…announced that its ConnectX®-2 40Gb/s InfiniBand adapters with GPU-Direct™ technology, IS5600 648-port switch with FabricIT™ fabric management software and fiber optic cables are providing the Institute of Process Engineering (IPE) at the Chinese Academy of Sciences with world-leading networking and application acceleration for the Mole-8.5 system, the first Petaflop GPGPU supercomputer in China. IPE is currently utilizing the Mole-8.5 to conduct scientific simulations in areas such as chemical engineering, material science, biochemistry, data and image processing, oil exploitation and recovery, and metallurgy.

At least as interesting as the 40 Gbps IB technology is the fact that China has a 1 PFLOPS GPU-based super that I didn’t know anything about.

“By incorporating Mellanox 40Gb/s InfiniBand with GPU-Direct technology, we have been able to conduct scientific simulations using GPUs at performance levels that we would never have been able to achieve using a different interconnect,” said Dr. Xiaowei Wang of IPE. “The new Mole-8.5 Petaflop cluster, with industry-leading interconnect performance and efficiency, enables us to shorten the time it takes to run applications that are critical in the process of scientific discovery.”

The Mole-8.5 system was designed to achieve high efficiency on real applications at low cost, both to build and to power. By using Mellanox InfiniBand with GPU-Direct technology, the GPUs communicate at a much faster rate, increasing the performance of applications run on the Mole-8.5. Mellanox InfiniBand delivers up to 96 percent system utilization, allowing users to maximize the return on investment in their high-performance computing server and storage infrastructure.

NVIDIA GPUDirect

Mellanox got in touch with me about a detail they didn’t include in what they published about this story. The system uses Mellanox’s implementation of NVIDIA GPUDirect technology (briefing here, 10 mins), which lets GPUs communicate with one another over the IB network without involving the host CPUs or requiring the associated buffer copy (from the GPU’s pinned memory to the IB adapter’s). GPUDirect lets the Mellanox HCA and the GPU share the same pinned memory, which Mellanox claims can reduce GPU-to-GPU communication time by 30%.
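
To make that concrete, here is a minimal sketch in C, against the CUDA runtime and libibverbs, of the idea behind this first-generation GPUDirect: the CUDA driver and the IB HCA register the same pinned host buffer, so there is no staging copy between a CUDA-pinned buffer and a separate IB-pinned one. The flow and buffer names are my own illustrative assumptions, not Mole-8.5 code; the actual sharing is enabled by the drivers underneath these calls.

```c
/* Hypothetical sketch of shared pinned-memory registration (GPUDirect v1 idea).
   Not Mole-8.5 code; error handling is minimal for brevity. */

#include <cuda_runtime.h>
#include <infiniband/verbs.h>
#include <stdio.h>

#define BUF_SIZE (1 << 20)  /* 1 MiB staging buffer */

int main(void)
{
    void *host_buf;

    /* Allocate page-locked (pinned) host memory via the CUDA runtime. */
    if (cudaHostAlloc(&host_buf, BUF_SIZE, cudaHostAllocDefault) != cudaSuccess) {
        fprintf(stderr, "cudaHostAlloc failed\n");
        return 1;
    }

    /* Open the first InfiniBand device and register the SAME buffer with
       the HCA. With GPUDirect, both the CUDA driver and the IB stack work
       on this one pinned region instead of two buffers plus a memcpy. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_mr *mr = ibv_reg_mr(pd, host_buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr) { fprintf(stderr, "ibv_reg_mr failed\n"); return 1; }

    /* ... the GPU drains results into host_buf (e.g. cudaMemcpyAsync from
       device memory), then the HCA posts sends directly from host_buf;
       no extra copy into a separately pinned IB buffer ... */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFreeHost(host_buf);
    return 0;
}
```

Without GPUDirect, the CUDA and IB drivers each pin their own buffer and every message pays an extra host-side memcpy between the two; eliminating that copy is where Mellanox’s claimed 30% reduction in GPU-to-GPU communication time comes from.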
