At Virtual ISC: Catching up with Gilad Shainer and Nvidia’s ‘Data Center on a Chip’


At virtual ISC 2021, we sat down with Gilad Shainer, Nvidia's Senior Vice President of Marketing and a longtime figure in the HPC community (with a background at high performance networking company Mellanox, acquired two years ago by Nvidia), to get an update on the company's new supercomputing architecture, Cloud Native Supercomputing, along with news the company is announcing at ISC.

“How do we solve the problem … to get the bare metal performance from the infrastructure and at the same time be able to support multi-tenancy in isolation and securely? We brought out the BlueField DPU (data processing unit); it includes networking services, multiple ARM cores and acceleration engines that are set up for the infrastructure workloads — acceleration engines for security, for storage, for networking. And by bringing the DPU into the supercomputing infrastructure, we can offload and accelerate the infrastructure management, the data center operating system, free the CPU cycles, and be able to deliver bare metal performance on one side and, at the same time, multi-tenancy in isolation. So that’s the great benefit of Cloud Native Supercomputing,” Shainer said.

Turning to Nvidia’s ISC announcements, Shainer discussed the new technologies the company unveiled at the conference, including NVIDIA A100 80GB PCIe GPUs, which the company said increase GPU memory bandwidth 25 percent compared with the A100 40GB, to nearly 2TB/s, and provide 80GB of HBM2e high-bandwidth memory.
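As a quick sanity check on those figures: Nvidia's published spec for the A100 40GB PCIe puts its HBM2 bandwidth at 1,555 GB/s (a number from Nvidia's datasheet, not this article), so a 25 percent increase lands just under the round "2TB/s" figure quoted above.

```python
# Sanity check of the quoted bandwidth claim (sketch; the 1,555 GB/s
# baseline for the A100 40GB PCIe comes from Nvidia's published specs).
baseline_gbps = 1555                      # A100 40GB HBM2 bandwidth, GB/s
upgraded_gbps = baseline_gbps * 1.25      # the claimed 25 percent uplift
print(round(upgraded_gbps))               # -> 1944 GB/s, rounded up to "2TB/s"
```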