Video: One Stop Systems Takes GPU Density to the Next Level at SC16

In this video from SC16, Nate Parada from One Stop Systems describes the company’s new High Density Compute Accelerator (HDCA) for Deep Learning.

“The CA16000 High Density Compute Accelerator with sixteen NVIDIA Tesla GPU accelerators is used for a variety of HPC applications, including oil and gas exploration and financial services. Completely integrated with the GPUs best suited to a specific application, its easy installation and tested reliability make it superior to alternative products. The CA16000 occupies only 3U of rack space and connects directly to one to four host servers through the latest-technology PCIe x16 Gen3 connections. Each of four removable canisters houses up to four full-height, full-length, double-wide PCIe x16 GPUs and one half-height, half-length IO card. The system is powered by three 3000-watt redundant power supplies and includes an IPMI-based system monitor.”

At SC16, One Stop Systems also unveiled two new deep learning appliances: OSS-PASCAL4 and OSS-PASCAL8.

“The OSS-PASCAL8 is a 170 TeraFLOP engine with 80GB/s NVIDIA NVLink for the largest deep learning models. The OSS-PASCAL4 provides 21.2 TeraFLOPS of double precision performance with an 80GB/s GPU peer-to-peer NVLink. These systems are tuned for out-of-the-box operation and quick and easy deployment.”

The OSS-PASCAL4 and OSS-PASCAL8 use the latest NVIDIA Tesla P100 SXM2 GPUs to deliver up to 170 TeraFLOPS of half-precision performance. They utilize NVLink at speeds of up to 80GB/s peer-to-peer between GPUs. These GPU-accelerated servers have dual Broadwell (v4) CPUs and up to 2TB of DDR4 memory. Both systems can integrate into the GPUltima rack-level solution, using 100Gb EDR InfiniBand interfaces to join large-scale multi-root peer-to-peer RDMA networks.
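The quoted figures line up with NVIDIA’s published per-GPU peak rates for the Tesla P100 SXM2 (5.3 TFLOPS double precision, 21.2 TFLOPS half precision). The short sketch below works out that arithmetic; the variable and function names are illustrative, not from One Stop Systems, and the totals assume ideal scaling across GPUs:

```python
# Published peak rates for a single NVIDIA Tesla P100 SXM2 GPU (in TeraFLOPS).
P100_FP64_TFLOPS = 5.3   # double precision
P100_FP16_TFLOPS = 21.2  # half precision

def system_peak_tflops(num_gpus: int, per_gpu_tflops: float) -> float:
    """Aggregate peak throughput, assuming perfect scaling across all GPUs."""
    return num_gpus * per_gpu_tflops

# OSS-PASCAL4: 4 GPUs -> 21.2 TFLOPS double precision, matching the quote.
oss_pascal4_fp64 = system_peak_tflops(4, P100_FP64_TFLOPS)

# OSS-PASCAL8: 8 GPUs -> 169.6 TFLOPS half precision, the "170 TeraFLOP" figure.
oss_pascal8_fp16 = system_peak_tflops(8, P100_FP16_TFLOPS)

print(f"OSS-PASCAL4 FP64 peak: {oss_pascal4_fp64:.1f} TFLOPS")
print(f"OSS-PASCAL8 FP16 peak: {oss_pascal8_fp16:.1f} TFLOPS")
```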

See our complete coverage of SC16

Sign up for our insideHPC Newsletter
