Over at the IBM Systems Blog, Sumit Gupta writes that the company’s new IBM Power System S822LC for HPC with Nvidia Tesla P100 GPUs is already demonstrating impressive performance on deep learning training applications. “A single S822LC for HPC with four NVIDIA Tesla P100 GPUs is 2.2 times faster reaching 50 percent accuracy in AlexNet than a server with four NVIDIA Tesla M40 GPUs!”
POWER8 Systems with NVLink Come to Nimbix HPC Cloud
“Today’s emerging workloads like machine and deep learning, artificial intelligence, accelerated databases, and high performance data analytics require incredible speed through accelerated computing,” said Sumit Gupta, Vice President, High Performance Computing and Data Analytics, IBM. “Delivering the capabilities of the new IBM POWER8 with NVIDIA NVLink-based system through the Nimbix cloud expands the horizons of HPC and brings a highly differentiated accelerated computing platform to a whole new set of users.”
Supermicro Rolls Out New Servers with Tesla P100 GPUs
“Our high-performance computing solutions enable deep learning, engineering, and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve the fastest time-to-results with maximum performance per watt, per square foot, and per dollar,” said Charles Liang, President and CEO of Supermicro. “With our latest innovations incorporating the new NVIDIA P100 processors in performance- and density-optimized 1U and 4U architectures with NVLink, our customers can accelerate their applications and innovations to address the most complex real world problems.”
E4 Computer Engineering Rolls Out GPU-accelerated OpenPOWER server
“The POWER8 with NVIDIA NVLink processor enables incredible speed of data transfer between CPUs and GPUs, ideal for emerging workloads like AI, machine learning and advanced analytics,” said Rick Newman, Director of OpenPOWER Strategy & Market Development Europe. “The open and collaborative spirit of innovation within the OpenPOWER Foundation enables companies like E4 to leverage new technology and build cutting edge solutions to help clients grappling with the massive amounts of data in today’s technology environment.”
TYAN Adds Support for NVIDIA Tesla P100, P40 and P4 GPUs
Today TYAN announced support and availability of the NVIDIA Tesla P100, P40 and P4 GPU accelerators based on the new NVIDIA Pascal architecture. Incorporating NVIDIA’s state-of-the-art technologies allows TYAN to offer HPC users exceptional performance and features for data-intensive applications.
One Stop Systems Shipping Platforms with NVIDIA Tesla P100 for PCIe
Today One Stop Systems (OSS) announced that its High Density Compute Accelerator (HDCA) and its Express Box 3600 (EB3600) are now available for purchase with the NVIDIA Tesla P100 for PCIe GPU. These high-density platforms deliver teraflop performance with greatly reduced cost and space requirements. The HDCA supports up to 16 Tesla P100s and the EB3600 supports up to 9 Tesla P100s. The Tesla P100 provides 4.7 TeraFLOPS of double-precision performance, 9.3 TeraFLOPS of single-precision performance and 18.7 TeraFLOPS of half-precision performance with NVIDIA GPU BOOST technology.
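To put those spec-sheet numbers in context, the aggregate peak throughput of a fully populated chassis is simply the per-GPU figure multiplied by the GPU count. The short sketch below is a rough, peak-only calculation using the figures quoted above; the dictionary names and layout are illustrative, and sustained application performance will be lower and workload-dependent.

```python
# Back-of-the-envelope aggregate peak throughput, using the per-GPU
# figures quoted above for the Tesla P100 for PCIe with GPU Boost.
P100_TFLOPS = {"fp64": 4.7, "fp32": 9.3, "fp16": 18.7}

# Maximum GPU counts quoted for each One Stop Systems platform.
CHASSIS_GPU_COUNT = {"HDCA": 16, "EB3600": 9}

for chassis, gpus in CHASSIS_GPU_COUNT.items():
    peaks = ", ".join(
        f"{gpus * tflops:.1f} TF {precision}"
        for precision, tflops in P100_TFLOPS.items()
    )
    print(f"{chassis} with {gpus}x P100: {peaks}")
```

By these peak figures, a fully loaded HDCA would top out around 75 teraflops of double-precision throughput, and the EB3600 around 42.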
Nvidia Disputes Intel’s Machine Learning Performance Claims
“Few fields are moving faster right now than deep learning,” writes Buck. “Today’s neural networks are 6x deeper and more powerful than just a few years ago. There are new techniques in multi-GPU scaling that offer even faster training performance. In addition, our architecture and software have improved neural network training time by over 10x in a year by moving from Kepler to Maxwell to today’s latest Pascal-based systems, like the DGX-1 with eight Tesla P100 GPUs. So it’s understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software.”
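Buck does not name a specific framework or scaling technique in the quote above. Purely as an illustration of the basic idea, the sketch below shows single-node data-parallel training of AlexNet using PyTorch’s nn.DataParallel, which splits each batch across the GPUs visible in the node; the batch size, optimizer settings, and dummy data are placeholders, not anything from the article.

```python
# Illustrative only: simple single-node data parallelism, the baseline
# form of multi-GPU scaling for neural network training.
import torch
import torch.nn as nn
import torchvision.models as models

def build_model():
    model = models.alexnet(num_classes=1000)   # the network cited in the benchmark above
    if torch.cuda.device_count() > 1:          # e.g. 4 or 8 Tesla GPUs in one server
        model = nn.DataParallel(model)         # replicate the model, scatter each batch
    return model.cuda() if torch.cuda.is_available() else model

def train_step(model, images, labels, optimizer, criterion):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = build_model()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    # A dummy batch stands in for ImageNet data; shapes match AlexNet's input.
    images = torch.randn(64, 3, 224, 224)
    labels = torch.randint(0, 1000, (64,))
    if torch.cuda.is_available():
        images, labels = images.cuda(), labels.cuda()
    print("loss:", train_step(model, images, labels, optimizer, criterion))
```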
Slidecast: Announcing the Nvidia Tesla P100 for PCIe Servers
In this slidecast, Marc Hamilton from Nvidia describes the Nvidia Tesla P100 for PCIe Servers. “The Tesla P100 for PCIe is available in a standard PCIe form factor and is compatible with today’s GPU-accelerated servers. It is optimized to power the most computationally intensive AI and HPC data center applications. A single Tesla P100-powered server delivers higher performance than 50 CPU-only server nodes when running the AMBER molecular dynamics code, and is faster than 32 CPU-only nodes when running the VASP material science application.”