No speed limit on NVIDIA Volta with rise of AI

In this special guest feature, Brad McCredie from IBM writes that the launch of Volta GPUs from NVIDIA heralds a new era of AI. “We’re excited about the launch of NVIDIA’s Volta GPU accelerators. Together with the NVIDIA NVLink “information superhighway” at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets.”

Benefits of Multi-rail Cluster Architectures for GPU-based Nodes

Craig Tierney from NVIDIA gave this talk at the MVAPICH User Group meeting. “As high performance computing moves toward GPU-accelerated architectures, single-node application performance can be between 3x and 75x faster than the CPUs alone. Performance increases of this size will require increases in network bandwidth and message rate to prevent the network from becoming the bottleneck in scalability. In this talk, we will present results from NVLink-enabled systems connected via quad-rail EDR InfiniBand.”
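To make the bandwidth question concrete, here is a minimal sketch (not from the talk) of a CUDA-aware MPI point-to-point bandwidth microbenchmark between two ranks, the kind of measurement used to judge whether a single-rail or multi-rail fabric can keep GPU-resident data moving. It assumes an MPI library built with CUDA support, such as MVAPICH2-GDR; the buffer size and iteration count are illustrative.

```c
/* Sketch: CUDA-aware MPI bandwidth test between rank 0 and rank 1.
 * Requires an MPI build that accepts device pointers (e.g. MVAPICH2-GDR). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t bytes = 64UL * 1024 * 1024;  /* 64 MiB message */
    const int iters = 100;
    void *buf;
    cudaMalloc(&buf, bytes);                  /* GPU buffer sent directly over the fabric */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0)
            MPI_Send(buf, (int)bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, (int)bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("Bandwidth: %.2f GB/s\n",
               (double)bytes * iters / (t1 - t0) / 1e9);

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```

Run across two GPU nodes with mpirun -np 2; in MVAPICH2, the number of rails used is typically controlled through runtime parameters such as MV2_NUM_HCAS (per the MVAPICH2 documentation), which is how single-rail and quad-rail results like those in the talk would be compared.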

IBM’s New PowerAI Software Speeds Deep Learning

“IBM PowerAI on Power servers with GPU accelerators provides at least twice the performance of our x86 platform; everything is faster and easier: adding memory, setting up new servers and so on,” said current PowerAI customer Ari Juntunen, CTO at Elinar Oy Ltd. “As a result, we can get new solutions to market very quickly, protecting our edge over the competition. We think that the combination of IBM Power and PowerAI is the best platform for AI developers in the market today. For AI, speed is everything; nothing else comes close in our opinion.”

Anaconda Open Data Science Platform comes to IBM Cognitive Systems

Today IBM announced that it will offer the Anaconda Open Data Science platform on IBM Cognitive Systems. Anaconda will also integrate with the PowerAI software distribution for machine learning and deep learning that makes it simple and fast to take advantage of Power performance and GPU optimization for data intensive cognitive workloads. “Anaconda is an important capability for developers building cognitive solutions, and now it’s available on IBM’s high performance deep learning platform,” said Bob Picciano, senior vice president of Cognitive Systems. “Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale.”

GTC to Feature 90 Sessions on HPC and Supercomputing

Accelerated computing continues to gain momentum. This year the GPU Technology Conference will feature 90 sessions on HPC and Supercomputing. “Sessions will focus on how computational and data science are used to solve traditional HPC problems in healthcare, weather, astronomy, and other domains. GPU developers can also connect with innovators and researchers as they share their groundbreaking work using GPU computing.”

E4 Computer Engineering Showcases New Petascale OCP Platform

Today E4 Computer Engineering from Italy showcased a new petaflops-class Open Compute server with “remarkable energy efficiency” based on the IBM POWER architecture. “Finding new ways of making easily deployable and energy-efficient HPC solutions is often a complex task, which requires a lot of planning, testing and benchmarking,” said Cosimo Gianfreda, CTO and co-founder of E4 Computer Engineering. “We are very lucky to work with great partners like Wistron, as their timing and accuracy means we have all the right conditions for effective time-to-market. I strongly believe that the performance of the node, coupled with the power monitoring technology, will receive wide acceptance from the HPC and enterprise community.”

Nvidia’s Bill Dally to Keynote HiPINEB 2017 Exascale Workshop

Nvidia’s Bill Dally will keynote HiPINEB 2017 – the 3rd IEEE International Workshop on High-Performance Interconnection Networks in the Exascale and Big-Data Era. The event takes place Feb. 5, 2017 in Austin, Texas and will be held in conjunction with the IEEE HPCA Conference.

One Stop Systems Introduces a New Line of GPU Accelerated Servers for Deep Learning at SC16

Today One Stop Systems announced two new deep learning appliances that leverage NVIDIA NVLink. One Stop Systems’ deep learning appliances are designed for augmented performance in machine learning and deep learning applications. “These appliances provide the ultimate power for performing deep learning training and exploring neural networks,” said Steve Cooper, OSS CEO. “The OSS-PASCAL4 and OSS-PASCAL8 […]

Ace Computers Adds NVIDIA Tesla P100 to HPC Clusters

Today Ace Computers from Illinois announced that the company is integrating the new Nvidia Tesla P100 accelerators into select HPC clusters. “Nvidia has been a trusted partner for many years,” said Ace Computers CEO John Samborski. “We are especially enthusiastic about this P100 accelerator and the transformative effect it promises to have on the clusters we are building for enterprises, academic institutions and research facilities.”

Penguin Computing Adds Pascal GPUs to Open Compute Tundra Systems

“Pairing Tundra Relion X1904GT with our Tundra Relion 1930g, we now have a complete deep learning solution in Open Compute form factor that covers both training and inference requirements,” said William Wu, Director of Product Management at Penguin Computing. “With the ever-evolving deep learning market, the X1904GT with its flexible PCI-E topologies eclipses the cookie-cutter approach, providing a solution optimized for customers’ respective applications. Our collaboration with NVIDIA addresses the perennial challenge of scaling deep learning and HPC workloads.”