
IBM Launches LC OpenPOWER Servers


Today IBM announced a new LC series of servers that infuse technologies from members of the OpenPOWER Foundation and are part of IBM’s Power Systems portfolio. According to IBM, the new LC systems perform data analytics workloads faster and at lower cost than comparable x86-based servers.

Video: Is Remote GPU Virtualization Useful?


“Although the use of GPUs has become widespread nowadays, including GPUs in current HPC clusters presents several drawbacks, mainly related to increased costs. In this talk we present how the use of remote GPU virtualization may overcome these drawbacks while noticeably increasing the overall cluster throughput. The talk presents real throughput measurements obtained with the rCUDA remote GPU virtualization middleware.”
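
rCUDA works by intercepting CUDA runtime calls on the client node and forwarding them over the network to a server that holds the physical GPU, so applications need no source changes. As a rough illustration (not taken from the talk), the minimal vector-add program below uses only standard CUDA runtime calls; under rCUDA the same binary would, in principle, run its copies and kernel launch on a remote GPU, with the client-to-server mapping handled by rCUDA’s own configuration.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Minimal vector-add kernel: nothing here is rCUDA-specific, which is the point.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Under rCUDA, this launch and the copies above are forwarded over the
    // network to the GPU server; the application is unaware of that.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f (expected 3.0)\n", hc[0]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}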

Video: NOAA Software Engineering for Novel Architectures (SENA) Project


“NOAA will acquire software engineering support and associated tools to re-architect NOAA’s applications to run efficiently on next-generation fine-grain HPC architectures.” A recent procurement document defines the target: “Fine-grain architecture (FGA) is defined as: a processing unit that supports more than 60 concurrent threads in hardware (e.g. GPU or a large core-count device).”

Video: NVLink Interconnect for GPUs


“NVLink enables fast data exchange between CPU and GPU, thereby improving data throughput through the computing system and overcoming a key bottleneck for accelerated computing today. NVLink makes it easier for developers to modify high-performance and data analytics applications to take advantage of accelerated CPU-GPU systems. We think this technology represents another significant contribution to our OpenPOWER ecosystem.”
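
From a CUDA programmer’s perspective, NVLink does not add a new API; the existing peer-to-peer and memcpy calls are simply faster when the link is present. The sketch below, offered only as an illustration and not tied to any particular OpenPOWER system, shows how an application checks for and enables direct GPU-to-GPU access, which NVLink accelerates.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { printf("Need at least two GPUs for this check.\n"); return 0; }

    // Ask the runtime whether each device can address the other's memory
    // directly. The same query applies over PCIe or NVLink; NVLink just
    // makes the resulting transfers faster.
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    printf("peer access 0->1: %d, 1->0: %d\n", canAccess01, canAccess10);

    if (canAccess01 && canAccess10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);   // flags must be 0
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);

        // A direct device-to-device copy, routed over NVLink when available.
        float *d0, *d1;
        const size_t bytes = 1 << 20;
        cudaSetDevice(0); cudaMalloc(&d0, bytes);
        cudaSetDevice(1); cudaMalloc(&d1, bytes);
        cudaMemcpyPeer(d1, 1, d0, 0, bytes);
        cudaFree(d1); cudaSetDevice(0); cudaFree(d0);
    }
    return 0;
}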

Developing a Plan for Cloud Based GPU Processing


For some applications, cloud-based clusters may be limited by communication and/or storage latency and bandwidth. With GPUs, however, these issues largely disappear: applications running on cloud GPUs perform essentially the same as they do on a local cluster, unless the application spans multiple nodes and is sensitive to MPI speeds. For those GPU applications that work well in the cloud, a remote cloud may be an attractive option for both production runs and feasibility studies.
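
One practical way to ground such a feasibility study is to measure the same transfer and kernel timings on a local node and on the cloud instance and compare them. The sketch below, offered only as an illustration, times a pinned host-to-device copy and reports effective bandwidth; similar numbers in both environments suggest single-node GPU work will behave the same in either place.

#include <cstdio>
#include <cuda_runtime.h>

// Time a host-to-device copy and report effective bandwidth. Comparing this
// number on a local node and on a cloud GPU instance is a quick first check
// that the two environments behave alike for single-node GPU work.
int main() {
    const size_t bytes = 256UL << 20;   // 256 MiB
    float *host = nullptr, *dev = nullptr;
    cudaMallocHost(&host, bytes);       // pinned memory for a fair measurement
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host-to-device bandwidth: %.2f GB/s\n", (bytes / 1.0e9) / (ms / 1000.0));

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(dev); cudaFreeHost(host);
    return 0;
}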

NVIDIA GRID 2.0 Comes to Microsoft Azure


“Our vision is to deliver accelerated graphics and high performance computing to any connected device, regardless of location,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “We are excited to collaborate with Microsoft Azure to give engineers, designers, content creators, researchers and other professionals the ability to visualize complex, data-intensive designs accurately from anywhere.”

Planning for the Convergence of HPC and Big Data


As an open source tool designed to navigate large amounts of data, Hadoop continues to find new uses in HPC. Managing a Hadoop cluster is different from managing an HPC cluster, however, and requires mastering some new concepts. But the hardware is basically the same, and many Hadoop clusters now include GPUs to facilitate deep learning.

GPUs Power Low-Cost Supercomputer Solution from Nor-Tech

David Bollig, President and CEO of Nor-Tech

“We figured out a way to get consumer-grade cards into a 4U chassis,” said Nor-Tech Vice President of Engineering Dom Daninger, whose team tested and retested the prototype until they were satisfied that the solution would work for most applications. “The result is a niche product that allows nearly all organizations to take advantage of GPU supercomputing capabilities—in essence supercomputing capabilities at an unheard-of price point.”

TSUBAME2: How to Manage a Large GPU-Based Heterogeneous Supercomputer

Dr. Satoshi Matsuoka, Tokyo Institute of Technology

Satoshi Matsuoka gave this talk at the PBS Works User Group this week. “The Tokyo Tech. TSUBAME2 supercomputer is one of the world’s leading supercomputers, ranked as high as #4 in the world on the Top500 and recognized as the “greenest supercomputer in the world” on the Green 500. With the GPU upgrade in 2013, it still sustains high performance (5.7 petaflops peak) and high usage (nearly 2,000 registered users). However, such performance levels have been achieved through pioneering adoption of the latest technologies, such as GPUs and SSDs, which necessitated non-traditional strategies in resource scheduling.”

Bill Dally from Nvidia Receives Funai Achievement Award

Bill Dally, Nvidia Chief Scientist and Senior Vice President of Research

Today IPSJ, Japan’s largest IT society, honored Bill Dally from Nvidia with the Funai Achievement Award for his extraordinary achievements in the field of computer science and education. “Dally is the first non-Japanese scientist to receive the award since the first two awards were given out in 2002 to Alan Kay (a pioneer in personal computing) and in 2003 to Marvin Minsky (a pioneer in artificial intelligence).”