Sign up for our newsletter and get the latest HPC news and analysis.

HPC News Roundup for March 27, 2015

I’m on my way home from a series of springtime HPC conferences with a boatload of new videos and interviews on the latest in high performance computing. Here are some notable items that may not have made it to the front page.

Video: Increasing Cluster Throughput while Reducing Energy Consumption for GPU Workloads

“The use of GPUs to accelerate applications is mainstream nowadays, but their adoption in current clusters presents several drawbacks. In this talk we present the latest developments of the rCUDA remote GPU virtualization framework, which is the only one supporting the most recent CUDA version, in addition to leveraging the InfiniBand fabric for the sake of performance.”
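
The key point of remote GPU virtualization is that applications keep issuing ordinary CUDA runtime calls; a middleware layer such as rCUDA intercepts them and forwards them to a GPU elsewhere on the fabric. As a rough, hypothetical sketch (not rCUDA’s own code), the plain CUDA runtime program below is the kind an unmodified application would run, whether the GPU is local or served remotely.

/* Hypothetical sketch: an unmodified CUDA runtime program in C.
 * Under a remote GPU virtualization layer such as rCUDA, these same
 * calls are intercepted and forwarded to a GPU on another node;
 * the application source does not change.
 * Build (local GPU case): gcc demo.c -I/usr/local/cuda/include -lcudart
 */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime_api.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices visible (local or virtualized)\n");
        return 1;
    }
    printf("Visible CUDA devices: %d\n", count);

    /* Allocate and move a buffer to whichever device is serving us. */
    size_t bytes = 1 << 20;
    float *host = malloc(bytes);
    float *dev  = NULL;
    for (size_t i = 0; i < bytes / sizeof(float); i++) host[i] = 1.0f;

    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    /* ... kernels would be launched here as usual ... */
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    cudaFree(dev);
    free(host);
    return 0;
}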

Nvidia Pascal GPU is Just One Year Away

According to comments made at Nvidia’s GPU Technology Conference (GTC) last week, Pascal, the company’s next-generation GPU, is now just a year away.

Nvidia Showcases Deep Learning Technology

Jen-Hsun Huang and Elon Musk

Nvidia’s GPU Technology Conference (GTC), held in San Jose, California this week, showcased a combination of hardware and software aimed at driving the development of a branch of machine learning called “deep learning.”

Radio Free HPC Wraps up the 2015 GPU Technology Conference

In this episode, the Radio Free HPC team wraps up the GPU Technology Conference. The theme of the show this year was Deep Learning, a topic that is heating up the market for GPUs with challenges like image recognition and self-driving cars. As a sister conference, the OpenPOWER Summit this week in San Jose showcased the first OpenPOWER hardware, including a prototype HPC server from IBM that will pave the way to the two IBM/Nvidia/Mellanox CORAL supercomputers expected in 2017.

Replay: GTC Keynote on Deep Learning at Google

This week insideHPC will be streaming live keynotes from the GPU Technology Conference in San Jose. Today’s keynote will feature Google Senior Fellow Jeff Dean. “Google has built large-scale computer systems for training neural networks, and then applied these systems to a wide variety of problems that have traditionally been very difficult for computers.”

Earthquake Researchers at SDSC Win Nvidia Global Impact Award

Today Nvidia announced that researchers at the San Diego Supercomputer Center have won the inaugural Global Impact Award and its $150,000 prize. The researchers were honored for their work using high performance computing to understand how earthquakes occur and the impact they have on the earth.

Test Bed Systems Pave the Way for 150 Petaflop Summit Supercomputer

Philip Curtis, a member of the High-Performance Computing Operations group at the OLCF, works with Pike, one of the test systems being used to prepare for Summit.

Oak Ridge is preparing for its upcoming Summit supercomputer with two modest test bed systems using POWER8 processors. “Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017.”

The Past, Present, and Future of OpenACC

In this video from the University of Houston CACDS HPC Workshop, Jeff Larkin from Nvidia presents: The Past, Present, and Future of OpenACC. “OpenACC is an open specification for programming accelerators with compiler directives. It aims to provide a simple path for accelerating existing applications for a wide range of devices in a performance portable way. This talk will discuss the history and goals of OpenACC, how it is being used today, and what challenges it will address in the future.”
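
To make the directive idea concrete, here is a minimal, hypothetical OpenACC sketch (not taken from the talk): a standard C loop annotated with a single pragma, which an OpenACC compiler can offload to an accelerator while the same source still builds and runs on a plain CPU.

/* Minimal, hypothetical OpenACC example (not from the talk):
 * a SAXPY loop offloaded to an accelerator with one directive.
 * Build with an OpenACC compiler, e.g.: pgcc -acc saxpy.c
 * Without OpenACC support the pragma is simply ignored and the
 * loop runs on the CPU -- the portability OpenACC aims for.
 */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* copyin: x is only read on the device; copy: y is read and written. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
    return 0;
}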

Slidecast: Deep Learning – Unreasonably Effective

“Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence. At the 2015 GPU Technology Conference, you can join the experts who are making groundbreaking improvements in a variety of deep learning applications, including image classification, video analytics, speech recognition, and natural language processing.”