HPC News with Snark for the Week of Jan. 12, 2015

The news has started to pile up this post-holiday season, so here is the HPC News with Snark for Friday, January 16, 2015. We’ve got podcasts on everything from self-driving cars to data breaches resulting from North Korean satire films. There are even some big financial surprises from Intel.

Video: Deep Learning on GPU Clusters

“Deep neural networks have recently emerged as an important tool for difficult AI problems, and have found success in many fields ranging from computer vision to speech recognition. Training deep neural networks is computationally intensive, and so practical application of these networks requires careful attention to parallelism. GPUs have been instrumental in the success of deep neural networks, because they significantly reduce the cost of network training, which then has allowed many researchers to train better networks. In this talk, I will discuss how we were able to duplicate results from a 1000 node cluster using only 3 nodes, each with 4 GPUs.”
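The talk’s core claim rests on data-parallel training: each GPU computes gradients on its own shard of the data, the gradients are averaged across devices, and the shared weights are updated in lockstep. Here is a minimal, framework-free sketch of that pattern (all names are illustrative, not Baidu’s actual code):

```python
# Hypothetical sketch of data-parallel training: each "device" computes a
# gradient on its own data shard, the gradients are averaged (an all-reduce),
# and the shared weights are updated. This synchronization pattern is what
# lets a few multi-GPU nodes stand in for a much larger CPU cluster.

def gradient(w, shard):
    # Toy least-squares gradient for the model y = w * x on one shard.
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    # Stand-in for the collective that averages gradients across devices.
    return sum(grads) / len(grads)

def train(shards, w=0.0, lr=0.01, steps=50):
    for _ in range(steps):
        grads = [gradient(w, s) for s in shards]  # one gradient per "GPU"
        w -= lr * all_reduce_mean(grads)          # synchronized update
    return w

# Four shards of data generated from y = 3 * x; training should recover w = 3.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
w = train(shards)
```

In a real system the all-reduce runs over a fast interconnect between GPUs, and its bandwidth is usually what limits how far you can scale.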

Video: Nvidia’s Path to Exascale

“The end of Dennard scaling has made all computing power-limited, so that performance is determined by energy efficiency. With improvements in process technology offering little increase in efficiency, innovations in architecture and circuits are required to maintain the expected performance scaling. The large-scale parallelism and deep storage hierarchy of future machines pose programming challenges. Future programming systems must allow the programmer to express their code in a high-level, target-independent manner and optimize the target-dependent decisions of mapping available parallelism in time and space. This talk will discuss these challenges in more detail and introduce some of the technologies being developed to address them.”

Nvidia Rolls Tegra X1: Supercomputer on a Chip

The new Tegra X1 provides more computing power than a 15-year-old supercomputer the size of a suburban family home.

Call for Participation: OpenPOWER Summit, March 17-19 at GTC 2015

The First Annual OpenPOWER Summit will take place March 17-19 at the San Jose Convention Center in conjunction with the GPU Technology Conference (GTC).

Video: Penguin Computing Showcases OCP Hardware for HPC

“Powered by Intel’s Xeon E5-2600 v3 processor, Penguin Computing’s Tundra OpenHPC platform delivers density, performance and serviceability for demanding and extraordinary customers. Built to be compatible with Open Compute Open Rack specifications, the Tundra OpenHPC platform provides customers with a powerful and compact HPC server designed to reduce infrastructure costs when moving to the next generation of technology.”

Challenges on the Titan Supercomputer: Accelerating the Path to the Exascale

“I will summarize the benefits, challenges, and lessons learned in deploying Titan and in preparing applications to move from conventional CPU architectures to a hybrid, accelerated architecture. I will emphasize the challenges we have encountered with emerging programming models and how we are addressing these challenges using directive-based approaches. I also plan to discuss the early science outcomes from Titan in diverse areas such as materials sciences, nuclear energy, and engineering sciences. I will also discuss research outcomes from a growing number of industrial partnerships.”

Piz Daint and Piz Dora: Productive, Heterogeneous Supercomputing

“The Cray XC30 system at CSCS, which includes “Piz Daint”, the most energy-efficient peta-scale supercomputer in operation today, has been extended with additional multi-core CPU cabinets (aka “Piz Dora”). In this heterogeneous system we unify a variety of high-end computing services – extreme-scale compute, data analytics, pre- and post-processing, as well as visualization – that are all important parts of the scientific workflow.”

Machine Learning: What Computational Researchers Need to Know

Nvidia GPUs are powering a revolution in machine learning. With the rise of deep learning algorithms, in particular deep convolutional neural networks, computers are learning to see, hear, and understand the world around us in ways never before possible.
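The deep convolutional networks mentioned above are built from one core operation: sliding a small learned filter over the input and taking dot products. A framework-free 1-D sketch of that operation (real networks stack many 2-D, multi-channel versions of it on GPUs):

```python
# Minimal sketch of the convolution at the heart of convolutional neural
# networks: slide a small filter over the input and take dot products at
# each position ("valid" padding, so the output is slightly shorter).

def conv1d_valid(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# An edge-detecting filter [-1, 1] responds wherever the signal jumps,
# which is a toy version of what early layers of a vision network learn.
signal = [0, 0, 1, 1, 0]
edges = conv1d_valid(signal, [-1, 1])  # [0, 1, 0, -1]
```

In a trained network the kernel values are not hand-picked like this; they are learned from data, layer upon layer.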

Video: Accelerating ORNL’s Applications to the Exascale

In this video from the Nvidia booth theater at SC14, Buddy Bland from Oak Ridge National Laboratory presents: Accelerating ORNL’s Applications to the Exascale. “The Titan computer at Oak Ridge National Laboratory is delivering exceptional results for our scientific users in the U.S. Department of Energy’s Office of Science, Applied Energy programs, academia, and industry. Mr. Bland will describe the Titan system, how this system fits within the roadmap to exascale machines, and describe successes we have had with our applications using GPU accelerators.”