The news has started to pile up this post-Holiday Season, so here is the HPC News with Snark for Friday, January 16, 2015. We’ve got podcasts on everything from self-driving cars to data breaches resulting from North Korean satire films. There are even some big financial surprises from Intel.
“Deep neural networks have recently emerged as an important tool for difficult AI problems, and have found success in many fields ranging from computer vision to speech recognition. Training deep neural networks is computationally intensive, and so practical application of these networks requires careful attention to parallelism. GPUs have been instrumental in the success of deep neural networks, because they significantly reduce the cost of network training, which then has allowed many researchers to train better networks. In this talk, I will discuss how we were able to duplicate results from a 1000 node cluster using only 3 nodes, each with 4 GPUs.”
“The end of Dennard scaling has made all computing power-limited, so that performance is determined by energy efficiency. With improvements in process technology offering little increase in efficiency, innovations in architecture and circuits are required to maintain the expected performance scaling. The large-scale parallelism and deep storage hierarchy of future machines pose programming challenges. Future programming systems must allow the programmer to express their code in a high-level, target-independent manner and optimize the target-dependent decisions of mapping available parallelism in time and space. This talk will discuss these challenges in more detail and introduce some of the technologies being developed to address them.”
“Powered by Intel’s Xeon E5-2600 v3 processor, Penguin Computing’s Tundra OpenHPC platform delivers density, performance and serviceability for demanding and extraordinary customers. Built to be compatible with Open Compute Open Rack specifications, the Tundra OpenHPC platform provides customers with a powerful and compact HPC server designed to reduce infrastructure costs when moving to the next generation of technology.”
“I will summarize the benefits, challenges, and lessons learned in deploying Titan and in preparing applications to move from conventional CPU architectures to hybrid, accelerated architectures. I will emphasize the challenges we have encountered with emerging programming models and how we are addressing these challenges using directive-based approaches. I also plan to discuss the early science outcomes from Titan in diverse areas such as materials sciences, nuclear energy, and engineering sciences. I will also discuss research outcomes from a growing number of industrial partnerships.”
“The Cray XC30 system at CSCS, which includes “Piz Daint”, the most energy-efficient petascale supercomputer in operation today, has been extended with additional multi-core CPU cabinets (aka “Piz Dora”). In this heterogeneous system we unify a variety of high-end computing services – extreme-scale compute, data analytics, pre- and post-processing, as well as visualization – that are all important parts of the scientific workflow.”
In this video from the Nvidia booth theater at SC14, Buddy Bland from Oak Ridge National Laboratory presents: Accelerating ORNL’s Applications to the Exascale. “The Titan computer at Oak Ridge National Laboratory is delivering exceptional results for our scientific users in the U.S. Department of Energy’s Office of Science, Applied Energy programs, academia, and industry. Mr. Bland will describe the Titan system, how this system fits within the roadmap to exascale machines, and describe successes we have had with our applications using GPU accelerators.”