
Video: GPU-Accelerated Analysis of Large Biomolecular Complexes


“This presentation will highlight the use of GPU ray tracing for visualizing the process of photosynthesis, and GPU-accelerated analysis of results from hybrid structure determination methods that combine data from cryo-electron microscopy and X-ray crystallography with all-atom molecular dynamics simulations.”

HPC News with Snark for the Week of Jan. 12, 2015


The news has started to pile up this post-Holiday Season, so here is the HPC News with Snark for Friday, January 16, 2015. We’ve got podcasts on everything from self-driving cars to data breaches resulting from North Korean satire films. There are even some big financial surprises from Intel.

Video: Deep Learning on GPU Clusters


“Deep neural networks have recently emerged as an important tool for difficult AI problems, and have found success in many fields ranging from computer vision to speech recognition. Training deep neural networks is computationally intensive, so practical application of these networks requires careful attention to parallelism. GPUs have been instrumental in the success of deep neural networks because they significantly reduce the cost of network training, which has in turn allowed many researchers to train better networks. In this talk, I will discuss how we were able to duplicate results from a 1,000-node cluster using only 3 nodes, each with 4 GPUs.”

Video: Nvidia’s Path to Exascale


“The end of Dennard scaling has made all computing power-limited, so that performance is determined by energy efficiency. With improvements in process technology offering little increase in efficiency, innovations in architecture and circuits are required to maintain the expected performance scaling. The large-scale parallelism and deep storage hierarchy of future machines pose programming challenges. Future programming systems must allow the programmer to express their code in a high-level, target-independent manner and optimize the target-dependent decisions of mapping available parallelism in time and space. This talk will discuss these challenges in more detail and introduce some of the technologies being developed to address them.”

How Co-Design Helps Shape Successful Hardware and Software Development


“In working to improve future hardware and software for its simulation requirements, Sandia National Laboratories is engaging in co-design efforts with major hardware vendors. This talk will discuss recent improvements influenced by the collaboration with NVIDIA. The presentation will focus in particular on the newly available experimental C++11 support in CUDA and how this facilitates both more rapid porting of applications to GPUs and better exploitation of GPU architecture characteristics. Furthermore, initial performance studies on NVIDIA’s next-generation Tesla product line will be presented, as well as first impressions of an IBM POWER8-based GPU cluster.”

Nvidia Rolls Tegra X1: Supercomputer on a Chip


The new Tegra X1 delivers more computing power than a supercomputer the size of a suburban family home did 15 years ago.

Challenges on the Titan Supercomputer: Accelerating the Path to the Exascale


“I will summarize the benefits, challenges, and lessons learned in deploying Titan and in preparing applications to move from conventional CPU architectures to a hybrid, accelerated architecture. I will emphasize the challenges we have encountered with emerging programming models and how we are addressing them using directive-based approaches. I also plan to discuss early science outcomes from Titan in diverse areas such as materials science, nuclear energy, and engineering sciences. I will also discuss research outcomes from a growing number of industrial partnerships.”

Machine Learning: What Computational Researchers Need to Know


Nvidia GPUs are powering a revolution in machine learning. With the rise of deep learning algorithms, in particular deep convolutional neural networks, computers are learning to see, hear, and understand the world around us in ways never before possible.

Video: Accelerating ORNL’s Applications to the Exascale


In this video from the Nvidia booth theater at SC14, Buddy Bland from Oak Ridge National Laboratory presents: Accelerating ORNL’s Applications to the Exascale. “The Titan computer at Oak Ridge National Laboratory is delivering exceptional results for our scientific users in the U.S. Department of Energy’s Office of Science, Applied Energy programs, academia, and industry. Mr. Bland will describe the Titan system, how it fits within the roadmap to exascale machines, and the successes we have had with our applications using GPU accelerators.”

Video: Convergence of Extreme Computing and Big Data


In this video, Satoshi Matsuoka, professor at Tokyo Institute of Technology, examines the GPU’s role in handling the rapidly increasing data volumes and processing requirements of so-called big data. Conventional cloud infrastructures will no longer be efficient. Will GPUs play a central role, or will they remain peripheral?