
The Past, Present, and Future of OpenACC


In this video from the University of Houston CACDS HPC Workshop, Jeff Larkin from Nvidia presents: The Past, Present, and Future of OpenACC. “OpenACC is an open specification for programming accelerators with compiler directives. It aims to provide a simple path for accelerating existing applications for a wide range of devices in a performance-portable way. This talk will discuss the history and goals of OpenACC, how it is being used today, and what challenges it will address in the future.”
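For readers new to the model, here is a minimal, illustrative sketch (not code from the talk) of what OpenACC directives look like in C: a single pragma offloads a SAXPY loop, and the data clauses describe the host/device transfers. Compiled with an OpenACC compiler (for example, pgcc -acc) the loop runs on the accelerator; without OpenACC support the pragma is simply ignored and the same source runs serially on the CPU.

/* Hedged OpenACC sketch: offload a SAXPY loop with one directive. */
#include <stdio.h>
#include <stdlib.h>

void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    /* copyin/copy describe the data movement; the compiler builds the
     * accelerator kernel from the loop body. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);

    printf("y[0] = %f (expected 5.0)\n", y[0]);
    free(x);
    free(y);
    return 0;
}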

Slidecast: Deep Learning – Unreasonably Effective


“Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence. At the 2015 GPU Technology Conference, you can join the experts who are making groundbreaking improvements in a variety of deep learning applications, including image classification, video analytics, speech recognition, and natural language processing.”

Video: OpenACC Interoperability with CUDA C and Fortran


“Developed by PGI, Cray, and NVIDIA, the OpenACC directives are a shared vision of how directives can simplify the programming model for accelerators, where each vendor is committed to support a common programming standard.”
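As a hedged illustration of what that interoperability looks like in practice (an assumed example, not the code shown in the video): OpenACC can own the device copies of the arrays, while a host_data use_device region hands the device pointers to a CUDA library routine such as cublasSaxpy. Compile with an OpenACC compiler and link cuBLAS (for PGI, roughly: pgcc -acc -Mcudalib=cublas interop.c).

/* Hedged sketch of OpenACC/CUDA C interoperability via cuBLAS. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define N (1 << 20)

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 1.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);

    /* OpenACC owns the device allocation and the host<->device copies. */
    #pragma acc data copyin(x[0:N]) copy(y[0:N])
    {
        /* Inside host_data, x and y refer to the device addresses,
         * so they can be passed straight to the CUDA library. */
        #pragma acc host_data use_device(x, y)
        {
            cublasSaxpy(handle, N, &a, x, 1, y, 1);
        }
        /* Make sure the library call finishes before OpenACC copies y back. */
        cudaDeviceSynchronize();
    }

    cublasDestroy(handle);
    printf("y[0] = %f (expected 3.0)\n", y[0]);
    return 0;
}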

Video: Intro to Compiler Directives for Accelerators


“Geoscientists need tools to allow them to rapidly develop algorithms that run fast on accelerators, while at the same time deliver portability and improve productivity. They demand a single source code, with no need to maintain multiple code paths, using a high-level approach that presents a low learning curve. OpenACC provides a directives-based approach to rapidly accelerating applications for GPUs and other parallel architectures. This talk will serve as an introduction to programming with OpenACC 2.0.”
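A minimal sketch of that single-source idea, using a simple reduction rather than anything shown in the talk: a structured data region keeps an array resident on the device across two loops so it is transferred only once, and the same code still compiles and runs serially if the directives are ignored.

/* Hedged sketch: one source, serial or accelerated depending on the compiler. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double v[N];
    double sum = 0.0;

    #pragma acc data create(v[0:N])
    {
        /* Initialize directly on the device. */
        #pragma acc parallel loop
        for (int i = 0; i < N; ++i)
            v[i] = (double)i / N;

        /* Reduce on the device; only the scalar result returns to the host. */
        #pragma acc parallel loop reduction(+:sum)
        for (int i = 0; i < N; ++i)
            sum += v[i];
    }

    printf("sum = %f (expected ~%f)\n", sum, (N - 1) / 2.0);
    return 0;
}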

Performance and Power Characterization of GPU-Enabled HPC Applications


“An increasing number of GPU-enabled applications are available to the HPC community. The key issues are understanding the enhanced application performance and the corresponding increase in power consumption due to GPUs. In most cases these depend on the CPU-to-GPU ratio and the way GPUs are connected to CPUs. The latest compute node designs allow flexibility in selecting the number of GPUs and how they are connected to CPUs. This offers users a unique opportunity to select a suitable operating point according to their application characteristics. This talk is about studying the performance vs. power tradeoff on a few common HPC applications.”
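For context on the quantities involved, here is a small hypothetical helper (not from the talk) that turns two measurements, wall-clock runtime and average node power, into the energy-to-solution and performance-per-watt figures typically compared when choosing such an operating point.

/* Hedged helper sketch: performance vs. power arithmetic from measured inputs.
 * Usage: ./perfwatt <runtime_seconds> <avg_power_watts> <work_units> */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <runtime_s> <avg_power_W> <work_units>\n",
                argv[0]);
        return 1;
    }
    double runtime_s = atof(argv[1]);   /* measured wall-clock time        */
    double power_w   = atof(argv[2]);   /* measured average power draw     */
    double work      = atof(argv[3]);   /* e.g. ns/day, timesteps, GFLOPs  */

    double energy_j   = runtime_s * power_w;  /* energy to solution (J)    */
    double perf       = work / runtime_s;     /* throughput (units/s)      */
    double perf_per_w = perf / power_w;       /* performance per watt      */

    printf("energy to solution : %.1f J\n", energy_j);
    printf("throughput         : %.3f units/s\n", perf);
    printf("performance/watt   : %.6f units/s/W\n", perf_per_w);
    return 0;
}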

HPC’s Future Lies in Remote Visualization

Tom Wilkie, Scientific Computing World

Remote visualization is at the intersection of cloud, big data, and high performance computing. And the ability to look at complex data sets using only a mobile phone’s data rate is not some fantasy of the future. It is reality here and now.

Free Test Drive: Nvidia Tesla K80 Accelerator

Bryce Mackin, Nvidia

“Over at the Nvidia Blog, Bryce Mackin writes that the company is offering a free GPU Test Drive for their new Tesla K80 accelerator. With the test drive, you can run your own application on one or more K80s or try one of the preloaded applications, including AMBER, NAMD, GROMACS and LAMMPS.”

Video: GPU-Accelerated Analysis of Large Biomolecular Complexes


“This presentation will highlight the use of GPU ray tracing for visualizing the process of photosynthesis, and GPU-accelerated analysis of results from hybrid structure determination methods that combine data from cryo-electron microscopy and X-ray crystallography with all-atom molecular dynamics simulations.”

HPC News with Snark for the Week of Jan. 12, 2015


The news has started to pile up this post-holiday season, so here is the HPC News with Snark for Friday, January 16, 2015. We’ve got podcasts on everything from self-driving cars to data breaches resulting from North Korean satire films. There are even some big financial surprises from Intel.

Video: Deep Learning on GPU Clusters


“Deep neural networks have recently emerged as an important tool for difficult AI problems, and have found success in many fields ranging from computer vision to speech recognition. Training deep neural networks is computationally intensive, and so practical application of these networks requires careful attention to parallelism. GPUs have been instrumental in the success of deep neural networks, because they significantly reduce the cost of network training, which then has allowed many researchers to train better networks. In this talk, I will discuss how we were able to duplicate results from a 1000 node cluster using only 3 nodes, each with 4 GPUs.”
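To make the parallelism point concrete, here is a small conceptual sketch (plain C with OpenMP threads standing in for GPUs, and not Baidu's code) of the data-parallel pattern behind multi-GPU training: the batch is split across workers, each computes a local gradient on its shard, and the gradients are averaged before the parameter update.

/* Hedged sketch of data-parallel gradient averaging. Build: cc -fopenmp dataparallel.c */
#include <stdio.h>
#include <omp.h>

#define N_SAMPLES 4096
#define N_WORKERS 4

int main(void)
{
    static double x[N_SAMPLES], y[N_SAMPLES];
    double w = 0.0;                      /* single model parameter        */
    double grad[N_WORKERS] = {0.0};      /* per-worker partial gradients  */

    /* Synthetic data: y = 3x, so the gradient should pull w toward 3. */
    for (int i = 0; i < N_SAMPLES; ++i) {
        x[i] = (double)i / N_SAMPLES;
        y[i] = 3.0 * x[i];
    }

    /* Each worker handles one shard of the global batch. */
    #pragma omp parallel num_threads(N_WORKERS)
    {
        int id    = omp_get_thread_num();
        int shard = N_SAMPLES / N_WORKERS;
        int lo    = id * shard, hi = lo + shard;
        double g  = 0.0;

        for (int i = lo; i < hi; ++i) {
            double err = w * x[i] - y[i];     /* prediction error         */
            g += 2.0 * err * x[i] / shard;    /* dLoss/dw on this shard   */
        }
        grad[id] = g;
    }

    /* "All-reduce" step: average the per-worker gradients, then update w. */
    double g_avg = 0.0;
    for (int k = 0; k < N_WORKERS; ++k)
        g_avg += grad[k] / N_WORKERS;
    w -= 1.0 * g_avg;   /* one gradient-descent step, learning rate 1.0   */

    printf("averaged gradient = %f, updated w = %f\n", g_avg, w);
    return 0;
}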