Video: Shifter – Containers in HPC environments

“Containers wrap up software with all its dependencies in packages that can be executed anywhere. This can be especially useful in HPC environments where, often, getting the right combination of software tools to build applications is a daunting task. However, typical container solutions such as Docker are not a perfect fit for HPC environments. Instead, Shifter is a better fit, as it has been built from the ground up with HPC in mind. In this talk, we show you what Shifter is and how to leverage the current Docker environment to run your applications with Shifter.”

OpenACC Building Momentum going into GTC

Today the OpenACC standards group announced a set of additional hackathons and a broad range of learning opportunities taking place during the upcoming GPU Technology Conference, being held in San Jose, CA, April 4-7, 2016. OpenACC is a mature, performance-portable path for developing scalable parallel programs across multi-core CPUs, GPU accelerators, and many-core processors.
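
For readers new to the directive-based approach, the following is a minimal sketch (not from the announcement) of an OpenACC-annotated loop in C: a single pragma asks an OpenACC compiler to parallelize the loop and manage the data movement, and the same source still builds as ordinary serial C when the directive is ignored.

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal OpenACC sketch: y = a*x + y over two vectors.
     * The pragma asks an OpenACC compiler to offload the loop to an
     * accelerator and handle data movement; without OpenACC support
     * the directive is ignored and the loop runs serially on the CPU. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        enum { N = 1 << 20 };
        float *x = malloc(N * sizeof *x);
        float *y = malloc(N * sizeof *y);

        for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(N, 3.0f, x, y);
        printf("y[0] = %f\n", y[0]);   /* expect 5.0 */

        free(x);
        free(y);
        return 0;
    }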

Video: The Nvidia Tesla Accelerated Computing Platform

Axel Koehler from Nvidia presented this talk at the HPC Advisory Council Switzerland Conference. “Accelerated computing is transforming the data center, delivering unprecedented throughput and enabling new discoveries and services for end users. This talk will give an overview of the NVIDIA Tesla accelerated computing platform, including the latest developments in hardware and software. In addition, it will be shown how deep learning on GPUs is changing how we use computers to understand data.”

High-Performance and Scalable Designs of Programming Models for Exascale Systems

DK Panda from Ohio State University presented this talk at the Switzerland HPC Conference. “This talk will focus on challenges in designing runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++), and hybrid MPI+PGAS programming models, taking into account support for multi-core processors, high-performance networks, accelerators (GPUs and Intel MIC), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
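
For context on the MPI side of these hybrid models, here is a minimal, illustrative MPI program in C (not taken from the talk); hybrid MPI+PGAS applications start from this same bootstrap and layer one-sided or PGAS communication, such as OpenSHMEM, on top of it.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal illustrative MPI program: each process reports its rank.
     * Hybrid MPI+PGAS designs begin from this bootstrap and add PGAS
     * communication (OpenSHMEM, UPC, CAF, ...) alongside MPI calls. */
    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }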

RCE Podcast Looks at EasyBuild Installation Framework

“EasyBuild, a software build and installation framework, can be used to automatically install software and generate environment modules. By using a hierarchical module naming scheme to offer environment modules to users in a more structured way, and providing Lmod, a modern tool for working with environment modules, we help typical users avoid common mistakes while giving power users the flexibility they demand. EasyBuild is developed by the High-Performance Computing team at Ghent University together with the members of the EasyBuild community, and is made available under the GNU General Public License (GPL) version 2.”

Video: Superscalar Programming Models – Making Applications Platform Agnostic

Dr. Rosa Badia from BSC/CNS presented this Invited Talk at SC15. “StarSs (Star superscalar) is a task-based family of programming models based on the idea of writing sequential code that is executed in parallel at runtime, taking into account the data dependencies between tasks. The talk will describe the evolution of this programming model and the different challenges that have been addressed in order to support different underlying platforms, from heterogeneous platforms used in HPC to distributed environments such as federated clouds and mobile systems.”
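
The StarSs family uses its own directives (OmpSs being the best-known implementation), but the core idea — sequential-looking code whose tasks are ordered by the runtime according to data dependencies — can be sketched with the standard OpenMP task-dependence clauses that adopted a similar model. The example below is illustrative only and is not StarSs syntax.

    #include <stdio.h>

    /* Sketch of the task-dataflow idea behind StarSs, written here with
     * standard OpenMP task dependences rather than StarSs/OmpSs clauses.
     * The code reads sequentially; the runtime orders tasks by data flow. */
    int main(void)
    {
        int a = 0, b = 0, c = 0;

        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: a)
            a = 1;                    /* producer of a */

            #pragma omp task depend(out: b)
            b = 2;                    /* producer of b, may run concurrently */

            #pragma omp task depend(in: a, b) depend(out: c)
            c = a + b;                /* scheduled only after both producers */

            #pragma omp taskwait
            printf("c = %d\n", c);    /* prints c = 3 */
        }
        return 0;
    }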

Slidecast: How to Make MPI Awesome – MPI Sessions

In this slidecast, Jeff Squyres from Cisco Systems presents: How to Make MPI Awesome – MPI Sessions. As a proposal for future versions of the MPI Standard, MPI Sessions could become a powerful tool to improve system resiliency as we move towards exascale. “Now that we have brought these ideas to a larger audience, my hope is that we (the Forum) start refining these ideas to fit them into a future release of the MPI standard. Meaning: please don’t assume that exactly what is proposed in these slides is going to make it into the MPI standard.”
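
To make the concept more concrete, here is a brief C sketch of sessions-style initialization. The function names follow the Sessions interface as it was later standardized in MPI 4.0, not necessarily the exact proposal in these slides, and error handling is omitted; treat it purely as an illustration of the idea that a component can bootstrap its own communicator without a global MPI_Init.

    #include <mpi.h>
    #include <stdio.h>

    /* Illustrative sessions-style startup (MPI 4.0 names): a component
     * creates its own session, derives a group from a named process set,
     * and builds a communicator from that group -- no global MPI_Init. */
    int main(void)
    {
        MPI_Session session;
        MPI_Group   group;
        MPI_Comm    comm;
        int         rank;

        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);
        MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
        MPI_Comm_create_from_group(group, "insidehpc.example/demo",
                                   MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

        MPI_Comm_rank(comm, &rank);
        printf("rank %d initialized via a session\n", rank);

        MPI_Group_free(&group);
        MPI_Comm_free(&comm);
        MPI_Session_finalize(&session);
        return 0;
    }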

Paving the Way for Theta and Aurora

In this special guest feature, John Kirkley writes that Argonne is already building code for its future Theta and Aurora supercomputers based on Intel Knights Landing. “One of the ALCF’s primary tasks is to help prepare key applications for two advanced supercomputers. One is the 8.5-petaflops Theta system based on the upcoming Intel® Xeon Phi™ processor, code-named Knights Landing (KNL) and due for deployment this year. The other is a larger 180-petaflops Aurora supercomputer scheduled for 2018, using Intel Xeon Phi processors code-named Knights Hill. A key goal is to solidify libraries and other essential elements, such as compilers and debuggers, that support the systems’ current and future production applications.”

Best Practices – Dynamic Tuning for Energy Efficiency

“Today’s server systems provide many knobs that influence energy efficiency and performance. Some of these knobs control the behavior of the operating system, whereas others control the behavior of the hardware itself. Choosing the optimal configuration of these knobs is critical for energy efficiency. In this talk, recent research results will be presented, including examples of big data applications that consume less energy when dynamic tuning is employed.”
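
As one concrete example of such a knob (not from the talk), the Linux cpufreq subsystem exposes the CPU frequency scaling governor through sysfs; dynamic tuning runtimes typically read and adjust settings of exactly this kind while an application runs. The small C sketch below only reads the current governor for core 0.

    #include <stdio.h>

    /* Illustrative only: read one OS-level tuning knob -- the frequency
     * scaling governor of CPU core 0 -- from the Linux cpufreq sysfs
     * interface. Writing to the same file (as root) changes the policy. */
    int main(void)
    {
        const char *path =
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
        char governor[64] = {0};

        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");          /* e.g., cpufreq not available */
            return 1;
        }
        if (fgets(governor, sizeof governor, f) != NULL)
            printf("cpu0 scaling governor: %s", governor);
        fclose(f);
        return 0;
    }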

With GPUOpen, CGG Fuels Petroleum Exploration using AMD FirePro GPUs

Today AMD announced that CGG, a pioneering global geophysical services and equipment company, has deployed AMD FirePro S9150 server GPUs to accelerate its geoscience oil and gas research efforts, harnessing more than 1 PetaFLOPS of GPU processing power. Employing AMD’s HPC GPU Computing software tools available on GPUOpen.com, CGG rapidly converted its in-house Nvidia CUDA code to OpenCL for seismic data processing running on an AMD FirePro S9150 GPU production cluster, enabling fast, cost-effective GPU-powered research.
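
To give a feel for the kind of translation involved (this example is not from CGG's code base), compare a trivial CUDA kernel with its OpenCL counterpart: the arithmetic is unchanged, while the qualifiers and thread-index calculation differ.

    /* CUDA version, for comparison:
     *
     *   __global__ void saxpy(int n, float a, const float *x, float *y)
     *   {
     *       int i = blockIdx.x * blockDim.x + threadIdx.x;
     *       if (i < n)
     *           y[i] = a * x[i] + y[i];
     *   }
     */

    /* OpenCL C version of the same kernel: address-space qualifiers
     * replace __global__, and get_global_id() replaces the explicit
     * block/thread index arithmetic. */
    __kernel void saxpy(int n, float a,
                        __global const float *x,
                        __global float *y)
    {
        int i = get_global_id(0);
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

Host-side code changes more substantially (contexts, command queues, and run-time kernel compilation in OpenCL), which is the part of a port that tooling aims to automate.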