EuroMPI Conference Partners with Women in HPC to Bolster Diversity

Over at the Women in HPC Blog, Daniel Holmes from EPCC writes that the EuroMPI Conference is partnering with Women in HPC to increase diversity in high performance computing.

Video: UPC++ Parallel Programming Extension

In this video from the 2016 OpenFabrics Workshop, Yili Zheng from LBNL presents: UPC++. “UPC++ is a parallel programming extension for developing C++ applications with the partitioned global address space (PGAS) model. UPC++ has demonstrated excellent performance and scalability with applications and benchmarks such as global seismic tomography, Hartree-Fock, the BoxLib AMR framework, and more. In this talk, we will give an overview of UPC++ and discuss the opportunities and challenges of leveraging modern network features.”
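For readers unfamiliar with the PGAS style, the sketch below shows roughly what UPC++ code looks like: one rank allocates an object in the shared segment, and other ranks reach it through a global pointer with one-sided puts and gets. It is a minimal illustration written against the UPC++ v1.0 API, which may differ in detail from the version presented in the talk.

```cpp
// Minimal PGAS-style sketch using the UPC++ v1.0 API (may differ from the
// 2016-era interface shown in the talk).
#include <upcxx/upcxx.hpp>
#include <iostream>

int main() {
  upcxx::init();

  // Rank 0 allocates an integer in the shared segment and broadcasts
  // the global pointer so every rank can reach it.
  upcxx::global_ptr<int> counter = nullptr;
  if (upcxx::rank_me() == 0) counter = upcxx::new_<int>(0);
  counter = upcxx::broadcast(counter, 0).wait();

  // Each rank writes its rank number into the shared location, one at a time.
  for (int r = 0; r < upcxx::rank_n(); ++r) {
    if (upcxx::rank_me() == r) upcxx::rput(r, counter).wait();
    upcxx::barrier();
  }

  if (upcxx::rank_me() == 0)
    std::cout << "last writer: " << upcxx::rget(counter).wait() << std::endl;

  upcxx::barrier();
  if (upcxx::rank_me() == 0) upcxx::delete_(counter);

  upcxx::finalize();
  return 0;
}
```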

Jeff Squyres on Building Community at OpenHPC

“As a community, we are excited about enabling HPC for everyone. If OpenHPC can really make it so easy to install HPC systems that more people join the ecosystem – as users, system administrators, resource managers, or developers – we all win.”

Intel MPI Messaging Paper Wins ISC 2016 Hans Meuer Award

Today ISC 2016 announced that a research paper in the area of Message Passing Interface (MPI) performance has been selected to receive the 2016 Hans Meuer Award. The award will be presented at the ISC High Performance conference on Monday, June 20.

Slidecast: How to Make MPI Awesome – MPI Sessions

In this slidecast, Jeff Squyres from Cisco Systems presents: How to Make MPI Awesome – MPI Sessions. As a proposal for future versions of the MPI Standard, MPI Sessions could become a powerful tool to improve system resiliency as we move towards exascale. “Now that we have brought these ideas to a larger audience, my hope is that we (the Forum) start refining these ideas to fit them into a future release of the MPI standard. Meaning: please don’t assume that exactly what is proposed in these slides is going to make it into the MPI standard.”
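The proposal evolved before it was standardized, so the sketch below follows the Sessions interface that eventually landed in MPI 4.0 rather than the exact 2016 slides: an application initializes its own session, derives a group from a named process set, and builds a communicator without ever touching MPI_COMM_WORLD. The string tag used here is an arbitrary example.

```cpp
// Sketch of MPI Sessions as eventually standardized in MPI 4.0;
// the 2016 proposal in the slides differed in detail.
#include <mpi.h>
#include <cstdio>

int main() {
  MPI_Session session;
  MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

  // Derive a group from the built-in "world" process set, then create a
  // communicator from that group -- no MPI_Init, no MPI_COMM_WORLD.
  MPI_Group group;
  MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

  MPI_Comm comm;
  MPI_Comm_create_from_group(group, "org.example.sessions-demo",  // arbitrary tag
                             MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  std::printf("rank %d of %d (sessions model)\n", rank, size);

  MPI_Comm_free(&comm);
  MPI_Group_free(&group);
  MPI_Session_finalize(&session);
  return 0;
}
```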

Video: Programming Models for Exascale Systems

“This talk will focus on programming models and their designs for upcoming exascale systems with millions of processors and accelerators. Current status and future trends of MPI and PGAS (UPC and OpenSHMEM) programming models will be presented. We will discuss challenges in designing runtime environments for these programming models by taking into account support for multi-core, high-performance networks, GPGPUs, Intel MIC, scalable collectives (multi-core-aware, topology-aware, and power-aware), non-blocking collectives using Offload framework, one-sided RMA operations, schemes and architectures for fault-tolerance/fault-resilience.”
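As a rough illustration of the PGAS side of that comparison, the OpenSHMEM sketch below puts a value into a symmetric variable on a neighboring processing element with a one-sided operation. It is a generic example, not taken from the talk.

```cpp
// Generic OpenSHMEM (PGAS) sketch: one-sided put into a neighbor's
// symmetric memory. Not taken from the talk.
#include <shmem.h>
#include <cstdio>

int main() {
  shmem_init();
  int me = shmem_my_pe();
  int npes = shmem_n_pes();

  // Symmetric allocation: every PE gets a matching slot in its heap.
  int *dest = static_cast<int*>(shmem_malloc(sizeof(int)));
  *dest = -1;
  shmem_barrier_all();

  // One-sided put of my PE number into the next PE's symmetric buffer.
  int right = (me + 1) % npes;
  shmem_int_put(dest, &me, 1, right);
  shmem_barrier_all();

  std::printf("PE %d received %d from PE %d\n",
              me, *dest, (me - 1 + npes) % npes);

  shmem_free(dest);
  shmem_finalize();
  return 0;
}
```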

Exascale Architectures: Evolution or Revolution?

In this special guest feature, Earl Joseph from IDC describes his SC15 panel where four HPC luminaries discussed, disputed, and divined the path to exascale computing. “As the panel wound to a close, participants agreed on one thing: the path to exascale contains significant obstacles, but they’re not insurmountable. Tremendous progress is being made in preparing codes for the next generations of systems, and sheer determination and innovation is running at an all-time high.”

Changes Afoot from the HPC Crystal Ball

In this special guest feature from Scientific Computing World, Andrew Jones from NAG looks ahead at what 2016 has in store for HPC and finds people, not technology, to be the most important issue. “A disconcertingly large proportion of the software used in computational science and engineering today was written for friendlier and less complex technology. An explosion of attention is needed to drag software into a state where it can effectively deliver science using future HPC platforms.”

MultiLevel Parallelism with Intel Xeon Phi

“The combination of MPI and OpenMP is a topic that has been explored by many developers in order to determine the optimal solution. Whether to use OpenMP for outer loops with MPI within, or to create separate MPI processes and use OpenMP within each, can lead to different levels of performance. In most cases, determining which method will yield the best results involves a deep understanding of the application, and not just rearranging directives.”
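A minimal hybrid sketch of the second pattern mentioned above: separate MPI ranks, each using OpenMP threads for its local loop. The threading level and the loop body are illustrative only.

```cpp
// Hybrid MPI + OpenMP sketch: one MPI rank per node or socket, OpenMP
// threads inside each rank. Illustrative only; the loop is a placeholder.
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cstdio>

int main(int argc, char **argv) {
  // Ask for FUNNELED: only the main thread makes MPI calls.
  int provided;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Each rank owns a chunk of the data; its threads share the work on it.
  const int n = 1 << 20;
  std::vector<double> chunk(n, 1.0);
  double local_sum = 0.0;

  #pragma omp parallel for reduction(+:local_sum)
  for (int i = 0; i < n; ++i)
    local_sum += chunk[i];

  // Main thread only: combine the per-rank partial sums.
  double global_sum = 0.0;
  MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    std::printf("sum = %.1f from %d ranks x %d threads\n",
                global_sum, size, omp_get_max_threads());

  MPI_Finalize();
  return 0;
}
```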

Shared Memory and MPI 3.0

As multi-socket and then multi-core systems have become the standard, the Message Passing Interface (MPI) has become one of the most popular programming models for applications that run in parallel across many sockets and cores. Shared memory programming interfaces, such as OpenMP, have let developers exploit the shared memory within each server while MPI ties the individual servers together. Doing so, however, has meant using two different programming models in the same application. The MPI 3.0 standard introduces a new MPI interprocess shared memory extension (MPI SHM) that lets MPI ranks on the same node share memory directly.
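A minimal sketch of that MPI SHM pattern, using the standard MPI 3.0 calls: ranks on the same node split off a shared-memory communicator, allocate a shared window, and read each other's portion directly through plain loads and stores instead of MPI_Get.

```cpp
// MPI 3.0 shared-memory (MPI SHM) sketch: ranks on one node allocate a
// shared window and read a neighbor's data via direct load/store.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  // Communicator containing only ranks that can share memory (same node).
  MPI_Comm shmcomm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &shmcomm);

  int shm_rank, shm_size;
  MPI_Comm_rank(shmcomm, &shm_rank);
  MPI_Comm_size(shmcomm, &shm_size);

  // Each rank contributes one int to a node-wide shared window.
  int *my_slot;
  MPI_Win win;
  MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                          shmcomm, &my_slot, &win);

  MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
  *my_slot = shm_rank * 100;
  MPI_Win_sync(win);        // make my store visible in the window
  MPI_Barrier(shmcomm);     // every rank has written its slot
  MPI_Win_sync(win);        // see the other ranks' stores

  // Query a neighbor's slot and read it with a plain load -- no MPI_Get.
  int right = (shm_rank + 1) % shm_size;
  MPI_Aint size;
  int disp_unit;
  int *right_slot;
  MPI_Win_shared_query(win, right, &size, &disp_unit, &right_slot);
  std::printf("rank %d sees %d in rank %d's slot\n",
              shm_rank, *right_slot, right);

  MPI_Win_unlock_all(win);
  MPI_Win_free(&win);
  MPI_Comm_free(&shmcomm);
  MPI_Finalize();
  return 0;
}
```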