ISC 2015 Interview: Programming Models on the Road to Exascale


ISC 2015 will host a number of sessions on Exascale computing next month in Frankfurt. In what looks to be one of the highlights of the conference, Bill Gropp, Georg Hager, and Paul Kelly will discuss Programming Models on the Road to Exascale. To learn more, we caught up with the Session Chair, Dr Michèle Weiland, who serves as a Project Manager at the EPCC supercomputing center at the University of Edinburgh.

insideHPC: MPI seems to be the most popular programming model for HPC clusters. Where would you say that MPI falls short in terms of what will be needed for exascale programming?

Dr. Michèle Weiland, EPCC

Dr Michèle Weiland: I think it is important to distinguish between the MPI programming model itself and implementations of the programming model.

All parallel programming models need to pass ‘messages’ in some form to communicate information between parallel threads. Sometimes these are actual messages sent over the network; other times they are simply shared memory operations. At the Exascale, sending these messages efficiently, and ensuring memory and data consistency, will be the real challenge. Already today, some MPI implementations are better at this than others, and this is unlikely to change going forward (in fact, the performance impact of less efficient implementations will get worse).

In terms of the programming model itself, I am not sure that MPI in its current form allows developers to be as creative with parallelism as they should be. We all know that writing parallel programs with MPI is difficult because programmers have to manage data movement explicitly, so often, once an MPI program is correct and performs reasonably well, development stops. What we really want is a programming model that gives a developer as much control as MPI while being quicker and simpler to program, so as to encourage experimentation with parallel implementations without sacrificing performance.
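To make that explicit data movement concrete, the sketch below (in C; illustrative only, not code from the session) shows the kind of transfer an MPI programmer must spell out by hand: a one-dimensional halo exchange in which every rank explicitly sends its boundary values to its neighbours. The array size and neighbour layout are assumptions made for the example.

    /* Explicit data movement in MPI: a 1D halo exchange where each
     * rank swaps boundary values with its left and right neighbours.
     * Compile with an MPI wrapper compiler, e.g. mpicc. */
    #include <mpi.h>

    #define N 1024  /* local array size, chosen for illustration */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double u[N + 2];              /* interior cells plus two halo cells */
        for (int i = 1; i <= N; i++)
            u[i] = (double)rank;      /* dummy data */

        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        /* Every transfer must be described explicitly: send my first
         * interior cell left while receiving my right halo, and the
         * mirror-image exchange for the other boundary. */
        MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                     &u[N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[N],     1, MPI_DOUBLE, right, 1,
                     &u[0],     1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }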

insideHPC: Can you describe the main challenges we will need to address in an effective programming model for exascale?

Dr Michèle Weiland: I think that programming at the Exascale will be a challenge, regardless of how well the programming models support the vast amount of parallelism we will be facing. Solving problems using millions (or even billions) of parallel execution threads that need to communicate with each other in some fashion is the key hurdle to overcome. A good parallel programming model for the Exascale, whatever that might be, should help developers exploit as much parallelism as possible. It should also encourage us to inherently ‘think’ in parallel (rather than focusing on how to parallelize a serial implementation).

insideHPC: Do you believe there are ways to maintain programmer productivity as we move towards exascale?

Dr Michèle Weiland: As I said earlier, a good parallel programming model would allow developers to ‘play’ with code and implementations without sacrificing performance (too much). An example I can think of here is OpenACC: it allows developers to port a portion of their code to a GPU fairly rapidly, understand the code’s limitations on a GPU architecture, and get quick returns on their investment. As performance with OpenACC is very rarely as good as with a lower-level model, developers then have the choice to invest the manpower required to port their code to, say, CUDA and maximize performance. Unfortunately, the example I have given here already uses more than one programming model, and ideally this would not be the case. A single model that allows both rapid prototyping with decent performance and low-level programming with in-depth optimization would definitely go a long way towards maintaining and increasing programmer productivity.
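As an illustration of the rapid porting Weiland describes (a minimal sketch, not code from the session), a serial loop can often be offloaded to a GPU with a single OpenACC directive, whereas a CUDA version of the same kernel would additionally need explicit memory management and kernel-launch code. The function and data clauses here are illustrative.

    /* Sketch: offloading a SAXPY-style loop to a GPU with one OpenACC
     * directive. Requires an OpenACC-capable compiler, e.g. pgcc -acc. */
    void saxpy(int n, float a, const float *restrict x, float *restrict y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }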

insideHPC: Can you tell us anything more about what attendees will learn about in your session?

Dr Michèle Weiland: The “Programming Models for the Exascale” session will look at three popular parallel programming paradigms, and our speakers, who are all leading experts in the field, will explain how they see these paradigms progressing to play a role in the Exascale era. The three models are: MPI, as the de facto standard programming model today; MPI+X, where X can be one or more models, such as threading or PGAS, used to alleviate some of the performance bottlenecks of pure MPI; and Domain Specific Languages (DSLs), which allow scientists to express the problems they are trying to solve in a high-level language that is automatically translated into a lower-level language in the background.
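For readers unfamiliar with the MPI+X pattern, here is a minimal sketch (illustrative, with X assumed to be OpenMP threading) of how the two levels of parallelism combine: MPI ranks across nodes, threads within each rank.

    /* Minimal MPI+X sketch with X = OpenMP: MPI ranks across nodes,
     * OpenMP threads within each rank. Compile with e.g. mpicc -fopenmp. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Request a threading level so OpenMP threads may coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }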

Registration is now open for ISC 2015, which takes place July 12-16 in Frankfurt.

Sign up for our insideHPC Newsletter.