Over at the ISC Blog, John Barr writes that today’s hybrid programming models are a long way from what we will need for productive exascale computing.
Programs are being developed for accelerators today using a mix of OpenMP, OpenACC, OpenCL and CUDA, but many developers want a single approach that will work across the popular hardware choices. This is similar to the situation that arose when distributed memory HPC systems first appeared: every vendor had its own communications library, and many users and ISVs refused to consider this style of machine until there was a standard way to program them all. The standard that emerged was MPI.

When OpenACC first appeared, it made sense to use it as a forum to experiment with new approaches while the use of GPUs in HPC was evolving rapidly, with the expectation that the best ideas would then be folded back into OpenMP. But OpenMP and OpenACC now seem to be diverging. Indeed, a comparison of OpenACC and OpenMP on the OpenACC web site says that “efforts so far to include support for GPUs in the OpenMP specification are — in the opinions of many involved — at best insufficient, and at worst misguided.”