Rob Farber writes that pragma-based programming standards like OpenACC (and potentially OpenMP 4.0) move us closer to the single-source-tree ideal by providing a mechanism to annotate code so that it runs efficiently on both serial and parallel hardware.
In this slidecast, Doug Miles from Nvidia describes the new features and performance gains in the PGI 2014 release. “The use of accelerators in high performance computing is now mainstream,” said Douglas Miles, director of PGI Software at Nvidia. “With PGI 2014, we are taking another big step toward our goal of providing platform-independent, multi-core and accelerator programming tools that deliver outstanding performance on multiple platforms without the need for extensive, device-specific tuning.”
Mark Harris from Nvidia presents this talk from SC13. “The performance and efficiency of CUDA, combined with a thriving ecosystem of programming languages, libraries, tools, training, and services, have helped make GPU computing a leading HPC technology. Learn how powerful new features in CUDA 6 make GPU computing easier than ever, helping you accelerate more of your application with much less code.”
Over at ZDNet, Nick Heath writes that researchers from three UK universities are attempting to create a software Rosetta Stone: a system in which the compiler decides for itself which hardware device is best suited to run a particular block of code.
“We are posed with a very complex problem of program transformation if we want to tackle these heterogeneous systems, and we can’t afford, one, to do the transformations manually, and, two, to be wrong,” said Dr Wim Vanderbauwhede of the University of Glasgow.
In this video from the Nvidia booth at SC13, Michael Wolfe presents on OpenACC. “The OpenACC API provides a high-level, performance-portable mechanism for parallel programming of accelerated nodes. Learn about the latest additions to the OpenACC specification, and see the PGI Accelerator compilers in action targeting the fastest NVIDIA GPUs.”