Accelerating HPC Programmer Productivity with OpenACC and CUDA Unified Memory

In this video from SC17 in Denver, Doug Miles from NVIDIA presents: Accelerating HPC Programmer Productivity with OpenACC and CUDA Unified Memory.

“CUDA Unified Memory for NVIDIA Tesla GPUs offers programmers a unified view of memory on GPU-accelerated compute nodes. The CPUs can access GPU high-bandwidth memory directly, the GPUs can access CPU main memory directly, and memory pages migrate automatically between the two when the CUDA Unified Memory manager determines it is performance-profitable. PGI OpenACC compilers now leverage this capability on allocatable data to dramatically simplify parallelization and incremental optimization of HPC applications for GPUs. In the future it will extend to all types of data, and programmer-driven data management will become an optimization rather than a requirement. This talk will summarize the current status and near future of OpenACC programming and optimization for GPU-accelerated compute nodes with CUDA Unified Memory.”

Doug Miles runs the PGI compilers and tools team at NVIDIA. He has worked in HPC for over 30 years in math library development, benchmarking, programming model development, technical marketing, and software engineering management at Floating Point Systems, Cray Research Superservers, The Portland Group, STMicroelectronics, and NVIDIA.

See our complete coverage of SC17

Check out our insideHPC Events Calendar