New Paper: Automatic CPU-GPU Communication Management and Optimization

A new peer-reviewed paper from the ACM PLDI conference presents CGCM, the CPU-GPU Communication Manager, which automatically manages and optimizes CPU-GPU communication, improving the applicability and performance of automatic GPU parallelization.

The performance benefits of GPU parallelism can be enormous, but unlocking this potential is challenging: the applicability and performance of GPU parallelizations are limited by the complexities of CPU-GPU communication. To address these communication problems, this paper presents the first fully automatic system for managing and optimizing CPU-GPU communication. This system, called the CPU-GPU Communication Manager (CGCM), consists of a run-time library and a set of compiler transformations that work together to manage and optimize CPU-GPU communication without depending on the strength of static compile-time analyses or on programmer-supplied annotations. CGCM eases manual GPU parallelizations and improves the applicability and performance of automatic GPU parallelizations. For 24 programs, CGCM-enabled automatic GPU parallelization yields a whole-program geomean speedup of 5.36x over the best sequential CPU-only execution.
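For context, here is a minimal sketch (not from the paper) of the kind of manual CUDA-style communication management that CGCM is designed to automate; the kernel, buffer names, and sizes are all illustrative:

```cuda
// Hypothetical example of hand-written CPU-GPU communication.
// CGCM's run-time library and compiler passes insert and optimize
// transfers like these automatically.
#include <cuda_runtime.h>
#include <stdlib.h>

// Trivial illustrative kernel: scale each element in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    // Without automatic management, the programmer must allocate
    // device memory and copy data across the CPU-GPU boundary by hand.
    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);

    // ...and copy results back, even when the data could have stayed
    // resident on the GPU across multiple kernel launches -- the kind
    // of redundant transfer a communication optimizer can hoist away.
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    cudaFree(dev);
    free(host);
    return 0;
}
```

Getting these transfers right (and avoiding redundant ones) is exactly the burden the paper argues limits both manual and automatic GPU parallelization.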

Download the paper (PDF).

Comments

  1. Hi. It is not a whitepaper. It is a peer reviewed paper published at ACM PLDI (Programming Language Design and Implementation) 2011.

Trackbacks

  1. […] magic happens to move data between the two systems transparently. It’s available as a PDF. New Paper: Automatic CPU-GPU Communication Management and Optimization | insideHPC.com. This story written by Randall Hand. Randall Hand is a visualization scientist working for a federal […]