Discover why GPUs are driving the future of HPC

Since 2007, when GPU computing was introduced to high performance computing (HPC) users, the technology has advanced rapidly. Demand for computational GPUs has been driven largely by HPC practitioners seeking greater performance and improved energy efficiency. The GPU, thanks to its throughput-optimized design and considerable floating point capability, is able to deliver better performance and performance-per-watt than CPUs on parallel software.

The growth of GPU adoption for high performance computing has been principally driven by NVIDIA, which has invested heavily in building a robust software ecosystem to support its hardware. Specifically, the company has developed a set of parallel programming APIs, libraries, and associated software development tools to support application development on its CUDA (Compute Unified Device Architecture) GPU platform, as well as a set of standard compiler directives for high-level languages that can be used for both x86 CPUs and accelerators.
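To give a flavor of what programming that platform looks like, the sketch below shows a minimal CUDA C++ program that offloads a SAXPY (y = a*x + y) computation to the GPU. It is an illustrative example only, not taken from the whitepaper; the kernel name, problem size, and launch parameters are arbitrary choices made for the sketch.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of the output vector.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;               // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    // Allocate device memory and copy inputs to the GPU.
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

    // Copy the result back and spot-check one value.
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);       // expect 4.0

    cudaFree(d_x); cudaFree(d_y);
    free(h_x);     free(h_y);
    return 0;
}
```

The same loop could alternatively be offloaded with a single directive using the standard compiler-directive approach mentioned above, letting the compiler generate code for either x86 CPUs or GPU accelerators from one source.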

According to the latest Intersect360 Research site census data, of the 50 most popular application packages mentioned by HPC users, 34 offer GPU support, including 9 of the top 10. As is evident from the number of GPU-accelerated applications available in areas such as chemical research, physics, structural analysis, and visualization, the use of this accelerator technology has become well established in the HPC user community.

Download the Intersect360 whitepaper, HPC Application Support for GPU Computing, to find out whether your application is being accelerated by GPUs.