Today Nvidia announced CUDA 6, the latest version of the company’s pervasive parallel computing platform and programming model. Designed to make parallel programming easier than ever, CUDA 6 will help developers decrease the time and effort required to accelerate their scientific and engineering applications with GPUs.
“By automatically handling data management, Unified Memory enables us to quickly prototype kernels running on the GPU and reduces code complexity, cutting development time by up to 50 percent,” said Rob Hoekstra, manager of the Scalable Algorithms Department at Sandia National Laboratories. “Having this capability will be very useful as we determine future programming model choices and port more sophisticated, larger codes to GPUs.”
Key features of CUDA 6 include:
- Unified Memory — Simplifies programming by enabling applications to access CPU and GPU memory without the need to manually copy data from one to the other, and makes it easier to add support for GPU acceleration in a wide range of programming languages.
- Drop-in Libraries — Automatically accelerates applications’ BLAS and FFTW calculations by up to 8X by simply replacing the existing CPU libraries with the GPU-accelerated equivalents.
- Multi-GPU Scaling — Re-designed BLAS and FFT GPU libraries automatically scale performance across up to eight GPUs in a single node, delivering over nine teraflops of double precision performance per node, and supporting larger workloads than ever before (up to 512GB). Multi-GPU scaling can also be used with the new BLAS drop-in library.
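The Unified Memory feature described above centers on the `cudaMallocManaged` API introduced in CUDA 6. The minimal sketch below shows the idea: a single managed allocation is written by the CPU, processed by a GPU kernel, and read back by the CPU with no explicit `cudaMemcpy` calls (the kernel and array here are illustrative, not from the announcement).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple example kernel: increment every element of an array.
__global__ void increment(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main()
{
    const int n = 1024;
    int *data;

    // One allocation, visible to both CPU and GPU; the runtime
    // migrates the data automatically instead of requiring cudaMemcpy.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i)  // initialize directly on the CPU
        data[i] = i;

    increment<<<(n + 255) / 256, 256>>>(data, n);  // run on the GPU
    cudaDeviceSynchronize();  // wait so the CPU sees the GPU's writes

    printf("data[10] = %d\n", data[10]);
    cudaFree(data);
    return 0;
}
```

Before CUDA 6, the same program would need separate host and device allocations plus explicit copies in each direction; the managed pointer removes that bookkeeping, which is the source of the code-complexity reduction quoted above.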
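The drop-in BLAS library (NVBLAS) works by library interposition rather than recompilation: an application already linked against a CPU BLAS is launched with the GPU library preloaded. The config fragment below is a sketch of that workflow; `myapp` and the OpenBLAS path are hypothetical placeholders for an existing BLAS-using binary and whichever CPU BLAS is installed.

```shell
# nvblas.conf names the CPU BLAS that NVBLAS falls back to for
# routines (or problem sizes) it does not accelerate.
echo "NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so" > nvblas.conf

# Preload NVBLAS so its BLAS symbols intercept the application's
# calls; no source changes or relinking required.
NVBLAS_CONFIG_FILE=$PWD/nvblas.conf LD_PRELOAD=libnvblas.so ./myapp
```

The same drop-in idea applies to FFTW users via the cuFFT library's FFTW-compatible interface.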
For more information about the CUDA 6 platform, visit Nvidia booth #613 at SC13, Nov. 18-21 in Denver.