The upcoming GPU Technology Conference is entering its fifth year with developer talks on everything from numerical algorithms to big data analytics. “In short, we’ll have a ton of HPC content. There are nearly 100 sessions dedicated to supercomputing and HPC topics. This includes major scientific research enabled by these GPU-accelerated systems – everything from breakthroughs in cancer research and astronomy, to HIV research and new big data analytics innovations.”
In this slidecast, Doug Miles from Nvidia describes the new features and performance gains in the PGI 2014 release. “The use of accelerators in high performance computing is now mainstream,” said Douglas Miles, director of PGI Software at Nvidia. “With PGI 2014, we are taking another big step toward our goal of providing platform-independent, multi-core and accelerator programming tools that deliver outstanding performance on multiple platforms without the need for extensive, device-specific tuning.”
Bill Dally from Nvidia presented this talk at the Stanford HPC Conference. “HPC and data analytics share challenges of power, programmability, and scalability to realize their potential. The end of Dennard scaling has made all computing power-limited, so that performance is determined by energy efficiency. With improvements in process technology offering little increase in efficiency, innovations in architecture and circuits are required to maintain the expected performance scaling.”
Over at the Stream Computing Blog, Vincent Hindriksen has posted an overview of the Heterogeneous Systems Architecture (HSA). “HSA changes the way memory is handled by eliminating a hierarchy in processing units. In a hUMA architecture, the CPU and the GPU (inside the APU) have full access to the entire system memory.”
Mark Harris from Nvidia presents this talk from SC13. “The performance and efficiency of CUDA, combined with a thriving ecosystem of programming languages, libraries, tools, training, and services, have helped make GPU computing a leading HPC technology. Learn how powerful new features in CUDA 6 make GPU computing easier than ever, helping you accelerate more of your application with much less code.”
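The “much less code” claim refers largely to Unified Memory, a headline feature of CUDA 6. As a minimal sketch (the kernel and sizes below are illustrative, not from the talk, and assume a CUDA-capable device with the CUDA 6 toolkit or later):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple elementwise kernel: y[i] += a * x[i]
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified Memory: a single managed allocation visible to both CPU
    // and GPU, replacing the separate host/device buffers and explicit
    // cudaMemcpy calls required before CUDA 6.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();  // wait for the GPU before the CPU reads results

    printf("y[0] = %f\n", y[0]);  // 2 + 3*1 = 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same pointers are used on both sides, so the host/device copy boilerplate disappears from application code.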
“NumbaPro is a powerful compiler that takes high-level Python code directly to the GPU, producing fast code that is the equivalent of programming in a lower-level language. It contains an implementation of CUDA Python as well as higher-level constructs that make it easy to map array-oriented code to the parallel architecture of the GPU.”
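For contrast, here is roughly what the “lower-level language” equivalent of a one-line array-oriented operation like `out = a + b` looks like when hand-written in CUDA C. This is an illustrative sketch, not NumbaPro’s actual generated code:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hand-written CUDA C for an elementwise add over two arrays --
// the kind of kernel a NumbaPro-style compiler generates from a
// few lines of decorated Python.
__global__ void add_arrays(int n, const float *a, const float *b, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[1024], hb[1024], hout[1024];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f; }

    // Explicit device allocation, copies, and launch configuration --
    // the bookkeeping that array-oriented constructs hide from the
    // Python programmer.
    float *da, *db, *dout;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dout, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    add_arrays<<<(n + 255) / 256, 256>>>(n, da, db, dout);
    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);

    printf("out[10] = %f\n", hout[10]);  // 10 + 2 = 12.0
    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```

Hiding this grid/block and memory-transfer boilerplate behind array semantics is the productivity argument the quote is making.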
Fans of accelerated computing are reminded that the Early Bird registration deadline for the 2014 GPU Technology Conference is January 29.