For about 40 years, developers and users could count on increases in CPU performance that made applications run faster with each new generation of hardware. However, with steady clock-rate gains giving way to higher core counts and ever more new instructions, rethinking algorithms, adopting the latest APIs, and using the latest compilers has become critical for the next generation of application performance enhancements.
“Written by one of the foremost experts in high-performance computing and the inventor of Gustafson’s Law, The End of Error: Unum Computing explains a new approach to computer arithmetic: the universal number (unum). The unum encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. This new number type obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power.”
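To put the accuracy claim in context, the problem unums target is easy to demonstrate with ordinary IEEE floating point. The short C program below is an illustrative sketch only, not unum code (unum arithmetic itself needs a dedicated library or hardware support); it sums 0.1 ten million times in single and double precision and shows how rounding error accumulates because 0.1 has no exact binary representation.

    #include <stdio.h>

    int main(void) {
        /* Sum 0.1 ten million times; the exact answer is 1,000,000.
           Binary floating point cannot represent 0.1 exactly, so a small
           rounding error is added on every iteration. */
        float  sum_f = 0.0f;
        double sum_d = 0.0;
        for (int i = 0; i < 10000000; i++) {
            sum_f += 0.1f;
            sum_d += 0.1;
        }
        printf("float  sum: %.6f\n", sum_f);   /* noticeably far from 1000000 */
        printf("double sum: %.6f\n", sum_d);   /* much closer, but still inexact */
        return 0;
    }

The double result is closer only because it spends more bits on every value; the book's argument is that unums can track and bound this kind of error while often using fewer bits.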
“Today, energy companies are among the world leaders in commercial supercomputing. Companies like Total are utilizing high performance computing (HPC) to deliver an optimal combination of performance, price, and efficiency. Supercomputers like Pangea deliver 10 times the computing capacity of the systems they replace, helping Total identify and exploit new reserves more effectively.”
In this video from the University of Houston CACDS HPC Workshop, Jeff Larkin from Nvidia presents: The Past, Present, and Future of OpenACC. “OpenACC is an open specification for programming accelerators with compiler directives. It aims to provide a simple path for accelerating existing applications for a wide range of devices in a performance-portable way. This talk will discuss the history and goals of OpenACC, how it is being used today, and what challenges it will address in the future.”
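To make the directive-based model concrete, here is a minimal OpenACC sketch in C, assuming an OpenACC-capable compiler (for example NVIDIA's nvc or pgcc with the -acc flag): a SAXPY-style loop is offloaded to an accelerator by adding a single pragma, leaving the surrounding code unchanged. A compiler without OpenACC support simply ignores the pragma and runs the loop sequentially, which is the portability property the talk emphasizes.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* One directive asks the compiler to parallelize the loop and run it
           on an accelerator if one is available; for this simple case the
           compiler also manages the data movement implicitly. */
        #pragma acc parallel loop
        for (int i = 0; i < N; i++) {
            y[i] = a * x[i] + y[i];
        }

        printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
        return 0;
    }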