Over at Scientific Computing, Doug Black has posted the first in a series of in-depth articles that examine the critical challenges of code modernization.
As scientific computing moves inexorably toward the Exascale era, an increasingly urgent problem has emerged: many HPC software applications — both public domain and proprietary commercial — are hamstrung by antiquated algorithms and software unable to function in manycore supercomputing environments. Aside from developing an Exascale-level architecture, HPC code modernization is the most important challenge facing the HPC community over the next decade. The stakes are high. Without hardware-software harmonization, increased processing power will be a wasted resource, and the discovery and innovation work performed on HPC systems will stall.
So, how do we get there? David Scott, an HPC Solution Architect at Intel, says there are three requirements for code to achieve high performance on modern computer architectures: thread parallelism, SIMD operations, and frequent reuse of data while it is in the cache (“compute intensity”).
“Many existing codes were written years ago, before any of these were required, and are not optimized for these parameters,” Scott said. “Code modernization is reorganizing the code, and perhaps changing algorithms, to increase the amount of thread parallelism, SIMD instructions and compute intensity to optimize performance on modern architectures.”
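To make the three ingredients concrete, here is a minimal sketch (not Scott's actual code, and with illustrative names and tile sizes chosen for the example) of a legacy matrix multiply next to a modernized version that adds OpenMP thread parallelism, a SIMD inner loop, and cache blocking so each tile is reused while it is resident:

```c
/* Sketch: "legacy" vs. "modernized" matrix multiply.
   Illustrative only; N and BLOCK are example values. */
#include <stddef.h>

#define N 64
#define BLOCK 16  /* tile size chosen so a tile's working set fits in cache */

/* Legacy version: single-threaded, no vectorization hints,
   and b is streamed column-wise with poor locality for large N. */
void matmul_legacy(const double a[N][N], const double b[N][N], double c[N][N]) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
}

/* Modernized version (c must be zero-initialized by the caller):
   - thread parallelism: OpenMP distributes (ii, jj) tiles across threads;
   - SIMD: the unit-stride j loop is marked for vectorization;
   - compute intensity: blocking reuses tiles of a, b, c while cached. */
void matmul_modern(const double a[N][N], const double b[N][N], double c[N][N]) {
    #pragma omp parallel for collapse(2)
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t jj = 0; jj < N; jj += BLOCK)
            for (size_t kk = 0; kk < N; kk += BLOCK)
                for (size_t i = ii; i < ii + BLOCK; i++)
                    for (size_t k = kk; k < kk + BLOCK; k++) {
                        double aik = a[i][k];
                        #pragma omp simd
                        for (size_t j = jj; j < jj + BLOCK; j++)
                            c[i][j] += aik * b[k][j];
                    }
}
```

Compiled without OpenMP the pragmas are ignored and the code still runs correctly in serial, which is one reason this style of incremental modernization is attractive for existing codes.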
Read the Full Story.