But like a Formula One race car stuck in a traffic jam, HPC hardware performance is frequently hampered by HPC software. Some of the most widely used application codes have not been updated in years, if ever, leaving them unable to exploit advances in parallel systems. As hardware moves toward exascale, the imbalance between hardware and software will only worsen.

The problem of updating essential scientific applications goes by many names: code modernization, refactoring, vectorization, parallelization. What is needed are algorithms and software that can efficiently use massive numbers of processors at once, which means reprogramming codes to increase their parallelism and scalability. This is complicated, time-consuming work that commonly slips through the budgetary cracks of academic institutions and government organizations.

To address the problem, Intel has stepped outside its role as a developer of HPC hardware to partner with supercomputing organizations around the world on modernizing public-domain HPC code. To date, 31 Intel Parallel Computing Centers (IPCC) have begun operation at academic, government, and private institutions in the United States, Germany, the U.K., Finland, Italy, France, India, Korea, Russia, Brazil, Ireland, Japan, and other countries.