In this Chip Chat podcast, Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, discusses the importance of code modernization as we move into multi- and many-core systems. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization.
“The notion of High Performance Computing is evolving over time. So what was deemed a leadership-class computer five years ago is a little bit obsolete. We are talking about the evolution not only in the hardware but also in the programming models, because there are more and more cores available. Orchestrating the calculations in a way that can effectively take advantage of parallelism takes a lot of thinking and a lot of redesign of the algorithms behind the calculations.”
“Optimizing HPC Applications with Intel Cluster Tools takes the reader on a tour of the fast-growing area of high performance computing and the optimization of hybrid programs. These programs typically combine distributed memory and shared memory programming models and use the Message Passing Interface (MPI) and OpenMP for multi-threading to achieve the ultimate goal of high performance at low power consumption on enterprise-class workstations and compute clusters. The book focuses on optimization for clusters consisting of the Intel Xeon processor, but the optimization methodologies also apply to the Intel Xeon Phi coprocessor and heterogeneous clusters mixing both architectures.”
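For readers new to the hybrid model the book describes, the following is a minimal sketch (not taken from the book) of a program that combines MPI for distributed memory across ranks with OpenMP threading inside each rank; the problem size and data values are placeholders.

```c
/*
 * Minimal hybrid MPI + OpenMP sketch: each MPI rank owns a slice of a
 * dot product and uses OpenMP threads within the node.
 * Illustrative only; the problem size is a placeholder.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request thread support suitable for OpenMP regions outside MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n_local = 1000000;             /* elements per rank (placeholder) */
    double *a = malloc(n_local * sizeof *a);
    double *b = malloc(n_local * sizeof *b);
    for (long i = 0; i < n_local; ++i) { a[i] = 1.0; b[i] = 2.0; }

    /* Shared-memory parallelism within the rank via OpenMP. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = 0; i < n_local; ++i)
        local += a[i] * b[i];

    /* Distributed-memory reduction across ranks via MPI. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %.1f (ranks=%d, threads/rank=%d)\n",
               global, size, omp_get_max_threads());

    free(a); free(b);
    MPI_Finalize();
    return 0;
}
```

A program like this would typically be built with an MPI compiler wrapper and OpenMP enabled (for example, `mpicc -fopenmp`), launched with `mpirun -np <ranks>`, and tuned by setting `OMP_NUM_THREADS` to the number of cores available per rank.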
Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design, taking 30 days to complete on an in-house cluster. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
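The workload Stowe describes is an embarrassingly parallel parameter sweep: each design simulation is independent, which is why it scales naturally across tens of thousands of cloud cores. Below is a hedged sketch of how such a sweep might be distributed with MPI; `run_simulation()`, the scoring, and the design count are hypothetical placeholders, not HGST's actual simulator or Cycle Computing's scheduling software.

```c
/*
 * Hedged sketch of an embarrassingly parallel design sweep, loosely
 * modeled on the workload described above (many independent simulations).
 * run_simulation() is a hypothetical stand-in, not the real simulator.
 */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical stand-in for one drive-head design simulation. */
static double run_simulation(long design_id)
{
    return (double)design_id;  /* placeholder score */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n_designs = 1000000;   /* roughly the 1 million design points cited above */
    double local_best = -1.0;

    /* Round-robin assignment: rank r simulates designs r, r+size, r+2*size, ... */
    for (long d = rank; d < n_designs; d += size)
    {
        double score = run_simulation(d);
        if (score > local_best)
            local_best = score;
    }

    /* Combine per-rank best scores into a single global optimum on rank 0. */
    double global_best;
    MPI_Reduce(&local_best, &global_best, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("best score over %ld designs: %f\n", n_designs, global_best);

    MPI_Finalize();
    return 0;
}
```

Because no simulation depends on any other, the same pattern scales from an in-house cluster to a rented 70,000-core cloud cluster simply by adding ranks, which is what makes this class of workload such a good fit for on-demand HPC.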