“Although the commercial application of HPC is more usually associated with the behemoths of aerospace and automobile manufacturing, the EU-funded Partnership for Advanced Computing in Europe (PRACE) showcased how HPC can help SMEs at its recent PRACEdays14 event held in Barcelona.”
“It’s frustrating to think that antiquated software is hampering discovery and innovation across the board. This is one of the driving reasons why Intel launched the Intel Parallel Computing Center (IPCC) program last October with an initial five collaborators and an open call for additional collaborators.”
The requirement for both application scaling (capability computing) and system throughput (capacity computing) continues to grow. The “THUMS” human body model has 1.8 million elements, and safety simulations of over 50 million elements are on the roadmap. Models of this size will require scaling to thousands of cores just to maintain the current turnaround time.
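The scaling claim above can be checked with back-of-envelope arithmetic: if per-core throughput stays roughly constant, core count must grow in proportion to model size to hold turnaround time steady. A minimal sketch, assuming ideal linear scaling and an illustrative baseline core count (only the element counts come from the text):

```python
# Weak-scaling estimate: cores must grow with the element count to keep
# turnaround time constant. The base_cores figure is an assumption for
# illustration; the element counts are from the THUMS roadmap above.

def cores_for_constant_turnaround(base_cores, base_elements, target_elements):
    """Ideal (linear) scaling: required cores grow with model size."""
    return base_cores * target_elements / base_elements

base_cores = 256          # assumed core count for today's 1.8M-element runs
base_elements = 1.8e6     # THUMS human body model (from the text)
target_elements = 50e6    # roadmap safety-simulation size (from the text)

needed = cores_for_constant_turnaround(base_cores, base_elements, target_elements)
print(f"~{needed:.0f} cores to hold turnaround time constant")  # ~7111 cores
```

Even under this optimistic assumption of perfect scaling, a roughly 28x growth in model size pushes the job into the thousands of cores, consistent with the paragraph's claim.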
“Creating more energy-efficient HPC is a continuous improvement process that requires the right tools for measuring, taking action, checking the results and iterating in a virtuous cycle. This is true for the infrastructure as well as all levels of the system; from components through applications. We haven’t had to think about energy efficiency, nor have we had the tools to measure it. Once the right tools are in place, we can start wrapping our heads around what are the contributing factors to better efficiency, and from that we can start influencing hardware designs.”
“Using one of the most powerful supercomputers in the world — “Titan,” a Cray XK7 system housed at Oak Ridge National Lab (ORNL) — researchers across the country devoted more than 1.94 billion processor hours to 32 computational research projects last year alone. Projects ranging from nuclear fusion to astrophysics occupied some of Titan’s computing capability, allowing for groundbreaking research in multiple disciplines.”
“The evolution of Hadoop has very much been a backwards one; it entered HPC as a solution to a problem which, by and large, did not yet exist. As a result, it followed a common, but backwards, pattern by which computer scientists, not domain scientists, get excited by a new toy and invest a lot of effort into creating proof-of-concept codes and use cases. Unfortunately, this sort of development is fundamentally unsustainable because of its nucleation in a vacuum, and in the case of Hadoop, researchers moved on to the next big thing and largely abandoned their model applications as the shine of Hadoop faded.”
“We really need to re-look at what the requirements are that will lead us all the way up to being able to support exascale deployments. One of these absolute requirements is CPU-fabric integration, because the performance that’s needed, the density, the power — these are all areas that have to be vastly improved to support exascale.”
“How can capital markets firms handle the computational challenges presented by regulatory mandates and big data? Chances are the solution will involve high-performance computing powered by parallelism, or the ability to leverage multiple hardware resources to run code simultaneously. But while hardware architectures have been moving in that direction for years, many firms’ software isn’t written to take advantage of multiple threads of execution.”