There is a growing gap between the Top 10 systems and the rest of HPC. “The diversity is occurring now, but it is by no means the first time it has ever happened,” said Thomas Sterling.
Over at Fierce Government IT, David Perera writes that a June Energy Department report to Congress indicates that building a viable exascale supercomputer by 2022 will likely require at least $1 billion to $1.4 billion in funding and won’t occur in America unless federal agencies spend money on its development. Exascale suffers the problem of […]
In this video from the HPC Advisory Council Spain Conference 2013, Pedro J. Garcia presents: High-Performance Interconnection Networks on the Road to Exascale HPC: Challenges and Solutions. One of the challenges that interconnect researchers face today is how to efficiently interconnect the huge number of processors expected to be present in future exascale systems. […]
Two key projects within the US and Europe are attempting to solve the power challenge of exascale systems.
John Barr asks what standard approach the industry can agree on to make next-generation HPC systems easier to program.
When the architecture of high-performance computing (HPC) systems changes, the tools and programming paradigms used to develop applications may also have to change. We have seen several such evolutions in recent decades, including the introduction of multiprocessors, vector processors, heterogeneous processors used to accelerate applications, and cluster computing.
These changes, while providing the potential for delivering higher performance at lower price points, give the HPC industry a big headache. That headache is brought on by the need for application portability, which in turn relies on standard development tools that support a range of platforms. Without software standards supporting emerging platforms, independent software vendors (ISVs) are slow to target these new systems, and without a broad base of software the industry pauses while the software catches up.
An example of this was the transition from complex, expensive shared-memory systems to proprietary distributed-memory systems, and then to clusters of low-cost commodity servers. When each new system had its own message passing library, ISVs were reluctant to port major applications. It was only when MPI was widely accepted as the standard message passing library that the body of applications available for clusters started to grow, and cluster computing became the norm in mainstream HPC.
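To make the MPI point concrete, here is a minimal sketch (not taken from the article) of the kind of portable message-passing code the MPI standard made possible: the same source builds against any conforming MPI implementation, whether on a proprietary system or a commodity cluster.

```c
/* Minimal illustrative MPI program: rank 0 sends one integer to rank 1.
 * The portability that helped ISVs adopt clusters comes from the fact
 * that only standard MPI calls are used. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank == 0 && size > 1) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```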
So the burning question is: What standard approach can the industry agree on to make next-generation HPC systems easier to program, and therefore more attractive for ISVs to support?
As first reported here on insideHPC last week, Argonne National Laboratory is working on a prototype exascale operating system called Argo. This week, Argonne announced it has been awarded a $9.75 million grant from the DOE Office of Science to lead this multi-institutional research project. In Greek mythology, Argo (which means “swift”) was the ship used […]
Over at The Exascale Report, Mike Bernhardt writes that the future of U.S. competitiveness depends on HPC leadership, and we need the National Labs to get the country back on top. Exascale, and eventually zettascale, require long-term, dedicated research. Success will depend on collaboration among the labs’ researchers, shared experiences and results, along with […]