In this video from the HPC Advisory Council Spain Conference 2013, Gilad Shainer from HPCAC describes the goals of the conference series.
Two key projects within the US and Europe are attempting to solve the power challenge of exascale systems.
John Barr asks what standard approach can the industry agree on to make next-generation HPC systems easier to program?
When the architecture of high-performance computing (HPC) systems changes, the tools and programming paradigms used to develop applications may also have to change. We have seen several such evolutions in recent decades, including the introduction of multiprocessors, vector processors, heterogeneous accelerators, and cluster computing.
These changes, while offering the potential for higher performance at lower price points, give the HPC industry a big headache. That headache is brought on by the need for application portability, which in turn requires standard development tools that support a range of platforms. Without software standards for emerging platforms, independent software vendors (ISVs) are slow to target the new systems, and without a broad base of software the industry pauses while the software catches up.
An example of this was the transition from complex, expensive shared memory systems to proprietary distributed memory systems, and then to clusters of low-cost commodity servers. As long as each new system had its own message passing library, ISVs were reluctant to port major applications. It was only when MPI was widely accepted as the standard message passing library that the body of applications available for clusters started to grow, and cluster computing became the norm in mainstream HPC.
So the burning question is: What standard approach can the industry agree on to make next-generation HPC systems easier to program, and therefore more attractive for ISVs to support?
As first reported here on insideHPC last week, Argonne National Laboratory is working on a prototype Exascale operating system called Argo. This week, Argonne announced it has been awarded a $9.75 million grant from the DOE Office of Science to lead this multi-institutional research project. In Greek mythology, Argo (which means “swift”) was the ship used […]
Over at The Exascale Report, Mike Bernhardt writes that the future of U.S. competitiveness depends on HPC leadership, and we need the National Labs to get the country back on top. Exascale, and eventually zettascale, require long-term, dedicated research. Success will depend on collaboration among the labs’ researchers, shared experiences and results, along with […]
While Exascale computing may be years away, Pete Beckman from Argonne and a team of researchers are already looking at what the operating system might look like. At the recent Runtime and Operating Systems for Supercomputers event in Oregon, Beckman described the Argo OS project and how it might handle an Exascale architecture with large […]
As we approach Exascale levels of computing over the next decade, the ever-increasing number of components in supercomputers means that something somewhere is going to break at any time. To tackle this resiliency problem, Mattan Erez from the University of Texas at Austin and his colleagues are collaborating with Cray to develop a new approach […]
In this podcast, the Radio Free HPC team discusses interconnect requirements for applications outside of the current Exascale mission profile. Henry is concerned that these hyperscale machines are getting so physically large that the latency will be a showstopper for applications that have to wait for data. The speed of light is not going to change, […]
In this podcast, the Radio Free HPC team looks at the mission parameters driving Exascale development. Henry is concerned that a system designed for DOE applications will be tailored to that particular mission profile and therefore not perform well on many of today’s applications. Will the TOP 10 Exascale applications all be about […]
Over at ACM.org, Dan Reed from the University of Iowa writes that the story of Exascale won’t go forward without an inciting incident. It seems increasingly doubtful that we can reach the exascale destination by incrementalism. Instead, radical innovations in semiconductor processes, computer architecture, system software and programming systems are needed. Simply put, we are […]