If countries in Asia are to be the first to develop exascale systems, technological breakthroughs will be vital. Increasing processor performance is critical, which means developers must constantly enhance memory technology and interconnects, integrate new functions into the processor, reduce power consumption, devise innovative cooling techniques, and identify new technologies that give software developers greater flexibility.
Over at Fierce Government IT, David Perera writes that a June Energy Department report to Congress indicates that building a viable exascale supercomputer by 2022 will likely require at least $1 billion to $1.4 billion in funding and won’t occur in America unless federal agencies spend money on its development. Exascale suffers the problem of […]
In this video from the HPC Advisory Council Spain Conference 2013, Pedro J. Garcia presents: High-Performance Interconnection Networks on the Road to Exascale HPC: Challenges and Solutions. One of the challenges that interconnect researchers face today is how to efficiently interconnect the huge number of processors expected to be present in future exascale systems. […]
Two key projects within the US and Europe are attempting to solve the power challenge of exascale systems.
John Barr asks what standard approach can the industry agree on to make next-generation HPC systems easier to program?
When the architecture of high-performance computing (HPC) systems changes, the tools and programming paradigms used to develop applications may also have to change. We have seen several such evolutions in recent decades, including the introduction of multiprocessors, vector processors, heterogeneous processors used to accelerate applications, and cluster computing.
These changes, while providing the potential for delivering higher performance at lower price points, give the HPC industry a big headache. That headache is brought on by the need for application portability, which in turn depends on standard development tools that support a range of platforms. Without software standards for emerging platforms, independent software vendors (ISVs) are slow to target these new systems, and without a broad base of software the industry pauses while the software catches up.
An example of this was the transition from complex, expensive shared-memory systems to proprietary distributed-memory systems, and then to clusters of low-cost commodity servers. While each new system had its own message-passing library, ISVs were reluctant to port major applications. Only when MPI was widely accepted as the standard message-passing library did the body of applications available for clusters start to grow, and cluster computing become the norm in mainstream HPC.
So the burning question is: what standard approach can the industry agree on to make next-generation HPC systems easier to program, and therefore more attractive for ISVs to support?
As first reported here on insideHPC last week, Argonne National Laboratory is working on a prototype Exascale operating system called Argo. This week, Argonne announced it has been awarded a $9.75 million grant from the DOE Office of Science to lead this multi-institutional research project. In Greek mythology, Argo (which means “swift”) was the ship used […]