

Exascale – A Race to the Future of HPC

Whenever a milestone is reached in the High Performance Computing (HPC) world, the next one is quickly announced, with anticipated dates to reach the new computing plateau. From Megaflops to Gigaflops to Teraflops to Petaflops, and soon to Exaflops, the march is always moving ahead. This whitepaper looks at the future of HPC.

There are a number of challenges to reaching a sustainable and affordable exaflop computing system. Yes, an exaflop system could be designed today, given enough money and enough power delivered to the datacenter, but it would be quite costly and difficult to manage. One could extrapolate from today's petaflop systems to reach the exaflop performance range, but managing so many systems (and cores), as well as the infrastructure, including networking, would be prohibitive.
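A quick back-of-envelope extrapolation shows why simply scaling out today's systems is prohibitive. The per-node performance figure below is an illustrative round number, not a measurement of any specific machine:

```python
# Illustrative extrapolation from petaflop to exaflop scale.
# The 10 TFLOPS-per-node figure is a hypothetical assumption chosen
# for round numbers, not data from any real system.

EXAFLOP = 1e18   # 10^18 floating-point operations per second
PETAFLOP = 1e15  # 10^15 floating-point operations per second

node_flops = 10e12  # assume each node sustains 10 teraflops

nodes_for_exaflop = EXAFLOP / node_flops    # 100,000 nodes
nodes_for_petaflop = PETAFLOP / node_flops  # 100 nodes

print(f"Nodes for 1 petaflop: {nodes_for_petaflop:,.0f}")
print(f"Nodes for 1 exaflop:  {nodes_for_exaflop:,.0f}")
```

Under these assumptions, reaching an exaflop means managing a thousand times more nodes than a petaflop system built from the same hardware, which is exactly the management and infrastructure problem the paragraph above describes.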

Moore’s Law is frequently cited as a bottleneck to achieving exascale computing, yet the number of transistors per chip continues to grow. The real constraint is that moving data between the CPU and the memory subsystem has not kept pace with CPU demands. New approaches are needed to move data back and forth, and memory-centric computing will need to be developed and commercialized in order to create a system that can sustain this level of performance across a wide range of applications.

Networking and associated technologies will need to be developed as well. Since an exaflop-class system will have to rely on hundreds of thousands of cores, efficient networking, in terms of both performance and electricity consumption, will need to be architected, designed, and implemented.

Reliability, availability, and serviceability (RAS) is another key issue that will need to be addressed. A machine containing so many components will almost certainly have some percentage of them down at any given time. The system will need to continue running and performing as expected while compute, storage, and networking elements fail, with automated workarounds already defined.

The programming challenges need to be addressed as well. To achieve the expected performance of such an expensive system, hundreds of thousands to millions of threads will need to work together. Programmers will need new tools to develop these applications, and operating systems will need new APIs to support development at such a massive scale.
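The core of that programming challenge is decomposition: one problem must be split into pieces that huge numbers of workers compute independently and then combine. A minimal sketch of the pattern, using the standard-library multiprocessing pool as a toy stand-in for an exascale runtime (the worker count here is an arbitrary small assumption):

```python
# Minimal decompose-and-combine sketch: split a sum across workers,
# each computing a partial result, then reduce. An exascale application
# must do the same thing across millions of threads instead of 8.
from multiprocessing import Pool


def partial_sum(bounds):
    """Each worker sums only its own slice of the problem."""
    lo, hi = bounds
    return sum(range(lo, hi))


if __name__ == "__main__":
    n = 1_000_000
    workers = 8                 # toy worker count; exascale needs ~10^6 threads
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    # Check against the closed-form sum 0 + 1 + ... + (n-1).
    assert total == n * (n - 1) // 2
    print(total)
```

The hard exascale problems are what this sketch hides: keeping millions of workers load-balanced, tolerating the failure of some of them mid-run, and minimizing the data movement between partial results.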

This whitepaper details some of the technical challenges that will need to be addressed in the coming years to get to exascale computing. HPE is leading the way with innovative research and technologies aimed at this next major performance milestone. Download the whitepaper to learn more about the challenges and some of the solutions that will lead the way. Get it now.
