Exascale: A race to the future of HPC


Back in summer 2008, the first HPC system capable of sustaining more than 1 petaflop (10^15 floating-point operations per second [FLOPS], as measured with the HPL benchmark [1]) was launched. It debuted as the world's fastest supercomputer, ranked number one on the TOP500 list, with a power consumption of about 2.4 MW. It was considered a breakthrough achievement, and it marked the end of a series of evolutionary steps.
But shortly afterwards, while the first researchers were still adapting their codes to petascale systems, the HPC community began a serious discussion about the scientific benefits and design challenges of systems faster by a factor of 1,000, i.e., capable of reaching 1 exaflop (10^18 FLOPS). A paper published in fall 2009 listed the unprecedented opportunities for science as well as critical advances for U.S. energy needs and security [2]. One year later, in fall 2010, the Department of Energy's (DoE) Office of Science issued a detailed report on exascale computing [3].
Similar discussions also took place in China [4], Europe [5], and Japan [6], resulting in multiple independent roadmaps toward exascale.
More recently, the United States' National Strategic Computing Initiative (NSCI) [7] aims to drive a path to exascale through cooperation among the nation's technical leadership agencies, including the DoE and the National Science Foundation (NSF). In parallel, an initiative under Horizon 2020 (the EU Framework Programme for Research and Innovation) aims to deliver a broad spectrum of extreme-scale HPC systems and to develop a sustainable European HPC ecosystem [8].
It should be noted that exascale computing has to be seen in conjunction with Big Data, as the recent paper "Exascale Computing and Big Data" outlines in detail [9].
While the discussion continued, new number one systems emerged on the TOP500, each a step-function jump over its predecessor. The current number one system has a peak performance of 1/8 exaflop and sustains about 1/10 exaflop with HPL at a power consumption of about 16 MW [10].
And here is the challenge: How do we increase performance by about an order of magnitude while staying within the same power envelope? It is obvious that such a goal cannot be reached through further evolutionary steps; a technological transformation is needed across multiple aspects of the system architecture. We will address this step by step.
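To make the size of that step concrete, here is a minimal back-of-the-envelope sketch in Python. The power and performance figures are the ones cited above (about 16 MW for roughly 1/10 exaflop sustained); everything else, including the function name, is illustrative.

    # Back-of-the-envelope view of the exascale power challenge,
    # using only the figures cited in the text above.

    PJ_PER_J = 1e12  # picojoules per joule

    def pj_per_flop(power_watts: float, flops: float) -> float:
        """Energy spent per floating-point operation, in picojoules."""
        return power_watts / flops * PJ_PER_J

    # Current TOP500 number one: ~1/10 exaflop sustained (HPL) at ~16 MW.
    current = pj_per_flop(power_watts=16e6, flops=1e17)

    # Exascale goal: 1 exaflop sustained in the same ~16 MW power envelope.
    target = pj_per_flop(power_watts=16e6, flops=1e18)

    print(f"current: {current:.0f} pJ/FLOP")                 # -> 160 pJ/FLOP
    print(f"target:  {target:.0f} pJ/FLOP")                  # -> 16 pJ/FLOP
    print(f"required efficiency gain: {current/target:.0f}x")  # -> 10x

In other words, every floating-point operation must be delivered for about one tenth of the energy it costs today.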
With the end of Moore's Law [11], there are major difficulties to overcome. Compared to the current number one on the TOP500, we want to achieve a ten-fold improvement in computing performance with only a small increase in power. Thus, we have to find a way to deliver far more FLOPS per watt. This requires a number of changes in the way we design and provision large-scale computing systems: less emphasis on reaching peak FLOPS and more focus on holistic system design, with an enhanced memory subsystem and more energy-efficient data motion.
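One way to see why holistic design matters more than chasing peak FLOPS is the widely used roofline model, which caps attainable performance at the lesser of peak compute and memory bandwidth times arithmetic intensity. The sketch below applies that model; the node parameters and kernel intensities are hypothetical placeholders, not figures from this paper.

    # Minimal roofline-style sketch of why the memory subsystem matters
    # as much as peak FLOPS. All machine parameters below are
    # hypothetical placeholders, not figures from this paper.

    def attainable_gflops(peak_gflops: float, bandwidth_gbs: float,
                          intensity_flops_per_byte: float) -> float:
        """Roofline model: performance is capped either by peak compute
        or by memory bandwidth times arithmetic intensity."""
        return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

    peak = 3000.0      # hypothetical node peak, GFLOPS
    bandwidth = 100.0  # hypothetical DRAM bandwidth, GB/s

    # Arithmetic intensity at which the node stops being memory-bound:
    print(f"ridge point: {peak / bandwidth:.0f} FLOPs per byte moved")

    # A low-intensity stencil kernel is starved by data motion;
    # a high-intensity dense matrix kernel is not.
    for name, ai in [("stencil", 0.25), ("dense matrix multiply", 30.0)]:
        g = attainable_gflops(peak, bandwidth, ai)
        print(f"{name}: {g:.0f} of {peak:.0f} GFLOPS attainable")

Note that raising the hypothetical peak tenfold would leave the stencil kernel's attainable performance unchanged, which is exactly why the memory subsystem and data motion move to the center of exascale design.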
