Exascale Computing: A Race to the Future of HPC


In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution.

In the summer of 2008, the first high-performance computing (HPC) system capable of sustaining more than 1 petaflop was launched. Named the world’s fastest supercomputer on the TOP500 list, the system was considered a breakthrough achievement resulting from decades of research and development.


Nicolas Dube, Chief Strategist for HPC, Hewlett Packard Enterprise

Shortly after, while the first researchers were adapting their codes to petascale systems, the HPC community began a serious discussion about the scientific benefits and design challenges of exascale. In 2009, the Big Data and Extreme-Scale Computing (BDEC) initiative outlined unprecedented opportunities for advancement in science, energy, and security if certain architecture, software, and programming challenges could be resolved. More recently, the United States’ National Strategic Computing Initiative (NSCI) was created to promote cooperation among the nation’s leading technical agencies, including the Department of Energy (DoE) and the National Science Foundation (NSF), to deliver a broad spectrum of extreme-scale HPC systems and develop a path to exascale.

While the discussion continued, new number-one systems emerged on the TOP500 list in a series of step-function jumps. The current number-one system has a peak performance of 1/8 exaflop and sustains about 1/10 exaflop while consuming roughly 16 megawatts.

And here is the challenge: How do we increase performance by about an order of magnitude while staying in the same power envelope? Such a massive goal cannot be reached through additional evolutionary steps – it requires a technological transformation of system architectures.
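
To put rough numbers on that gap, here is a back-of-the-envelope sketch in Python. It uses the figures cited above (about 1/10 exaflop sustained at roughly 16 megawatts); the ~20 megawatt exascale power envelope is an assumption for illustration, not a figure from this article.

```python
# Back-of-the-envelope efficiency math using the figures cited above.
# The ~20 MW exascale power target is an assumed, commonly discussed
# number, not a figure from this article.

current_sustained_pflops = 100      # ~1/10 exaflop sustained
current_power_mw = 16               # roughly 16 megawatts

exascale_sustained_pflops = 1000    # 1 exaflop sustained
exascale_power_mw = 20              # assumed power envelope

# 1 PFLOPS per MW is exactly 1 GFLOPS per watt, so the ratios below
# come out directly in GFLOPS/W.
current_eff = current_sustained_pflops / current_power_mw      # ~6.3 GFLOPS/W
required_eff = exascale_sustained_pflops / exascale_power_mw   # 50 GFLOPS/W

print(f"Current efficiency:  {current_eff:.1f} GFLOPS/W")
print(f"Required efficiency: {required_eff:.1f} GFLOPS/W")
print(f"Improvement needed:  {required_eff / current_eff:.1f}x")
```

Even with generous rounding, sustained performance per watt has to improve by roughly a factor of eight, which is why incremental steps will not get there.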

To achieve exascale computing, researchers are placing less emphasis on reaching peak FLOPS and more on holistic system design and energy-efficient data motion. Compared with the current number-one system on the TOP500, the objective is a ten-fold improvement in computing performance with only a small increase in power. Researchers are aggressively striving to deliver FLOPS with less energy consumed and are implementing a number of design changes to large-scale computing systems. For example, they are leveraging optical technologies such as silicon photonics to drive increased input and output to computing elements without exploding the energy budget, a potential avenue for achieving large-scale data motion at lower energy cost.
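
As a hedged illustration of why data motion dominates that budget, the sketch below estimates the power consumed just moving data off-chip at exascale. Every parameter (bytes moved per flop, picojoules per bit) is an assumed, round illustrative number, not a figure from this article.

```python
# Illustrative only: bytes moved per flop and picojoules per bit are
# assumed round numbers chosen to show the shape of the problem, not
# measurements from this article.

sustained_flops = 1e18        # 1 exaflop sustained
bytes_per_flop = 0.1          # assumed off-chip traffic per floating-point op
pj_per_bit = 10               # assumed energy to move one bit off-chip

watts_for_data_motion = sustained_flops * bytes_per_flop * 8 * pj_per_bit * 1e-12
print(f"Data motion alone: ~{watts_for_data_motion / 1e6:.0f} MW")  # ~8 MW

# Halving the per-bit cost (the promise of optical links such as silicon
# photonics) halves that figure, which is the point of attacking data
# motion rather than peak FLOPS.
```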


Reliability, availability, and serviceability (RAS) is a key issue surrounding exascale system design. Supercomputing at such a scale must be able to tolerate some degree of hardware failure without inhibiting workload execution. The ultimate goal is to provide exascale computing on a continual basis without interruption; therefore, significant improvements to existing RAS systems must be made to predict and prevent crashes and increase the overall resilience of exascale systems.
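
The article does not prescribe a particular resilience technique, but one classic way to see the pressure on RAS is Young's approximation for the optimal checkpoint interval, sketched below with assumed, illustrative numbers: as the system-wide mean time between failures (MTBF) shrinks with growing component counts, the share of machine time lost to checkpointing and recomputation grows quickly.

```python
import math

# Young's approximation for the optimal checkpoint interval:
#   t_opt = sqrt(2 * C * MTBF)
# where C is the time to write one checkpoint and MTBF is the system-wide
# mean time between failures. All numbers below are assumed for illustration.

def checkpoint_cost(checkpoint_minutes, mtbf_hours):
    c = checkpoint_minutes * 60.0
    mtbf = mtbf_hours * 3600.0
    t_opt = math.sqrt(2.0 * c * mtbf)   # seconds of compute between checkpoints
    # Approximate fraction of wall-clock time lost to writing checkpoints
    # plus recomputing work lost to failures.
    waste = c / t_opt + t_opt / (2.0 * mtbf)
    return t_opt / 60.0, waste

for mtbf_hours in (24, 6, 1):           # MTBF shrinks as component counts grow
    interval, waste = checkpoint_cost(checkpoint_minutes=10, mtbf_hours=mtbf_hours)
    print(f"MTBF {mtbf_hours:>2} h -> checkpoint every {interval:5.1f} min, "
          f"~{waste:.0%} of machine time lost")
```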

Another challenge on the path to exascale is the limited memory capacity and bandwidth of current systems. Future HPC platforms will demand both greater memory capacity and higher bandwidth to operate efficiently. To boost memory performance, high-bandwidth memory stacks are emerging. Yet this solution comes with a conundrum: high-bandwidth memory offers smaller capacity, while non-volatile memory (NVM) offers large capacity but lower bandwidth.
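
A deliberately crude two-tier model, sketched below with assumed capacities and bandwidths, shows why neither technology alone is sufficient: effective bandwidth collapses as soon as the working set spills out of the high-bandwidth tier, while the fast tier alone cannot hold exascale-sized problems.

```python
# A crude two-tier memory model with assumed, illustrative numbers: a small
# high-bandwidth memory (HBM) tier backed by a large but slower NVM tier.

HBM_CAPACITY_GB = 16     # assumed per-node HBM capacity
HBM_BW_GBS = 500         # assumed HBM bandwidth
NVM_CAPACITY_GB = 1024   # assumed per-node NVM capacity
NVM_BW_GBS = 50          # assumed NVM bandwidth

def effective_bandwidth(working_set_gb):
    """Blend of the two tiers, assuming data is streamed once from wherever
    it happens to reside (deliberately simplistic)."""
    hbm_part = min(working_set_gb, HBM_CAPACITY_GB)
    nvm_part = max(working_set_gb - HBM_CAPACITY_GB, 0)
    seconds = hbm_part / HBM_BW_GBS + nvm_part / NVM_BW_GBS
    return working_set_gb / seconds

for ws in (8, 16, 64, 512):
    print(f"{ws:4d} GB working set -> ~{effective_bandwidth(ws):4.0f} GB/s effective")
```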

The rapidly increasing volume, variety, and velocity of data is making Big Data too big for current computing systems to analyze effectively. Enterprises must rapidly enhance their programming capabilities to handle this influx of data.

A leadership position in the HPC market gives Hewlett Packard Enterprise (HPE) the unique set of capabilities needed to drive innovation in the future of computing. Backed by the largest server revenue in the IT industry, HPE’s targeted research and development spending will fuel innovation that benefits exascale while also bringing HPC technologies to a point of affordability and availability for every enterprise.

To help address the many challenges on the path to exascale, HPE is also leading an industry-wide approach that will revolutionize system architecture. The development of a new, open protocol, temporarily dubbed the next-generation memory interface (NGMI), will increase flexibility when connecting memory devices, processors, accelerators, and other components, allowing the system architecture to better adapt to any given workload. Emerging NVM technologies will then power high-performance computing in a persistent, more energy-efficient way.
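
NGMI itself is only sketched publicly, so the snippet below is not that protocol; it is merely a hedged illustration of the memory-semantic, load/store style of access that byte-addressable NVM enables, using an ordinary memory-mapped file ("/tmp/nvm_demo.bin" is a stand-in path) in place of real persistent-memory hardware.

```python
import mmap

# Not the NGMI protocol itself; just an illustration of memory-semantic,
# load/store-style access to persistent storage. A plain memory-mapped file
# stands in for NVM hardware, which an operating system would typically
# expose directly (e.g., as a DAX-mapped device).

PATH = "/tmp/nvm_demo.bin"
SIZE = 4096

# Create a backing file of the right size.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)
    buf[0:5] = b"hello"   # "store": plain byte-level writes, no read()/write() calls
    buf.flush()           # make the update durable
    print(buf[0:5])       # "load": plain byte-level reads
    buf.close()
```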

The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. An open architecture will help developers foster a vibrant innovation ecosystem and drive the industry to rethink how next-generation computing systems will be built.

As exponential data growth reshapes industry, engineering, and scientific discovery, success hinges on the ability to analyze and extract insight from incredibly large datasets. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.

Nicolas Dube is Chief Strategist for HPC, working on strategic engagements within the HyperScale Business Unit at Hewlett Packard Enterprise. He is chartered with the architecture of superscale systems for both HPC customers and service providers. He also works on HP’s next-generation computing platform design, leveraging combined experience in server and datacenter engineering. A member of The Green Grid and other energy-efficiency groups, Nicolas advocates for a “greener” IT industry, leveraging warm-water cooling, heat reuse, and low-carbon energy sources while pushing for dramatically more efficient computing platforms.

At Hewlett Packard Enterprise, innovation is our legacy and our future. Our unmatched ability to bring new technology to the mainstream will provide systems that are markedly more affordable, usable, and efficient at handling growing workloads. We stand at the forefront of the next wave of computing, all the way to exascale. See what the future of HPC has in store.