Mark Seager on Why the Best is Yet to Come for HPC


In this special guest feature from Scientific Computing World, Intel’s Mark Seager, CTO of Technical Computing Ecosystem, writes that although advances in HPC have been stellar, there is even more still to come.

Mark Seager, CTO of Technical Computing Ecosystem at Intel

The single most important truth about high-performance computing (HPC) over the next decade is that it will have a more profound societal impact with each passing year. The issues that HPC systems address are among the most important facing humanity: disease research and medical treatment; climate modeling; energy discovery; nutrition; new product design; and national security. In short, the pace of change and of enhancements in HPC performance – and its positive impact on our lives – will only grow.

This phenomenon stems from what can generally be called ‘predictive scientific simulation,’ which has revolutionized the scientific method itself. Since Galileo first turned a telescope on the moons of Jupiter 405 years ago, scientists have moved research forward through theory and experiment. Now the pace of scientific discovery has been radically accelerated as theory and experiment are augmented by predictive scientific simulations running on parallel supercomputers.

Scientific simulations inform theory and experiment in three ways. ‘Hero’ simulations typically use the entire system for a single record-setting run and produce the highest-fidelity results. ‘Ensemble’ simulations are groups of runs executed in throughput mode, typically using major fractions of the system; they show how sensitive the hero results are to inputs and model parameters, allowing one to estimate confidence in those results (e.g. error bars). Lastly, the rapid analysis of large experimental data sets increases the usefulness of those experiments. These procedures enable significant savings through informed decisions and actions in the real world.
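To make the ensemble step concrete, here is a minimal sketch in C, using entirely hypothetical numbers: given the outputs of several independent ensemble runs, it computes a mean and a sample standard deviation, which serves as the error bar attached to the hero result.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical outputs of five independent ensemble runs, each
       perturbing the inputs or model parameters of the hero simulation. */
    double results[] = {1.02, 0.97, 1.05, 0.99, 1.01};
    size_t n = sizeof results / sizeof results[0];

    double mean = 0.0, var = 0.0;
    for (size_t i = 0; i < n; ++i) mean += results[i];
    mean /= (double)n;
    for (size_t i = 0; i < n; ++i)
        var += (results[i] - mean) * (results[i] - mean);
    var /= (double)(n - 1);   /* sample variance */

    /* The spread across the ensemble is the error bar on the hero result. */
    printf("mean = %.3f +/- %.3f (1 sigma)\n", mean, sqrt(var));
    return 0;
}
```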

The effect of scientific simulations on the scientific method has been enabled by the astonishing increase in the computational power of supercomputer systems. Over the past 20 years – which is a heartbeat compared to the long span of the scientific method (and even shorter when compared to the evolution of biological systems or to the timescales of geological change) – supercomputers have been transformed in many ways.

Here’s a quick analysis of the Top500 list of supercomputers, comparing Thinking Machines’ CM-5/1024 of 1993 to China’s Tianhe-2 of 2013:

  • Performance: Systems have exploded from a top performance of 59.7 GigaFlop/s to 33.9 PetaFlop/s, an improvement by a factor of 568,000.
  • Cost: System prices have increased from $30m for the CM-5/1024 to $390m (USD) for Tianhe-2. On a $/FLOP/s basis, that is a 44,700-fold improvement.
  • Power Consumption: System power requirements have jumped from 96.5 kW for the CM-5/1024 to 17.6 MW for Tianhe-2, an increase of roughly 180 times (a quick check of these ratios follows the list).
  • Hardware parallelism: Over the last several years most performance improvements have come from adding parallelism: both more processors per system and more cores/threads per processor. The CM-5 of 20 years ago had 1,024 single-core processors, whereas Tianhe-2 has 32,000 Intel Xeon processors, each with 12 cores, and 48,000 Intel Xeon Phi coprocessors, each with 60 cores. This is an increase in parallelism of 3,234 times (ignoring vectorization).
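As a back-of-the-envelope check, this small C snippet reproduces the ratios above directly from the published figures (the power ratio is computed from the two numbers as quoted):

```c
#include <stdio.h>

int main(void) {
    /* Figures from the Top500 comparison above. */
    double cm5_flops = 59.7e9,  th2_flops = 33.9e15;  /* FLOP/s */
    double cm5_cost  = 30e6,    th2_cost  = 390e6;    /* USD    */
    double cm5_power = 96.5e3,  th2_power = 17.6e6;   /* Watts  */

    printf("performance: %.0fx\n", th2_flops / cm5_flops);   /* ~568,000x */
    printf("FLOP/s per dollar: %.0fx\n",
           (th2_flops / th2_cost) / (cm5_flops / cm5_cost)); /* ~44,000x  */
    printf("power: %.0fx\n", th2_power / cm5_power);         /* ~182x     */
    return 0;
}
```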

This is an astounding performance increase that will continue apace as we reach Exascale levels of performance by about 2022.

Progress across the HPC landscape is, of course, uneven, and advances in one sector create challenges in another. For example, as HPC processing power continues to grow rapidly, there is a concomitant need to modernize, or parallelize, software applications. Many of our most important HPC codes, in the public domain as well as in commercial applications, have not been parallelized beyond MPI (e.g. with SMP parallelization and vectorization) and so cannot fully exploit supercomputers composed of hundreds of thousands of processors with large numbers of vector-parallel cores/threads. Fortunately, code modernization is starting to receive the attention and resources it so urgently needs. A node-level sketch of what such modernization involves follows.
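As an illustration of ‘parallelization beyond MPI,’ here is a minimal sketch in C: each MPI rank would call a routine like this on its local slice of the data, with OpenMP spreading the loop iterations across the node’s cores and the simd clause inviting the compiler to use the vector units. The saxpy kernel is a stand-in for illustration, not taken from any particular application.

```c
/* Compile with, e.g., cc -O2 -fopenmp saxpy.c */
#include <stddef.h>

/* y = a*x + y over the rank-local portion of the data.
   'parallel for' uses all cores/threads on the node (SMP parallelism);
   'simd' asks the compiler to vectorize each thread's chunk. */
void saxpy(size_t n, float a, const float *restrict x, float *restrict y) {
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```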

As for the explosion in power consumption, this is being addressed in a number of ways. For example: the Intel Xeon Phi product line delivers exceptional FLOP/s per Watt; silicon photonics provides vastly improved energy per bit moved; stacked memory significantly improves memory bandwidth at constant power; water cooling and other system design enhancements reduce the power required to cool supercomputers.

These and other issues can and must be overcome because the revolution in the scientific method has elevated the economic value of scientific discovery and HPC systems to the level of national security. We used to say ‘to out-compete is to out-compute,’ as though supercomputers delivered a favorable advantage and little more. Now supercomputers and superiority in predictive scientific simulation are fundamentally bound up with the economic security of nations and entire regions.

Thus, individual countries and aggregations of nation states, such as the EU, are spending billions to develop, acquire, and leverage leadership-class HPC systems and their predictive scientific and engineering capabilities. To concede HPC computing superiority to competing countries and regions would be an abdication of responsibility by national leaders.

Look at the enormous impact of HPC on science, engineering and national security:

  • Personal genomics, providing the ability to tailor drug treatment regimens for cancer and other diseases to an individual’s own genetic profile. This will end one-size-fits-all medicine and move us toward individualized treatments.
  • Better batteries: a five-fold improvement in power density and a five-fold reduction in cost over the next five years will mean more energy-efficient transportation, better use of alternative energy sources, and a lower environmental cost for keeping us warm, working and moving.
  • Improved estimates on global climate changes and their attendant impacts on water resources, agriculture, real estate, political systems and human migration.
  • Democratization and decentralization of manufacturing through the ‘maker movement,’ enabled by increasingly accessible technical computing.
  • Optimization of the IoT (Internet of Things), giving jet engines a 3-5 per cent efficiency improvement, resulting in billions of gallons of fuel saved per year. This extends to the optimization of refrigerators and air conditioners, reducing power demand when the electrical grid is under the greatest strain.
  • Many areas of national security, such as major enhancements in military aircraft, ship and radio antenna design, as well as the meshing and geometry generation used in the representation of weapons systems.

As HPC technology advances, problems will always arise. But the innovative genius resident in the HPC community continues to overcome these issues. It’s a continual case of many steps forward for every step back. As we look ahead at the challenges facing humanity, supercomputers are and will be a critical element showing us the way toward healthy, effective solutions. Code modernization will ensure our success.

The best part of the great ride we’re on is still to come.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.