Exascale: Raising the Stakes


The race is on, says Mike Bernhardt from The Exascale Report.

Following the progress on the road to exascale is an interesting and entertaining exercise as more players get involved, predictions grow more aggressive, and politics and national pride take charge. And of course, the hype meter is overworked as more and more companies claim to have the answers.

Funds committed to reaching exascale seem to be growing by the week. But let’s not forget – funding commitments don’t always result in cheques being written. It is still far too early to determine who will demonstrate the financial staying power necessary to bring the first exascale-class system to market.

The cultures and government infrastructures of China and Japan represent incredibly powerful forces capable of aligning resources, financial and otherwise, to hold a steady course over the next eight years (the approximate timeframe for the arrival of exascale). Neither the spirit of commitment attached to national pride nor the capability for technological innovation in these countries should be taken lightly. They are indeed the front runners for anyone betting on this race. As far back as June of 2010, we saw this quote in The Exascale Report from an anonymous contributor in China identified as Mr Zheng: ‘I think it would be great if the first exascale computer had a very large engraved tag that said, “Made in China”.’

Europe is also looking very strong in the race right now. The European Commission has recently decided to double its stake to 1.2 billion euros, and Intel and others have pooled resources under the umbrella of exascale research centres in various European locations. Intel, by the way, has committed to delivering an exascale system by 2018, within the target power envelope of 20MW.

Just under two years ago, Peter Kogge, retired IBM Fellow and McCourtney Chair in Computer Science and Engineering at the University of Notre Dame, stated that development of exascale systems would require not just effort, but ‘miracles’. Today, most people equate ‘miracles’ with ‘funding’, as estimates of exascale system development costs climb beyond the $1 billion mark.

And while details have not been disclosed by the Indian government, it is now widely reported that India is committing close to $1 billion to join the race. But don’t place your wagers on this race just yet. The other two participants are Russia and the US.

Russia has been very quiet about its exascale plans. But the country has been steadily moving up the food chain when it comes to HPC technology. T-Platforms, Russia’s leading developer of supercomputers, recently announced that the Russian bank Vneshekonombank (VEB) had purchased 25 per cent of its shares, and it’s no secret that the company has been developing an exascale strategy for the past several years.

And that leaves the US.

Never to be counted out, the US is hoping the leadership of its Department of Energy, currently under Bill Harrod, will take the country from underdog to serious contender. However, its current funding commitments fall far short of the others in this race. Even if you piece together the various pockets of funding that could go toward exascale, the total is only in the range of $100 million.

As recently as January 2012, Professor Thomas Sterling, a well-known and widely quoted HPC evangelist, warned that if the US did not change course, it would ‘not be prepared for exascale computing systems and applications and will have left open the opportunity for others in the international community to own the space and dictate their new standards. We (the US) will have become a nation of domesticated users as we have in the economic product domain of consumer electronics.’

So, the race is on, and this next year will prove to be quite interesting as we see who is really willing to write those cheques. But thankfully, not all of the focus is on hardware development for these massive systems. A new benchmark has been established to help prepare the next generation of applications for parallelism at a level we’ve never before imagined. In the current issue of The Exascale Report, we have several community thought leaders expressing their opinions about the importance of this new benchmark, known as the Graph 500.

The Graph 500 has actually been around for several years, but the recent attention to ‘Big Data’ has energised interest in this benchmark, which is designed to measure the performance of data movement rather than just raw compute speed. The Graph 500 is a nice complement to the Top500 list, as application developers start to focus less on FLOPS (Floating Point Operations Per Second) and more on TEPS (Traversed Edges Per Second).
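To make the distinction concrete, here is a minimal Python sketch of the idea behind TEPS; it is not the official Graph 500 reference code, and the function name and toy graph are purely illustrative. The figure is derived by timing a breadth-first search over a graph and dividing the number of edges traversed by the elapsed time:

import time
from collections import deque

def bfs_teps(adjacency, source):
    # Run a breadth-first search from 'source' over an adjacency-list
    # graph and return traversed edges per second for that search.
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            edges_traversed += 1  # every edge inspection counts
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed

# Toy graph; real Graph 500 runs use enormous synthetic Kronecker
# graphs and define the counting rules precisely.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print('TEPS:', bfs_teps(graph, 0))

Because nearly all of the time in such a search is spent chasing pointers through memory rather than doing arithmetic, the score rewards exactly the data-movement capability the benchmark is meant to expose.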

If we don’t start to explore new areas of application and algorithm development now, we’ll have only some very expensive stunt machines to look forward to in the 2018-2020 timeframe. Therein lies the importance of looking at performance differently, as the Graph 500 encourages us to do.

Steve Scott, CTO of Nvidia’s Tesla business, offers the following perspective: ‘The Graph 500 is important, as it stands as a proxy (albeit an overly simple one) for an emerging class of graph analytics and Big Data problems. Graph analytic techniques hold tremendous promise for performing complex analysis of large, unstructured datasets. They are of increasing interest in several markets, including defence and intelligence, business intelligence and optimisation of processes in complex networks such as transportation, electrical grid, etc.’

Graph 500 is a win/win for HPC. It’s a benchmark that will enable us to bring a new class of applications into HPC, and a great step towards enabling the global community to prepare the next generation of scientific discovery and of life- and commerce-changing applications.

Regardless of the ‘race’ and who is ultimately declared the winner, we need to think ahead to how these systems might be used, and we need to start preparing applications for exascale-class systems today.

This story originally appeared on HPC Projects. It appears here as part of a cross-publishing agreement with Scientific Computing World.