On the Road to Exascale: The Challenges of Portable Heterogeneous Programming

We heard some very good reviews of a talk given by Doug Miles of The Portland Group at the biennial Clouds, Clusters and Data for Scientific Computing technical meeting outside Lyon, France in mid-September. Most of the talks from that meeting are available online at the CCDSC 2012 website, but the PGI talk did not include any slides. PGI has provided The Exascale Report with a copy of the transcript from the talk, which we have reproduced here with a few minor edits.

Most of today’s CPU-only large-scale systems have a similar look-and-feel: many homogeneous nodes communicating via MPI, each node with a few identical processor chips, each chip with multiple identical cores, and each core with some SIMD processing capability. Programming one such system is much like programming any other, regardless of chip vendor, total number of cores, number of cores per node, SIMD width or interconnect fabric.

Setting aside accelerator-enabled systems for a minute, how did we get here? How did we reach this level of homogeneity from such heterogeneous HPC system roots? 25 or 30 years ago we had vector machines, VLIW machines, SMP machines, massively parallel SIMD machines, and literally scores of different instruction set architectures. How did systems become so homogeneous?

Penny Wise and Pound Foolish: the collateral damage caused by the GSA’s conference scandal

By: Mike Bernhardt & Doug Black

Last year, the annual global supercomputing conference, SC11, was packed with inspiring and encouraging demonstrations of scientific discovery, leading-edge research and new technology prototypes. This year it will be quite different. The conference will still be crowded with attendees and undoubtedly buzzing with excitement, but it’s the long-term impact on U.S. technology leadership we should all be concerned with.

Travel and participation requirements imposed on government employees have caused a number of government labs to cancel their plans for exhibit hall booths at SC12, leaving the demonstrations of their latest work and research efforts back in the gray cubicles of their government offices.

Exascale at SC12

By Rajeev Thakur

The international HPC community is actively working toward developing the next generation of high-performance computers that will be capable of 1 exaflop/s (10^18 floating-point operations per second) or more of performance. These activities of the community are well represented in the program of the SC12 conference—the premier international conference on high-performance computing, networking, storage, and analysis.

The U.S. Presidential Election – What’s at stake for HPC and Exascale?

Article by Mike Bernhardt and Doug Black

This question generated a flood of responses — and most asked to remain anonymous. It appears that politics sometimes conflicts with free speech, especially if you work for an organization that relies on federal funding. So when you see an unattributed comment, we are simply honoring the respondent’s request.

Thanks to everyone who contributed to this issue. We can’t possibly use all the comments we received, and quite frankly, some of the responses, as much as they made us laugh, really wouldn’t be appropriate to print.

An Appeal to the Office of Science and Technology Policy

Imagine the SC conference without the National Labs being present. Kind of hard to do, isn’t it?
While U.S. federal agencies combined represent less than 10 percent of SC attendees, those attendees, and the exhibits demonstrating the work being done in the labs, represent the very heart of the SC conference.

Recently, four of the HPC community’s cornerstone organizations, the US Public Policy Council of ACM (USACM), the Computing Research Association (CRA), the Society for Industrial and Applied Mathematics (SIAM), and the Institute of Electrical and Electronics Engineers, Inc. – USA (IEEE-USA), sent an appeal to the Office of Science and Technology Policy asking it to reconsider these travel restrictions.

A Shining Star Lights Up the Road to Exascale

While we have more than our share of stories talking about frustration and politics negatively impacting the race to exascale, there are several bright spots that deserve a round of applause.

One such shining star, the DEEP project, comes from the Jülich Research Centre, nestled in the heart of the Stetternich Forest in Jülich.

DEEP is one of the European responses to the Exascale challenge.

An Interview with AMD’s John Gustafson

It seems everyone in HPC is familiar with Moore’s Law. But just in case you missed that one, Moore’s Law refers to the observation made in 1965 by Intel co-founder Gordon E. Moore that the number of transistors on integrated circuits doubles approximately every two years.

Then there is another important, but less often quoted, HPC observation known as Amdahl’s Law. This one is named after computer architect Gene Amdahl and is used to determine the maximum expected speedup for a fixed-size problem when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup achievable with multiple processors.

And finally, there is one more that is perhaps not as widely known or cited, but extremely important and relevant for several reasons: Gustafson’s Law. It addresses a shortcoming of Amdahl’s Law, which assumes a fixed problem size and so does not fully exploit the computing power that becomes available as the number of machines increases. Gustafson’s Law instead observes that programmers tend to scale the size of their problems to use the available equipment, solving larger problems within a practical fixed time. Therefore, if faster and more parallel systems are available, larger problems can be solved in the same time.
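The contrast between the two laws is easiest to see in a quick calculation. The sketch below is our own illustration, not from the article; the 95 percent parallel fraction is an arbitrary assumption chosen to make the difference visible.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup of a fixed-size problem on n processors,
    where p is the fraction of the work that can run in parallel."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p: float, n: int) -> float:
    """Gustafson's law: scaled speedup when the problem size grows
    with n, so the parallel portion p fills the added capacity."""
    return (1.0 - p) + p * n

if __name__ == "__main__":
    # Compare the two predictions as the processor count grows.
    for n in (16, 256, 1024):
        print(f"n={n:5d}  Amdahl={amdahl_speedup(0.95, n):8.2f}  "
              f"Gustafson={gustafson_speedup(0.95, n):8.2f}")
```

With p = 0.95, Amdahl's speedup saturates below 20 no matter how many processors are added, while Gustafson's scaled speedup keeps growing almost linearly with n.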

The U.S. presidential election has left HPC research an orphan

A Feature Guest Commentary

What’s at stake? What impact will the U.S. presidential election have on the global HPC community and the race to exascale?
The immediate impact of the U.S. presidential election is not about which party will win but rather that the system of responsible government is out to lunch. In spite of stated support for HPC in general and exascale in particular by the current administration, the U.S. is currently underspending its international competition.

John Barr Joins The Exascale Report Editorial Team

We are pleased to inform our readers that John Barr has joined The Exascale Report™ editorial team as our European correspondent.

Barr is a widely recognized independent industry analyst, formerly Research Director of HPC at the 451 Group, who brings 30 years’ experience in the HPC industry to the publication.

Barr will provide a European perspective on exascale issues, including coverage of users, vendors, and European Commission funded research programmes.

The Exascale Timeframe: 2020 – 2022

Presentations from a number of technology leaders now show the stretch goal for achieving a working exascale system has moved out to 2020. Most of them quickly add that 2022 is perhaps even more realistic.

Just to clarify for our readers, 2018 was never a guaranteed delivery date for an exascale system. While some companies have stated they will have an exascale system by 2018, that’s about as credible as saying “we can build an exascale system today if price and power consumption don’t matter.” Well, they do matter.

2018 was a stretch goal. It was a target to rally the entire industry and to give the race a sense of urgency. It was chosen based on what some people believed was achievable – in the broadest sense of the word.