An Interview with Intel’s Extreme Scale Computing Director, Wilf Pinfold

I met recently with Dr. Wilfred (Wilf) Pinfold, Director of Extreme Scale Computing research with Intel Labs. We talked about a number of issues centered on the race to exascale. Pinfold is cautiously optimistic about exascale R&D, but warns that now is the time to get it right. He brings a balanced perspective to the discussion of pursuing the most commercially viable path versus a riskier path of R&D. We pick up the conversation with Pinfold as he discusses the importance of international competition in this global race.

2011 Exascale Progress Meter

We asked readers to list the top influencers they have the most confidence in to lead us to exascale.

An interview with Nvidia’s Sumit Gupta

In a recent poll conducted by The Exascale Report, NVIDIA was picked as one of the two most influential companies moving the world toward exascale. The company recently held its GPU Technology Conference in Beijing, China, before a record audience. We spoke with NVIDIA’s Sumit Gupta on a range of topics, including activities in China, competition with Intel, compilers, debuggers, and the ARM processor.

We pick up the interview as Gupta is describing several of the announcements that were made at the Beijing GPU Technology Conference.

Click here for the briefing interview slides from NVIDIA

Community Response

In the last issue of The Exascale Report, we posted two reader-submitted questions. The editors’ choices for the best responses from the community are listed below.

We also offer this comment from Argonne’s Rick Stevens, not as a specific response but as consideration at a higher level:

“I don’t understand why everyone automatically assumes that existing programming paradigms will not scale. It’s not the programming paradigm that usually is the problem but the algorithm. To say we need new algorithms is of course nearly obvious. In my thinking, scale itself is not the problem we *might* need new programming models for. Our challenge is to address issues relating to managing alternative memory hierarchies, architectural changes for power management, computing embedded in memory, reliability etc. It is likely that only if we fail to get these right that we will need new programming models.”

In Search of an Exascale Roadmap

In this exclusive interview with Indiana University’s Thomas Sterling, we break down a number of exascale topics with a candid look at the good, the bad, and the – well, not so attractive efforts and results of 2011.

If you are interested in exascale, you really should read the article and listen to the audio podcast, which is not a transcription of the article but a separate interview discussion that expands on points made in this thought-provoking piece.

Racing Down the Long and Winding Road to Exascale

As we glance at 2011 in the rear view mirror, it’s hard to believe that The Exascale Report has been publishing for eighteen months. When we started our subscription-based publication to focus on a topic receiving very little coverage at the time, we really had no idea what kind of acceptance we’d find. Today we are going strong with a growing international base of readers. We’re proud to be playing a small role in helping bring together the future exascale community.

With such a long way to go between now and the exascale target timeframe of 2020, many in the industry have understandably described the industry’s efforts to move forward as a journey. The journey has now officially turned into a race, and those who have formally entered the race include China, Japan, Europe, the U.S., Russia and India.

For many years, technology leadership, particularly in HPC, seemed to belong to the U.S. as a matter of entitlement. Not anymore. False confidence, which some have described as U.S. arrogance; political infighting among U.S. funding sources; and perhaps even a lack of belief in, or understanding of, the importance of a national foundation in science and education have all been factors in how this race is shaping up.

Exascale Plans for Russia

A Contributed Article by Alexey Komkov, Deputy General Director of Products and Technology, T-Platforms

The Concept of HPC development based on exascale-level supercomputer technology identifies the main direction of evolution for the HPC industry in the years 2012-2020. The basis for this Concept is the work performed by the Rosatom State Corporation in cooperation with enterprises in high-tech industries, as well as Russia’s leading scientific and educational centers. The Concept reflects the key proposals of major participants in the global HPC community concerning the need for, and feasibility of, creating computing systems of the next generation. Implementation of this Concept should ensure technological breakthroughs in a number of strategically important sectors of the economy, including energy, nuclear physics, satellite navigation and communications, medicine and pharmaceuticals, as well as exploration.

Over the last 10-12 years, the performance of supercomputers has increased more than 1,000-fold, and analysts say it can break the 1 Exaflop barrier (10^18 operations per second) as early as 2018-2020. However, a number of constraints cannot be ignored. They relate mostly to the systems’ power consumption, reliability and structural envelope. For this reason, exaflop-level computing clusters are expected to be built using hybrid architectures.

All the work within the framework of the Concept is to be performed in three phases, including the development of supercomputers with processing capacities of 10 Pflops and 100 Pflops by 2015 and 2017 respectively. Completion of the third phase, in which a 1 Exaflop system will be developed, is scheduled for 2020. The system will be built using processors with more than 100 cores, and its power consumption will be not less than 50 MW.

There is a plan within the Concept framework to develop a range of new technologies and high-tech solutions, including a new processor, a liquid cooling system based on hot water flows, as well as system software and the environment for programming and cluster management. Dedicated application software will allow the new system to perform complex modeling of various processes on the basis of the latest techniques. In the process, our company, as one of the Concept’s developers, will create hardware and software systems and ensure their further development, including supercomputer monitoring and administration systems. Other partners involved in the Concept implementation will take on the development of programming environments.
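The figures above imply a steep efficiency requirement. A back-of-the-envelope check (our own sketch using the article’s numbers, not a calculation from the Concept document itself) shows what 1 Exaflop at roughly 50 MW demands per watt:

```python
# Back-of-the-envelope efficiency check using the figures quoted above:
# a 1 Exaflop (10^18 FLOP/s) system drawing roughly 50 MW.

target_flops = 1e18   # 1 Exaflop/s
power_watts = 50e6    # 50 MW

# FLOP/s delivered per watt of power consumed
efficiency = target_flops / power_watts

print(f"Required efficiency: {efficiency / 1e9:.0f} GFLOPS per watt")
# -> Required efficiency: 20 GFLOPS per watt
```

Twenty gigaflops per watt is well beyond what homogeneous architectures of the era delivered, which is consistent with the Concept’s expectation that exaflop-class clusters will rely on hybrid architectures.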

Numbers don’t lie

We’re not sure why this is happening, but we keep seeing references to exascale in which people use the wrong power of ten. The numbers are the numbers, so please make a good effort to be accurate when describing the math for our layperson colleagues, and especially when talking with the media.

This link to Wikipedia may be quite useful.

http://en.wikipedia.org/wiki/FLOPS
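For quick reference, the mapping is fixed: peta is 10^15 and exa is 10^18. The small helper below (an illustrative sketch of ours, not something from the article) picks the right prefix for a given FLOPS value:

```python
# Quick reference for FLOPS prefixes, to avoid power-of-ten slips.
# Illustrative helper; the prefix values are standard SI.
FLOPS_PREFIXES = {
    "megaflops": 10**6,
    "gigaflops": 10**9,
    "teraflops": 10**12,
    "petaflops": 10**15,
    "exaflops": 10**18,
}

def describe(flops: float) -> str:
    """Return the value expressed with the largest prefix that keeps it >= 1."""
    for name, scale in sorted(FLOPS_PREFIXES.items(), key=lambda kv: -kv[1]):
        if flops >= scale:
            return f"{flops / scale:g} {name}"
    return f"{flops:g} flops"

print(describe(1e18))   # -> 1 exaflops
print(describe(10e15))  # -> 10 petaflops
```

So a 10-petaflops machine is 10^16 operations per second, still two orders of magnitude short of exascale.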

An Interview with Argonne’s Pete Beckman

As Director of the Exascale Technology and Computing Institute at Argonne National Laboratory, Pete Beckman has his finger on the pulse of exascale development. In this feature interview, Beckman talks about the need for substantial investment in science and technology education in the U.S., and its direct link to exascale computing. Beckman, along with many other community leaders, shares a deep concern over the possibility of inadequate funding for exascale development.

A New Day, A New Collaboration. Learning to Play Nice in the Sandbox

By the time this issue hits the streets, we may have heard news on the rebid of the Blue Waters project. You may recall Blue Waters. It’s the program that caused a huge embarrassment for the NSF, NCSA and IBM, but is now being cleverly referred to by the spin doctors as a strategic business decision. Whatever happens with the ‘new’ Blue Waters program, the program as originally awarded ended in failure.

We discussed this in the last issue’s lead story, “The Violent Waters of HPC”. The volume of positive feedback we received was a bit overwhelming, and it was good to know we struck a chord with so many readers. Thank you all for your feedback and compliments.

When Blue Waters was announced more than four years ago, the global HPC community surged with excitement over the possibility of a 10 petaFLOPS system and what it might do for scientific computing.