As we roll the calendar over to the second decade of the 21st century, insideHPC is launching a series of articles, videos, and audio segments looking ahead to the technologies, issues, and opportunities we’ll be facing in 2010.
We are launching the feature with a series of videos shot onsite during SC09 with companies talking about how their businesses are responding to the rapidly evolving landscape of supercomputing. We’ve wanted to do video content for a long time, but we also wanted it to have a lot of production value — still short and to the point, but with some polish. Just like mom used to make. Everything finally came together at SC this year; we hope you enjoy the product.
[2/19/2010] Just added: an audio interview with Intel’s Chief Supercomputing Architect, Dr. William Camp, recorded on the show floor during SC09 in Portland. See the full description below.
[1/22/2010] As promised, we’ve just added the second wave of videos, featuring interviews with Intel’s best and brightest. In these segments, we talk to several Intel engineers about the future of HPC hardware, the direction of HPC software, and how Intel is using HPC to design the chips we use every day.
Throughout the first part of the year we’ll be following these videos with additional audio content and original articles that we hope will put you on firm footing for the year ahead.
Intel’s Exascale Vision: a discussion with Bill Camp
Bill refers to himself as Mr. Exascale at Intel, and his thinking goes all the way from transistors to software. In this conversation recorded on the show floor during SC09 in Portland, Bill and I talk about the challenges of getting to exascale, the relationship of exascale technologies to commodity processing, and much more. Is Intel thinking about a return to specialized chips for extreme scale supercomputing? How are we going to build exascale systems that take 20 MW — not 200 MW? What about resiliency? Listen to the show and find out.
Listen to the show [audio:http://insidehpc.com/media/2010/Intel/BillCampSC09Final.mp3]
Scaling Performance Forward at Intel
The principle that guides all of Intel’s efforts in helping get the performance of their hardware out to the application is the idea of “scaling performance forward.” This refers to the idea that, once you make a change to improve the performance of your application on today’s hardware, you don’t lose that advantage tomorrow. Nash Palaniswamy, senior manager of Throughput Computing, talked with us about Intel’s vision for scaling performance forward, and how they use the theme to inform both their hardware and software development efforts from the desktop to HPC.
Intel hardware in 2010
Intel’s efforts span everything from development software to home medical monitoring devices, but in the end it is all driven by their hardware. In this interview we talked with David Scott, Petascale Product Line architect at Intel, about what processors and technology Intel has coming out over the next several months.
Software Development Tools and Education from Intel
When you think about Intel the odds are pretty good that you think about chips, and that is a good place to start. But Intel has thousands of engineers and computer scientists working on software to help millions of developers get the most out of Intel’s hardware, and a large effort on education that starts in K-12 and continues on through the professional level. If you’ve followed Intel’s software efforts you’ve probably run into James Reinders, chief evangelist for software products. I talked with James about the company’s focus on helping developers get at the power in the chips that make up 80% of the Top500 today, and in the chips coming out tomorrow.
HPC in the Intel enterprise
Sure, you knew that Intel is a big part of HPC today, but did you know that Intel is a big user of HPC? Intel has two systems on the Top500, and devotes considerable supercomputing resources to developing its bread-and-butter chip technologies. In this interview Shesha Krishnapura, Intel Senior Principal Engineer, talks about HPC inside Intel and the results it drives to the bottom line.
TotalView Technologies
As we move more fully into the pan-petaFLOPS era, the scaling limitations in our current set of application development tools are becoming clearer. And if we are to make effective use of the exascale machines our community will begin deploying by the end of this decade, a dramatic shift in our community’s development environment is needed. During SC09 we talked to TotalView Technologies’ Chris Gottbrath about what the company has in store for developers in 2010, and how it is moving to address the problems of extreme-scale development.
HPC Advisory Council
insideHPC talked with Gilad Shainer, the Chairman of the HPC Advisory Council, about what the organization does, how it has grown, and how it is helping catalyze developments for users and HPC businesses. The Council is a hybrid organization composed of both users and providers in HPC; according to its website, its mission is to “bridge the gap between high-performance computing (HPC) use and its potential.”
In addition to talking about what the Council has already accomplished, Gilad talks about the new research and focus areas the group is kicking off. The Council’s mission is to help everyone, so papers, information, best practices, and so on are all available for download at its website.
Sun Microsystems — Storage
In the first of three conversations we had with Sun Microsystems at SC09, we get a preview of Sun’s technologies and the specific ways that they are adapting their storage technologies as systems grow larger and the amount of parallelism increases at all levels. Bob Murphy, Sun’s global business development guy for HPC open storage, introduces us to Ken Kutzer who lays out Sun’s storage portfolio and talks about Lustre. Then Harriet Coverston talks about Storage Archive Manager for long-term storage, and Christine Brandt rounds out our storage segment with a view of Sun’s tape hardware.
Sun Microsystems — Flash Storage
In the second of three conversations we had with Sun Microsystems at SC09, Bob Murphy (global business development for HPC open storage) walks us through Sun’s network storage offerings (including the 7000 series). Then Larry McIntosh, Sun HPC Systems and Storage Architect, talks about Sun’s FlashFire product line, gives us a tour of the flash-integrated cluster Sun had running live demos in their booth, and walks us through managing a flash array in a Sun cluster. Finally, Dale Layfield from Sun’s ISV Engineering team gives us an overview of the impact that flash storage can have on the performance of applications such as MSC/Nastran. Bob Murphy rounds out this segment with a look at the analytics tools that help admins manage their flash-based storage.
Sun Microsystems — Ops Center
In the third of our conversations with Sun Microsystems at SC09, insideHPC talks with Prasad Pai, the director of systems management at Sun. Prasad gives us an enthusiastic and in-depth walk-through of Sun Ops Center, a systems management suite for managing clusters of Sun systems in a datacenter.