Transcript: Randy Bryant from the White House OSTP Discusses the NSCI Initiative

Randy Bryant, OSTP

In this video from the 2015 HPC User Forum, Randy Bryant of the White House's Office of Science and Technology Policy (OSTP) discusses the National Strategic Computing Initiative (NSCI). Established by an executive order signed by President Obama, NSCI has a mission to ensure that the United States continues to lead in high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges.

Transcript:

“Okay, so just by way of background, this is an activity that's been going on for quite a while, and a lot of the stimulus for it came from the President's Council of Advisors on Science and Technology, which has written several reports over the years saying, "There doesn't seem to be any kind of coordinated activity in high performance computing in the federal government, and we see major roadblocks ahead for HPC that really call for activity by the federal government, so get going." That culminated in an executive order that was signed on July 29th, and you have seen parts of this, but as it says, it's trying to create an initiative focused on research in HPC that will not only span the whole federal government, but also involve collaboration with industry and academia. And the purpose of it is scientific discovery, so the research that historically has been done using high performance computing, as well as economic competitiveness, which is government-speak for promoting a successful industry and commercial development.

The EO contains five strategic objectives that I think some of you have already seen. But the way I like to present it, I like to adjust and adapt these a little bit to talk about a set of themes that map to the strategic objectives, but in a slightly different order and a slightly different grouping, with more focus on what outcomes we want in the long run. What I'm going to do is just briefly go through these five themes and talk about them.

The first is this idea – and I heard it quite a bit yesterday in people's discussions – of combining the traditional numerical modeling and simulation of high performance computing with data analytics. And in my mind that's a pretty big gap to bridge. If you look at the largest systems out there and look inside them, then not just the hardware, but even the operating systems, the run-time systems, how they're programmed, and the whole philosophy of how they're built and operated are fairly different. And I think it will be interesting, and a challenge, to see how we reach a convergence between these two. So I think that's a long-term goal, and it will be interesting to see how it plays out.

The next, of course, is what's gotten the most press, which is the focus on exascale and the desire to move the U.S. ahead. And of course a lot of the question becomes, "So, who really is ahead these days [chuckles]?" The thing that I've come to appreciate, and a lot of people here also know, is that there's much more to a machine than how it runs on one particular benchmark. And the second corollary I'd add is that there's much more to the nation's HPC capacity than what the best machine in the country is able to do. So what we're really going for is not just one machine that will crack some benchmark and make people feel good, but building up the nation's capacity to have machines of different classes available and actually being put to use for these goals of scientific discovery and economic competitiveness. It's a much broader thing than just a stunt of trying to race to the top. And of course you've heard about the Department of Energy's exascale program; that's sort of the cornerstone of this aspect of the operation: making sure that that program is successful.

Another area that I think is particularly in need of focus is the challenge of programming, the software development challenges for HPC. And I think people understand that, if anything, we're moving backward, not forward, in this area. In particular, the introduction of GPUs into supercomputers has meant that now we really have to program at multiple levels of these machines. So smart people making heroic efforts can make things run fast on these machines, and then three years later they buy a new machine, and they have to rewrite large parts of the code and migrate things up and down this stack, where the programming models at the different levels are fairly different. And so there's a lot of recoding of existing code going on. And if you think about that for a small company that wants to use HPC to model the wind flowing over their bicycle frames or something like that, it's a pretty big step to imagine them getting on board with HPC resources and making good use of them. What you'd really like is for the whole programming to move up several layers of abstraction, to be somewhat more platform independent, with various techniques and tools that would let you get efficient operation on an HPC resource through various different mechanisms. And I think, especially for the federal government, this is the kind of thing it can do well: investing in fundamental research that will lead toward these kinds of goals.

The next is again this issue of access and, as I said, not just access for the big players, the large companies that can afford to buy their own hardware and operate it and have the expertise, or the scientific researchers who have been doing this for a long time, but all across the board, getting more people involved and more access. And that includes both physical access to the resources and the expertise of knowing how to use them. Not just from a software developer's perspective, but also as an application user, understanding modeling and simulation and data analytics well enough to use them effectively in whatever problems you're trying to solve. And if you look at our current models of how people get access to systems, they either buy them, or there are very limited opportunities to rent or get access to existing machines. But clearly we need a much more robust marketplace, whether it's a cloud – as we think of it now – or some extension of that which is more suitable for HPC usage. I think that in the long term, we have to have a vital commercial basis for this that will be successful and sustaining. But I think there's a need for a lot of creative thinking in that area.

And then finally, and this often gets dropped off the bottom, but in my feeling this is again one where the federal government has a very important role: what the heck are we going to do when silicon CMOS comes to its end? And I think most of us are in the denial phase of grief here, but the reality is that at some point it's going to happen. There are only so many molecules you can assemble together, and you need at least some number of them to call it a transistor, and that limit is coming soon enough that we had better be more concerted in our effort. In particular, there's a lot of very interesting research going on in scattered pockets, in different forms, about what the future technology could be. Will it be some variation on CMOS, will it use carbon nanotubes, will it shift over to quantum computing or involve cryogenic operation, or totally new models of computation – neuromorphic computation and so forth? But the reality is that none of these are anywhere near ready to take over the bulk of the computing we do today. They're far from commercial sustainability and large-scale use. So this is the time when we really need to get working on that. And the implications aren't just for the device builders and the hardware people; all the layers of computing that will be built on top of these new systems could be affected as well: computer architecture, programming models, software development, and so forth. And this, clearly, is again an area where the federal government is the best place to invest in pre-competitive, very fundamental, long-term research. So the government has a unique role in sustaining this kind of activity.

So Bob, you were like my front man here. You asked, "Well, what would success look like for an NSCI?" For these themes, I'd just say: imagine that ten years from now we could look back and say, "Yes, we got things going, and look at how much progress we've made, and we have a map that will take us into the future." It would involve advancing on all five of these themes in some way, such that we could see we really have gained something by having had this initiative.
A final statement: as you saw in the executive order, there are multiple federal agencies involved in this. Three of them are considered lead agencies, driving the main activities, and they happen to be the three agencies that are here today: the National Science Foundation, the Department of Defense, and the Department of Energy. There are other agencies that will be sponsoring research activities; obviously, the lead agencies will be very involved in research as well. And then there's a series of agencies that are considered deployment agencies, meaning they're cooperating and participating, especially in speccing out and thinking about future applications, future systems, and how they'll be used, but they're not considered to be in the driver's seat of the overall initiative. So that's it for me.”
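
Bryant's call to move HPC programming up several layers of abstraction, so that codes do not have to be rewritten for every new machine, can be pictured with a small sketch. The example below is not from the talk and is only one illustration of the idea: it assumes a C++17 toolchain and uses the standard parallel algorithms as a portability layer, so the same kernel can be mapped by the compiler and runtime to CPU threads or to GPU offload without the source changing.

```cpp
// Sketch (illustrative only): writing a kernel once against a higher-level,
// platform-independent layer. Here the C++17 standard parallel algorithms
// stand in for that layer; the mapping to hardware (CPU cores, GPU offload)
// is left to whichever toolchain builds the code.
#include <algorithm>
#include <execution>
#include <vector>
#include <cstdio>

int main() {
    std::vector<double> pressure(1'000'000, 1.0);
    std::vector<double> scaled(pressure.size());

    // Platform-independent expression of the kernel: "scale every element".
    // A vendor compiler may run this across CPU threads or offload it to an
    // accelerator without any change to this source.
    std::transform(std::execution::par_unseq,
                   pressure.begin(), pressure.end(),
                   scaled.begin(),
                   [](double p) { return 1.5 * p; });

    std::printf("scaled[0] = %f\n", scaled[0]);
    return 0;
}
```

Written instead against a particular accelerator's native API, the same kernel would typically have to be revisited with each new machine, which is the recoding burden Bryant describes.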


Watch the complete Panel Discussion on NSCI
Read the NSCI Fact Sheet
See more talks from the HPC User Forum
