Interview: Next-Generation Cori Supercomputer Coming to NERSC

In this transcript from our recent podcast, Sudip Dosanjh and Katie Antypas from Lawrence Berkeley National Laboratory describe Cori, the next-generation Cray XC supercomputer coming to NERSC.

insideHPC: Welcome to the Rich Report – a podcast with news and information on high performance computing. Today my guest is from NERSC. We have the director of the institution – Sudip Dosanjh. Welcome to the show today.

Sudip Dosanjh: Thank you very much. I’m in momentarily sunny Salishan. It’s been raining all week, but the sun has just peeked out for a couple of minutes.

insideHPC: Well, great, Sudip. I’m right down the road from you in Oregon and yeah, we’re getting our first taste of sunshine in about a week as well. I wanted to thank you for coming on today because you guys had a very interesting announcement about a new system that you’re going to procure from Cray in 2016.

Sudip Dosanijh: Yes, we’re really very excited about it. NERSC has a broad user base – 5000 users and 600 codes. We’re the mission computing center for DOE’s Office of Science, and as such, we really do focus on the scientific productivity of our users. We have typically 1,500 or more journal publications per year. Last year, we had 17 journal covers. Four Nobel prize winners have been NERSC users at some point in their careers. So that’s really been our focus. We work with our users quite a bit to do requirements gathering. When we talked to the scientist what they told us is that we really need to reach exascale within this decade to meet their science needs, and NERSC-8 will help us get closer to their. We’ll be deploying an energy efficient multi-core architecture that will allow us to make some advances that we wouldn’t otherwise be able to make.

insideHPC: Tell me a little bit about the mission of these scientists. Are they doing a broad range of things like climate change? What kind of science are they up to?

Sudip Dosanijh: They’re doing a broad range of different science activities ranging from, as you mentioned, climate change to material science, looking for new energy sources, to working on energy efficiency. So, we have really a very, very broad range of different scientific efforts that use NERSC. We have kind of the traditional HPC workload from these different science areas, and then we are also seeing this rapid growth in data-intensive computing. There are a lot of Department of Energy experimental facilities, things like accelerators, advanced light sources. There’s cosmological data. People at these experimental facilities are getting inundated with data, and so they’re transferring that data to NERSC, in many cases, to analyze. So that’s a rapidly growing area for us.

insideHPC: Sure. There are some very interesting architectural pieces to this machine. Can you tell us more about that? It’s called the Cori Supercomputer, isn’t it?

Sudip Dosanjh: Yes. At NERSC, because of the science focus of our center, all of our systems are named after scientists. Gerty Cori was the first American woman to win the Nobel Prize in science, so we thought that was a very appropriate name for our next system.

insideHPC: Give me a sense of the scale of this machine here, Sudip. What are we looking at?

Sudip Dosanjh: Our goal is to have over 10x performance over our Hopper system, which is a petascale system. Cori will have about 50 cabinets. It will be composed of Intel Knights Landing processors. It’ll be a Cray system, so we’ll use the Cray Aries interconnect, and it’ll have a Dragonfly topology. We’re really very excited, because it’s going to give us a big boost in scientific capability over what we currently deploy. So 10x is kind of a baseline, but we actually believe that we’ll be able to do much better than that. That would be a big boost for our scientists.

insideHPC: I mean, certainly – 10x over Hopper, which was a very groundbreaking system in terms of petascale science – that’s a big leap.

Sudip Dosanjh: Right. At NERSC, since we’re focused on scientific productivity, we’re really worried about the performance of our science codes. We don’t worry about peak FLOPS; there’s no requirement for any kind of peak FLOPS. What we really care about is having a well-balanced system that’s programmable for our scientists. We do recognize that the codes will need to change to use Cori effectively, but then again, with 600 codes, we can’t deploy a system that would require three person-years to adapt each code to run effectively on Cori. So we needed a system where, recognizing that the codes would have to change, the progression would be smoother. Katie Antypas, who’s with me, is the head of user services and has also been the NERSC-8 project lead. She can describe more about our application readiness effort, which she’s been working on for quite a while now.

Katie Antypas: Sure. I think when we procured a system for such a broad workload, we knew we had to transition our users to many-core architectures in order to keep up with their demand. As Sudip said, we knew we were going to ask users to make changes to their applications – changes that include finding more on-node parallelism, increasing the vectorization capabilities of the code, and utilizing the on-package memory, which is one of the features of the Knights Landing architecture.

We knew that if we were going to ask users to invest in making these changes, those changes needed to carry forward for a number of generations to come. I think we’re really excited about the Cori architecture and the Knights Landing processor because users will be able to retain the MPI and OpenMP programming model that they’ve been using on our previous systems – on Hopper and on Edison. So we have a lot of work to do to get our codes ready. We know it’s not going to be easy, but I think this will provide a smooth transition to exascale for the NERSC users.
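To make the kinds of changes Katie describes more concrete, here is a minimal hybrid MPI + OpenMP sketch showing on-node parallelism plus a vectorizable inner loop. It is an illustration under assumptions, not code from any NERSC application; the daxpy-style kernel and the array size N are made up for this example.

```c
/* Minimal hybrid MPI + OpenMP sketch: MPI ranks across nodes,
 * OpenMP threads within a node, and a simple vectorizable loop.
 * Illustrative only; the kernel and sizes are hypothetical. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request thread support so OpenMP regions can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *x = malloc(N * sizeof(double));
    double *y = malloc(N * sizeof(double));
    const double a = 2.0;

    /* On-node parallelism: spread the loop across hardware threads;
     * the simd clause asks the compiler to vectorize the body. */
#pragma omp parallel for simd
    for (int i = 0; i < N; i++) {
        x[i] = (double)i;
        y[i] = 1.0;
    }

#pragma omp parallel for simd
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    if (rank == 0)
        printf("y[42] = %f (threads per rank: %d)\n",
               y[42], omp_get_max_threads());

    free(x);
    free(y);
    MPI_Finalize();
    return 0;
}
```

Compiled with something like `mpicc -fopenmp`, each MPI rank spreads the loop across its node’s hardware threads, which is the same MPI + OpenMP model Katie says users can retain from Hopper and Edison.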

As part of this application readiness effort, we’re going to be working with Intel as well as Cray to provide a lot of training for our users. We’re also going to have some deep dive dungeon sessions where we look at specific code kernels with experts from Intel and Cray. So we’re going to be challenging our users to adapt to this architecture, but we’re not going to leave them behind. We’re going to really help them and support them in making this transition.

insideHPC: Yeah. Katie, just for the folks who might not be familiar with Knights Landing: this is a little different from the MIC architecture Intel has had in the past with the Xeon Phi, because that was a coprocessor, right? Something that talked to the CPU across a PCIe bus. How is Knights Landing different?

Katie Antypas: Right, that’s a great point. We need to emphasize here that the Knights Landing processor is self-hosted, so it’s not an accelerator and it’s not a coprocessor. The particular processor that we’ll be getting for NERSC-8 will have more than 60 cores, and it will have multiple hardware threads per core. That’s a lot, right? Having 60 cores per node, each with multiple hardware threads, is a significant increase from both our Hopper and Edison systems, which have 24 cores per node. So we’re going to be working with our users to figure out the right amount of parallelism they need to expose in their applications. That’s one really big difference.

Another change is the on-package high-bandwidth memory. This will be new for NERSC users, and we’ll have to figure out the best way to program it. There are multiple options. Users can either program it explicitly – specifying exactly what data needs to move in and out of the high-bandwidth memory – or it can be used as a cache. So that’s something we’ll be exploring in the next couple of years as well.

Sudip Dosanjh: Yes, the concurrency is very important, but this on-package memory will be really huge for our users, because many of our codes are really limited by data motion. People often say that floating point operations are free now, and really, of the energy and the time spent during a typical calculation, a lot of that is just moving data. So effectively using this on-package memory is very important.
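For the “explicit” programming option Katie mentions, one concrete interface is the memkind library’s hbwmalloc API for placing data in Knights Landing’s on-package memory. Whether Cori users will end up programming the memory through this exact API is an assumption here; the alternative cache mode requires no source changes at all.

```c
/* Sketch of explicit on-package memory placement via the memkind
 * library's hbwmalloc interface. Assumed here for illustration;
 * it may not be the interface Cori ultimately exposes. */
#include <hbwmalloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1 << 20;

    /* Put the bandwidth-critical array in on-package memory... */
    double *hot = hbw_malloc(n * sizeof(double));
    /* ...and leave colder data in ordinary DDR. */
    double *cold = malloc(n * sizeof(double));

    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    for (size_t i = 0; i < n; i++)
        hot[i] = cold[i] = 0.0;

    hbw_free(hot);
    free(cold);
    return 0;
}
```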

insideHPC: What about the burst buffer? Is that part of the NVRAM or is that a separate component that you guys are looking at?

Sudip Dosanijh: Yes, we’re definitely looking at that. When we ask our NERSC users, Katie’s group often does these user surveys, and the number one complaint we get back is that the people want more cycles at NERSC. The other is that they really like to see a boost in IO performance. So a lot of our applications especially these data-intensive applications were you have large datasets that’s coming in from whether it’s an accelerator or a light source or a telescope. A lot of times or if it’s a genomics data, a lot of those algorithms are really limited by IO so a lot of times your compute node is just sitting idle. If we can deploy a burst buffer NVRAM, that would greatly aid a lot of these data intensive applications by caching I/O. There are other use cases for it. A lot of people have talked about it in terms of checkpoint restart, being able to write large files as far as your resilience strategy. For us, that is important but really enabling these data-intensive applications where you have to read large datasets. That’s really very important to NERSC scientist.

insideHPC: If you guys could help me out here, as far as a step towards exascale, is this system going to give us a glimpse of what those systems might look like?

Sudip Dosanjh: Yes, certainly. One of the mission drivers, when we did our mission requirements, was to begin this transition of our code base to exascale. As I said, that’s a huge challenge for NERSC because we have to take the very broad science community within the Department of Energy forward. This explosion in concurrency that you’re going to see with Cori is going to be very typical of what you’ll see at exascale: lots and lots of cores, each of which has lots of threads.

We also think that this on-package memory is something we are going to see more of in the future. It is really critical because of the memory wall limiting the effectiveness of our applications. And I think having NVRAM in the system for a burst buffer, and for buffering some of this I/O – those are all things that you’re going to see in future exascale systems.

insideHPC: Well, great. Sudip and Katie, as we close it out here, I just wanted to ask: can we look forward to some more great science coming out of Hopper in the meantime?

Sudip Dosanjh: Yes, you can. The other exciting thing that’s happening at Berkeley Lab is that a new facility is being built on the hill that will house NERSC-8. Currently, NERSC is located in downtown Oakland. We had to move there because we didn’t have enough space or power for our systems on the hill. But the lab is building a new building, called the CRT, that will have over 20,000 – almost 30,000 – square feet of computer space. We’ll initially have 12.5 megawatts of power, but that will be going up. Our plan is to move Edison up on the hill; Hopper will get turned off where it is. But we’ll see lots of great science coming out of Edison and Hopper in the next few years, and then NERSC-8 will be the first new system installed in our new building. That will be very exciting.

insideHPC: This is great. Sudip and Katie, I want to thank you for sharing this. Congratulations on the Cori Supercomputer.

Sudip Dosanjh: Thank you very much. It’s great talking to you.

Katie Antypas: Yeah, thank you.

insideHPC: You bet. Okay, folks, that’s it for the Rich Report. Stay tuned for more news and information on high performance computing.
