Dell Technologies HPC Community Interview: Bob Wisniewski, Intel’s Chief HPC Architect, Talks Aurora and Getting to Exascale


Intel is the prime vendor for the first US exascale supercomputer, the Aurora system, scheduled for delivery in 2021 at Argonne National Lab. The late Rich Brueckner of insideHPC caught up with Intel’s senior principal engineer and chief architect for HPC, Robert Wisniewski, to learn more.

insideHPC: Bob, we know each other mostly through your work with Intel software and the OpenHPC project. This is a very different kind of role for you, isn’t it?

Wisniewski: Thank you. Yes, I’m in a larger role, one that requires me to wear a software hat and a hardware hat, covering the whole system. I’m currently the chief architect for HPC at Intel. I am also the technical lead for the Aurora Supercomputer at Argonne National Lab, as well as principal investigator.

insideHPC: Congratulations! That will broaden the discussion here now that the Aurora Supercomputer is just around the corner.

Wisniewski: Absolutely.

insideHPC: Let’s start at the beginning. Can you describe your role as the chief architect for HPC?

Intel’s Bob Wisniewski

Wisniewski: There are two parts to it. One, I’m the principal investigator (PI) for Aurora, which is a specific role relative to Intel’s contract with Argonne National Lab. Plus, I’m the overall technical lead – that means I am responsible for the technical direction. Large projects like Aurora must meet technical and schedule milestones. We typically start with our architectural point design, but as the project progresses we learn, and products do not necessarily mature as planned, so we continue exploring technologies and make changes as we go. We meet weekly and review where we are. Technically, we interact and collaborate very closely with Argonne to review schedules and discuss technical information on both the performance and functional aspects as it becomes available. We continue to modify our point design to make sure the current design is going to meet their needs. We work closely with Argonne, who has been a great partner.

insideHPC: What about your overall role as HPC architect?

Wisniewski: Part of the role entails working with partners to better understand how they can deliver HPC capabilities. One way we do this is through POC (proof of concept) projects, which have been successful. In the broader role, I’m working to make sure that the products coming out of Intel are well designed so they can be used by our OEMs in their systems. This is a shift Intel made seven to ten years ago, when we started thinking from a system perspective and making sure the technologies we were designing, manufacturing, building, and providing to our OEMs were going to fit well into the overall systems they were building.

The close partnerships we have with our OEMs make for a more efficient ecosystem. It comes down to understanding the needs of our OEMs and making sure we’re designing products that meet their needs. Creating a vision for the future and ensuring this meets the needs of the HPC computing market and our OEMs is a broad view of what my role now involves.

insideHPC: As far as your future heterogeneous (CPU-GPU) architectures and things like oneAPI, are you sharing blueprints with OEMs to enable innovation?

Wisniewski: Yes, we have a solutions group that works to understand OEM needs and take that knowledge back into Intel. Doing this effectively is what you might call co-design, though I guess that’s an overused word. Intel Select Solutions offers OEMs easy and quick-to-deploy infrastructures optimized for a variety of applications, like AI, analytics clusters, and HPC.

insideHPC: Bob, you’re coming into this role at a time when Intel is in the process of changing its HPC focus from general purpose CPUs to heterogeneous architectures. Is that where you’re taking us?

Wisniewski: Yes, I think that’s a great observation. We’re recognizing that HPC is expanding to include AI. But it’s not just AI, it is big data and edge, too. Many of the large scientific instruments are turning out huge amounts of data that need to be analyzed in real time. And big data is no longer limited to the scientific instruments – it’s all the weather stations and all the smart city sensors generating massive amounts of data. As a result, HPC is facing a broader challenge and Intel realizes that a single hardware solution is not going to be right for everybody.

Intel is scheduled to deliver the Aurora supercomputer, the first U.S. exascale system, to Argonne National Laboratory in 2021, incorporating Intel Optane DC Persistent Memory, Intel’s Xe compute architecture, and the Intel oneAPI programming framework, among other technologies. (Credit: Argonne National Laboratory)

At the same time, of course, we continue to actively support our CPU architecture. That’s the workhorse in everybody’s HPC system. But as you know, with Aurora, we’re extending our capabilities to GPUs (Intel’s future Xe architecture), and we will provide both the graphical line as well as the compute line to meet customers’ needs.

insideHPC: But that doesn’t come without its challenges. So you’re providing all these heterogeneous solutions and that’s good, right? Well, it’s good that it meets the customer’s needs, but does that make it harder to program? And to the end customers and OEMs who are going to be providing these solutions, do they need a full staff of programmers to rewrite all the code?

Wisniewski: This is where oneAPI comes in. It is a cross-industry, open, standards-based unified programming model. We believe that heterogeneity is valuable to customers. We want to provide the solution that customers need, but we also want to provide a productive and performant way to leverage all the different solutions. The vision behind oneAPI is that regardless of which architecture you decide to utilize – be it Intel’s or another vendor’s – you have a single, common, cohesive way of programming them all. That’s the vision. Now, there will be challenges, and as a technical person I don’t want it presented as a simple panacea. oneAPI provides a common framework for writing code, so a single code base can be portable and reused across a diverse set of architectures.

So oneAPI empowers end customers to be much more efficient about how they’re utilizing their resources by enabling greater code re-use while allowing for architecture-specific tuning. A lot of developers remain challenged to achieve enough parallelism to leverage today’s architectures, and now we are throwing heterogeneity their way. So their challenge is not just multiple cores, it’s heterogeneous compute elements as well. Determining which code can be parallelized and how to do that, and now which code can be off-loaded, has increased the complexity of developing applications for today’s and tomorrow’s architectures. oneAPI is going to help us and the community address those challenges and make it easier.

“We’re recognizing that HPC is expanding to include AI. But it’s not just AI, it is big data and edge, too. Many of the large scientific instruments are turning out huge amounts of data that need to be analyzed in real time. And big data is no longer limited to the scientific instruments – it’s all the weather stations and all the smart city sensors generating massive amounts of data. As a result, HPC is facing a broader challenge and Intel realizes that a single hardware solution is not going to be right for everybody.”

insideHPC: Beyond what Intel is doing, what is the vision for the oneAPI ecosystem?

Wisniewski: To promote compatibility and enable developer productivity and innovation, the oneAPI specification builds upon industry standards and provides an open, cross-platform developer stack. It includes a cross-architecture language, Data Parallel C++, which is based on ISO C++ and Khronos SYCL. The oneAPI industry initiative aims to encourage collaboration on the oneAPI specification and compatible implementations across the ecosystem. Already, more than 30 companies and leading research organizations support the oneAPI concept, and adoption is expected to keep growing.

Intel’s oneAPI product is a reference implementation of the specification for Intel architecture and consists of a base toolkit and several domain-specific toolkits, including one for HPC. The components in the core oneAPI product are the ones with general applicability, for example, the Intel compiler along with multiple libraries and tools. For HPC users, there is an HPC toolkit, which includes components such as the OpenMP and Fortran runtimes, the Intel MPI library, and everything an HPC user would need to maximize the performance and capabilities of Intel hardware. oneAPI allows a more productive environment across all of HPC and even beyond it, in areas like edge, cloud, and enterprise computing — although I like to think of edge, cloud, AI, and HPC all coming together. oneAPI will have components that allow developers to leverage heterogeneity across various environments and architectures (CPU, GPU, FPGA, and specialized accelerators).

Overall, this oneAPI cross-architecture programming approach will help ensure code works well on the next generations of innovative architectures. And it also opens the door for flexibility in choosing the best architectures for a particular solution or workload’s needs in performance, cost, and efficiency.

We envision oneAPI as an industry standard that will encourage broad developer engagement and collaboration, while having multi-vendor adoption and support.

insideHPC: So for our readers, what’s the call to action for oneAPI? Is it time to download and start playing around with this? What would you say?

Wisniewski: Absolutely. Download the oneAPI specification. Developers and researchers can also directly download the Intel oneAPI toolkits, and test code and workloads for free across a variety of Intel architectures using the Intel DevCloud for oneAPI. And there are multiple communities forming around oneAPI. It is absolutely our intent that this becomes a broad ecosystem, like the Linux model. The goal is really to make this pervasive, and it will be more powerful as more and more people use it.

insideHPC: I understand you wrote a book recently. Can you tell me more?

Wisniewski: The book is called “Operating Systems for Supercomputers and High-Performance Computing.” It was written together with my fellow editors Balazs Gerofi, Yutaka Ishikawa and Rolf Riesen.

The book came about from a collaboration with our customer at RIKEN. At some point we started comparing the different versions of multikernels, a new operating system direction that a lot of people in the high-end HPC capability class are pursuing, and looking at how they differ from traditional operating system kernels.

We started talking about how it might be valuable if we did a comparative retrospective on these efforts. We thought we could accomplish two things.

First, we could have a little fun looking at the high-end operating systems community and how it evolved over the past three decades. It takes years to write an OS, and you learn hard lessons along the way. We decided to include those lessons so that future OS developers can benefit from them.

Second, we wanted to provide insight as to why things work the way they do. The people that developed the OS thought long and hard about their designs. But sometimes you miss things. In each chapter, we included a section dedicated to lessons learned, so that readers could gain insight from the developers who spent the hard effort to build the OS. The book was a tremendous amount of work, but I had a fabulous set of co-editors and it was a lot of fun.

insideHPC: I wanted to wrap up and ask you more about community engagement. You are very generous with your time, attending industry events on a regular basis, multiple times a year. Why is that so important to you and the company to go out and engage at these events?

Wisniewski: I really enjoy going to these events and interacting with people. The events I like going to most are the ones with savvy audiences asking tough questions or discussing challenges they face. For example, when technical leaders share challenges, I can work with colleagues back at Intel to address them, and that in turn changes our future architectures to be better co-designed to meet the needs of our customers.

Dr. Robert W. Wisniewski is an ACM Distinguished Scientist, IEEE Senior Member, and the Chief Architect for High Performance Computing and a Senior Principal Engineer at Intel Corporation. He is the lead architect and PI for A21, the supercomputer targeted to be the first exascale machine in the US when delivered in 2021. He is also the lead architect for Intel’s cohesive and comprehensive software stack that was used to seed OpenHPC, and serves on the OpenHPC governance board as chairman. He has published over 77 papers in the areas of high-performance computing, computer systems, and system performance, filed over 56 patents, and given over 64 external invited presentations. Before coming to Intel, he was the chief software architect for Blue Gene Research and manager of the Blue Gene and Exascale Research Software Team at the IBM T.J. Watson Research Facility, where he was an IBM Master Inventor and led the software effort on Blue Gene/Q, which was the most powerful computer in the world in June 2012 and occupied four of the top 10 positions on the Top500 list.