Let’s Talk Exascale Podcast: ECP’s Application Assessment Project



In this Let’s Talk Exascale podcast, Kenny Roche from Pacific Northwest National Laboratory describes the ECP’s Application Assessment Project.

What is your description of the Application Assessment Project?

The Exascale Computing Project currently includes 25 projects developing application software for exascale computers to simulate extremely complex problems drawn from chemistry and materials, the energy sector, earth and space sciences, data analytics and optimization, and a small number of national security applications. The ECP Application Assessment Project was designed to conduct unbiased annual evaluations of the capability, computational performance and scaling, and performance portability of these code developments, excluding the national security applications.

Why is this area of research important to the overall efforts to build an exascale ecosystem?

Benchmarking the progress problems that developers provide, using their own codes on relevant HPC systems, can be viewed as an experiment that vertically tests the use and performance of all the major components of the ECP ecosystem: from mapping algorithms to the parallelization models, programming languages, and compilers, to the use and performance of the runtime and operating systems on a specific hardware architecture. We expect our code evaluations and independent benchmarking results will help clarify how well individual code teams are doing in using the ECP software stack to meet their goals. Taken together, the information generated in our code reviews will give a global perspective on the status and progress of the full set of application code developments within ECP. That global view is clearly outside the scope of any individual application development effort.

What technical milestones have you reached so far?

A measure of success is the number of careful code evaluations we're able to conduct in coordination with the application development efforts, including benchmarking their codes on relevant problems and relevant computer systems. Having just started in August of FY 2017, we completed a pilot phase by evaluating four distinct ECP application software developments to get a handle on what criteria we would be in a position to measure and assess, as well as the schedules and the burdens, on both the developers and ourselves, of conducting these studies. Those four evaluations included benchmarking relevant problems on a handful of target architectures, taking precise measurements on those architectures with the developers' codes and problems, performing weak and strong scaling analyses of the problems, and discussing the findings with the developers.
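To illustrate what a weak- or strong-scaling analysis measures, here is a minimal sketch of the standard efficiency calculations. The function names and the timing numbers are invented for illustration and are not ECP benchmark data.

```python
# Hypothetical illustration of strong- and weak-scaling metrics.
# All timings below are made up, not measurements from ECP codes.

def strong_scaling_efficiency(t1, tp, p):
    """Strong scaling: total problem size is fixed as process count p grows.
    Ideal speedup is p, so parallel efficiency = t1 / (p * tp)."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    """Weak scaling: problem size per process is fixed as p grows.
    Ideal runtime stays constant, so efficiency = t1 / tp."""
    return t1 / tp

# Example: runtimes in seconds for a fixed problem on 1, 8, and 64 ranks
timings = {1: 512.0, 8: 70.0, 64: 11.0}
t1 = timings[1]
for p, tp in sorted(timings.items()):
    print(f"{p:3d} ranks: speedup {t1 / tp:6.1f}, "
          f"strong-scaling efficiency {strong_scaling_efficiency(t1, tp, p):.2f}")
```

In practice, an assessment like the one described here would collect such timings across several node counts and architectures and compare the measured efficiencies against the developers' stated scaling goals.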

The ECP is a holistic project. What collaboration or integration activities have you been involved in within the ECP?

In the context of collaboration, every one of our code evaluations requires direct interaction with the application code developers: each evaluation involves a required exchange and coordination of information and benchmarks. Beyond that partnership with the application developers, we're interacting with various other projects within the ECP, such as the Proxy Applications program, the Productivity team, and a couple of the Software Technology projects. With Proxy Applications, for example, we want to figure out how to cooperatively review and quantitatively compare the applications and the proxy applications so that we can identify common motifs across application developments.

With Productivity, we're aiming to coordinate how to improve the software productivity of the code teams and to sustain that performance and development progress through the completion of each ECP application code project. Programming models and runtimes are obviously very important as well, and we're interested in testing the performance and portability of the various implementations that map application data and parallel work onto the relevant target architectures.

In the context of tools, our measurements require software tools that let us probe machine events while executing the users' problems. Those tools have to exist and work correctly, and we have to understand how to use them. That forces us to interact with the tools program within the Software Technology focus area of ECP so that we can test tool performance, learn how to instrument codes with these tools, and conduct the analyses we want to conduct.

How would you describe the importance of the collaboration and integration activities to the overall success of the project?

I would say co-design and integration are critical areas; let's just start by saying that. How important they are to the success of the overall ECP is difficult to quantify because, historically, from the perspective of many developers, a hardware platform would be put on the floor and the users would then adjust to the programming models and the software stack supported by that architecture.

But within the ECP, the idea of co-design is to provide connections across the boundaries typically present in the critical path of solving one's problem, so that developers get good information about how programming models perform and how they're implemented, what to expect on different architectures, and so forth. Putting algorithm designers in contact with tools developers and with the implementers of programming models and the runtime system increases the burden on the application developers, but it also improves the overall likelihood of success on their journey from today's computers, problems, and codes to their stated goals for the exascale computer systems.

Has your research taken advantage of any of the ECP’s allocation of computer time?

Yes. We use the DOE leadership computing facilities and NERSC for experiments conducted as part of our evaluations. At this point, we haven’t done any vendor-specific studies with hardware that would be in an NDA [non-disclosure agreement] space, but that may happen as well.

What’s next for the Application Assessment project?

We conduct these evaluations in two modes. One is a high-level, fast code evaluation, and the other is a more in-depth, coordinated study of the application codes. The fast reviews last roughly one month per code, while the more in-depth reviews can last up to three months per study. In FY 2018, our plan is to conduct up to 22 of the fast reviews, which will basically cover the remaining application codes we haven't yet seen since our startup in the pilot phase. There may also be time to complete up to six in-depth code reviews and studies by the end of FY 2018.

Is there anything else you’d like to add?

Maybe just the simple fact that we're taking on the perspective of the application code developers. The vertical nature of developing for an exascale computer is a very long run: it stretches from the problems at hand for developers who are domain experts in all the areas we talked about earlier, down to the very minute details of programming language implementations and programming models for particular target architectures. The developers in the ECP carry the added burden of figuring out how to increase the concurrency in their codes, and in our measurements we would like to help demonstrate that that is occurring.
