In this Let’s Talk Exascale podcast, Tapasya Patki of Lawrence Livermore National Laboratory discusses ECP’s Power Steering project.
Tapasya Patki of Lawrence Livermore National Laboratory leads the Power Steering project within the ECP. Her project provides a job-level power management system that can optimize performance under power and/or energy constraints. She has expertise in the areas of power-aware supercomputing, large-scale job scheduling, and performance modeling. Patki spoke with ECP Communications on February 6 at the ECP 2nd Annual Meeting in Knoxville, Tennessee.
What is the high-level description of your project?
Tapasya Patki: As we push the limits of supercomputing toward exascale, resources such as power and energy are becoming expensive and limited. Efficiently utilizing procured power and optimizing the performance of scientific applications at exascale under power and energy constraints are challenging for several reasons. These include the dynamic behavior of applications, processor manufacturing variability, and increasing heterogeneity of node-level components.
In this project, we are working with Intel and the University of Arizona to develop advanced plugins for a production-grade, open-source, job-level runtime system called Intel GEOPM. We are integrating ideas from existing research efforts, such as Conductor and Adagio, as well as designing new algorithms to support upcoming architectures, programming models, heterogeneity, and diverse applications. Because GEOPM is production-grade software, it will be suitable for deployment with the ECP system software stack and with HPC resource managers such as SLURM and Flux.
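To make the job-level runtime concrete, here is roughly what the application-facing side looks like. This is a minimal sketch, assuming GEOPM’s profiling markup API (geopm_prof_region, geopm_prof_enter, and geopm_prof_exit, declared in geopm.h) and one of its region-hint constants; exact header and constant names can vary across GEOPM releases, and the solver functions are invented for the example:

```cpp
#include <geopm.h>    // GEOPM profiling markup API (geopm_prof_*); assumed header
#include <cstdint>
#include <cstddef>

void timestep(double *field, std::size_t n);  // hypothetical solver step

void solve(double *field, std::size_t n, int nsteps)
{
    // Register a named region with a hint so the runtime's plugins know
    // this phase is compute-bound and therefore worth extra power.
    uint64_t rid = 0;
    geopm_prof_region("solver_loop", GEOPM_REGION_HINT_COMPUTE, &rid);

    for (int step = 0; step < nsteps; ++step) {
        geopm_prof_enter(rid);   // phase begins: runtime may raise the cap
        timestep(field, n);
        geopm_prof_exit(rid);    // phase ends: budget can move elsewhere
    }
}
```

Region boundaries like these are what give a job-level runtime the per-phase visibility that a static, system-wide power cap cannot provide.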
Could you provide more detail on these advanced plugins?
Tapasya Patki: Sure. We are taking a two-pronged approach. First, we are working toward consolidating existing research efforts from the community to develop high-quality plugins for GEOPM that can be deployed at exascale. In parallel, we are developing new algorithms in GEOPM to address other exascale challenges. While GEOPM already provides some baseline algorithms, its existing capabilities are not transparent to the application programmer.
Our advanced algorithms will analyze critical paths of scientific applications transparently, balance power between different components intelligently, and provide mechanisms to capture fine-grained application semantics through Caliper. Additionally, these advanced algorithms will support non-Intel architectures such as IBM/NVIDIA and novel task-based programming models such as Legion.
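As an illustration of the Caliper path, application phases can be marked with Caliper’s annotation macros (CALI_MARK_BEGIN and CALI_MARK_END from caliper/cali.h), which a power-steering plugin can then map onto per-phase decisions. A minimal sketch; the region names and helper functions are invented for the example:

```cpp
#include <caliper/cali.h>   // Caliper annotation macros

void exchange_halo();      // hypothetical network-bound communication
void compute_stencil();    // hypothetical compute-bound kernel

void iterate(int nsteps)
{
    for (int step = 0; step < nsteps; ++step) {
        // Named regions carry fine-grained application semantics that a
        // power-steering runtime can use to tell phases apart.
        CALI_MARK_BEGIN("halo_exchange");
        exchange_halo();
        CALI_MARK_END("halo_exchange");

        CALI_MARK_BEGIN("stencil");
        compute_stencil();
        CALI_MARK_END("stencil");
    }
}
```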
Are there any aspects of your research that you think would be especially good to clarify to help people better understand what the effort is all about?
Tapasya Patki: Several technical aspects of power and energy management are not well understood.
One such aspect is the incorrect assumption that giving more power to an application will always improve its performance and that enforcing a power cap will always slow it down. While this is true for CPU-bound applications, such as High Performance LINPACK, it does not apply to most scientific and ECP applications. This is because they exhibit specific dynamic phase behaviors and tend to be more bound by memory, I/O [input/output], and network usage. Being able to steer power correctly based on application characteristics and phase behaviors is thus critical both for improving performance and for utilizing power efficiently.
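To make that intuition concrete, here is a deliberately simplified, hypothetical sketch of the reasoning, not the project’s actual algorithm: under a fixed budget, watts trimmed from memory-bound phases cost little time and can be shifted to CPU-bound phases, where they actually buy performance. The 0.5 threshold and the linear bookkeeping are illustrative assumptions only:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-phase profile: how memory-bound a phase is tells us
// how sensitive it is to a lower CPU power cap.
struct Phase {
    double memory_bound_fraction;  // 0.0 = pure compute, 1.0 = pure memory
    double power_cap_watts;        // per-socket cap chosen for this phase
};

// Toy steering rule: cap memory-bound phases at the floor and hand the
// reclaimed watts to compute-bound phases, keeping the average at budget.
void steer_power(std::vector<Phase> &phases, double budget_watts,
                 double floor_watts, double ceiling_watts)
{
    double reclaimed = 0.0;
    int compute_phases = 0;
    for (Phase &p : phases) {
        if (p.memory_bound_fraction > 0.5) {   // cap hurts little here
            p.power_cap_watts = floor_watts;
            reclaimed += budget_watts - floor_watts;
        } else {
            ++compute_phases;
        }
    }
    const double bonus = compute_phases ? reclaimed / compute_phases : 0.0;
    for (Phase &p : phases) {
        if (p.memory_bound_fraction <= 0.5) {  // cap hurts a lot here
            p.power_cap_watts = std::min(ceiling_watts, budget_watts + bonus);
        }
    }
}
```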
Another less understood aspect is processor manufacturing variability, wherein processors with the exact same microarchitecture can exhibit inhomogeneous power and performance characteristics, both with and without a power cap. This is attributed to the chip fabrication process and can result in over 20% run-to-run variation in application performance without any power constraints and over 4x performance variation in power-constrained scenarios. Most vendors, such as Intel and IBM, have confirmed that processor manufacturing variability is expected to worsen in the future and at larger scales, making application-level power steering in system software absolutely necessary on future systems.
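A small worked example with invented numbers shows why this matters. Under a uniform per-socket cap, a bulk-synchronous job runs at the pace of its least efficient processor; steering watts toward the weaker parts instead evens out the finish times. The linear performance-per-watt model is a simplifying assumption:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical efficiency factors: relative performance each socket
    // achieves per watt of cap; 1.0 is the best part from the fab line.
    std::vector<double> efficiency = {1.00, 0.95, 0.82, 0.70};
    const double budget = 4 * 100.0;  // 100 W average per socket

    // Uniform caps: the job waits on the slowest socket, so delivered
    // performance is min(efficiency) * 100 W.
    double uniform =
        *std::min_element(efficiency.begin(), efficiency.end()) * 100.0;

    // Variability-aware caps: give each socket watts inversely
    // proportional to its efficiency so that all sockets finish together.
    double inv_sum = 0.0;
    for (double e : efficiency) inv_sum += 1.0 / e;
    double balanced = budget / inv_sum;  // equal performance on every socket

    std::printf("uniform: %.1f, variability-aware: %.1f\n", uniform, balanced);
    return 0;
}
```

With these made-up numbers, the variability-aware allocation delivers roughly 85 units of per-socket performance versus 70 under the uniform cap, from the same total power.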
Why is this area of research important to the overall efforts to build a capable exascale ecosystem?
Tapasya Patki: The key to a capable exascale system is to provide sustained performance across diverse scientific applications, programming paradigms, and heterogeneous hardware components—and to do so while ensuring efficient utilization of resources such as power and energy. Without a job-level runtime system for power steering, such sustained performance and efficient power or energy utilization will just not be possible. Detecting the critical path of applications, mitigating processor manufacturing variability, and capturing fine-grained application semantics are important aspects of optimizing performance, and these need to be managed actively at runtime. Managing these aspects can improve ECP application performance significantly, sometimes even by a factor of 2 or more.
Would you still have been doing this research without exascale?
Tapasya Patki: Yes, but the funding sources as well as the collaboration space might have been very different and would have led to more scattered efforts. ECP funding ensures that this research gets deployed on production systems in a timely manner. It also allows for tighter integration and testing with ECP applications as well as newer architectures and programming models. ECP also encourages close collaborations with industry and academia. All these aspects are critical for scientific progress and a holistic approach to exascale.
Why was this research area selected for exascale?
Tapasya Patki: Power and energy management are important thrust areas for exascale supercomputing. This is because there are physical limits on the amount of power that can be brought into the machine room, concerns about energy pricing and cooling costs, and expectations for better utilization of the procured power resources. Additionally, complex scenarios arising from application phase behavior, component heterogeneity, and processor manufacturing variability pose challenges. The Power Steering project and ECP Argo project are addressing important aspects of the development of low-overhead production-grade system software for power management, and both are important for sustained performance of scientific applications at exascale.
What can you highlight as some of the most important technical milestones of your ECP research?
Tapasya Patki: Over the past year, we have integrated Conductor’s configuration-exploration algorithm into GEOPM as an advanced plugin. We have tested its performance exhaustively on different ECP proxy applications, obtaining some impressive results, and we are in the process of improving it further. Additionally, we have added support for running GEOPM with applications that have been annotated with Caliper, which several ECP applications already use. This allows us to capture and leverage fine-grained semantic information from those applications and to improve their performance or energy efficiency with GEOPM.
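At a high level, configuration exploration works like the following hypothetical sketch, a simplification in the spirit of Conductor rather than the shipped plugin: during an exploration window at the start of a run, each candidate combination of power cap and thread concurrency is timed on one timestep, and the fastest is kept for the remaining timesteps. The apply and run_timestep hooks are placeholders:

```cpp
#include <chrono>
#include <vector>

struct Config {
    double power_cap_watts;  // per-socket power cap to test
    int    num_threads;      // thread concurrency to test
};

void apply(const Config &c);  // hypothetical: set cap and thread count
void run_timestep();          // hypothetical: one application timestep

// Time one timestep under each candidate during the exploration window,
// then return the fastest configuration for the rest of the run.
Config explore(const std::vector<Config> &candidates)
{
    Config best = candidates.front();
    double best_time = 1e300;
    for (const Config &c : candidates) {
        apply(c);
        auto t0 = std::chrono::steady_clock::now();
        run_timestep();
        auto t1 = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(t1 - t0).count();
        if (dt < best_time) { best_time = dt; best = c; }
    }
    return best;
}
```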
What collaboration or integration activities have you been involved in within the ECP?
Tapasya Patki: We are actively collaborating with the ECP Argo and Flux teams to develop a more holistic power management stack, in which the global resource manager and job scheduler will use GEOPM to improve application performance at runtime. This will help minimize scheduling overhead and provide more fine-grained control.
We are collaborating with the Caliper project to provide direct integration of ECP applications into GEOPM. We also have ongoing collaborations with the University of Tokyo, Kyushu University, LRZ, IBM, and Intel to develop a community for GEOPM deployment at large scale and to work together on common challenges.
Has your research taken advantage of any of the ECP’s allocation of computer time?
Tapasya Patki: We are currently using clusters in the Collaboration Zone at Lawrence Livermore National Laboratory for development and testing. Examples include the Cab, Catalyst, and Quartz clusters. Each has a different generation of Intel processors, which allows for cross-generation testing, and the systems range between 300 and 2,000 nodes.
What’s next for this project?
Tapasya Patki: Going forward, we are excited to be looking at integration with ECP applications as well as with HPC resource managers. Beyond the 2019 time frame, in the second phase of ECP, we would like to focus on issues pertaining to extreme heterogeneity, on diverse applications that may include high-throughput applications or scientific workflows, and on adding support for managing network or I/O resources along with power through the GEOPM runtime. We would also like to extend GEOPM to tune performance knobs such as MPI and OpenMP parameters or prefetchers in the processor.