The Virtual Institute – High Productivity Supercomputing Celebrates 10th Anniversary

The Virtual Institute – High Productivity Supercomputing celebrated its 10th anniversary in June at a workshop held in Seeheim, Germany.

“The mission of the Virtual Institute – High Productivity Supercomputing (VI-HPS) is to improve the quality and accelerate the development process of complex simulation codes in science and engineering that are being designed to run on highly-parallel computer systems. For this purpose, we are developing integrated state-of-the-art programming tools for high-performance computing that assist programmers in diagnosing programming errors and optimizing the performance of their applications.”

The perpetual focus on hardware performance as the primary success metric in high-performance computing often diverts attention from the role people play in producing application output. Yet it is ultimately this output, and the rate at which it can be delivered (in other words, the productivity of HPC), that justifies the huge investments in this technology. The time needed to arrive at a specific result, often called the "time to solution", depends on many factors, including the speed and quality of software development, a step in which people play a major role. Their productivity can clearly be enhanced with tools such as debuggers and performance profilers, which help them find and eliminate errors or diagnose and improve performance.

Ten years ago, the Virtual Institute – High Productivity Supercomputing (VI-HPS) was created with exactly this goal in mind: application developers should be able to focus on the science they want to accomplish instead of spending major portions of their time solving problems related to their software. With initial funding from the Helmholtz Association, the umbrella organization of the major national research laboratories in Germany, the institute was founded on the initiative of Forschungszentrum Jülich together with RWTH Aachen University, Technische Universität Dresden, and the University of Tennessee.

Since then, the members of the institute have developed powerful programming tools, in particular for analyzing the correctness and performance of HPC applications, which are today used across the globe. Major emphasis was placed on defining common interfaces and exchange formats to improve interoperability between these tools and to lower their development cost. A series of international tuning workshops and tutorials has taught hundreds of application developers how to use them. Finally, the institute has organized numerous academic workshops to foster the HPC tools community and to offer young researchers in particular a forum for presenting novel program-analysis methods. Today, the institute encompasses twelve member organizations from five countries.

At the workshop on June 23, keynote speaker Anshu Dubey from Argonne National Laboratory explained that in HPC, unlike in many other areas of software development, all parts of the software are usually themselves under research, leading to an economy of incentives in which pure development is often not appropriately rewarded. In his historical review, Felix Wolf from TU Darmstadt, the spokesman of VI-HPS, looked back on important milestones, such as the bylaws introduced a few years ago to accommodate the rapid expansion of VI-HPS.

In another keynote, Satoshi Matsuoka from the Tokyo Institute of Technology/AIST, Japan, highlighted recent advances in artificial intelligence and Big Data analytics, as well as the challenges these pose for the design of future HPC systems. Finally, all members of VI-HPS presented their latest productivity-related research and outlined their future strategies.
