Andrew Jones posted his latest analysis of the HPC universe yesterday on ZDNet/UK. This month, Andrew takes on what has become, and will continue to be, a serious crux in the future direction of supercomputing: the survival of the legacy application. Code is code, right? Wrong! As the HPC and technology industry continues to ebb and flow, we continually gain capability and performance from new hardware components without a clear view of how those components will affect the future of our applications. More often than not, we find ourselves scrambling to modify [not port] legacy codes just to get them to run, albeit inefficiently, on new architectures and platforms. We find our code driving the science, instead of the science driving the code.
As some parts of the community consider the prospect of hundreds of petaflops and exascale computing — which may only be a few years away — others are starting to ask whether some of their applications are ever going to make it.
Proponents of this view argue that some legacy applications are coded in ways, or rely on algorithms, that make evolution impossible. The effort of code refactoring and algorithm development would be greater than the effort of starting from scratch.
Others put it like this: “Don’t let the code be the science.” If you focus on the engineering challenge or the science, then the code constitutes an instrument. And as one instrument becomes incapable of addressing the scale of the problem required, move to a different instrument.
Andrew goes on to hypothesize that we, as an industry, may begin to develop two classes of applications: first, those applications that have reached their theoretical scientific maturity and thus will never exploit the next generation of platforms; and second, those applications created specifically for the next generation of supercomputing platforms. The latter are the applications that take the next algorithmic and scientific steps in resolution, accuracy and timeliness. Thus, the balancing act begins.
That situation creates a difficult balancing act for researchers, developers and funding agencies or company heads. They have to continue to provide the essential investment in scaling, optimisation, algorithm evolution and scientific advances in existing codes so that they can be used on high-end and medium-term mid-scale HPC platforms and avoid a possibly lethal competitive gap opening. At the same time, they must divert sufficient effort into the development of codes to enable the next step in science or engineering design by running on the most powerful supercomputers of the future. Both tracks of investment are necessary for short- and long-term survival.
Software will continue to be the Achilles' heel of HPC. Regardless of the final outcome, Andrew notes that training and employing skilled HPC applications talent will be key to the future of any application, be it radical or benign. As always, his analysis is thought-provoking and spot on. I encourage you to read this month's Andrew Jones feature here on ZDNet/UK.