This is a contributed piece by regular reader Bill Feiereisen.
What do you think high performance computing will be like in 2020? We speak often of the compute machinery itself and the programming models of the future, but do you ever speculate about the new things it will be used for, and the way it can change our lives?
At the annual High Performance Computing and Communications conference in Rhode Island this year, Bob Feldman of HPC Marketing organized a panel discussion around the question “Supercomputing: Where are we going?” I enjoyed the discussion. There was much talk of possible futures for hardware, but it also spurred thoughts about the users of the future. As high-end machines move to petaFLOPS and beyond, and even modest computers deliver teraFLOPS, HPC horsepower is becoming available to many organizations and businesses that have never seen it before. But how will they use it?
The missing middle
There’s been much discussion over the last few years about the mid-range between desktop machines and top-end supercomputing. Blue Collar Computing, as proposed by Stan Ahalt, and the “missing middle,” as discussed by the Council on Competitiveness, now represent computing horsepower that was only a dream for the top supercomputing centers a few years ago. The raw compute power is there for the missing middle, but what would some of these future mid-range users do with it? I think there are users out there who don’t yet know that they will be HPC users. They will drive HPC business in the future, and they might not even know they’re using HPC.
Imagine a small architectural firm that has won a contract from the city to design a bridge. Its CAD/CAM and rendering software lets it deliver a beautiful design and computer-generated visualizations to the city, but how does the firm make sure it has carried out all the structural analysis needed to protect the public (and itself)? I can foresee a time, not too distant, when the CAD/CAM files could be delivered to another company that provides structural analysis as a service, returning a standard package of analyses for all of the loading and structural conditions to which the bridge might be subjected. Perhaps this will even be automated, with the files submitted to “software as a service” that provides the analysis and compute horsepower without the architect even knowing that she has invoked high performance computing.
Personal genomics is often in the popular press these days. Companies like 23andMe and deCODEme offer you an analysis of your personal genome right now, but in reality genomic understanding and its real use in clinical practice are still some years off. In the April 1 edition of Nature, in “Multiple personal genomes await,” Craig Venter wrote:
Even if we had all this information today, we wouldn’t be able to make use of it because we don’t have the computational infrastructure to compare even thousands of genotypes and phenotypes with each other. The need for such an analysis could be the best justification for building a proposed ‘exascale’ supercomputer, which would run 1,000 times faster than today’s fastest computers.
From heroic effort to daily routine
Venter is writing about the monumental computing task that lies in the immediate future of understanding genomics and its relation to individual health. But can you imagine a not-too-distant day when this becomes a routine part of your health care?
The clinical connections will already have been made through the research that Venter describes. Your doctor prescribes a complete sequencing of your genome, much as she prescribes a blood test today, and your genome (and transcriptome and metabolome) is matched against worldwide databases for diagnosis and treatment. This will still be a substantial computing task, but she might perform it right from the office by invoking services across the net. And again, she would probably not know or care that she was invoking high performance computing.
Both of these scenarios have something in common. The doctor and the architect are not HPC users as we know them today, yet they are accessing invisible HPC resources, and they are doing so without any of the specialized knowledge that is currently required. Are these two characteristics to strive for? Are they two of the characteristics that will define the missing middle in our near future: users who don’t know they are users, and HPC that is invisible?
Dr. William (Bill) Feiereisen is currently the director of HPC for Lockheed Martin. Before holding this position he was the Division Leader of the Computer and Computational Sciences Division at Los Alamos National Laboratory, and prior to that he spent fifteen years at NASA Ames Research Center, first as a computational scientist and later as the leader of the NASA Advanced Computing Facility (NAS). His background is in turbulence modeling and the fluid mechanics and gas dynamics of hypersonic reentry flows.