SDSC is trying something uncommon in HPC: allocating real-time access to support “event-driven” science. One example of how the system would be used is to support analysis in the immediate aftermath of an earthquake.
When an earthquake greater than magnitude 3.5 strikes Southern California, typically once or twice a month, [Caltech computational seismologist Jeroen] Tromp expects that his simulation code will need to use 144 processors of the OnDemand system for about 28 minutes. Shortly after the earthquake strikes, a job will automatically be submitted and immediately allowed to run. The code will launch, and any “normal” jobs running at the time will be interrupted to make way for the on-demand job.
The OnDemand machine itself is a 256-core, 2.4 TFLOPS Dell cluster. You can read the whole story at SC Online.
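To make the trigger mechanism concrete, here’s a minimal sketch of what the event-driven side might look like. Everything in it is my assumption rather than a detail from the story: a USGS-style GeoJSON earthquake feed, a PBS-style scheduler with a preemptive “ondemand” queue, and placeholder names for the bounding box, resource request, and job script.

```python
import json
import subprocess
import time
import urllib.request

# Assumed feed and thresholds; the URL, queue name, and bounding box are
# illustrative placeholders, not details from the SDSC deployment.
FEED_URL = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_hour.geojson"
MIN_MAGNITUDE = 3.5
SOCAL_LAT = (32.0, 36.0)     # rough Southern California bounding box
SOCAL_LON = (-121.0, -114.0)


def in_socal(lon, lat):
    """Crude bounding-box test for Southern California."""
    return SOCAL_LAT[0] <= lat <= SOCAL_LAT[1] and SOCAL_LON[0] <= lon <= SOCAL_LON[1]


def submit_on_demand_job(event_id):
    """Submit the simulation to a hypothetical preemptive 'ondemand' queue.

    144 cores = 18 nodes x 8 cores on the 256-core system described above;
    'run_simulation.sh' is a placeholder job script.
    """
    subprocess.run(
        ["qsub", "-q", "ondemand",
         "-l", "nodes=18:ppn=8",
         "-v", "EVENT_ID=" + event_id,
         "run_simulation.sh"],
        check=True,
    )


def main():
    seen = set()  # event IDs already acted on, to avoid duplicate submissions
    while True:
        with urllib.request.urlopen(FEED_URL) as resp:
            feed = json.load(resp)
        for quake in feed["features"]:
            mag = quake["properties"]["mag"]
            lon, lat, _depth = quake["geometry"]["coordinates"]
            event_id = quake["id"]
            if (event_id not in seen and mag is not None
                    and mag >= MIN_MAGNITUDE and in_socal(lon, lat)):
                seen.add(event_id)
                submit_on_demand_job(event_id)
        time.sleep(60)  # poll roughly once a minute


if __name__ == "__main__":
    main()
```

The interesting design choice isn’t in the trigger script at all; it’s on the scheduler side, where the “ondemand” queue has to preempt (checkpoint, or kill and requeue) whatever normal jobs hold those 144 cores, which is what the quote above describes.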
The concept isn’t new (the DoD HPC Modernization Program, for example, has been experimenting with a large-scale interactive supercomputer for some time), but the earlier deployments I’m aware of have served narrower missions: dedicated support for live fire tests, application debugging, or scientific visualization.
The SDSC application is interesting, and I see demand for this kind of thing increasing over time. I’d like to see FEMA step up to fund regional- and national-scale computational disaster response and planning centers (think hurricanes, fires, floods, plagues, and terrorist acts). When not responding interactively to disasters, the systems could run planning scenarios for major risk areas around the country.