In this simulation video from the Caltech-Cornell Simulating eXtreme Spacetimes (SXS) project, two black holes merge into one. The images were generated by the Spectral Einstein Code (SpEC). To learn more, the International Science Grid This Week blog caught up with Harald Pfeiffer of the Canadian Institute for Theoretical Astrophysics (CITA) at the University of Toronto:
iSGTW: How resource-intensive is this code – can it do these simulations overnight on a workstation? Or does it need many hundreds or thousands of CPU-hours?
Pfeiffer: Binary compact object simulations (where each object can be either a black hole or a neutron star) require tens to hundreds of thousands of CPU-hours per run. For binary black holes, the high cost is mostly determined by the high accuracy required for gravitational wave detectors (these detectors use our simulations as filters to enhance their sensitivity). For neutron star-black hole and neutron star-neutron star binaries, the high cost is mostly determined by the large number of physical effects that need to be simulated: hydrodynamics, magnetic fields, nuclear physics, neutrinos…
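The "filter" technique Pfeiffer alludes to is matched filtering: a detector cross-correlates its noisy strain data against simulated waveform templates, which sharply boosts sensitivity to signals that match a template. A minimal toy sketch of the idea (the chirp-like template, injection offset, and noise level here are illustrative assumptions, not SpEC output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template: a short decaying chirp standing in for a
# numerical-relativity waveform (purely illustrative).
t = np.linspace(0, 1, 512)
template = np.sin(2 * np.pi * (20 + 40 * t) * t) * np.exp(-3 * t)

# Bury the template in detector-like noise at a known sample offset.
offset = 1000
data = rng.normal(0.0, 0.5, 4096)
data[offset:offset + template.size] += template

# Matched filtering: slide the template across the data and correlate.
# The correlation peaks where the hidden signal aligns with the template.
snr = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(snr))
print(recovered)
```

Even though the injected signal is invisible by eye against the noise, the correlation peak recovers its location; this is why template accuracy, and hence simulation accuracy, matters so much.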
Pfeiffer: Given our CPU requirements, we have to run in parallel. We use MPI and need a moderately fast interconnect. InfiniBand is best; Gigabit Ethernet loses about 20% efficiency. The efficiency loss of GigE is not terrible, and we do run on GigE clusters, since it is often easier to get compute time there.
iSGTW: What kind of architectures does SpEC run on — has it run on clusters? Grids? Clouds? Supercomputers?
Pfeiffer: Beowulf clusters and supercomputers. We run on in-house clusters at Caltech and CITA, and on various supercomputers (Kraken, Ranger, and Lonestar, funded through the NSF TeraGrid, and SciNet at the University of Toronto, funded by Compute Canada).