
Video: Deep Learning for Real-Time Gravitational Wave Discovery

Scientists at NCSA have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. This new approach will enable astronomers to study gravitational waves using minimal computational resources, reducing time to discovery and increasing the scientific reach of gravitational wave astrophysics. This innovative research was recently published in Physics Letters B.

George and Huerta worked with NVIDIA and Wolfram researchers to create this demo to visualize the architecture of Deep Filtering, and to gain insight into its neuronal activity during the detection and characterization of real gravitational wave events. The demo highlights all the components of Deep Filtering, exhibiting its detection sensitivity and computational performance.

Combining deep learning algorithms, numerical relativity simulations of black hole mergers (obtained with the Einstein Toolkit run on the Blue Waters supercomputer), and data from the LIGO Open Science Center, NCSA Gravity Group researchers Daniel George and Eliu Huerta produced Deep Filtering, an end-to-end time-series signal processing method.

Deep Filtering achieves similar sensitivities and lower errors compared to established gravitational wave detection algorithms, while being far more computationally efficient and more resilient to noise anomalies. The method allows faster-than-real-time processing of gravitational waves in LIGO’s raw data, and also enables new physics, since it can detect new classes of gravitational wave sources that may go unnoticed by existing detection algorithms. George and Huerta are extending this method to identify, in real time, electromagnetic counterparts to gravitational wave events in future LSST data.
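The core idea behind Deep Filtering, feeding raw detector strain directly into a 1D convolutional neural network that outputs a detection score, can be sketched in miniature. The layer shapes, kernel widths, and random weights below are illustrative placeholders only, not the published architecture or trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=1):
    """Valid-mode 1D convolution: x is (channels, length), kernels is (out, in, k)."""
    c_in, n = x.shape
    c_out, _, k = kernels.shape
    out_len = (n - k) // stride + 1
    out = np.zeros((c_out, out_len))
    for o in range(c_out):
        for t in range(out_len):
            out[o, t] = np.sum(kernels[o] * x[:, t * stride:t * stride + k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def deep_filter_sketch(strain):
    """Toy forward pass: raw strain -> conv -> ReLU -> global max pool -> sigmoid score."""
    x = strain[None, :]                            # one input channel
    k1 = rng.normal(scale=0.1, size=(4, 1, 16))    # illustrative (untrained) weights
    h = relu(conv1d(x, k1, stride=4))
    pooled = h.max(axis=1)                         # global max pool per channel
    w = rng.normal(scale=0.1, size=4)
    return 1.0 / (1.0 + np.exp(-pooled @ w))       # detection score in (0, 1)

# A one-second segment of pure noise at a toy 1024 Hz sampling rate:
segment = rng.normal(size=1024)
score = deep_filter_sketch(segment)
print(0.0 < score < 1.0)  # True
```

A trained network of this shape slides over the raw data stream, which is what makes faster-than-real-time processing possible: inference is a fixed, small number of multiply-adds per segment, unlike template-bank matched filtering.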

NCSA’s Gravity Group leveraged resources from NCSA’s Innovative Systems Laboratory and the Blue Waters supercomputer, and collaborated with talented interdisciplinary staff at the University of Illinois. Also critical to this research were the GPUs (Tesla P100 and DGX-1) provided by NVIDIA, which enabled accelerated training of neural networks. Wolfram Research also played an important role, as the Wolfram Language was used in creating this deep learning framework.

This work was awarded first place in the ACM Student Research Competition at SC17, and also received the Best Poster Award at the 24th IEEE International Conference on HPC, Data, and Analytics. The research was presented as a contributed talk at the NIPS 2017 Workshop on Deep Learning for the Physical Sciences.
