Why the World’s Largest Telescope Relies on GPUs


The telescope will use lasers to create artificial guide stars high in the atmosphere. Its first stone ceremony was attended by the President of Chile. Image courtesy of the European Southern Observatory.
Focusing in on Outer Space

Over at the NVIDIA blog, Jamie Beckett writes that the new European Extremely Large Telescope (E-ELT) will capture images 15 times sharper than the dazzling shots the Hubble telescope has beamed to Earth for the past three decades. Set to begin operations in 2024 on a Chilean mountaintop, it will gather more than 200 times more light than the Hubble, lighting the way for scientists to peer into galaxies far, far away and study the universe in unprecedented detail.

“It could tell us about the origins of the universe, help us understand how galaxies evolve and even predict what will happen to our galaxy over time,” said Damien Gratadour, a professor at Université Paris Diderot and research scientist at LESIA, Observatoire de Paris, in a talk at the GPU Technology Conference this week.

The E-ELT is the first telescope designed with adaptive optics built into the telescope itself, counteracting atmospheric turbulence to deliver sharper, more uniform images. In the same way a road seems to shimmer on a hot day, that turbulence prevents scientists from capturing clear, sharp images of the cosmos.
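At its core, adaptive optics is a control problem dominated by dense linear algebra, which is what makes it such a natural fit for GPUs. The toy Python sketch below shows the basic idea under invented assumptions: every size, name and noise level is hypothetical, and a real E-ELT-class system is vastly larger and runs at kilohertz rates. The shape of the computation, however, is representative: calibrate a reconstructor once, then recover mirror commands from noisy sensor measurements with a single matrix-vector product per frame.

```python
# A toy sketch of the linear algebra at the heart of adaptive optics.
# All sizes and names here are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_slopes, n_actuators = 2400, 800

# Interaction matrix: how the wavefront sensor responds to each
# deformable-mirror actuator (measured once during calibration).
D = rng.standard_normal((n_slopes, n_actuators))

# Least-squares reconstructor: maps sensor slopes to mirror commands.
R = np.linalg.pinv(D)

# Simulate one frame: an unknown turbulent distortion, expressed in
# actuator space, produces noisy sensor measurements.
true_commands = rng.standard_normal(n_actuators)
slopes = D @ true_commands + 0.01 * rng.standard_normal(n_slopes)

# Real-time step: one matrix-vector product per frame, exactly the
# kind of dense operation GPUs accelerate well.
commands = R @ slopes

residual = np.linalg.norm(true_commands - commands) / np.linalg.norm(true_commands)
print(f"relative residual after correction: {residual:.3f}")
```

That per-frame matrix-vector product is the kind of dense workload GPUs handle well, which is one reason the hardware matters here.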

“Deploying extremely large telescopes without adaptive optics is like acquiring a fast sports car and driving it only in first gear,” said Hatem Ltaief, a senior research scientist at King Abdullah University of Science and Technology, who works with Gratadour on the E-ELT.

Hatem Ltaief from KAUST

That’s where GPUs are critical, said Ltaief.

The scientists, who met a few years ago at GTC, are running GPU-powered simulations to predict how different configurations of the E-ELT will affect image quality, testing whether changes to the angle of the telescope’s mirrors, the number of cameras and other factors could improve it.

“We’re looking for the best tradeoff between scientific output and price,” Gratadour said.
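As a rough illustration of that kind of search, the hedged sketch below scores many candidate configurations by running independent simulations in parallel. The function `simulate_image_quality` and its parameters are invented stand-ins, not the team’s actual code; a real sweep would dispatch GPU-based optical simulations rather than a toy formula.

```python
# A hedged sketch of a configuration sweep: score many candidate
# telescope configurations by running independent simulations in
# parallel. `simulate_image_quality` and its parameters are invented
# stand-ins for illustration only.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate_image_quality(mirror_angle_deg: float, n_cameras: int) -> float:
    # Placeholder scoring function; a real run would launch a
    # GPU-based optical simulation for this configuration.
    return -abs(mirror_angle_deg - 1.5) + 0.1 * n_cameras

angles = [0.5, 1.0, 1.5, 2.0]   # hypothetical mirror angles (degrees)
cameras = [4, 6, 8]             # hypothetical camera counts
configs = list(product(angles, cameras))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Each configuration is scored independently, so the sweep
        # parallelizes trivially across workers (or, in the real
        # system, across GPUs).
        scores = list(pool.map(simulate_image_quality, *zip(*configs)))
    best_score, (best_angle, best_n) = max(zip(scores, configs))
    print(f"best: angle={best_angle} deg, cameras={best_n}, score={best_score:.2f}")
```

Because each configuration is independent, this kind of sweep scales naturally across the many GPUs in a system like the DGX-1.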

Simulated Galaxy Observations in Seconds

Ltaief and Gratadour are using two NVIDIA DGX-1 AI supercomputers — one with Tesla P100 GPUs and the other with Tesla V100 GPUs — to run many simulations in parallel and make changes in real time.

“Using the DGX-1, it takes only a few seconds to simulate the observation of several galaxies,” said Ltaief. “A few years ago it would take days to do that.”

The pair next plan to use deep learning to more accurately predict what configuration changes will most effectively improve images. But they declined to discuss their plans in more detail because, Ltaief said, “we really want to come back to GTC next year and talk about it then.”
