

Video: TNG50 cosmic simulation depicts formation of a single massive galaxy

“This cosmic simulation was made possible by the Hazel Hen supercomputer in Stuttgart, where 16,000 cores worked together for more than a year – the longest and most resource-intensive simulation to date. The simulation itself consists of a cube of space measuring more than 230 million light-years in diameter that contains more than 20 billion particles representing dark matter, stars, cosmic gas, magnetic fields, and supermassive black holes (SMBHs).”

Deep Learning at scale for the construction of galaxy catalogs

A team of scientists is now applying the power of artificial intelligence (AI) and high-performance supercomputers to accelerate efforts to analyze the increasingly massive datasets produced by ongoing and future cosmological surveys. “Deep learning research has rapidly become a booming enterprise across multiple disciplines. Our findings show that the convergence of deep learning and HPC can address big-data challenges of large-scale electromagnetic surveys.”
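As a rough illustration of the kind of model such pipelines train, here is a minimal sketch of a convolutional classifier for small galaxy image cutouts. It is not the team's actual network; the class count, image size, and architecture are illustrative assumptions.

```python
# Illustrative only: a tiny convolutional classifier for small galaxy image
# cutouts (e.g. spiral vs. elliptical). Class count, image size, and
# architecture are assumptions, not details from the surveys mentioned above.
import torch
import torch.nn as nn

class GalaxyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = GalaxyCNN()
dummy_batch = torch.randn(8, 1, 64, 64)   # 8 single-band 64x64 cutouts
logits = model(dummy_batch)
print(logits.shape)                        # torch.Size([8, 2])
```

Scaling this to survey-sized catalogs is where the HPC side comes in: the same model is trained across many nodes with data-parallel frameworks rather than on a single GPU.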

Podcast: ExaStar Project Seeks Answers in Cosmos

In this podcast, Daniel Kasen from LBNL and Bronson Messer of ORNL discuss advancing cosmology through ExaStar, part of the Exascale Computing Project. “We want to figure out how space and time get warped by gravitational waves, how neutrinos and other subatomic particles were produced in these explosions, and how they sort of lead us down to a chain of events that finally produced us.”

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.
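For readers curious what driving such a transfer looks like in practice, below is a minimal sketch using the Globus Python SDK (globus_sdk). The endpoint UUIDs, token, and paths are placeholders, and exact constructor arguments can differ between SDK versions; this is not the Argonne team's workflow.

```python
# Sketch of submitting a (much smaller) transfer with the Globus Python SDK
# (globus_sdk). Endpoint UUIDs, token, and paths are placeholders; exact
# constructor arguments can differ between SDK versions.
import globus_sdk

TOKEN = "..."            # a valid Globus transfer access token
SRC_ENDPOINT = "..."     # source endpoint UUID
DST_ENDPOINT = "..."     # destination endpoint UUID

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))

tdata = globus_sdk.TransferData(
    tc, SRC_ENDPOINT, DST_ENDPOINT,
    label="simulation snapshot copy",
    sync_level="checksum")          # verify files by checksum
tdata.add_item("/project/sim/snapshots/", "/archive/snapshots/",
               recursive=True)

task = tc.submit_transfer(tdata)
print("submitted task:", task["task_id"])
```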

Supercomputing Neutron Star Structures and Mergers

Over at XSEDE, Kimberly Mann Bruch & Jan Zverina from the San Diego Supercomputer Center write that researchers are using supercomputers to create detailed simulations of neutron star structures and mergers to better understand gravitational waves, which were detected for the first time in 2015. “XSEDE resources significantly accelerated our scientific output,” noted Paschalidis, whose group has been using XSEDE for well over a decade, since its members were students or postdoctoral researchers. “If I were to put a number on it, I would say that using XSEDE accelerated our research by a factor of three or more, compared to using local resources alone.”

Video: Flying through the Universe with Supercomputing Power

In this video from SC18, Mike Bernhardt from the Exascale Computing Project talked with Salman Habib of Argonne National Laboratory about cosmological computer modeling and simulation. Habib explained that the ExaSky project is focused on developing a caliber of simulation that will use the coming exascale systems at maximal power. “Clearly, there will be different types of exascale machines,” he said, “and so they [DOE] want a simulation code that can use not just one type of computer, but multiple types, and with equal efficiency.”

Watch 5,000 Robots Merge to Map the Universe in 3-D

In this video, scientists describe how the Dark Energy Spectroscopic Instrument (DESI) will measure the effect of dark energy on the expansion of the universe. It will obtain optical spectra for tens of millions of galaxies and quasars, constructing a 3D map spanning the nearby universe to 11 billion light years. “How do you create the largest 3D map of the universe? It’s as easy as teaching 5,000 robots how to “dance.” DESI, the Dark Energy Spectroscopic Instrument, is an experiment that will target millions of distant galaxies by automatically swiveling fiber-optic positioners (the robots) to point at them and gather their light.”
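To make the fiber-positioning idea concrete, here is a toy sketch of matching positioners to targets: each robot patrols a small circle on the focal plane and is greedily assigned the nearest unclaimed target within reach. This is a deliberate simplification, not DESI's actual fiber-assignment code, and the patrol radius and coordinates are arbitrary toy values.

```python
# Toy sketch of fiber assignment: each robotic positioner patrols a small
# circle on the focal plane and is greedily matched to the nearest unclaimed
# target within reach. Not DESI's actual fiber-assignment algorithm;
# all numbers are toy values.
import numpy as np

rng = np.random.default_rng(0)
positioners = rng.uniform(0, 100, size=(50, 2))   # positioner centers (toy units)
targets = rng.uniform(0, 100, size=(200, 2))      # candidate galaxy positions
PATROL_RADIUS = 8.0

assigned = {}          # positioner index -> target index
claimed = set()
for p_idx, p in enumerate(positioners):
    dists = np.linalg.norm(targets - p, axis=1)
    for t_idx in np.argsort(dists):
        if dists[t_idx] > PATROL_RADIUS:
            break                       # nothing reachable is left
        if t_idx not in claimed:
            assigned[p_idx] = int(t_idx)
            claimed.add(int(t_idx))
            break

print(f"{len(assigned)} of {len(positioners)} positioners assigned a target")
```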

The Galactos Project: Using HPC To Run One of Cosmology’s Hardest Challenges

Debbie Bard from NERSC gave this talk at the HPC User Forum. “We present Galactos, a high performance implementation of a novel, O(N^2) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic 3PCF. Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8 PF (peak) and 5.06 PF (sustained) across 9,636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.”
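Galactos targets the anisotropic three-point correlation function with spherical harmonic expansions; as a much simpler cousin of that calculation, the sketch below estimates a two-point correlation function by pair counting with a k-d tree (scipy.spatial.cKDTree). Box size, point counts, and separation bins are toy values, and the estimator is the simplest DD/RR form rather than anything used in the talk.

```python
# A much simpler cousin of the 3PCF computed by Galactos: two-point
# correlation estimation via k-d tree pair counting. Toy data and bins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
box = 100.0
data = rng.uniform(0, box, size=(5000, 3))      # mock "galaxy" positions
rand = rng.uniform(0, box, size=(5000, 3))      # random comparison catalog

bins = np.linspace(1.0, 20.0, 11)               # separation bin edges
dd_tree, rr_tree = cKDTree(data), cKDTree(rand)

# cumulative pair counts within each radius, differenced into shells
dd = np.diff(dd_tree.count_neighbors(dd_tree, bins).astype(float))
rr = np.diff(rr_tree.count_neighbors(rr_tree, bins).astype(float))

xi = dd / rr - 1.0                               # simple (DD/RR - 1) estimator
print(np.round(xi, 3))
```

The k-d tree keeps pair counting far below the naive O(N^2) cost for clustered data; Galactos combines a load-balanced version of this data structure with harmonic expansions to make the far more expensive three-point statistic tractable at scale.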

Deep Learning at Scale for Cosmology Research

In this video from Google I/O 2018, Debbie Bard from NERSC describes Deep Learning at scale for cosmology research. “Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic.”

Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software

Gilles Fourestey from EPFL gave this talk at the Swiss HPC Conference. “LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature.”
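To give a flavor of what lens modeling computes, here is a toy illustration of the lens equation beta = theta - alpha(theta) for a singular isothermal sphere, whose deflection has constant magnitude equal to the Einstein radius. This is standard textbook lensing, not LENSTOOL itself, and the Einstein radius is an arbitrary toy value.

```python
# Not LENSTOOL: a toy illustration of the lens equation
# beta = theta - alpha(theta) for a singular isothermal sphere (SIS),
# whose deflection has constant magnitude equal to the Einstein radius.
import numpy as np

THETA_E = 1.0                       # Einstein radius (arcsec), toy value

def sis_deflection(theta):
    """Deflection angle of an SIS lens at image-plane position theta."""
    r = np.linalg.norm(theta)
    return THETA_E * theta / r      # constant magnitude, radial direction

def source_position(theta):
    """Map an image-plane position back to the source plane."""
    return theta - sis_deflection(theta)

# A source at beta = 0.3" on the x-axis is imaged by the SIS at
# theta = beta + theta_E and theta = beta - theta_E along the same axis.
for theta in (np.array([1.3, 0.0]), np.array([-0.7, 0.0])):
    print(theta, "->", np.round(source_position(theta), 3))
```

Codes like LENSTOOL invert this mapping: given the observed image positions, they fit parametric mass distributions so that all images trace back to consistent source positions.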