Podcast: Will the ExaSky Project be First to Reach Exascale?

In this Let's Talk Exascale podcast, Katrin Heitmann from Argonne describes how the ExaSky project may be one of the first applications to reach exascale levels of performance. “Our current challenge problem is designed to run across the full machine [on both Aurora and Frontier], and doing so on a new machine is always difficult,” Heitmann said. “We know from experience, having been first users in the past on Roadrunner, Mira, Titan, and Summit, and each of them had unique hurdles when the machine hit the floor.”

SC19 Invited Talk: OpenSpace – Visualizing the Universe

Anders Ynnerman from Linköping University gave this invited talk at SC19. “This talk will present and demonstrate the NASA funded open source initiative, OpenSpace, which is a tool for space and astronomy research and communication, as well as a platform for technical visualization research. OpenSpace is a scalable software platform that paves the path for the next generation of public outreach in immersive environments such as dome theaters and planetariums.”

Video: TNG50 cosmic simulation depicts formation of a single massive galaxy

“This cosmic simulation was made possible by the Hazel Hen supercomputer in Stuttgart, where 16,000 cores worked together for more than a year – the longest and most resource-intensive simulation to date. The simulation itself consists of a cube of space measuring more than 230 million light-years in diameter that contains more than 20 billion particles representing dark matter, stars, cosmic gas, magnetic fields, and supermassive black holes (SMBHs).”

Deep Learning at scale for the construction of galaxy catalogs

A team of scientists is now applying the power of artificial intelligence (AI) and high-performance supercomputers to accelerate efforts to analyze the increasingly massive datasets produced by ongoing and future cosmological surveys. “Deep learning research has rapidly become a booming enterprise across multiple disciplines. Our findings show that the convergence of deep learning and HPC can address big-data challenges of large-scale electromagnetic surveys.”
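
The team's own training code isn't shown in the article; the sketch below is a minimal illustration, assuming PyTorch, of the kind of convolutional classifier used for galaxy morphology in survey image cutouts. The `GalaxyCNN` name, the 64x64 single-band cutouts, and the two-class label set are illustrative assumptions, not details from the study.

```python
# Minimal sketch (not the team's code): a small convolutional network of the
# kind used to classify galaxy morphologies in survey image cutouts.
# The class name, image size, and label set are illustrative assumptions.
import torch
import torch.nn as nn

class GalaxyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):  # e.g. spiral vs. elliptical
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-band cutout
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step on a dummy batch standing in for survey cutouts.
model = GalaxyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
```

At survey scale, the same training loop would be wrapped in distributed data parallelism so that many HPC nodes each process a shard of the catalog.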

Podcast: ExaStar Project Seeks Answers in Cosmos

In this podcast, Daniel Kasen from LBNL and Bronson Messer of ORNL discuss advancing cosmology through ExaStar, part of the Exascale Computing Project. “We want to figure out how space and time get warped by gravitational waves, how neutrinos and other subatomic particles were produced in these explosions, and how they sort of lead us down to a chain of events that finally produced us.”

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.
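
Transfers like this one are driven through the Globus service API; below is a minimal sketch using the Python globus_sdk. The client ID, endpoint UUIDs, and file paths are placeholders, and a 2.9 PB production run would involve tuned endpoints and infrastructure rather than this simple flow.

```python
# Minimal sketch of a Globus file transfer via the Python SDK (globus_sdk).
# CLIENT_ID, the endpoint UUIDs, and the file paths are all placeholders.
import globus_sdk

CLIENT_ID = "..."     # a registered Globus native-app client ID
SRC_ENDPOINT = "..."  # UUID of the source endpoint
DST_ENDPOINT = "..."  # UUID of the destination endpoint

# Interactive native-app login to obtain a transfer token.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow()
print("Log in at:", auth.oauth2_get_authorize_url())
tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Describe and submit the transfer; Globus manages retries and integrity checks.
tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="cosmo-sim")
tdata.add_item("/projects/sim/snapshot.h5", "/archive/sim/snapshot.h5")
task = tc.submit_transfer(tdata)
print("Submitted task:", task["task_id"])
```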

Supercomputing Neutron Star Structures and Mergers

Over at XSEDE, Kimberly Mann Bruch & Jan Zverina from the San Diego Supercomputer Center write that researchers are using supercomputers to create detailed simulations of neutron star structures and mergers to better understand gravitational waves, which were detected for the first time in 2015. “XSEDE resources significantly accelerated our scientific output,” noted Paschalidis, whose group has been using XSEDE for well over a decade, since its members were students or postdoctoral researchers. “If I were to put a number on it, I would say that using XSEDE accelerated our research by a factor of three or more, compared to using local resources alone.”

Video: Flying through the Universe with Supercomputing Power

In this video from SC18, Mike Bernhardt from the Exascale Computing Project talked with Salman Habib of Argonne National Laboratory about cosmological computer modeling and simulation. Habib explained that the ExaSky project is focused on developing a caliber of simulation that will use the coming exascale systems at maximal power. “Clearly, there will be different types of exascale machines,” he said, “and so they [DOE] want a simulation code that can use not just one type of computer, but multiple types, and with equal efficiency.”

Watch 5,000 Robots Merge to Map the Universe in 3-D

In this video, scientists describe how the Dark Energy Spectroscopic Instrument (DESI) will measure the effect of dark energy on the expansion of the universe. It will obtain optical spectra for tens of millions of galaxies and quasars, constructing a 3D map spanning the nearby universe to 11 billion light years. “How do you create the largest 3D map of the universe? It’s as easy as teaching 5,000 robots how to ‘dance.’ DESI, the Dark Energy Spectroscopic Instrument, is an experiment that will target millions of distant galaxies by automatically swiveling fiber-optic positioners (the robots) to point at them and gather their light.”
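
DESI's production fiber-assignment software is far more sophisticated, but the core constraint is easy to sketch: each robotic positioner can only reach targets within a small patrol radius on the focal plane, so targets must be matched to positioners that can physically reach them. The toy below, with made-up coordinates, patrol radius, and a naive greedy rule, illustrates the idea (scaled down from 5,000 robots).

```python
# Toy sketch (not DESI's fiberassign pipeline): greedily match robotic fiber
# positioners to targets within each positioner's patrol radius.
# All coordinates and the patrol radius are made-up illustrative values.
import math
import random

PATROL_RADIUS = 6.0  # reach of one positioner on the focal plane (illustrative)

random.seed(1)
positioners = [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(500)]
targets = [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(2000)]

assigned, taken = {}, set()
for i, (px, py) in enumerate(positioners):
    # Point this robot at the nearest unclaimed target it can reach.
    best, best_d = None, PATROL_RADIUS
    for j, (tx, ty) in enumerate(targets):
        if j not in taken:
            d = math.hypot(tx - px, ty - py)
            if d <= best_d:
                best, best_d = j, d
    if best is not None:
        assigned[i] = best
        taken.add(best)

print(f"{len(assigned)} of {len(positioners)} fibers assigned")
```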

The Galactos Project: Using HPC To Run One of Cosmology’s Hardest Challenges

Debbie Bard from NERSC gave this talk at the HPC User Forum. “We present Galactos, a high performance implementation of a novel, O(N^2) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic 3PCF. Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8 PF (peak) and 5.06 PF (sustained) across 9,636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.”
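
The abstract names the key algorithmic trick: around each galaxy, neighbors found with a k-d tree are binned in radius and expanded in spherical harmonics, and products of the resulting coefficients yield the three-point correlation function (3PCF) multipoles without an explicit triple loop over galaxy triplets. The toy NumPy/SciPy sketch below illustrates only that idea; the galaxy counts, radial bins, and normalization are illustrative, and Galactos itself is a heavily optimized Xeon Phi code, not this.

```python
# Toy illustration of the pair-based spherical-harmonic 3PCF trick
# (not Galactos). For each galaxy, neighbor counts in radial bins are
# expanded in Y_lm; summing a_lm(r1) * conj(a_lm(r2)) over m gives
# (unnormalized) 3PCF multipoles from pair work only.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import sph_harm  # signature: sph_harm(m, ell, azimuth, polar)

rng = np.random.default_rng(0)
gals = rng.uniform(0.0, 100.0, size=(2000, 3))  # toy galaxy positions
r_bins = np.array([5.0, 10.0, 15.0])            # radial bin edges (toy values)
n_ell, n_bin = 3, len(r_bins) - 1

tree = cKDTree(gals)
zeta = np.zeros((n_ell, n_bin, n_bin), dtype=complex)  # multipole accumulator

for g in gals:
    nbrs = gals[tree.query_ball_point(g, r_bins[-1])] - g
    r = np.linalg.norm(nbrs, axis=1)
    nbrs, r = nbrs[r > 1e-8], r[r > 1e-8]            # drop the galaxy itself
    theta = np.arctan2(nbrs[:, 1], nbrs[:, 0])       # azimuthal angle
    phi = np.arccos(np.clip(nbrs[:, 2] / r, -1, 1))  # polar angle
    b = np.digitize(r, r_bins) - 1                   # radial bin per neighbor
    ok = (b >= 0) & (b < n_bin)
    for ell in range(n_ell):
        for m in range(-ell, ell + 1):
            y = np.conj(sph_harm(m, ell, theta, phi))
            a = np.array([y[ok & (b == k)].sum() for k in range(n_bin)])
            zeta[ell] += np.outer(a, np.conj(a))     # sum over m of a(r1) a*(r2)

print(zeta.real[0])  # unnormalized ell = 0 multipole over all bin pairs
```

Because each galaxy only contributes pair work against its k-d tree neighbors, the cost stays O(N^2) rather than the O(N^3) of naive triplet counting, which is what makes the full Cori run feasible.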