
The Galactos Project: Using HPC To Run One of Cosmology’s Hardest Challenges

Debbie Bard from NERSC gave this talk at the HPC User Forum. “We present Galactos, a high performance implementation of a novel, O(N^2) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic 3PCF. Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8 PF (peak) and 5.06 PF (sustained) across 9636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.”
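The core idea behind the algorithm — querying each galaxy’s neighbors with a k-d tree and expanding them in spherical harmonics per radial bin — can be sketched in a few lines of Python. This is a simplified, single-threaded illustration of the multipole approach, not the Galactos implementation itself; the function name and parameters are illustrative, and it omits weights, periodic boundaries, and edge corrections.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import sph_harm

def threepcf_multipoles(points, r_edges, ell_max=2):
    """Toy 3PCF multipole estimator: for each galaxy, expand its neighbors
    in spherical harmonics per radial bin, then cross-correlate bins.
    Illustrative sketch only, not the Galactos code."""
    tree = cKDTree(points)                       # k-d tree for range queries
    n_bins = len(r_edges) - 1
    zeta = np.zeros((ell_max + 1, n_bins, n_bins))
    for i, p in enumerate(points):
        idx = [j for j in tree.query_ball_point(p, r_edges[-1]) if j != i]
        if not idx:
            continue
        d = points[idx] - p                      # separations from primary
        r = np.linalg.norm(d, axis=1)
        polar = np.arccos(np.clip(d[:, 2] / r, -1.0, 1.0))
        azim = np.arctan2(d[:, 1], d[:, 0]) % (2.0 * np.pi)
        b = np.digitize(r, r_edges) - 1          # radial bin of each neighbor
        keep = (b >= 0) & (b < n_bins)
        b, polar, azim = b[keep], polar[keep], azim[keep]
        for ell in range(ell_max + 1):
            a_lm = np.zeros((2 * ell + 1, n_bins), dtype=complex)
            for k, m in enumerate(range(-ell, ell + 1)):
                y = sph_harm(m, ell, azim, polar)   # scipy order: azimuth, polar
                for bin_ in range(n_bins):
                    a_lm[k, bin_] = y[b == bin_].sum()
            # zeta_ell(r1, r2) ~ sum_m a_lm(r1) conj(a_lm(r2)), over all primaries
            zeta[ell] += np.real(np.einsum('mb,mc->bc', a_lm, a_lm.conj()))
    return zeta
```

Because the harmonic coefficients are accumulated per radial bin before being cross-correlated, each primary galaxy costs O(N) rather than O(N^2), which is what brings the full pair-of-pairs computation down to O(N^2) overall.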

Deep Learning at Scale for Cosmology Research

In this video from Google I/O 2018, Debbie Bard from NERSC describes Deep Learning at scale for cosmology research. “Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic.”

Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software

Gilles Fourestey from EPFL gave this talk at the Swiss HPC Conference. “LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature.”
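To see what “modeling mass distribution” means in the lensing context, consider the simplest case LENSTOOL generalizes: a point-mass lens, where the lens equation β = θ − θ_E²/θ relates the true source position β to the observed image positions θ. The sketch below is a textbook toy, not LENSTOOL’s API; the function name is illustrative.

```python
import numpy as np

def point_lens_images(beta, theta_e):
    """Image positions for a point-mass lens (angles in units of your choice).
    Solves the lens equation beta = theta - theta_e**2 / theta, i.e. the
    quadratic theta**2 - beta*theta - theta_e**2 = 0, giving two images."""
    disc = np.sqrt(beta ** 2 + 4.0 * theta_e ** 2)
    return (beta + disc) / 2.0, (beta - disc) / 2.0
```

For a source exactly behind the lens (β = 0) the two solutions collapse onto ±θ_E, the Einstein ring. Real cluster models invert this mapping for parametric mass profiles with many components, which is where the heavy computation comes in.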

Why the World’s Largest Telescope Relies on GPUs

Over at the NVIDIA blog, Jamie Beckett writes that the new European-Extremely Large Telescope, or E-ELT, will capture images 15 times sharper than the dazzling shots the Hubble telescope has beamed to Earth for the past three decades. “[Researchers] are running GPU-powered simulations to predict how different configurations of E-ELT will affect image quality. Changes to the angle of the telescope’s mirrors, different numbers of cameras and other factors could improve image quality.”

HACC: Fitting the Universe inside a Supercomputer

Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. “In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one.”
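At the heart of any such simulation is a particle update loop. HACC itself couples long-range particle-mesh solves with tuned short-range kernels, but the basic kick-drift-kick leapfrog step it builds on can be sketched with a direct O(N^2) gravity sum. This is a toy illustration, not HACC code; the function name, softening, and units are all assumptions for the example.

```python
import numpy as np

def leapfrog_step(pos, vel, masses, dt, softening=0.1, G=1.0):
    """One kick-drift-kick leapfrog step with direct-summation gravity.
    pos, vel: (N, 3) arrays; masses: (N,) array. Toy sketch, not HACC."""
    def accel(p):
        d = p[None, :, :] - p[:, None, :]          # pairwise separations d[i,j] = p[j] - p[i]
        r2 = (d ** 2).sum(-1) + softening ** 2     # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)              # remove self-interaction
        return G * (d * inv_r3[..., None] * masses[None, :, None]).sum(axis=1)
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    pos = pos + dt * vel                # full drift
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    return pos, vel
```

Production codes replace the O(N^2) sum with mesh and tree methods precisely because, at trillions of particles, a direct sum is hopeless — but the time-stepping structure is the same.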

SC17 Keynote Looks at the SKA Telescope: Life, the Universe, and Computing

In this special guest feature, Robert Roe reports from the SC17 conference keynote. “Philip Diamond, director general of SKA and Rosie Bolton, SKA regional centre project scientist and project scientist for the international engineering consortium designing the high performance computing systems used in the project, took to the stage to highlight the huge requirements for computation and data processing required by the SKA project.”

Berkeley Lab-led Collaborations win HPC Innovation Awards

Two Berkeley Lab-led projects—Celeste and Galactos—were honored with Hyperion Research’s 2017 HPC Innovation Excellence Awards for “the outstanding application of HPC for business and scientific achievements.” The HPC Innovation Excellence awards are designed to showcase return on investment and success stories involving HPC; to help other users better understand the benefits of adopting HPC; and to help justify HPC investments, including for small and medium-size enterprises.

Podcast: Optimizing Cosmos Code on Intel Xeon Phi

In this TACC podcast, Cosmos code developer Chris Fragile joins host Jorge Salazar for a discussion on how researchers are using supercomputers to simulate the inner workings of black holes. “For this simulation, the manycore architecture of KNL presents new challenges for researchers trying to get the best compute performance. This is a computer chip that has lots of cores compared to some of the other chips one might have interacted with on other systems,” McDougall explained. “More attention needs to be paid to the design of software to run effectively on those types of chips.”

Video: Supercomputing Models Enable Detection of a Cosmic Cataclysm

In this podcast, Peter Nugent from Berkeley Lab explains how scientists confirmed the first-ever measurement of the merger of two neutron stars and its explosive aftermath. “Simulations succeeded in modeling what would happen in an incredibly complex phenomenon like a neutron star merger. Without the models, we probably all would have been mystified by exactly what we were seeing in the sky.”

Illinois Supercomputers Tag Team for Big Bang Simulation

Researchers are tapping Argonne and NCSA supercomputers to tackle the unprecedented amounts of data involved with simulating the Big Bang. “Researchers performed cosmological simulations on the ALCF’s Mira supercomputer, and then sent huge quantities of data to UI’s Blue Waters, which is better suited to perform the required data analysis tasks because of its processing power and memory balance.”