Podcast: ExaScale is a 4-way Competition

In this podcast, the RadioFree team discusses the four-way competition for exascale computing between the US, China, Japan, and Europe. “The European effort is targeting two pre-exascale installations in the coming months, and two actual exascale installations in the 2022-2023 timeframe, at least one of which will be based on European technology.”

The Challenges of Updating Scientific Codes for New HPC Architectures

In this video from PASC19 in Zurich, Benedikt Riedel from the University of Wisconsin describes the challenges researchers face when updating their scientific codes for new HPC architectures. He then describes his work on the IceCube Neutrino Observatory.

Supercomputing Potential Impacts of a Major Quake by Building Location and Size

National lab researchers from Lawrence Livermore and Berkeley Lab are using supercomputers to quantify earthquake hazard and risk across the Bay Area. Their work focuses on the impact of high-frequency ground motion on thousands of representative buildings of different sizes spread across the region. “While working closely with the NERSC operations team in a simulation last week, we used essentially the entire Cori machine – 8,192 nodes, and 524,288 cores – to execute an unprecedented 5-hertz run of the entire San Francisco Bay Area region for a magnitude 7 Hayward Fault earthquake.”
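As a quick sanity check on the quoted run (a trivial sketch using only the numbers from the quote above), the node and core counts imply how many cores were used per Cori node:

```python
# Figures quoted for the full-machine Cori run
nodes = 8_192
cores = 524_288

# Cores used per node implied by the quote
print(cores // nodes)  # 64
```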

NEC Embraces Open Source Frameworks for SX-Aurora Vector Computing

In this video from ISC 2019, Dr. Erich Focht from NEC Deutschland GmbH describes how the company is embracing open source frameworks for the SX-Aurora TSUBASA Vector Supercomputer. “Until now, with the existing server processing capabilities, developing complex models on graphical information for AI has consumed significant time and host processor cycles. NEC Laboratories has developed the open-source Frovedis framework over the last 10 years, initially for parallel processing in Supercomputers. Now, its efficiencies have been brought to the scalable SX-Aurora vector processor.”

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.

Modular Supercomputing Moves Forward in Europe

In this video from ISC 2019, Thomas Lippert from the Jülich Supercomputing Centre describes how modular supercomputing is paving the way forward for HPC in Europe. “The Modular Supercomputer Architecture (MSA) is an innovative approach to build High-Performance Computing (HPC) and High-Performance Data Analytics (HPDA) systems by coupling various compute modules, following a building-block principle. Each module is tailored to the needs of a specific group of applications, and all modules together behave as a single machine.”

HPE to Build Bridges-2 Supercomputer at PSC

NSF is providing $10 million in funding for a new supercomputer at the Pittsburgh Supercomputing Center (PSC), a joint research center of Carnegie Mellon University and the University of Pittsburgh. “We designed Bridges-2 to drive discoveries that will come from the rapid evolution of research, which increasingly needs new, scalable ways for combining large, complex data with high-performance simulation and modeling.”

Jülich Supercomputing Centre Announces Quantum Computing Research Partnership with Google

Today the Jülich Supercomputing Centre announced it is partnering with Google in the field of quantum computing research. The partnership will include joint research and expert training in the fields of quantum technologies and quantum algorithms, as well as the mutual use of quantum hardware. “The German research center will operate and make publicly accessible a European quantum computer with 50 to 100 superconducting qubits, to be developed within the EU’s Quantum Flagship Program, a large-scale initiative in the field of quantum technologies funded at the €1 billion level on a 10-year timescale.”

Announcing the Student Cluster Competition Leadership List

The HPC-AI Advisory Council and Dan Olds have just posted the first-ever Student Cluster Competition Leadership List. “The list is a ranking of every institution that has ever competed in a cluster competition. The teams are ranked by the number of times they’ve participated and the awards they’ve earned throughout the years. It covers every cluster competition including the ISC competition in Europe, the SC competition in the US, and the Asian competition in China.”

Video: Data-Centric Parallel Programming

In this slidecast, Torsten Hoefler from ETH Zurich presents: Data-Centric Parallel Programming. “To maintain performance portability in the future, it is imperative to decouple architecture-specific programming paradigms from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that enables separating code definition from its optimization.”
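The core idea can be illustrated with a toy sketch (this is not the actual SDFG or DaCe API, just an illustration of the principle): the scientific computation is captured once as a dataflow graph, and an optimization pass such as operator fusion then transforms the graph rather than the original code.

```python
# Toy illustration of data-centric programming (NOT the real SDFG API):
# computation is defined as a dataflow graph; optimization (operator
# fusion) rewrites the graph, leaving the definition untouched.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    op: object                                   # callable producing this node's value
    inputs: list = field(default_factory=list)   # names of upstream nodes

def execute(graph, feeds):
    """Evaluate nodes in dependency order; results keyed by node name."""
    results = dict(feeds)
    pending = list(graph)
    while pending:
        n = pending.pop(0)
        if all(i in results for i in n.inputs):
            results[n.name] = n.op(*(results[i] for i in n.inputs))
        else:
            pending.append(n)
    return results

def fuse(graph):
    """Fuse producer->consumer chains (single consumer, single input)
    into one node, without changing the computation's definition."""
    uses = {}
    for n in graph:
        for i in n.inputs:
            uses.setdefault(i, []).append(n)
    fused, skip = [], set()
    for n in graph:
        if n.name in skip:
            continue
        while len(uses.get(n.name, [])) == 1 and uses[n.name][0].inputs == [n.name]:
            c = uses[n.name][0]
            # compose the two ops; bind via defaults to capture current values
            n = Node(c.name, lambda *a, f=n.op, g=c.op: g(f(*a)), n.inputs)
            skip.add(c.name)
        fused.append(n)
    return fused

# y = 2*x + 1 over a vector, expressed as two elementwise graph nodes
graph = [
    Node("scaled", lambda x: [v * 2 for v in x], ["x"]),
    Node("shifted", lambda s: [v + 1 for v in s], ["scaled"]),
]
out = execute(graph, {"x": [1, 2, 3]})
print(out["shifted"])    # [3, 5, 7]
print(len(fuse(graph)))  # 1 -- two nodes fused, same result
```

Because the optimization operates on the graph representation, the same computation definition can be retargeted or re-optimized for different architectures, which is the decoupling the talk describes.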