Podcast: A Shift to Modern C++ Programming Models

In this Code Together podcast, Alice Chan from Intel and Hal Finkel from Argonne National Lab discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, a need made especially pressing by the rise of data-intensive workloads like artificial intelligence and machine learning. “We discuss the important shift to modern C++ programming models, and how the cross-industry oneAPI initiative and DPC++ bring much-needed portable performance to today’s developers.”

Supercomputing the San Andreas Fault with CyberShake

With help from DOE supercomputers, a USC-led team expands models of the fault system beneath its feet, aiming to predict its outbursts. For their 2020 INCITE work, SCEC scientists and programmers will have access to 500,000 node hours on Argonne’s Theta supercomputer, which delivers as much as 11.69 petaflops. The team is using Theta “mostly for dynamic earthquake ruptures,” Goulet says. “That is using physics-based models to simulate and understand details of the earthquake as it ruptures along a fault, including how the rupture speed and the stress along the fault plane change.”

Accelerating vaccine research for COVID-19 with HPC and AI

In this special guest feature, Peter Ungaro from HPE writes that HPC is playing a leading role in the fight against COVID-19, supporting the urgent effort to find a vaccine that will save lives and reduce suffering worldwide. “At HPE, we are committed to advancing the way we live and work. As a world leader in HPC and AI, we recognize the impact we can make by applying modeling, simulation, machine learning and analytics capabilities to data to accelerate insights and discoveries that were never before possible.”

Video: Preparing to program Aurora at Exascale – Early experiences and future directions

Hal Finkel from Argonne gave this talk at IWOCL / SYCLcon 2020. “Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date.”

DOE INCITE program seeks proposals for 2021

The DOE INCITE program has issued its Call for Proposals. “Open to researchers from academia, industry and government agencies, the INCITE program is aimed at large-scale scientific computing projects that require the power and scale of DOE’s leadership-class supercomputers. The program will award up to 60 percent of the allocable time on Summit, the OLCF’s 200-petaflop IBM AC922 machine, and Theta, the ALCF’s 12-petaflop Cray XC40 system.”

Podcast: AI for Science

In this podcast, the Radio Free HPC team looks at the AI for Science program coming out of Argonne National Laboratory. “This is one of the biggest potential changes in our industry and well worth the investigation. But figuring out where AI fits into the traditional world of research and simulation is a difficult problem. Henry points out that nearly every grant proposal needs to include ‘AI’ in order to get serious consideration.”

Video: Profiling Python Workloads with Intel VTune Amplifier

Paulius Velesko from Intel gave this talk at the ALCF Many-Core Developer Sessions. “This talk covers efficient profiling techniques that can help to dramatically improve the performance of code by identifying CPU and memory bottlenecks. We will demonstrate how to profile a Python application using Intel VTune Amplifier, a full-featured profiling tool.”
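As a rough illustration of the workflow the talk describes, here is a minimal sketch: a toy Python workload with an obvious hotspot, plus (in comments) the typical command-line invocations used to collect and report a hotspots profile. The script name, workload, and exact commands are assumptions for illustration, not steps taken from the talk; the collector command is amplxe-cl in older VTune Amplifier releases and vtune in newer ones.

    # profile_demo.py -- hypothetical toy workload with a deliberate hotspot.
    #
    # Collect a hotspots profile (command name depends on the VTune release):
    #   vtune -collect hotspots -result-dir r000hs -- python3 profile_demo.py
    #   (older releases: amplxe-cl -collect hotspots -- python3 profile_demo.py)
    # Then print a summary of the collected result:
    #   vtune -report summary -result-dir r000hs

    import math

    def slow_distance_sum(n):
        """Deliberately naive O(n^2) pairwise-distance loop; it should
        dominate the profile as the top hotspot."""
        points = [(i * 0.5, i * 0.25) for i in range(n)]
        total = 0.0
        for x1, y1 in points:
            for x2, y2 in points:
                total += math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
        return total

    if __name__ == "__main__":
        print(slow_distance_sum(2000))

In a hotspots report, slow_distance_sum would appear at the top of the CPU-time ranking, which is the kind of bottleneck identification the talk demonstrates.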

Podcast: How Community Collaboration Drives Compiler Technology at the LLVM Project

In this Let’s Talk Exascale podcast, Hal Finkel of Argonne National Laboratory describes how community collaboration is driving compiler infrastructure at the LLVM project. “LLVM is important to a wide swath of technology professionals. Contributions shaping its development have come from individuals, academia, DOE and other government entities, and industry, including some of the most prominent tech companies in the world, both inside and outside of the traditional high-performance computing space.”

New Leaders Join Exascale Computing Project

The US Department of Energy’s Exascale Computing Project has announced three leadership staff changes within the Hardware and Integration (HI) group. “Over the past several months, ECP’s HI team has been adapting its organizational structure and key personnel to prepare for the next phase of exascale hardware and software integration.”

Argonne Publishes AI for Science Report

Argonne National Laboratory has published a comprehensive AI for Science report based on a series of town hall meetings held in 2019. Hosted by Argonne, Oak Ridge, and Berkeley National Laboratories, the four meetings were attended by more than 1,000 U.S. scientists and engineers. The goal of the town hall series was to examine scientific opportunities in artificial intelligence (AI), Big Data, and high-performance computing (HPC) over the next decade, and to capture the big ideas, grand challenges, and next steps to realizing these opportunities.