Video: Preparing to program Aurora at Exascale – Early experiences and future directions

Hal Finkel from Argonne gave this talk at IWOCL / SYCLcon 2020. “Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date.”

DOE INCITE program seeks proposals for 2021

The DOE INCITE program has issued its Call for Proposals. “Open to researchers from academia, industry and government agencies, the INCITE program is aimed at large-scale scientific computing projects that require the power and scale of DOE’s leadership-class supercomputers. The program will award up to 60 percent of the allocable time on Summit, the OLCF’s 200-petaflop IBM AC922 machine, and Theta, the ALCF’s 12-petaflop Cray XC40 system.”

Podcast: AI for Science

In this podcast, the Radio Free HPC team looks at the AI for Science program coming out of Argonne National Laboratory. “This is one of the biggest potential changes in our industry and well worth the investigation. But figuring out where AI fits into the traditional world of research and simulation is a difficult problem. Henry points out that nearly every grant proposal needs to include ‘AI’ in order to get serious consideration.”

Video: Profiling Python Workloads with Intel VTune Amplifier

Paulius Velesko from Intel gave this talk at the ALCF Many-Core Developer Sessions. “This talk covers efficient profiling techniques that can help dramatically improve the performance of code by identifying CPU and memory bottlenecks. We will demonstrate how to profile a Python application using Intel VTune Amplifier, a full-featured profiling tool.”
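
For readers who want to try this themselves, here is a minimal sketch of the workflow. The script and its hotspot function are invented for illustration, and the collection command is shown as a comment (assuming the amplxe-cl driver that shipped with VTune Amplifier; newer releases of the tool rename it to vtune):

```python
# naive_sum.py -- a deliberately unoptimized workload to profile.
# The file name and hotspot function are illustrative, not from the talk.

def naive_sum(n):
    """Sum of squares via a pure-Python loop -- a typical CPU hotspot."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(naive_sum(10_000_000))

# A typical command-line collection with VTune Amplifier looks like:
#
#   amplxe-cl -collect hotspots -- python naive_sum.py
#
# The resulting report attributes time to Python functions, so loops
# like the one above show up directly as CPU hotspots.
```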

Podcast: How Community Collaboration Drives Compiler Technology at the LLVM Project

In this Let’s Talk Exascale podcast, Hal Finkel of Argonne National Laboratory describes how community collaboration is driving compiler infrastructure at the LLVM project. “LLVM is important to a wide swath of technology professionals. Contributions shaping its development have come from individuals, academia, DOE and other government entities, and industry, including some of the most prominent tech companies in the world, both inside and outside of the traditional high-performance computing space.”

New Leaders Join Exascale Computing Project

The US Department of Energy’s Exascale Computing Project has announced three leadership staff changes within the Hardware and Integration (HI) group. “Over the past several months, ECP’s HI team has been adapting its organizational structure and key personnel to prepare for the next phase of exascale hardware and software integration.”

Argonne Publishes AI for Science Report

Argonne National Lab has published a comprehensive AI for Science Report based on a series of Town Hall meetings held in 2019. Hosted by Argonne, Oak Ridge, and Berkeley National Laboratories, the four town hall meetings were attended by more than 1,000 U.S. scientists and engineers. The goal of the town hall series was to examine scientific opportunities in the areas of artificial intelligence (AI), Big Data, and high-performance computing (HPC) in the next decade, and to capture the big ideas, grand challenges, and next steps to realizing these opportunities.

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DOE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high-performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”
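
As a rough illustration of the kind of measurement such benchmarks standardize, the sketch below times one pass over synthetic data and reports throughput in samples per second. This is not the MLPerf-HPC harness; the function names and the do-nothing training step are placeholders:

```python
import time

def run_epoch(model_step, data, batch_size=64):
    """Run one pass over `data`, returning samples/sec for that epoch."""
    start = time.perf_counter()
    n = 0
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        model_step(batch)          # stand-in for forward/backward/update
        n += len(batch)
    return n / (time.perf_counter() - start)

# Toy usage: a trivial "training step" over synthetic samples.
if __name__ == "__main__":
    data = list(range(100_000))
    throughput = run_epoch(lambda batch: sum(batch), data)
    print(f"{throughput:,.0f} samples/sec")
```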

Podcast: Co-Design for Online Data Analysis and Reduction at Exascale

In this Let’s Talk Exascale podcast, Ian Foster from Argonne National Lab describes how the CODAR project at ECP is addressing the needs for data reduction, analysis, and management in the exascale era. “When compressing data produced by a simulation, the idea is to keep the parts that are scientifically interesting and toss those that are not. However, every application and, perhaps, every scientist, has a different definition of what ‘interesting’ means in that context. So, CODAR has developed a system called Z-checker to enable users to monitor the compression method.”
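
To make the idea concrete, here is a toy sketch of an error-bounded lossy compressor and the kind of fidelity check a tool like Z-checker automates. The quantizer and function names are invented for illustration and do not reflect Z-checker’s actual interface:

```python
import numpy as np

def quantize(data, abs_err):
    """Toy error-bounded lossy 'compressor': uniform quantization chosen
    so every reconstructed value is within abs_err of the original."""
    step = 2.0 * abs_err
    codes = np.round(data / step)        # what would actually be stored
    return codes * step                  # reconstruction

def check_bound(original, reconstructed, abs_err):
    """The kind of quality metric a checker tool reports: the maximum
    pointwise error, and whether the requested bound actually held."""
    max_err = float(np.max(np.abs(original - reconstructed)))
    return max_err, max_err <= abs_err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    field = rng.normal(size=1_000_000)   # stand-in for simulation output
    bound = 1e-3
    recon = quantize(field, bound)
    max_err, ok = check_bound(field, recon, bound)
    print(f"max pointwise error = {max_err:.2e}, bound respected: {ok}")
```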

Video: Overview of HPC Interconnects

Ken Raffenetti from Argonne gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”