Argonne Publishes AI for Science Report

Argonne National Lab has published a comprehensive AI for Science Report based on a series of town hall meetings held in 2019. Hosted by Argonne, Oak Ridge, and Berkeley National Laboratories, the four town hall meetings were attended by more than 1,000 U.S. scientists and engineers. The goal of the town hall series was to examine scientific opportunities in the areas of artificial intelligence (AI), Big Data, and high-performance computing (HPC) in the next decade, and to capture the big ideas, grand challenges, and next steps to realizing these opportunities.

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DOE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput, and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next-generation supercomputers.”
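At its core, this kind of benchmarking measures time-to-solution or training throughput for a fixed workload. A minimal sketch of the idea in PyTorch (illustrative only, not the MLPerf-HPC harness; the model and batch sizes here are arbitrary):

```python
# Minimal throughput-benchmark sketch (illustrative only; not the
# MLPerf-HPC harness). Times training steps of a toy model and
# reports samples/second, the basic quantity such benchmarks track.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

batch_size, steps = 64, 100
x = torch.randn(batch_size, 1024)          # synthetic input batch
y = torch.randint(0, 10, (batch_size,))    # synthetic labels

start = time.perf_counter()
for _ in range(steps):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
elapsed = time.perf_counter() - start
print(f"{batch_size * steps / elapsed:.1f} samples/sec")
```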

Podcast: Co-Design for Online Data Analysis and Reduction at Exascale

In this Let’s Talk Exascale podcast, Ian Foster from Argonne National Lab describes how the CODAR project within the Exascale Computing Project (ECP) is addressing the need for data reduction, analysis, and management in the exascale era. “When compressing data produced by a simulation, the idea is to keep the parts that are scientifically interesting and toss those that are not. However, every application and, perhaps, every scientist, has a different definition of what ‘interesting’ means in that context. So, CODAR has developed a system called Z-checker to enable users to monitor the compression method.”
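Monitoring a lossy compressor in this way amounts to quantifying the distortion it introduces. As a rough illustration of the kind of metrics Z-checker automates (this is not Z-checker’s actual interface), one can compare the original and decompressed arrays directly:

```python
# Illustrative fidelity check for lossy compression (the kind of
# analysis Z-checker automates; this is not Z-checker's API).
import numpy as np

def compression_metrics(original: np.ndarray, decompressed: np.ndarray):
    """Return max pointwise error and PSNR between two arrays."""
    err = np.abs(original - decompressed)
    max_err = err.max()
    value_range = original.max() - original.min()
    mse = np.mean(err ** 2)
    psnr = 20 * np.log10(value_range) - 10 * np.log10(mse)
    return max_err, psnr

# Toy example: "compress" by rounding to 2 decimal places.
data = np.random.rand(1_000_000).astype(np.float64)
lossy = np.round(data, 2)
max_err, psnr = compression_metrics(data, lossy)
print(f"max error = {max_err:.4f}, PSNR = {psnr:.1f} dB")
```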

Video: Overview of HPC Interconnects

Ken Raffenetti from Argonne gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

New Argonne etching technique could advance semiconductors

Researchers at Argonne National Laboratory have developed a new molecular layer etching technique that could potentially enable the manufacture of increasingly small microelectronics. “Our ability to control matter at the nanoscale is limited by the kinds of tools we have to add or remove thin layers of material. Molecular layer etching (MLE) is a tool to allow manufacturers and researchers to precisely control the way thin materials, at microscopic and nanoscales, are removed,” said lead author Matthias Young, an assistant professor at the University of Missouri and former postdoctoral researcher at Argonne.

Exascale Computing Project Announces Staff Changes Within Software Technology Group

The US Department of Energy’s Exascale Computing Project (ECP) has announced the following staff changes within the Software Technology group. Lois Curfman McInnes from Argonne will replace Jonathan Carter as Deputy Director for Software Technology. Meanwhile, Sherry Li is now team lead for Math Libraries. “We are fortunate to have such an incredibly seasoned, knowledgeable, and respected staff to help us lead the ECP efforts in bringing the nation’s first exascale computing software environment to fruition,” said Mike Heroux from Sandia National Labs.

Video: Data Parallel Deep Learning

Huihuo Zheng from Argonne National Laboratory gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”
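In data-parallel training, each worker holds a full copy of the model, processes its own shard of every batch, and averages gradients with the other workers before each update. A minimal sketch using Horovod (one common framework for this pattern; whether the talk covers Horovod specifically is an assumption):

```python
# Minimal data-parallel training sketch using Horovod (one common
# approach; the talk's exact framework is an assumption here).
# Launch with e.g.:  horovodrun -np 4 python train.py
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()  # one process per worker/rank

model = nn.Linear(1024, 10)
# Scale the learning rate by the number of workers, a standard
# heuristic when the effective global batch size grows.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are allreduce-averaged across ranks.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Start every rank from identical weights and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    x = torch.randn(32, 1024)            # each rank sees its own shard
    y = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()                     # gradients averaged across ranks
```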

Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale

Today HPE announced that the Argonne Leadership Computing Facility (ALCF) will deploy the new Cray ClusterStor E1000 as its parallel storage solution. The new collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock waves, physical genomics and more. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs to support emerging, data-intensive workloads.”

Video: The Convergence of Big Data and Large-scale Simulation

David Keyes from KAUST gave this talk at ATPESC 2019. “Analytics can provide to machine learning feature vectors for training. Machine learning, in turn, can impute missing data and provide detection and classification. The scientific opportunities are potentially enormous enough to overcome the inertia of the specialized communities that have gathered around each of these paradigms and spur convergence.”
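As one concrete instance of the imputation point above, a nearest-neighbor imputer fills gaps in a dataset using patterns learned from the complete rows. A generic scikit-learn sketch (not drawn from the talk; the data here are synthetic):

```python
# Illustrative example of ML-based imputation of missing data
# (a generic scikit-learn sketch, not taken from the talk).
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))
data[:, 3] = data[:, 0] * 2 + rng.normal(scale=0.1, size=200)  # correlated column

# Knock out 10% of the entries to simulate missing measurements.
mask = rng.random(data.shape) < 0.10
observed = data.copy()
observed[mask] = np.nan

# Fill each gap from the 5 most similar complete rows.
imputer = KNNImputer(n_neighbors=5)
filled = imputer.fit_transform(observed)

print("mean abs imputation error:", np.abs(filled[mask] - data[mask]).mean())
```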

Call for Applications: ATPESC 2020 Extreme-Scale Computing Training Program

The Argonne Training Program on Extreme-Scale Computing (ATPESC) has issued its Call for Applications. The event will take place July 26–August 7 in the Chicago area. “ATPESC provides intensive, two-week training on the key skills, approaches, and tools needed to carry out scientific computing research on the world’s most powerful supercomputers.”