
Porting a Particle-in-Cell Code to Exascale Architectures

By Nils Heinonen on behalf of the Argonne Leadership Computing Facility. As part of a series aimed at sharing best practices in preparing applications for Aurora, we highlight researchers’ efforts to optimize codes to run efficiently on graphics processing units. One recommendation: take advantage of upgrades being made to high-level, non-machine-specific libraries and programming models. Developed in […]
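To make that advice concrete, here is a minimal sketch (not taken from the article) of a 1D particle push written in standard SYCL, one such high-level, non-machine-specific programming model; the uniform field, time step, and all names are invented for illustration:

    // Minimal 1D particle push in SYCL (illustrative sketch; values are placeholders).
    #include <sycl/sycl.hpp>
    #include <vector>

    int main() {
        constexpr size_t nParticles = 1 << 20;
        constexpr float dt = 1.0e-3f;   // time step (assumed)
        constexpr float qm = 1.0f;      // charge-to-mass ratio (assumed)
        constexpr float E  = 0.1f;      // placeholder uniform electric field

        std::vector<float> x(nParticles, 0.0f), v(nParticles, 1.0f);
        sycl::queue q;  // default device: a GPU when one is available

        {
            sycl::buffer<float> X(x.data(), sycl::range<1>(nParticles));
            sycl::buffer<float> V(v.data(), sycl::range<1>(nParticles));
            q.submit([&](sycl::handler& h) {
                sycl::accessor xs(X, h, sycl::read_write);
                sycl::accessor vs(V, h, sycl::read_write);
                h.parallel_for(sycl::range<1>(nParticles), [=](sycl::id<1> i) {
                    vs[i] += qm * E * dt;  // accelerate each particle
                    xs[i] += vs[i] * dt;   // then drift it
                });
            });
        }  // buffers go out of scope here and sync results back to x and v
        return 0;
    }

Because nothing in the kernel is tied to one vendor's hardware, the same source can in principle be retargeted to different GPUs by switching compiler back ends, which is the portability point the series emphasizes.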

Preparing for Exascale: ALCF’s Aurora Early Science Program and Visualizing Cancer’s Spread

Scientists are preparing a cancer modeling study to run on Argonne’s upcoming Aurora supercomputer before it goes online in 2022. The U.S. Department of Energy’s (DOE) Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of […]

NERSC, ALCF, Codeplay Partner on SYCL GPU Compiler

The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL) and Argonne Leadership Computing Facility (ALCF) are working with Codeplay Software to enhance the LLVM SYCL GPU compiler capabilities for Nvidia A100 GPUs. The collaboration is designed to help NERSC and ALCF users, along with the HPC community in general, produce […]
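As a rough illustration (not from the announcement) of what this tool chain enables, standard SYCL code such as the SAXPY sketch below can be compiled for Nvidia GPUs; with the CUDA backend enabled, a build line along the lines of clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda saxpy.cpp is typical, though exact flags vary by compiler version:

    // SAXPY (y = a*x + y) in standard SYCL; illustrative sketch only.
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        constexpr size_t N = 1024;
        constexpr float a = 2.0f;
        std::vector<float> x(N, 1.0f), y(N, 3.0f);

        sycl::queue q;  // default device (e.g., an A100 when built for the CUDA backend)
        {
            sycl::buffer<float> X(x.data(), sycl::range<1>(N));
            sycl::buffer<float> Y(y.data(), sycl::range<1>(N));
            q.submit([&](sycl::handler& h) {
                sycl::accessor xs(X, h, sycl::read_only);
                sycl::accessor ys(Y, h, sycl::read_write);
                h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                    ys[i] = a * xs[i] + ys[i];  // each work-item handles one element
                });
            });
        }  // buffer destruction copies the result back into y
        std::cout << "y[0] = " << y[0] << "\n";  // expect 5
        return 0;
    }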

Argonne’s Rick Stevens named ACM Fellow

Rick Stevens has been named a Fellow of the Association for Computing Machinery (ACM). Stevens is associate laboratory director of the Computing, Environment and Life Sciences directorate at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and a professor of computer science at the University of Chicago. Stevens was honored “for outstanding contributions in […]

At SC20: Intel Provides Aurora Update as Argonne Developers Use Intel Xe-HP GPUs in Lieu of ‘Ponte Vecchio’

In an update to yesterday’s “Bridge to ‘Ponte Vecchio’” story, today we interviewed Jeff McVeigh, Intel VP/GM of data center XPU products and solutions, who updated us on developments at Intel with direct bearing on Aurora, including the projected delivery of Ponte Vecchio (unchanged); Aurora’s deployment (sooner than forecast yesterday by industry analyst firm Hyperion Research); Intel’s “XPU” cross-architecture strategy and its impact on Aurora application development work ongoing at Argonne; and the upcoming release next month of the first production version of oneAPI, Intel’s cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators.

Video: Profiling Python Workloads with Intel VTune Amplifier

Paulius Velesko from Intel gave this talk at the ALCF Many-Core Developer Sessions. “This talk covers efficient profiling techniques that can help to dramatically improve the performance of code by identifying CPU and memory bottlenecks. We will demonstrate how to profile a Python application using Intel VTune Amplifier, a full-featured profiling tool.”

Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale

Today HPE announced that ALCF will deploy the new Cray ClusterStor E1000 as its parallel storage solution. The new collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock-waves, physical genomics and more. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs to support emerging, data-intensive workloads.”

Call for Proposals: ALCF Data Science Program

Argonne is now accepting proposals for the ALCF Data Science Program (ADSP) through July 1, 2019. “The ADSP open call provides an opportunity for researchers to submit proposals for projects that will employ advanced statistical, machine learning, and artificial intelligence techniques to gain insights into massive datasets produced by experimental, simulation, or observational methods.”

Argonne Looks to Singularity for HPC Code Portability

Over at Argonne, Nils Heinonen writes that researchers are using the open source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere. “Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced.”

Data Science Program at Argonne Looks to Machine Learning for New Breakthroughs

Over at Argonne, Nils Heinonen writes that four new ALCF Data Science Program projects will utilize machine learning, deep learning, and other artificial intelligence methods to enable data-driven discoveries across scientific disciplines. “Each project intends to implement novel machine learning techniques; some will integrate these methods with simulations and experiments, while others will pioneer uncertainty quantification and visualization to aid in the interpretation of deep neural networks.”