Video: Profiling Python Workloads with Intel VTune Amplifier

Paulius Velesko from Intel gave this talk at the ALCF Many-Core Developer Sessions. “This talk covers efficient profiling techniques that can help to dramatically improve the performance of code by identifying CPU and memory bottlenecks. We will demonstrate how to profile a Python application using Intel VTune Amplifier, a full-featured profiling tool.”
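The talk itself is the reference here; as a rough sketch of the kind of workload such a profile targets, the hypothetical script below (workload.py; the file name and the VTune invocation in the comments are assumptions, not taken from the talk) pairs a loop-heavy Python hotspot with a vectorized NumPy equivalent so a hotspots collection has something obvious to surface.

# workload.py -- a deliberately CPU-bound toy workload to profile.
# Hypothetical usage with the VTune Amplifier command-line collector
# (collector name and flags may vary by VTune version):
#   amplxe-cl -collect hotspots -result-dir vtune_hotspots -- python3 workload.py
# Newer releases expose the same collection through the `vtune` command.

import numpy as np

def naive_distance_matrix(points):
    """Pairwise distances with explicit Python loops (an obvious CPU hotspot)."""
    n = len(points)
    dists = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dists[i, j] = np.sqrt(np.sum((points[i] - points[j]) ** 2))
    return dists

def vectorized_distance_matrix(points):
    """The same computation expressed as NumPy array operations."""
    diffs = points[:, None, :] - points[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

if __name__ == "__main__":
    pts = np.random.rand(800, 3)
    naive_distance_matrix(pts)       # dominates the profile's hotspot list
    vectorized_distance_matrix(pts)  # for comparison in the VTune timeline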

Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale

Today HPE announced that ALCF will deploy the new Cray ClusterStor E1000 as its parallel storage solution. The new collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock-waves, physical genomics and more. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs to support emerging, data-intensive workloads.”

Call for Proposals: ALCF Data Science Program

Argonne is now accepting proposals for the ALCF Data Science Program (ADSP) through July 1, 2019. “The ADSP open call provides an opportunity for researchers to submit proposals for projects that will employ advanced statistical, machine learning, and artificial intelligence techniques to gain insights into massive datasets produced by experimental, simulation, or observational methods.”

Argonne Looks to Singularity for HPC Code Portability

Over at Argonne, Nils Heinonen writes that researchers are using the open source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere. “Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced.”

Data Science Program at Argonne Looks to Machine Learning for New Breakthroughs

Over at Argonne, Nils Heinonen writes about four new projects for the ALCF Data Science Program that will utilize machine learning, deep learning, and other artificial intelligence methods to enable data-driven discoveries across scientific disciplines. “Each project intends to implement novel machine learning techniques; some will integrate these methods with simulations and experiments, while others will pioneer uncertainty quantification and visualization to aid in the interpretation of deep neural networks.”

ALCF – The March toward Exascale

David E. Martin gave this talk at the HPC User Forum. “In 2021, the Argonne Leadership Computing Facility (ALCF) will deploy Aurora, a new Intel-Cray system. Aurora will be capable of over 1 exaflops. It is expected to have over 50,000 nodes and over 5 petabytes of total memory, including high bandwidth memory. The Aurora architecture will enable scientific discoveries using simulation, data and learning.”

Argonne is Supercomputing Big Data from the Large Hadron Collider

Over at Argonne, Madeleine O’Keefe writes that the Lab is supporting CERN researchers working to interpret Big Data from the Large Hadron Collider (LHC), the world’s largest particle accelerator. The LHC is expected to output 50 petabytes of data this year alone, the equivalent of nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.

Evolving Scientific Computing at Argonne

Over at Argonne, John Spizzirri writes that the Lab has helped advance the boundaries of high-performance computing technologies through the Argonne Leadership Computing Facility (ALCF). “Realizing the promise of exascale computing, the ALCF is developing the framework by which to harness this immense computing power to an advanced combination of simulation, data analysis, and machine learning. This effort will undoubtedly reframe the way science is conducted, and do so on a global scale.”

DOE Awards 1.5 Billion Hours of Computing Time at Argonne

The ASCR Leadership Computing Challenge has awarded 20 projects for a total of 1.5 billion core-hours at Argonne to pursue challenging, high-risk, high-payoff simulations. “The Advanced Scientific Computing Research (ASCR) program, which manages some of the world’s most powerful supercomputing facilities, selects projects every year in areas directly related to the DOE mission for broadening the community of researchers capable of using leadership computing resources, and serving national interests for the advancement of scientific discovery, technological innovation, and economic competitiveness.”

Argonne Helps to Develop All-New Lithium-Air Batteries

Scientists at Argonne are helping to develop better batteries for our electronic devices. The goal is to develop beyond-lithium-ion batteries that are even more powerful, cheaper, safer and longer lived. “The energy storage capacity was about three times that of a lithium-ion battery, and five times should be easily possible with continued research. This first demonstration of a true lithium-air battery is an important step toward what we call beyond-lithium-ion batteries.”