Call for Applications: ATPESC 2020 Extreme-Scale Computing Training Program

The Argonne Training Program on Extreme-Scale Computing (ATPESC) has issued its Call for Applications. The event will take place July 26–August 7, 2020, in the Chicago area. “ATPESC provides intensive, two-week training on the key skills, approaches, and tools needed to carry out scientific computing research on the world’s most powerful supercomputers.”

Michela Taufer presents: Scientific Applications and Heterogeneous Architectures

Michela Taufer from UT Knoxville gave this talk at ATPESC 2019. “This talk discusses two emerging trends in computing (i.e., the convergence of data generation and analytics, and the emergence of edge computing) and how these trends can impact heterogeneous applications. It presents case studies of heterogeneous applications in precision medicine and precision farming that expand scientists’ workflows beyond the supercomputing center and move beyond exclusive reliance on large-scale simulations in the pursuit of scientific discovery.”

Podcast: A Codebase for Deep Learning Supercomputers to Fight Cancer

In this Let’s Talk Exascale podcast, Gina Tourassi from ORNL describes how the CANDLE project is setting the stage to fight cancer with the power of Exascale computing. “Basically, as we are leveraging supercomputing and artificial intelligence to accelerate cancer research, we are also seeing how we can drive the next generation of supercomputing.”

SW/HW co-design for near-term quantum computing

Yunong Shi from the University of Chicago gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing provides intensive, two-week training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: FPGAs and Machine Learning

James Moawad and Greg Nash from Intel gave this talk at ATPESC 2019. “FPGAs are a natural choice for implementing neural networks as they can handle different algorithms in computing, logic, and memory resources in the same device. They offer faster performance compared to competing implementations because the user can hard-code operations into the hardware. Software developers can use the OpenCL device C-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design.”
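
For readers who have not used OpenCL, the sketch below (illustrative only, not from the talk; error handling is omitted and the kernel is a toy vector add) shows the standard host-API pattern for offloading work to an accelerator such as an FPGA board:

    /* Illustrative sketch: offloading a vector-add kernel to an accelerator
     * (e.g., an FPGA board) through the standard OpenCL C API. */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void vadd(__global const float *a, __global const float *b,"
        "                   __global float *c) {"
        "  int i = get_global_id(0); c[i] = a[i] + b[i]; }";

    int main(void) {
        enum { N = 1024 };
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        /* An FPGA board appears as an ACCELERATOR device when its OpenCL
         * runtime is installed; fall back to any device otherwise. */
        if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_ACCELERATOR, 1, &dev, NULL) != CL_SUCCESS)
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* FPGA toolchains typically compile the kernel offline into a bitstream
         * (clCreateProgramWithBinary); building from source is shown for brevity. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
        printf("c[42] = %.1f\n", c[42]); /* expect 126.0 */
        return 0;
    }

The portability point is that the same host-side calls work whether the device is a GPU or an FPGA; only the kernel compilation step changes.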

Video: The Parallel Computing Revolution Is Only Half Over

In this video from ATPESC 2019, Rob Schreiber from Cerebras Systems looks back at historical computing advancements, Moore’s Law, and what happens next. “A recent report by OpenAI showed that, between 2012 and 2018, the compute used to train the largest models increased by 300,000X. In other words, AI computing is growing 25,000X faster than Moore’s law at its peak. To meet the growing computational requirements of AI, Cerebras has designed and manufactured the largest chip ever built.”
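
As a rough back-of-the-envelope check on that figure (the arithmetic here is ours, not from the talk): 300,000X growth over the roughly six-year window implies a compute doubling time of about

    72 months / log2(300,000) ≈ 72 / 18.2 ≈ 4 months,

compared with the roughly 24-month doubling commonly associated with Moore’s law.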

Altair PBS Works Steps Up to Exascale and the Cloud

In this video from SC19, Sam Mahalingam from Altair describes how the company is enhancing PBS Works software to ease the migration of HPC workloads to the Cloud. “Argonne National Laboratory has teamed with Altair to implement a new scheduling system that will be employed on the Aurora supercomputer, slated for delivery in 2021. PBS Works runs big — 50,000 nodes in one cluster, 10,000,000 jobs in a queue, and 1,000 concurrent active users.”

Theta and the Future of Accelerator Programming at Argonne

Scott Parker from Argonne gave this talk at ATPESC 2019. “Designed in collaboration with Intel and Cray, Theta is a 6.92-petaflops (Linpack) supercomputer based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.”

Video: I/O Architectures and Technology

Glenn Lockwood from NERSC gave this talk at ATPESC 2019. “Systems are very different, but the APIs you use shouldn’t be. Understanding performance is easier when you know what’s behind the API. What really happens when you read or write some data?”
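
A minimal sketch of that point (illustrative only, not from the talk): the POSIX calls below look identical on a laptop and on a parallel file system such as Lustre or GPFS, but what they cost depends entirely on the caches, burst buffers, and storage servers sitting behind the API.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char block[] = "checkpoint data\n";
        int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;
        /* Usually returns as soon as the data is copied into a kernel or
         * client-side cache, not when it reaches storage. */
        write(fd, block, sizeof block - 1);
        /* Forces the data out; the cost depends on what is behind the API:
         * a local SSD, a burst buffer, or remote file system servers. */
        fsync(fd);
        close(fd);
        return 0;
    }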

The Coming Age of Extreme Heterogeneity in HPC

Jeffrey Vetter from ORNL gave this talk at ATPESC 2019. “In this talk, I’m going to cover some of the high-level trends guiding our industry. Moore’s Law as we know it is definitely ending for either economic or technical reasons by 2025. Our community must aggressively explore emerging technologies now!”