New Argonne etching technique could advance semiconductors

Researchers at Argonne National Laboratory have developed a new molecular layer etching technique that could enable the manufacture of ever-smaller microelectronics. “Our ability to control matter at the nanoscale is limited by the kinds of tools we have to add or remove thin layers of material. Molecular layer etching (MLE) is a tool to allow manufacturers and researchers to precisely control the way thin materials, at microscopic and nanoscales, are removed,” said lead author Matthias Young, an assistant professor at the University of Missouri and former postdoctoral researcher at Argonne.

Exascale Computing Project Announces Staff Changes Within Software Technology Group

The US Department of Energy’s Exascale Computing Project (ECP) has announced the following staff changes within the Software Technology group: Lois Curfman McInnes from Argonne will replace Jonathan Carter as Deputy Director for Software Technology, and Sherry Li is now team lead for Math Libraries. “We are fortunate to have such an incredibly seasoned, knowledgeable, and respected staff to help us lead the ECP efforts in bringing the nation’s first exascale computing software environment to fruition,” said Mike Heroux of Sandia National Laboratories.

Video: Data Parallel Deep Learning

Huihuo Zheng from Argonne National Laboratory gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”
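The data-parallel approach the talk’s title refers to — replicating the model on every worker, sharding each batch, and averaging the workers’ gradients before a shared update — can be sketched in plain Python. The linear model, data, and function names below are illustrative only, not taken from the talk:

```python
# Minimal sketch of synchronous data-parallel training: each worker computes
# gradients on its own shard of the batch, the gradients are averaged (the
# "all-reduce" step), and every replica applies the same update.

def grad(w, b, shard):
    """Gradient of mean squared error for y = w*x + b on one data shard."""
    gw = gb = 0.0
    for x, y in shard:
        err = (w * x + b) - y
        gw += 2.0 * err * x / len(shard)
        gb += 2.0 * err / len(shard)
    return gw, gb

def data_parallel_step(w, b, batch, n_workers, lr=0.05):
    shards = [batch[i::n_workers] for i in range(n_workers)]  # shard the batch
    grads = [grad(w, b, s) for s in shards]                   # "parallel" phase
    gw = sum(g[0] for g in grads) / n_workers                 # all-reduce: average
    gb = sum(g[1] for g in grads) / n_workers
    return w - lr * gw, b - lr * gb                           # identical update on every replica

# Recover y = 2x + 1 from exact samples; w and b converge to 2 and 1.
data = [(x / 2.0, 2.0 * (x / 2.0) + 1.0) for x in range(8)]
w = b = 0.0
for _ in range(2000):
    w, b = data_parallel_step(w, b, data, n_workers=4)
```

Because the averaged shard gradients equal the full-batch gradient, the result matches single-worker training; real frameworks run the per-shard gradient computations on separate devices and implement the average with an all-reduce collective.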

Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale

Today HPE announced that the Argonne Leadership Computing Facility (ALCF) will deploy the new Cray ClusterStor E1000 as its parallel storage solution. The new collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock waves, physical genomics, and more. In HPE’s words: “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs and support emerging, data-intensive workloads.”

Video: The Convergence of Big Data and Large-scale Simulation

David Keyes from KAUST gave this talk at ATPESC 2019. “Analytics can provide machine learning with feature vectors for training. Machine learning, in turn, can impute missing data and provide detection and classification. The scientific opportunities are potentially enormous enough to overcome the inertia of the specialized communities that have gathered around each of these paradigms and spur convergence.”
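One convergence pattern Keyes mentions — a learned model imputing missing data — can be illustrated with a minimal sketch. The least-squares line fit and the data below are hypothetical stand-ins, not from the talk:

```python
# Illustrative sketch: a model fit to the observed entries of a time series
# (here, ordinary least squares for a line) fills in the missing entries.

def fit_line(points):
    """Ordinary least squares for y = a*t + c over (t, y) pairs."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    var = sum((t - mt) ** 2 for t, _ in points)
    cov = sum((t - mt) * (y - my) for t, y in points)
    a = cov / var
    return a, my - a * mt

def impute(series):
    """Replace None entries using a line fit to the observed entries."""
    observed = [(t, y) for t, y in enumerate(series) if y is not None]
    a, c = fit_line(observed)
    return [y if y is not None else a * t + c for t, y in enumerate(series)]

series = [1.0, 3.0, None, 7.0, None, 11.0]  # samples of y = 2t + 1 with gaps
filled = impute(series)
```

A real pipeline would use a learned model rather than a line fit, but the pattern is the same: train on what the simulation or experiment produced, then fill in what it did not.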

Call for Applications: ATPESC 2020 Extreme-Scale Computing Training Program

The Argonne Training Program on Extreme-Scale Computing (ATPESC) has issued its Call for Applications. The event will take place July 26–August 7, 2020, in the Chicago area. “ATPESC provides intensive, two-week training on the key skills, approaches, and tools needed to carry out scientific computing research on the world’s most powerful supercomputers.”

Michela Taufer presents: Scientific Applications and Heterogeneous Architectures

Michela Taufer from UT Knoxville gave this talk at ATPESC 2019. “This talk discusses two emerging trends in computing (i.e., the convergence of data generation and analytics, and the emergence of edge computing) and how these trends can impact heterogeneous applications. This talk presents case studies of heterogeneous applications in precision medicine and precision farming that expand scientists’ workflows beyond the supercomputing center and shed our exclusive reliance on large-scale simulations for the sake of scientific discovery.”

Podcast: A Codebase for Deep Learning Supercomputers to Fight Cancer

In this Let’s Talk Exascale podcast, Gina Tourassi from ORNL describes how the CANDLE project is setting the stage to fight cancer with the power of exascale computing. “Basically, as we are leveraging supercomputing and artificial intelligence to accelerate cancer research, we are also seeing how we can drive the next generation of supercomputing.”

SW/HW Co-design for Near-term Quantum Computing

Yunong Shi from the University of Chicago gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing provides intensive, two-week training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: FPGAs and Machine Learning

James Moawad and Greg Nash from Intel gave this talk at ATPESC 2019. “FPGAs are a natural choice for implementing neural networks, as they can handle different algorithms using compute, logic, and memory resources in the same device. They can deliver faster performance than competing implementations because the user can hard-code operations into the hardware. Software developers can use the OpenCL C device-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design.”