Registration Open for Argonne Training Program on Extreme-Scale Computing

“As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or shorter courses. The 2017 ATPESC program will be held at a new location: the Q Center, one of the largest conference facilities in the Midwest, located just outside Chicago.”

Understanding Cities through Computation, Data Analytics, and Measurement

“For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling.”

Building for the Future Aurora Supercomputer at Argonne

“Argonne National Laboratory has created a process to assist in moving large applications to a new system. Its current HPC system, Mira, will give way to the next-generation system, Aurora, which is part of the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) joint procurement. Since Aurora contains technology that was not available in Mira, the challenge is to give scientists and developers access to some of the new technology well before the new system goes online. This allows for a more productive environment once the full-scale new system is up.”

Exascale Computing Project Announces $48 Million to Establish Four Exascale Co-Design Centers

Today the Department of Energy’s Exascale Computing Project (ECP) announced that it has selected four co-design centers as part of a four-year, $48 million funding award. The first year is funded at $12 million, to be allocated evenly among the four award recipients. “By targeting common patterns of computation and communication, known as ‘application motifs’, we are confident that these ECP co-design centers will knock down key performance barriers and pave the way for applications to exploit all that capable exascale has to offer.”

DOE to Showcase Leadership in HPC at SC16

Researchers and staff from the U.S. Department of Energy’s national laboratories will showcase some of DOE’s best computing and networking innovations and techniques at SC16 in Salt Lake City. “Computational scientists working for various DOE laboratories have been involved in the conference since its 1988 beginnings, and this year’s event is no different. Experts from 14 national laboratories will be sharing a booth featuring speakers, presentations, demonstrations, discussions and simulations.”

Argo Project Developing OS Technology for Exascale

Today’s operating systems were not developed with the immense complexity of exascale in mind. Now, researchers at Argonne National Laboratory are preparing for HPC’s next wave, in which the operating system will have to assume new roles in synchronizing and coordinating tasks. “The Argo team is making several of its experimental OS modifications available. Beckman expects to test them on large machines at Argonne and elsewhere in the next year.”

TotalView: Debugging from Desktop to Supercomputer

Peter Thompson from Rogue Wave Software presented this talk at the Argonne Training Program on Extreme-Scale Computing. “Purpose-built for applications using hundreds or thousands of cores, TotalView for HPC provides a set of tools that give scientific and academic developers unprecedented control over processes and thread execution, along with deep visibility into program states and data. By allowing the simultaneous debugging of many processes and threads in a single window, you get complete control over program execution: running, stepping, and halting line-by-line through code within a single thread or within arbitrary groups of processes or threads.”

Jack Dongarra Presents: Adaptive Linear Solvers and Eigensolvers

Jack Dongarra presented this talk at the Argonne Training Program on Extreme-Scale Computing. “ATPESC provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: Introduction to Parallel Supercomputing

Pete Beckman presented this talk at the Argonne Training Program on Extreme-Scale Computing. “Here is the Parallel Platform Paradox: The average time required to implement a moderate-sized application on a parallel computer architecture is equivalent to the half-life of the latest parallel supercomputer.”

Video: Experiences in eXtreme Scale HPC

In this video from the 2016 Argonne Training Program on Extreme-Scale Computing, Mark Miller from LLNL leads a panel discussion on Experiences in eXtreme Scale HPC with FASTMath team members. “The FASTMath SciDAC Institute is developing and deploying scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborating with U.S. Department of Energy (DOE) domain scientists to ensure the usefulness and applicability of our work. The focus of our work is strongly driven by the requirements of DOE application scientists who work extensively with mesh-based, continuum-level models or particle-based techniques.”