Ian Foster and other researchers in CODAR are working to close the gap between computation speed and the speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information. “Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” said Foster. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”
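To make the idea concrete, below is a minimal sketch of one simple form of selective reduction: error-bounded quantization, which shrinks data while guaranteeing that no reconstructed value drifts from the original by more than a user-chosen tolerance. The function names and the scheme itself are illustrative assumptions, not CODAR's actual methods.

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Quantize each value to a multiple of 2*eps, guaranteeing that the
 * reconstructed value differs from the original by at most eps.
 * Hypothetical illustration, not CODAR's actual algorithm. */
static void quantize(const double *in, int64_t *codes, size_t n, double eps) {
    for (size_t i = 0; i < n; i++)
        codes[i] = (int64_t)llround(in[i] / (2.0 * eps));
}

static void dequantize(const int64_t *codes, double *out, size_t n, double eps) {
    for (size_t i = 0; i < n; i++)
        out[i] = codes[i] * 2.0 * eps;
}

int main(void) {
    double data[4] = {1.001, 1.002, 3.14159, 2.71828};
    int64_t codes[4];
    double recon[4];
    double eps = 0.01;  /* user-chosen absolute error bound */

    quantize(data, codes, 4, eps);
    dequantize(codes, recon, 4, eps);
    for (int i = 0; i < 4; i++)
        printf("%.5f -> %.5f (error %.5f)\n", data[i], recon[i],
               fabs(data[i] - recon[i]));
    return 0;
}
```

The small integer codes this produces compress far better with standard entropy coders than raw doubles do, which is the kind of trade a data-reduction pipeline can make when storing everything at full fidelity is off the table.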
In this silent video from the Blue Brain Project at SC16, 865 segments from a rodent brain are simulated with isosurfaces generated from Allen Brain Atlas image stacks. For this INCITE project, researchers from École Polytechnique Fédérale de Lausanne will use the Mira supercomputer at Argonne to advance the understanding of the fundamental mechanisms of the brain’s neocortex.
Today the PASC17 Conference announced that this year’s plenary presentation will be entitled “Unlocking the Mysteries of the Universe with Supercomputers.” The plenary presentation will be given by Katrin Heitmann, Senior Member of the Computation Institute at the University of Chicago and the Kavli Institute for Cosmological Physics, USA.
“As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or shorter courses. The 2017 ATPESC program will be held at a new location this year: the Q Center, one of the largest conference facilities in the Midwest, located just outside Chicago.”
“For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling.”
“Argonne National Laboratory has created a process to assist in moving large applications to a new system. Its current HPC system, Mira, will give way to the next-generation system, Aurora, which is part of the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) joint procurement. Since Aurora contains technology that was not available in Mira, the challenge is to give scientists and developers access to some of the new technology well before the new system goes online. This allows for a more productive environment once the full-scale new system is up.”
Today the Department of Energy’s Exascale Computing Project (ECP) announced that it has selected four co-design centers as part of a four-year, $48 million funding award. The first year is funded at $12 million, to be allocated evenly among the four award recipients ($3 million each). “By targeting common patterns of computation and communication, known as ‘application motifs’, we are confident that these ECP co-design centers will knock down key performance barriers and pave the way for applications to exploit all that capable exascale has to offer.”
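As an illustration of what an application motif looks like, here is a minimal sketch of one motif commonly cited in co-design discussions: a structured-grid stencil, where each point is updated from its nearest neighbors. The code is a generic textbook example, not taken from any ECP center.

```c
#include <stdio.h>

#define N 10

/* One Jacobi sweep of a 3-point stencil: each interior point becomes
 * the average of itself and its two neighbors. Structured-grid
 * stencils like this are one commonly cited computational motif. */
void stencil_sweep(const double *in, double *out, int n) {
    for (int i = 1; i < n - 1; i++)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
    out[0] = in[0];          /* boundary values pass through unchanged */
    out[n - 1] = in[n - 1];
}

int main(void) {
    double a[N] = {0}, b[N];
    a[N / 2] = 1.0;          /* a spike that diffuses outward */
    stencil_sweep(a, b, N);
    for (int i = 0; i < N; i++)
        printf("%.3f ", b[i]);
    printf("\n");
    return 0;
}
```

Because the same neighbor-exchange pattern recurs across climate, combustion, and cosmology codes, optimizing its computation and communication once in a co-design center can benefit many applications at a stroke.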
Researchers and staff from the U.S. Department of Energy’s national laboratories will showcase some of DOE’s best computing and networking innovations and techniques at SC16 in Salt Lake City. “Computational scientists working for various DOE laboratories have been involved in the conference since its 1988 beginnings, and this year’s event is no different. Experts from 14 national laboratories will be sharing a booth featuring speakers, presentations, demonstrations, discussions and simulations.”
Today’s operating systems were not developed with the immense complexity of exascale in mind. Now, researchers at Argonne National Laboratory are preparing for HPC’s next wave, where the operating system will have to assume new roles in synchronizing and coordinating tasks. “The Argo team is making several of its experimental OS modifications available. Beckman expects to test them on large machines at Argonne and elsewhere in the next year.”
Peter Thompson from Rogue Wave Software presented this talk at the Argonne Training Program on Extreme-Scale Computing. “Purpose-built for applications using hundreds or thousands of cores, TotalView for HPC provides a set of tools that give scientific and academic developers unprecedented control over processes and thread execution, along with deep visibility into program states and data. By allowing the simultaneous debugging of many processes and threads in a single window, you get complete control over program execution: running, stepping, and halting line-by-line through code within a single thread or within arbitrary groups of processes or threads.”
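For context, the sketch below is the kind of small MPI program a developer might attach TotalView to. The launch command in the comment follows the classic `totalview mpirun -a` pattern, but the exact invocation depends on the MPI implementation and TotalView version at your site, so treat it as an assumption and consult the TotalView documentation.

```c
#include <mpi.h>
#include <stdio.h>

/* A minimal MPI program of the kind one might step through in TotalView.
 * A classic (version- and site-dependent) launch is roughly:
 *   totalview mpirun -a -np 4 ./hello
 * where -a passes the remaining arguments through to mpirun. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

With all four ranks under the debugger, a developer can halt every process at the same line, inspect per-rank values of `rank` and `size`, and step individual threads or arbitrary groups of processes, which is the single-window workflow the excerpt describes.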