“In this keynote, Al Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. The ECP goals are intended to enable the delivery of capable exascale computers in 2022 and one early exascale system in 2021, which will foster a rich exascale ecosystem and work toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA.”
In this video, Ruben Cruz Garcia from the Earth Sciences department at BSC describes how supercomputing is key to his research. He also explains what he would do if he had unlimited access to a fully operational exascale computer.
In this podcast, the Radio Free HPC team looks at some of the top High Performance Computing stories from this week. First up, we look at Europe’s effort to lead HPC in the next decade. After that, we look at why small companies like Scalable Informatics have such a hard time surviving in the HPC marketplace.
“HPC is moving towards its next frontier – more than 100 times faster than the fastest machines currently available in Europe,” said Andrus Ansip, European Commission Vice-President for the Digital Single Market. “But not all EU countries have the capacity to build and maintain such infrastructure, or to develop such technologies on their own. If we stay dependent on others for this critical resource, then we risk getting technologically ‘locked’, delayed or deprived of strategic know-how. Europe needs integrated world-class capability in supercomputing to be ahead in the global race.”
On Thursday, the U.S.-China Economic & Security Review Commission (USCC) held a hearing on the current and potential future state of supercomputing innovation worldwide, with an emphasis on China’s position on the global stage relative to the USA. Addison Snell from Intersect360 Research provided this testimony in answer to USCC’s questions for the hearing.
“Back in 2013 I wrote the following blog expressing my opinion that I doubted we would reach Exascale before 2020. However, recently it was announced that the world’s first Exascale supercomputer prototype will be ready by the end of 2017 (recently pushed back to early 2018), created by the Chinese. I did some digging and wanted to share my thoughts on the news.”
The Exascale Computing Project (ECP) has selected its fifth Co-Design Center to focus on Graph Analytics — combinatorial (graph) kernels that play a crucial enabling role in many data analytic computing application areas as well as several ECP applications. Initially, the work will be a partnership among Pacific Northwest National Laboratory (PNNL), Lawrence Berkeley National Laboratory, Sandia National Laboratories, and Purdue University.
Today the OpenFabrics Alliance (OFA) published the session abstracts for its 13th Annual OFA Workshop. Sponsored by Intel, the workshop takes place March 27-31 in Austin, Texas. “The workshop will include more than 50 sessions covering a variety of critical networking topics delivered by industry experts from around the world. Additionally, the OFA has announced that Al Geist of Oak Ridge National Laboratory (ORNL) will deliver a workshop keynote address on the impact of the Exascale Computing Project. The workshop program is designed to educate attendees and encourage lively exchanges among OFA members, developers, and users who share a vested interest in high performance networks.”
The Department of Energy’s Oak Ridge National Laboratory has announced the latest release of its Adaptable I/O System (ADIOS), a middleware that speeds up scientific simulations on parallel computing resources such as the laboratory’s Titan supercomputer by making input/output operations more efficient. “As we approach the exascale, there are many challenges for ADIOS and I/O in general,” said Scott Klasky, scientific data group leader in ORNL’s Computer Science and Mathematics Division. “We must reduce the amount of data being processed and program for new architectures. We also must make our I/O frameworks interoperable with one another, and version 1.11 is the first step in that direction.”
In this video from the 2017 HPC Advisory Council Stanford Conference, Subhasish Mitra from Stanford presents: Beyond the Moore’s Law Cliff: The Next 1000X. Professor Subhasish Mitra directs the Robust Systems Group in the Department of Electrical Engineering and the Department of Computer Science at Stanford University, where he is the Chambers Faculty Scholar of Engineering. Prior to joining Stanford, he was a Principal Engineer at Intel Corporation. He received his Ph.D. in Electrical Engineering from Stanford University.