When the DOE’s three pre-exascale supercomputers come online, all of them will be running an optimized version of the XGC dynamic fusion code. Developed by a team at the DOE’s Princeton Plasma Physics Laboratory (PPPL), XGC was one of only three codes, out of more than 30 science and engineering programs, selected to participate in Early Science programs on all three new supercomputers, which will serve as forerunners for even more powerful exascale machines slated to begin operating in the United States in the early 2020s.
In his keynote, Mr. Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. In August 2016, the Exascale Computing Project (ECP) was approved to drive a major lift in the trajectory of U.S. High Performance Computing (HPC). The ECP’s goals are to enable the delivery of capable exascale computers in 2022, and of one early exascale system in 2021, fostering a rich exascale ecosystem and helping to ensure continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA.
DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
“China and the United States have been in the race to develop the most capable supercomputer. China has announced that its exascale computer could be released sooner than originally planned. Steve Conway, VP for high performance computing at IDC, joins Federal Drive with Tom Temin for analysis.”
Gilad Shainer moderated this panel discussion on Exascale Computing at the Stanford HPC Conference. “The creation of a capable exascale ecosystem will have profound effects on the lives of Americans, improving our national security, economic competitiveness, and scientific capabilities. The exponential increase in computational power enabled by exascale will fuel a vast range of breakthroughs and accelerate discoveries in national security, medicine, earth sciences and many other fields.”
In this video, Dr. Carl J. Williams, Deputy Director of the Physical Measurement Laboratory at the National Institute of Standards and Technology within the United States Department of Commerce, reviews the National Strategic Computing Initiative. Issued by Executive Order, the initiative aims to maximize benefits of high-performance computing research, development and deployment.
“I’m pleased to have the opportunity to lead this important Council,” said Dr. J. Michael McQuade of United Technologies Corporation, who will serve as the first Chair of the ECP Industry Council. “Exascale level computing will help industry address ever more complex, competitively important problems, ones which are beyond the reach of today’s leading edge computing systems. We compete globally for scientific, technological and engineering innovations. Maintaining our lead at the highest level of computational capability is essential for our continued success.”
Ian Foster and other researchers in CODAR are working to close the gap between computation speed and the speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information. “Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” said Foster. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”
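The idea of reducing data without losing important information can be sketched with error-bounded quantization, one common family of techniques behind scientific lossy compressors. This is a minimal illustrative sketch, not CODAR’s actual method or API; the function names and the error bound `eps` are assumptions for the example:

```python
# Hypothetical sketch of error-bounded lossy reduction (not CODAR's actual
# implementation): quantize each value to an integer bin of width 2*eps,
# guaranteeing that every reconstructed value is within eps of the original.

def compress(values, eps):
    """Map floats to small integer bin indices (bin width 2*eps)."""
    return [round(v / (2 * eps)) for v in values]

def decompress(bins, eps):
    """Reconstruct approximate values; absolute error is bounded by eps."""
    return [b * 2 * eps for b in bins]

data = [0.013, 1.274, 1.279, 2.502]   # stand-in for simulation output
eps = 0.01                            # user-chosen accuracy requirement
restored = decompress(compress(data, eps), eps)

# Every value comes back within the stated error bound.
assert all(abs(a - b) <= eps for a, b in zip(data, restored))
```

The integer bin indices are far more compressible than raw floating-point values (nearby values collapse into identical or adjacent bins), which is why such schemes can shrink output volume while preserving the accuracy the scientist actually needs.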
“Nanomagnetic devices may allow memory and logic functions to be combined in novel ways. And newer, perhaps more promising device concepts continue to emerge. At the same time, research in new architectures has also grown. Indeed, at the leading edge, researchers are beginning to focus on co-optimization of new devices and new architectures. Despite the growing research investment, the landscape of promising research opportunities outside the ‘FET devices and circuits box’ is still largely unexplored.”
Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data). “We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”