In this video, Dr. Carl J. Williams, Deputy Director of the Physical Measurement Laboratory at the National Institute of Standards and Technology within the United States Department of Commerce, reviews the National Strategic Computing Initiative. Issued by Executive Order, the initiative aims to maximize the benefits of high-performance computing research, development, and deployment.
“I’m pleased to have the opportunity to lead this important Council,” said Dr. J. Michael McQuade of United Technologies Corporation, who will serve as the first Chair of the ECP Industry Council. “Exascale-level computing will help industry address ever more complex, competitively important problems, ones which are beyond the reach of today’s leading-edge computing systems. We compete globally for scientific, technological, and engineering innovations. Maintaining our lead at the highest level of computational capability is essential for our continued success.”
Ian Foster and other researchers in CODAR are working to close the gap between computation speed and the comparatively limited speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information. “Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” said Foster. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”
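To make the idea concrete, here is a minimal sketch of error-bounded quantization, the principle behind lossy scientific-data reducers of the kind CODAR studies (e.g., the SZ and ZFP families). This is an illustration of the technique under assumed names and parameters, not CODAR's actual code: each value is snapped to the nearest multiple of twice a user-chosen error bound, so the reconstruction is guaranteed to stay within that bound, while the small integer codes compress far better than raw doubles.

```c
#include <stdio.h>
#include <math.h>

/* Illustrative sketch only, not CODAR code. Encode: map each value to the
 * nearest multiple of 2*err_bound. If c = round(x / 2e), then
 * |x - 2ec| <= e, so the error bound holds by construction. */
void quantize(const double *data, long n, double err_bound, long long *codes)
{
    for (long i = 0; i < n; i++)
        codes[i] = llround(data[i] / (2.0 * err_bound));
}

/* Decode: reconstruction lies within err_bound of the original value. */
void dequantize(const long long *codes, long n, double err_bound, double *out)
{
    for (long i = 0; i < n; i++)
        out[i] = (double)codes[i] * (2.0 * err_bound);
}

int main(void)
{
    double data[] = {1.00002, 1.00007, 0.99991, 1.00013};
    long long codes[4];
    double recon[4];
    long n = 4;

    quantize(data, n, 1e-4, codes);    /* user-chosen absolute error bound */
    dequantize(codes, n, 1e-4, recon);

    for (long i = 0; i < n; i++)
        printf("%.5f -> %.5f (|err| = %.2e)\n",
               data[i], recon[i], fabs(data[i] - recon[i]));
    return 0;
}
```

In a real reducer the integer codes would then be entropy-coded; the point of the sketch is only that a scientist-specified error bound, not storage capacity, decides what information is kept.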
“Nanomagnetic devices may allow memory and logic functions to be combined in novel ways. And newer, perhaps more promising device concepts continue to emerge. At the same time, research in new architectures has also grown. Indeed, at the leading edge, researchers are beginning to focus on co-optimization of new devices and new architectures. Despite the growing research investment, the landscape of promising research opportunities outside the ‘FET devices and circuits box’ is still largely unexplored.”
Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data). “We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”
“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”
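As a rough illustration of what exploiting the architectural features of a manycore system such as Cori's Knights Landing nodes can mean in practice, the hedged OpenMP sketch below (not NERSC code; names and sizes are assumptions) shows the two changes users most often face: spreading work across many threads, and exposing inner loops to the wide vector units.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double x[N], y[N];
    double sum = 0.0;

    /* Initialize in parallel so memory pages are first touched by the
     * threads that will use them; placement matters on manycore nodes. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) {
        x[i] = 0.5 * i;
        y[i] = 2.0;
    }

    /* Threaded, vectorized dot product: 'simd' asks the compiler to use
     * each core's wide vector units, while 'parallel for' spreads the
     * iterations across the node's many hardware threads. */
    #pragma omp parallel for simd reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += x[i] * y[i];

    printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
    return 0;
}
```

A code that runs a single heavyweight process per node leaves most of a manycore processor idle; restructuring along these lines is the kind of transition the HPC Department helps users through.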
The Xinhua news agency reports that China is planning to develop a prototype exascale supercomputer by the end of 2017. “A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country’s first petaflop computer, Tianhe-1, recognized as the world’s fastest in 2010,” said Zhang Ting, an application engineer at the Tianjin-based National Supercomputer Center, speaking Tuesday at the sixth session of the 16th Tianjin Municipal People’s Congress.
Today the Mont-Blanc European project announced it has selected Cavium’s ThunderX2 ARM server processor to power its new HPC prototype. The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium have signed an agreement to collaborate on developing this new platform, making Mont-Blanc an alpha site for ThunderX2.
In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in that evolution. The HPE approach to exascale is geared toward breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.
Oak Ridge National Laboratory reports that its experts are playing leading roles in the DOE’s recently established Exascale Computing Project (ECP), a multi-lab initiative responsible for developing the strategy, aligning the resources, and conducting the R&D necessary to achieve the nation’s imperative of delivering exascale computing by 2021. “ECP’s mission is to ensure all the necessary pieces are in place for the first exascale systems – an ecosystem that includes applications, software stack, architecture, advanced system engineering and hardware components – to enable fully functional, capable exascale computing environments critical to scientific discovery, national security, and a strong U.S. economy.”