Designing HPC & Deep Learning Middleware for Exascale Systems

DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”

Podcast: IDC’s Steve Conway on China’s New Plan for Exascale

“China and the United States have been in the race to develop the most capable supercomputer. China has announced that its exascale computer could be released sooner than originally planned. Steve Conway, VP for high performance computing at IDC, joins Federal Drive with Tom Temin for analysis.”

Panel Discussion: The Exascale Endeavor

Gilad Shainer moderated this panel discussion on Exascale Computing at the Stanford HPC Conference. “The creation of a capable exascale ecosystem will have profound effects on the lives of Americans, improving our nation’s national security, economic competitiveness, and scientific capabilities. The exponential increase of computation power enabled with exascale will fuel a vast range of breakthroughs and accelerate discoveries in national security, medicine, earth sciences and many other fields.”

Video: A Look at the National Strategic Computing Initiative

In this video, Dr. Carl J. Williams, Deputy Director of the Physical Measurement Laboratory at the National Institute of Standards and Technology within the United States Department of Commerce, reviews the National Strategic Computing Initiative. Issued by Executive Order, the initiative aims to maximize benefits of high-performance computing research, development and deployment.

Exascale Computing Project Announces ECP Industry Council

“I’m pleased to have the opportunity to lead this important Council,” said Dr. J. Michael McQuade of United Technologies Corporation, who will serve as the first Chair of the ECP Industry Council. “Exascale level computing will help industry address ever more complex, competitively important problems, ones which are beyond the reach of today’s leading edge computing systems. We compete globally for scientific, technological and engineering innovations. Maintaining our lead at the highest level of computational capability is essential for our continued success.”

A Look at the CODAR Co-Design Center for Online Data Analysis and Reduction at Exascale

Ian Foster and other CODAR researchers are working to close the gap between computation speed and the speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information. “Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” said Foster. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”
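To make the idea concrete, here is a minimal, purely illustrative sketch of selective data reduction (this is not CODAR's actual method, just one simple form of error-bounded reduction): rather than writing out every value a simulation produces, keep only the values that drift beyond a chosen error bound from the last stored value. The function name and threshold below are hypothetical.

```python
def reduce_stream(values, error_bound):
    """Keep the first value, then only values that differ from the
    most recently kept value by more than error_bound.

    Values dropped this way can be approximated on read-back by the
    last kept value, so the error of the reconstruction is bounded.
    """
    if not values:
        return []
    kept = [values[0]]
    for v in values[1:]:
        if abs(v - kept[-1]) > error_bound:
            kept.append(v)
    return kept


# Six samples reduce to three: small fluctuations are discarded,
# while genuine changes in the signal are preserved.
data = [0.0, 0.01, 0.02, 1.0, 1.01, 2.5]
print(reduce_stream(data, 0.1))  # → [0.0, 1.0, 2.5]
```

The trade-off is the one Foster describes: the tighter the error bound, the more data survives to be stored; the looser the bound, the more storage bandwidth is saved at the cost of fidelity.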

Beyond Exascale: Emerging Devices and Architectures for Computing

“Nanomagnetic devices may allow memory and logic functions to be combined in novel ways. And newer, perhaps more promising device concepts continue to emerge. At the same time, research in new architectures has also grown. Indeed, at the leading edge, researchers are beginning to focus on co-optimization of new devices and new architectures. Despite the growing research investment, the landscape of promising research opportunities outside the “FET devices and circuits box” is still largely unexplored.”

NERSC Selects Six Teams for Exascale Science Applications Program

Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data) program. “We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”

Richard Gerber to Head NERSC’s HPC Department

“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”

China to Develop Exascale Prototype in 2017

The Xinhua news agency reports that China is planning to develop a prototype exascale supercomputer by the end of 2017. “A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country’s first petaflop computer Tianhe-1, recognized as the world’s fastest in 2010,” said Zhang Ting, an application engineer with the Tianjin-based National Supercomputer Center, speaking at the sixth session of the 16th Tianjin Municipal People’s Congress on Tuesday.