

NERSC Selects Six Teams for Exascale Science Applications Program

Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data). “We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”

Richard Gerber to Head NERSC’s HPC Department

“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”

China to Develop Exascale Prototype in 2017

The Xinhua news agency reports that China is planning to develop a prototype exascale supercomputer by the end of 2017. “A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country’s first petaflop computer Tianhe-1, recognized as the world’s fastest in 2010,” said Zhang Ting, an application engineer with the Tianjin-based National Supercomputer Center, while attending the sixth session of the 16th Tianjin Municipal People’s Congress on Tuesday.

Bull Atos to Build HPC Prototype for Mont-Blanc Project Using Cavium ThunderX2 Processor

Today the Mont-Blanc European project announced it has selected Cavium’s ThunderX2 ARM server processor to power its new HPC prototype. The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium have signed an agreement to collaborate on developing this new platform, making Mont-Blanc an alpha site for ThunderX2.

Exascale Computing: A Race to the Future of HPC

In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.

Oak Ridge Plays Key Role in Exascale Computing Project

Oak Ridge National Laboratory reports that its experts are playing leading roles in the DOE’s recently established Exascale Computing Project (ECP), a multi-lab initiative responsible for developing the strategy, aligning the resources, and conducting the R&D necessary to achieve the nation’s imperative of delivering exascale computing by 2021. “ECP’s mission is to ensure all the necessary pieces are in place for the first exascale systems – an ecosystem that includes applications, software stack, architecture, advanced system engineering and hardware components – to enable fully functional, capable exascale computing environments critical to scientific discovery, national security, and a strong U.S. economy.”

Reflecting on the Goal and Baseline for Exascale Computing

Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on the choice of architecture and implementation. This raises the question of what our baseline is, against which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”

The Festivus Airing of Grievances from Radio Free HPC

In this podcast, the Radio Free HPC team honors the Festivus tradition of the annual Airing of Grievances. Our random gripes include: the need for a better HPC benchmark suite, the missed opportunity for ARM servers, the skittish battery in the new Macbook Pro, and the lack of an industry standards body for cloud computing.

SAGE Project Looks to Percipient Storage for Exascale

“The SAGE project, which incorporates research and innovation in hardware and enabling software, will significantly improve the performance of data I/O and enable computation and analysis to be performed closer to data wherever it resides in the architecture, drastically minimizing data movement between compute and data storage infrastructures. With a seamless view of data throughout the platform, incorporating multiple tiers of storage from memory to disk to long-term archive, it will enable APIs and programming models to easily use such a platform and efficiently apply the data analytics techniques best suited to the problem space.”

Thomas Sterling Presents: HPC Runtime System Software for Asynchronous Multi-Tasking

Thomas Sterling presented this Invited Talk at SC16. “Increasing sophistication of application program domains combined with expanding scale and complexity of HPC system structures is driving innovation in computing to address sources of performance degradation. This presentation will provide a comprehensive review of driving challenges, strategies, examples of existing runtime systems, and experiences. One important consideration is the possible future role of advances in computer architecture to accelerate the likely mechanisms embodied within typical runtimes. The talk will conclude with suggestions of future paths and work to advance this possible strategy.”