White House Releases Strategic Plan for NSCI

This week the White House Office of Science and Technology Policy (OSTP) released the Strategic Plan for the National Strategic Computing Initiative (NSCI). “The NSCI strives to establish and support a collaborative ecosystem in strategic computing that will support scientific discovery and economic drivers for the 21st century, and that will not naturally evolve from current commercial activity,” write Altaf Carim, William Polk, and Erin Szulman of the OSTP in a blog post.

Preliminary Agenda Posted for HPC User Forum in Austin, Sept. 6-8

IDC has published the preliminary agenda for its next HPC User Forum. The event will take place Sept. 6-8 in Austin, Texas.

Intel® Xeon Phi™ Processor—Highly Parallel Computing Engine for HPC

For decades, Intel has been enabling insight and discovery through its technologies and contributions to parallel computing and High Performance Computing (HPC). Central to the company’s most recent work in HPC is a new design philosophy for clusters and supercomputers called Intel® Scalable System Framework (Intel® SSF), an approach designed to enable sustained, balanced performance as the community pushes towards the Exascale era.

With China Kicking the FLOP Out of Us, the Gold Medal Prize is Future Prosperity

“Achieving the No. 1 ranking is significant for China’s economic and energy security, not to mention national security. With 125 petaFLOP/s (peak), China’s supercomputer is firmly on the path toward applying incredible modeling and simulation capabilities enabling them to spur innovations in the fields of clean energy, manufacturing, and yes, nuclear weapons and other military applications. The strong probability of China gaining advantages in these areas should be setting off loud alarms, but it is hard to see what the U.S. is going to do differently to respond.”

China Leads TOP500 with Home-grown Technology

In this special guest feature, Robert Roe from Scientific Computing World writes that the new #1 system on the TOP500 is using home-grown processors to shake up the supercomputer industry. “While the system does have a focus towards computation, as opposed to the more data-centric computing strategies that we have begun to see implemented in the US and Europe, it is most certainly not just a Linpack supercomputer. The report explains that there are already three applications running on the Sunway TaihuLight system which are finalists for the Gordon Bell Award at SC16.”

Video: DEEP-ER Project Moves Europe Closer to Exascale

In this video from ISC 2016, Estela Suarez from the Jülich Supercomputing Centre provides an update on the DEEP-ER project, which is paving the way towards Exascale computing. “In the predecessor DEEP project, an innovative architecture for heterogeneous HPC systems has been developed based on the combination of a standard HPC Cluster and a tightly connected HPC Booster built of many-core processors. DEEP-ER now evolves this architecture to address two significant Exascale computing challenges: highly scalable and efficient parallel I/O and system resiliency. Co-Design is key to tackle these challenges – through thoroughly integrated development of new hardware and software components, fine-tuned with actual HPC applications in mind.”
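To illustrate the Cluster-Booster idea in code, here is a minimal, hedged sketch of how a Cluster-side MPI program might offload a highly scalable kernel to many-core Booster nodes. It uses only standard MPI (MPI_Comm_spawn over an intercommunicator); the executable name booster_kernel, the process count, and the buffer size are illustrative assumptions, not the project's actual interface.

```c
/* Sketch: Cluster-side offload to a many-core "Booster" partition,
 * in the spirit of the DEEP Cluster-Booster architecture.
 * Assumptions: "booster_kernel" is a hypothetical MPI executable that
 * receives a work buffer, computes, and sends the result back. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm booster;   /* intercommunicator to the spawned Booster side */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collectively spawn the scalable kernel on 64 Booster processes;
     * the Cluster side keeps the control-heavy, less scalable code. */
    MPI_Comm_spawn("booster_kernel", MPI_ARGV_NULL, 64, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &booster, MPI_ERRCODES_IGNORE);

    if (rank == 0) {
        double work[1024] = {0};

        /* Hand the Booster its share of the problem... */
        MPI_Send(work, 1024, MPI_DOUBLE, 0, 0, booster);

        /* ...and collect the result once the offloaded kernel finishes. */
        MPI_Recv(work, 1024, MPI_DOUBLE, 0, 0, booster, MPI_STATUS_IGNORE);
        printf("Booster kernel complete\n");
    }

    MPI_Finalize();
    return 0;
}
```

The design point this sketches is the one the project emphasizes: code regions with different scalability run on the partition that suits them, connected by a fast fabric rather than fused into one homogeneous machine.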

Mellanox and PNNL to Collaborate on Exascale System

Today Mellanox announced a joint technology collaboration with Pacific Northwest National Laboratory (PNNL) to architect, design and explore technologies for future Exascale platforms. The agreement will explore the advanced capabilities of Mellanox interconnect technology while focusing on a new generation of in-network computing architecture and the laboratory application requirements. This collaboration will also enable the DOE lab, through its Center for Advanced Technology Evaluation (CENATE), and Mellanox to effectively explore new software and hardware synergies that can drive high performance computing to the next level.

Challenges for Climate and Weather Prediction in the Era of Heterogeneous Architectures

Beth Wingate from the University of Exeter presented this talk at the PASC16 conference in Switzerland. “For weather or climate models to achieve exascale performance on next-generation heterogeneous computer architectures they will be required to exploit on the order of million- or billion-way parallelism. This degree of parallelism far exceeds anything possible in today’s models even though they are highly optimized. In this talk I will discuss the mathematical issue that leads to the limitations in space- and time-parallelism for climate and weather prediction models – oscillatory stiffness in the PDE.”
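To make the stiffness issue concrete, here is a schematic form of an oscillatory-stiff evolution equation of the kind at issue (illustrative only, not taken from the talk; the symbols ε, 𝓛, and 𝓝 are generic placeholders):

```latex
% Schematic oscillatory-stiff PDE (symbols are illustrative assumptions):
\[
  \frac{\partial u}{\partial t} + \frac{1}{\varepsilon}\,\mathcal{L}u
    = \mathcal{N}(u, u), \qquad 0 < \varepsilon \ll 1,
\]
% where \mathcal{L} is skew-Hermitian, so its eigenvalues are purely
% imaginary and the linear term generates fast oscillations on an
% O(\varepsilon) time scale rather than decay. An explicit integrator
% must resolve these oscillations, forcing \Delta t = O(\varepsilon);
% time stepping then becomes effectively serial, which is the barrier
% to time-parallelism the talk addresses.
```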

Job of the Week: Associate Director for Computation at Livermore

Lawrence Livermore National Lab is seeking an Associate Director (AD) for Computation in our Job of the Week. The position is key to the continued success of LLNL’s world-premier high performance computing, computer science, and data science enterprise. The AD for Computation is responsible for […]

Paul Messina on the New ECP Exascale Computing Project

Argonne Distinguished Fellow Paul Messina has been tapped to lead the Exascale Computing Project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project will focus its efforts on four areas: Applications, Software, Hardware, and Exascale Systems.