
DOE Task Force Releases Recommendations for Exascale Investment


A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Task Force on High Performance Computing of the Secretary of Energy Advisory Board. Commissioned by Secretary of Energy Dr. Ernest J. Moniz, the report includes recommendations on where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.

With ACME, National Labs Collaborate on Climate Change


In an unprecedented collaboration, eight national laboratories will apply supercomputing resources to a new climate study with the National Center for Atmospheric Research. The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications.

NERSC Leads Next-Generation Code Optimization Effort


“We are excited about launching NESAP in partnership with Cray and Intel to help transition our broad user base to energy-efficient architectures,” said Sudip Dosanjh, director of NERSC, the primary HPC facility for the DOE’s Office of Science. “We expect to see many aspects of Cori in an exascale computer, including dramatically more concurrency and on-package memory. The response from our users has been overwhelming—they recognize that Cori will allow them to do science that can’t be done on today’s supercomputers.”

Is Light-speed Computing Only Months Away?


In this video, Professor Heinz Wolff explains the Optalysys Optical Processor. The Cambridge, UK-based startup announced today that it is only months away from launching a prototype optical processor with “the potential to deliver Exascale levels of processing power on a standard-sized desktop computer.”

New Paper: Toward Exascale Resilience – 2014 Update


The all-new Journal of Supercomputing Frontiers and Innovations has published a new paper entitled Toward Exascale Resilience – 2014 Update. Written by Franck Cappello, Al Geist, William Gropp, Sanjay Kale, Bill Kramer, and Marc Snir, the paper surveys what the community has learned in the past five years and summarizes the research problems still considered critical by the HPC community.

SC14 Technical Program: an Interview with Jack Dongarra


“Over the years I have chaired many parts of the Technical Program, but never had a chance to chair the whole Technical Program. SC plays an important role in the high-performance community. It is through the SC Conference that HPC practitioners get an overview of the field, get to showcase our important work, and network with the community.”

Video: Pathways to Exascale N-body Simulations

In this video from the Exascale Computing in Astrophysics Conference, Tom Quinn from the University of Washington presents: Pathways to Exascale N-body Simulations.

New Approaches to Energy Efficient Exascale


“As displayed at ISC’14, DEEP combines a standard InfiniBand cluster of Intel Xeon nodes, with a new, highly scalable ‘booster’ consisting of Phi co-processors and a high-performance 3D torus network from Extoll, the German interconnect company spun out of the University of Heidelberg.”

#HPC Matters: What Would You Do with an Exaflop?


In this Industry Perspective, insideHPC editor Rich Brueckner asks our readers an important question: What Would You Do with an Exaflop? “I went looking for such exascale use cases this morning, and I found this remarkable story in Harvard Topics Magazine about how an exascale system could predict heart attacks and artery blockage.”

Video: Overcoming Barriers to Exascale through Innovation


“Today, the fastest supercomputers perform about 10^15 arithmetic operations per second and are thus described as petascale systems. However, developers and scientists from supercomputing centres and industry are already planning the route to exascale systems, which are about one thousand times faster than present supercomputers. In order to achieve this kind of performance, amongst other aspects, several million processor cores have to be synchronized and new storage technologies developed. The reliability of the components must be guaranteed and a key factor is the reduction of energy consumption.”