NERSC Leads Next-Generation Code Optimization Effort

“We are excited about launching NESAP in partnership with Cray and Intel to help transition our broad user base to energy-efficient architectures,” said Sudip Dosanjh, director of NERSC, the primary HPC facility for the DOE’s Office of Science. “We expect to see many aspects of Cori in an exascale computer, including dramatically more concurrency and on-package memory. The response from our users has been overwhelming—they recognize that Cori will allow them to do science that can’t be done on today’s supercomputers.”

Is Light-speed Computing Only Months Away?

In this video, Professor Heinz Wolff explains the Optalysys Optical Processor. The Cambridge, UK-based startup announced today that it is only months away from launching a prototype optical processor with “the potential to deliver Exascale levels of processing power on a standard-sized desktop computer.”

New Paper: Toward Exascale Resilience – 2014 Update

The all-new Journal of Supercomputing Frontiers and Innovations has published a new paper entitled Toward Exascale Resilience – 2014 Update. Written by Franck Cappello, Al Geist, William Gropp, Sanjay Kale, Bill Kramer, and Marc Snir, the paper surveys what the community has learned in the past five years and summarizes the research problems still considered critical by the HPC community.

SC14 Technical Program: An Interview with Jack Dongarra

Jack Dongarra

“Over the years I have chaired many parts of the Technical Program, but never had a chance to chair the whole Technical Program. SC plays an important role in the high-performance community. It is through the SC Conference that HPC practitioners get an overview of the field, get to showcase our important work, and network with the community.”

Video: Pathways to Exascale N-body Simulations

In this video from the Exascale Computing in Astrophysics Conference, Tom Quinn from the University of Washington presents: Pathways to Exascale N-body Simulations.

New Approaches to Energy Efficient Exascale

“As displayed at ISC’14, DEEP combines a standard InfiniBand cluster of Intel Xeon nodes, with a new, highly scalable ‘booster’ consisting of Phi co-processors and a high-performance 3D torus network from Extoll, the German interconnect company spun out of the University of Heidelberg.”

#HPC Matters: What Would You Do with an Exaflop?

Rich Brueckner

In this Industry Perspective, insideHPC editor Rich Brueckner asks our readers an important question: What Would You Do with an Exaflop? “I went looking for such exascale use cases this morning, and I found this remarkable story in Harvard Topics Magazine about how an exascale system could predict heart attacks and artery blockage.”

Video: Overcoming Barriers to Exascale through Innovation

“Today, the fastest supercomputers perform about 10^15 arithmetic operations per second and are thus described as petascale systems. However, developers and scientists from supercomputing centres and industry are already planning the route to exascale systems, which are about one thousand times faster than present supercomputers. In order to achieve this kind of performance, amongst other aspects, several million processor cores have to be synchronized and new storage technologies developed. The reliability of the components must be guaranteed and a key factor is the reduction of energy consumption.”
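
As a rough back-of-envelope illustration of what that jump implies, the sketch below divides an exaflop across “several million” cores (a minimal sketch; the core count is an illustrative assumption, not a figure from the talk):

# Back-of-envelope: what a 1000x jump from petascale implies per core.
PETAFLOP = 1e15          # arithmetic operations per second (today's fastest systems)
EXAFLOP = 1e18           # exascale target, roughly 1000x faster

cores = 5_000_000        # assumed "several million" cores (illustrative)
per_core_rate = EXAFLOP / cores

print(f"Speed-up over petascale: {EXAFLOP / PETAFLOP:.0f}x")
print(f"Sustained rate needed per core: {per_core_rate / 1e9:.0f} GFLOP/s")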

The Coral Project: How History Will Guide Us to Exascale

Tom Wilkie, Scientific Computing World

“As discussed in previous articles in this series, there are (at least) three ways in which Governments are forcing the pace of technological development. One is by international research cooperation – usually on projects that do not have an immediate commercial product as their end-goal. A second is by funding commercial companies to conduct technological research – and thus subsidising, at taxpayers’ expense, the creation or strengthening of technical expertise within commercial companies. The third is subsidy by the back door, through military and civil procurement contracts.”

Burst Buffers and Data-Intensive Scientific Computing

“For those who haven’t been following the details of one of DOE’s more recent procurement rounds, the NERSC-8 and Trinity request for proposals (RFP) explicitly required that all vendor proposals include a burst buffer to address the capability of multi-petaflop simulations to dump tremendous amounts of data in very short order. The target use case is for petascale checkpoint-restart, where the memory of thousands of nodes (hundreds of terabytes of data) needs to be flushed to disk in an amount of time that doesn’t dominate the overall execution time of the calculation.”
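
To make that requirement concrete, the sketch below works through the bandwidth arithmetic behind such a checkpoint dump (a minimal sketch; the memory footprint, checkpoint interval, and overhead budget are illustrative assumptions, not NERSC-8/Trinity figures):

# Back-of-envelope: aggregate bandwidth needed so checkpointing stays cheap.
checkpoint_bytes = 300e12        # assumed ~300 TB of aggregate node memory to flush (illustrative)
checkpoint_interval_s = 3600.0   # assumed one checkpoint per hour of computation (illustrative)
overhead_budget = 0.05           # keep checkpoint time under ~5% of runtime (illustrative)

allowed_dump_time_s = checkpoint_interval_s * overhead_budget
required_bandwidth = checkpoint_bytes / allowed_dump_time_s

print(f"Allowed dump time per checkpoint: {allowed_dump_time_s:.0f} s")
print(f"Required aggregate bandwidth: {required_bandwidth / 1e12:.1f} TB/s")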