SC17 Panel Preview: How Serious Are We About the Convergence Between HPC and Big Data?

SC17 will feature a panel discussion entitled How Serious Are We About the Convergence Between HPC and Big Data? “The possible convergence between the third and fourth paradigms confronts the scientific community with both a daunting challenge and a unique opportunity. The challenge resides in the requirement to support two heterogeneous workloads on the same hardware architecture. The opportunity lies in creating a common software stack to accommodate the requirements of scientific simulations and big data applications productively while maximizing performance and throughput.”

OSC Helps Map the Invisible Universe

The Ohio Supercomputer Center played a critical role in helping researchers reach a milestone in mapping the growth of the universe from its infancy to the present day. “The new results released Aug. 3 confirm the surprisingly simple but puzzling theory that the present universe is composed of only 4 percent ordinary matter, 26 percent mysterious dark matter, and the remaining 70 percent dark energy, which causes the accelerating expansion of the universe.”

When Neutron Stars and Black Holes Collide

Working with an international team, scientists at Berkeley Lab have developed new computer models to explore what happens when a black hole merges with a neutron star – the superdense remnant of an exploded star. “If we can follow up LIGO detections with telescopes and catch a radioactive glow, we may finally witness the birthplace of the heaviest elements in the universe,” one of the researchers said. “That would answer one of the longest-standing questions in astrophysics.”

RCE Podcast Looks at Shifter Containers for HPC

In this RCE Podcast, Brock Palen and Jeff Squyres speak with Shane Canon and Doug Jacobsen from NERSC, the authors of Shifter. “Shifter is a prototype implementation that NERSC is developing and experimenting with as a scalable way of deploying containers in an HPC environment. It works by converting user- or staff-generated images in Docker, virtual machines, or CHOS (another method for delivering flexible environments) to a common format.”

Boosting Manycore Code Optimization Efforts with Roofline Technology

A software toolkit developed at Berkeley Lab to better understand supercomputer performance is now being used to boost application performance for researchers running codes at NERSC and other supercomputing facilities. “Since its initial development, what is now known as the Empirical Roofline Toolkit (ERT) has benefitted from contributions by several Berkeley Lab staff. Along the way, HPC users who write scientific applications for manycore systems have been able to apply the toolkit to their applications and see how changing parameters of their code can improve performance.”
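
To make the idea concrete: a roofline model bounds a kernel’s attainable performance by whichever is lower, the machine’s peak compute rate or its memory bandwidth multiplied by the kernel’s arithmetic intensity (flops per byte moved). The minimal sketch below illustrates the calculation with made-up machine numbers, not measurements from the ERT:

    #include <stdio.h>

    /* Roofline bound: attainable GFLOP/s is limited either by peak
       compute or by memory bandwidth times arithmetic intensity. */
    static double roofline_gflops(double peak_gflops,
                                  double peak_gbs,
                                  double intensity) {
        double bw_bound = peak_gbs * intensity;
        return bw_bound < peak_gflops ? bw_bound : peak_gflops;
    }

    int main(void) {
        /* Placeholder machine numbers, not ERT measurements. */
        double peak_gflops = 2000.0;  /* peak compute, GFLOP/s */
        double peak_gbs    = 400.0;   /* peak bandwidth, GB/s  */

        /* From a stream-like kernel (low intensity) to a dense one. */
        double intensities[] = { 0.125, 1.0, 10.0 };
        for (int i = 0; i < 3; i++)
            printf("AI = %6.3f flop/byte -> bound = %8.1f GFLOP/s\n",
                   intensities[i],
                   roofline_gflops(peak_gflops, peak_gbs, intensities[i]));
        return 0;
    }

A kernel whose bound sits on the bandwidth side of the roofline benefits from reducing data movement; one at the compute ceiling benefits from vectorization and similar optimizations, which is how users apply the model to tune their codes.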

Berkeley Lab Tunes NWChem for Intel Xeon Phi Processor

A team of researchers at Berkeley Lab, PNNL, and Intel are working hard to make sure that computational chemists are prepared to compute efficiently on next-generation exascale machines. Recently, they achieved a milestone, successfully adding thread-level parallelism on top of MPI-level parallelism in the planewave density functional theory method within the popular software suite NWChem. “Planewave codes are useful for solution chemistry and materials science; they allow us to look at the structure, coordination, reactions and thermodynamics of complex dynamical chemical processes in solutions and on surfaces.”
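
As a generic illustration of the pattern described above, layering thread-level parallelism on top of MPI-level parallelism, the hybrid MPI/OpenMP sketch below splits work across ranks, parallelizes each rank’s slice with threads, and combines the results. The loop body is a stand-in, not NWChem’s planewave code:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided, rank, size;
        /* Ask for FUNNELED support: only the main thread calls MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* MPI level: each rank takes a slice of the iteration space. */
        const long n = 8000000;
        long chunk = n / size;
        long lo = rank * chunk;
        long hi = (rank == size - 1) ? n : lo + chunk;

        /* Thread level: OpenMP parallelizes the rank-local loop.
           The loop body is a placeholder for real per-element work. */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (long i = lo; i < hi; i++)
            local += 1.0 / (double)(i + 1);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %f (ranks=%d, threads/rank=%d)\n",
                   total, size, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }

On a manycore node such as the Xeon Phi, one would typically launch a few MPI ranks per node and let OpenMP threads fill the remaining cores, for example by setting OMP_NUM_THREADS before launching with mpirun.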

SDSC Seismic Simulation Software Exceeds 10 Petaflops on Cori Supercomputer

Researchers at SDSC, working with Intel, have developed a new seismic software package that has enabled the fastest seismic simulation to date. SDSC’s ground-breaking run sustained 10.4 petaflops on earthquake simulations using 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at NERSC.

NERSC Selects Six Teams for Exascale Science Applications Program

Following a call for proposals issued last October, NERSC has selected six science application teams to participate in NESAP for Data, the NERSC Exascale Science Applications Program for Data. “We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”

Richard Gerber to Head NERSC’s HPC Department

“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”

Supercomputing Sheds Light on Leaf Study

A new study led by a research scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) highlights a literally shady practice in plant science that has in some cases underestimated plants’ rate of growth and photosynthesis, among other traits. “More standardized fieldwork, in parallel with new computational tools and theoretical work, will contribute to better global plant models,” Keenan said.