Boosting Manycore Code Optimization Efforts with Roofline Technology

A software toolkit developed at Berkeley Lab to better understand supercomputer performance is now being used to boost the performance of research codes running at NERSC and other supercomputing facilities. “Since its initial development, what is now known as the Empirical Roofline Toolkit (ERT) has benefited from contributions by several Berkeley Lab staff. Along the way, HPC users who write scientific applications for manycore systems have been able to apply the toolkit to their applications and see how changing parameters of their code can improve performance.”
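For readers unfamiliar with the model behind the toolkit, the roofline model bounds attainable performance by the lesser of a machine’s peak compute rate and its memory bandwidth times a kernel’s arithmetic intensity. Here is a minimal sketch of that bound; the peak and bandwidth figures are illustrative placeholders, not numbers produced by ERT:

```c
/* A minimal sketch of the roofline bound that ERT measures empirically.
 * The peak and bandwidth figures below are illustrative placeholders. */
#include <stdio.h>

/* Attainable FLOP rate is bounded by compute peak or by memory traffic. */
static double roofline_gflops(double peak_gflops, double bandwidth_gbs,
                              double arithmetic_intensity /* flops/byte */)
{
    double memory_bound = bandwidth_gbs * arithmetic_intensity;
    return memory_bound < peak_gflops ? memory_bound : peak_gflops;
}

int main(void)
{
    /* Hypothetical machine: 2000 GFLOP/s peak, 400 GB/s DRAM bandwidth. */
    double peak = 2000.0, bw = 400.0;

    /* A stream-like triad a[i] = b[i] + s*c[i] performs 2 flops while
     * moving 24 bytes (three 8-byte doubles), so its intensity is 2/24. */
    double ai = 2.0 / 24.0;
    printf("triad attainable: %.1f GFLOP/s (machine peak %.1f)\n",
           roofline_gflops(peak, bw, ai), peak);
    return 0;
}
```

A kernel whose point sits on the sloped (bandwidth) part of the roof benefits from improving data reuse, while one pinned at the flat (compute) ceiling calls for better vectorization, which is exactly the kind of diagnosis the toolkit supports.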

Berkeley Lab Tunes NWChem for Intel Xeon Phi Processor

A team of researchers at Berkeley Lab, PNNL, and Intel is working hard to make sure that computational chemists are prepared to compute efficiently on next-generation exascale machines. Recently, the team achieved a milestone, successfully adding thread-level parallelism on top of MPI-level parallelism in the planewave density functional theory method within the popular software suite NWChem. “Planewave codes are useful for solution chemistry and materials science; they allow us to look at the structure, coordination, reactions and thermodynamics of complex dynamical chemical processes in solutions and on surfaces.”
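The pattern they describe, thread-level parallelism layered on top of MPI-level parallelism, is the classic hybrid MPI+OpenMP structure. A minimal sketch of that structure follows; this is not NWChem source, and the loop body is a stand-in for real planewave work:

```c
/* A minimal sketch of thread-level parallelism (OpenMP) layered on
 * MPI-level parallelism, the hybrid pattern described for NWChem's
 * planewave code. Not NWChem source. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request a threading level where OpenMP threads may run inside a
     * rank but only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;
    /* Thread-level parallelism within the rank, e.g. over the portion
     * of planewave coefficients this rank owns. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i);

    double global = 0.0;
    /* MPI-level parallelism: combine partial results across ranks. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f (threads per rank: %d)\n",
               global, omp_get_max_threads());
    MPI_Finalize();
    return 0;
}
```

On manycore processors such as the Xeon Phi, this layering matters because running one MPI rank per core exhausts memory and communication resources; fewer ranks with many threads each is typically the better fit.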

SDSC Seismic Simulation Software Exceeds 10 Petaflops on Cori Supercomputer

Researchers at SDSC, working with Intel Corporation, have developed a new seismic software package that enabled the fastest seismic simulation to date. SDSC’s ground-breaking run achieved 10.4 petaflops on earthquake simulations using 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at NERSC.

NERSC Selects Six Teams for Exascale Science Applications Program

Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data). “We’re very excited to welcome these new data-intensive science application teams to NESAP,” said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. “NESAP’s tools and expertise should help accelerate the transition of these data science codes to KNL. But I’m also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way.”

Richard Gerber to Head NERSC’s HPC Department

“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”

Supercomputing Sheds Light on Leaf Study

A new study led by a research scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) highlights a literally shady practice in plant science that has in some cases underestimated plants’ rate of growth and photosynthesis, among other traits. “More standardized fieldwork, in parallel with new computational tools and theoretical work, will contribute to better global plant models,” Keenan said.

Supercomputing Drug Discovery to Combat Heart Disease

Using a unique computational approach to rapidly sample proteins in their natural state of gyrating, bobbing, and weaving, a research team from UC San Diego and Monash University in Australia has identified promising drug leads that may selectively combat heart disease, from arrhythmias to cardiac failure.

Machine Learning and HPC Converge at NERSC

In this video from the Intel HPC Developer Conference, Prabhat from NERSC describes how high performance computing techniques are being used to scale Machine Learning to over 100,000 compute cores. “Using TB-sized datasets from three science applications: astrophysics, plasma physics, and particle physics, we show that our implementation can construct a kd-tree of 189 billion particles in 48 seconds utilizing ∼50,000 cores.”
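For context, a kd-tree recursively partitions points by median splits along cycling coordinate axes, which is what makes fast neighbor queries over particle data possible. Below is a minimal single-node sketch of the construction; the team’s distributed 50,000-core implementation is far more involved, and this only illustrates the data structure itself:

```c
/* A minimal single-node sketch of kd-tree construction over 3-D points.
 * Illustrative only; not the NERSC team's distributed implementation. */
#include <stdio.h>
#include <stdlib.h>

#define K 3  /* particle coordinates are 3-D */

typedef struct Node {
    double point[K];
    struct Node *left, *right;
} Node;

static int cmp_axis;  /* axis used by the qsort comparator below */

static int cmp(const void *a, const void *b)
{
    double da = ((const Node *)a)->point[cmp_axis];
    double db = ((const Node *)b)->point[cmp_axis];
    return (da > db) - (da < db);
}

/* Recursively split on the median along a cycling axis. */
static Node *build(Node *pts, int n, int depth)
{
    if (n <= 0) return NULL;
    cmp_axis = depth % K;
    qsort(pts, n, sizeof(Node), cmp);
    int mid = n / 2;
    pts[mid].left  = build(pts, mid, depth + 1);
    pts[mid].right = build(pts + mid + 1, n - mid - 1, depth + 1);
    return &pts[mid];
}

int main(void)
{
    Node pts[] = {
        {{2,3,1},0,0}, {{5,4,2},0,0}, {{9,6,7},0,0},
        {{4,7,9},0,0}, {{8,1,5},0,0}, {{7,2,6},0,0},
    };
    Node *root = build(pts, 6, 0);
    printf("root splits on x at %.1f\n", root->point[0]);
    return 0;
}
```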

DOE to Showcase Leadership in HPC at SC16

Researchers and staff from the U.S. Department of Energy’s national laboratories will showcase some of DOE’s best computing and networking innovations and techniques at SC16 in Salt Lake City. “Computational scientists working for various DOE laboratories have been involved in the conference since its 1988 beginnings, and this year’s event is no different. Experts from 14 national laboratories will be sharing a booth featuring speakers, presentations, demonstrations, discussions and simulations.”

SLAC & Berkeley Researchers Prepare for Exascale

Researchers at the Department of Energy’s SLAC National Accelerator Laboratory are playing key roles in two recently funded computing projects aimed at developing cutting-edge scientific applications for future exascale supercomputers, machines that can perform at least a billion billion computing operations per second, 50 to 100 times more than the most powerful supercomputers in the world today.