Who Will Write Next-generation Software?


In this special guest feature from Scientific Computing World, Robert Roe writes that software scalability and portability may be even more important than energy efficiency to the future of HPC. “As the HPC market searches for the optimal strategy to reach exascale, it is clear that the major roadblock to improving the performance of applications will be the scalability of software, rather than the hardware configuration – or even the energy costs associated with running the system.”

With APEX, National Labs Collaborate to Develop Next-Gen Supercomputers


Today Los Alamos, Lawrence Berkeley, and Sandia national laboratories announced the Alliance for Application Performance at Extreme Scale (APEX). The new collaboration will focus on the design, acquisition and deployment of future advanced technology high performance computing systems.

RCE Podcast: Spack Package Management Tool

Todd Gamblin, LLNL

“Spack is designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputing centers, where many users and application teams share common installations of software on clusters with exotic architectures, using libraries that do not have a standard ABI. Spack is non-destructive: installing a new version does not break existing installations, so many configurations can coexist on the same system.”
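
A quick way to see that non-destructive, multi-version behavior is from a small driver script. The following is a minimal sketch (not from the podcast) that assumes a working spack executable on the PATH; the zlib versions and the gcc compiler spec are placeholders for illustration.

    # Minimal sketch: installing two configurations of the same package with Spack
    # and listing them side by side. Assumes `spack` is on PATH; the package,
    # versions, and compiler below are illustrative placeholders.
    import subprocess

    def spack(*args):
        """Run a spack subcommand and echo the command line."""
        cmd = ["spack", *args]
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Two versions of the same library can be installed without disturbing each
    # other -- Spack keys each installation on its full concretized spec, not
    # just the package name.
    spack("install", "zlib@1.2.8")
    spack("install", "zlib@1.2.11", "%gcc")

    # Both installations now coexist and can be listed (or loaded) independently.
    spack("find", "-lv", "zlib")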

RCE Podcast on the Conduit Model for Hierarchical Scientific Data

Cyrus D Harrison, LLNL

In this RCE podcast, Brock Palen and Jeff Squyres discuss Conduit with Cyrus Harrison from LLNL. Conduit is an open source project from Lawrence Livermore that provides an intuitive model for describing hierarchical scientific data in C++, C, Fortran, and Python, and is used for in-core data coupling between packages, serialization, and I/O tasks.
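
To give a flavor of that hierarchical model, here is a minimal sketch using Conduit's Python bindings; it is illustrative only, assumes the conduit module (and numpy) are installed, and the node paths and values are made up.

    # Minimal sketch of Conduit's hierarchical node model via its Python bindings.
    # Paths and values are illustrative only.
    import numpy as np
    import conduit

    n = conduit.Node()

    # Values are addressed by slash-delimited paths, which implicitly build the tree.
    n["state/cycle"] = 100
    n["state/time"] = 1.5
    n["fields/temperature/values"] = np.linspace(0.0, 1.0, 5)

    # The same tree can be handed between C++, C, Fortran, and Python codes,
    # or serialized for I/O; here we simply print its structure.
    print(n)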

LLNL & Rensselaer Polytechnic to Promote Industrial HPC


Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute will combine decades of expertise to help American industry and businesses expand use of high performance computing under a recently signed memorandum of understanding.

Video: Debugging HPC Applications at Massive Scales


In this video, LLNL scientists discuss the challenges of debugging programs at scale on the Sequoia supercomputer, which has 1.6 million processor cores. “Bugs in parallel HPC applications are difficult to debug because errors propagate among compute nodes, programmers must debug thousands of nodes or more, and bugs might manifest only at large scale.”
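
As a concrete (and entirely hypothetical) illustration of a bug that manifests only at large scale, consider a toy mpi4py program whose hard-coded table is sized for 1,024 ranks: it passes every small test run and fails only once the job is scaled past that assumption. This sketch is not taken from the video.

    # Hypothetical illustration: a bug that only manifests at scale. The lookup
    # table is sized for 1024 ranks, so the out-of-bounds access never triggers
    # in small test runs and only appears on large jobs.
    from mpi4py import MPI

    MAX_RANKS_ASSUMED = 1024          # silent assumption baked in during development
    weights = [1.0] * MAX_RANKS_ASSUMED

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Works for every run below 1024 ranks; raises IndexError only when the job
    # is scaled past the assumed limit -- exactly the kind of failure that forces
    # debugging at thousands of nodes.
    local = weights[rank] * rank
    total = comm.allreduce(local, op=MPI.SUM)

    if rank == 0:
        print("weighted sum:", total)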

Video: DARPA’s SyNAPSE and the Cortical Processor


“I will describe a decade-long, multi-disciplinary, multi-institutional effort spanning neuroscience, supercomputing and nanotechnology to build and demonstrate a brain-inspired computer and describe the architecture, programming model and applications. I also will describe future efforts in collaboration with DOE to build, literally, a “brain-in-a-box”. The work was built on simulations conducted on Lawrence Livermore National Laboratory’s Dawn and Sequoia HPC systems in collaboration with Lawrence Berkeley National Laboratory.”

Experts Focus on Code Efficiency at ISC 2015

DK Panda from Ohio State University conducts a tutorial at ISC 2015.

In this special guest feature, Robert Roe from Scientific Computing World explores the efforts made by top HPC centers to scale software codes to the extreme levels necessary for exascale computing. “The speed with which supercomputers process useful applications is more important than rankings on the TOP500, experts told the ISC High Performance Conference in Frankfurt last month.”

IBM and NVIDIA Launch Centers of Excellence at ORNL and LLNL


Today IBM, Nvidia, and two U.S. Department of Energy national laboratories announced a pair of Centers of Excellence for supercomputing – one at Lawrence Livermore National Laboratory and the other at Oak Ridge National Laboratory. The collaborations support IBM’s supercomputing contract with the U.S. Department of Energy. They will enable advanced, large-scale scientific and engineering applications both in support of DOE missions and for the Summit and Sierra supercomputers, to be delivered to Oak Ridge and Lawrence Livermore, respectively, in 2017 and operational in 2018.

LLNL Breaks Ground on New Supercomputing Facility

From left: Patricia Falcone, deputy director for Science and Technology; Charles Verdon, principal associate director for Weapons and Complex Integration; Michel McCoy, director of Weapons Simulation and Computing; Livermore Mayor John Marchand; Bill Goldstein, LLNL director; and Dona Crawford, associate director of the Computation Directorate.

This week Lawrence Livermore National Laboratory broke ground on a modular and sustainable supercomputing facility that will provide a flexible infrastructure able to accommodate the Laboratory’s growing demand for HPC.