Video: LANL Open Sources DeltaFS software for Wrangling Trillions of Files

A new distributed file system for HPC, released today on GitHub, provides unprecedented performance for creating, updating, and managing extreme numbers of files. “We designed DeltaFS to enable the creation of trillions of files,” said Brad Settlemyer, a Los Alamos computer scientist and project leader. “Such a tool aids researchers in solving classical problems in high-performance computing, such as particle trajectory tracking or vortex detection.”
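
As a rough illustration of the workload DeltaFS targets, the sketch below (plain Python, not the DeltaFS API; the file names and layout are hypothetical) shows a per-particle file pattern in which every particle gets its own small trajectory file, so a full-scale simulation quickly reaches billions to trillions of file creates.

```python
# Illustrative sketch only (not the DeltaFS API): the workload shape that
# motivates DeltaFS -- one small trajectory file per particle, created and
# appended to at every timestep. Paths and naming are hypothetical.
import os

def write_particle_step(root, particle_id, step, position):
    """Append one timestep's position to a per-particle file."""
    path = os.path.join(root, f"particle_{particle_id:012d}")
    with open(path, "a") as f:
        f.write(f"{step} {position[0]} {position[1]} {position[2]}\n")

# Toy driver: a handful of particles. A real run would issue this pattern
# from every rank, for billions to trillions of particles.
os.makedirs("traj", exist_ok=True)
for pid in range(8):
    write_particle_step("traj", pid, step=0, position=(0.0, 0.0, 0.0))
```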

Podcast: ECP EXAALT Program Extends the Reach of Molecular Dynamics

Computationally, EXAALT’s goal is to develop a comprehensive molecular dynamics capability for exascale. “The user should be able to say, ‘I’m interested in this kind of system size, timescale, and accuracy,’ and directly access the regime without being constrained by the usual scaling paths of current codes,” said Danny Perez of Los Alamos National Laboratory (LANL) and the EXAALT team.

LANL Upgrades to D-Wave 2000Q Quantum Computer

Today D-Wave Systems announced that Los Alamos National Laboratory has upgraded its D-Wave quantum computer to the D-Wave 2000Q system. Los Alamos is investing in D-Wave quantum technology to expand its foundational quantum computing research, enabling exploration of new and diverse quantum computing applications. “We are pleased that the Department of Energy’s National Nuclear Security Administration Advanced Simulation and Computing program funded the upgrade of the D-Wave system, allowing us to continue to explore quantum simulation and algorithms at larger scales,” said Irene Qualters, associate laboratory director for Simulation and Computation at Los Alamos National Laboratory. “D-Wave has been a valued strategic partner in Los Alamos’ pursuit of a new technology that is part of the expanding heterogeneous landscape of computing. Such strong partnerships aid the Laboratory and DOE in the development of the nation’s workforce for the future.”

LANL Solicits Bids for 18 MW Crossroads Supercomputer for Delivery in 2021

The next big supercomputer is out for bid. An RFP was released today for Crossroads, an 18-megawatt system that will support the nation’s Stockpile Stewardship Program. “Los Alamos National Laboratory is proud to serve as the home of Crossroads. This high-performance computer will continue the Laboratory’s tradition of deploying unique capabilities to achieve our mission of national security science,” said Thom Mason from LANL.

Video: Ramping up for Exascale at the National Labs

In this video from the Exascale Computing Project, Dave Montoya from LANL describes the continuous software integration effort at DOE facilities where exascale computers will be located sometime in the next 3-4 years. “A key aspect of the Exascale Computing Project’s continuous integration activities is ensuring that the software in development for exascale can efficiently be deployed at the facilities and that it properly blends with the facilities’ many software components. As is commonly understood in the realm of high-performance computing, integration is very challenging: both the hardware and software are complex, with a huge amount of dependencies, and creating the associated essential healthy software ecosystem requires abundant testing.”

nCorium Startup joins LANL’s Efficient Mission Centric Computing Consortium for Ultra-scale Efficiency

The San Jose-based startup company nCorium has joined Los Alamos National Laboratory’s Efficient Mission Centric Computing Consortium (EMC3) in the quest for efficient, ultra-scale computing. “We are excited to be working with nCorium to explore moving data multiple times faster than current approaches while adding value to the data as it moves,” said Gary Grider, HPC Division Leader at Los Alamos. “The prospect of using far less data movement/storage nodes in our environment while providing more in-flight data manipulation is an important step towards the higher efficiencies that the EMC3 seeks.”

Balancing the Load – A Million Cores in Concert

“If you’re doing any kind of parallel simulation, and you have a bit of imbalance, all the other cores have to wait for the slowest one,” Junghans says, a problem that compounds as the computing system’s size grows. “The bigger you go on scale, the more these tiny imbalances matter.” On a system like LANL’s Trinity supercomputer, up to 999,999 cores could idle, waiting on a single one to complete a task.
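
The effect is easy to see in a toy model. The sketch below (illustrative only, not LANL code) simulates one bulk-synchronous step in which every core must wait for the slowest before the next step can begin, and reports how much total core-time a small per-core jitter wastes as the machine grows.

```python
# Minimal illustrative sketch: in a bulk-synchronous parallel step, every
# core waits for the slowest one, so even a tiny per-core imbalance wastes
# more total core-time as the machine grows.
import random

def wasted_core_time(num_cores, mean_work=1.0, jitter=0.02, seed=0):
    rng = random.Random(seed)
    work = [mean_work + rng.uniform(0.0, jitter) for _ in range(num_cores)]
    slowest = max(work)                    # the step ends when the last core finishes
    return sum(slowest - w for w in work)  # core-seconds spent idle, waiting

for cores in (1_000, 100_000, 1_000_000):
    print(f"{cores:>9} cores: ~{wasted_core_time(cores):,.0f} core-seconds idle per step")
```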

Earth-modeling System steps up to Exascale

“Unveiled today by the DOE, E3SM is a state-of-the-science modeling project that uses the world’s fastest computers to more accurately understand how Earth’s climate works and can evolve into the future. The goal: to support DOE’s mission to plan for robust, efficient, and cost-effective energy infrastructures now and into the distant future.”

Let’s Talk Exascale Podcast Looks at Co-Design Center for Particle-Based Applications

In this Let’s Talk Exascale podcast, Tim Germann from Los Alamos National Laboratory discusses the ECP’s Co-Design Center for Particle Applications (COPA). “COPA serves as a centralized clearinghouse for particle-based methods, and as first users on immature simulators, emulators, and prototype hardware. Deliverables include ‘Numerical Recipes for Particles’ best practices, libraries, and a scalable open exascale software platform.”

Video: D-Wave Systems Seminar on Quantum Computing from SC17

In this video from SC17 in Denver, Bo Ewald from D-Wave Systems hosts a Quantum Computing Seminar. “We will discuss quantum computing, the D-Wave 2000Q system and software, the growing software ecosystem, an overview of some user projects, and how quantum computing can be applied to problems in optimization, machine learning, cyber security, and sampling.”
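
For readers curious about the problem form behind the optimization use cases mentioned in the seminar, annealers such as the D-Wave 2000Q take problems expressed as a QUBO: minimize x^T Q x over binary x. The sketch below uses a hypothetical three-variable toy and solves it by brute force purely for illustration; a real workflow would hand the Q matrix to the annealer instead.

```python
# Minimal QUBO sketch (illustrative only): minimize x^T Q x over binary x.
# The toy Q below encodes "pick exactly one of three options": each pick is
# rewarded, and picking two together is penalized.
from itertools import product

Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1,
     (0, 1): 2, (0, 2): 2, (1, 2): 2}

def energy(x, Q):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute-force search stands in for the annealer on this 3-variable toy.
best = min(product((0, 1), repeat=3), key=lambda x: energy(x, Q))
print("lowest-energy assignment:", best, "energy:", energy(best, Q))
```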