Video: A History of Los Alamos National Lab

Terry Wallace from Los Alamos National Lab gave this talk at the HPC User Forum. “The Laboratory was established in 1943 as site Y of the Manhattan Project for a single purpose: to design and build an atomic bomb. It took just 27 months. The Los Alamos of today has a heightened focus on worker safety and security awareness, with the ever-present core values of intellectual freedom, scientific excellence, and national service. Outstanding science underpins the Laboratory’s past and its future.”

Trinity Supercomputer lands at #7 on TOP500

The Trinity Supercomputer at Los Alamos National Laboratory was recently named a top 10 supercomputer on two lists: it placed number three on the High Performance Conjugate Gradients (HPCG) benchmark list and number seven on the TOP500 list. “Trinity has already made unique contributions to important national security challenges, and we look forward to Trinity having a long tenure as one of the most powerful supercomputers in the world,” said John Sarrao, associate director for Theory, Simulation and Computation at Los Alamos.
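For context on the HPCG metric mentioned above: unlike the dense-matrix LINPACK run used for the TOP500, HPCG ranks machines on a conjugate-gradient style workload dominated by sparse matrix-vector products and vector updates. The short NumPy sketch below is only a generic illustration of that kernel class, not the benchmark code itself.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Plain conjugate gradient solve of A x = b for a symmetric positive-definite A.
    Generic illustration of the kernel class HPCG exercises; the real benchmark
    runs a large sparse 3D problem with a multigrid preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                # the matrix-vector product dominates the cost
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Tiny example: a 2x2 symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```

Because this pattern is dominated by sparse memory accesses rather than dense floating-point work, HPCG tends to reward memory bandwidth, which is why a machine's HPCG and TOP500 rankings can differ.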

MarFS – A Scalable Near-POSIX File System over Cloud Objects

Gary Grider from LANL presented this talk at the Storage Developer Conference. “MarFS is a Near-POSIX File System using cloud storage for data and many POSIX file systems for metadata. Extreme HPC environments require that MarFS scale POSIX namespace metadata to trillions of files, and billions of files in a single directory, while storing the data in efficient, massively parallel ways in industry-standard, erasure-protected, cloud-style object stores.”
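As a rough illustration of the data/metadata split described in the quote, here is a minimal Python sketch with invented class and method names (ToyMarFSLikeStore is not the MarFS interface): metadata stubs live in an ordinary POSIX directory tree, while file contents go to an object store keyed by opaque IDs.

```python
import os
import json
import uuid

class ToyMarFSLikeStore:
    """Illustrative sketch only (not the MarFS API): metadata is kept as small
    stub files in a POSIX tree, while file contents are pushed to a (here
    simulated) object store keyed by opaque object IDs."""

    def __init__(self, metadata_root, object_store):
        self.metadata_root = metadata_root   # POSIX namespace for metadata
        self.object_store = object_store     # dict stands in for a cloud object store
        os.makedirs(metadata_root, exist_ok=True)

    def write(self, path, data: bytes):
        # Data goes to the object store under a generated key...
        obj_id = uuid.uuid4().hex
        self.object_store[obj_id] = data
        # ...while the POSIX tree only holds a small metadata stub.
        stub_path = os.path.join(self.metadata_root, path.lstrip("/"))
        os.makedirs(os.path.dirname(stub_path), exist_ok=True)
        with open(stub_path, "w") as stub:
            json.dump({"object_id": obj_id, "size": len(data)}, stub)

    def read(self, path) -> bytes:
        stub_path = os.path.join(self.metadata_root, path.lstrip("/"))
        with open(stub_path) as stub:
            meta = json.load(stub)
        return self.object_store[meta["object_id"]]

# Example usage:
store = ToyMarFSLikeStore("/tmp/marfs_demo_md", object_store={})
store.write("/project/run1/output.dat", b"simulation results")
print(store.read("/project/run1/output.dat"))
```

Scaling this idea to trillions of files is the hard part MarFS addresses; the sketch only shows why separating the namespace from the data lets each side use the storage technology best suited to it.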

Trinity Supercomputer Wiring Reconfiguration Saves Millions

LANL reports that a moment of inspiration during a wiring diagram review saved more than $2 million in material and labor costs for the Trinity supercomputer at Los Alamos National Laboratory.

Gary Grider Presents: HPC Storage and IO Trends and Workflows

“Trends in computer memory/storage technology are in flux, perhaps more so now than in the last two decades. Economic analysis of HPC storage hierarchies has led to new tiers of storage being added to the next fleet of supercomputers, including Burst Buffers (in-system solid state storage) and Campaign Storage. This talk will cover the background that brought us these new storage tiers and postulate what the economic crystal ball looks like for the coming decade. Further, it will suggest methods of leveraging HPC workflow studies to inform the continued evolution of the HPC storage hierarchy.”
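To make the economic argument concrete, the toy Python model below compares, for each tier, the cost of buying enough capacity against the cost of buying enough bandwidth. All tier names, prices, and requirements are invented placeholders, not figures from the talk; the point is only that some tiers are priced by capacity and others by bandwidth, which is what motivates adding tiers such as burst buffers and campaign storage.

```python
# Toy cost model for an HPC storage hierarchy (illustrative only; the dollar
# figures and tier parameters are invented, not taken from Grider's talk).

tiers = {
    # name: (capacity needed in PB, $ per TB, bandwidth needed in TB/s, $ per TB/s)
    "burst buffer (in-system SSD)": (5,    400, 10,   1_000_000),
    "parallel file system":         (50,   100, 2,    3_000_000),
    "campaign storage":             (300,  40,  0.3,  5_000_000),
    "archive (tape)":               (1000, 10,  0.05, 8_000_000),
}

for name, (cap_pb, usd_per_tb, bw_tbs, usd_per_tbs) in tiers.items():
    capacity_cost = cap_pb * 1000 * usd_per_tb   # PB -> TB
    bandwidth_cost = bw_tbs * usd_per_tbs
    # A tier is "capacity-driven" or "bandwidth-driven" depending on which
    # requirement dominates its price -- the kind of distinction that
    # motivates splitting the hierarchy into more tiers.
    driver = "bandwidth" if bandwidth_cost > capacity_cost else "capacity"
    print(f"{name}: ${capacity_cost + bandwidth_cost:,.0f} total ({driver}-driven)")
```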

HPC News Bytes for Oct. 7, 2015

Sometimes the inbox for HPC news fills up faster than we can handle. In an effort to keep up, we’ve compiled noteworthy news into a Jeopardy-style Speed Round that phrases topics in the form of a question.

Video: Looking to the Future of NNSA Supercomputing

In this video, Douglas P. Wade from NNSA describes the computational challenges the agency faces in the stewardship of the nation’s nuclear stockpile. As the Acting Director of the NNSA Office of Advanced Simulation and Computing, Wade looks ahead to future systems on the road to exascale computing.