Lustre Plus ZFS at LLNL: Production Plans and Best Practices


In this video from LUG 2014, Marc Stearman of Lawrence Livermore National Laboratory presents "Lustre Plus ZFS at LLNL: Production Plans and Best Practices."

LLNL's largest supercomputer, Sequoia, is paired with a 55-petabyte file system known as Grove that stores vast quantities of simulation data. Grove must transfer data to and from Sequoia at a minimum of 500 gigabytes per second. To meet Grove's capacity and bandwidth requirements, Livermore Computing (LC) software developers have undertaken a multi-year project to replace much of Grove's Lustre foundation with ZFS.

See more talks at the LUG 2014 Video Gallery.