Lustre Plus ZFS at LLNL: Production Plans and Best Practices

“LLNL’s largest supercomputer is paired with a 55-petabyte file system, known as Grove, that stores vast quantities of simulation data. Grove must transfer information to and from Sequoia at a minimum of 500 gigabytes per second. To support Grove’s high storage capacity and bandwidth requirements, LC software developers have engaged in a multi-year project to replace much of Grove’s Lustre foundation with ZFS.”

Slidecast: Lustre Over ZFS on Linux

Josh Judd from Warp Mechanics describes how the company delivers Lustre over ZFS on Linux. “No single technology solves all problems faced in today’s complex world. WARP Mechanics’ philosophy is to customize the many and varied systems into the exact set of solutions required to address the problems. WARP Mechanics leverages tried-and-true technologies from the most advanced systems and removes the complexity, delivering customized turnkey solutions.”

Aeon Computing Ties HPC All Together at SC13

Aeon Computing was recently selected by the San Diego Supercomputer Center to build and deploy a 7-petabyte Lustre parallel file system as part of the new petascale-level Comet supercomputer. The performance of the 7 PB file system will exceed 200 gigabytes per second.

Lustre on ZFS at SSEC

With the release of Lustre 2.4, support for ZFS-based Lustre servers has arrived. Historically, Lustre has only supported ext4/ldiskfs servers, and while those servers have performed well, they suffer from a number of well-known limitations. Extending Lustre to use a next-generation file system like ZFS allows it to achieve greater levels of scalability and introduces new functionality, such as checksumming and copy-on-write snapshots.
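To make this concrete, here is a minimal sketch of how an administrator might format a ZFS-backed object storage target with Lustre 2.4 or later. The file system name, MGS node, pool name, and device paths are all placeholders, and the exact options should be checked against the Lustre manual for the version in use.

```shell
# Hypothetical setup for a ZFS-backed Lustre OST (Lustre 2.4+).
# fsname, mgsnode, pool, and device names below are assumptions.

# Format an OST on a new ZFS pool built from a raidz2 vdev of four disks;
# mkfs.lustre creates the pool and dataset when given a vdev specification.
mkfs.lustre --ost --backfstype=zfs --fsname=testfs --index=0 \
    --mgsnode=mgs@tcp0 ostpool/ost0 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Mount the target to bring it into service.
mkdir -p /mnt/lustre/ost0
mount -t lustre ostpool/ost0 /mnt/lustre/ost0
```

Because the backing store is a ZFS pool, standard tools such as `zpool status` can then be used to monitor the health of the underlying vdevs, independently of Lustre itself.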