In this panel discussion from LUG 2014, Lustre users predict 2020 HPC Platform Architectures and Their Impact on Storage. “What will the future of HPC storage look like in the National Labs? This panel discussion suggests that storage will be vectoring off in some very new and interesting directions.”
“LLNL’s largest supercomputer is paired with a 55-petabyte file system, known as Grove, that stores vast quantities of simulation data. Grove must transfer information to and from Sequoia at a minimum of 500 gigabytes per second. To support Grove’s high storage capacity and bandwidth requirements, LC software developers have engaged in a multi-year project to replace much of Grove’s Lustre foundation with ZFS.”
In this video from LUG 2014, Roger Ronald from System Fabric Works presents: Integrating Array Management into Lustre. “Intel Enterprise Edition for Lustre Plug-ins address a significant adoption barrier by improving ease of use. Now, System Fabric Works has implemented a NetApp plug-in for Intel EE Lustre and additional plug-ins for storage, networks, and servers are being encouraged.”
In this video from LUG 2014, Brent Gorda from the Intel High Performance Data Division provides an update on what the Whamcloud team has been up to over the past two years as part of Intel. He then shares his views on the news that Seagate has donated Lustre.org back to the community. Finally, he wraps up with an assessment of the State of Lustre and its progress in the enterprise.
In this video from LUG 2014, Galen Shipman from OpenSFS, Ken Claffey from Xyratex, and Hugo Falter from EOFS discuss the recent announcement that Seagate has donated Lustre.org back to the user community. “From my perspective, this move is a great indication that Seagate intends to be an active, contributing member of the Lustre community. After years of upheaval, Lustre users can now leave the politics behind and focus on the world’s toughest computing challenges.”