Lustre 2.5 Performance Improvements with Large I/O Patches and More

In this video from LUG 2014, Hitoshi Sato from TITECH and Shuichi Ihara from DDN present: “Lustre 2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE.”

E10: Collective I/O for Exascale I/O Intensive Applications

“I/O problems in HPC have not been resolved so far, and the future of exascale is full of uncertainties. The good news is that we have detected an appetite for change both in the storage and in the application community. In addition, a wilderness of new hardware will be arriving, such as deeper hierarchies of storage devices, storage class memories, and large numbers of cores per node. This new hardware may contribute parts of the solution but will also bring new issues to the forefront, requiring storage and application architects to revisit some ideas used so far.”

Video: An Efficient Distributed Burst Buffer System for Lustre

In this video from LUG 2014, Bradley Settlemyer from Oak Ridge National Laboratory presents: An Efficient Distributed Burst Buffer System for Lustre.

Video: Intel Lustre File Level Replication

In this video from LUG 2014, Jinshan Xiong from Intel presents: Intel Lustre File Level Replication.

Progress Report on Efficient Integration of Lustre and Hadoop/YARN

Using Hadoop with Lustre provides several benefits. Lustre is a true parallel file system, so temporary and intermediate data can be stored in parallel across multiple nodes, reducing the load on any single node. Lustre also has its own network protocol, which is more efficient for bulk data transfer than HTTP. And because Lustre is a shared file system, every client sees the same file system image, so hardlinks can be used to avoid transferring data between nodes.
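The hardlink trick works because, on a shared file system, two directory entries can point at the same inode, so "handing" a file from one task to another costs a metadata operation rather than a copy. A minimal sketch of the idea (the paths and file names here are hypothetical, and a local temp directory stands in for a real Lustre mount such as /lustre/scratch):

```shell
# Simulate a shared Lustre directory with a local temp dir.
WORKDIR=$(mktemp -d)

# A hypothetical map task writes an intermediate file.
echo "map output" > "$WORKDIR/map_part_0"

# A reduce task (possibly on another node, seeing the same namespace)
# links the file into its own input directory instead of copying it.
mkdir "$WORKDIR/reduce_in"
ln "$WORKDIR/map_part_0" "$WORKDIR/reduce_in/part_0"

# Both names resolve to the same inode; no data bytes were moved.
stat -c '%i %h' "$WORKDIR/map_part_0" "$WORKDIR/reduce_in/part_0"
```

On a real cluster, both paths would live under the same Lustre mount point on different clients; the link count of 2 confirms one copy of the data is shared by both names.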

Xyratex Update from LUG 2014

“We wanted to send out a special thank you to Seagate for showing such a strong commitment to Lustre. Their announcement at LUG – intent to transfer ownership of lustre.org to the community – is great news.”

Video: Lustre Releases Presentation from LUG 2014

“The Lustre community has banded together to work on the development of the Lustre source code. As part of that effort, we regularly discuss the roadmap for major Lustre releases. We have developed a schedule of major releases that occur every six months.”

A Vision of Storage for Exascale Computing

“Back in July 2012, Whamcloud was awarded the Storage and I/O Research & Development subcontract for the Department of Energy’s FastForward program. Shortly afterward, the company was acquired by Intel. Nearly completed now, the two-year contract scope includes key R&D necessary for a new object storage paradigm for HPC exascale computing, and the developed technology will also address next-generation storage mechanisms required by the Big Data market.”

Video: Transitioning OpenSFS to a Community Nexus

In this video from LUG 2014, Galen Shipman from OpenSFS presents: Transitioning OpenSFS to a Community Nexus.

Lustre Plus ZFS at LLNL: Production Plans and Best Practices

“LLNL’s largest supercomputer is paired with a 55-petabyte file system, known as Grove, that stores vast quantities of simulation data. Grove must transfer information to and from Sequoia at a minimum of 500 gigabytes per second. To support Grove’s high storage capacity and bandwidth requirements, LC software developers have engaged in a multi-year project to replace much of Grove’s Lustre foundation with ZFS.”