Video: Lustre Future Features

In this video from LUG 2014, Andreas Dilger from Intel presents: Lustre Future Features. The talk features an overview of the Feature Submission Process and a description of features proposed for Lustre 2.7 and future releases.

Integrating Array Management into Lustre

In this video from LUG 2014, Roger Ronald from System Fabric Works presents: Integrating Array Management into Lustre. “Intel Enterprise Edition for Lustre Plug-ins address a significant adoption barrier by improving ease of use. Now, System Fabric Works has implemented a NetApp plug-in for Intel EE Lustre, and additional plug-ins for storage, networks, and servers are being encouraged.”

Brent Gorda on What’s New with Lustre

In this video from LUG 2014, Brent Gorda from the Intel High Performance Data Division provides an update on what the Whamcloud team has been up to over the past two years as part of Intel. He then shares his views on the news that Seagate has donated Lustre.org back to the community. Finally, he wraps up with an assessment of the State of Lustre and its progress in the enterprise.

Video: Lustre Client IO Performance Improvements

In this video from the Lustre User Group 2014, Andrew Uselton from Intel presents: Lustre Client IO Performance Improvements.

Interview: OpenSFS Welcomes Lustre.org Back into User Community

In this video from LUG 2014, Galen Shipman from OpenSFS, Ken Claffey from Xyratex, and Hugo Falter from EOFS discuss the recent announcement that Seagate has donated Lustre.org back to the user community. “From my perspective, this move is a great indication that Seagate intends to be an active, contributing member of the Lustre community. After years of upheaval, Lustre users can now leave the politics behind and focus on the world’s toughest computing challenges.”

Lustre Client Performance Comparison and Tuning (1.8.x to 2.x)

In this video from the Lustre User Group 2014, John Fragalla from Xyratex presents: Lustre Client Performance Comparison and Tuning (1.8.x to 2.x).

OpenSFS Publishes First Lustre Annual Report

Meghan McClelland from Xyratex presents the new Lustre Annual Report to LUG 2014.

“The annual Lustre report is segmented into three sections: Market Dynamics; The State of Lustre in 2014; and Intersect360 Research Analysis. The report also looks at market trends around file systems, exploring the past, present and future of Lustre in Big Data.”

Moving Lustre Forward — What We’ve Learned and What’s Coming

“Today, HPC has expanded beyond just national laboratories and research institutes to become a key technology for enterprises of all sizes as they seek to develop improved products or entirely new industries. Getting the maximum performance from HPC and data-intensive applications requires fast and scalable storage software. Simply put, today’s HPC workloads require storage infrastructure that scales endlessly and delivers unmatched I/O levels.”

Seagate Donates Lustre.org Back to the User Community

Today at LUG 2014, Ken Claffey from Xyratex/Seagate announced that the company is donating the Lustre.org domain back to the user community. “From my perspective, this move is a great indication that Seagate intends to be an active, contributing member of the Lustre community. After years of upheaval, Lustre users can now leave the politics behind and focus on the world’s toughest computing challenges.”

OpenSFS Benchmarking Working Group Releases I/O Characterization Report

“The OpenSFS Benchmarking Working Group (BWG) was created with the intent of defining an I/O benchmark suite to satisfy the requirements of the scalable parallel file system users and facilities. The first step toward this end was identified as characterization of I/O workloads, from small- to very large-scale parallel file systems, deployed at various high-performance and parallel computing (HPC) facilities and institutions. The characterization will then drive the design of the I/O benchmarks that emulate these workloads.”