Video: Matching the Speed of SGI UV with Multi-rail LNet for Lustre

Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR InfiniBand driving to 200 Gb/s, or 25 GB/s, per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100 GB/s.”
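For scale: 200 Gb/s works out to 25 GB/s, so sustaining over 100 GB/s to a single UV node implies aggregating at least four such connections in parallel, which is exactly the kind of striping across links that multi-rail LNet is meant to enable.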

Interview: Ken Claffey on Seagate HPC Innovation On Deck at ISC 2016

“One of the benefits of our ClusterStor modular architecture is its flexibility – we can deliver very comparable performance with either Lustre or Spectrum Scale on the same extensible architecture. There are two key reasons for that balance of performance and flexibility. Firstly, we have a unique scale-out storage architecture with a distributed processing model, meaning you’re not tied to centralized legacy RAID controller hardware. Secondly, there is no proprietary hardware or RAID firmware in the system. All the software runs in a standard Linux environment, so we are able to take our software stack anywhere, and it is really agnostic as to whether we are running with Lustre or Spectrum Scale.”

Lustre 101: A Quick Overview

In this video from LUG 2016 in Portland, Steve Simms from Indiana University presents: Lustre 101 – A Quick Overview. Now in its 14th year, the Lustre User Group is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies.

Video: Lustre 2.9 and Beyond

In this video from LUG 2016, Andreas Dilger from Intel presents: Lustre 2.9 and Beyond. “I do this presentation every year and I think it is important to focus on features that are going to be available in the short term.”

Video: Lustre Community Release Update

Peter Jones from Intel presented this talk at LUG 2016 in Portland. “The OpenSFS Lustre Working Group (LWG) is the place where the participants of OpenSFS come together to coordinate their software development efforts for the Lustre high-performance, open source, parallel filesystem. This includes planning and the roadmap for community releases of Lustre.”

Live Report from LUG 2016 Day 3

In this special guest feature, Ken Strandberg offers this live report from Day 3 of the Lustre User Group meeting in Portland. “Rick Wagner from the San Diego Supercomputer Center presented progress on his team’s replication tool that allows copying large blocks of storage from object storage to their disaster recovery durable storage system. Because rsync is not a tool for moving massive amounts of data, SDSC created recursive worker services running in parallel, so that each worker handles a directory or group of files. The tool uses available Lustre clients, a RabbitMQ server, Celery scripts, and bash scripts.”
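The report does not include SDSC’s code, but the pattern it describes is easy to illustrate. Below is a minimal sketch in Python using Celery with a RabbitMQ broker, where each worker task handles one directory and queues a new task for each subdirectory it finds. The broker URL, paths, and task name are placeholders, not SDSC’s actual implementation.

```python
# Sketch of the fan-out replication pattern described above:
# one Celery task per directory, distributed over RabbitMQ workers.
import os
import shutil

from celery import Celery

# RabbitMQ acts as the message broker that coordinates the workers.
app = Celery("replicate", broker="amqp://guest@localhost//")

@app.task
def replicate_dir(src, dst):
    """Copy the files in one directory, then dispatch a separate
    task for each subdirectory so idle workers can proceed in parallel."""
    os.makedirs(dst, exist_ok=True)
    for entry in os.scandir(src):
        target = os.path.join(dst, entry.name)
        if entry.is_dir(follow_symlinks=False):
            # Recurse by queuing rather than calling: each subdirectory
            # becomes an independent unit of work on the queue.
            replicate_dir.delay(entry.path, target)
        elif entry.is_file(follow_symlinks=False):
            shutil.copy2(entry.path, target)

# Seed the queue with the top-level directory, e.g.:
#   replicate_dir.delay("/lustre/source", "/archive/destination")
```

Recursing by enqueuing rather than by direct function calls is what lets many workers walk a large directory tree concurrently, which is the advantage this design has over a single rsync process.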

Live Report from LUG 2016 Day 1

“The Lustre User Group (LUG) 2016 conference is well under way. The morning of the first day was spent looking at Lustre today and tomorrow, and at security developments in the code. Peter Jones and Andreas Dilger described what is in the newest release, Lustre 2.8, and what will come in Lustre 2.9, targeted for release this fall, and beyond. These features include growing support for ZFS, security, multi-rail LNet, progressive file layouts, project quotas, and more.”