Special Feature: Slideshow from #LUG2015

We had a great time covering the Lustre User Group meeting in Denver last week. In this special feature, we added a blues music track to the OpenSFS slideshow from the event. Enjoy!

Video: Lustre HSM in the Cloud

“The combination of the ephemeral nature of the cloud and directly addressable archives such as S3 suggests novel methods for using the Lustre HSM interface. Persistent data sets in the cloud need to be managed independently from an ephemeral filesystem and compute resources. Managing datasets in the cloud could, for example, involve importing data from Amazon’s S3 back into a freshly-created Lustre filesystem, performing I/O-intensive computations, and then persisting the datasets back to S3 before terminating the filesystem and compute resources. Alternatives for archive formats will also be discussed. AWS S3 will be used for concrete examples, but the general methods should be applicable to other cloud environments as well.”

Video: Shared File Performance in Lustre – Challenges and Solutions

“Large-scale HPC I/O is usually done either with a file per process or to a single shared file. Single-shared-file I/O does not scale well in Lustre compared to file per process. This presentation from Cray’s Patrick Farrell will give details, examine the reasons for this, and explore existing and potential solutions. Group locks and a new feature, lock ahead, will be discussed in the context of strided I/O.”

Video: Application-optimized Lustre Solutions for Big-Data Workflows

In this video from LUG 2015 in Denver, Robert Triendl from DDN presents: Application-optimized Lustre Solutions for Big-Data Workflows.

Video: Defending the Planet with Lustre

“Pleiades, one of the world’s most powerful supercomputers, represents NASA’s state-of-the-art technology for meeting the agency’s supercomputing requirements, enabling NASA scientists and engineers to conduct modeling and simulation for NASA missions. Powered by Lustre, this distributed-memory SGI ICE cluster is connected with InfiniBand in a dual-plane hypercube technology.”

Video: Seagate Lustre Update

In this video from LUG 2015 in Denver, Peter Bojanic from Seagate presents: Seagate Lustre Update. “Seagate powers four of the five 1 TB/sec filesystems in the world today.”

Video: There and Back Again – The Battle of Lustre at LANL

“A little over a year ago, LANL’s HPC Division purchased and fielded our first general-purpose InfiniBand-based Lustre parallel file system. This new Lustre deployment, the first of several similar planned deployments, gave us the opportunity to design a new storage backbone from the ground up and to gain in-depth experience with, and insight into, Lustre technology in order to facilitate the installation and configuration of future systems.”

Video: SDSC’s Data Oasis Gen II: ZFS, 40GbE, and Replication

“The second generation of SDSC’s Data Oasis Lustre storage is coming online to support Comet, a new XSEDE cluster targeted at the long tail of science. The servers have been designed with Lustre on ZFS in mind, and also update the network to use bonded 40GbE interfaces. The raw storage totals 7.7 PB and is again based on commodity hardware provided by Aeon Computing, maintaining our focus on cost.”

Video: Cray’s Storage History and Outlook – Lustre+

In this video from LUG 2015 in Denver, Jason Goodman from Cray presents: Cray’s Storage History and Outlook – Lustre+. “As a leader in open systems and parallel file systems, Cray builds on open source Lustre to unlock any industry-standard x86 Linux compute cluster using InfiniBand or 10/40 GbE utilizing proven Cray storage architectures.”

Video: OpenSFS Update

In this video from LUG 2015, Charlie Carroll from Cray presents an OpenSFS update. “An important new trend is Lustre file system usage in the enterprise environment. Through community input and new or existing working groups, OpenSFS is moving to support this trend as required by the community.”