

Lustre and Persistent Storage

This is the fifth in a series of articles on “6 Things You Should Know About Lustre”. Other topics cover Lustre in the enterprise, the cloud, financial services, next-generation storage, and the role Intel® Solutions for Lustre play in the Intel® Scalable Systems Framework.

Lustre was originally developed as the fastest scratch file system supercomputer centers could get for HPC workloads, but over the years it has matured into an enterprise-class parallel file system supporting mission-critical workloads. Unfortunately, even though Lustre has become extremely attractive to enterprises and has been adopted by IT departments across multiple industries, some naysayers still claim that Lustre is just a scratch file system. We in the Lustre community see quite a different picture.

Caption: Comet, SDSC’s petascale supercomputer, offers a breakthrough storage technology based on the Intel® Foundation Edition of Lustre* Software combined with OpenZFS. (Photo courtesy SDSC)

Lustre—Matured to Become Enterprise-Grade

Performance has always been Lustre’s key strength. The Oak Ridge Leadership Computing Facility reports 1 TB/s on Spider II, a Lustre-based storage system[1]. Comet at the San Diego Supercomputer Center delivers 300 GB/s while supporting thousands of users[2]. That performance is why, over the last decade, the fastest supercomputers in the world have chosen Lustre. And Lustre has made many enterprise-grade advances without losing its performance edge.

Lustre’s High Availability Solution: An Established, Proven Design Pattern

Business continuity is a key driver for any mission-critical, enterprise computing solution, and different file systems use different mechanisms to maintain data availability. The Hadoop Distributed File System (HDFS) and IBM’s General Parallel File System (GPFS) replicate data across multiple disks. Lustre instead uses a well-known and well-understood high availability (HA) design pattern built on cooperative HA cluster pairs: if a server fails, the storage targets on that server migrate to its paired server. It’s a mature design pattern and a common model in IT centers around the world.
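The cooperative-pair pattern can be sketched with a toy model. This is not Lustre code; the server and target names (`oss1`, `ost0`, etc.) are hypothetical, and the sketch only shows the bookkeeping: each object storage target (OST) has a primary server, and on failure every target migrates to the HA partner.

```python
# Toy model (not Lustre code) of cooperative HA cluster pairs:
# each storage target is served by a primary server; when a server
# fails, its targets migrate to the paired server.

ha_pairs = {"oss1": "oss2", "oss2": "oss1"}  # hypothetical paired servers
targets = {"ost0": "oss1", "ost1": "oss1",   # hypothetical target placement
           "ost2": "oss2", "ost3": "oss2"}

def fail_over(failed_server):
    """Reassign every target on the failed server to its HA partner."""
    partner = ha_pairs[failed_server]
    for ost, server in targets.items():
        if server == failed_server:
            targets[ost] = partner

fail_over("oss1")
print(targets)  # every target is now served by oss2
```

Because the targets themselves live on shared storage, only ownership moves during failover; no data is copied.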

Lustre’s HA design pattern does not compromise performance. Some replication schemes continuously degrade throughput and latency in the file system. Synchronous replication is especially prone to latency degradation, because an application must wait until every target acknowledges that the data has been written before it can continue execution. The HA method used in Lustre deployments makes more effective use of the available network bandwidth.
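The latency point can be made concrete with a simple model. The acknowledgment times below are illustrative assumptions, not measurements: a synchronously replicated write completes only when the slowest replica acknowledges, while a single-target write completes as soon as that one target does.

```python
# Toy latency model (illustrative numbers, not measured data):
# synchronous replication must wait for the slowest replica's ack.

def sync_replicated_write_ms(replica_acks_ms):
    """Completion time when every replica must acknowledge."""
    return max(replica_acks_ms)

def single_target_write_ms(ack_ms):
    """Completion time when only one target must acknowledge."""
    return ack_ms

acks = [2.0, 2.5, 9.0]  # hypothetical per-replica ack times in ms
print(sync_replicated_write_ms(acks))   # 9.0 -- the slowest replica dominates
print(single_target_write_ms(acks[0]))  # 2.0
```

One slow or congested replica is enough to stall every synchronous write, which is why the tail of the acknowledgment distribution, not the average, sets the application’s pace.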

Any cost evaluation of a design solution must consider Total Cost of Ownership (TCO) over the expected life cycle of the system. Lustre’s HA design pattern uses storage more efficiently than replication-based solutions. Combined with Lustre’s overall performance and its impact on business operations, the overall TCO and the cost of data reliability can be lower than with a coarse-grained replication solution, where every terabyte of data requires at least one, and possibly two, additional terabytes for replicas. That is a potentially costly way to maintain data reliability.
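The capacity side of that comparison is simple arithmetic. The figures below are back-of-the-envelope assumptions, not vendor numbers: n-way replication needs n bytes of raw storage per usable byte, while the HA shared-storage pattern needs roughly one byte plus the parity overhead of the underlying RAID.

```python
# Back-of-the-envelope raw-capacity comparison (illustrative
# assumptions, not vendor figures).

def raw_tb_replication(usable_tb, copies):
    """n-way replication stores n full copies of every byte."""
    return usable_tb * copies

def raw_tb_ha_raid(usable_tb, raid_overhead=0.25):
    """HA pairs over parity RAID; ~25% overhead is an assumed figure."""
    return usable_tb * (1 + raid_overhead)

usable = 1000  # 1 PB of usable capacity
print(raw_tb_replication(usable, 3))  # 3000 TB raw for 3-way replication
print(raw_tb_ha_raid(usable))         # 1250.0 TB raw behind HA pairs
```

Under these assumed numbers, the replicated design buys more than twice the raw disk for the same usable capacity, before counting the extra power, racks, and network to feed it.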

Innovative Replication on the Horizon

For some deployments, replication is a requirement. But not all workloads are the same: instead of being locked into a method chosen at file system setup that restricts how they store their data, users should be able to choose, at runtime, the option that best meets their application’s or project’s requirements. True to Lustre’s history of innovation, its developers are working on a replication solution that is both flexible and innovative. The first step is a strategy for arbitrary file layouts that can be chosen at runtime. That will give users the flexibility to pick the best file layout for an application when they are ready to run it, and then to extend that layout to replicate data across the file system.

For example, where throughput matters most, applications will likely benefit from a striped file layout, equivalent to RAID 0 and essentially the storage structure Lustre employs today. However, data that is vital to business operations, or upon which human life depends, may set different requirements on persistence. For such applications, loss of data can have immediate negative consequences, placing the emphasis on data availability in the file system.
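The RAID-0-style layout described above can be sketched as follows. The stripe size and OST count are hypothetical, and this is a simplification of Lustre’s real layout machinery: a file is cut into fixed-size stripes that are assigned round-robin across the chosen object storage targets.

```python
# Sketch of a RAID-0-style striped layout: file data is written in
# fixed-size stripes, round-robin across the object storage targets
# (OSTs). Stripe size and OST count here are hypothetical.

def stripe_layout(file_size, stripe_size, ost_count):
    """Map each stripe-sized chunk of a file to an OST index."""
    chunks = -(-file_size // stripe_size)  # ceiling division
    return [chunk % ost_count for chunk in range(chunks)]

# A 10 MB file with 1 MB stripes over 4 OSTs: chunks cycle over OSTs 0..3,
# so reads and writes can hit all four targets in parallel.
print(stripe_layout(10 * 2**20, 1 * 2**20, 4))
# -> [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

Because consecutive stripes land on different targets, aggregate bandwidth scales with the number of OSTs, which is exactly the throughput-first trade-off the striped layout makes.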

Lustre developers have focused on adding the reliability and availability features that enterprise users have asked for, including a proven HA methodology. These features have made Lustre very attractive to businesses where performance, reliability, and availability are among IT’s highest expectations. Lustre has matured significantly over the years; it is no longer just a scratch file system. Replication is on the roadmap, and innovation remains central to Lustre’s approach to it.

Learn more about Intel® Solutions for Lustre Software

[1] http://users.nccs.gov/~yk7/papers/SC14-SOP-Spider.pdf

[2] http://www.sdsc.edu/services/hpc/parallel_file_systems.html

*Other trademarks and brands may be the property of others.
