Unleashing the performance and scalability of the Lustre Parallel File System

The Dell Storage for HPC with Intel EE for Lustre Solution helps bring the benefits of an HPC file system to a broad range of enterprise organizations. Dell's enterprise-ready Lustre parallel file storage solution delivers a higher level of computing power and throughput, making the information and insights derived from big data and compute-intensive applications, such as advanced modeling, simulation, and data analysis, available to a wider audience of enterprise users. This HPC storage solution, backed by Dell and Intel, offers the potential to drive innovation, deliver higher-quality products and designs, and sustain competitive advantage. Download this white paper to read more.

Lustre Dell Storage for HPC with Intel

The following sections of this paper describe the Lustre file system and the Dell Storage for HPC with Intel EE for Lustre solution, followed by performance analysis and conclusions. Appendix A provides a benchmark command reference.

Lustre Parallel File System

Lustre is a complex body of code that takes time to enhance and expand with new capabilities. Thus, the Lustre community of developers remains committed not only to keeping it the fastest and most scalable open source parallel file system available, but also to adding the rich enhancements the user community desires.

Lustre Solution with the Dell MD3460

This paper demonstrates that, once optimized for large I/O throughput, the Dell MD3460 / Intel Enterprise Edition for Lustre (IEEL) solution provides storage density and performance characteristics well aligned with the requirements of the mid-to-high-end research storage market. After the throughput tuning was applied, the I/O performance of the Dell storage brick doubled, producing single-brick IOR client performance maxima of 4.5 GB/s for both reads and writes. Single-rack configurations can thus be implemented that provide 2.1 PB of usable storage and 36 GB/s read/write performance. A capacity-optimized configuration is also illustrated, providing a solution with a cost reduction of roughly 35% relative to the performance-optimized solution.

These bulk performance and density metrics place the Dell / IEEL solution at the high end of the solution space, yet within the commodity IT supply chain model. This provides the price/performance step change that the scientific, technical, and medical research computing communities need to help close the gap between demand and budget that has emerged from the huge growth in demand for both storage capacity and performance. It marks a turning point in the commoditization of research storage, echoing the commodity revolution seen in the research computing market with the advent of HPC clusters. Many large-scale HPC customers find it difficult to architect HPC and data analysis systems with the required capacity, performance, and cost parameters; commodity high-end parallel file systems such as the one described in this paper dramatically improve this situation.
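The single-brick throughput figures above come from IOR, the standard MPI-based parallel I/O benchmark. As a rough sketch only (the parameters below are illustrative, not the values from the paper's Appendix A), a throughput-oriented IOR run might look like this; the script falls back to printing the command when IOR is not installed:

```shell
# Hypothetical IOR throughput run (illustrative parameters, not the
# paper's Appendix A command): 32 MPI ranks, file-per-process (-F),
# 1 MiB transfers (-t), 4 GiB written per rank (-b), write then read
# (-w -r), with an fsync on close (-e) so cached data is flushed
# before the timing window ends.
IOR_CMD="mpirun -np 32 ior -a POSIX -w -r -e -F -t 1m -b 4g -o /mnt/lustre/ior_test"

if command -v ior >/dev/null 2>&1; then
    $IOR_CMD
else
    # Outside an MPI/Lustre test environment, just show the command.
    echo "$IOR_CMD"
fi
```

Large transfer sizes and file-per-process access patterns are typical choices when measuring peak sequential bandwidth, which is what the tuned storage brick numbers reflect.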

Lustre File System High Performance Guide

Today, the Lustre file system runs entirely on Linux and uses kernel-based server modules to deliver the expected performance. Lustre supports many types of clients and runs on almost any modern hardware. Scalability is one of Lustre's most important features: it can be used to create a single namespace of what appears to be almost limitless capacity.
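Much of that scalability comes from striping: a file's data is spread across multiple object storage targets (OSTs) so that clients can read and write the pieces in parallel. A minimal sketch, assuming a mounted Lustre client and illustrative stripe settings (the directory path and values are hypothetical):

```shell
# Stripe new files in this directory across 8 OSTs with a 1 MiB stripe
# size (illustrative values; requires the lfs tool on a Lustre client).
STRIPE_DIR=/mnt/lustre/projects/data

if command -v lfs >/dev/null 2>&1; then
    lfs setstripe -c 8 -S 1m "$STRIPE_DIR"   # -c stripe count, -S stripe size
    lfs getstripe "$STRIPE_DIR"              # verify the resulting layout
else
    echo "lfs not found: run this on a mounted Lustre client"
fi
```

Wider stripe counts let a single large file draw on the bandwidth of more servers at once, which is how one namespace can keep scaling as OSTs are added.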

Lustre Software for Intel Cloud

Even the largest HPC clusters can experience degradation due to poor I/O performance. This occurs as massive amounts of data and increasingly large individual files combine with limited disk drive performance to cause significant bottlenecks. Lustre is an open source parallel file system that improves the overall scalability and performance of HPC clusters. It provides cluster client nodes with shared access to file system data in parallel, greatly increasing throughput and performance. Lustre is the most widely used HPC storage system in the world, with parallel storage capabilities utilized by over 50% of HPC deployments, and can scale to tens of thousands of clients.

InsideHPC Guide to Lustre Solutions for Business

In this Guide, we take a look at what an HPC solution like Lustre can deliver for a broad community of business and commercial organizations struggling with the challenge of big data and demanding storage growth.

OpenSFS Releases Lustre 2.8.0 for LUG 2016 Conference

Today the Open Scalable File Systems (OpenSFS) community announced the release of Lustre 2.8.0, the fastest and most scalable parallel file system. OpenSFS, founded in 2010 to advance Lustre development, is the premier non-profit organization promoting the use of Lustre and advancing its capabilities through coordinated releases of the Lustre file system.

Podcast: Intel Moves HPC Forward with Broadwell Family of Xeon Processors

In this podcast, Rich Brueckner interviews Hugo Saleh, Director of Marketing for the Intel High Performance Computing Platform Group. They discuss the new Intel® Xeon® processor E5-2600 v4 product family, based upon the Broadwell microarchitecture and the first processor within the Intel® Scalable System Framework (Intel® SSF). Hugo describes how the new processors improve HPC performance and examines the impact of Intel® SSF on vastly different procurements, ranging from the massive 200-petaflops Aurora system to small and large enterprise clusters as well as scientific systems.

High-Performance Lustre* Storage Solution Helps Enable the Intel® Scalable System Framework

“Intel has incorporated Intel Solutions for Lustre Software as part of the Intel SSF because it provides the performance to move data and minimize storage bottlenecks. Lustre is also open source based, and already enjoys a wide foundation of deployments in research around the world, while gaining significant traction in enterprise HPC. Intel’s version of Lustre delivers a high-performance storage solution in the Intel SSF that next-generation HPC needs to move toward the era of Exascale.”