Lustre Parallel File System

Lustre is a complex code base that takes time to enhance and extend with new capabilities. The Lustre developer community therefore remains committed not only to keeping it the fastest and most scalable open source parallel file system available, but also to adding the rich enhancements the user community desires.

Oil and Gas Exploration Using HPC Storage

Finding oil and gas has always been a tricky proposition, given that reserves are hidden underground and, as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable, which has driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available.

Lustre Solution with the Dell MD3460

This paper demonstrates that, once optimized for large I/O throughput, the Dell MD3460 / Intel Enterprise Edition Lustre (IEEL) solution provides storage density and performance characteristics that are well aligned with the requirements of the mid-to-high-end research storage market. After the throughput tuning was applied, the I/O performance of the Dell storage brick doubled, producing a single-brick IOR client performance maximum of 4.5 GB/s R/W. Single-rack configurations can thus be implemented that provide 2.1 PB of usable storage and 36 GB/s R/W performance. A capacity-optimized configuration is also illustrated, providing a solution with a cost reduction of roughly 35% relative to the performance-optimized solution.

These bulk performance and density metrics place the Dell / IEEL solution at the high end of the solution space, yet within the commodity IT supply chain model. This provides the price/performance step change that the scientific, technical, and medical research computing communities need to help close the gap between demand and budget that has emerged from the huge growth in demand for both storage capacity and performance. It marks a turning point in the commoditization of research storage, echoing the commodity revolution seen in the research computing market with the advent of HPC clusters. Many large-scale HPC customers find it difficult to architect HPC and data analysis systems with the required capacity, performance, and cost parameters; commodity high-end parallel file systems such as the one described in this paper dramatically improve this situation.
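The rack-level figures above follow directly from the brick-level numbers. Below is a minimal sketch of the arithmetic in Python, assuming the single-rack configuration aggregates identical tuned bricks; the brick count of eight is derived here (36 / 4.5), not stated in the paper summary.

    # Back-of-the-envelope check of the rack-level figures quoted above.
    # Assumption: a rack aggregates N identical tuned bricks (N derived below).
    BRICK_THROUGHPUT_GBS = 4.5   # single-brick IOR R/W maximum
    RACK_THROUGHPUT_GBS = 36.0   # quoted single-rack R/W performance
    RACK_CAPACITY_PB = 2.1       # quoted single-rack usable capacity

    bricks_per_rack = RACK_THROUGHPUT_GBS / BRICK_THROUGHPUT_GBS       # -> 8
    capacity_per_brick_tb = RACK_CAPACITY_PB * 1000 / bricks_per_rack  # ~262 TB

    print(f"bricks per rack: {bricks_per_rack:.0f}")
    print(f"usable capacity per brick: ~{capacity_per_brick_tb:.0f} TB")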

Lustre File System High Performance Guide

Today, the Lustre file system is based entirely on Linux and uses kernel-based server modules to deliver the expected performance. Lustre supports many types of clients and runs on almost any modern hardware. Scalability is one of Lustre's most important features: it allows a single namespace to be created with what appears to be almost limitless capacity.

Lustre Software for Intel Cloud

Even the largest HPC clusters can experience degradation due to poor I/O performance. This occurs as massive amounts of data and increasingly large individual files combine with limited disk drive hardware capacity to cause significant bottlenecks. Lustre is an open source parallel file system that improves the overall scalability and performance of HPC clusters. It provides cluster client nodes with shared access to file system data in parallel, greatly increasing throughput and performance. Lustre is the most widely used HPC storage system in the world, with parallel storage capabilities utilized by over 50% of HPC deployments, and it can scale to tens of thousands of clients.
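To make the parallel access pattern concrete, here is a minimal MPI-IO sketch using mpi4py, in which every rank writes its own disjoint slice of one shared file concurrently. The file path and block size are arbitrary placeholders, and nothing here is Lustre-specific beyond the access pattern such a file system is built to serve.

    # Minimal MPI-IO sketch: all ranks write disjoint slices of a single
    # shared file in parallel.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    BLOCK = 1 << 20                          # 1 MiB per rank (illustrative)
    buf = np.full(BLOCK, rank, dtype=np.uint8)

    # Each rank writes at its own offset, so no two ranks overlap.
    fh = MPI.File.Open(comm, "/mnt/lustre/shared.dat",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    fh.Write_at_all(rank * BLOCK, buf)       # collective parallel write
    fh.Close()

Run under an MPI launcher (e.g., mpiexec -n 8 python write_shared.py). On a Lustre mount, the stripe settings of the target directory determine how such writes spread across the storage targets.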

InsideHPC Guide to Lustre Solutions for Business

In this Guide, we take a look at what an HPC solution like Lustre can deliver for a broad community of business and commercial organizations struggling with the challenge of big data and demanding storage growth.

Video: Parallel I/O Best Practices

In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.

Call for Participation: Lustre User Group at NCI in Australia

NCI in Australia has issued its Call for Participation for the Down-Under version of the 2016 Lustre User Group. The event will be held Sept. 7-8 on the campus of The Australian National University in Canberra, ACT, Australia. “LUG 2016 will be a dynamic two day workshop that will explore improvements in the performance and flexibility of the Lustre file system for supporting diverse workloads. This will be a great opportunity for the Lustre community to discuss the challenges associated with enhancing Lustre for diverse applications, the technological advances necessary, and the associated ecosystem.”

Video: Matching the Speed of SGI UV with Multi-rail LNet for Lustre

Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR InfiniBand driving to 200Gb/s or 25GB/s per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100GB/s.”
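The quoted numbers imply a simple aggregation: at 25 GB/s per connection, four or more rails per UV node are needed to exceed 100 GB/s. A quick sketch of that arithmetic (the rail count is derived here, not stated in the talk abstract):

    # Multi-rail bandwidth aggregation, per the figures quoted above.
    LINK_GBIT = 200                     # per-connection rate, Gb/s
    link_gbyte = LINK_GBIT / 8          # -> 25 GB/s per connection

    TARGET_GBYTE = 100                  # desired node-to-filesystem rate
    rails_needed = TARGET_GBYTE / link_gbyte   # -> 4 rails (derived)
    print(f"{rails_needed:.0f} rails x {link_gbyte:.0f} GB/s = {TARGET_GBYTE} GB/s")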

Superior Performance Commits Kyoto University to CPUs Over GPUs

In this special guest feature, Rob Farber writes that a study done by Kyoto University Graduate School of Medicine shows that code modernization can help Intel Xeon processors outperform GPUs on machine learning code. “The Kyoto results demonstrate that modern multicore processing technology now matches or exceeds GPU machine-learning performance, but equivalently optimized software is required to perform a fair benchmark comparison. For historical reasons, many software packages like Theano lacked optimized multicore code as all the open source effort had been put into optimizing the GPU code paths.”
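To illustrate what “equivalently optimized software” means in practice, here is a generic code-modernization sketch, not the Kyoto code: the same reduction written as a naive interpreted loop and as a vectorized NumPy call that dispatches to optimized (SIMD, and in many builds multithreaded) BLAS routines.

    # Code-modernization sketch: the same dot product, naive vs. vectorized.
    import numpy as np

    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)

    def naive_dot(a, b):
        total = 0.0
        for i in range(len(a)):          # interpreted, single-core, no SIMD
            total += a[i] * b[i]
        return total

    fast = x @ y                          # optimized BLAS code path
    assert abs(naive_dot(x, y) - fast) < 1e-6 * abs(fast)

The gap between the two paths on the same CPU is the kind of headroom code modernization recovers, and it is why unoptimized CPU baselines can make GPUs look disproportionately strong in benchmark comparisons.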