Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices

In this video from LUG 2015 in Denver, J. Mario Gallegos from Dell presents: Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices.

Hadoop is the de facto platform for processing big data applications, allowing fast distributed processing of vast amounts of data, especially when that data is unstructured and very diverse in nature. Hadoop is particularly useful for analyzing data where no a priori insights or relationships exist, where such relationships are complex, or where they are hidden by the massive volume of raw data. Similarly, Lustre is the de facto open source parallel file system for HPC environments, providing compute clusters with efficient storage and very fast access to large data sets. Merging the strengths of both technologies to solve big data problems permits harnessing the power of HPC clusters on very fast storage.

However, running regular Hadoop on top of Lustre presents some disadvantages. First, using the Hadoop File System (HDFS) on top of Lustre storage requires accessing data via HTTP calls, adding considerable overhead and reducing efficiency and processing speed. Another important disadvantage is HDFS's requirement for fairly large local storage on each Hadoop node.
Intel Enterprise Edition for Lustre provides a Hadoop software adapter that permits overcoming those disadvantages of HDFS by providing direct access to Lustre during MapReduce computations.
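For context, the generic way to point Hadoop at a shared POSIX file system such as a Lustre mount is to change the default file system URI in core-site.xml. This is only a sketch of that generic mechanism: the property name below is a standard Hadoop setting, the /mnt/lustre mount point is an assumption for illustration, and the IEEL adapter itself ships its own configuration keys, which may differ.

```xml
<!-- core-site.xml: minimal sketch of running MapReduce directly on a
     POSIX mount instead of HDFS. Assumes Lustre is mounted at /mnt/lustre
     on every node; the IEEL Hadoop adapter's own keys may differ. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///mnt/lustre</value>
  </property>
</configuration>
```

With the default file system pointing at a shared mount, every node sees the same namespace, so raw data does not have to be copied into HDFS before a job runs.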

The present work is based on a proof of concept comprising Intel Enterprise Edition for Lustre (IEEL) software running on Dell PowerEdge servers and PowerVault storage, taking advantage of Bright Computing's Bright Cluster Manager (BCM) to handle the deployment of the operating system and IEEL, while the Intel Manager for Lustre (IML) handles the administration and monitoring of Lustre. A description of the project and its different components is followed by an assortment of highlights and lessons learned during the planning, deployment, and testing phases, and finally a list of best practices.

Among the advantages observed in this particular project are:

  • The existing HPC cluster has limited storage on each compute node. Using Lustre allowed convergence of the current HPC infrastructure with big data applications.
  • HDFS by default keeps three copies of each data block, while this solution uses 8+2 RAID 6. With a raw capacity of 480 TB (disregarding filesystem overhead), HDFS yields 160 TB of usable space, while RAID 6 yields 384 TB, or 2.4 times the usable capacity of HDFS.
  • HDFS is not POSIX compliant; the HDFS API must be used to load raw data, process it, and copy the results out of HDFS. Since Lustre is POSIX compliant, any system with a Lustre client can access the data as if it were local, using regular OS commands to manage it, which simplifies system administration.
  • Moreover, Lustre POSIX compliance enables data backup/recovery using existing infrastructure.
  • Using Lustre is more efficient for accessing data than using HDFS, since HDFS file transfers rely on the HTTP protocol, involving higher overhead and slower access.
  • In addition to avoiding HDFS file transfers to load data, retrieve results, and keep three redundant copies, Lustre's centralized access makes all data available to every compute node at any time, eliminating transfers during the MapReduce "shuffle" phase and consequently improving performance (i.e., higher job throughput).
  • Intel is about to release a new plugin for the SLURM resource manager that will enable scheduling HPC and MapReduce jobs through the SLURM management interface, replacing YARN as the resource manager for Hadoop jobs.
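The capacity arithmetic in the bullets above can be checked in a few lines. The figures come from the talk; as in the original comparison, raw capacity disregards filesystem overhead.

```python
# Usable-capacity comparison from the talk: HDFS triple replication
# vs. Lustre on 8+2 RAID 6, starting from 480 TB raw (filesystem
# overhead disregarded, as in the original slide).
raw_tb = 480

# HDFS default: every block is stored 3 times, so usable = raw / 3.
hdfs_usable = raw_tb / 3        # 160 TB

# RAID 6 with 8 data + 2 parity disks: 8/10 of raw capacity is usable.
raid6_usable = raw_tb * 8 / 10  # 384 TB

ratio = raid6_usable / hdfs_usable  # 2.4x the usable capacity of HDFS
print(f"HDFS: {hdfs_usable} TB, RAID 6: {raid6_usable} TB, ratio: {ratio}x")
```

The 2.4x ratio is independent of the raw capacity: it is simply the RAID 6 efficiency (8/10) divided by the HDFS replication efficiency (1/3).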

Download the Slides. See more talks in the LUG 2015 Video Gallery.
