In this video from LUG 2016 in Portland, Steve Simms from Indiana University presents: Lustre 101 – A Quick Overview. Now in its 14th year, the Lustre User Group is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies.
In this video from LUG 2016, Andreas Dilger from Intel presents: Lustre 2.9 and Beyond. “I do this presentation every year and I think it is important to focus on features that are going to be available in the short term.”
Peter Jones from Intel presented this talk at LUG 2016 in Portland. “The OpenSFS Lustre Working Group (LWG) is the place where the participants of OpenSFS come together to coordinate their software development efforts for the Lustre high-performance, Open Source, parallel filesystem. This includes planning and the roadmap for community releases of Lustre.”
In this special guest feature, Ken Strandberg offers this live report from Day 3 of the Lustre User Group meeting in Portland. “Rick Wagner from the San Diego Supercomputer Center presented progress on his team’s replication tool, which copies large blocks of storage from object storage to their durable disaster-recovery storage system. Because rsync is not a tool for moving massive amounts of data, SDSC created recursive worker services running in parallel, so that each worker handles a directory or group of files. The tool uses available Lustre clients, a RabbitMQ server, Celery scripts, and bash scripts.”
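The per-directory parallel worker pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration, not SDSC’s actual tool: it uses a local thread pool in place of the RabbitMQ/Celery queue, and plain `shutil` copies in place of Lustre-aware transfers.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def copy_directory(src: Path, dst_root: Path) -> int:
    """Worker: replicate one directory tree; returns the number of files copied."""
    dst = dst_root / src.name
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy contents and metadata
            count += 1
    return count


def replicate(src_root: Path, dst_root: Path, workers: int = 8) -> int:
    """Dispatch one worker per top-level directory, running in parallel.

    In the real tool, each of these units of work would instead be
    enqueued as a Celery task via RabbitMQ and picked up by remote workers.
    """
    dirs = [d for d in src_root.iterdir() if d.is_dir()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda d: copy_directory(d, dst_root), dirs))
```

The key idea is the same as in SDSC’s design: rather than one serial rsync walking the whole namespace, the namespace is partitioned by directory so many workers can move data concurrently.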
The Intel Cloud Edition for Lustre* Software is now available on Microsoft’s Azure platform. Intel Cloud Edition for Lustre Software on Azure is a scalable, parallel file system designed as the working file system for HPC and other I/O-intensive workloads. Built for use with the virtualized compute instances of Microsoft Azure’s scalable cloud infrastructure, it is designed for dynamic, pay-as-you-go applications.
In this special guest feature, Ken Strandberg offers this live report from Day 2 of the Lustre User Group meeting in Portland. “Scott Yockel from Harvard University shared how they are deploying Lustre across their three massive data centers, up to 90 miles apart, with 25 PB of storage, about half of which is Lustre. They’re using Docker containers and employing a backup strategy that spans that distance, covers every NFS system, parses the entire MDT, and includes 10k directories of small files.”
“The Lustre User Group (LUG) 2016 conference is well under way. The morning of the first day was spent looking at Lustre today and tomorrow, and at security developments in the code. Peter Jones and Andreas Dilger described what is in the newest release, Lustre 2.8, and what will be in Lustre 2.9, targeted for release this fall, and beyond. These features include growing support for ZFS, security, multi-rail LNET, progressive file layouts, project quotas, and more.”
“With Supermicro’s 90 top-load 3.5” hot-swap bay JBOD as the storage core of our Lustre Pod Cluster, we maximize performance, density, and capacity, and simplify serviceability for massive-scale HA storage deployments. Combining our preconfigured, validated 2U SuperStorage OSS, 1U Ultra SuperServer with Intel Enterprise Edition for Lustre software, and global service and support, Supermicro has the Total Solution for Lustre ready for HPC, Genomics, and Big Data.”
Today Seagate announced it will incorporate Intel Enterprise Edition for Lustre (IEEL), a big data software platform, into its market-leading ClusterStor storage architecture for high-performance computing. The move will strengthen Seagate’s HPC data storage product line and provide customers with an additional choice of Lustre parallel file systems to help drive advancements in the HPC and big data market.
“Traditionally, storage systems have used brute force rather than intelligent design to deliver the required throughput, but the current trend is to design balanced systems with full utilization of the back-end storage and other related components. These new systems need fine-grained power control all the way down to individual disk drives, as well as tools for continuous monitoring and management. In addition, the storage solutions of tomorrow need to support multiple tiers, including backend archiving systems supported by HSM, as well as multiple file systems if required. This presentation provides a short update on where Seagate HPC storage is today.”