“With more than a decade of experience in designing, installing, and supporting Lustre-based storage, DDN is the most experienced Lustre provider and has worked closely with us over many years to design optimized Lustre-based storage systems. DDN’s latest ES14K offering delivers a high-performance, high-density appliance for the HPC market built on Intel Enterprise Edition for Lustre,” said Brent Gorda, GM of Intel’s High Performance Data Division.
“One of the benefits of our ClusterStor modular architecture is its flexibility – we can deliver very comparable performance with either Lustre or Spectrum Scale on the same extensible architecture. There are two key reasons for that balance of performance and flexibility. First, we have a unique scale-out storage architecture with a distributed processing model, meaning you’re not tied to centralized legacy RAID controller hardware. Second, there is no proprietary hardware or RAID firmware in the system. All the software runs in a standard Linux environment, so our software stack is really agnostic as to whether we are running Lustre or Spectrum Scale.”
Today RAID Inc. announced a contract to provide Lawrence Livermore National Laboratory (LLNL) with a custom parallel file system solution for its unclassified computing environment. RAID will deliver a 17PB file system able to sustain up to 180 GB/s. The high-performance, cost-effective solution is designed to meet LLNL’s current and future demands for parallel-access data storage.
With ISC 2016 coming up in June, a number of ancillary events have been scheduled in Frankfurt to take advantage of this annual gathering of more than 2,500 supercomputing professionals. We’ve compiled a full listing of what looks to be an exciting week in the history of high performance computing.
Peter Bojanic presented this talk at LUG 2016 in Portland. “At LUG 2016, Seagate announced it will incorporate Intel Enterprise Edition for Lustre (IEEL), a big data software platform, into its market-leading ClusterStor storage architecture for high-performance computing. The move will strengthen Seagate’s HPC data storage product line and provide customers with an additional choice of Lustre parallel file systems to help drive advancements in the HPC and big data market.”
Intel has been working on a new design philosophy for HPC systems called Intel® Scalable System Framework (Intel® SSF), an approach designed to enable sustained, balanced performance as the community pushes toward the Exascale computing era. Central to Intel SSF performance is the Lustre* scalable, parallel file system (PFS). Intel® Enterprise Edition for Lustre software (Intel® EE for Lustre software) is the Intel distribution of the well-known PFS, which is used by the majority of the fastest supercomputers around the world.
In this video from LUG 2016 in Portland, Steve Simms from Indiana University presents: Lustre 101 – A Quick Overview. Now in its 14th year, the Lustre User Group is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies.
In this video from LUG 2016, Andreas Dilger from Intel presents: Lustre 2.9 and Beyond. “I do this presentation every year and I think it is important to focus on features that are going to be available in the short term.”
Peter Jones from Intel presented this talk at LUG 2016 in Portland. “The OpenSFS Lustre Working Group (LWG) is the place where the participants of OpenSFS come together to coordinate their software development efforts for the Lustre high-performance, open-source parallel filesystem. This includes planning and the roadmap for community releases of Lustre.”
In this special guest feature, Ken Strandberg offers this live report from Day 3 of the Lustre User Group meeting in Portland. “Rick Wagner from the San Diego Supercomputer Center presented progress on his team’s replication tool, which copies large blocks of data from object storage to SDSC’s durable disaster-recovery storage system. Because rsync is not a tool for moving massive amounts of data, SDSC created recursive worker services running in parallel, with each worker handling a directory or group of files. The tool uses available Lustre clients, a RabbitMQ server, Celery scripts, and bash scripts.”
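The fan-out pattern Wagner describes maps naturally onto a Celery task queue backed by RabbitMQ. Below is a minimal sketch of that approach, not SDSC’s actual scripts: the module name, broker URL, and source/target mount paths are all illustrative assumptions. Each task handles one directory and enqueues a new task per subdirectory, so sibling directories are replicated in parallel by whatever workers are available.

```python
# Hypothetical sketch of a recursive, per-directory replication worker.
# Assumptions: Celery with a RabbitMQ broker at the default local URL,
# and made-up source/target mount points on the Lustre client.
import os
import shutil

from celery import Celery

app = Celery("replicator", broker="amqp://guest:guest@localhost:5672//")

SRC_ROOT = "/lustre/source"      # assumed source mount (Lustre client)
DST_ROOT = "/durable/replica"    # assumed disaster-recovery target


@app.task
def replicate_directory(relpath=""):
    """Copy one directory's files, then enqueue a task per subdirectory.

    Fanning out per directory lets many workers run concurrently instead
    of walking the whole tree in a single rsync-style process.
    """
    src_dir = os.path.join(SRC_ROOT, relpath)
    dst_dir = os.path.join(DST_ROOT, relpath)
    os.makedirs(dst_dir, exist_ok=True)

    for entry in os.scandir(src_dir):
        rel_entry = os.path.join(relpath, entry.name)
        if entry.is_dir(follow_symlinks=False):
            # Recurse by enqueueing, not descending, so other workers
            # can pick up sibling directories in parallel.
            replicate_directory.delay(rel_entry)
        elif entry.is_file(follow_symlinks=False):
            shutil.copy2(entry.path, os.path.join(DST_ROOT, rel_entry))


if __name__ == "__main__":
    # Seed the queue with the tree root; workers started with
    # `celery -A replicator worker` then drain the queue in parallel.
    replicate_directory.delay("")
```

The key design choice, as in SDSC’s tool, is that recursion happens through the message queue rather than inside a single process, which is what lets the copy scale across many Lustre client nodes at once.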