In this special guest feature, Ken Strandberg offers this live report from Day 2 of the Lustre User Group meeting in Portland. “Scott Yockel from Harvard University shared how they are deploying Lustre across their three massive data centers, up to 90 miles apart, with 25 PB of storage, about half of which is Lustre. They’re using Docker containers and employing a backup strategy across the miles that covers every NFS system, parses the entire MDT, and includes 10k directories of small files.”
“The Lustre User Group (LUG) 2016 conference is well under way. The morning of the first day was spent looking at Lustre today and tomorrow, and at security developments in the code. Peter Jones and Andreas Dilger described what is in the newest release, Lustre 2.8, and what is planned for Lustre 2.9, targeted for release this fall, and beyond. These features include growing support for ZFS, security, multi-rail LNET, progressive file layouts, project quotas, and more.”
“With Supermicro’s 90 top-load 3.5” hot-swap bay JBOD as the storage core of our Lustre Pod Cluster, we maximize performance, density and capacity and simplify serviceability for massive scale HA storage deployments. Combining our preconfigured, validated 2U SuperStorage OSS, 1U Ultra SuperServer with Intel Enterprise Edition for Lustre software, and global service and support, Supermicro has the Total Solution for Lustre ready for HPC, Genomics and Big Data.”
The University of Michigan is collaborating with IBM to develop and deliver “data-centric” supercomputing systems designed to increase the pace of scientific discovery in fields as diverse as aircraft and rocket engine design, cardiovascular disease treatment, materials physics, climate modeling and cosmology. “Scientific research is now at the crossroads of big data and high performance computing,” said Sumit Gupta, vice president, high performance computing and data analytics, IBM. “The explosion of data requires systems and infrastructures based on POWER8 plus accelerators that can both stream and manage the data and quickly synthesize and make sense of data to enable faster insights.”
Today Egypt’s Bibliotheca Alexandrina library announced plans to build an HPC platform using Huawei technologies. Based on high-density FusionServer servers, the 118 Teraflop Huawei cluster employs high-speed InfiniBand and 288 TB of storage capacity for concurrent file systems.
“Trends in computer memory/storage technology are in flux, perhaps more so now than in the last two decades. Economic analysis of HPC storage hierarchies has led to new tiers of storage being added to the next fleet of supercomputers, including Burst Buffers (in-system solid state storage) and Campaign Storage. This talk will cover the background that brought us these new storage tiers and postulate what the economic crystal ball looks like for the coming decade. Further, it will suggest methods of leveraging HPC workflow studies to inform the continued evolution of the HPC storage hierarchy.”
Today Seagate announced it will incorporate Intel Enterprise Edition for Lustre (IEEL), a big data software platform, into its market-leading ClusterStor storage architecture for high-performance computing. The move will strengthen Seagate’s HPC data storage product line and provide customers with an additional choice of Lustre parallel file systems to help drive advancements in the HPC and big data market.
Addison Snell from Intersect360 Research presented this talk at the Switzerland HPC Conference. “Based on updated research studies, Addison Snell of Intersect360 Research will present on forward-looking topics for HPC and Hyperscale markets. With an expanding look at hyperscale, Intersect360 Research will describe the size and influence of the market, including evolving standards like Open Compute Project, OpenStack, and Beiji/Scorpio. Intersect360 Research has also investigated users’ plans for evaluating competing processing and interconnect options, including Xeon, Xeon Phi, GPU, FPGA, POWER, ARM, InfiniBand, and OmniPath.”
“Traditionally, storage systems have used brute force rather than intelligent design to deliver the required throughput, but the current trend is to design balanced systems with full utilization of the back-end storage and other related components. These new systems need to use fine-grained power control all the way down to individual disk drives, as well as tools for continuous monitoring and management. In addition, the storage solutions of tomorrow need to support multiple tiers, including back-end archiving systems supported by HSM, as well as multiple file systems if required. This presentation is intended to provide a short update on where Seagate HPC storage is today.”
“As high performance and webscale applications become mainstream, HPE’s continued focus on this market is yielding positive results for our customers,” said Bill Mannel, vice president and general manager, HPC, Big Data and IoT Servers, HPE. “Already, more than a third of the HPC market is using HPE compute platforms to enhance scientific and business innovation and gain a competitive edge. Today’s announcement reinforces our commitment to delivering new infrastructure solutions that satisfy our customers’ insatiable need for massive compute power to fuel new applications and unlock the value of their data.”