Gleb Budman from Backblaze presented this talk at the 2016 MSST Conference. “For Q1 2016 we are reporting on 61,590 operational hard drives used to store encrypted customer data in our data center. In Q1 2016, the hard drives in our data center, past and present, totaled over one billion hours in operation to date. That’s nearly 42 million days or 114,155 years worth of spinning hard drives. Let’s take a look at what these hard drives have been up to.”
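The drive-hour figures quoted above reduce to a simple unit conversion; a minimal sketch checking the arithmetic (all numbers are from the quote itself):

```python
# Sanity-check Backblaze's drive-hour conversion from the quote above.
hours = 1_000_000_000          # "over one billion hours in operation"

days = hours / 24              # hours -> days
years = days / 365             # days -> years (non-leap, matching the quoted figure)

print(f"{days:,.0f} days")    # "nearly 42 million days"
print(f"{years:,.0f} years")  # "114,155 years worth of spinning hard drives"
```

Dividing by 365 rather than 365.25 is what reproduces the article's 114,155-year figure exactly.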
Argonne Distinguished Fellow Paul Messina has been tapped to lead the Exascale Computing Project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project will focus its efforts on four areas: Applications, Software, Hardware, and Exascale Systems.
“Storage technologies are quickly innovating to reduce latency, providing a significant performance improvement for today’s cutting-edge applications. NVM Express (NVMe) is a significant step forward in high-performance, low-latency storage I/O and reduction of I/O stack overheads. NVMe over Fabrics is an essential technology to extend NVMe storage connectivity such that NVMe-enabled hosts can access NVMe-enabled storage anywhere in the datacenter, ensuring that the performance of today’s and tomorrow’s solid state storage technologies is fully unlocked, and that the network itself is not a bottleneck.”
Our in-depth series on Intel architects continues with this profile of Mark Seager, a key driver in the company’s mission to achieve Exascale performance on real applications. “Creating and incentivizing an exascale program is huge. Yet more important, in Mark’s view, the NSCI has inspired agencies to work together to spread the value from predictive simulation. In the widely publicized Cancer Moonshot sponsored by Vice President Biden, the Department of Energy is sharing codes with the National Institutes of Health to simulate the chemical expression pathway of genetic mutations in cancer cells with exascale systems.”
Brent Weber from DDN presented this talk at the 2016 MSST Conference. “SSDs and all-flash arrays are being marketed as a panacea. This may be true if you’re a small to medium enterprise that simply needs more performance for email servers or wants to speed up just a few hundred VMs. But for at-scale enterprise and High Performance Computing environments, identifying and removing I/O bottlenecks is much more complex than simply exchanging spinning disk drives for flash devices. Aside from performance, efficiency, scalability and integration are also critical success factors in larger and non-standard environments. In this domain, selecting a partner with the tools, technology and experience to holistically examine and optimize your entire I/O path can deliver orders of magnitude greater acceleration and competitive advantage to your organization.”
Today ThinkParQ from Germany announced certification of BeeGFS over Intel Omni-Path Architecture (OPA). “Without a doubt, Intel has made a big leap in performance with the new 100Gbps OPA technology compared to previous interconnect generations,” said Sven Breuner, CEO of ThinkParQ. “The fact that we didn’t need to modify even a single line of the BeeGFS source code to deliver this new level of throughput, confirms that the BeeGFS internal design is really future-proof.”
“One of the benefits of our ClusterStor modular architecture is its flexibility – we can deliver very comparable performance with either Lustre or Spectrum Scale on the same extensible architecture. There are two key reasons for that balance of performance and flexibility. Firstly, we have a unique scale-out storage architecture with a distributed processing model, meaning you’re not tied to legacy centralized RAID controller hardware. Secondly, there is no proprietary hardware or RAID firmware in the system. All the software runs in a standard Linux environment, so our software stack is really agnostic as to whether we are running Lustre or Spectrum Scale.”
Today Mellanox announced the BlueField family of programmable processors for networking and storage applications. “As a networking offload co-processor, BlueField will complement the host processor by performing wire-speed packet processing in-line with the network I/O, freeing the host processor to deliver more virtual networking functions (VNFs),” said Linley Gwennap, principal analyst at the Linley Group. “Network offload results in better rack density, lower overall power consumption, and deterministic networking performance.”
“As the industry’s leading server vendor, HPE is committed to bringing new infrastructure innovations to market that enable organizations to derive more value from their data,” said Vikram K, Director, Servers, Hewlett Packard Enterprise India. “We are delivering on that commitment by bringing a complete Persistent Memory hardware and software ecosystem to our server portfolio, as well as high-performance computing enhancements that will allow customers to increase agility, protect critical information and deliver new applications and services more quickly than ever before.”
Today RAID Inc. announced a contract to provide Lawrence Livermore National Laboratory (LLNL) a custom parallel file system solution for its unclassified computing environment. RAID will deliver a 17PB file system able to sustain up to 180 GB/s. The high-performance, cost-effective solution is designed to meet LLNL’s current and future demands for parallel-access data storage.