Spend Less on HPC/AI Storage (and more on CPU/GPU compute)

[SPONSORED POST] In this whitepaper courtesy of HPE, you’ll learn about three approaches that can help you feed your CPU- and GPU-accelerated compute nodes without I/O bottlenecks while creating efficiencies in Gartner’s Run category. As the market share leader in HPC servers, HPE saw the convergence of classic modeling and simulation with AI methods such as machine learning and deep learning coming, and now offers a new portfolio of parallel HPC/AI storage systems purpose-engineered to address these challenges in a cost-effective way.

HPE Reference Architecture for SAS 9.4 on HPE Superdome Flex 280 and HPE Primera Storage

This Reference Architecture highlights the key findings and demonstrated scalability from running the SAS® 9.4 Mixed Analytics Workload on the HPE Superdome Flex 280 Server and HPE Primera Storage. The results show that the combination of the HPE Superdome Flex 280 Server and HPE Primera Storage with SAS 9.4 delivers up to 20 GB/s of sustained throughput, up to a 2x performance improvement over the previous generation of servers and storage tested.

Supermicro’s New Line of AMD EPYC-based Systems: Addressing HPC Needs across the Spectrum

Supermicro recently launched its A+ line of systems based on AMD’s new 7nm EPYC processors – products that include servers, storage, GPU-optimized systems, SuperBlade, and Multi-Node Twin solutions designed, according to Vik Malyala, Supermicro’s Senior Vice President, FAE & Business Development, to match system requirements for challenging enterprise workloads exactly. In this interview, Malyala discusses the […]

Getting More Quantitative Analysis Modeling and Backtesting by Fixing Your Storage

HPC storage is under strain to deliver fast, shared access to the datasets used for backtesting quantitative trading strategies. One solution is Panasas’ ActiveStor Ultra – a scalable, parallel data storage system that reduces backtest times and supports higher-accuracy modeling.

Active Archive Alliance Names Rich Gadomski and Betsy Doughty Co-Chairpersons of the Board

Boulder, Colo.—January 19, 2021—The Active Archive Alliance today announced that Rich Gadomski, head of tape evangelism at FUJIFILM Recording Media U.S.A., Inc., and Betsy Doughty, vice president of corporate marketing at Spectra Logic, have been elected to serve as co-chairpersons of the Board of Directors for the Active Archive Alliance. Gadomski and Doughty have been engaged with the […]

Lenovo Offers Optimal Storage Platform for Intel DAOS

In this sponsored post, our friends over at Lenovo and Intel highlight the exciting work Lenovo is doing with Intel’s DAOS software. DAOS, or Distributed Asynchronous Object Storage, is a scale-out HPC storage stack that uses the object storage paradigm to bypass some of the limitations of traditional parallel file system architectures.
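To make the contrast concrete, here is a minimal, purely conceptual Python sketch of the object storage access pattern the blurb refers to: data is addressed by key within a container via put/get calls rather than through POSIX paths and byte-range locks. This is not the DAOS API; the `ObjectStore` class and its methods are illustrative assumptions only.

```python
# Conceptual illustration only -- NOT the DAOS API.
# An object store addresses data by key within a container, avoiding the
# directory hierarchies and locking that a traditional POSIX parallel
# file system must maintain.

class ObjectStore:
    """Toy in-memory key/value container (illustrative stand-in)."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        # A single keyed write; no path resolution or file locking.
        self._objects[key] = value

    def get(self, key: str) -> bytes:
        # A single keyed read by the same identifier.
        return self._objects[key]


if __name__ == "__main__":
    store = ObjectStore()
    store.put("sim/run-042/checkpoint", b"\x00" * 16)
    print(len(store.get("sim/run-042/checkpoint")), "bytes retrieved")
```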

What Do You Mean “What’s My Workload?” I Have Hundreds of Them!

In this sponsored post, Curtis Anderson, Senior Software Architect at Panasas, Inc., takes a look at what Panasas is calling Dynamic Data Acceleration (DDA) and how it dramatically improves HPC performance in a mixed-workload environment. DDA is a new, proprietary software feature of the Panasas PanFS® parallel file system that utilizes a carefully architected combination of technologies to get the most out of all the storage devices in the subsystem.
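As a rough illustration of the kind of data placement a mixed-workload feature like DDA implies, the following Python sketch routes writes to different device classes based on object type and size. The thresholds, tier names, and routing policy are assumptions made for the sake of the example, not a description of how PanFS actually implements DDA.

```python
# Illustrative sketch of size/type-based data placement in a mixed-workload
# storage system. Thresholds and tier names are assumptions, not
# PanFS/DDA internals.

SMALL_FILE_THRESHOLD = 1 << 20  # 1 MiB cutoff, assumed for this sketch


def place(object_kind: str, size_bytes: int) -> str:
    """Return the device class a write would be directed to."""
    if object_kind == "metadata":
        return "low-latency flash (metadata tier)"
    if size_bytes < SMALL_FILE_THRESHOLD:
        return "SSD (small-file tier)"
    return "HDD (large streaming-file tier)"


if __name__ == "__main__":
    for kind, size in [("metadata", 512), ("data", 64 * 1024), ("data", 8 << 30)]:
        print(f"{kind:8s} {size:>12d} bytes -> {place(kind, size)}")
```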

I Really Don’t Care About TCO … It’s RCO I am Worried About

In this sponsored post by Adam Marko, Director of Life Science Solutions – Panasas, Inc., we look at what we are calling Research Cost of Ownership (RCO) and the effect of HPC storage downtime and reduced productivity on the overall scientific mission.
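The post does not give a formula for RCO; the sketch below is one hedged way to think about it, treating RCO as conventional TCO plus the research time lost to storage downtime and degraded performance. The function name, variables, and example figures are illustrative assumptions, not Panasas’ definition.

```python
# Illustrative back-of-the-envelope RCO estimate. The formula and all example
# numbers are assumptions for this sketch; the post itself does not define
# RCO this way.

def research_cost_of_ownership(
    tco: float,                 # conventional total cost of ownership
    downtime_hours: float,      # storage outage hours over the same period
    degraded_hours: float,      # hours of noticeably reduced I/O performance
    degradation_factor: float,  # fraction of productivity lost while degraded
    researchers_affected: int,
    loaded_hourly_cost: float,  # salary plus overhead per researcher-hour
) -> float:
    lost_to_downtime = downtime_hours * researchers_affected * loaded_hourly_cost
    lost_to_slowdown = (
        degraded_hours * degradation_factor * researchers_affected * loaded_hourly_cost
    )
    return tco + lost_to_downtime + lost_to_slowdown


if __name__ == "__main__":
    # Example: 40 h of downtime and 200 h of 25% slowdown for 50 researchers.
    print(research_cost_of_ownership(
        tco=500_000.0,
        downtime_hours=40,
        degraded_hours=200,
        degradation_factor=0.25,
        researchers_affected=50,
        loaded_hourly_cost=75.0,
    ))
```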

Panasas PanFS 8: Architectural Overview

Panasas has released a timely new white paper, “Panasas PanFS 8: Architectural Overview.” The report takes a “breadth-first” tour of the architecture of the PanFS® parallel file system, looking at its key components and then diving deeper into the main benefits. HPC environments, by their very nature, tend to be large and are usually quite complex. […]