The Evolution of HPC Storage: More Choices Yields More Decisions


By: Simon Thompson, Sr. Manager, HPC Storage and Performance, Lenovo

[SPONSORED CONTENT] The past few years have brought many changes to the HPC storage world, both with technologies such as non-volatile memory express (NVMe) and persistent memory, and with the growth of software defined storage solutions. Gone are the days when IBM Spectrum Scale (secretly, we know we all still call it GPFS) or Lustre were the only real choices in the market. In retrospect, the choice was easy: you picked one of the two and away you went. And nobody ever wondered if they made the right choice.

Now there are new breeds of software defined solutions: some take advantage of new classes of storage, others of (hybrid) cloud. The explosion of data and the rise of artificial intelligence (AI) have brought new I/O workloads and challenges. NVMe delivers huge amounts of bandwidth coupled with low latency, and with it the potential to accelerate workloads so that users get more performance out of their systems. This performance boost opens a world of possibilities, including accelerating new drug discovery and supporting real-time high frequency trading. But to take advantage of this, we need to change the way we use this technology.

Tiering of storage is key to providing the right mix of cost, performance and capacity, and while some vendors talk about having only a single tier, this is unlikely to ever happen. To paraphrase Mark Twain, the death of tape has been greatly exaggerated. Tape still sits firmly in place for long term preservation, backup and disaster recovery, and even more so as protection against cyber-attack threats. An air-gapped tape solution economically ticks many of the boxes for data protection.
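To make that trade-off concrete, here is a deliberately simplified, back-of-the-envelope sketch in Python. The capacities and per-terabyte prices are placeholder assumptions, not Lenovo figures, but they show why a blended, tiered design wins on cost per terabyte over a single all-flash tier.

```python
# Illustrative only: a rough blended-cost model for a tiered system.
# Capacities and per-TB prices are placeholder assumptions, not Lenovo
# figures -- substitute your own quotes.

tiers = {
    # name: (capacity_tb, cost_per_tb_usd)
    "nvme": (500, 300.0),     # fast tier for hot data
    "disk": (5_000, 40.0),    # bulk capacity tier
    "tape": (20_000, 8.0),    # archive / air-gapped protection
}

total_tb = sum(cap for cap, _ in tiers.values())
blended = sum(cap * cost for cap, cost in tiers.values()) / total_tb
all_nvme = tiers["nvme"][1]

print(f"Total capacity : {total_tb:,} TB")
print(f"Blended cost   : ${blended:,.2f}/TB")
print(f"All-NVMe cost  : ${all_nvme:,.2f}/TB "
      f"({all_nvme / blended:.0f}x the blended figure)")
```

With these example numbers the blended system comes in at roughly a fifteenth of the all-NVMe cost per terabyte, which is the economic argument behind keeping tape and disk in the mix.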

The Challenge: increased demand requires increased storage performance

In the past, network performance wasn't a major issue for single-client access. With spinning disk solutions, it typically wasn't possible to saturate the bandwidth into a single compute node. With the advent of NVMe-attached storage solutions, this has changed. In some of the synthetic benchmark tests we have run in the Lenovo HPC performance centre, we can saturate 200Gbps links into a single system, with peak read and write at around 20GB/s. We know that PCIe Gen5 network adapters will arrive on the market soon, and while this might solve the problem for storage clients, it won't necessarily fix it for storage-dense systems. The rise of accelerator-driven computing has increased the demand on storage performance, often changing the I/O pattern from sequential to highly randomized. Luckily, advancements in solid state storage, whether NVMe or persistent memory, have happened in parallel.
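As a quick sanity check on those numbers: a 200Gbps link carries at most 25GB/s of raw payload, so a peak of around 20GB/s means the link, not the storage, is the limiting factor. A tiny illustrative calculation, using the rounded figures quoted above and ignoring protocol and encoding overhead:

```python
# Back-of-the-envelope check: how close is ~20 GB/s to saturating a
# 200 Gb/s link? Figures are the rounded numbers quoted above; real
# efficiency depends on protocol overhead, MTU and encoding.

link_gbps = 200                      # nominal link speed, gigabits per second
raw_gbytes_per_s = link_gbps / 8     # = 25 GB/s before any overhead

measured_gbytes_per_s = 20           # peak read/write seen in the benchmarks

utilisation = measured_gbytes_per_s / raw_gbytes_per_s
print(f"Raw link capacity : {raw_gbytes_per_s:.0f} GB/s")
print(f"Measured peak     : {measured_gbytes_per_s} GB/s")
print(f"Link utilisation  : {utilisation:.0%}")   # roughly 80% of raw line rate
```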

The Solution: Lenovo storage technologies are engineered to scale

Lenovo's approach is to offer storage solutions based on our server portfolio, as well as partner hardware solutions, to help fulfil these diverse storage needs. By building offerings on the server portfolio, we can deliver a range of software defined storage solutions while taking advantage of design cost efficiencies.

Lenovo Distributed Storage Solution (DSS-G) systems, based on spinning disk and SSD, provide high-capacity, high-performance storage built on IBM Spectrum Scale. We pair this effectively with NVMe using Erasure Code Edition, integrated either as a hybrid storage system or via caching technology.

For the ultimate in performance, Lenovo offers a fully supported solution using persistent memory and high performance NVMe drives, with the file layer based on Intel DAOS. These offerings are engineered with the internal architecture in mind, aiming to balance performance and capacity evenly across the CPU sockets in a system. In partnership with Intel, the Lenovo HPC Customer Solutions Team is constantly evaluating these systems with both current and future generation hardware, and keeps a keen eye on how future technology will affect these designs. We eagerly await the arrival of Compute Express Link (CXL) storage class systems!
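As a simple illustration of what socket-balanced design means in practice, the sketch below (assuming a typical Linux sysfs layout; it is not a Lenovo tool) counts NVMe controllers per NUMA node, so you can see whether drives, and therefore bandwidth, are spread evenly across the CPU sockets:

```python
# A minimal sketch, assuming a typical Linux sysfs layout, that counts
# NVMe controllers per NUMA node as a quick check that drives are
# spread evenly across CPU sockets.

import glob
import os
from collections import Counter

per_node = Counter()
for ctrl in glob.glob("/sys/class/nvme/nvme*"):
    numa_path = os.path.join(ctrl, "device", "numa_node")
    try:
        with open(numa_path) as f:
            node = int(f.read().strip())
    except OSError:
        continue  # controller without PCIe NUMA information
    per_node[node] += 1

for node, count in sorted(per_node.items()):
    print(f"NUMA node {node}: {count} NVMe controller(s)")
```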

What is certain is that all of Lenovo’s solutions are engineered to scale using our From Exascale to Everyscale™ approach.

Looking ahead: the future of storage

We must consider where the data we need to process is coming from – whether the source is lab equipment, microscopes, sequencers, vision systems, IoT and Edge devices, or sensors in the middle of a forest. It sounds simple, but getting data from sources into the right storage solution is still a challenge. This leads me to think about multiprotocol access and what that means. Native parallel, Server Message Block (SMB), Network File System (NFS) and even object access are all taken for granted. We still see a lack of heterogeneous CPU architecture support for some of these technologies, and this will surely have to change. The rise of Arm in the HPC market requires this to change, as we can't rely on NFS forever!
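To illustrate what multiprotocol access looks like from the user's side, here is a sketch of reading the same data once through a POSIX mount and once through an S3-style object call. The mount path, bucket name and endpoint are hypothetical placeholders, and boto3 simply stands in for whichever object client you use:

```python
# Sketch of "the same data, two protocols": a POSIX read over an NFS or
# SMB mount, and an S3-style object GET. The path, bucket and endpoint
# below are hypothetical placeholders.

import boto3

# 1. POSIX view: the share is simply mounted into the filesystem.
with open("/mnt/project/run42/results.csv", "rb") as f:
    posix_bytes = f.read()

# 2. Object view: the same file surfaced as an object in a bucket.
s3 = boto3.client("s3", endpoint_url="https://objects.example.internal")
obj = s3.get_object(Bucket="project", Key="run42/results.csv")
object_bytes = obj["Body"].read()

assert posix_bytes == object_bytes  # both protocols should return identical data
```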

Distributed Asynchronous Object Storage (DAOS) has an interesting take on multiprotocol support, ranging from the dfuse module, which gives “legacy” POSIX access, through to native libraries for the HDF5 format. We’re likely to see more application-specific I/O libraries to provide peak performance, but this brings us back to the challenge of getting the data into the right systems and in the right format. GPUDirect brought new ways to load data into GPU memory, but will this be enough for the future? I suspect we’ll need big and diverse shifts away from POSIX as the way to access data.
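A minimal sketch of that “legacy” POSIX route: once dfuse has mounted a DAOS container (the /mnt/dfuse path below is purely illustrative), an ordinary library such as h5py simply sees a filesystem path:

```python
# Minimal sketch of the "legacy" POSIX route into DAOS: once dfuse has
# mounted a container (assumed here at /mnt/dfuse), ordinary libraries
# such as h5py just see a filesystem path. The mount point and dataset
# names are illustrative, not part of any particular deployment.

import h5py
import numpy as np

path = "/mnt/dfuse/experiments/sample.h5"   # a DAOS container exposed via dfuse

# Write an HDF5 file through the POSIX interface...
with h5py.File(path, "w") as f:
    f.create_dataset("spectrum", data=np.random.rand(1024))

# ...and read it back exactly as if it lived on any other filesystem.
with h5py.File(path, "r") as f:
    spectrum = f["spectrum"][:]
    print(spectrum.shape)  # (1024,)
```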

While we have many approaches to cooling HPC compute systems (Lenovo’s own Neptune™ systems use liquid), so far we haven’t seen much need for cooled storage systems. But with increasing bandwidth and density, we’re likely to see increased power demands to utilize all this capability. It’s unclear if we’ll see liquid or two-phase cooled storage systems, but it certainly is a possibility we have to consider, particularly for HPC customers who could place their system in a hot data centre were it not for the ambient needs of the storage. In the short term, at the very least, I expect we’ll see cold-plate systems where heat is captured and transferred elsewhere in the system for thermal recovery. The days of racks of storage with screaming multitudes of fans are surely numbered, or we must consider long optical networks (with latency) to de-locate the storage. If we consider the need for rack- or node-level scaled burst buffers, then the need to change the story on thermals becomes clear.
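On the “with latency” point, the arithmetic is straightforward: light in fibre travels at roughly two-thirds of c, so every kilometre of separation adds about 10 microseconds of round-trip time before any protocol overhead. A small illustrative calculation (the distances are examples only):

```python
# Rough arithmetic behind "long optical networks (with latency)":
# light in fibre travels at roughly 2/3 of c, so every kilometre of
# separation adds about 5 microseconds each way. Distances are examples.

SPEED_OF_LIGHT_KM_S = 300_000
FIBRE_FACTOR = 2 / 3                      # approximate refractive-index penalty

def round_trip_us(distance_km: float) -> float:
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1e6

for km in (0.1, 1, 10, 100):
    print(f"{km:>6} km away -> ~{round_trip_us(km):7.1f} us added round trip")
```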

HPC storage is not a solved problem. We have many challenges to overcome. Whatever the software defined storage experts tell us, no one storage solution can fix all the I/O requirements we face. We need to start attacking these challenges now, but we can’t tackle them all at once, particularly as there will be diverse solutions. Lenovo has a strong heritage of co-designing our HPC compute systems and values this approach. Whether it be efficient cooling, supporting diverse solutions and access mechanisms, or educating users on how to get the most out of their storage, we have a strong foundation to start tackling these challenges together.

####

Simon Thompson is a Senior Manager for HPC Storage and Performance within Lenovo’s HPC and AI customer solutions team. Before joining Lenovo, he spent 20 years working in a research organisation running, building, and designing research systems.