Infinidat De-risks Storage Infrastructure with New Offerings and Support for NVMe over Fabrics

Infinidat, a leading provider of multi-petabyte data storage solutions, announced new offerings that reduce storage infrastructure costs, mitigate the risks of technology failures and deficits, and add an extensible NVMe over Fabrics option. These offerings and functional enhancements will provide new and existing customers more flexibility in managing their high-end storage infrastructure while lowering the cost and risk associated with meeting enterprise service level objectives.

Magseis Fairfield Uses a Sea of Data to Support Environmentally Responsible Energy Exploration

This whitepaper presents an HPC data storage case study on the use of Panasas ActiveStor by Magseis Fairfield, a geophysics firm that specializes in providing seismic 3D and 4D data acquisition services to exploration and production (E&P) companies.

Panasas ActiveStor Solution: Architectural Overview

Our friends over at Panasas have released this timely new white paper “Panasas ActiveStor Solution: Architectural Overview.” The Panasas ActiveStor architecture running the PanFS storage operating system breaks through the performance constraints of other parallel file systems.

Long Live Posix – HPC Storage and the HPC Datacenter

Robert Triendl from DDN gave this talk at the Swiss HPC Conference. “The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems. In the 30 years since it was developed, storage has changed dramatically. To improve the IO performance of applications, many users have called for a relaxation of POSIX IO semantics that could lead to the development of new storage mechanisms to improve not only application performance, but also management, reliability, portability, and scalability.”
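As a reminder of what “POSIX IO” means in practice, here is a minimal Python sketch of the interface (Python’s os module wraps the POSIX calls directly). The strictness the talk refers to is visible even in this tiny example: once write() returns, POSIX requires that any subsequent read, by any process, sees the new data, a guarantee that is expensive for a parallel file system to uphold across thousands of clients.

```python
import os
import tempfile

# Minimal POSIX IO sequence: open -> write -> fsync -> close -> read back.
# POSIX semantics require write-then-read consistency: the read below must
# observe the completed write, no matter which process issues it.

path = os.path.join(tempfile.mkdtemp(), "data.bin")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"checkpoint-block-0")
os.fsync(fd)  # force the data to stable storage
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)
os.close(fd)

assert data == b"checkpoint-block-0"  # guaranteed by POSIX semantics
```

Relaxed-consistency proposals would let a file system defer or weaken exactly this guarantee in exchange for better parallel performance.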

HPE Scalable Storage for Lustre: The Middle Way

Lustre is a widely used parallel file system in the High Performance Computing (HPC) market. Its parallel design, flexibility, and scalability deliver the performance HPC workloads require. This sponsored post explores HPE scalable storage and the Lustre parallel file system, and outlines a ‘middle ground’ available via solutions that combine Community Lustre with a qualified hardware platform.

Why you can save money by being left behind

In this special guest feature, Dr Rosemary Francis from Ellexus writes that data storage needs to be integral to your plans when moving HPC workloads to the cloud. We can all admit to being pressured into investing in a new solution as soon as it becomes available. We want to be the fastest, the best-informed, […]

Making Storage Bigger on the Inside

This sponsored post from HPE delves into how tools like HPE Data Management Framework are working to make HPC storage “bigger on the inside” and streamline data workflows. “DMF seamlessly moves data between tiers, whether they’re “hot” tiers based on flash storage, “warm” tiers based on hard drives, or “cold” tiers based on tape.”
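The tiering idea behind that quote can be sketched in a few lines. This is an illustrative policy only, not the DMF API: the tier names come from the post, while the age thresholds are assumptions chosen for the example.

```python
import time

DAY = 86400  # seconds

# Place data on flash ("hot"), disk ("warm"), or tape ("cold") by how
# recently it was accessed. Thresholds below are illustrative assumptions.
TIERS = [("hot", 7 * DAY), ("warm", 90 * DAY)]

def choose_tier(last_access: float, now: float) -> str:
    """Return the tier a file belongs on, given its last-access time."""
    age = now - last_access
    for name, limit in TIERS:
        if age < limit:
            return name
    return "cold"  # anything older than the last threshold goes to tape

now = time.time()
assert choose_tier(now - 1 * DAY, now) == "hot"
assert choose_tier(now - 30 * DAY, now) == "warm"
assert choose_tier(now - 365 * DAY, now) == "cold"
```

A real data management framework layers recall (moving cold data back to a hot tier on access) and metadata tracking on top of a placement policy like this one.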

Improving Speed, Scalability and the Customer Experience with In-Memory Data Grids

Over the last decade, the new anytime, anywhere, personalized experience has driven query and transaction volumes up 10 to 1000x. It has created 50x more data about customers, products, and interactions. It has also shrunk the response times customers expect from days or hours to seconds or less. Download the new report from GridGain to learn how in-memory computing and in-memory data grids are tackling today’s data storage challenges. 

Quantum Drives High Performance Storage at SC17

In this video from SC17, Molly Presley from Quantum describes how the company’s high performance storage systems power HPC. “So why have an autonomous car at a Supercomputing show? The answer is Big Data. The Autonomous Stuff vehicle in this video is actually a rolling software development platform equipped with sensors that generate a whopping 30 Terabytes of data per day. Now just imagine if there were millions of vehicles on the road generating this kind of data. Only HPC could deal with that problem at scale. Companies like Quantum are stepping up to help solve this big data problem in the vehicle, at the edge, and in the datacenter.”

Increasing the Efficiency of Storage Systems

Have you ever wondered why your HPC installation is not performing as you had envisioned? You ran small simulations. You spec’d out the CPU speed, the network speed, and the disk drive speed. You optimized your application and are taking advantage of new architectures. But now, as you scale the installation, you realize that the storage system is not performing as expected. Why? You bought the latest disk drives and expected better-than-linear performance over your last storage system. Read how you can increase the efficiency of your storage system.
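The gap described above is easy to quantify. A quick back-of-the-envelope check compares the naive linear expectation (drive count times per-drive bandwidth) with what the aggregate system actually delivers; the figures here are illustrative assumptions, not measurements of any particular system.

```python
# Naive expectation: aggregate bandwidth scales linearly with drive count.
per_drive_mb_s = 200                 # assumed sequential bandwidth per drive
drives = 100
expected = per_drive_mb_s * drives   # 20,000 MB/s if scaling were linear

# Hypothetical measured result once controllers, networks, and metadata
# servers become the bottleneck at scale.
measured = 12000
efficiency = measured / expected

print(f"expected {expected} MB/s, measured {measured} MB/s, "
      f"efficiency {efficiency:.0%}")
# -> expected 20000 MB/s, measured 12000 MB/s, efficiency 60%
```

Computing this efficiency number for your own installation is a useful first step before buying faster drives: if it is well below 100%, the bottleneck is elsewhere in the storage stack.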