Interview: Scalable Informatics Breaks the IOP Barrier


I think it’s fair to say that I/O is the defining characteristic of a high-performing HPC system. Scalable Informatics is a small company that has been providing world-class, high-performance storage solutions to the HPC community for many years. To learn what the company is up to this year at SC12, I caught up with the company’s founder & CEO, Joe Landman.

insideHPC: I’ve been reading your blog at Scalability.org this week. What is the “IOP barrier” and how do you get beyond it?

Joe Landman: We define the storage bandwidth wall as the effective time to read or write a storage system at its maximum full bandwidth. The IOP barrier is the rate at which your system can handle effectively random, uncached IO patterns … similar to what you get when you take a system capable of high streaming bandwidth and place it in a situation of many simultaneous users performing IO that hits the storage system in a pattern that looks fairly random. Tiering really doesn’t help in this situation. Adding a tier or cache is only a win if you are going to reuse that data on the read side, or not overflow the cache before you can flush it on the write side.

The IOP barrier occurs on the write side, when you overflow that cache, and you need to wait for each write to be serviced. On the read side, the barrier occurs when you have to wait for each cache line to be invalidated, and add in all the additional cache management overhead. Those rates represent an asymptotic performance barrier in terms of an overall service rate.
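
To make these two ceilings concrete, here is a rough back-of-the-envelope sketch. Every number in it is a hypothetical placeholder I've chosen for illustration, not a Scalable Informatics figure: the bandwidth wall is just capacity divided by streaming bandwidth, and the IOP barrier caps random-I/O throughput at the IOP rate times the request size.

```python
# Back-of-the-envelope model of the "bandwidth wall" and the "IOP barrier".
# Every number here is a hypothetical placeholder, not a measured figure.

capacity_bytes = 240e12        # assumed capacity: 240 TB
stream_bw = 10e9               # assumed sustained streaming bandwidth: 10 GB/s
random_iops = 50_000           # assumed sustainable random, uncached IOPs
request_bytes = 4096           # 4 KiB random requests

# Bandwidth wall: time to read or write the entire system at full streaming speed.
wall_hours = capacity_bytes / stream_bw / 3600
print(f"bandwidth wall: {wall_hours:.1f} hours")

# IOP barrier: once the cache tier overflows, each random request must be
# serviced individually, so throughput is capped by the IOP rate, not bandwidth.
random_bw = random_iops * request_bytes
print(f"random-I/O ceiling: {random_bw / 1e6:.0f} MB/s "
      f"({random_bw / stream_bw:.1%} of streaming bandwidth)")
```

With these assumed numbers, random I/O sustains only about 2% of the streaming bandwidth, which is why the IOP rate, not the data pipe, becomes the asymptotic limit.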

So, how do you get past it? Here are two mechanisms:

  1. Eliminate the cache tier. It hurts you for very large random IO that will overflow the cache tier.
  2. Build a storage system that reduces the time to service each request, and increases the service rate.

This is what we did in siFlash.
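
As a rough illustration of the second mechanism, the sketch below shows how the asymptotic service rate follows from per-request service time and the number of requests the system can work on at once. The latency and concurrency values are assumptions for illustration only; the point is simply that cutting service time lifts the ceiling directly.

```python
# Sketch of the asymptotic service-rate ceiling: rate = concurrency / service_time.
# Latency and concurrency values are illustrative assumptions only.

def service_rate(service_time_s: float, concurrency: int) -> float:
    """Requests per second when 'concurrency' requests are in flight and each
    takes 'service_time_s' seconds to complete."""
    return concurrency / service_time_s

# A disk-backed tier with milliseconds of per-request service time...
print(f"{service_rate(5e-3, 32):,.0f} req/s")     # ~6,400 req/s

# ...versus a flash-backed design with a far lower per-request service time.
print(f"{service_rate(100e-6, 32):,.0f} req/s")   # ~320,000 req/s
```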

insideHPC: What’s new and exciting about the Extreme Density JackRabbit storage device?

Joe Landman: Basic takeaway: 240TB in 4U of rack space, with considerable computing and IO power.

Longer version: Scalable’s history of tightly coupling computing and storage continues with this system, and it represents a significant capability step function for us. In one fell swoop, we’ve done the following:

a) increased processor core count
b) increased memory capacity
c) increased number of PCIe lanes for IO

This results in a system that is well designed to process huge data, in part due to the tight coupling between the very fast and numerous processor cores, and the massive data pipes to the storage.

insideHPC: What is the siFlash SSD Flash Array and what is it for?

Joe Landman: It is a tightly coupled, extreme-performance storage system, designed to maximize both the amount of data flow per unit time and the rate at which requests can be serviced. That is, it seeks to remove barriers to working with your data, streaming or otherwise.

insideHPC: What kind of performance are you seeing on real applications?

Joe Landman: siFlash has been used in a range of applications, from file and block service through financial data processing. We see sustained performance of 3+ GB/s for NFS-exported file systems, and locally run code achieves about 80-90% of maximum performance.
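
For readers who want to sanity-check sustained-throughput numbers on their own hardware, here is a deliberately crude streaming-write timing sketch. It is not how the figures above were measured; the target path is a hypothetical mount point, and a real benchmark would use a direct-I/O tool such as fio to take the page cache out of the picture.

```python
# Crude sequential-write throughput check. Illustrative only: the page cache
# and file system behavior make this optimistic compared to a proper
# direct-I/O benchmark (e.g. fio).
import os
import time

path = "/path/to/fast/storage/testfile"   # hypothetical mount point
block = b"\0" * (1 << 20)                 # 1 MiB writes
total_bytes = 4 * (1 << 30)               # write 4 GiB in total

start = time.monotonic()
with open(path, "wb") as f:
    for _ in range(total_bytes // len(block)):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                  # force data out to stable storage
elapsed = time.monotonic() - start

print(f"{total_bytes / elapsed / 1e9:.2f} GB/s sustained write")
os.remove(path)
```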

insideHPC: What file systems do these new solutions support and will users of varied file systems still see superior performance?

Joe Landman: Here are three things to consider:

  • All systems support Linux and Illumos-based operating systems (SmartOS, Illumian, and soon OmniOS).
  • All systems are usable in our siCluster system with Lustre, GlusterFS, Fraunhofer Parallel File System, Ceph, and OrangeFS.
  • All of these systems outperform our previous generation by a substantial margin, so users should see significant performance advantages regardless of which file system they run.

Check out Scalable Informatics this week at SC12 booth #4154.