While HPC is all about performance, optimizing for speed has to be weighed against the need for storage capacity, density, and one’s overall budget. Now Scalable Informatics has developed the FastPath Unison storage appliance, a device that hits the sweet spot for these parameters. To learn more, we caught up with the company’s CEO, Joe Landman.
insideHPC: What is the FastPath Unison storage appliance and what makes it special?
Joe Landman: FastPath Unison Storage is our converged, scale-out, tightly coupled storage and computing platform, designed specifically to scale storage capacity, bandwidth, IOPs, computing, and network fabric simultaneously. It is designed to minimize the complexity of building and scaling systems up and out, leveraging our Scalable Informatics OS technology (SIOS, a Linux-based platform).
SIOS enables us to scale our environment painlessly, including bare metal (our FastPath Unison nodes), VMs, and containers. SIOS running on Unison nodes is heavily tuned for performance.
Our FastPath Unison nodes are designed to provide unmatched raw and usable IO performance and industry-leading density, tightly coupled to best-in-market computing and networking capabilities. These systems support spinning disk (SAS and SATA) as well as SSD (SAS and SATA).
insideHPC: So you are delivering a petabyte in half a rack. Couldn’t anyone cobble this together? What is the secret sauce?
Joe Landman: What makes these different at the hardware level is attention to the design, such that we purposefully avoid bottlenecks common to other systems that people cobble together. We see many designs in the market for similar concepts (we don’t claim to be the first) that often have to use many more systems than we require to achieve similar performance. And that is one of the critical issues.
Tremendous density is meaningless if you can’t move data in or out at a reasonable data rate. And as you scale up your storage, the definition of a reasonable data rate should scale up as well.
This goes to a concept we use: the storage bandwidth wall height. It’s the ratio of a system’s capacity to its bandwidth, and it gives you a rough measure of how quickly you can fill or empty a storage system. A 1PB system with 20GB/s of bandwidth would have a storage bandwidth wall height of 50,000 seconds. That is, assuming you could maintain the 20GB/s bandwidth across the entire capacity, it would take about 50,000 seconds, or just under 14 hours, to fill or empty.
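The wall-height arithmetic above is simple enough to sketch directly. The function name below is a hypothetical illustration, not anything from Scalable Informatics' software; it just reproduces the capacity-over-bandwidth ratio and the 1PB / 20GB/s example from the interview (using decimal units, 1 PB = 1,000,000 GB):

```python
def bandwidth_wall_height(capacity_gb: float, bandwidth_gb_s: float) -> float:
    """Storage bandwidth wall height: seconds to fill or empty the
    system, assuming the bandwidth holds across the full capacity."""
    return capacity_gb / bandwidth_gb_s

# The example from the interview: 1 PB at a sustained 20 GB/s.
seconds = bandwidth_wall_height(capacity_gb=1_000_000, bandwidth_gb_s=20.0)
print(seconds)          # 50000.0 seconds
print(seconds / 3600)   # ~13.9, i.e. just under 14 hours
```

The same arithmetic shows why a cobbled-together system at a quarter of that bandwidth (5 GB/s) has a wall height of 200,000 seconds, well over two days.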
In comparison, if you use one of a range of cobbled together systems, chances are you’ll achieve a bit less than 20GB/s, likely around 1/4 to 1/3 of this. And as you scale up your storage, you won’t achieve much more than that. With FastPath Unison, you will see bandwidth scale with capacity.
We have single-rack systems in the field that sustain more than 40 GB/s for very large writes from a small number of clients. This is unheard-of performance density, and it’s not even our best effort.
Our results on benchmarks such as STAC M3 show our FastPath Cadence appliance, utilizing the same Unison node design and the same SIOS stack, dominating all published results more than a year after its release. This single 4U unit beat rack-level appliances on 10 of 17 benchmarks, with the other systems often having 2-4x the number of CPU cores and 2-4x the RAM, as well as 8-20x the number of disks and similar numbers of SSDs.
More simply, design matters. Inefficient, cobbled-together designs will often have a larger acquisition cost and TCO than a well-designed appliance such as this. We view performance in these terms: if you can achieve your performance goals in less rack space, and with fewer systems, you can often save considerable money both up front and in the long term.
insideHPC: What kind of users would need this performance and density at such a low cost?
Joe Landman: Our argument is that everyone in Big Data, Finance, HPC, Media, Semiconductors, Oil and Gas, … pretty much everyone, everywhere, needs this combination of performance and density.
Scaling performance up without density gives rise to caching and tiering band-aids that don’t work terribly well in a growing number of use cases. You need to enforce locality of reference for such things to work well, which, for even reasonably sized storage and analytical tasks, isn’t likely to be the norm. And with unstructured data growing about 60% per year, the problem gets worse over time. This is why flash as a primary storage medium is so interesting to larger swaths of users.
Scaling density up without scaling performance at the same rate means you are making your data that much more difficult to access. This is a problem if you need to work on a terabyte of data and have less than a gigabyte per second of access to it. Now make that a petabyte of data with less than 10GB/s of access to it. A few examples of how customers use the platform:
- FastPath Unison platform, plus excellent time series analytics tools such as kdb+ from Kx*, are the basis for the FastPath Cadence time series appliance. This appliance is used for analyzing time series big data sets by customers on Wall Street and elsewhere in the financial services industry.
- FastPath Unison platform, utilizing disk and flash storage, is used by a number of groups performing large-scale genomic, proteomic, and phylogenetic analysis at a number of research organizations.
- FastPath Unison platform, utilizing disk storage, is in use by a semiconductor industry organization to store inbound data sets at more than 40 GB/s.
- FastPath Unison platform, utilizing thinner flash storage and computing nodes, is in use by a financial services firm, together with Scalable’s siRouter SDN hardware, to provide a cloud computing platform and set of cloud based exchanges on multiple continents.
- Basically, any group that has a great deal of data and needs to process it in a reasonable period of time, at high performance and low latency, is a candidate for FastPath Unison. In our view, this is pretty much everyone.
insideHPC: The appliance sounds very flexible. Why does the device have a BeeGFS native client?
Joe Landman: We’ve been working with the BeeGFS (f.k.a. FhGFS) team for a while, and have found the system to be tremendously flexible. Specifically, after light tuning, BeeGFS was able to drive our FastPath Unison nodes at the highest sustained speeds and efficiency we’ve recorded. The combination of an excellent scale-out file system with our tremendously powerful converged storage and computing nodes results in unbeatable sustained, end-user-achievable performance.
insideHPC: Why not just go all flash SSDs for something like this?
Joe Landman: But we do 😀, though it would be a little more than the $250k USD price for entry. One customer is using the all-flash version of this to build their cloud. Another is using the all-flash version (having just tripled their installed base) to provide very high performance storage for life science computing.
The beautiful aspect of the SSD version is that the IOP rates are almost unbelievable. We’ve had customers running on the FastPath Cadence unit sustain more than 1.5M IOPs for very random, seek-heavy code.
We stealthily showed off what we could do with this last year with the SiCloud platform. Performance this year is … somewhat better … and we are happy to speak to people about what we can do if they have needs that require this level of performance. FastPath Unison can handle this.
insideHPC: What kinds of reactions are you getting from your customers about the appliance?
Joe Landman: Very positive reactions; we had our first request for a formal quotation within hours of the notice going up. We’ll have more information going into SC14, but we invite people to try this out. We’ll have an engineering version of this at our SC14 booth #3053, and would invite everyone to come by and see it. Additionally, we are available to answer questions and provide formal quotations. All a user has to do is reach out to us at firstname.lastname@example.org, or fill out the form, and someone will reach out to them to discuss how FastPath Unison could help.
Download the insideHPC Guide to HPC Storage