“Face-Melting Performance” with the Forte Hyperconverged NVMe Appliance from Scalable Informatics

In this video from SC15, Joe Landman from Scalable Informatics describes the company’s new Forte Hyperconverged NVMe Appliance.

“Forte Hyperconverged NVMe Appliances provide the fastest NVMe storage on the market. With Hadoop, Ceph, or BeeGFS preinstalled, Forte is a rethinking of the connectivity between IO devices and CPU/RAM/network. Forte unlocks hardware efficiencies, capitalizing on its strengths to place incredible speed in your hands:

  • Up to 24 GB/s per unit
  • Up to 6M IOPS per unit”
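Those headline numbers pencil out if you assume the unit aggregates roughly two dozen NVMe drives. Here is a minimal back-of-envelope sketch, where the drive count and per-drive figures are assumptions typical of 2015-era NVMe SSDs rather than published specs:

```python
# Back-of-envelope for the headline numbers. The drive count and the
# per-drive figures are assumptions (typical 2015-era NVMe SSDs), not
# published Scalable Informatics specs.
DRIVES_PER_UNIT = 24
PER_DRIVE_SEQ_GBS = 1.0        # assumed sequential throughput, GB/s per drive
PER_DRIVE_RAND_IOPS = 250_000  # assumed 4 KiB random-read IOPS per drive

print(f"aggregate bandwidth: {DRIVES_PER_UNIT * PER_DRIVE_SEQ_GBS:.0f} GB/s")     # 24 GB/s
print(f"aggregate IOPS:      {DRIVES_PER_UNIT * PER_DRIVE_RAND_IOPS / 1e6:.0f}M") # 6M
```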

Transcript:

insideHPC: What is the big thing that Scalable Informatics is showcasing this year at SC15?

Joe Landman: What we’re showcasing this year is – what we’re jokingly calling – face-melting performance. What we’re trying to do is make extreme performance available at a very aggressive price point, and at a very aggressive space point, for end users. What we’ve been working on for the past couple of months is, basically, building an NVMe-type unit. This NVMe unit connects flash devices through a PCIe interface to the processor complex. In addition to that, we’re putting a lot of networking horsepower on this. So, we’ve got 100 gigabit coming out of the back of it – possibly multiple ports, if people have applications for that – as well as a lot of processing power – imagine up to 36 Intel processor cores – and up to a terabyte of memory, in a single 2U package.

The other aspect of this is the capacity of these SSDs. We’re typically looking at 480 GB, 960 GB, 1.92 TB, or 3.84 TB SSDs – so we can build these units in 12-terabyte configurations, as well as 24, 48, and 96 terabytes. And in relatively short order – once our friends at the companies who build these SSDs for us come out with them and make my day – we’ll be able to put 16-terabyte NVMe SSDs in this, for a whopping 300-plus terabytes in a 2U package.
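The capacity tiers Joe lists fall out of simple multiplication if you assume 24 drive bays per 2U unit – a bay count inferred from the quoted numbers, not stated in the interview:

```python
# Capacity tiers implied by the interview, assuming 24 drive bays per 2U
# unit (the bay count is inferred from the quoted tiers, not stated).
BAYS = 24
for drive_tb in (0.48, 0.96, 1.92, 3.84, 16.0):  # 16 TB is the future drive Joe hopes for
    print(f"{drive_tb:5.2f} TB drives -> ~{BAYS * drive_tb:.0f} TB per 2U unit")
# Prints ~12, ~23, ~46, ~92, and 384 TB; the quoted 12/24/48/96 TB tiers
# round these to nominal sizes, and 384 TB is the "300 plus terabytes".
```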

insideHPC: Joe, that’s a lot of density for high-speed storage – but how fast are we talking out the back end of this thing? How fast can it go?

Joe Landman: That’s a very good question, Rich. Basically, what we’ve done is work on tuning all aspects of the IO path here – making sure that all the SSDs can communicate with the processor complex and the networking complex at full speed – because this is a hyperconverged system. We want to make sure there are no bottlenecks anywhere in the system. Within this system, we can get a sustained 24 gigabytes per second – and we can get that while doing 5.5 million IOPS. This is available now as an end-user-realizable scenario – this is not a theoretical SPEC SFS-type benchmark – this is what end users will experience. And they can run their applications either directly on this or over the network – and the network is two 100 GigE or 100 Gb InfiniBand ports coming out the back. So, we can drive those at 20 gigabytes a second. It’s extraordinarily well-matched between the IO bandwidth and IOPS and the back-end networking.
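As a rough consistency check, the storage and network figures do line up. In the sketch below, the 4 KiB IO size and the ~80% wire efficiency are assumptions, not measurements from the interview:

```python
# Rough consistency check on the quoted figures. The 4 KiB IO size and
# the ~80% wire efficiency are assumptions, not from the interview.
iops = 5.5e6
io_bytes = 4 * 1024                  # assumed 4 KiB per IO
print(f"IOPS as bandwidth: {iops * io_bytes / 1e9:.1f} GB/s")  # ~22.5 GB/s, near the 24 GB/s peak

link_gbits = 2 * 100                 # two 100 Gb ports out the back
efficiency = 0.8                     # assumed protocol/framing overhead
print(f"network ceiling:   {link_gbits / 8 * efficiency:.0f} GB/s")  # 20 GB/s, as quoted
```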

insideHPC: Joe, you and I have had this conversation before, about IOPS and fake IOPS. You see these numbers out there in the world – are you talking about a real six and a half million IOPS?

Joe Landman: We are talking about IOPS in the five-plus-million region. The first time I ran the benchmark, what was left of my hair got blown back – I ran it again, and it all fell out. I ran it again and again – I showed my other team members, and they all said, “This is crazy. There’s no way.” Yet, this is exactly what we’re observing, and we’re seeing it in real applications. When we run on this system, we have so much power – so much incredible IO power – available that applications which would be slow on any other platform are incredibly fast on this thing.

insideHPC: I wanted to ask you about applications. What kind of users would need this kind of density? This kind of speed that we’re talking about?

Joe Landman: Certainly. Like I said last year – like I say almost every year – the argument I make is that anyone who’s doing any sort of analytics, with data at any scale, needs a huge amount of IO bandwidth. They need a huge amount of processing and networking bandwidth, and what we’re providing in this is an almost unlimited supply of both – or all three, I should say. We’re providing incredible IO bandwidth, incredible IOPS, and incredible networking performance, with a very powerful computer within it. So, financial services analytics are a great example of this – IoT analytics, databases, Hadoop, any sort of big data, data-intensive analytics – this thing just roars on it. It’s fire-breathing, face-melting – it’s incredible.

insideHPC: All right, Joe, just to wrap up – you call this thing Forte. What do you mean by Forte? Isn’t that what you guys are about – this kind of speed?

Joe Landman: Exactly. Performance is our forte. As we like to tell people – it is simply faster.

See our complete coverage of SC15 * Sign up for our insideHPC Newsletter