
Avere Introduces FXT Virtual Edge Filer for Amazon EC2

In this video from SC14, Jeff Tabor from Avere describes the company’s new Virtual FXT Edge Filer, a software-only product for the Amazon EC2 computing cloud.

“With this software-only version of Avere’s FXT Edge Filer series, companies can finally connect the dots between the compute cloud, storage cloud, and on-premises storage without sacrificing performance, worrying about security, or breaking the bank,” said Ron Bianchini, president and CEO, Avere Systems. “Avere is excited that this virtual NAS solution will enable companies to take advantage of the flexibility and enormous scale of cloud computing with no radical changes to applications or storage infrastructure. For many customers, this enables them to realize the promised benefits of the cloud.”

Full Transcript:

insideHPC: We couldn’t come by without saying hello and seeing what’s new with Avere. So, what are you guys up to this year?

Jeff Tabor: So we just announced our Virtual Edge Filer. It’s a software-only version of our product that runs in the Amazon EC2 compute cloud.

insideHPC: And so this is a high-performance, shared kind of storage space? Or how would you describe it?

Jeff Tabor: Yeah. So, it acts just like the physical Edge Filer that we’ve been shipping for more than five years, and it runs on high-performance instances in EC2. We use the memory-optimized instances, and we configure those with SSDs from EBS, so you basically get the same hardware in the EC2 cloud as you get from our physical appliances. Then you run our software, and what our software does is automatically cache the active data up in the cloud. It pulls this data either from the Amazon S3 storage cloud or from your data center, from the NAS or object systems in your data center, and the goal there is to hide the latency to the storage. It holds the data in the EC2 compute cloud, really close to where the customer’s application is running, because hiding that latency lets the application run at peak performance.
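The pattern Tabor describes, keeping a hot working set close to the compute while misses are fetched from a slower backing store, is essentially a read-through cache with eviction. A minimal sketch in Python, purely illustrative and not Avere’s implementation (the class, names, and `fetch_from_backing_store` callback are all hypothetical):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Toy read-through LRU cache: serve hot data locally, fetch misses
    from a slow backing store (e.g. S3 or an on-premises NAS)."""

    def __init__(self, fetch_from_backing_store, capacity=3):
        self.fetch = fetch_from_backing_store  # pays the full latency on a miss
        self.capacity = capacity
        self.cache = OrderedDict()             # order tracks recency of use

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # hit: fast local read
            return self.cache[key]
        value = self.fetch(key)                # miss: fetch once over the WAN
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least-recently-used entry
        return value
```

Repeated reads of the same key hit the cache and never touch the backing store again, which is the sense in which the latency to the storage is "hidden" from the application.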

insideHPC: Well Jeff, we’re here at Supercomputing, right? It’s all about speed? When we talk about optimized NAS, why wouldn’t you want to optimize your NAS? It just seems to me to be self-evident.

Jeff Tabor: It is. But as it turns out, we’re the only ones really doing it in this way. We’re the only scalable NAS solution in the cloud. What this means is you can start with three instances running our software and scale that out to 50 instances, so you can deliver really, really high performance for applications you move into the cloud. We come to Supercomputing because we’ve been working with these customers for more than five years now: we’re working with people doing genomics analysis in life sciences, seismic processing in oil and gas, software builds, and financial modeling. They want the ability to run these apps in the compute cloud, but the problem is that the latency to where the data lives is too long. So we hide the latency and scale to massive performance capacity, up to 50 nodes, to really deliver the performance for those performance-hungry applications.

insideHPC: Well, I’ve been hearing a lot this week about HPC applications moving to the cloud, and barriers coming down. It sounds like this is an enabling technology for that very thing.

Jeff Tabor: It is. It is. People are a little leery of the cloud, because it’s not on their premises and they can’t wrap their arms around it, and they’re a little concerned about its security. But bursting into the cloud is a good place for these HPC people to dip their toes into the water. There aren’t as many perceived security holes there, because your data is only up there briefly, being operated on or processed in the compute cloud. It’s not being stored long term, so they don’t think it’s as likely to be subject to a hacker attack or something like that. So it’s a good place for the HPC community to get started with the cloud.

insideHPC: Yeah. Now that we’ve got these services like AWS, how are they reacting to what you’re adding to what they bring to the table?

Jeff Tabor: They love it. They love it. Basically they want everyone’s data in the cloud, and they want to run everyone’s applications in the cloud. Up until now, there have been devices called cloud gateways: Amazon itself had a cloud gateway, and some start-ups have cloud gateways, but they don’t scale. They tend to be single-controller solutions running ZFS, and ZFS has no clustering capabilities, so they can’t scale to the performance levels that the HPC crowd needs. Furthermore, they don’t have high availability, like active-active failover, so they’re not very robust solutions. Ours has both the scaling for performance and the clustering for high availability, so it’s always up and always humming along.

See our Full Coverage of SC14.