Interview: Simplifying HPC with the IBM Very Large Memory Appliance


IBM recently rolled out what they are calling a Very Large Memory Appliance based on ScaleMP software. To learn more, I caught up with ScaleMP’s CEO, Shai Fultheim.

insideHPC: What is the IBM Very Large Memory Appliance and what is the problem it is designed to solve?

Shai Fultheim: IBM is addressing environments and applications where a user needs terabytes of main memory to quickly gain insight into very large datasets for analytics, genome assembly, or transactional data. Some would call it Big Data – but this term is too confusing. The IBM Very Large Memory Appliance delivers up to 7.5 TB in a single system, ready to go. There is no need for complicated programming, administration, or management to gain access to that much memory. The details can be found at scalemp.com/appliances/ibm.
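To illustrate the "no complicated programming" point, here is a minimal C sketch (my own illustration, not IBM or ScaleMP code): on a single system with terabytes of RAM, an application can grab a terabyte-scale buffer with an ordinary malloc() call, with no MPI ranks or data partitioning. The 1 TiB figure is illustrative and assumes the machine actually has that much memory available (and that OS overcommit does not mask a shortfall until the pages are touched).

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Illustrative size: 1 TiB, well under the appliance's 7.5 TB ceiling. */
    size_t bytes = (size_t)1 << 40;

    /* On a large-memory single system, plain malloc() suffices:
     * no MPI, no sharding, no cluster-aware programming. */
    double *data = malloc(bytes);
    if (data == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed\n", bytes);
        return 1;
    }

    /* Touch the buffer so the OS actually backs it with physical pages. */
    memset(data, 0, bytes);
    printf("allocated and touched %zu bytes in one address space\n", bytes);

    free(data);
    return 0;
}
```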

insideHPC: We usually associate clusters with complexity. How does this solution turn that around and deliver the simplicity of an appliance?

Shai Fultheim: The IBM Very Large Memory Appliance uses cluster components but operates as a single system. It provides up to 32 cores and has scalable RAM options. One could think of it as your largest existing system with a transparent memory booster. IBM also announced an additional appliance aimed at moderate-size cluster users seeking to simplify cluster management.
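As a rough way to see the "operates as a single system" point in practice, the following sketch (assuming Linux/glibc, not vendor code) queries the physical memory the OS reports. On a machine aggregated from cluster components into a single system image, this is the combined RAM of the underlying nodes, presented as one pool.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* sysconf() reports what the OS sees; on a cluster aggregated into
     * a single system image, that is the combined RAM of all nodes. */
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);

    printf("total physical memory: %.1f GiB\n",
           (double)pages * (double)page_size / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```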

insideHPC: What are the advantages for the customer of an integrated appliance that addresses a specific need?

Shai Fultheim: An integrated appliance simplifies the entire user experience: it is ready to go out of the box, minimizing the time from acquisition to production and delivering higher performance. By giving users access to terabytes of memory in a single server, there is no need to wait on disk-based or flash-based solutions that require software tune-ups.

insideHPC: Does the launch of this product signal a growing market for large memory solutions?

Shai Fultheim: Absolutely. The amount of data being analyzed is growing at tremendous rates. Users are generally limited in the amount of data they can analyze by the memory available in their systems. Various studies have shown how much data is being generated each minute, hour, and day. To make sense of this data, new, easy-to-use solutions are needed that can handle that volume of information. Users are looking for RAM speed and are fed up with flash stories and complicated disk arrays. With IBM VLMA you get RAM, you use RAM – and your applications will perform accordingly.

insideHPC: IBM has put a lot of effort into Big Data over the years, with billions of dollars in acquisitions and product R&D. How do you think this solution fits in their strategy?

Shai Fultheim: “Big Data” typically refers to data problems that are easily partitionable (think Facebook), while “Large Data” refers to the vertically scaled data problem. Examples of the latter include genome analysis, complicated analytics, domain decomposition, and more, whether the application is purchased or home-grown. IBM’s Very Large Memory Appliance is an answer that addresses many, many environments.
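To make the partitionable-versus-vertically-scaled distinction concrete, here is a short C sketch of my own (not from the interview). A sum splits cleanly across nodes, but a pointer-chasing workload, of the kind that appears when walking a graph during genome assembly, follows an unpredictable path through the whole dataset, so it runs best when everything sits in one shared memory space.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 1UL << 20;  /* small here; terabyte-scale in practice */
    size_t *next = malloc(n * sizeof *next);
    if (next == NULL)
        return 1;

    /* Build a pseudo-random chain through the array. */
    for (size_t i = 0; i < n; i++)
        next[i] = (i * 2654435761UL + 1) % n;

    /* Partitionable ("Big Data") pattern: each chunk can be summed on a
     * different node and the partial results combined at the end. */
    size_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += next[i];

    /* Vertically scaled ("Large Data") pattern: each step depends on the
     * previous one and can land anywhere in the dataset, so splitting it
     * across nodes turns every hop into a network round-trip. */
    size_t pos = 0;
    for (size_t i = 0; i < n; i++)
        pos = next[pos];

    printf("checksum %zu, final position %zu\n", sum, pos);
    free(next);
    return 0;
}
```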