Last month Adaptive Computing announced the latest release of the Moab Adaptive Computing Suite and a new product, Moab Viewpoint. The new features are aimed principally at banking, financial services and enterprise customers, but that doesn’t mean that Adaptive is walking away from its HPC roots. I talked with Peter ffoulkes (yes, it’s supposed to be lower case), Adaptive’s Vice President of Marketing, to find out what’s in the new release and where the company is headed.
In 2009 Adaptive Computing adopted its new name and embarked on a strategy that reflected the growth in its customer base from HPC to a mix of HPC and enterprise business. Since then the company has been building a name for itself among enterprise customers who need to manage their infrastructure as a service.
An important new feature Moab 5.4 brings for this crowd is the notion of a transactional workflow, which lets customers build complex chains of action and reaction in response to automatically detected events that impact the enterprise IT infrastructure.
For example, let’s say you are a web company that sees a spike in activity following the launch of a new product. Moab 5.4 will see the surge and know that it needs to dynamically re-provision servers and add them to the web server pool. But let’s say that your product includes video instructions that all those new customers are going to watch — in this case Moab will not only avoid stealing resources dedicated to the video pool when it’s looking for new web servers, it will also add video-serving resources in anticipation of the follow-on load from the web spike. This gives customers a way to watch and respond to events as a system, rather than in isolation. A smart addition.
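To make the idea concrete, here is a toy sketch of that kind of coupled event-and-response rule. This is not Moab syntax or Moab's actual logic — the pool names, thresholds, and provisioning steps are all hypothetical — just an illustration of treating two pools as one system.

```python
# Hypothetical illustration of coupled pool scaling (NOT Moab syntax).
POOLS = {
    "web":   {"servers": 10, "load": 0.55},
    "video": {"servers": 4,  "load": 0.40},
}

SCALE_THRESHOLD = 0.80   # react when a pool passes 80% load (made-up value)
SPARE_SERVERS = 6        # idle machines available for re-provisioning

def react_to_spike(pools, spare):
    """Scale the web pool on a load spike, and proactively scale the
    video pool too, since new web traffic drives video views downstream."""
    actions = []
    if pools["web"]["load"] > SCALE_THRESHOLD:
        actions.append(("provision", "web", 2))
        # Treat the pools as one system: anticipate the knock-on load
        # on video serving instead of waiting for it to spike too.
        actions.append(("provision", "video", 1))
    for _, pool, n in actions:
        take = min(n, spare)       # never take more than is actually free
        pools[pool]["servers"] += take
        spare -= take
    return actions, spare

POOLS["web"]["load"] = 0.92       # simulated product-launch surge
actions, remaining = react_to_spike(POOLS, SPARE_SERVERS)
print(actions, remaining)
```

The point of the sketch is the second `actions.append`: the video pool grows because of the *web* event, which is the "as a system, rather than in isolation" behavior described above.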
Also new in this version of Moab is support for dynamically provisioning and migrating virtual machines. Adaptive’s Peter ffoulkes says that this is in direct response to conversations they’ve had with their customers, and a reflection of the dramatic increase in the use of virtual servers to get closer to full utilization of their pricey infrastructure. Moab’s Services Manager now hooks into IBM’s open source xCAT cluster manager to provision virtual machines based on VMware, KVM, and Xen (with support for Hyper-V coming). One possible use? As the load varies throughout any given day Moab may migrate VMs from several physical servers onto one central server, either shutting down or redeploying the newly freed resources. When the VMs support it, Moab will use live migration to make the move transparent to users of that VM’s services.
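The consolidation scenario can be sketched in a few lines. Again, this is an illustration of the idea, not Moab's implementation or its API — the host and VM names are invented, and real consolidation would weigh capacity and perform a live migration per VM.

```python
# Illustrative sketch of VM consolidation (not Moab's actual logic or API).
hosts = {
    "host-a": ["vm1"],          # each host running a few lightly loaded VMs
    "host-b": ["vm2"],
    "host-c": ["vm3", "vm4"],
}

def consolidate(hosts, target):
    """Move every VM onto `target`; return the hosts that end up empty."""
    freed = []
    for host, vms in list(hosts.items()):
        if host == target or not vms:
            continue
        hosts[target].extend(vms)   # in practice: a live migration per VM
        hosts[host] = []
        freed.append(host)          # candidate for shutdown or redeployment
    return freed

freed = consolidate(hosts, "host-a")
print(freed)                # the now-idle machines
print(hosts["host-a"])      # all VMs packed onto one server
```

The freed hosts are exactly the resources the article describes Moab shutting down or redeploying during the quiet parts of the day.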
ffoulkes says that Adaptive has also spent a lot of time tuning the internals of Moab: the 5.4 release uses 80% less memory than previous versions. This has the real impact of allowing a single instance of Moab 5.4 to manage much larger environments.
Adaptive Computing is also expanding their portal ambitions with the release of Moab Viewpoint 1.0, a webby interface for the Moab Adaptive Computing Suite of products. The Java-based Access Portal and command line interfaces are still there as well, but Viewpoint introduces what ffoulkes described as a “Web 2.0-like” feel to the creation and management of virtual private clouds.
HPC-ers, fear not.
The Moab Adaptive HPC Suite is still available, and ffoulkes was careful to sketch out the lines from these new features to the HPC community. Certainly the dramatic reduction in memory footprint is a bonus for everyone, and according to ffoulkes this release includes other tuning of the internals as well.
We also talked about some potential new uses of HPC where the dynamic resource provisioning and workflow management that the new Moab offers could be a real benefit. For example, some are experimenting with the deployment of crisis response HPC centers that have to be able to turn on a dime to provide decision support in emergency situations: earthquakes, fires, and the like. Adaptive’s software could be used to manage that infrastructure, automatically shifting it from routine operations to the computations needed to address the emergency of the moment.
South African HPC
And speaking of HPC, Peter and I had talked the week before about how South Africa’s largest supercomputing facility, the Centre for High Performance Computing (CHPC), is using the Moab Adaptive HPC Suite to manage its “zoo” of architectures.
CHPC has a variety of architectures including AMD Opteron, Intel Xeon, IBM PowerPC, IBM Power 4+ and Sun Microsystems SPARC processor-based systems running a mixture of operating systems including UNIX (Solaris), Linux (SLES) and Microsoft Windows HPC Server 2008. The center uses Moab Adaptive HPC Suite to integrate and manage all of these resources (and their respective batch systems) as one pool, automatically directing tasks to resources as they become available and relieving users of the burden of tracking which processors are available on which machines. They are also taking advantage of Adaptive’s capabilities to dynamically re-provision portions of their clusters from Windows to Linux, avoiding the need to guess ahead of time what the demand for either operating system is going to be.
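The matchmaking at the heart of that setup can be sketched as follows. This is a hedged illustration of the scheduling idea, not CHPC's configuration or Moab's interface; the node names and job attributes are made up.

```python
# Toy sketch of routing jobs across a heterogeneous "zoo" of machines
# (an illustration only — not CHPC's setup or Moab's interface).
NODES = [
    {"name": "opteron-01", "arch": "x86_64", "os": "linux",   "free": True},
    {"name": "sparc-01",   "arch": "sparc",  "os": "solaris", "free": True},
    {"name": "xeon-01",    "arch": "x86_64", "os": "windows", "free": False},
]

def place(job, nodes):
    """Return the first free node matching the job's needs, or None.
    Users state requirements; the scheduler tracks the machines."""
    for node in nodes:
        if (node["free"]
                and node["arch"] == job["arch"]
                and node["os"] == job["os"]):
            node["free"] = False
            return node["name"]
    return None   # queue the job until a matching node frees up

print(place({"arch": "x86_64", "os": "linux"}, NODES))    # matched node
print(place({"arch": "x86_64", "os": "windows"}, NODES))  # queued: busy
```

A user submits requirements, not machine names — which is exactly the burden the article says Moab lifts. The dynamic Windows-to-Linux re-provisioning CHPC uses goes a step further: when no matching node is free, the pool itself can be changed to create one.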