Interview: Univa Steps Up with Navops 2.0 for Moving HPC Workloads to the Cloud


Gary Tyreman is CEO of Univa.

Today Univa announced the newest release of its popular Navops Launch cloud-automation platform. Navops Launch 2.0 delivers new capabilities that simplify the migration of enterprise HPC workloads to the cloud while reducing costs by 30-40 percent.

To learn more, we caught up with Univa CEO Gary Tyreman.

insideHPC: With this release, Univa aims to simplify the migration of enterprise HPC workloads to the cloud. As the long-time provider of the Univa Grid Engine workload manager, what prompted you to bring Navops to market?

Gary Tyreman: The journey to bring Navops Launch to market started around two years ago, when the market was just beginning to integrate HPC cloud into compute-intensive datacenter strategies. What has changed over the past 18 months, at least according to our market and customer surveys, is interest in HPC cloud integration at a larger scale, as strictly on-premise infrastructure cannot keep up with the speed and scale at which organizations need to run these workloads cost-effectively. Univa has invested in and sold cloud automation products for years, and Navops Launch, the outcome of that development and learning, is not an entirely new product. Navops Launch 2.0 is an enhancement of our current Navops Launch solution that adds capabilities for cost control and for associating cloud spend with budgets, workload and resource automation through composable automation applets, and integrated support for the Slurm workload scheduler, all of which simplify the migration of enterprise HPC workloads to the cloud for our users. Today, a considerable percentage of our customers are looking at, evaluating or using HPC cloud.

insideHPC: What new capabilities does Navops 2.0 include to help reduce spend by 30-40 percent by rightsizing cloud resources?

Gary Tyreman: Through speaking with our customers and industry analysts, we have found that running HPC workloads in the cloud is, on average, five times as expensive as running them in an organization's current, highly efficient compute-intensive datacenter, which poses a challenge to organizations looking to migrate workloads to the cloud. And, with over 35 percent of cloud spend being wasted, Navops Launch is the only HPC cloud-spend management platform on the market today that ensures fully automated resource provisioning. Understanding the current and/or future cost of an application or project gives organizations visibility into their use of cloud resources, so that cost isn't a black-box surprise (or as simple as a 'cloud queue') at the end of the billing cycle. Ultimately, leveraging automation and policies to manage the flow of data and workloads to the cloud reduces waste and unwanted usage.

insideHPC: In what areas do you see customers spending too much on cloud migration and how does Navops automation help?

Gary Tyreman: Once we became aware of the direction our customers were taking when migrating their workloads to the cloud, we developed and worked through a list of requirements, using consultations and proofs of concept, to ensure their needs were being met. Three things stood out as consistent threads, beyond the basic capability of provisioning workloads in the cloud: automation, self-serve and cost management.

In this context, automation is the ability to define and codify the behavior of the system. Navops Launch connects to multiple data sources (the scheduler, the cloud service provider, storage and other third parties) and uses that information to make decisions, such as adding a special instance type for a workload waiting in the queue, moving data, or shutting down idle resources.
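To make the policy idea concrete, here is a minimal sketch in Python of the kind of rule Tyreman describes. The names (QueueSnapshot, NodeStatus, apply_policy) are hypothetical and illustrative only; this is not the Navops Launch applet API.

    # Hypothetical sketch of an automation policy loop; not the Navops Launch API.
    # All class and function names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class QueueSnapshot:
        pending_jobs: int               # jobs waiting in the scheduler queue
        requested_instance_type: str    # instance type those jobs ask for

    @dataclass
    class NodeStatus:
        node_id: str
        idle_minutes: int

    def apply_policy(queue: QueueSnapshot, nodes: list[NodeStatus],
                     idle_limit: int = 30) -> list[str]:
        """Return a list of actions based on scheduler and cloud state."""
        actions = []
        # Scale up: add a special instance type when work is waiting in the queue.
        if queue.pending_jobs > 0:
            actions.append(f"provision:{queue.requested_instance_type}")
        # Scale down: shut down resources that have sat idle past the limit.
        for node in nodes:
            if node.idle_minutes >= idle_limit:
                actions.append(f"terminate:{node.node_id}")
        return actions

    # Example: two pending GPU jobs and one node idle for an hour.
    print(apply_policy(QueueSnapshot(2, "gpu-large"),
                       [NodeStatus("node-17", 60)]))
    # -> ['provision:gpu-large', 'terminate:node-17']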

The requirement for a "self-serve" solution is understandable as well. Most enterprises have power users to whom they would like to give on-demand access to cloud resources so that projects remain productive.

Finally, cost management is where Navops Launch sets itself apart from the competition; it is the biggest and most valuable feature of the solution. Navops Launch 2.0 enables enterprises to establish and track cloud spending by workload, user, project or department, and our customers have gained a renewed sense of control over cloud costs. When coupled with automation, an enterprise can fine-tune instance types and sizing, optimize data movement, and reduce idle time, storage and network costs.
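As a rough illustration of the chargeback idea, the sketch below rolls up tagged spend records per project against a budget. The record fields and structure are assumptions made for illustration, not Navops Launch internals.

    # Hypothetical sketch of tagging cloud spend and rolling it up per project;
    # the field names and structure are assumptions, not Navops Launch internals.
    from collections import defaultdict

    spend_records = [
        {"project": "crash-sim", "user": "alice", "usd": 412.50},
        {"project": "crash-sim", "user": "bob",   "usd": 130.00},
        {"project": "genomics",  "user": "carol", "usd": 980.25},
    ]

    budgets = {"crash-sim": 500.00, "genomics": 1500.00}

    # Aggregate spend by project tag.
    totals = defaultdict(float)
    for rec in spend_records:
        totals[rec["project"]] += rec["usd"]

    # Report each project against its budget line.
    for project, spent in totals.items():
        budget = budgets[project]
        flag = "OVER BUDGET" if spent > budget else "ok"
        print(f"{project}: ${spent:.2f} of ${budget:.2f} ({flag})")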

insideHPC: How were customers going about the process of cloud migration before the advent of Navops?

Gary Tyreman: Some customers kicked the tires and stopped once the total costs were compiled. Others used manual processes to create the cluster. Regardless of the approach, none were able to readily and routinely account for the cost of storage or compute by specific workload or any other filter. In effect, they were blind to the costs, other than the total cloud service provider bill.

insideHPC: Is Navops just for the process of workload migration, or does it offer ongoing benefits for customer cloud operations?

Gary Tyreman: Navops Launch enables workload migration in two ways. First, the solution uses automation to create a repeatable and effective path for migrating HPC workloads to the cloud; we discussed those features above. Second, Navops Launch accelerates the use of HPC cloud by instilling confidence in the organization that costs can be managed. Once costs are managed and visible up the chain of command, Navops Launch continues to manage resources automatically, lowering costs through right-sizing and the timing of running instances.

insideHPC: Why did Univa decide to add Slurm to Navops management capabilities? Isn’t Slurm a competitor for Univa Grid Engine?

Gary Tyreman: Univa believes that HPC workloads will ultimately migrate to the cloud over a long time horizon. That is why our focus has turned to Navops Launch over the past year; our cloud strategy is broader than our more definitive strategy for Grid Engine customers. In regard to Slurm as a workload scheduler for HPC environments, the tooling and product capability of Slurm running in the cloud is limited, yet there is sizeable demand to do so; the product is used by approximately 60% of the world's Top 500 supercomputers, including top-ten supercomputing centers. Because of its popularity, we wanted to provide Slurm users with the same HPC cloud automation capabilities as our general user base, since they tend to face the same challenges when integrating cloud strategies.

insideHPC: What kind of feedback are you getting from Navops customers?

Gary Tyreman: Feedback from Navops Launch customers has been very positive, both on our approach to cost management and on the multi-cloud, multi-scheduler support.
