Adaptive Computing Rolls Out New Reporting & Analytics Tool

Today Adaptive Computing announced its new Reporting & Analytics tool, which enables organizations to gain the insights that improve resource utilization and efficiency, ultimately aligning resource usage more closely with mission objectives.

“Adaptive Computing is driving up our customers’ productivity by helping them gain true insight into how their resources are being used, how to handle future capacity planning, and the service levels they are delivering to their most critical projects,” says Marty Smuin, CEO of Adaptive Computing. “This latest solution helps deliver the insights that organizations need in order to eliminate waste, avoid unnecessary delays, and make the changes that will align resources to better achieve organizational goals.”

Organizations invest heavily in expensive resources and highly skilled individuals to achieve their goals, only to find that many resources go underutilized and that people are often less productive than desired. By streaming in massive amounts of workload and resource usage data from their High Performance Computing (HPC), High Throughput Computing (HTC) and Grid Computing environments, and then correlating that information against users, groups, and accounts, organizations can gain insight into exactly how their investment is being used and how well it aligns with their goals.

Without reporting and analytics, organizations only see an expensive black box that they hope is efficient and aligned with their organizational goals. Most organizations know their total raw utilization, and that is important, but it is easy to achieve high utilization by filling a highly valuable cluster with jobs and reservations that use its resources inefficiently or block them outright. Perhaps more important than raw utilization is insight into how efficiently resources are used and application workloads are run per user and per project, along with capacity planning details such as which resources are used more heavily than others. This new solution provides these efficiency and capacity planning details, as well as SLA delivery information such as average wait time, how long jobs stay in different states, resources allocated versus utilized, and outage impacts.
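
The arithmetic behind metrics like these is simple once raw accounting records are available. The sketch below is a minimal plain-Python illustration, not Adaptive Computing's implementation; the record layout and field names (such as cpu_secs_used) are invented for the example. It derives average queue wait and an allocated-versus-utilized efficiency ratio per user.

```python
from statistics import mean

# Hypothetical job accounting records (schema is an assumption):
# (user, submit, start, end, cores_allocated, cpu_secs_used)
jobs = [
    ("alice", 0, 120, 3720, 8, 14400),  # waited 120 s, ran 3600 s on 8 cores
    ("alice", 0, 600, 4200, 8, 28800),
    ("bob",   0,  30, 7230, 4, 28700),
]

by_user = {}
for user, submit, start, end, cores, cpu_used in jobs:
    rec = by_user.setdefault(user, {"waits": [], "alloc": 0, "used": 0})
    rec["waits"].append(start - submit)    # queue wait in seconds
    rec["alloc"] += cores * (end - start)  # allocated core-seconds
    rec["used"] += cpu_used                # core-seconds actually consumed

for user, rec in sorted(by_user.items()):
    efficiency = rec["used"] / rec["alloc"]  # utilized vs. allocated
    print(f"{user}: avg wait {mean(rec['waits']):.0f}s, efficiency {efficiency:.0%}")
```

Run against these sample records, alice shows high raw utilization but only 75% efficiency, which is exactly the gap between raw utilization and per-user efficiency that the article describes.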

Adaptive Computing’s Reporting & Analytics solution is based on the lightning-fast Apache Spark data processing engine and the highly scalable and flexible MongoDB database. Usage and workload data from Torque and Moab is streamed into aggregated views, which can then be organized into table- or chart-based reports. These reports can then be combined into customizable dashboards to enable easy monitoring of key indicators.
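
As a rough illustration of this kind of pipeline, the following PySpark sketch aggregates hypothetical Torque/Moab-style job records into a per-user, per-account wait-time view. The input path, the JSON schema, and the field names are all assumptions for the example, and the commented-out MongoDB write stands in for whatever connector configuration the actual product uses.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wait-time-report-sketch").getOrCreate()

# Hypothetical accounting records exported as JSON, one object per job,
# with epoch-second timestamps (schema is an assumption).
jobs = spark.read.json("jobs/*.json")

report = (
    jobs
    .withColumn("wait_secs", F.col("start_time") - F.col("submit_time"))
    .groupBy("user", "account")
    .agg(
        F.avg("wait_secs").alias("avg_wait_secs"),
        F.count("*").alias("job_count"),
    )
)

# The product stores aggregated views in MongoDB; with the MongoDB Spark
# connector on the classpath that would be roughly:
#   report.write.format("mongodb").mode("append").save()
# For a self-contained run, write the aggregated view as JSON instead:
report.write.mode("overwrite").json("reports/avg_wait_by_user")
```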

In the streaming process, a rich UI-based data stream designer provides functions such as filter, group and reduce, join, transform, flatten, fork, and union. Once the data is pulled into the report designer, the user can choose to display it in a number of formats, including tables, pie charts, line charts, and bar charts. Admins may also choose from many out-of-the-box reports, such as average wait times for users, groups, and accounts, as well as job state duration, highest requested resources, allocated resources, and outage impact. The solution also includes a UI for customizing reports against any of the available data.
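
To make that vocabulary of stream operations concrete, here is a minimal PySpark analogue exercising each one on invented job data. This is not the designer's actual output, just a DataFrame sketch under assumed column names.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-ops-sketch").getOrCreate()

# Invented job and user tables (all names are assumptions).
jobs = spark.createDataFrame(
    [("alice", "chem", 4, 3600, ["gpu", "bigmem"]),
     ("bob",   "phys", 16, 7200, ["gpu"])],
    ["user", "account", "cores", "runtime_secs", "features"])
users = spark.createDataFrame(
    [("alice", "eng"), ("bob", "sci")], ["user", "group"])

# filter: keep only multi-core jobs
big = jobs.filter(F.col("cores") > 1)

# transform: derive core-hours from cores and runtime
big = big.withColumn("core_hours", F.col("cores") * F.col("runtime_secs") / 3600)

# join: attach group membership to each job record
joined = big.join(users, "user")

# group and reduce: total core-hours per group
per_group = joined.groupBy("group").agg(F.sum("core_hours").alias("core_hours"))

# flatten: one output row per requested feature
flat = jobs.select("user", F.explode("features").alias("feature"))

# fork, then union: split one stream by predicate and recombine the branches
gpu_rows = flat.filter(F.col("feature") == "gpu")
other_rows = flat.filter(F.col("feature") != "gpu")
recombined = gpu_rows.union(other_rows)

per_group.show()
```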

Reporting & Analytics will be offered as an add-on to Viewpoint, Adaptive Computing’s job submission and management portal; the integration becomes available in December 2016. With greater insight, organizations gain a competitive advantage, using that insight to drive better decision-making, policy enforcement, improved efficiency, and overall productivity.
