“Adaptive Computing is driving up our customers’ productivity by helping them gain true insight into how their resources are being used, how to handle future capacity planning, and the service levels they are delivering to their most critical projects,” says Marty Smuin, CEO of Adaptive Computing. “This latest solution helps deliver the insights that organizations need in order to eliminate waste, avoid unnecessary delays, and make the changes that will align resources to better achieve organizational goals.”
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. To learn more, download this white paper.
Today Adaptive Computing announced it has integrated Remote Visualization with Moab’s workload submission portal, Viewpoint, in order to improve ease-of-use and increase user productivity. “Adaptive Computing is transforming our customers’ experience so that technology is no longer a barrier and users are more empowered in their efforts to cure cancer, build safer vehicles, and better our overall environment,” says Marty Smuin, CEO of Adaptive Computing. “This latest innovation helps automate the experience in such a way that organizations can both reduce costs through sharing and improve productivity through faster application interaction and increased collaboration.”
Today Adaptive Computing announced it has set a new record in High Throughput Computing (HTC) in collaboration with Supermicro, a leader in high-performance green computing solutions. Supermicro SuperServers, custom-optimized for Nitro, the new high-throughput resource manager from Adaptive Computing, launched up to 530 tasks per second per core on Supermicro's low-latency UP SuperServer and over 17,600 tasks per second on its 4-Way SuperServer. This record-breaking throughput can accelerate financial risk analysis, EDA regression tests, life sciences research, and other data-analysis-driven projects, expediting the process of gaining critical insights and delivering products and services to market faster.
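The figures above are tasks launched per second, the standard HTC throughput metric. As a rough illustration of how such a rate is computed, here is a minimal sketch that times a batch of no-op tasks through a simple in-process worker pool. This is purely illustrative and does not use Nitro's actual API; the function and parameter names are hypothetical.

```python
# Hypothetical sketch: computing a tasks-per-second throughput figure.
# Not Nitro's API -- just a plain worker pool timing no-op tasks.
import time
from concurrent.futures import ThreadPoolExecutor

def noop_task():
    # Stands in for a real short-lived HTC task.
    return None

def measure_throughput(n_tasks=10_000, workers=4):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(noop_task) for _ in range(n_tasks)]
        for f in futures:
            f.result()  # wait for every task to complete
    elapsed = time.perf_counter() - start
    return n_tasks / elapsed  # tasks per second

print(f"{measure_throughput():,.0f} tasks/second")
```

Dividing the measured rate by the number of cores used yields the per-core figure quoted in the announcement.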
Today Adaptive Computing announced it has fully deployed Moab 8.1 at the HPC4Health consortium in Canada. “The folks at Adaptive Computing helped us create the technology to build a converged data center that dynamically shares resources securely and allows us to account for the workloads used by each organization involved in the HPC4Health venture.”
In this video from the Disruptive Technologies session at the 2015 HPC User Forum, Nick Ihli from Adaptive Computing presents: Leveraging Containers in Elastic Environments.
“Moab Viewpoint is the next generation of Adaptive Computing’s admin portal. This enhanced Web-based graphical user interface enables easy viewing of workload status, reporting on resource utilization, and other system metrics. The Moab Viewpoint Portal plays an instrumental role in ensuring SLAs are met — a key component of Adaptive Computing’s Big Workflow vision — by allowing HPC administrators to maximize uptime and prove that services were delivered and resources were allocated fairly.”
“We received an overwhelmingly positive response to the new Moab features during SC14, so we’re very excited to make the new features generally available. In a competitive computing landscape where enterprises need to accelerate insights, Moab matters,” said Rob Clyde, CEO of Adaptive Computing. “Automating workload workflows is imperative to shorten the timeline to discovery, and this latest version of Moab represents a huge step forward in helping enterprises achieve that. We are excited to reveal our latest innovations and continue driving competitive advantage for our customers.”
Moab 8.1 systems management software includes a revamped Web-based user interface with bolstered reporting and tracking capabilities that give greater insight into the job states, workloads, and nodes of an HPC system; massive gains in performance and scale; and system improvements that enable elastic computing, expanding to additional resources as workloads demand.
In this special guest feature from Scientific Computing World, Tom Wilkie writes that while end-user scientists and engineers fear the complexity of running jobs in HPC, there are software toolkits available to help.