

Big Workflow: More than Just Intelligent Workload Management for Big Data

Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. To learn more, download this white paper.

Adaptive Computing steps up with High Productivity Remote Visualization

Today Adaptive Computing announced it has integrated Remote Visualization with Moab’s workload submission portal, Viewpoint, to improve ease of use and increase user productivity. “Adaptive Computing is transforming our customers’ experience so that technology is no longer a barrier and users are more empowered in their efforts to cure cancer, build safer vehicles, and better our overall environment,” says Marty Smuin, CEO of Adaptive Computing. “This latest innovation helps automate the experience in such a way that organizations can both reduce costs through sharing and improve productivity through faster application interaction and increased collaboration.”

Adaptive Computing Achieves Record High Throughput with Supermicro Systems

Today Adaptive Computing announced it has set a new record in High Throughput Computing (HTC) in collaboration with Supermicro, a leader in high-performance green computing solutions. Supermicro SuperServers, custom optimized for Nitro, the new high throughput resource manager from Adaptive Computing, launched up to 530 tasks per second per core on Supermicro’s low-latency UP SuperServer and over 17,600 tasks per second on its 4-Way SuperServer. This record-breaking throughput can accelerate financial risk analysis, EDA regression tests, life sciences research, and other data analysis-driven projects. It can expedite the process of gaining critical insights, thereby delivering products and services to market faster.

Moab Powers Dynamic Resource Sharing at HPC4Health in Canada

Today Adaptive Computing announced that it has fully deployed Moab 8.1 at the HPC4Health consortium in Canada. “The folks at Adaptive Computing helped us create the technology to build a converged data center that dynamically shares resources securely and allows us to account for the workloads used by each organization involved in the HPC4Health venture.”

Video: Leveraging Containers in Elastic Environments

In this video from the Disruptive Technologies session at the 2015 HPC User Forum, Nick Ihli from Adaptive Computing presents: Leveraging Containers in Elastic Environments.

Adaptive Computing Demonstrates Viewpoint Software at SC14

“Moab Viewpoint is the next generation of Adaptive Computing’s admin portal. This enhanced Web-based graphical user interface enables easy viewing of workload status, reporting on resource utilization, and other system metrics. The Moab Viewpoint Portal plays an instrumental role in ensuring SLAs are met, a key component of Adaptive Computing’s Big Workflow vision, by allowing HPC administrators to maximize uptime and prove that services were delivered and resources were allocated fairly.”

Video: Moab Adds Elastic Computing Features

“We received an overwhelmingly positive response to the new Moab features during SC14, so we’re very excited to make the new features generally available. In a competitive computing landscape where enterprises need to accelerate insights, Moab matters,” said Rob Clyde, CEO of Adaptive Computing. “Automating workload workflows is imperative to shorten the timeline to discovery, and this latest version of Moab represents a huge step forward in helping enterprises achieve that. We are excited to reveal our latest innovations and continue driving competitive advantage for our customers.”

Adaptive Computing Rolls Out Moab 8.1

Moab 8.1 systems management software includes a revamped Web-based user interface with bolstered reporting and tracking capabilities that give greater insight into the job states, workloads, and nodes of an HPC system; massive performance and scalability gains; and system improvements that enable elastic computing, expanding to additional resources as workloads demand.

Helping Scientists with System Management Software

In this special guest feature from Scientific Computing World, Tom Wilkie writes that while end-user scientists and engineers fear the complexity of running jobs in HPC, there are software toolkits available to help.

Supercomputing Santa’s Intractable Task

Over at the Adaptive Computing Blog, Trev Harmon takes a computational look at the Santa mythology and how a parallel machine would go about servicing the estimated 1,333,316,210 households on Planet Earth.