vScaler Integrates SLURM with GigaIO FabreX for Elastic HPC Cloud Device Scaling

Open source private HPC cloud specialist vScaler today announced the integration of the SLURM workload manager with GigaIO’s FabreX for elastic scaling of PCI devices and HPC disaggregation. FabreX, which GigaIO describes as the “first in-memory network,” supports vScaler’s private cloud appliances for workloads such as deep learning, biotechnology and big data analytics. vScaler’s disaggregated […]
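The kind of elastic device request this integration targets can be illustrated with a generic Slurm batch script. This is a hedged sketch: the GRES name (`gpu`), device count, and time limit are illustrative assumptions, not vScaler- or FabreX-specific settings.

```shell
#!/bin/bash
# Hedged sketch: a generic Slurm batch script requesting accelerators
# through GRES (generic resources). The resource name (gpu), count, and
# time limit below are illustrative, not vScaler/FabreX specifics.
#SBATCH --job-name=dl-train
#SBATCH --nodes=1
#SBATCH --gres=gpu:4        # ask the scheduler for 4 attached GPU devices
#SBATCH --time=02:00:00

# Inside an allocation, Slurm sets CUDA_VISIBLE_DEVICES to the GPUs it
# granted; the default below keeps this sketch runnable outside Slurm.
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```

On a composable fabric, satisfying `--gres=gpu:4` becomes a matter of attaching devices over the PCIe network rather than finding a node that physically contains them.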

GigaIO Brings FabreX to vScaler Cloud Platform

Today GigaIO announced availability of FabreX for vScaler’s management interface. As the industry’s first in-memory network, FabreX will bolster vScaler’s cloud appliances for artificial intelligence (AI), deep learning, biotechnology and big data analytics. “GigaIO entered a strategic partnership with vScaler in November 2019 to bring their excellent user interface and ease of use into the FabreX environment,” says Alan Benjamin, CEO of GigaIO. “FabreX’s integration with vScaler delivers an elegant and straightforward way for customers to improve resource utilization and create highly composable, unified infrastructures. I am thrilled this optimization has finally come to fruition and is available to the general public.”

vScaler Launches AI Reference Architecture

A new AI reference architecture from vScaler describes how to simplify the configuration and management of software and storage in a cost-effective, easy-to-use environment. “vScaler – an optimized cloud platform built with AI and Deep Learning workloads in mind – provides you with a production-ready environment with integrated Deep Learning application stacks, RDMA accelerated fabric and optimized NVMe storage, eliminating the administrative burden of setting up these complex AI environments manually.”

vScaler Cloud Adopts RAPIDS Open Source Software for Accelerated Data Science

vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler Cloud management portal, the RAPIDS suite of software libraries gives users the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. “The new RAPIDS library offers Python interfaces which will leverage the NVIDIA CUDA platform for acceleration across one or multiple GPUs. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes.”
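The “familiar DataFrame API” point above can be illustrated with a short sketch. Because cuDF deliberately mirrors the pandas DataFrame API, the example below is written against pandas so it runs anywhere; on a RAPIDS-enabled system the only change would be swapping the import for cuDF, which keeps the whole pipeline on the GPU. The column names and values are illustrative assumptions.

```python
# Hedged sketch: cuDF mirrors the pandas DataFrame API, so this data-prep
# step is written with pandas; on a RAPIDS system the import would become
# "import cudf as pd", moving the same pipeline onto the GPU without
# serialization costs between steps.
import pandas as pd  # on RAPIDS: import cudf as pd

df = pd.DataFrame({
    "feature": [0.5, 1.0, 1.5, 2.0],  # illustrative values
    "label":   [0, 0, 1, 1],
})

# Typical preparation: filter rows and derive a column in one pipeline,
# keeping the data in a single frame (GPU-resident under cuDF) throughout.
prepared = df[df["feature"] > 0.6].assign(scaled=lambda d: d["feature"] * 10)

print(prepared["scaled"].tolist())  # [10.0, 15.0, 20.0]
```

The prepared frame could then feed a machine learning step (cuML under RAPIDS) without the data ever leaving GPU memory.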

vScaler to Showcase OpenHPC & Deep Learning at Cloud Expo Europe 2017

Today vScaler announced plans to showcase its HPC cloud platform March 15-16 at the upcoming Cloud Expo Europe Conference in London. Supported by two of its strategic technology partners, Aegis Data and Global Cloud Xchange, vScaler will showcase its application-specific cloud platform, with experts on hand to discuss use cases such as HPC, Broadcast & Media, Big Data, Finance and Storage, as well as data centre innovation and co-location. “We provide full application stacks for a range of verticals as well as on-demand consultancy from our expert team,” said David Power, vScaler CTO. “Our tailor-made, software-defined infrastructure cuts away time wasted on the distractions of setup and enables our users to concentrate on the task at hand.”

The Long Rise of HPC in the Cloud

“As the cloud market has matured, we have begun to see the introduction of HPC cloud providers and even the large public cloud providers such as Microsoft are introducing genuine HPC technology to the cloud. This change opens up the possibility for new users that wish to either augment their current computing capabilities or take the initial plunge and try HPC technology without investing huge sums of money on an internal HPC infrastructure.”