vScaler unlocks the benefits of Virtualization for GPU Power Users

vScaler has rolled out native Virtual GPU (vGPU) support, empowering customers to split a single physical GPU into a number of smaller virtual GPUs. This approach offers greater cost-efficiency for those who don’t require the full power of a dedicated GPU.

vGPU differs from PCI passthrough, which is also supported by vScaler, in that PCI passthrough has typically only been suitable for workloads that can saturate the full capability of the GPU (for example, high-performance computing, deep learning and machine learning). With vGPU, vScaler broadens GPU capability to the hypervisor licence-free virtual machines it provides.

vScaler’s Director of Cloud and Managed Services, Glenn Rosenberg, comments: ‘vScaler was created to deliver a very high-performance experience to users. vGPU enables the user to split out the immense performance of an NVIDIA GPU and assign the level of resource needed for their application, allowing other users access to the remaining GPU resources, maximising performance and utilisation.’

“By hosting NVIDIA GPU solutions within its revolutionary cloud environment, vScaler is actively helping to broaden GPU adoption from early stage development platforms to large scale production environments,” comments Alan Rogers, Enterprise Partner Business Manager, Northern Europe, NVIDIA. “With support for containerized environments such as Docker and Kubernetes, and the recent addition of vGPU support, vScaler speeds up and simplifies the large scale deployment of GPU-accelerated applications.”

The efficiency of vScaler with vGPU support greatly improves performance compared with traditional architectures, and allows organisations to build virtual desktop infrastructures (VDIs) that cost-effectively scale this performance for the business.

IT administrators can manage resources centrally instead of supporting higher-cost physical workstations at every single desk. This virtualisation technique means the number of users can be scaled up and down based on project needs.

With its HPC-on-Demand offering, vScaler has long supported deep learning and machine learning research requirements, enabling users to spin up deep learning clusters with the appropriate frameworks (e.g. TensorFlow, Caffe, Theano) installed and accelerated using the world’s fastest NVIDIA GPUs, purpose-built to dramatically reduce training time for AI workloads.

Utilizing the latest generation of NVIDIA Tesla GPUs, vScaler provides the performance and flexibility for the most complex AI and deep learning tasks, including medical imaging, genomics, bioinformatics and autonomous driving. Now, with native vGPU, it can complement these workloads with high-end workstations, VDI and virtualised training environments, paving the way for vScaler as the platform of choice for ‘automated everything’.

vScaler is currently offering a free trial of its GPU-in-the-cloud offering to customers who wish to try out GPU technology before investing.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.
