Managing the GPUs of Your Cluster in a Flexible Way with rCUDA


In this video from the HPCAC Swiss Conference 2014, Federico Silla from the Universitat Politècnica de València presents: Managing the GPUs of your cluster in a flexible way with rCUDA.

The use of GPUs to accelerate general-purpose scientific and engineering applications is mainstream nowadays, but their adoption in current high performance computing clusters is primarily impaired by the trend of including accelerators in all the nodes of the cluster, as this presents several drawbacks. First, in addition to increasing acquisition costs, the use of accelerators also increases maintenance and space costs. Second, energy consumption rises as well, as GPUs are known to be power-hungry devices. Third, GPUs in such a cluster may present a relatively low utilization rate, given that it is quite unlikely that all the accelerators in the cluster will be in use all the time, as very few applications feature such an extreme degree of data concurrency. In consequence, reducing the number of GPUs installed in the cluster and virtualizing them emerges as an appealing strategy to address all these drawbacks simultaneously: the nodes equipped with GPUs become servers that provide GPU services to all the nodes in the cluster. In this talk, we introduce the rCUDA remote GPU virtualization framework, which has been shown to be the only one that supports the most recent CUDA versions, in addition to leveraging the InfiniBand interconnect for the sake of performance. Furthermore, we also present the latest developments within this framework, related to the use of low-power processors, enhanced job schedulers, and virtual machine environments.
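As a rough illustration of the client side of this model, remote GPU virtualization frameworks such as rCUDA let an unmodified CUDA application be pointed at remote GPUs through environment variables; the variable names below follow the rCUDA user guide, while the server hostname, GPU index, and install path are placeholder assumptions:

```shell
# Sketch of an rCUDA client setup; "gpuserver", the GPU index, and the
# install path /opt/rCUDA are hypothetical examples, not fixed values.

# Make the rCUDA client library visible so it can intercept CUDA calls.
export LD_LIBRARY_PATH=/opt/rCUDA/lib:$LD_LIBRARY_PATH

# Number of remote GPUs the application should see.
export RCUDA_DEVICE_COUNT=1

# Map local device 0 to GPU 0 of the remote host "gpuserver".
export RCUDA_DEVICE_0=gpuserver:0

# The CUDA application is then launched as usual; its CUDA calls are
# forwarded over the network (e.g. InfiniBand) to the GPU server.
./my_cuda_app
```

Because the interception happens at the CUDA API level, the application itself needs no source changes, which is what allows GPU-less nodes to share the cluster's reduced pool of accelerators.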

Download the Slides. See more talks at the HPCAC Swiss Conference Video Gallery.

Sign up for our insideHPC Newsletter.