insideHPC Research Report on GPUs


Earlier this year, insideHPC commissioned Gabriel Consulting Group to conduct the “HPC and Large Enterprise Purchasing Sentiment” survey, in which we asked 175 HPC and large enterprise data center professionals to tell us how they’re using technology, what new technologies they’re looking at, and how they expect to spend their budget dollars over the next 18 months. This article is an excerpt from the insideHPC Research Report on GPUs. In the coming month, insideHPC will be releasing additional reports on other findings from this survey.

Introduction to insideHPC Research Report on GPUs

In this research report, we present recent survey results showing that customers are feeling the need for speed; that is, they’re looking for more processing cores. Not surprisingly, we found that they’re investing more money in accelerators such as GPUs, and moreover are seeing solid positive results from using them. In the balance of this report, we take a look at these findings, the newest GPU technology from NVIDIA, and how it performs versus traditional servers and earlier GPU products.

Some of what they said was surprising, such as the fact that most expected their 2017 spending to rise by 8-11%. We also found that they plan to significantly increase their spending on new systems, compute accelerators, and high performance system interconnects.

When questioned more closely, these customers identified more compute power as their biggest need. In fact, as the graph shows, more than 60% of our respondents say that a lack of processing cores is either a significant or a large constraint. This is understandable when you consider how the compute challenges in both HPC and large enterprise IT shops have mounted over the last few years.

On the HPC side, there is always the desire to model more complex interactions and develop models with more variables and higher accuracy. Even though HPC computing capability has been increasing at a high rate, at least judging from the Top500 list, there is still unmet demand for more compute power.

Enterprises are facing much the same challenges, with the advent of Big Data and enterprise analytics giving them the ability to, for example, slice and dice customer purchasing patterns in a myriad of ways. But it takes a lot more compute power to run these analytical models, and a shortage of processing capability becomes acute when this data is needed to feed real-time decision making.

We also asked our survey respondents what proportion of their applications were constrained by the lack of processing cores. As you can see from the graph, roughly a third of our respondents report that 33% or more of their applications are CPU bound, with 60% saying that at least 10-33% of their apps are suffering from a lack of cores.

Given that customers are feeling an acute need for more processing power in their data centers, it’s not a surprise to see they plan to spend significantly more on compute accelerators (GPUs and co-processors) in the coming year.

How to fill the “Core Gap”?

Customers are equipping systems with accelerators in order to get more processing power without having to purchase additional servers. These accelerators have a proven track record of delivering far more numerical processing power than CPU-only systems. They are also more energy efficient than adding entire servers just to gain more compute capacity.

Our survey showed that GPUs (Graphics Processing Units) from NVIDIA are a particularly popular option in the compute accelerator market. These processors, typically added to a system as a PCIe card, can make applications run 5x, 10x, or in some cases even 50x faster than a traditional CPU-only system.

We asked our survey respondents if they’re using GPUs, or considering them, for their important workloads. As can be seen on the chart, a high proportion of respondents (~70%) are either already using GPUs or testing them. An additional 20% are clearly interested in exploring GPU options. We further questioned the survey respondents who reported they were either testing or using GPUs currently, asking them what kind of results they’ve been seeing with this technology.

Close to 70% said that GPUs were a clear winner or that they were seeing strong potential for benefit. Almost 10% weren’t far enough along in the testing process to know for sure. We do see just over 20% reporting that they aren’t seeing any benefits, which brings up a good point: GPUs aren’t a panacea for every performance problem. Like any other technology, GPUs aren’t a good fit for every application or computing situation. To take full advantage of GPU acceleration, applications need to be parallelized and instrumented with code that offloads the heavy numerical lifting to the GPU. However, a large number of applications are already optimized for GPU computing, including 9 of the top 10 and 35 of the top 50 applications in high performance computing and various other industries.
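To make that offload pattern concrete, here is a minimal CUDA sketch of our own (not taken from the report or the survey): a simple SAXPY loop is parallelized as a GPU kernel, the input arrays are copied across the PCIe bus to the card, and the result is copied back. The kernel name, array size, and launch configuration are illustrative assumptions, but the structure is the same parallelize-and-offload step described above.

    #include <cstdio>
    #include <cuda_runtime.h>

    // SAXPY (y = a*x + y) is the "heavy numerical lifting" here.
    // Each GPU thread handles one element, so the loop is fully parallel.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main() {
        const int n = 1 << 20;                 // 1M elements (illustrative size)
        const size_t bytes = n * sizeof(float);

        // Host-side data.
        float *h_x = (float *)malloc(bytes);
        float *h_y = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

        // Allocate GPU memory and copy the inputs across the PCIe bus.
        float *d_x, *d_y;
        cudaMalloc(&d_x, bytes);
        cudaMalloc(&d_y, bytes);
        cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

        // Copy the result back and spot-check one value (2*1 + 2 = 4).
        cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f (expected 4.0)\n", h_y[0]);

        cudaFree(d_x); cudaFree(d_y);
        free(h_x); free(h_y);
        return 0;
    }

Applications that are not structured this way, or whose inner loops cannot be parallelized, are unlikely to see the speedups reported above, which is one reason a minority of respondents saw no benefit.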

Source: insideHPC Research Report on GPUs

The full insideHPC research report has additional graphs and findings including:

  • Spending plans for accelerators
  • GPU applications in the data center
  • A review of the NVIDIA P100

Download the insideHPC Research Report on GPUs