

Cloud Compute in 2019: Workloads and Options

In this video, Oracle Senior Director Leo Leung describes cloud computing options by workload in 2019.

Transcript:

Hi, I’m Leo Leung, senior director of product management for Oracle Cloud Infrastructure. Today I’m going to talk about the different kinds of workloads you can run on compute instances in the cloud. There’s a lot of variation within these families, but in general you’re seeing three families of compute available:

  • CPU-based machines with local storage inside the instance.
  • CPU-based machines, either bare metal instances or virtual machines, with network (block) storage.
  • GPUs (graphics processing units) in bare metal or virtual machine form. These typically use network block storage as well.

Looking at these three families of compute, let’s talk about the workloads they’re suited for. When you’re looking at bare metal with local storage, this really gives you the ultimate in performance in a specific package. You’re talking about millions of storage operations per second out of these instances, because they typically have solid-state storage on board. And what is that good for? Certainly your classic transactional workloads: OLTP databases and other transaction-processing applications. Lots of enterprise applications fit in this family, including many big database applications. You can also take advantage of these machines if you have software to group them together, or cluster them, for large high performance computing workloads.

Okay, so transactional workloads, HPC workloads. Sometimes people call these scale-up workloads. When you look at this middle family, this is the general-purpose type of compute, so it can handle a very wide range of applications. It’s often used for your “scale out” applications, meaning you’re basically adding additional instances that run the application code in order to make the application both more resilient and higher performing. Lots of web applications fall into this family. Even some of the highest-performance enterprise applications still fall into this category of web application, but it’s very, very general purpose. This is typically what gets consumed most at cloud providers, because you’re also able to scale your storage independently.
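The scale-out idea described above can be sketched in a few lines of Python. This is purely illustrative: the per-instance capacity figure is a made-up number, not an Oracle specification, and real sizing would come from load testing.

```python
import math

# Hypothetical capacity of one general-purpose instance, in requests/second.
# This figure is an assumption for illustration, not a real benchmark.
PER_INSTANCE_RPS = 500

def instances_needed(target_rps, redundancy=1):
    """Return how many identical instances to run for a target load.

    Scaling out means adding more copies of the application instance.
    `redundancy` adds spare instances so the app stays available (and
    fast) when one instance fails -- the resilience half of scale-out.
    """
    base = math.ceil(target_rps / PER_INSTANCE_RPS)
    return base + redundancy

# e.g. 1,800 req/s with one spare instance -> ceil(3.6) + 1 = 5 instances
print(instances_needed(1800))
```

The key property is that capacity grows by adding identical instances rather than by buying a bigger machine, which is why this family pairs naturally with independently scalable network storage.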

So in this family, you can have applications that require a huge amount of storage capacity but not as much compute, and it’s very well suited for that. Now, this last family is becoming extremely popular for certain types of workloads, like machine learning, that are very compute intensive: you’re splitting up the workload and processing it across the many, many processors available on a GPU. Machine learning and simulations fall into this family, and more and more workloads are becoming applicable to this type of computing.

So there you have it: three general classes of computing. Of course there’s variance when it comes to clock speeds and the amount of memory versus storage on board, but generally they fall into these families: CPU-based instances with storage on board, CPU instances that use network storage, and GPU instances that use network storage. Thanks a lot for your time.
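The three families recapped above can be captured as a small lookup table. This is just an illustrative summary of the talk; the family names and the `suggest_family` helper are hypothetical, not Oracle product names or APIs.

```python
# Summary of the three compute families from the talk, mapped to the
# example workloads mentioned. Names and structure are illustrative only.
FAMILIES = {
    "cpu_local_storage": {
        "description": "CPU (bare metal) with on-board solid-state storage",
        "workloads": ["OLTP databases", "transaction processing", "clustered HPC"],
    },
    "cpu_network_storage": {
        "description": "CPU (bare metal or VM) with network block storage",
        "workloads": ["web applications", "general scale-out apps"],
    },
    "gpu_network_storage": {
        "description": "GPU (bare metal or VM) with network block storage",
        "workloads": ["machine learning", "simulations"],
    },
}

def suggest_family(workload):
    """Return the family whose example workloads mention `workload`."""
    for name, info in FAMILIES.items():
        if any(workload.lower() in w.lower() for w in info["workloads"]):
            return name
    # The middle family is the general-purpose default in the talk.
    return "cpu_network_storage"
```

For example, `suggest_family("machine learning")` lands on the GPU family, while an unrecognized workload falls back to the general-purpose CPU family.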

