Interview: How Univa Short Jobs Brings Low Latency to Financial Services


Not all workloads consist of huge HPC jobs. In financial services, workloads often comprise thousands of small jobs that finish in less than a second, yet take much longer than that to get scheduled. With the launch of the Univa Short Jobs add-on for Univa Grid Engine, the company offers “the world’s most efficient processing and lowest latency available for important tasks like real-time trading, transactions, and other critical applications.” To learn more, we caught up with Univa President & CEO Gary Tyreman.

insideHPC: What exactly is a short job? Is it just something that completes in a second or two?

Gary Tyreman, President & CEO, Univa

Gary Tyreman: Yes, it’s about low latency. I think the easiest way to describe it is: a short job has a run time that’s less than the scheduling time or the dispatch time. Because if my dispatch time is one second and the job runs for half a second, well then I’m still slow.

In many workloads it might be hard to find one single job that runs for half a second. But in Financial Services, you might have a swarm of 10,000 of them, because that’s how you’ve broken down your work: they’re each working on different parts of the data to get the answer.

insideHPC: How will Univa Short Jobs help you get a foothold in the financial services market?

Gary Tyreman: Univa Short Jobs is the third of three products we’ve put together for financial markets. From the start, we’ve had Univa Grid Engine, which is already very robust and is used in a number of financial institutions. Then in April we added the Universal Resource Broker (URB), which allows people to build out their data infrastructure on top of Grid Engine and run their data-centric applications like Spark and Hadoop, or microservices. Now with Univa Short Jobs, we can take on low-latency workloads.

The financial services market has really been a market of one, dominated by IBM Platform. There are certain applications that require that real-time near zero latency, speed of light kind of activity. But a lot don’t; in fact, probably 90-95% don’t.

insideHPC: So, how does Univa Short Jobs work?

Gary Tyreman: So what we’ve actually been able to do is get the scheduler out of the way by implementing a pull model versus a push model. That way the scheduler isn’t adding its overhead for each task.

First, we tell the scheduler this is a special type of workload. And instead of placing it across 10,000 cores or 100 cores and running a bunch of stuff, it spins up the environment at the other end. This is basically a bunch of “workers” that immediately start launching, pulling the data in, and doing the computation. On Amazon EC2 benchmarks, we’re getting about 20,000 jobs per second that we can fire off and actually finish within that second.
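The pull model Tyreman describes can be sketched roughly as follows. This is an illustrative Python sketch, not Univa’s API: a pool of workers is spun up once, and each worker pulls its own tasks from a shared queue, so no central scheduler pays a dispatch cost per task.

```python
import queue
import threading

def worker(tasks, results, lock):
    """A pre-started worker: it pulls tasks itself instead of waiting
    for a central scheduler to push each one, so there is no per-task
    dispatch overhead."""
    while True:
        task = tasks.get()
        if task is None:            # sentinel: shut this worker down
            return
        value = task * task         # stand-in for a sub-second computation
        with lock:
            results.append(value)

def run_short_jobs(n_tasks, n_workers=8):
    """Spin the worker pool up once, then stream the whole swarm of
    short tasks through it."""
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()
    pool = [threading.Thread(target=worker, args=(tasks, results, lock))
            for _ in range(n_workers)]
    for t in pool:
        t.start()
    for i in range(n_tasks):        # enqueue the swarm of short jobs
        tasks.put(i)
    for _ in pool:                  # one sentinel per worker
        tasks.put(None)
    for t in pool:
        t.join()
    return results
```

The scheduling decision happens once, when the pool is placed; after that, each sub-second task costs only a queue pull rather than a full dispatch cycle.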

You can use the APIs, but we’re not forcing you down an API-only path, which is what our competition would do. So moving and integrating an application is fairly simple, and mapping it is very easy. When your application is running, you can start pulling information and monitoring the actual message bus. So it isn’t a black box where you push something in and get something out. We’ve had a lot of interest in that, because there are situations where the workload might only be halfway done but the answer is already clear. You may already have the answer, so why run the other half? That can be managed through monitoring, and that’s part of it.
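The “why run the other half?” pattern can be illustrated with a short sketch (again hypothetical, not Univa’s interface): watch results as they stream back, and as soon as the answer is clear, cancel whatever work hasn’t started yet.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def first_answer(items, predicate, n_workers=8):
    """Fan a batch of short tasks out, monitor results as they complete,
    and stop early once one of them yields the answer."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = {pool.submit(predicate, item): item for item in items}
        for done in as_completed(futures):
            if done.result():
                for other in futures:
                    other.cancel()   # skips any task that hasn't started
                return futures[done]
    return None
```

Without monitoring, all of the tasks would run to completion even after the answer was found; with it, the remaining queued work is simply dropped.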

insideHPC: Univa Short Jobs is targeted for Financial Services, but is it useful for some kinds of HPC workloads?

Gary Tyreman: Yes, absolutely. We’ve seen use cases in verticals like EDA, life sciences, and aerospace. In EDA semiconductor design, they have workloads that might run for five or 10 seconds. Those are also considered short jobs, because in that kind of busy environment your scheduling time is longer, and there are a lot of those tasks. We’re also seeing some large government contractors running aerospace and structural applications, and those applications are now generating a lot of these low-latency workloads.

insideHPC: For your users, are 20,000 short jobs just an occasional kind of requirement or is this something that goes on all day?

Gary Tyreman: It’s both. If it’s an application with data being fed in, they might run an analysis underneath the risk. If it’s more of a service and they’re trying to figure out what’s happening, they might be streaming data into it and spawning off the jobs. So it’s a bit of both, but it really depends on the institution or the bank and what they’re trying to do. In some cases they’re going to require it multiple times a day; in others it’s going to be continuous. But because we can take all of the workload, not just the short jobs and not just the data, the customer can build a pretty advanced environment and infrastructure that can run pretty much anything, and utilization will go up.

insideHPC: So now you’ve got these three capabilities, are you now ready to go to market in Financial Services, or are there additional requirements?

Gary Tyreman: Well, our early customer trials and proofs of concept are going well, but as always, I expect there will be things we haven’t really gone into or thought about. But in the broader sense of being ready to approach the market, I don’t think so.

I think we’ve got a very unique package: Univa Grid Engine; the Universal Resource Broker, with its support of the frameworks for data architectures and the data environment; and Short Jobs pulling it all together. I don’t know of a competitive offering out there that can do all three in one single environment. One or the other, absolutely; maybe two of the above, for sure. But I think the options we’re bringing to the table are pretty clear.

insideHPC: It’s always exciting to hear about companies like yours bringing HPC technologies to new markets. It sounds like the way you bring it all together is the optimal way to demonstrate value.

Gary Tyreman: It’s like we’ve talked about: URB, the low-latency support, and all the trimmings around it, right? Because now we’ve got to build out the support and monitoring framework that we have, plus reporting and analytics. But if you take what we do with URB, then you can, if you choose, build a Google-style data center. You want containers? We can support that. You want optimization? We support it. You want cloud? We support it. So you can run it however you want.

You’ve probably heard people describe themselves as a data center operating system. We wouldn’t use that terminology, frankly, because it might not go over well when we’re sitting with Red Hat, but you can use Grid Engine with URB and all the Enterprise features and capabilities to do what other people are still planning to build. And I think it’s going to be a pretty interesting and exciting opportunity over the next few months, when you think about all the trends and the transition that Enterprise IT is going to go through.
