Right alongside this week’s GPU Tech Conference, Wall Street Tech has an interesting article detailing how Bloomberg has converted their bond pricing infrastructure and applications to utilize the power of GPUs. Every night, Bloomberg calculates 1.3 million hard-to-price asset-backed securities. These calculations, single-factor Stochastic models, were originally run on a Linux cluster.
“These models are ideal for doing things in parallel, and we did parallelize them over traditional x86 Linux computers,” says CTO Shawn Edwards.
However, as customer demand increased, the pure Linux cluster could not realistically scale to keep up with the job. One of the core software architects on Edwards’ team suggested using GPUs to solve the scalability issue: the stochastic models at the heart of these applications lend themselves well to parallel computing on GPUs.
“It turned out that in order to compute everything within that eight-hour window, we would need to go from 800 cores to 8,000 cores,” says Edwards. “That’s a lot of servers, about 1,000. We could do it, but it doesn’t scale very well. If we wanted to use it for other ideas, we were faced with having to pile on more and more computers. That’s when the idea came in for GPU computing.”
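To make the parallelism concrete, here is a minimal, hypothetical sketch of a single-factor stochastic pricing calculation. The model (a Vasicek-style short-rate Monte Carlo), the function name, and all parameters are illustrative assumptions, not Bloomberg's actual methodology; the point is that every simulated path is independent, which is exactly the property that lets this class of workload spread across thousands of GPU threads.

```python
import numpy as np

def mc_zero_coupon_price(r0, kappa, theta, sigma, T,
                         n_paths=10_000, n_steps=100, seed=0):
    """Hypothetical single-factor (Vasicek) short-rate Monte Carlo.

    Each of the n_paths simulations is fully independent of the others,
    so the loop body is embarrassingly parallel -- on a GPU, each path
    would simply become its own thread.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)          # one short rate per path
    integral = np.zeros(n_paths)      # accumulated rate along each path

    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        # mean-reverting diffusion step, vectorized over all paths at once
        r = r + kappa * (theta - r) * dt + sigma * dw
        integral += r * dt

    # discount each path, then average to get the price estimate
    return float(np.exp(-integral).mean())
```

The NumPy vectorization over paths stands in for the GPU's data parallelism: the same update rule is applied to every path with no cross-path dependencies, so widening from hundreds of CPU cores to thousands of GPU threads changes throughput, not the algorithm.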
The newly minted GPU method went live in 2009. Rather than running on 1,000 traditional server nodes, the cluster shrank to 48 server/GPU pairs. More impressive still was the performance gain.
“Overall, we’ve achieved an 800% performance increase,” Edwards says. “What used to take sixteen hours we’re computing in two hours.”
We’ve all seen the various performance numbers regarding GPU-based speedup. Beyond pure speedup, also consider the power and cooling costs Bloomberg is saving by migrating to a more consolidated architecture. For more information, read the full article here.