A view from inside the team that built (and debugged) the Wolfram|Alpha HPC infrastructure

We posted in mid-May about the HPC resources being used to power Wolfram’s new Alpha computational portal:

When Wolfram|Alpha launches, it will be one of the most computationally intensive websites on the internet….What computing power have we gathered in these facilities for launch day? Two supercomputers, just about 10,000 processor cores, hundreds of terabytes of disks, a heck of a lot of bandwidth, and what seems like enough air conditioning for the Sahara to host a ski resort.

But what about getting it all to work before launch day? That story is chronicled by one of the guys from the inside team in this post at the Wolfram|Alpha blog. It’s interesting because it shows that these things rarely go smoothly:

Given the broader audience the product was becoming viable for and given the public response that we had seen so far, what should we forecast as peak launch demand? How about being able to handle a peak of 2000 queries per second, ten times the earlier plan? Since we hadn’t even talked to a supercomputer vendor yet with about two months to go until launch, we had moved from prudent to very aggressive on both time frame and target.

Wolfram worked with R Systems and Dell to build out the two supers that serve as the primary engines behind Alpha:

We were then just days before launch; that put us with 140 nodes at our disposal, and final load testing could proceed. One cluster of the big Dell system handled 130 qps—check. Two clusters got 260 qps. We were cooking. Three clusters, 210. Uh oh. Four clusters, 120. !?@#%^. Maybe it was just a glitch. We tried it again, but round two didn’t fare any better. It was time for an emergency meeting. Everyone was on the case (Jeff and his systems engineering guys, plus Chris, Jamie, Grant, Mike, Oyvind, and many other folks), working non-stop to figure out the bottleneck. Something must have been thrashing, but what was the problem? The test rig? It checked out. The edge switch? That checked out, too. Ditto on the other end of the line. Core switch? Also fine. Was logging slowing us down? Nope. Were any of the databases saturated? Looked okay. The test log implied packet loss, as did the web server logs.
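
If you’ve never run this kind of test, here is a rough sketch of what a throughput measurement can look like. It is a hypothetical Python script, not Wolfram’s actual test rig: the endpoint, query list, and thread count are placeholder assumptions. The shape is the usual one, though: fire concurrent queries at the front end and divide completed requests by wall-clock time to get queries per second.

```python
# Hypothetical load-test sketch: fire concurrent queries at a front end and
# report achieved queries per second. ENDPOINT, QUERIES, and WORKERS are all
# placeholder assumptions, not Wolfram|Alpha's actual test configuration.
import concurrent.futures
import time
import urllib.parse
import urllib.request

ENDPOINT = "http://example.com/query?input="                       # placeholder test target
QUERIES = ["2+2", "population of france", "integrate x^2"] * 200   # toy workload
WORKERS = 50                                                       # concurrent client threads

def fire(query: str) -> bool:
    """Send one query; return True if it completed without error."""
    url = ENDPOINT + urllib.parse.quote(query)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
            return resp.status == 200
    except OSError:
        return False

def main() -> None:
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(fire, QUERIES))
    elapsed = time.monotonic() - start
    ok = sum(results)
    # Achieved throughput: completed queries divided by wall-clock time.
    print(f"{ok}/{len(QUERIES)} succeeded, {len(QUERIES) / elapsed:.1f} qps over {elapsed:.1f}s")

if __name__ == "__main__":
    main()
```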

I’m skipping all the best parts (lots of late nights chasing the demo demons) to get to the climax of the story:

The Wolfram|Alpha logging data was being transmitted across the auxiliary network to the main office for aggregation before being sent to the monitoring systems to make those nice visualizations you see in the video. Chris from the systems engineering team ran a ping test on the auxiliary network during a load test. Latency skyrocketed. Bingo! Not enough allowed connections, so we were saturating the proxy.
After raising the number of allowed connections to something ludicrous, we tested again. No dice. Joshua and Mike continued monitoring all of the auxiliary traffic, and in this test the logging system was saturated. It wasn’t doing that before. There weren’t enough connections to the logging database. After Joshua and Mike implemented a fix, we did one more test. One cluster: 140 qps. Two clusters: 280 qps. Three clusters: 400 qps. Then we decided to go for broke. Six clusters: 750 qps. Then for R Smarr: 160 qps, 300 qps, 500 qps, 900 qps. Eureka!
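
The failure mode behind that fix is a classic one: when every front-end node funnels its log records through a fixed, too-small number of allowed connections, writers queue up behind the pool and throughput collapses while the hardware sits mostly idle. The sketch below is a self-contained toy illustration of that effect, not Wolfram’s logging pipeline; the pool, the writer counts, and the simulated database write are all made up, and the actual fix, per the post, was simply raising the connection limits. Run it with POOL_SIZE at 4 and again at 64 to see writes per second scale with the pool.

```python
# Toy illustration of a saturated logging path: many writers sharing a
# fixed-size pool of "connections". Everything here is hypothetical; the
# database round-trip is simulated with a short sleep.
import queue
import threading
import time

POOL_SIZE = 4            # try 4 vs. 64: throughput scales with the pool size
WRITERS = 64             # concurrent clients emitting log records
WRITES_EACH = 20
WRITE_LATENCY_S = 0.01   # simulated round-trip to the logging database

class ConnectionPool:
    """Hand out a bounded number of connections; callers block when the pool is exhausted."""
    def __init__(self, size: int) -> None:
        self._pool: "queue.Queue[int]" = queue.Queue()
        for conn_id in range(size):
            self._pool.put(conn_id)

    def write(self, record: str) -> None:
        conn = self._pool.get()            # blocks here when all connections are busy
        try:
            time.sleep(WRITE_LATENCY_S)    # stand-in for INSERT + commit over the network
        finally:
            self._pool.put(conn)

def main() -> None:
    pool = ConnectionPool(POOL_SIZE)
    start = time.monotonic()
    threads = [
        threading.Thread(
            target=lambda i=i: [pool.write(f"query {i}-{n}") for n in range(WRITES_EACH)]
        )
        for i in range(WRITERS)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    total = WRITERS * WRITES_EACH
    print(f"{total} log writes in {elapsed:.2f}s "
          f"(~{total / elapsed:.0f} writes/s with pool size {POOL_SIZE})")

if __name__ == "__main__":
    main()
```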

It’s an engaging story, especially if you’re the type of person who’s had to set up demos at a trade show where nothing worked until two hours before the show opened. Fun read.