Wild guess that noisy chips won't lead to better weather simulations


Here’s an interesting idea from New Scientist:

While researchers are striving to make the models more realistic, they are limited by the processing power of the supercomputers that run climate models, Palmer says. “That determines how fine of a grid we can solve the equations on, because of the computing cost,” he says.

Adding a degree of randomness to a particular model and running it multiple times could provide a cheaper way to increase realism, Palmer and colleagues argue, as it could be a “poor man’s surrogate for high-resolution models”.

But random number generation without special hardware can be expensive, and often isn’t very random anyway. The solution? Well, use special hardware…but not the way you may be thinking.

A way around this could be to use cheap hardware – low-cost computer chips that generate output with some random noise due to the way electrons bounce through them. Essentially, those chips produce the necessary randomness for free.

“It’s very speculative,” Palmer says. “But if it can be made to work, it would make much more efficient use of power.” The idea of adding randomness into the models is “very interesting and might be helpful for some cases”, says Reto Knutti of the Swiss Federal Institute of Technology in Zurich, “but in my view it will not solve all problems.”
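To make that a bit more concrete, here’s a toy sketch (my own, in Python; nothing to do with any real forecast code or with whatever Palmer’s group actually runs) of what “add a degree of randomness and run the model multiple times” might look like, with a tiny perturbation injected at each step to stand in for the noise a cheap chip would give you for free:

```python
# Toy sketch of the "poor man's ensemble" idea: run a cheap model many times,
# injecting a tiny random perturbation at each step to stand in for the noise
# an inexact chip would produce for free. Lorenz '63 is just a stand-in for a
# weather model; none of this is the actual scheme described in the article.
import numpy as np

def lorenz63_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def noisy_run(steps=8000, noise_amplitude=1e-7, seed=0):
    """Integrate the model, nudging the state each step to mimic hardware noise."""
    rng = np.random.default_rng(seed)
    state = np.array([1.0, 1.0, 1.0])
    for _ in range(steps):
        state = lorenz63_step(state)
        state *= 1.0 + noise_amplitude * rng.standard_normal(3)  # tiny multiplicative noise
    return state

# The "ensemble": identical model and initial conditions, different noise realisations.
ensemble = np.array([noisy_run(seed=member) for member in range(20)])
print("ensemble mean of x:  ", ensemble[:, 0].mean())
print("ensemble spread of x:", ensemble[:, 0].std())
```

Run it and you get an ensemble mean and a spread rather than a single answer; that spread is the thing the ensemble crowd actually cares about.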

I tell you though, I’m not sure about this. Randomness in one controlled area of the simulation could, I guess, be a good thing. But if you have generally noisy chips, how do you trust any part of the calculation? How do you know your noise isn’t somewhere you don’t want it, for example? I guess I’m not alone; responses from the Twitterverse following HPCwire’s post of the original story aren’t positive.

ianfoster: Sounds like nonsense to me — RT @HPCwire HPC News: Cheap and Noisy Chips Could Improve Climate Predictions

rplzzz: …I’m skeptical. If you use bit errors for “randomness”, how do you ensure that the errors are in the low bits instead of the exponent?
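He has a point, and it’s easy to see with a few lines of Python (just an illustration of the IEEE-754 bit layout, not of any actual hardware):

```python
# rplzzz's point in a few lines: in an IEEE-754 double, the low bits of the
# mantissa are harmless to perturb, while the exponent bits are catastrophic.
import struct

def flip_bit(x, bit):
    """Return x with the given bit of its 64-bit representation flipped (bit 0 = least significant)."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

value = 287.15                # a temperature-ish number, in kelvin
print(flip_bit(value, 0))     # lowest mantissa bit: changes the ~16th significant digit
print(flip_bit(value, 55))    # an exponent bit: the value jumps by a factor of 2**8 here
```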

Comments

  1. This does seem like a bit of a crazy idea. The whole point of simulations is to produce reproducible results that you can validate against actual reality. This implies you can run the simulation a second time and get the same results. This kind of system would generate random results that you can’t controllably reproduce. Seems like this would invalidate much of the results since it would be difficult or impossible to reproduce them.

    I worked for a simulation company for a number of years and we got beaten up by customers when results were inconsistent due to small software race conditions. I can’t see this approach working in the real world.

  2. Rick,

    There is an important role for ensembles of model runs to play in climate modeling. Ensembles allow you to study the sensitivity of your outputs to variations in initial conditions, changes in the physics parameterizations, and so on. In this case you wouldn’t be too worried about producing bit-for-bit identical results from run to run; you would be more interested in the distribution of the outputs. This is not inconsistent with validation because typically the low-order moments of those distributions are all you can reasonably measure anyhow. People who think that every bit in every cell of their double-precision data set is significant are deluding themselves.

    The problem with the “noisy chip” idea is that you can’t just throw any old randomness into the mix and call it an ensemble. You really only want it in the initial conditions; the solvers should be as accurate as possible. You also need the right amount of randomness. An error in the last bit of the mantissa won’t even be noticeable; an error in any bit of the exponent will probably crash the calculation. (That is what I was trying to get at on twitter, but these concepts don’t translate well into 140 character packages.)

    Furthermore, on rereading the article, it sounds as if the ECMWF researchers want to use errors in the calculation as a substitute for higher resolution. The idea seems to be that although you don’t know the effects of small-scale processes, you can approximate them with random perturbations to the solvers. Even assuming you can get the “right” amount of randomness, it’s not clear that this procedure would produce better results (in the sense of a better approximation to a high-resolution calculation) than a microphysics parameterization. To make an analogy, the former represents throwing darts blindly at a target, while the latter represents a thrower of middling ability taking aim as best he can. The aiming thrower probably won’t throw a perfect round, but he’s sure to have better success than the thrower that makes no attempt to aim at all.
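For what it’s worth, the point in comment 2 about validating distributions rather than bit-for-bit runs is easy to demonstrate with a toy (mine, not anyone’s climate model): two ensembles driven by completely different noise give different individual runs but essentially the same low-order moments, which is what would actually get compared against observations.

```python
# Toy version of the reproducibility point: with noise in the model you lose
# bit-for-bit reproducibility of individual runs, but the thing you actually
# validate, the distribution of outputs, is still reproducible. The "model"
# here is just a damped process driven by noise, nothing physical.
import numpy as np

def one_member(rng, steps=2000):
    """One ensemble member: a noise-driven damped recurrence; return the final state."""
    x = 0.0
    for _ in range(steps):
        x = 0.95 * x + rng.standard_normal()
    return x

def ensemble_moments(seed, members=1000):
    rng = np.random.default_rng(seed)
    finals = np.array([one_member(rng) for _ in range(members)])
    return finals.mean(), finals.std()

# Two ensembles with completely different noise: member by member the runs differ,
# but the mean and standard deviation agree to within sampling error.
print(ensemble_moments(seed=1))
print(ensemble_moments(seed=2))
```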
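And here’s an equally rough caricature of the darts analogy: a coarse model is missing a small-scale term, and we either replace it with blind random numbers that have the right mean and variance, or with a crude “parameterization” that just supplies the mean. In this deliberately simple linear toy the parameterization wins, which is the commenter’s point; whether that carries over to a real stochastic scheme is exactly the open question.

```python
# Darts analogy as a toy: a resolved variable x is driven by a "small-scale"
# term s that the coarse model cannot compute. Compare blind random perturbations
# (right moments, no structure) against a parameterization that uses the mean of s.
# This is only a caricature, not evidence about real climate models.
import numpy as np

steps = 5000
n = np.arange(steps)
s_true = 0.5 + 0.3 * np.sin(0.7 * n)                # structured small-scale forcing ("truth")
rng = np.random.default_rng(42)
s_random = 0.5 + 0.3 * rng.standard_normal(steps)   # right mean and variance, no structure
s_param = np.full(steps, 0.5)                       # crude parameterization: just the mean

def integrate(forcing):
    """Damped linear model x[i+1] = 0.9 * x[i] + forcing[i], returned as a trajectory."""
    x = np.zeros(steps + 1)
    for i in range(steps):
        x[i + 1] = 0.9 * x[i] + forcing[i]
    return x[1:]

truth = integrate(s_true)
for name, forcing in [("random perturbations", s_random), ("mean parameterization", s_param)]:
    error = integrate(forcing) - truth
    print(f"{name:>22}: RMS error = {np.sqrt(np.mean(error ** 2)):.3f}")
```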