Floating point error is the least of my worries

“Nothing brings fear to my heart more than a floating point number.” — Gerald Jay Sussman

The context of the above quote was Sussman’s presentation “We really don’t know how to compute.” It was a great presentation and I’m very impressed by Sussman. But I take exception to his quote.

I believe what he meant by his quote was that he finds floating point arithmetic unsettling because it is not as easy to rigorously understand as integer arithmetic. Fair enough. Floating point arithmetic can be tricky. Things can go spectacularly bad for reasons that catch you off guard if you’re unprepared. But I’ve been doing numerical programming long enough that I believe I know where the landmines are and how to stay away from them. And even if I’m wrong, I have bigger worries.

Nothing brings fear to my heart more than modeling error.

The weakest link in applied math is often the step of turning a physical problem into a mathematical problem. We begin with a raft of assumptions that are educated guesses. We know these assumptions can’t be exactly correct, but we suspect (hope) that the deviations from reality are small enough that they won’t invalidate the conclusions. In any case, these assumptions are usually far more questionable than the assumption that floating point arithmetic is sufficiently accurate.

Modeling error is usually several orders of magnitude greater than floating point error. People who nonchalantly model the real world and then sneer at floating point as just an approximation strain at gnats and swallow camels.

In between modeling error and floating point error on my scale of worries is approximation error. As Nick Trefethen has said, if computers were suddenly able to do arithmetic with perfect accuracy, 90% of numerical analysis would remain important.

To illustrate the difference between modeling error, approximation error, and floating point error, suppose you decide that some uncertain quantity can be described by a normal distribution. This is actually two assumptions: that the process is random, and that as a random variable it has a normal distribution. Those assumptions won’t be exactly true, so this introduces some modeling error.

Next we have to compute something about a normal distribution, say the probability of a normal random variable being in some range. This probability is given by an integral, and some algorithm estimates this integral and introduces approximation error. The approximation error would exist even if the steps in the algorithm could be carried out in infinite precision. But the steps are not carried out with infinite precision, so there is some error introduced by implementing the algorithm with floating point numbers.

For a simple example like this, approximation error and floating point error will typically be about the same size, both extremely small. But in a more complex example, say something involving a high-dimensional integral, the approximation error could be much larger than floating point error, but still smaller than modeling error. I imagine approximation error is often roughly the geometric mean of modeling error and floating point error, i.e. somewhere around the middle of the two on a log scale.
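
For concreteness, here is a rough Python sketch of the example above. The midpoint_integral helper and the choice of 100 subintervals are purely illustrative; the point is just to see the two smaller kinds of error side by side.

    import math

    def normal_pdf(x):
        # Standard normal density
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def midpoint_integral(f, a, b, n=100):
        # Crude quadrature: midpoint rule with n subintervals
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    # Probability that a standard normal variable falls in (0, 1)
    exact = 0.5 * math.erf(1 / math.sqrt(2))          # closed form via the error function
    approx = midpoint_integral(normal_pdf, 0.0, 1.0)  # algorithmic estimate

    print(exact)                # 0.341344746...
    print(abs(approx - exact))  # approximation error, roughly 1e-6 here

Each arithmetic step above is accurate to about 1 part in 10^16, so the floating point error is dwarfed by the roughly 10^-6 approximation error of the crude integration rule, which in turn would be dwarfed by the modeling error in assuming normality in the first place.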

In Sussman’s presentation he says that people worry too much about correctness. Often correctness is not that important. It’s often good enough to produce a correct answer with reasonably high probability, provided the consequences of an error are controlled. I agree, but in light of that it seems odd to be too worried about inaccuracy from floating point arithmetic. I suspect he’s not that worried about floating point and that the opening quote was just an entertaining way to say that floating point math can be tricky.


13 thoughts on “Floating point error is the least of my worries”

  1. What a great post. This is an evergreen conversation topic among the finite element crowd, particularly as it interfaces with engineering practice. People labor to build “exact” models, but the driving inputs are total guesses (for instance, the peak ground acceleration a bridge will ever experience during an earthquake). How do you overcome the belief in such a community that the results of an analysis match physical reality?

  2. In lots of Quantum Chemistry calculations the approximation error is often orders of magnitude bigger than the results. You just hope that when you compute the differences, the errors are the same size (and sign) and cancel out! Some would argue that it’s the model error that’s that big, but we know the theoretical model perfectly; we just need to approximate the hell out of it if we want calculations fast enough to ever graduate.

  3. I worried about modeling error when I worked with finite elements, but that was pretty firm ground compared to statistics. If you’re modeling, for example, fluid flow, the basic physical laws are well known, though you may be guessing about material properties.

    Statistics rushes in where angels fear to tread. Don’t know the first thing about a complex system? Never mind. Fit a regression model and press on.

  4. A combo modeling and floating point error: I’ve seen code, recently written by veteran programmers, that uses floating point for a currency value (i.e., a price) in dollars rather than an integer in cents.
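
    A quick sketch of the hazard, with made-up prices; keeping an integer count of cents is one common fix, not the only one:

        # Summing prices as binary floating point dollars drifts from the exact total.
        prices = [0.10, 0.10, 0.10]
        print(sum(prices))       # 0.30000000000000004, not 0.3

        # Summing an integer number of cents is exact; divide only for display.
        cents = [10, 10, 10]
        print(sum(cents) / 100)  # 0.3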

  5. => (+ 0.1 0.1 0.1)
    0.30000000000000004

    No matter how well you know the landmines, imprecise floating point is a hazard.

    You can substitute “manual memory management” for “floating point” in your third paragraph, and it remains a plausible statement — which means to me that floating point error is worth worrying about (along with modeling and approximation error):

    Manual memory management can be tricky. Things can go spectacularly bad for reasons that catch you off guard if you’re unprepared. But I’ve been doing programming long enough that I believe I know where the landmines are and how to stay away from them. And even if I’m wrong, I have bigger worries.

  6. My physics professor used to say that an approximate answer to the right question is better than an exact answer to the wrong question.

    A lot of effort goes into creating software that computes an answer with a small approximation error, and computes it as fast as possible. Modern software can fit a linear model to hundreds of variables and a billion observations, complete with p-values and estimates of standard errors, and do it in seconds.

    As long as the software is fast, people using it are happy. It is curious that there is often no consideration of whether a LINEAR model is in any way justified by the data.

  7. The problem is that floating point, being part of the model itself, is often the part of the whole model we understand least. And that is what makes it dangerous.

  8. I agree with John: most of the time the translation of reality into a good mathematical problem is the key issue. If your model is unstable with float, double, or long double, look for the problem somewhere outside the numeric representation.

    In quantum chemistry, and in a lot of physics, simplifying assumptions are very common, so the model and the theory are not strictly equivalent. There are suites to test the quality of your floating point resources. To mention only one, try http://en.wikipedia.org/wiki/SPECfp, which includes the solution of some quantum chemistry and physics problems.

  9. I find the changes made by a decent optimising compiler to account for significantly more unpredictability than the standard implications of IEEE754 arithmetic. When you add in inlining and link-time-code-generation, it gets even more interesting. Of course you can turn on “strict math” modes, but you end up throwing away quite a lot of performance. The end message is the same – floating point is imprecise – but it’s usually more imprecise than just applying the IEEE rules would imply. Or to say it another way – you usually don’t even know how much you don’t know.
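
    The reordering problem can be seen without a compiler at all: floating point addition is not associative, which is exactly the freedom that fast-math optimizations exploit. A tiny Python illustration:

        a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
        b = 0.1 + (0.2 + 0.3)   # 0.6
        print(a == b)           # False: regrouping the sum changes the result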

  10. Hamilton-Lovecraft

    @Chas Emerick: The entire point of this post is that you’re making a big deal out of the 4e-17 without giving any thought at all to the accuracy and provenance of your 0.1s.

  11. “Nothing brings fear to my heart more than modeling error.”

    Floating-point *is* a model. It’s often a pretty good one, and occasionally a bad one.

    It’s an especially tricky one, because it looks exactly like the arithmetic we all learned in middle school, but often it doesn’t behave quite the same.
