Minimizing worst case error

It’s very satisfying to know that you have a good solution even under the worst circumstances. Worst-case thinking doesn’t have to be concerned with probabilities, with what is likely to happen, only with what could happen. But whenever you speak of what could happen, you have to limit your universe of possibilities.

Suppose you ask me to write a program that will compute the sine of a number. I come up with a Chebyshev approximation for the sine function over the interval [0, 2π] so that the maximum approximation error at any point in that interval is less than 10⁻⁸. So I proudly announce that in the worst case the error is less than 10⁻⁸.

Then you come back and ask “What if I enter a number larger than 2π?” Since my approximation is a polynomial, you can make it take on as large a value as you please by plugging in a large enough argument. But sine only takes values between −1 and 1, so the worst case error is unbounded.
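A small sketch in Python shows both halves of this. It uses NumPy’s Chebyshev class; the degree and the sample grid are illustrative choices, not the exact approximation described above.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Fit a Chebyshev polynomial to sine on [0, 2*pi].
# Degree 16 is an illustrative choice; a real implementation would pick
# the degree to guarantee a specific error bound such as 1e-8.
xs = np.linspace(0, 2 * np.pi, 1000)
p = Chebyshev.fit(xs, np.sin(xs), deg=16)

# Inside the interval the approximation is very accurate.
grid = np.linspace(0, 2 * np.pi, 10001)
print(np.max(np.abs(p(grid) - np.sin(grid))))  # tiny, well below 1e-8

# Outside the interval the polynomial grows without bound,
# while sine stays between -1 and 1.
print(p(100.0), np.sin(100.0))
```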

So I go back and do some range reduction and I tell you that my program will be accurate to within 10⁻⁸. And because I got burned last time by not specifying limits on the input, I make explicit my assumption that the input is a valid, finite IEEE 754 64-bit floating point number. Take that!
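Here’s a minimal sketch of that version, reusing the polynomial p from the snippet above. The function name my_sin and the naive reduction modulo 2π are my own illustration; naive reduction is itself a source of error for very large arguments, which is part of what the related post below is about.

```python
import numpy as np

def my_sin(x):
    """Sine via naive range reduction plus the Chebyshev approximation p.

    Assumes x is a valid, finite IEEE 754 double. Reducing modulo 2*pi
    works well for moderate x, but loses accuracy for huge arguments
    because 2*pi itself is only stored approximately.
    """
    r = np.mod(x, 2 * np.pi)  # fold x into [0, 2*pi)
    return p(r)

print(my_sin(10.0), np.sin(10.0))  # agree to roughly the fit's accuracy
```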

So then you come back and say “What if I make a mistake entering my number?” I object that this isn’t fair, but you say “Look, I want to minimize the worst case scenario, not just the worst scenario that fits into your frame of thinking.”

So then I rewrite my program to always return 0. That result can be off by as much as 1, but never more than that.
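To make the arithmetic explicit: if the program always returns a constant c, then since the true value of sine lies somewhere in [−1, 1], the worst-case error is max(|−1 − c|, |1 − c|) = 1 + |c|. That is minimized at c = 0, so returning zero guarantees an error of at most 1 no matter what the input is, or how badly it was mistyped.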

This is an artificial example, but it illustrates a practical point: worst-case solutions minimize the worst outcome relative to some set of scenarios under consideration. With more imagination you can always come up with a bigger set. Maybe the initial set left out some important and feasible scenarios. Or maybe it was big enough and only left out far-fetched scenarios. It may be quite reasonable to exclude far-fetched scenarios, but when you do, you’re implicitly appealing to probability, because far-fetched means low probability.

Related post: Sine of a googol

3 thoughts on “Minimizing worst case error”

  1. The “worst case” in systems affecting physical reality often reduces, at least in part, to making the system “fail safe”. As an extreme example that has become a meme, this particularly applies to nuclear power plants: No matter what goes wrong, you always want to be able to shut it down.

    I’m presently working on a system design whose predecessor already had a “worst case” event in the field. A PCB that appeared to follow bog-standard design rules, manufactured with all the tolerances well within limits but stacking the wrong way, was installed in a chassis with fasteners that were ever so slightly over-torqued. So when one of the screws holding the chassis in the rack loosened, the chassis canted ever so slightly, which flexed the circuit board ever so slightly, which took up all the tolerances, shorting a power supply to ground.

    This was a very robust power supply. Two of them in parallel, in fact. The resulting flame burned for over two minutes until external power was removed, and would have otherwise kept burning until all the resin in the fiberglass PCB had not just burned, turning to carbon, but had arced away as gas and particles.

    So, now we’re making the next generation system. Which must cost less than its predecessor while doing far more. Which means no money for fancy safety systems: Safety must be designed in right from the start.

    Of course, the first question was how best to get the power supplies turned off when the “On Fire” bit gets set (how to set that bit is an entirely different discussion). Engineers designed ever more elaborate circuits after the simplest one had failure modes all its own that could result in the supplies remaining on while the “On Fire” bit was set.

    In essence, the engineers were trying to “force” the system into a safe state. But when the worst case is already happening, you likely lack the means to force much of anything.

    The solution becomes obvious when “fail safe” is taken to the limit: The “real” problem is that applying power in the first place makes the system “less safe”! That is, the “worst case” can’t happen when there’s no power. So the goal becomes to make the system difficult to turn on, and difficult to keep on, so that “doing nothing” causes it to turn off.

    Basically, when it all goes to hell, just return zero.

  2. Or, as I’ve heard it put a bit more crudely, “Any time you idiot-proof something, the universe gives you a worse idiot.”
