What’s so hard about finding a hypotenuse?

Numerical libraries usually have a function for finding the hypotenuse of a right triangle. Isn’t this trivial? If the sides of a triangle are x and y, just write this:

    sqrt(x*x + y*y)

That works in theory, but in practice it may fail. If x is so large that x*x overflows, the code will produce an infinite result.

We’d like to be able to say to the computer “Now x*x and y*y might be too big, but just wait a minute. I’m going to take a square root, and so the numbers will get smaller. I just need to borrow some extra range for a minute.” You can do something like that in environments that have extended precision, but that would be inefficient and unnecessary. And of course it would fail completely if you’re already using the largest numeric type available.

Here’s how to compute sqrt(x*x + y*y) without risking overflow.

  1. max = maximum(|x|, |y|)
  2. min = minimum(|x|, |y|)
  3. r = min / max
  4. return max*sqrt(1 + r*r)

Notice that the square root argument is between 1 and 2 and so there is no danger of overflow in taking the square root. The only way the algorithm could overflow is if the result itself is too large to represent. In that case the fault lies with the problem, not the algorithm: the algorithm was asked to compute a quantity larger than the largest representable number, and so it does the correct thing by overflowing and returning an infinite value. However, the algorithm will not unnecessarily return an infinite value due to an overflow in an intermediate computation.

To see how the algorithm above may succeed when the naive implementation would fail, let M be the largest representable number. For IEEE double precision, M is on the order of 10^308. All that matters for this example is that M is greater than 4. Let x and y equal 0.5 M. The algorithm above would return 0.707 M. But the naive algorithm would compute x*x and overflow since 0.25 M^2 > M. Then x*x + y*y would evaluate to infinity and the square root would evaluate to infinity.

A hypotenuse function is included in math.h on every system that I am aware of. It may be named hypot or _hypot.

More floating point posts

25 thoughts on “What’s so hard about finding a hypotenuse?”

  1. Dimitris Leventeas

    Nice post. Overflow is one of the most common reasons why an implementation of binary search may be buggy. I think that almost every programmer should know the basics of numerical computation.

    If someone is more interested, IEEE 754 is a good place to start (at least as a reference).

  2. Nice. Although there are still cases where this approach loses precision. Specifically, if r is small, 1+r^2 may round to 1 when sqrt(1+r^2) is still representable as something a little bigger than 1. I believe this can result in something like half of the bits of the mantissa being wrong.

    Here is how the C library actually implements hypot() to avoid losing precision regardless of input. No, I do not completely follow it. Although it does start by taking the min/max :-).

    Incidentally, the hypot function in math.h is part of the POSIX standard, and is mandatory as of Issue 6 (2004).

  3. wouldn’t using something like BigDecimal solve the problem?
    And if x*x is a problem, what do you call getting the binomial of it? ;)

  4. Nemo: Evaluating sqrt(1 + r^2) is not a problem for small r. The Taylor series of sqrt(1 + r^2) is 1 + r^2/2 + O(r^3). So if r^2 is so small that 1 + r^2 = 1 to machine precision, sqrt(1 + r^2) is also 1 to machine precision.

    Thomas: It’s true that arbitrary precision types like BigDecimal would solve the problem, but this would be less efficient.

  5. Nemo: It’s not obvious. I thought there might be a problem with small r until I calculated the Taylor series.

  6. Preston L. Bannister

    Bit of a problem with (3) if x were large and y very small. Could overflow r. Need a test to choose the algorithm variant.

  7. Preston —

    How can the quotient min/max overflow, when it is always less than or equal to 1?

    It can underflow, but that is not actually a problem, as John points out above.

  8. Preston L. Bannister

    I will admit to being fuzzy about IEEE math. For the sake of illustration, say the range of your floating point number was 10^10 to 10^-10. Given x = 10^6 and y = 10^-6 (i.e. a very large and skinny triangle) using the above approach at (3) gives r = 10^-12 (and underflow).

    Does (1 + underflow) yield 1 or an error? We will get a correct result from (4) if the underflow is ignored. Is this result general, or will some languages or libraries report an error? Do we need a step (3b) to explicitly ignore underflow?

    To my reading, it looks like Java ignores underflow (“gradual underflow as defined by IEEE 754” but without any indication … slightly dubious), but that other(?) languages or libraries may choose to signal or record an error.

    In general, if doing Physics calculations, I would always want an indication that underflow occurred. In this particular case an underflow in (3) is acceptable, and can be ignored … but I would rather this was explicit.


  9. The complex case is more complicated. For example, you have to specify a branch of the square root function. But I suppose the basic idea of the algorithm would work.

  10. In Haskell the magnitude of a complex number is implemented using similar reasoning, although for precision purposes, we choose to slide the exponents rather than divide.

    {-# SPECIALISE magnitude :: Complex Double -> Double #-}
    magnitude :: RealFloat a => Complex a -> a
    magnitude (x:+y) = scaleFloat k
        (sqrt (sqr (scaleFloat mk x) + sqr (scaleFloat mk y)))
      where
        k = max (exponent x) (exponent y)
        mk = negate k
        sqr z = z * z

  11. Guilty of making me think. I’m embarrassed how long it took me to derive that method from sqrt(a*a+b*b), but I’m pleased to have done it.

  12. Another benefit of a hypot() function is that using one helps ensure that the arguments x and y are only evaluated once. Otherwise, temporary variables or a high enough optimisation level (subexpression elimination) would be required.

    Nice try. But what happens if both numbers are equal to 0? You’ll be solving the overflow problem, but…

  14. [warning: shameless self-promotion] Folks might also like a blog post I recently wrote on the generalization of the “hypot trick” to p-norms, including an interesting connection to the infamous log-sum-exp trick.

  15. Benito Díaz is correct, and I’m surprised that the author never corrected his post. An efficient implementation:

    1. absx = abs(x)
    2. absy = abs(y)
    3. if (absx < absy)
    min = absx, max = absy;
    else
    max = absx, min = absy;
    4. if (max == 0) return 0
    5. r = min / max
    6. return max*sqrt(1 + r*r)

  16. The implementation linked-to by Nemo uses quite a different method: it uses techniques involving direct access to the mantissa and exponent.
    To start with, there are no divisions; if the numbers are so large or small as to be in danger of over/underflow, they are first scaled by a common power of two, and the result compensated at the end (such operations do not produce any rounding error, although an overflow can occur at the end if the result is actually out of range).

    Some of the strangeness in there is to reduce the precision loss while finding x^2 + y^2 to a bare minimum. After making sure x >= y > 0: when y <= 0.5*x it evaluates x^2 + y^2 essentially directly, splitting the squares into high and low parts. When y > 0.5*x, the y^2 term would become too large relative to the first term, so a different method is used based on (2*x)*y + (x-y)^2; here the first term is at least 4x the second, and is evaluated in two parts in a similar way.

    There still needs to be a sqrt after that. But the sqrt function is very well conditioned for errors: it always produces the closest possible result based on the input, and if its input is changed by +/-1 code, the result is changed at most +/-1 code, and often not at all (it is a surjective mapping), so the overall result is very accurate (the comment says that the hypot result will be <1 ulp away from the 'perfect' result for the given inputs).

    If you try comparing sqrt(x*x + y*y) to hypot(x,y), you'll find they are identical in the majority of cases; when they differ, hypot is just a wee bit more accurate. This goes to show how much effort is put into these libraries for precision.

  17. Iurii Selinnyi, here’s one way. If you have a 3D Vector comprised of x, y, z, and if you have written a function hyp(a, b) based on this article, then just do this:
    hyp(hyp(x, y), z);

    This gets the xy hypotenuse, and uses that length as a leg along with z, to get the xyz hypotenuse.

Comments are closed.