Numerical libraries usually have a function for finding the hypotenuse of a right triangle. Isn’t this trivial? If the sides of a triangle are `x` and `y`, just write this:

`sqrt(x*x + y*y)`

That works in theory, but in practice it may fail. If `x` is so large that `x*x` overflows, the code will produce an infinite result.

We’d like to be able to say to the computer “Now `x*x` and `y*y` might be too big, but just wait a minute. I’m going to take a square root, and so the numbers will get smaller. I just need to borrow some extra range for a minute.” You can do something like that in environments that have extended precision, but that would be inefficient and unnecessary. And of course it would fail completely if you’re already using the largest numeric type available.

Here’s how to compute `sqrt(x*x + y*y)` without risking overflow.

1. max = maximum(|x|, |y|)
2. min = minimum(|x|, |y|)
3. r = min / max
4. return max*sqrt(1 + r*r)

Notice that the square root argument is between 1 and 2 and so there is no danger of overflow in taking the square root. The only way the algorithm could overflow is if the result itself is too large to represent. In that case the fault lies with the problem, not the algorithm: the algorithm was asked to compute a quantity larger than the largest representable number, and so it does the correct thing by overflowing and returning an infinite value. However, the algorithm will not *unnecessarily* return an infinite value due to an overflow in an intermediate computation.

To see how the algorithm above may succeed when the naive implementation would fail, let M be the largest representable number. For IEEE double precision, M is on the order of 10^308. All that matters for this example is that M is greater than 4. Let x and y equal 0.5M. The algorithm above would return 0.707M. But the naive algorithm would compute `x*x` and overflow, since 0.25M² > M whenever M > 4. Then `x*x + y*y` would evaluate to infinity, and the square root would evaluate to infinity.

A hypotenuse function is included in `math.h` on every system that I am aware of. It may be named `hypot` or `_hypot`.


Nice post. Overflow is one of the most common reasons why an implementation of binary search may be buggy. I think that almost every programmer should know the basics of numerical computation.

If someone is more interested, IEEE 754 is good place to start (at least as a reference).

Nice. Although there are still cases where this approach loses precision. Specifically, if r is small, 1+r^2 may round to 1 when sqrt(1+r^2) is still representable as something a little bigger than 1. I believe this can result in something like half of the bits of the mantissa being wrong.

Here is how the C library actually implements hypot() to avoid losing precision regardless of input. No, I do not completely follow it. Although it does start by taking the min/max :-).

Incidentally, the hypot function in math.h is part of the POSIX standard, and is mandatory as of Issue 6 (2004).

Wouldn’t using something like BigDecimal solve the problem?

And if x*x is a problem, what do you call getting the binomial of it?

Nemo: Evaluating sqrt(1 + r^2) is not a problem for small r. The Taylor series of sqrt(1 + r^2) is 1 + r^2/2 + O(r^3). So if r^2 is so small that 1 + r^2 = 1 to machine precision, sqrt(1 + r^2) is also 1 to machine precision.

Thomas: It’s true that arbitrary precision types like BigDecimal would solve the problem, but this would be less efficient.

John: Good point. I stand corrected.

Nemo: It’s not obvious. I thought there might be a problem with small r until I calculated the Taylor series.

Beautiful post.

Apparently, this is part of ISO/IEC 9899:1999.

Bit of a problem with (3) if x were large and y very small. Could overflow r. Need a test to choose the algorithm variant.

Preston —

How can the quotient min/max overflow, when it is always less than or equal to 1?

It can underflow, but that is not actually a problem, as John points out above.

I will admit to being fuzzy about IEEE math. For the sake of illustration, say the range of your floating point number was 10^10 to 10^-10. Given x = 10^6 and y = 10^-6 (i.e. a very large and skinny triangle) using the above approach at (3) gives r = 10^-12 (and underflow).

Does (1 + underflow) yield 1 or an error? We will get a correct result from (4) if the underflow is ignored. Is this result general, or will some languages or libraries report an error? Do we need a step (3b) to explicitly ignore underflow?

To my reading, it looks like Java ignores underflow (“gradual underflow as defined by IEEE 754″ but without any indication … slightly dubious), but that other(?) languages or libraries may choose to signal or record an error.

In general, if doing Physics calculations, I would always want an indication that underflow occurred. In this particular case an underflow in (3) is acceptable, and can be ignored … but I would rather this was explicit.

Interesting.

Nice, I love these math/numeric programming posts. Thanks, John.

Is there something similar when x and y are complex numbers?

The complex case is more complicated. For example, you have to specify a branch of the square root function. But I suppose the basic idea of the algorithm would work.

In Haskell the magnitude of a complex number is implemented using similar reasoning, although for precision purposes, we choose to slide the exponents rather than divide.

```haskell
{-# SPECIALISE magnitude :: Complex Double -> Double #-}
magnitude :: RealFloat a => Complex a -> a
magnitude (x:+y) = scaleFloat k
                     (sqrt (sqr (scaleFloat mk x) + sqr (scaleFloat mk y)))
    where k  = max (exponent x) (exponent y)
          mk = - k
          sqr x = x * x
```

Guilty of making me think. I’m embarrassed how long it took me to derive that method from sqrt(a*a+b*b), but I’m pleased to have done it.

Thanks a lot !!!

Another benefit of a hypot() function is that using one helps ensure that the arguments x and y are only evaluated once. Otherwise, temporary variables or a high enough optimisation level (subexpression elimination) would be required.

Interesting post, thank you.

For those curious about why this method works and how it is derived, Wikipedia shows how it is done:

http://en.wikipedia.org/wiki/Hypot#Implementation

Nice try. But what happens if both numbers are equal to 0? You’ll be solving the overflow problem, but…

[warning: shameless self-promotion] Folks might also like a blog post I recently wrote on the generalization of the “hypot trick” to p-norms, including an interesting connection to the infamous log-sum-exp trick.