Numerical problem with an interest calculation

The previous post looked at the difference between continuously compounded interest and interest compounded a large discrete number of times. This difference was calculated using the following Python function.

    from math import exp

    def f(P, n, r): return P*(exp(r) - (1 + r/n)**n)

where the function arguments are principal, number of compoundings, and interest rate.

When I was playing around with this I noticed

    f(1000000, 365*24*60*60, 0.12)

returned a negative value, −0.00066. If this were correct, it would mean that compounding a 12% loan once a second results in more interest than continuous compounding. But this cannot happen. Compounding more often can only increase the amount of interest.

The problem with the Python function is that when n is very large, exp(r) and (1 + r/n)^n are so nearly equal that their subtraction results in a complete loss of precision, a phenomenon known as catastrophic cancellation. The two terms are indistinguishable within the limits of floating-point calculation because the former is the limit of the latter as n goes to infinity.
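The cancellation is easy to observe directly. The sketch below uses the same values as above and prints the naively computed difference; depending on the platform's math library, it can come out with the wrong sign and magnitude.

```python
from math import exp

r = 0.12                 # 12% annual rate
n = 365 * 24 * 60 * 60   # compounding once per second

a = exp(r)
b = (1 + r / n) ** n

# The true difference is about 2.57e-10 per unit of principal, but the
# rounding error incurred in computing b (1 + r/n cannot be represented
# exactly, and raising it to a huge power amplifies that error) is larger
# than the difference itself, so this subtraction is essentially noise.
print(a - b)
```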

One way to calculate

exp(r) − (1 + r/n)^n

for moderately small r (such as typical interest rates) and very large n (such as the number of seconds in a year) would be to write out the power series for exp(r), expand (1 + r/n)^n using the binomial theorem, and subtract.

Then we find that

exp(r) − (1 + r/n)^n ≈ r² / 2n

is a good approximation.

When n = 365 × 24 × 60 × 60 and r = 0.12, the approximation gives 2.28 × 10⁻¹⁰ and the correct result is 2.57 × 10⁻¹⁰ (both per unit of principal; multiply by P to get the dollar amount). The approximation is only good to one significant figure, but it gets the sign and the order of magnitude correct. You could retain more series terms for more accuracy.
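The numbers above can be checked with a short script. The helper names below are mine; the "exact" value computes (1 + r/n)^n as exp(n·log1p(r/n)), one standard way to avoid rounding 1 + r/n directly, not necessarily how the figures in the post were produced.

```python
from math import exp, log1p

def approx_gap(n, r):
    # Leading series term: exp(r) - (1 + r/n)**n ≈ r²/2n
    return r * r / (2 * n)

def exact_gap(n, r):
    # Evaluate (1 + r/n)**n as exp(n * log1p(r/n)); log1p(x) computes
    # log(1 + x) accurately for small x, so 1 + r/n is never rounded.
    return exp(r) - exp(n * log1p(r / n))

n, r = 365 * 24 * 60 * 60, 0.12
print(approx_gap(n, r))  # about 2.28e-10
print(exact_gap(n, r))   # about 2.57e-10
```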

3 thoughts on “Numerical problem with an interest calculation”

  1. Hi John,

    Thanks for sharing the fascinating problem! Two questions and an observation.

    Are you sure about the expected correct result of 2.57e-10? According to Mathematica, the correct result is approximately 0.000257:

`N[P (Exp[r] - (1 + r/n)^n) /. {P -> 1000000, n -> 365 24 60 60, r -> 12/100}, 79]`

    gives

    0.000257419371824303822825780738107732235578342913009226112743770044.

    Also, could it be that the main floating-point problem isn’t cancellation in the subtraction but that `(1 + r/n)**n` loses a lot of precision when n is large?

    When I reimplement the formula as

    `P * (exp(r) - exp(n * log1p(r/n)))`,

    Python’s result agrees with Mathematica’s to 9 places: 0.00025741941911405775.

    Thanks again for the fun article :-)

    Cheers,
    Tom

    P.S. I should have known to try https://herbie.uwplse.org/ first. When I gave it your initial implementation, it nearly instantly identified the same rewrite that I spent yesterday’s lunch working out:

    `P * (exp(r) - exp((log1p((r / n)) * n)))`
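[Editor's note: the rewrite in the comment above runs as follows, with the values from the post; the printed value agrees with the one the commenter reports.]

```python
from math import exp, log1p

P, n, r = 1_000_000, 365 * 24 * 60 * 60, 0.12

# (1 + r/n)**n rewritten as exp(n * log1p(r/n)), so 1 + r/n is never
# rounded directly and no catastrophic error enters before the subtraction
result = P * (exp(r) - exp(n * log1p(r / n)))
print(result)  # ≈ 0.000257419...
```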

  2. Is there a library implementing expm1(x) for x near zero, such that expm1(x) === exp(x) - 1 but without the loss of precision? Ditto for log1p(x) === log(1 + x).

  3. Yes, that’s what expm1 and log1p do, where the right-hand side is interpreted mathematically rather than as a floating-point operation.
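[Editor's note: both functions are in Python's standard `math` module, and the precision difference near zero is easy to demonstrate.]

```python
from math import exp, expm1, log, log1p

x = 1e-12

# Naive forms: 1 + x and exp(x) are rounded to doubles near 1.0 first,
# so the subtraction/log keeps only a few significant digits of x
print(exp(x) - 1)
print(log(1 + x))

# Fused forms: computed without forming a value near 1.0, full precision
print(expm1(x))
print(log1p(x))
```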

Comments are closed.