Computing large Fibonacci numbers

The previous post discussed two ways to compute the nth Fibonacci number. The first is to compute all the Fibonacci numbers up to the nth iteratively using the defining property of Fibonacci numbers

F_{n+2} = F_n + F_{n+1}

using extended-precision integer arithmetic.

The second approach is to use Binet’s formula

F_n = round( φ^n / √5 )

where φ is the golden ratio.

It’s not clear which approach is more efficient. You might naively say that the iterative approach has run time O(n) while Binet’s formula is O(1). That doesn’t take into account how much work goes into each step, but it does suggest that eventually Binet wins.

The relative efficiency of each algorithm depends on how it is implemented. In this post I will compare using Python’s integer arithmetic and the mpmath library for floating point. Here’s my code for both methods.

from math import log10
import mpmath as mp

def fib_iterate(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def digits_needed(n):
    # Number of decimal digits in F(n), since F(n) ≈ φ^n / √5
    phi = (1 + 5**0.5) / 2
    return int(n*log10(phi) - 0.5*log10(5)) + 1

def fib_mpmath(n, guard_digits=30):
    digits = digits_needed(n)

    # Set decimal digits of precision
    mp.mp.dps = digits + guard_digits

    sqrt5 = mp.sqrt(5)
    phi = (1 + sqrt5) / 2
    x = (phi ** n) / sqrt5

    return int(mp.nint(x))

Next, here’s some code to compare the run times.

import time

def compare(n):
    start = time.perf_counter()
    x = fib_iterate(n)
    elapsed = time.perf_counter() - start
    print(elapsed)

    start = time.perf_counter()
    y = fib_mpmath(n)
    elapsed = time.perf_counter() - start
    print(elapsed)

    if x != y:
        print("Methods produced different results.")

This code shows that the iterative approach is faster for n = 1,000 but Binet’s method is faster for n = 10,000.

>>> compare(1_000)
0.0002502090001144097
0.0009207079999669077
>>> compare(10_000)
0.0036547919999065925
0.002145750000181579

For larger n, the efficiency advantage of Binet’s formula becomes more apparent.

>>> compare(1_000_000)
11.169050417000108
2.0719056249999994

Guard digits and correctness

There is one unsettling problem with the function fib_mpmath above: how many guard digits do you need? To compute a number correctly to 100 significant figures, for example, requires more than 100 digits of working precision. How many more? It depends on the calculation. What about our calculation?

If we compute the 10,000th Fibonacci number using fib_mpmath(10_000, 2), i.e. with 2 guard digits, we get a result that is incorrect in the last digit. To compute the 1,000,000th Fibonacci number correctly, we need 5 guard digits.

We don’t need many guard digits, but we’re guessing at how many we need. How might we test whether we’ve guessed correctly? One way would be to compute the result using fib_iterate and compare results, but that defeats the purpose of using the more efficient fib_mpmath.

If floating point calculations produce an incorrect result, the error is likely to be in the least significant digits. If we knew that the last digit was correct, that would give us more confidence that all the digits are correct. More generally, we could test the result mod m. I discussed Fibonacci numbers mod m in this post.

When m = 10, the last digits of the Fibonacci numbers repeat with a cycle of length 60. So the Fibonacci numbers with index n and with index n mod 60 have the same last digit.
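The cycle of 60 can be checked directly. Here’s a self-contained sketch, with the post’s fib_iterate repeated so the snippet runs on its own:

```python
def fib_iterate(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Last digits repeat with period 60, so F(n) and F(n mod 60)
# end in the same digit.
for n in [123, 456, 7890]:
    assert fib_iterate(n) % 10 == fib_iterate(n % 60) % 10
```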

The post I just mentioned links to a paper by Niederreiter that says the Fibonacci numbers are evenly distributed mod m if and only if m is a power of 5, in which case the cycle length is 4m.

The following code could be used as a sanity check on the result of fib_mpmath.

def mod_check(n, Fn):
    # Check Fn mod 5^k, using the fact that the Fibonacci
    # sequence mod 5^k has period 4 * 5^k.
    k = 3
    base = 5**k
    period = 4*base
    return Fn % base == fib_iterate(n % period) % base

With k = 3, we are checking the result of our calculation mod 125. It is unlikely that an incorrect result would be correct mod 125. It’s hard to say just how unlikely. Naively we could say there’s 1 chance in 125, but that ignores the fact that the errors are most likely in the least significant bits. The chances of an incorrect result being correct mod 125 would be much less than 1 in 125. For more assurance you could use a larger power of 5.
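To see the check in action, here’s a self-contained run that reuses the post’s fib_iterate and mod_check. The perturbation by 7 stands in for a hypothetical low-digit floating point error:

```python
def fib_iterate(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def mod_check(n, Fn):
    k = 3
    base = 5**k          # check mod 125
    period = 4*base      # Fibonacci period mod 5^k is 4 * 5^k
    return Fn % base == fib_iterate(n % period) % base

Fn = fib_iterate(10_000)
print(mod_check(10_000, Fn))      # True: correct value passes
print(mod_check(10_000, Fn + 7))  # False: low-digit error is caught
```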

3 thoughts on “Computing large Fibonacci numbers”

  1. Good post! Wonder how it compares to taking the matrix ((1 1) (1 0)) to the power of n via binary exponentiation (one of the resulting elements would be F_n)
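    For reference, the matrix approach can be sketched like this (my own minimal implementation, not the commenter’s code): the top-right entry of ((1 1) (1 0))^n is F_n, and binary exponentiation keeps the number of matrix products to O(log n).

    ```python
    def mat_mult(A, B):
        # 2x2 integer matrix product
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    def fib_matrix(n):
        # Raise Q = ((1 1) (1 0)) to the nth power by repeated squaring;
        # the top-right entry of Q^n is F(n).
        result = [[1, 0], [0, 1]]   # identity
        Q = [[1, 1], [1, 0]]
        while n > 0:
            if n % 2 == 1:
                result = mat_mult(result, Q)
            Q = mat_mult(Q, Q)
            n //= 2
        return result[0][1]
    ```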

  2. Binet’s method is not O(1), for three reasons. First, if you want to compute the actual value rather than just a fixed-precision approximation to it then the number of digits you need is O(n). Second, you therefore need the golden ratio to a number of digits proportional to n. Third, for fixed x computing x^n isn’t O(1).

    Working with integers you can do better than apply the recurrence directly. E.g., if you have F_n and F_n+1 then you can compute F_2n and F_2n+1 from those with a few arithmetic operations, so you can get to F_n using O(log n) arithmetic operations.
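    A minimal sketch of that doubling recurrence (my own code, based on the identities F_2n = F_n(2F_{n+1} − F_n) and F_{2n+1} = F_n² + F_{n+1}²):

    ```python
    def fib_doubling(n):
        # Returns (F(n), F(n+1)) using O(log n) arithmetic operations.
        if n == 0:
            return (0, 1)
        a, b = fib_doubling(n // 2)   # a = F(k), b = F(k+1), k = n // 2
        c = a * (2*b - a)             # F(2k)
        d = a*a + b*b                 # F(2k+1)
        if n % 2 == 0:
            return (c, d)
        return (d, c + d)
    ```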

    (That doesn’t mean we can compute F(n) in O(log n) _time_; the integers involved are O(n) digits long so additions and subtractions will take that long and multiplications will take longer, how much longer depending on how fancy the multiplication algorithms you use.)

    On the machine I’m on right now, the iterative approach computes F(10^3,10^4,10^5) in about 37, 865, 56600 us while the recursive approach takes about 3, 21, 773 us respectively. (Simple-minded implementations in Python.) Recursive with n=10^6 takes about 32ms which is much faster than the 2s you got from Binet, but I’m not sure how much to trust this because for the iterative approach where we are running essentially the same code my times are (1) much faster than yours but (2) not proportional to yours. (7x faster for 10^3, 4x faster for 10^4, 2x faster for 10^6. Doesn’t seem like what you’d expect from any combination of faster machine and different ways of computing the timings — I used the timeit module — so probably there’s some other thing going on, like a difference in integer arithmetic algorithms between Python versions.)

Comments are closed.