The previous post discussed two ways to compute the nth Fibonacci number. The first is to compute all the Fibonacci numbers up to the nth iteratively using the defining property of Fibonacci numbers
F_{n+2} = F_n + F_{n+1}
with extended precision integer arithmetic.
The second approach is to use Binet’s formula
F_n = round( φ^n / √5 )
where φ is the golden ratio.
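For example, φ^10 / √5 ≈ 55.0036, which rounds to F_10 = 55.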
It’s not clear which approach is more efficient. You could say that the iterative approach has run time O(n) while Binet’s formula is O(1). That doesn’t take into account how much work goes into each step, but it does suggest that eventually Binet wins.
The relative efficiency of each algorithm depends on how it is implemented. In this post I will compare using Python’s integer arithmetic and the mpmath library for floating point. Here’s my code for both methods.
from math import log10
import mpmath as mp

def fib_iterate(n):
    # Iterate the recurrence F_{n+2} = F_n + F_{n+1}.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def digits_needed(n):
    # Number of decimal digits in F_n, estimated from Binet's formula.
    phi = (1 + 5**0.5) / 2
    return int(n*log10(phi) - 0.5*log10(5)) + 1

def fib_mpmath(n, guard_digits=30):
    digits = digits_needed(n)
    # Set decimal digits of precision
    mp.mp.dps = digits + guard_digits
    sqrt5 = mp.sqrt(5)
    phi = (1 + sqrt5) / 2
    x = (phi ** n) / sqrt5
    return int(mp.nint(x))
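As a quick sanity check, the two functions should agree; for example, F_20 = 6765.

>>> fib_iterate(20), fib_mpmath(20)
(6765, 6765)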
Next, here’s some code to compare the run times.
import time

def compare(n):
    start = time.perf_counter()
    x = fib_iterate(n)
    elapsed = time.perf_counter() - start
    print(elapsed)

    start = time.perf_counter()
    y = fib_mpmath(n)
    elapsed = time.perf_counter() - start
    print(elapsed)

    if x != y:
        print("Methods produced different results.")
This code shows that the iterative approach is faster for n = 1,000 but Binet’s method is faster for n = 10,000.
>>> compare(1_000)
0.0002502090001144097
0.0009207079999669077
>>> compare(10_000)
0.0036547919999065925
0.002145750000181579
For larger n, the efficiency advantage of Binet’s formula becomes more apparent.
>>> compare(1_000_000)
11.169050417000108
2.0719056249999994
Guard digits and correctness
There is one unsettling problem with the function fib_mpmath above: how many guard digits do you need? To compute a number correctly to 100 significant figures, for example, requires more than 100 digits of working precision. How many more? It depends on the calculation. What about our calculation?
If we compute the 10,000th Fibonacci number using fib_mpmath(10_000, 2), i.e. with 2 guard digits, we get a result that is incorrect in the last digit. To compute the 1,000,000th Fibonacci number correctly, we need 5 guard digits.
We don’t need many guard digits, but we’re guessing at how many we need. How might we test whether we’ve guessed correctly? One way would be to compute the result using fib_iterate and compare results, but that defeats the purpose of using the more efficient fib_mpmath.
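As a one-time experiment, though, such a comparison is how the guard digit counts above could be checked. Here’s a sketch:

n = 10_000
exact = fib_iterate(n)
print(fib_mpmath(n, guard_digits=2) == exact)   # False: off in the last digit
print(fib_mpmath(n, guard_digits=30) == exact)  # True with the default guard digits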
If floating point calculations produce an incorrect result, the error is likely to be in the least significant digits. If we knew that the last digit was correct, that would give us more confidence that all the digits are correct. More generally, we could test the result mod m. I discussed Fibonacci numbers mod m in this post.
When m = 10, the last digits of the Fibonacci numbers have a cycle of 60. So the Fibonacci numbers with index n and with index n mod 60 should end in the same digit.
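As a sketch, such a check could look like this (the function name last_digit_check is just for illustration):

def last_digit_check(n, Fn):
    # Last digits of Fibonacci numbers repeat with period 60,
    # so F(n) and F(n mod 60) end in the same digit.
    return Fn % 10 == fib_iterate(n % 60) % 10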
The post I just mentioned links to a paper by Niederreiter that says the Fibonacci numbers are evenly distributed mod m if and only if m is a power of 5, in which case the cycle length is 4m.
The following code could be used as a sanity check on the result of fib_mpmath.
def mod_check(n, Fn):
    k = 3
    base = 5**k
    period = 4*base   # Fibonacci numbers mod 5^k repeat with period 4 * 5^k
    return Fn % base == fib_iterate(n % period) % base
With k = 3, we are checking the result of our calculation mod 125. It is unlikely that an incorrect result would be correct mod 125. It’s hard to say just how unlikely. Naively we could say there’s 1 chance in 125, but that ignores the fact that the errors are most likely in the least significant bits. The chances of an incorrect result being correct mod 125 would be much less than 1 in 125. For more assurance you could use a larger power of 5.
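For example, a variant of mod_check that takes the power of 5 as a parameter might look like the following sketch (the name mod_check_k is just for illustration).

def mod_check_k(n, Fn, k=3):
    # Check a candidate value Fn for the nth Fibonacci number mod 5^k,
    # using the fact that the Fibonacci numbers mod 5^k repeat with period 4 * 5^k.
    base = 5**k
    period = 4*base
    return Fn % base == fib_iterate(n % period) % base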