Statistical computing often involves working with ratios of factorials. These factorials are often too big to fit in a floating point number, and so we work with logarithms. So if we need to compute log(a! / b!), we call software that calculates log(a!) and log(b!) directly without computing a! or b! first. More on that here. But sometimes this doesn’t work.
Suppose a = 10^40 and b = a – 10^10. Our first problem is that we cannot even compute b directly. Since a – b is 30 orders of magnitude smaller than a, we’d need about 100 bits of precision to begin to tell a and b apart, about twice as much as a standard floating point number. (Why 100? That’s the log base 2 of 10^30. And how much precision does a floating point number have? See here.)
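To see the cancellation concretely, here is a quick check in Python using the a and a − 10^10 from the example above:

```python
# 10^10 is thirty orders of magnitude smaller than 10^40, far below the
# spacing between adjacent doubles near 10^40, so subtracting it has no
# effect at all in double precision.
a = 1e40
b = a - 1e10
print(b == a)  # prints True: the subtraction is lost entirely
```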
Our next problem is that even if we could accurately compute b, the log gamma function is going to be approximately the same for a and b, and so the difference will be highly inaccurate.
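A sketch of that failure using the standard library's log gamma function: the naive difference comes out exactly zero, because a and b have already collapsed to the same floating point number before lgamma ever sees them.

```python
from math import lgamma  # log of the gamma function

a = 1e40
b = a - 1e10                        # rounds back to a in double precision
diff = lgamma(a + 1) - lgamma(b + 1)
print(diff)                         # prints 0.0: the true difference is lost
```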
So what do we do? We could use some sort of extended precision package, but that is not necessary. There’s an elegant solution using ordinary precision.
Let f(x) = log(Γ(x)). The mean value theorem says that
f(a+1) – f(b+1) = (a – b) f′(c)
for some c between b+1 and a+1. We don’t need to compute b, only a – b, which we know is 10^10. The derivative of the log of the gamma function is called the digamma function, written ψ(x). So
log(a! / b!) = 10^10 ψ(c).
But what is c? The mean value theorem only says that the equation above is true for some c. What c do we use? It doesn’t matter. Since a and b are so relatively close, the digamma function will take on essentially the same value for any value between a+1 and b+1. Therefore
log(a! / b!) ≈ 10^10 ψ(a).
We could compute this in two lines of Python:
from scipy.special import digamma
print(1e10 * digamma(1e40))
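As a sanity check (my addition, not part of the argument above), we can compare the approximation against a direct computation at a scale where log gamma still resolves the difference, say a = 10^6 and a − b = 10:

```python
from math import lgamma
from scipy.special import digamma

a, gap = 1.0e6, 10.0
b = a - gap                              # representable exactly at this scale
exact = lgamma(a + 1) - lgamma(b + 1)    # log(a!/b!) computed directly
approx = gap * digamma(a)                # mean value theorem approximation
print(exact, approx)                     # the two agree to several digits
```

The agreement only improves as a grows relative to a − b, since ψ varies more and more slowly.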