Series for the reciprocal of the gamma function

Stirling’s asymptotic series for the gamma function is

\Gamma(z) \sim (2\pi)^{1/2} z^{z - 1/2} e^{-z} \sum_{n=0}^\infty (-1)^n \frac{\gamma_n}{z^n}

Now suppose you’d like to find an asymptotic series for the function 1/Γ(z).

Since the series for Γ has the form f(z) times an infinite sum, it would make sense to look for a series for 1/Γ of the form 1/f(z) times an infinite sum. The hard part would be finding the new infinite sum. In general the series for a function and the series for its reciprocal look nothing alike.

Here’s where we have a pleasant surprise: the coefficients in the series for 1/Γ are exactly the same as the coefficients in the series for Γ, except the signs don’t alternate.

\frac{1}{\Gamma(z)} \sim (2\pi)^{-1/2} z^{-z +1/2} e^{z} \sum_{n=0}^\infty \frac{\gamma_n}{z^n}

Illustration

The following is not a proof, but it shows that the result is at least plausible.

Define Γ* to be Γ divided by the term in front of the infinite series:

\Gamma^*(z) = (2\pi)^{-1/2} z^{-z +1/2} e^{z} \Gamma(z)

Then the discussion above claims that Γ* and 1/Γ* have the same asymptotic series, except with alternating signs on the coefficients. So if we multiply the first few terms of the series for Γ* and 1/Γ* we expect to get something approximately equal to 1.

Now

\Gamma^*(z) = 1 + \frac{1}{12z} + \frac{1}{288z^2} - \frac{139}{51840z^3} - \cdots

and we claim

\frac{1}{\Gamma^*(z)} = 1 - \frac{1}{12z} + \frac{1}{288z^2} + \frac{139}{51840z^3} - \cdots

So if we multiply the terms up to third order we expect to get 1 and some terms involving powers of z in the denominator with exponent greater than 3. In fact the product equals

1 + \frac{571}{1244160 z^4} -\frac{19321}{2687385600 z^6}

which aligns with our expectations.
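
The claim is easy to sanity-check with a computer algebra system. Here’s a short SymPy sketch of mine, not part of the original argument, that multiplies the two truncated series:

from sympy import symbols, Rational, expand

z = symbols('z')
# truncated series for Gamma* and its claimed reciprocal
gamma_star = 1 + Rational(1, 12)/z + Rational(1, 288)/z**2 - Rational(139, 51840)/z**3
recip      = 1 - Rational(1, 12)/z + Rational(1, 288)/z**2 + Rational(139, 51840)/z**3

# everything through 1/z^3 cancels: 1 + 571/(1244160*z**4) - 19321/(2687385600*z**6)
print(expand(gamma_star * recip))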

Simple error function approximation

I recently ran across the fact that

\int_0^x \exp(-t^2)\, dt \approx \sin(\sin(x))

is a remarkably good approximation for −1 ≤ x ≤ 1.

Since the integral above defines the error function erf(x), modulo a constant, this says we have a good approximation for the error function

\text{erf}(x) \approx \frac{2}{\sqrt{\pi}} \sin( \sin(x) )

again provided −1 ≤ x ≤ 1.

The error function is closely related to the Gaussian integral, i.e. the normal probability distribution CDF Φ. The relation between erf and Φ is simple but error-prone. I wrote up a page of notes for myself a few years ago so I wouldn’t make a mistake again moving between these functions and their inverses.

Update: This post makes the connection to probability explicit.

You can derive the approximation by writing out the power series for exp(t), substituting −t² for t, and integrating term-by-term from 0 to x. You’ll see that the result matches the power series for sin(sin(x)) through the x⁵ term, so the error is on the order of x⁷. Here’s a plot of the error.

The error is extremely small near 0, which is what you’d expect since the error is on the order of x⁷.
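
Here’s a quick numerical check of the approximation, a sketch of mine rather than anything from the original post, using SciPy’s erf:

import numpy as np
from scipy.special import erf

x = np.linspace(-1, 1, 1001)
approx = 2/np.sqrt(np.pi) * np.sin(np.sin(x))

# worst error on [-1, 1]; it occurs at the endpoints and is on the order of 1e-3
print(np.max(np.abs(erf(x) - approx)))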

Chebyshev polynomials as distorted cosines

Forman Acton’s book Numerical Methods that Work describes Chebyshev polynomials as

cosine curves with a somewhat disturbed horizontal scale, but the vertical scale has not been touched.

The relation between Chebyshev polynomials and cosines is

Tn(cos θ) = cos(nθ).

Some sources take this as the definition of Chebyshev polynomials. Other sources define the polynomials differently and prove this equation as a theorem.

It follows that if we let x = cos θ then

Tn(x) = cos(n arccos x).

Now sin x = cos(π/2 − x) and for small x, sin x ≈ x. This means

arccos(x) ≈ π/2 − x

for x near 0, and so we should expect the approximation

Tn(x) ≈ cos(n(π/2 − x))

to be accurate near the middle of the interval [−1, 1] though not at the ends. A couple plots show that this is the case.
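
If you’d rather see numbers than plots, here is a small comparison of mine, with the arbitrary choice n = 6:

import numpy as np
from scipy.special import eval_chebyt

n = 6
for x in np.linspace(-1, 1, 9):
    exact = eval_chebyt(n, x)         # T_n(x)
    approx = np.cos(n*(np.pi/2 - x))  # distorted cosine approximation
    print(f"{x:+.2f}  {exact:+.4f}  {approx:+.4f}")

# the two columns agree closely near x = 0 and drift apart toward x = -1 and x = 1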

Interpolating the gamma function

Suppose you wanted to approximate Γ(10.3). You know it’s somewhere between Γ(10) = 9! and Γ(11) = 10!, and linear interpolation would give you

Γ(10.3) ≈ 0.7 × 9! + 0.3 × 10! = 1342656.

But the exact value is closer to 716430.69, and so our estimate is 87% too high. Not a very good approximation.

Now let’s try again, applying linear interpolation to the log of the gamma function. Our approximation is

log Γ(10.3) ≈ 0.7 × log 9! + 0.3 × log 10! = 13.4926

while the actual value is 13.4820, an error of about 0.08%. If we take exponentials to get an approximation of Γ(10.3), not log Γ(10.3), the error is larger, about 1%, but still much better than 87% error.
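
The arithmetic above is easy to reproduce. Here’s a SciPy sketch of mine:

import numpy as np
from scipy.special import gamma, gammaln

exact = gamma(10.3)
lin = 0.7*gamma(10.0) + 0.3*gamma(11.0)                 # interpolate gamma directly
loglin = np.exp(0.7*gammaln(10.0) + 0.3*gammaln(11.0))  # interpolate log gamma

print(lin/exact - 1)     # about 0.87, i.e. 87% too high
print(loglin/exact - 1)  # about 0.011, i.e. roughly 1% too high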

The gamma function grows very quickly, and so the log gamma function is usually easier to work with.

As a bonus, the Bohr–Mollerup theorem says that log gamma is a convex function. This tells us that not only does linear interpolation give an approximation, it gives us an upper bound.

The Bohr–Mollerup theorem essentially says that the gamma function is the only function that extends factorial from a function on the integers to a log-convex function on the real numbers. This isn’t quite true since it’s actually Γ(x + 1) that extends factorial. Showing the gamma function is unique is the hard part. In the preceding paragraph we used the easy direction of the theorem, saying that gamma is log-convex.

Evaluating a class of infinite sums in closed form

The other day I ran across the surprising identity

\sum_{n=1}^\infty \frac{n^3}{2^n} = 26

and wondered how many sums of this form can be evaluated in closed form like this. Quite a few, it turns out.

Sums of the form

\sum_{n=1}^\infty \frac{n^k}{c^n}

evaluate to a rational number when k is a non-negative integer and c is a rational number with |c| > 1. Furthermore, there is an algorithm for finding the value of the sum.

The sums can be evaluated using the polylogarithm function Lis(z) defined as

\text{Li}_s(z) = \sum_{n=1}^\infty \frac{z^n}{n^s}

using the identity

\sum_{n=1}^\infty \frac{n^k}{c^n} = \text{Li}_{-k}\left(\frac{1}{c}\right)

We then need a way to evaluate Lis(z). This cannot be done in closed form in general, but it can be when s is a negative integer, as above. To evaluate Li−k(z) we need to know two things. First,

\text{Li}_1(z) = -\log(1-z)

and second,

\text{Li}_{s-1}(z) = z \frac{d}{dz} \text{Li}_s(z)

Now Li0(z) is a rational function of z, namely z/(1 − z). The derivative of a rational function is a rational function, and multiplying a rational function of z by z produces another rational function, so Lis(z) is a rational function of z whenever s is a non-positive integer.

Assuming the results cited above, we can prove the identity

\sum_{n=1}^\infty \frac{n^3}{2^n} = 26

stated at the top of the post. The sum equals Li−3(1/2), and

\text{Li}_{-3}(z) = \left(z \frac{d}{dz}\right)^3 \frac{z}{1-z} = \frac{z(1 + 4z + z^2)}{(1-z)^4}

The result comes from plugging in z = 1/2 and getting out 26.
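
The computation is easy to mechanize. Here’s a SymPy sketch of mine that applies z d/dz three times to Li0(z) = z/(1 − z) and plugs in z = 1/2:

from sympy import symbols, factor, Rational

z = symbols('z')
li = z/(1 - z)      # Li_0(z)
for _ in range(3):  # each application of z d/dz turns Li_s into Li_{s-1}
    li = z * li.diff(z)

print(factor(li))                  # z*(z**2 + 4*z + 1)/(z - 1)**4
print(li.subs(z, Rational(1, 2)))  # 26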

When k and c are positive integers, the sum

\sum_{n=1}^\infty \frac{n^k}{c^n}

is not necessarily an integer, as it is when k = 3 and c = 2, but it is always rational. It looks like the sum is an integer if c = 2; I verified that the sum is an integer for c = 2 and k = 1 through 10 using the PolyLog function in Mathematica.

Update: Here is a proof that the sum is an integer when c = 2. From a comment by Theophylline on Substack.

The sum is occasionally an integer for larger values of c. For example,

\sum_{n=1}^\infty \frac{n^4}{3^n} = 15

and

\sum_{n=1}^\infty \frac{n^8}{3^n} = 17295
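
Both values are easy to confirm numerically. Here’s an mpmath sketch of mine, summing directly and via the polylogarithm:

from mpmath import nsum, polylog, inf

print(nsum(lambda n: n**4 / 3**n, [1, inf]))  # 15.0
print(nsum(lambda n: n**8 / 3**n, [1, inf]))  # 17295.0
print(polylog(-8, 1/3.0))                     # 17295.0 again, via Li_{-8}(1/3)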

q-analog of rising powers

The previous post looked at the probability that a random n by n matrix over a finite field of order q is invertible. This works out to be

\prod_{i=1}^n \left(1 - \frac{1}{q^i}\right)

This function of q and n comes up in other contexts as well and has a name that we will get to shortly.

Pochhammer symbols

Leo August Pochhammer (1841–1920) defined the kth rising power of x by

(x)_k = x(x + 1)(x + 2)\cdots(x + k - 1)

Rising and falling powers come up naturally in combinatorics, finite difference analogs of calculus, and in the definition of hypergeometric functions.

q-Pochhammer symbols

The q-analog of the Pochhammer symbol is defined as

(a;q)_n = \prod_{k=0}^{n-1} (1-aq^k)=(1-a)(1-aq)(1-aq^2)\cdots(1-aq^{n-1})

Like the Pochhammer symbol, the q-Pochhammer symbol also comes up in combinatorics.

In general, q-analogs of various concepts are generalizations that reduce to the original concept in some sort of limit as q goes to 1. The relationship between the q-Pochhammer symbol and the Pochhammer symbol is

\lim_{q\to 1} \frac{(q^x;q)_n}{(1-q)^n} = (x)_n

For a simpler introduction to q-analogs, see this post on q-factorial and q-binomial coefficients.
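
You can watch this limit emerge numerically. The following sketch of mine uses mpmath’s qp (the q-Pochhammer symbol) and rf (the rising factorial), with arbitrary values x = 2.5 and n = 3:

from mpmath import mp, mpf, qp, rf

mp.dps = 25
x, n = 2.5, 3
q = 1 - mpf(10)**-8  # q close to 1

print(qp(q**x, q, n) / (1 - q)**n)  # approaches (x)_n as q -> 1
print(rf(x, n))                     # (2.5)_3 = 2.5 * 3.5 * 4.5 = 39.375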

Back to our probability problem

The motivation for this post was to give a name to the function that gives the probability that a random n by n matrix over a finite field of order q is invertible. In the notation above, this function is (1/q; 1/q)n.

There’s a confusing notational coincidence here. The number of elements in a finite field is usually denoted by q. The q in q-analogs such as the q-Pochhammer symbol has absolute value less than 1. It’s a coincidence that they both use the letter q, and the q in our application of q-Pochhammer symbols is the reciprocal of the q representing the order of a finite field.

I mentioned in the previous post that the probability of the matrix being invertible is a decreasing function of n. This probability decreases to a positive limit, varying with the value of q. This limit is (1/q; 1/q)∞. Here the subscript ∞ denotes that we take the limit in (1/q; 1/q)n as n goes to infinity. There’s no problem here because the infinite product converges.

Mathematica and plotting

The q-Pochhammer symbol (a; q)n is implemented in Mathematica as QPochhammer[a, q, n] and the special case (q; q)∞ is implemented as QPochhammer[q]. We can use the latter to make the following plot.

Plot[QPochhammer[q], {q, -1, 1}]

Recall that the q in our motivating application is the reciprocal of the q in the q-Pochhammer symbol. This says that for large fields, the limiting probability that an n by n matrix is invertible as n increases is near 1, but for smaller fields the limiting probability is also smaller. For q = 2, the probability is 0.288788.

Plot[QPochhammer[1/q], {q, 2, 100}, PlotRange -> All]
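
The same number can be checked in Python with mpmath’s qp function, a small cross-check of mine on the Mathematica output:

from mpmath import qp

# (1/q; 1/q)_infinity for q = 2: the limiting probability that a random
# square matrix over the field with two elements is invertible
print(qp(0.5))  # 0.288788...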

The Clausen function

I ran across the Clausen function the other day, and when I saw a plot of the function my first thought was that it looks sorta like a sawtooth wave.

Plot of Clausen function Cl_2

I wondered whether it also sounds like a sawtooth wave. More on that shortly.

The Clausen function can be defined in terms of its Fourier series:

\text{Cl}_2(x) = \sum_{k=1}^\infty \frac{\sin(kx)}{k^2}

The function commonly known as the Clausen function is one of a family of functions, hence the subscript 2. The Clausen functions for positive integers n are defined by replacing 2 with n on both sides of the defining equation.

The Fourier coefficients decay quadratically, as do those of a triangle wave, as discussed here. This implies the function Cl2(x) cannot have a continuous derivative. In fact, the derivative of Cl2(x) is infinite at 0. This follows quickly from the integral representation of the function.

\text{Cl}_2(x)=-\int_0^x\log \left|2\sin\frac{t}{2} \right|\, dt

The fundamental theorem of calculus shows that the derivative

\text{Cl}'_2(x)=-\log \left|2\sin\frac{x}{2} \right|

blows up at 0.
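
Both representations are available numerically. Here’s an mpmath sketch of mine, using its clsin function, for which clsin(2, x) is exactly Cl2(x):

from mpmath import clsin, quad, log, sin, fabs

x = 1.0
series = clsin(2, x)  # Cl_2(x) summed as a Fourier series
integral = -quad(lambda t: log(fabs(2*sin(t/2))), [0, x])

print(series, integral)  # the two representations agree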

How does it sound?

What does it sound like if we create music with Clausen waves rather than sine waves? I initially thought it sounded harsh, but that turned out to be an artifact of how I’d made the audio file. A reader emailed me a better recording using the first few notes of a famous hymn. It’s a much more pleasant sound than I had expected.

[Audio: ein_feste_burg.wav]

Computing Γ(z) for complex z with tables

In the previous post I mentioned that the general strategy for computing a mathematical function using tables is to first reduce the function argument to be within the range of the tabulated values, and then to use interpolation to compute the function at values that are not directly tabulated.

The second step is always the same, applying Lagrange interpolation with enough points to get the accuracy you need. But the first step, range reduction, depends on the function being evaluated. And as the previous post concluded, evaluating more advanced functions generally requires more advanced range reduction.

Real argument

For the gamma function, the identity

\Gamma(x + 1) = x\, \Gamma(x)

can be used to reduce the problem of computing Γ(x) for any real x to the problem of computing Γ(x) for x in the interval [1, 2]. If x is large, the identity will have to be applied many times and so this would be a lot of work. However, the larger x is, the more accurate Stirling’s approximation becomes.
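
In code, the range reduction might look like the sketch below. This is my own illustration, not a historical routine, and it stands in math.gamma for what would be table lookup plus interpolation on [1, 2]:

from math import gamma

def gamma_reduced(x, core=gamma):
    """Evaluate Gamma(x) for real x > 0 by reducing the argument to [1, 2]."""
    factor = 1.0
    while x > 2:
        x -= 1
        factor *= x  # Gamma(x+1) = x Gamma(x)
    while x < 1:
        factor /= x  # Gamma(x) = Gamma(x+1)/x
        x += 1
    return factor * core(x)

print(gamma_reduced(7.5), gamma(7.5))  # both 1871.25...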

Complex argument

Computing Γ(x + iy) is more complex, pardon the pun. We can still use the identity above to reduce the real part x of the argument to lie in the interval [1, 2], but what about the imaginary part y?

Abramowitz and Stegun, Table 6.7, tabulates the principal branch of log Γ(x + iy) for x from 1 to 2 and for y from 0 to 10, both in increments of 0.1. Generally the logarithm of the gamma function is more useful in computation than the gamma function itself. It is also easier to interpolate, which I imagine is the main reason A&S tabulate it rather than the gamma function per se. A note below Table 6.7 says that linear interpolation gives about 3 significant figures, and eight-point interpolation gives about 8 figures.

By the Schwarz reflection principle,

\Gamma(\bar{z}) = \overline{\Gamma(z)}

and with this we can extend our range on y to [−10, 10].

Large imaginary part

What about larger y? We have two options: the duplication formula and Stirling’s approximation.

The duplication formula

\Gamma(2z) = \frac{1}{\sqrt{2\pi}}2^{2z - \frac{1}{2}} \Gamma(z) \Gamma\left(z + \tfrac{1}{2} \right )

lets us compute Γ(2z) if we can compute Γ(z) and Γ(z + ½).
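
Here’s a quick mpmath check of the duplication formula at a point with a large imaginary part; the point z is an arbitrary choice of mine:

from mpmath import mp, mpc, gamma, sqrt, pi

mp.dps = 20
z = mpc(1.5, 8.0)
lhs = gamma(2*z)
rhs = 2**(2*z - 0.5) / sqrt(2*pi) * gamma(z) * gamma(z + 0.5)

print(lhs)
print(rhs)  # agrees with lhs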

Stirling’s approximation for Γ(z) is accurate when |z| is large, and |x + iy| is large when |y| is large.

Solutions to tan(x) = x

I read something recently that said in passing that the solutions to the equation tan x = x are the zeros of the Bessel function J3/2. That brought two questions to mind. First, where have I seen the equation tan x = x before? And second, why should its solutions be the zeros of a Bessel function?

The answer to the first question is that I wrote about the local maxima of the sinc function three years ago. That post shows that the derivative of the sinc function sin(x)/x is zero if and only if x is a fixed point of the tangent function.

As for why that should be connected to the zeros of a Bessel function, that one’s pretty easy. In general, Bessel functions cannot be expressed in terms of elementary functions. But the Bessel functions whose order is an integer plus ½ can.

For integer n,

J_{n+{\frac{1}{2}}}(x)= (-1)^n \left(\frac{2}{\pi}\right)^{\frac{1}{2}} x^{n+{\frac{1}{2}}} \left(\frac{1}{x}\frac{d}{dx}\right)^n\left(\frac{\sin x}{x}\right)

So when n = 1, we’ve got the derivative of sinc right there in the definition.
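
A numerical check, my own sketch using SciPy, confirms the connection: each fixed point of the tangent function is a zero of J3/2.

import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

# one positive solution of tan x = x per branch, just below (k + 1/2) pi
for k in range(1, 4):
    root = brentq(lambda x: np.tan(x) - x, k*np.pi + 0.1, (k + 0.5)*np.pi - 1e-9)
    print(root, jv(1.5, root))  # J_{3/2} is numerically zero at each root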

Hypergeometric function of a large negative argument

It’s occasionally necessary to evaluate a hypergeometric function at a large negative argument. I was working on a project today that involved evaluating F(a, b; c; z) where z is a large negative number.

The hypergeometric function F(a, b; c; z) is defined by a power series in z whose coefficients are functions of a, b, and c. However, this power series has radius of convergence 1. This means you can’t use the series to evaluate F(a, b; c; z) for z < −1.

It’s important to keep in mind the difference between a function and its power series representation. The former may exist where the latter does not. A simple example is the function f(z) = 1/(1 − z). The power series for this function has radius of convergence 1, but the function is defined everywhere except at z = 1.

Although the series defining F(a, b; c; z) is confined to the unit disk, the function itself is not. It can be extended analytically beyond the unit disk, usually with a branch cut along the real axis for z ≥ 1.

It’s good to know that our function can be evaluated for large negative z, but how do we evaluate it?

Linear transformation formulas

Hypergeometric functions satisfy a huge number of identities, the simplest of which are known as the linear transformation formulas even though they are not linear transformations of z. They involve bilinear transformations of z, a.k.a. fractional linear transformations, a.k.a. Möbius transformations. [1]

One such transformation is the following, found in A&S 15.3.4 [2].

F(a, b; c; z) = (1-z)^{-a} F\left(a, c-b; c; \frac{z}{z-1} \right)

If z < 0, then 0 < z/(z − 1) < 1, which is inside the radius of convergence. However, as z goes off to −∞, z/(z − 1) approaches 1, and the convergence of the power series will be slow.

A more complicated, but more efficient, formula is A&S 15.3.7, a linear transformation formula that relates F at z to two other hypergeometric functions evaluated at 1/z. Now when z is large, 1/z is small, and these series will converge quickly.

\begin{align*}
F(a, b; c; z) &= \frac{\Gamma(c) \Gamma(b-a)}{\Gamma(b) \Gamma(c-a)} (-z)^{-a} F\left(a, 1-c+a; 1-b+a; \frac{1}{z}\right) \\
&{}+ \frac{\Gamma(c) \Gamma(a-b)}{\Gamma(a) \Gamma(c-b)} (-z)^{-b} F\left(b, 1-c+b; 1-a+b; \frac{1}{z}\right)
\end{align*}
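
Here’s a sketch of mine implementing A&S 15.3.7 with mpmath. The inner hyp2f1 calls see the small argument 1/z, so their defining series converge quickly. It assumes a − b is not an integer, where the formula degenerates and limits must be taken.

from mpmath import mp, gamma, hyp2f1

mp.dps = 25

def hyp2f1_neg(a, b, c, z):
    # A&S 15.3.7, for z large and negative, with a - b not an integer
    t1 = (gamma(c)*gamma(b - a)/(gamma(b)*gamma(c - a))
          * (-z)**(-a) * hyp2f1(a, 1 - c + a, 1 - b + a, 1/z))
    t2 = (gamma(c)*gamma(a - b)/(gamma(a)*gamma(c - b))
          * (-z)**(-b) * hyp2f1(b, 1 - c + b, 1 - a + b, 1/z))
    return t1 + t2

z = -1000.0
print(hyp2f1_neg(0.3, 0.7, 1.2, z))
print(hyp2f1(0.3, 0.7, 1.2, z))  # mpmath's own analytic continuation agrees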

[1] It turns out these transformations are linear, but not as functions of a complex argument. They’re linear as transformations on a projective space. More on that here.

[2] A&S refers to the venerable Handbook of Mathematical Functions by Abramowitz and Stegun.