Planning a world record calculation

Before carrying out a big calculation, you want to have an idea how long various approaches would take. This post will illustrate this by planning an attempt to calculate Apéry’s constant

\zeta(3) = \sum_{n=1}^\infty \frac{1}{n^3}

to enormous precision. This constant has been computed to many decimal places, in part because it’s an open question whether the number has a closed form. Maybe you’d like to set a new record for computing it.

This post was motivated by a question someone asked me in response to this post: what is the advantage of having an approximation for binomial coefficients when we can just calculate them exactly? This post will show these approximations in action.

Apéry’s series

First of all, calculating ζ(3) directly from the definition above is not the way to go. The series converges too slowly for that. If you compute the sum of the first N terms, you get about 2 log10 N decimal places. This post explains why. We can do much better, with the number of decimal places achieved being proportional to N rather than log N.
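To make that concrete, here is a minimal sketch in Python using the mpmath library (my choice of tool; nothing in this post depends on it). It sums the first N terms of the defining series and reports how many decimal places are correct; the count grows like 2 log10 N.

    from mpmath import mp, mpf, zeta

    mp.dps = 30  # plenty of working precision for this experiment
    exact = zeta(3)
    for N in (10, 100, 1000, 10000):
        partial = sum(mpf(1) / mpf(n)**3 for n in range(1, N + 1))
        digits = -mp.log(exact - partial, 10)  # correct decimal places
        print(N, mp.nstr(digits, 4))           # grows like 2 log10 N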

A couple days ago I wrote about a sequence of rapidly converging series for ζ(3), starting with Apéry’s series.

\zeta(3) = \frac{5}{2} \sum_{n=1}^\infty (-1)^{n-1} \frac{1}{\dbinom{2n}{n} n^3}

Suppose you want to compute ζ(3) to a million decimal places using this series. How would you do this?

Since this is an alternating series, the truncation error from summing only the first N terms is bounded by the (N+1)st term. So you could sum terms until you encounter a term less than

10^{-10^6}.
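Here is what that stopping rule looks like in code, scaled down to 100 digits rather than a million. This is a sketch using mpmath again; the binomial coefficient and the 5/2 factor come straight from Apéry’s series above.

    from mpmath import mp, mpf, binomial, zeta

    mp.dps = 110                # 100 target digits plus guard digits
    tol = mpf(10) ** -100       # stand-in for 10^{-10^6}
    total, n = mpf(0), 1
    while True:
        term = mpf(1) / (binomial(2*n, n) * n**3)
        if term < tol:          # alternating series: remainder < next term
            break
        total += term if n % 2 else -term
        n += 1
    estimate = mpf(5) / 2 * total
    print(n - 1, "terms; error", mp.nstr(abs(estimate - zeta(3)), 3))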

Great, but how long might that take?

Without some estimate of the size of binomial coefficients, there’s no way to know how long this would take short of actually computing a lot of binomial coefficients. But with a little theory, you can do a back-of-the-envelope estimate.

Estimating how many terms to use

For large n, the binomial coefficient in the denominator of the nth term is approximately 4^n / √(πn). So when you include the n³ factor, you can see that the denominator of the nth term in the series is greater than 4^n = 2^{2n}. In other words, each term gives you at least 2 bits of precision.
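You can sanity-check that approximation numerically. The sketch below compares the central binomial coefficient to 4^n / √(πn); the ratio tends to 1 as n grows, with relative error roughly 1/(8n).

    from mpmath import mp, mpf, binomial, pi, sqrt

    mp.dps = 30
    for n in (10, 100, 1000):
        exact = binomial(2*n, n)
        approx = mpf(4)**n / sqrt(pi * n)
        print(n, mp.nstr(approx / exact, 8))  # ratio -> 1 as n grows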

If you want 1,000,000 digits, that’s approximately 3,000,000 bits. So to get a million digits of ζ(3) you’d need to sum about 1,500,000 terms of Apéry’s series. By comparison, summing 1,500,000 terms in the definition of ζ(3) would give you between 12 and 13 digits of precision.
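Redoing that arithmetic with the exact conversion factor log2 10 ≈ 3.32, rather than rounding it down to 3, gives a somewhat larger and more conservative count; either way the order of magnitude is the same.

    import math

    digits = 1_000_000
    bits = digits * math.log2(10)     # about 3.32 million bits
    terms = bits / 2                  # at least 2 bits per term
    print(round(bits), round(terms))  # 3321928 1660964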

More efficient series

Apéry’s series is only the first of a sequence of rapidly converging series for ζ(3). The series get more complicated but more efficient as a parameter s increases. Is it worthwhile to use a larger value of s? If you were planning to devote, say, a month of computing resources to try to set a record, it would be a good idea to spend a few hours estimating whether another approach would get more work done in that month.

By combining information here and here you could find out that the series corresponding to s = 2 has terms whose denominator is on the order of 3^{3n}. To estimate how many terms you’d need for a million digits of precision, you’d need to solve the equation

3^{3n} = 10^{10^6},

and by taking logs you find that n = 10^6 / (3 log10 3) ≈ 698,634. In other words, less than half as many terms as using Apéry’s series. That was a win, so maybe you go on to estimate whether you should go up to s = 3. The key information you need in these calculations is the estimate on binomial coefficients given here.
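Here is that comparison as a back-of-the-envelope calculation in code, using exact digits-per-term rates for both series rather than the rounded bit counts above.

    import math

    digits = 10**6
    apery = digits / math.log10(4)      # denominators ~ 4^n
    s2 = digits / (3 * math.log10(3))   # denominators ~ 3^{3n}
    print(round(apery), round(s2))      # 1660964 698634
    print(round(s2 / apery, 2))         # 0.42, less than half the terms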

Incidentally, computing ζ(3) to a million decimal places would have given you the world record in 1997. At the moment, the record is on the order of a trillion digits.

Update: The next post shows how to actually carry out (on a smaller scale) the calculations described here using the Unix utility bc.

3 thoughts on “Planning a world record calculation”

  1. Over dinner a few short decades ago, Bill Gosper was excitedly describing a general method for accelerating series convergence. I didn’t follow his spoken-word math then, and I only skimmed his ink-on-paper now. But you might be interested in his 1974 writeup at https://dspace.mit.edu/handle/1721.1/6088. He takes on zeta(3) around page 40, concluding at around 12 bits per term. Not fast enough for Wedeniwski or Seungmin Kim’s 1.2T digits, though.

    Gosper once described that era of his research as computer-assisted “Strip Mining in the Abandoned Orefields of Nineteenth Century Mathematics” (https://www.tweedledum.com/rwg/Gosper%201990%20Strip%20Mining.pdf) where he touches on zeta(3) once again.
