In 1977, Isaac Asimov [1] asked how many terms of the slowly converging series

π = 4 – 4/3 + 4/5 – 4/7 + 4/9 – …

would you have to sum before doing better than the approximation

π ≈ 355/113.

A couple years later Richard Johnsonbaugh [2] answered Asimov’s question in the course of an article on techniques for computing the sum of series. Johnsonbaugh said you would need at least *N* = 3,748,630 terms.

Johnsonbaugh’s answer is based on exact calculations. I wondered how well you’d do with *N* terms using ordinary floating point arithmetic. Would there be so much rounding error that the result is terrible?

I wrote the most direct implementation in Python, with no tricks to improve the accuracy.

```python
from math import pi

s = 0
N = 3748630
for n in range(1, N + 1):
    s += (-1)**(n + 1) * 4/(2*n - 1)
```

I intended to follow this up by showing that you could do better by summing all the positive and negative terms separately, then doing one subtraction at the end. But the naive version actually does quite well. It’s essentially as accurate as 355/113, with both approximations having an error of 2.66764 × 10^{-7}.
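To see how close the two errors are, here is a sketch (assuming IEEE double precision) that sums the series naively and compares its error to the error in 355/113:

```python
from math import pi

N = 3748630
s = 0.0
for n in range(1, N + 1):
    s += (-1)**(n + 1) * 4/(2*n - 1)  # naive left-to-right summation

print(abs(s - pi))        # error of the partial sum
print(abs(355/113 - pi))  # error of 355/113
```

Both errors come out around 2.67 × 10⁻⁷, so the accumulated rounding error of nearly four million floating point additions is lost in the noise here.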

## Extended precision with bc

Next, I translated my program to `bc` [3] so I could control the precision. `bc` lets you specify your desired precision with its `scale` parameter.

```bc
scale = 20
pi = 4*a(1)
s = 0
m = 3748630
for (n = 1; n <= m; n++) {
    s += (-1)^(n+1) * 4/(2*n - 1)
}
```

Floating point precision is between 15 and 16 significant decimal figures. I added more precision by setting `scale` to 20, i.e. carrying out calculations to 20 decimal places, and summed the series again.

The absolute error in the series was less than the error in 355/113 in the 14th decimal place. When I used one less term in the series, its error was larger than the error in 355/113 in the 14th decimal place. In other words, the calculations suggest Johnsonbaugh found exactly the minimum number of terms needed.
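The same check can be reproduced without `bc` using Python's standard `decimal` module. This is a sketch of mine, not code from the original calculation; note that `decimal`'s `prec` counts significant digits, whereas `bc`'s `scale` counts digits after the decimal point.

```python
from decimal import Decimal, getcontext

getcontext().prec = 25  # significant digits; roughly comparable to scale = 20 here

PI = Decimal("3.141592653589793238462643383")  # pi, correct to 27 decimal places
N = 3748630

s = Decimal(0)
sign = 4
for n in range(1, N + 1):
    s += Decimal(sign) / (2*n - 1)
    sign = -sign

ref = abs(Decimal(355) / 113 - PI)              # error of 355/113
err_N = abs(s - PI)                             # error with N terms
err_Nm1 = abs(s + Decimal(4) / (2*N - 1) - PI)  # drop the final (negative) term
```

With these values `err_N` comes out just below `ref` and `err_Nm1` just above it, the three agreeing until around the 14th decimal place.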

I doubt Johnsonbaugh ever verified his result computationally. He doesn’t mention computer calculations in his paper [4], and it would have been difficult in 1979 to have access to the necessary hardware and software.

If he had access to an Apple II at the time, it would have run at 1 MHz. My calculation took around a minute to run on a 2 GHz laptop, so I’m guessing the same calculation would have taken a day or two on an Apple II. This assumes he could find extended precision software like `bc` on an Apple II, which is doubtful.

The `bc` programming language had been written four years earlier, so someone could have run a program like the one above on a Unix machine somewhere. However, such machines were hard to find, and their owners would have been reluctant to give up a couple days of compute time for a guest to run a frivolous calculation.


[1] Isaac Asimov, Asimov on Numbers, 1977.

[2] Richard Johnsonbaugh, Summing an Alternating Series. The American Mathematical Monthly, Vol. 86, No. 8, 1979, pp. 637–648.

[3] Notice that *N* from the Python program became *m* in `bc`. I’ve used `bc` occasionally for years, and didn’t know until now that you cannot use capital letters for variables in standard `bc`. I guess I never tried before. The next post explains why `bc` doesn’t allow capital letters in variable names.

[4] Johnsonbaugh’s paper does include some numerical calculations, but he only sums up 500 terms, not millions of terms, and it appears he only uses ordinary precision, not extended precision.

A bound based on the alternating series error estimate suggests fewer than 7,497,259 terms. It’s curious that this is almost exactly twice the actual answer.
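That bound is easy to recompute (a sketch of mine, not the commenter's code): the alternating series estimate says the error after *N* terms is less than the first omitted term, 4/(2*N* + 1).

```python
from math import ceil, pi

err = 355/113 - pi  # error of the rational approximation
# alternating series estimate: |pi - S_N| < 4/(2N + 1),
# so 4/(2N + 1) <= err guarantees the partial sum beats 355/113
N = ceil((4/err - 1) / 2)
print(N)  # 7497258
```

This gives fewer than 7,497,259 terms, and the factor of two is no accident: the true error of an alternating partial sum is close to *half* the first omitted term, so the crude bound overshoots the required number of terms by roughly a factor of two.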

All Richard Johnsonbaugh had to do was to perform a simple calculation to at least 14 decimal places and round the result up to the next integer:

1/(355/113 – π) = 3,748,629.093539988

For convenience, I used a 16-digit calculator, but I guess he did it by hand in 1979.
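Ordinary double precision is enough to reproduce that figure; a quick sketch:

```python
from math import ceil, pi

x = 1 / (355/113 - pi)  # reciprocal of the error in 355/113
print(x)        # ~ 3748629.09
print(ceil(x))  # 3748630
```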

This slowly convergent series can be sped up with the help of a continued fraction that yields 3*n*/2 correct digits of π, where *n* is both the number of terms taken from the series and the number of terms of the continued fraction minus one.

Thus, in order to have something better than the seven significant digits given by 355/113, we have to choose n = 5, that is, 14/3 rounded to the next integer, per the following calculation:

3*n/2 > 7 => n > 14/3

We can now compute the new approximation as

4 – 4/3 + 4/5 – 4/7 + 4/9 – 4/(20 + 1/(5 + 4/(20 + 9/(5 + 16/(20 + 25/5))))) = 7,011,328/2,231,775 = 3.141592678473412

The alternating denominators of the continued fraction in the expression above, 20 and 5, are 4*n and n, respectively. The first numerator of the continued fraction is always 4 and the next ones are the squares of 1, 2, 3 and so on up to n.
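Here is a sketch of mine checking that construction with exact rational arithmetic, following the commenter's recipe for n = 5:

```python
from fractions import Fraction
from math import pi

n = 5

# partial sum of the first n Leibniz terms
s = sum(Fraction((-1)**(k + 1) * 4, 2*k - 1) for k in range(1, n + 1))

# continued fraction tail, built from the inside out:
# numerators n^2, ..., 2^2, 1^2; denominators alternate 4n and n
t = Fraction(n)  # innermost bare denominator (the 5 in "25/5")
for k in range(n, 0, -1):
    base = 4*n if k % 2 == 1 else n
    t = base + Fraction(k*k) / t

approx = s - Fraction(4) / t
print(approx)         # 7011328/2231775
print(float(approx))  # 3.1415926784...
```

The result matches the fraction above exactly and, as claimed, beats 355/113.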

There were extended precision floating point packages in 1979. He was a college professor. Perhaps he had access to Macsyma or Maclisp. There were several “flonum” implementations around. I’m pretty sure there were other systems available on college campuses that also provided such extended precision. I remember seeing the specification of one – it looked god-awful to use – that worked with Fortran on OS/360.