Suppose you are visited by aliens from halfway across the galaxy. After asking you a lot of questions, they give you a parting gift: little black boxes that can compute
x – x²/2 + x³/3 – …
with unbelievable speed and accuracy. You say thank you and your visitors vanish.
You get back home and wonder what you can do with your black boxes. The series the boxes compute looks vaguely familiar, but you can’t remember what it is. You call up a friend and he tells you that it’s the Taylor series for log(1 + x).
OK, so now what?
Your friend tells you he can predict what the boxes will output. He tells you, for example, that if you enter 0.5 it will output 0.4054651081081644. You try it, and your friend is right. At least partly. If you ask your box to give you only 16 digits, it will give you exactly what your friend said it would. But you could ask it for a thousand digits or a million digits or a billion digits, something your friend cannot do, at least not quickly.
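You can replay the friend's trick in a few lines of Python, summing the series by brute force and comparing with the built-in log:

```python
import math

# Brute-force partial sum of x - x^2/2 + x^3/3 - ... at x = 0.5
x, s = 0.5, 0.0
for n in range(1, 60):  # the 60th term is ~1e-20, below double precision
    s += (-1)**(n + 1) * x**n / n

print(s)                # ~0.4054651081081644
print(math.log(1 + x))  # the series converges to log(1 + x)
```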
Then you realize your friend has things backward. The way to exploit these boxes is not to compute logs on your laptop to predict their output, but to use the boxes instead of your laptop.
So you have a way to compute logs. You can bootstrap that to compute inverse logs, i.e. exp(x). And you can bootstrap that to compute sines and cosines. You try to compute anything you can starting from logs.
Enter the AGM
The preceding story was an introduction to the AGM, the arithmetic-geometric mean. It is the limit of alternately taking arithmetic and geometric means. More on that here. What I want to focus on here is that the AGM can be computed extremely quickly.
Each iteration in the process of computing the AGM doubles the number of correct figures in the answer. Suppose you want to compute its output to a billion decimal places, and you’ve calculated a million decimal places. You need to compute 999,000,000 more decimal places, but you’re nearly there! Ten more steps and you’ll have all billion decimal places.
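Here is a minimal sketch of the iteration in Python, using mpmath for extended precision. The starting values 1 and 2 are arbitrary; watching the gap between the two means shrink shows the doubling:

```python
from mpmath import mp, mpf, sqrt

mp.dps = 1100  # plenty of working precision

a, b = mpf(1), mpf(2)
for i in range(1, 12):
    a, b = (a + b) / 2, sqrt(a * b)
    # The gap roughly squares each pass, so the number of
    # correct digits roughly doubles.
    print(i, mp.nstr(abs(a - b), 5))
```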
If you want to compute something to millions of digits, it would make sense to try to compute it in terms of the AGM. This was the research program of the brothers Jonathan and Peter Borwein. Much of this research was codified in their book Pi and the AGM. They used the AGM to compute π to crazy precision, but that wasn’t their goal per se.
Computing π was a demonstration project for a deeper agenda. While describing the work of the Borwein brothers, Richard Brent said
… theorems about π are often just the tips of “mathematical icebergs”—much of interest lies hidden beneath the surface.
The AGM of x and y equals

$$\operatorname{AGM}(x, y) = \frac{\pi (x + y)}{4 \, K\!\left(\frac{x - y}{x + y}\right)}$$
where K is the “complete elliptic integral of the first kind.” [1] You might reasonably think “Great. I’ll keep that in mind if I ever need to compute the complete elliptic integral of the first kind, whatever that is.” But K is like the log function in the story above, something that can be bootstrapped to compute other things.
The aliens Gauss, Lagrange, and Legendre gave the Borweins the AGM black box, and the Borweins figured out how to use it to compute π, but also log, exp, cos, etc. The Borwein algorithms may not be the most efficient if you only want, say, 16 digits of precision. But as you need more precision, eventually they become the algorithms to use.
See the next post for an example using the AGM to compute logarithms to 1000 digits.
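As a preview of the idea (not necessarily the exact method that post uses), one standard approach, going back to Brent and Salamin, uses the asymptotic formula log s ≈ π / (2 AGM(1, 4/s)) for large s: scale the argument up by a power of 2, apply the AGM, and subtract the scaling back out. The function name agm_log and the padding constants below are my own choices for illustration:

```python
from mpmath import mp, mpf, agm, pi, log

mp.dps = 1010  # ~1000 digits plus headroom

def agm_log(x, digits=1000):
    # For large s, log(s) = pi/(2*AGM(1, 4/s)) + O(log(s)/s^2),
    # so scale x up to s = x * 2**m with s > 10**(digits/2),
    # then subtract the scaling back out.
    m = int(digits * 1.7)  # 2**m comfortably exceeds 10**(digits/2)
    s = mpf(x) * mpf(2)**m
    # log(2) is taken from mpmath here for brevity; a real
    # implementation would get it from the same AGM formula once.
    return pi / (2 * agm(1, 4 / s)) - m * log(2)

print(mp.nstr(agm_log(10) - log(10), 5))  # essentially zero at this precision
```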
And see the one after that for a way to compute π with the AGM.
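Likewise as a hedged preview: a classic route from the AGM to π is the Gauss-Legendre (Brent-Salamin) iteration, which also doubles its correct digits each pass. A minimal version:

```python
from mpmath import mp, mpf, sqrt

mp.dps = 1010

# Gauss-Legendre iteration: a and b converge to their AGM
# while t accumulates a correction term.
a, b, t, p = mpf(1), 1 / sqrt(mpf(2)), mpf(1) / 4, mpf(1)
for _ in range(11):  # each pass roughly doubles the correct digits
    a_next = (a + b) / 2
    b = sqrt(a * b)
    t -= p * (a - a_next) ** 2
    a = a_next
    p *= 2

print(mp.nstr((a + b) ** 2 / (4 * t) - mp.pi, 5))  # essentially zero
```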
[1] This assumes the definition

$$K(k) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}}$$
As described at the bottom of this post, you would need to square the argument to K before evaluating it in Mathematica or SciPy, due to differences in conventions for defining K.
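Given that caveat, here is a quick numerical check of the identity above with SciPy, squaring the argument to ellipk as just described:

```python
import numpy as np
from scipy.special import ellipk

def agm(x, y):
    # Plain double-precision AGM; converges in a handful of steps.
    for _ in range(20):
        x, y = (x + y) / 2, np.sqrt(x * y)
    return x

x, y = 3.0, 1.0
k = (x - y) / (x + y)
print(agm(x, y))                             # ~1.86362
print(np.pi * (x + y) / (4 * ellipk(k**2)))  # same; SciPy's K takes m = k**2
```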
Hi John,
I’m intrigued by your post.
Can you show in a future post where the AGM would be optimal/appropriate? Thanks.