# Complex AGM

The arithmetic-geometric mean (AGM) of two non-negative real numbers a and b is defined as the limit of the iteration starting with a₀ = a and b₀ = b and

aₙ₊₁ = ½ (aₙ + bₙ)
bₙ₊₁ = √(aₙ bₙ)

for n ≥ 0. This sequence converges very quickly (quadratically, in fact) and is useful in numerical algorithms. The limit can be expressed in terms of an elliptic function, and that elliptic function can then be related to other functions. See, for example, this post for how the AGM can be used to compute logarithms to arbitrary precision.
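For real inputs the iteration is easy to sketch. This is a minimal illustration (not the complex version discussed below), assuming double precision and a fixed absolute tolerance:

```python
import math

def agm_real(a, b):
    """Arithmetic-geometric mean of non-negative reals a and b."""
    while abs(a - b) > 1e-12:
        # Replace the pair with its arithmetic and geometric means.
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

# Convergence is quadratic: only a handful of iterations are needed.
print(agm_real(1.0, 2.0))  # approximately 1.456791
```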

Since the AGM is useful in computing special functions, and we’re often interested in evaluating special functions at complex values, it’s natural to want to evaluate the AGM for complex numbers.

But we immediately run into a difficulty: which square root do we pick to find the new b?

For a non-negative real number x, √x is defined to be the non-negative real number y such that y² = x. But for more general values of x we have to choose a branch of the square root function. Which branch should we use in the AGM?

Often when we need to take the square root of complex numbers we can use the “principal branch,” the branch that gives positive values for positive real inputs and extends to the rest of the complex plane with the negative axis removed. If you compute the square root of a complex number in software, as we’ll do below, this is probably the value you’ll get by default.
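For example, Python's standard library returns the principal branch, which picks the root with non-negative imaginary part:

```python
import cmath

# Both 2j and -2j square to -4. The principal square root halves the
# argument, mapping arg(z) in (-pi, pi] to (-pi/2, pi/2], so the result
# never has negative imaginary part for inputs on the cut.
z = cmath.sqrt(-4)
print(z)      # 2j
print(z * z)  # (-4+0j)
```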

But it turns out we cannot simply pick the principal branch. Gauss discovered two centuries ago that the right branch to take can vary from step to step. What does that even mean? How do we know we’ve made the “right” choice?

For one thing, we want our iteration to converge. And for another, we’d like it to converge to something non-zero if we start with non-zero inputs. The right choice guarantees both.

So what is this right choice? We provisionally update a and b as above, using either square root for b, and keep the value of b if

|a − b| < |a + b|

or

|a − b| = |a + b|

and the imaginary part of b/a is positive. In other words, choose the possible value of b that’s closer to a, using the sign of the imaginary part of b/a as a tie breaker.

Here is the AGM implemented in Python, using the right square root at each step.

```python
import numpy as np

def right(a, b):
    d = abs(a + b) - abs(a - b)
    return d > 0 or (d == 0 and (b/a).imag > 0)

def agm(a, b):
    while abs(a - b) > 1e-14:
        a1 = 0.5*(a + b)
        b1 = np.sqrt(a*b)
        if not right(a1, b1):
            b1 = -b1
        a, b = a1, b1
    return a
```

The test `d == 0` should make you concerned. Testing floating point numbers for exact equality with zero is seldom the right thing to do, but we’ll gloss over numerical issues for this post.
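If you did want something more defensive, one option is to treat near-equal magnitudes as a tie. This is only a sketch, assuming a relative tolerance is acceptable here; the helper name `right_tol` is mine, not from the code above:

```python
def right_tol(a, b, rel_tol=1e-12):
    """Like right(), but treat nearly equal magnitudes as a tie."""
    s = abs(a + b)
    d = abs(a - b)
    if abs(s - d) <= rel_tol * max(s, d):
        # Effectively tied: fall back to the sign of Im(b/a).
        return (b / a).imag > 0
    return s > d
```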

Here’s an example, giving the a values of the iteration starting with 7 + 30i and 20 + 22i. The iteration converges in four steps.

```
13.500000000000000 + 26.000000000000000j
13.784944719026262 + 26.397404494892115j
13.783557503026870 + 26.395953326186888j
13.783557473769877 + 26.395953309190112j
```
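These values can be reproduced with a short self-contained driver. This sketch repeats the branch test inline and uses `cmath.sqrt`, which also returns the principal branch:

```python
import cmath

def right(a, b):
    d = abs(a + b) - abs(a - b)
    return d > 0 or (d == 0 and (b / a).imag > 0)

a, b = 7 + 30j, 20 + 22j
while abs(a - b) > 1e-14:
    # Tuple assignment: both means are computed from the old (a, b) pair.
    a, b = 0.5 * (a + b), cmath.sqrt(a * b)
    if not right(a, b):
        b = -b
    print(a)
```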

In this example, the right root is the principal root every time. To find an example where the other root is chosen, I generated random starting points until I found one that took an alternate root.

Here are the values of b starting with −1.654 − 1.178i and 2.244 − 1.956i. An asterisk indicates that the principal root was not the right root.

```
 0.2328790598285062 - 0.728412421988127j
-0.6829999999999998 - 0.589000000000000j
-0.2254063569280081 - 0.799311791579126j *
-0.2250604700857468 - 0.658706210994063j
-0.2261796153095098 - 0.725905503054624j *
-0.2252334135068774 - 0.729009001286595j
-0.2257078598391289 - 0.727456168426402j *
-0.2257065144081936 - 0.727457252170610j
-0.2257071871240875 - 0.727456710298747j *
-0.2257071871236612 - 0.727456710298506j
-0.2257071871238743 - 0.727456710298626j *
-0.2257071871238744 - 0.727456710298626j
```

## Notes

David Cox. The Arithmetic-Geometric Mean of Gauss. *L’Enseignement Mathématique* 30 (1984), pp. 275–330.

If a = −b, the iteration gives zero immediately. But if a and b are both non-zero and a ≠ −b, then taking the right square root at each iteration gives us what we want.

Interestingly, you could make a finite number of wrong choices and still end up with something that might converge, albeit to a different value. This gives you a different branch of the AGM.