Random Blaschke products and Mathematica binding

A Blaschke product is a function that is the product of Blaschke factors, functions of the form

b(z; a) = (|a|/a) (a – z) / (1 – a*z)

where the complex number a lies inside the unit circle and a* is the complex conjugate of a.

I wanted to plot Blaschke products with random values of a using Mathematica, and I ran into a couple items of interest.

First, although Mathematica has a function RandomComplex for returning complex numbers chosen by a pseudorandom number generator, this function selects complex numbers uniformly over a rectangle, and I wanted to select uniformly over a disk. This is easy enough to get around. I wrote my own function by selecting a magnitude and phase at random. The square root on the radial part makes the points uniform with respect to area; without it, the points would cluster near the center.

    rc[] := Sqrt[RandomReal[]] Exp[-2 Pi I RandomReal[]]
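
The post's code is Mathematica; as a quick check that the square root really does give a uniform distribution over the disk, here is an analogous Python sketch (names and sample sizes are my own). Under a uniform distribution, the fraction of points with |z| < 1/2 should be near 1/4, the relative area of that smaller disk.

```python
import math
import random

def rc(rng):
    # Random point in the unit disk, uniform by area: taking the square
    # root of a uniform radius compensates for the fact that area grows
    # like the square of the radius.
    r = math.sqrt(rng.random())
    theta = 2 * math.pi * rng.random()
    return complex(r * math.cos(theta), r * math.sin(theta))

rng = random.Random(0)
pts = [rc(rng) for _ in range(100_000)]
frac = sum(abs(z) < 0.5 for z in pts) / len(pts)
# frac should be close to 0.25
```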

Now if I reuse the definition of a Blaschke factor from an earlier post

    b[z_, a_] := (Abs[a]/a) (a - z)/(1 - Conjugate[a] z)

I can define a product of two random Blaschke factors as follows.

    f[z_] := b[z, rc[]]  b[z, rc[]]

However, this may not do what you expect. If you plot the function twice, you’ll get different results! It’s a matter of binding order. At what point in the process are the two random values of a chosen and fixed? The answer is some time between the definition of f and executing the plotting function. If the plotting function generated new values of a every time it needed to evaluate f the result would be a hot mess.

On the other hand, if we leave out the colon then the function f behaves as expected.

    f[z_] = b[z, rc[]]  b[z, rc[]]

Now the random values are generated by the calls to rc when f is defined, and these values are frozen into the definition of f.
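
The same binding distinction can be mimicked in Python, where a lambda that calls the random generator plays the role of := and evaluating the random values up front plays the role of =. This is my own illustrative sketch, not a literal translation of Mathematica's evaluation semantics.

```python
import cmath
import math
import random

rng = random.Random(1)

def rc():
    # random point in the unit disk, uniform by area
    return cmath.rect(math.sqrt(rng.random()), 2 * math.pi * rng.random())

def b(z, a):
    # Blaschke factor, as in the Mathematica definition above
    return (abs(a) / a) * (a - z) / (1 - a.conjugate() * z)

# Analogue of := , rc() runs again on every evaluation,
# so repeated calls see different random parameters.
f_delayed = lambda z: b(z, rc()) * b(z, rc())

# Analogue of = , the random parameters are generated once,
# here, and frozen into the definition.
a1, a2 = rc(), rc()
f_fixed = lambda z: b(z, a1) * b(z, a2)
```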

Here’s a Blaschke product with 10 random parameters.

    f[z_] = Product[b[z, rc[]], {i, 1, 10}]

The code

    ComplexPlot[f[z], {z, -1 - I, 1 + I}]

produces the following plot.

If I plot this function again, say changing the plot range, I’ll plot the same function. But if I execute the line of code defining f again, and call ComplexPlot again, I’ll generate a new function.

Incidentally, if I plot this same function using ComplexPlot3D I get the following.

Why does it look like a bowl? As explained in the post on Blaschke factors, each factor is zero at its parameter a and has a pole at the reflection of a in the unit disk.

The function plotted above has zeros inside the unit disk near the boundary. That means it also has poles outside the unit disk near the boundary on the other side.

Fixed points of bilinear transformations

Introduction

I was puzzled the first time I saw bilinear transformations, also known as Möbius transformations. I was in a class where everything had been abstract and general, and suddenly things got very concrete and specific. I wondered why we had changed gears, and I wondered how there could be much to say about something so simple.

A bilinear transformation f has the form

f(z) = \frac{az + b}{cz + d}

where ad − bc ≠ 0.

The answer to my questions, which I did not realize at the time, was that bilinear transformations come up very often in applications, if not directly then indirectly as useful tools. For example, the electrical engineer’s Smith chart is a bilinear transformation. Also, most of the mental math tricks given here amount to bilinear approximations. And the Blaschke factors I wrote about yesterday are bilinear transformations.

Fixed points

For this post I want to look at fixed points of bilinear transformations, solutions to f(z) = z where f is as above. This amounts to solving a quadratic equation.

The locations of the fixed points are

\frac{a - d \pm \sqrt{\Delta}}{2c}

where

\Delta = (a + d)^2 - 4(ad-bc)

The trace of the transformation is a + d, and the trace classifies the transformation into one of three categories: parabolic, elliptic, or loxodromic.

The trace could be any number in the complex plane, and all values of the trace correspond to the loxodromic case except when the trace is a real number in the interval [-2, 2].

In the loxodromic case there are two distinct fixed points: one attractive and one repulsive.

Example

In this example I chose a = 1, b = i, c = 2, and d = 3. The two fixed points are at 0.136 + 0.393i and −1.136 − 0.393i.
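
These numbers are easy to verify. The following Python sketch (my own check, not from the post) computes the fixed points from the quadratic formula above for a = 1, b = i, c = 2, d = 3, and iterates f from an arbitrary starting point to see which fixed point attracts.

```python
import cmath

a, b, c, d = 1, 1j, 2, 3

def f(z):
    return (a * z + b) / (c * z + d)

# fixed points from the quadratic formula above
delta = (a + d) ** 2 - 4 * (a * d - b * c)
p1 = (a - d + cmath.sqrt(delta)) / (2 * c)
p2 = (a - d - cmath.sqrt(delta)) / (2 * c)

# iterating f converges to the attracting fixed point
z = 1 + 1j
for _ in range(100):
    z = f(z)
```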

I chose 200 starting points at random in the unit square and iterated the bilinear function. This produced the following graph.

So apparently 0.136 + 0.393i is the attracting fixed point.

Next I started at 200 random starting points in a tight neighborhood of −1.136 − 0.393i, all within 0.00001 of the fixed point. This produced the following graph.

If you start exactly on the repelling fixed point, you’ll stay there forever. But if you start a tiny distance away from this point you’ll end up at the attracting fixed point.

 

Blaschke factors

Blaschke factors are complex functions with specified zeros inside the unit disk. Given a complex number a with |a| < 1, the Blaschke factor associated with a is the function

b(z; a) = \frac{|a|}{a} \frac{a-z}{1 -\bar{a}z}

Notice the semicolon in b(z; a). This is a convention that a few authors follow, and that I wish more would adopt. From a purely logical perspective the semicolon serves no purpose because the expression on the right is a function of two variables, z and a. But the semicolon conveys how we think about the function: we think of it as a function of z that depends on a parameter a.

So, for example, if we were to speak of the derivative of b, we would mean the derivative with respect to z. And we could say that b is an analytic function inside the unit disk. It is not an analytic function if we think of it as a function of a because of taking the conjugate of a in the denominator.

The function b(z; a) has a zero at a and a pole at the reciprocal of the conjugate of a. So, as I wrote about yesterday, this means that the zero and the pole are inversions of each other in the unit circle. If you punch a hole in the complex plane at the origin and turn the plane inside-out, without moving the unit circle, the zero and pole will swap places.
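
These properties are easy to check numerically. Here's a Python sketch (my own check) of the Blaschke factor's zero, its pole location, and one more standard property not mentioned above: on the unit circle itself a Blaschke factor has modulus 1.

```python
import cmath

def b(z, a):
    # Blaschke factor b(z; a) = (|a|/a) (a - z) / (1 - conj(a) z)
    return (abs(a) / a) * (a - z) / (1 - a.conjugate() * z)

a = 0.3 + 0.3j

zero_value = b(a, a)              # should be 0
pole = 1 / a.conjugate()          # 1/(0.3 - 0.3i) = 5/3 + 5i/3
w = cmath.exp(1j * 0.7)           # an arbitrary point on the unit circle
modulus_on_circle = abs(b(w, a))  # should be 1
```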

Here’s a plot of b(z; 0.3 + 0.3i).

Notice there’s a zero at 0.3 + 0.3i and a pole at

1/(0.3 – 0.3i) = 5/3 + 5/3i

which is the inversion of 0.3 + 0.3i in the unit circle. You can tell that one is a zero and the other is a pole because the colors rotate in opposite directions.

The plot above was made with the following Mathematica code.

    b[z_, a_] := (Abs[a]/a) (a - z)/(1 - Conjugate[a] z)
    ComplexPlot[b[z, .3 + .3 I], {z, -1.2 - 1.2 I, 2 + 2 I}, 
        ColorFunction -> "CyclicLogAbs"]

Because the zero and pole locations are inversions of each other, as the zero moves closer to the unit circle from the inside, the pole moves closer to the unit circle from the outside. Here’s a plot with a = 0.5 + 0.5i.

And the closer the zero gets to the origin, the further out the pole moves. Here’s a plot with a = 0.2 + 0.2i, this time on a larger scale, with the real and imaginary axes going up to 3 rather than 2.

Blaschke factors are the building blocks of Blaschke products, something I intend to write about in the future. By multiplying Blaschke factors together, you can create a function that is analytic in the unit disk with specified zeros and with other nice properties.

Elias Wegert [1] says “In some sense, Blaschke products can be considered to be ‘hyperbolic counterparts’ of polynomials.” That’s something I’d like to explore further.


[1] Elias Wegert. Visual Complex Functions: An Introduction with Phase Portraits. Birkhäuser.

Inversion in a circle

Inversion in the unit circle is a way of turning the circle inside-out. Everything that was inside the circle goes outside the circle, and everything that was outside the circle comes in.

Not only is the disk turned inside-out, the same thing happens along each ray going out from the origin. Points on that ray that are inside the circle go outside and vice versa. In polar coordinates, the point (r, θ) goes to (1/r, θ).

Complex numbers

In terms of complex numbers, inversion in the unit circle amounts to taking the reciprocal and the conjugate (in either order, because these operations commute). This is the same as dividing a complex number by the square of its magnitude. Proof:

z \bar{z} = |z|^2 \implies \frac{1}{\bar{z}} = \frac{z}{|z|^2}

There are two ways to deal with the case z = 0. One is to exclude it, and the other is to say that it maps to the point at infinity. This can be made rigorous by working on the Riemann sphere rather than the complex plane. More on that here.
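
In Python, the equivalence between 1/z̄ and z/|z|², and the polar description (r, θ) → (1/r, θ), can be checked directly (this is my own sketch; the names are arbitrary):

```python
import cmath

def invert(z):
    # inversion in the unit circle: z divided by the square of its magnitude
    return z / abs(z) ** 2

z = 2 + 1j
w = invert(z)

r, theta = cmath.polar(z)
rw, thetaw = cmath.polar(w)
# w equals 1/conjugate(z); the radius is inverted and the angle is unchanged
```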

Inverting a hyperbola

The other day Alex Kontorovich pointed out on Twitter that “the perfect figure 8 (or infinity) is simply the inversion through a circle of a hyperbola.” We’ll demonstrate this with Mathematica.

What happens to a point on the hyperbola

x^2 - y^2 = 1

when you invert it through a circle? If we think of x and y as the real and imaginary parts of a complex number, the discussion above shows that the point (x, y) goes to the same point divided by its length squared.

(x, y) \mapsto \left( \frac{x}{x^2 + y^2}, \frac{y}{x^2 + y^2} \right)

Let (u, v) be the image of the point (x, y).

\begin{align*} u &= \frac{x}{x^2 + y^2} \\ v &= \frac{y}{x^2 + y^2} \end{align*}

Then we have

u^2 + v^2 = \frac{x^2 + y^2}{(x^2 + y^2)^2} = \frac{1}{x^2 + y^2}

and

u^2 - v^2 = \frac{x^2 - y^2}{(x^2 + y^2)^2} = \frac{1}{(x^2 + y^2)^2}

because x² − y² = 1. Notice that the latter is the square of the former, i.e.

u^2 - v^2 = (u^2 + v^2)^2
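
We can spot-check this algebra before plotting. The Python sketch below (my own check) parameterizes the right branch of the hyperbola by (cosh t, sinh t), inverts each point, and verifies the lemniscate equation:

```python
import math

def invert(x, y):
    # inversion in the unit circle, written in coordinates
    s = x * x + y * y
    return x / s, y / s

max_err = 0.0
for t in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    x, y = math.cosh(t), math.sinh(t)   # satisfies x^2 - y^2 = 1
    u, v = invert(x, y)
    # image point should satisfy u^2 - v^2 = (u^2 + v^2)^2
    max_err = max(max_err, abs(u * u - v * v - (u * u + v * v) ** 2))
```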

Now we have everything to make our plot. The code

    ContourPlot[{
        x^2 + y^2 == 1, 
        x^2 - y^2 == 1, 
        x^2 - y^2 == (x^2 + y^2)^2}, 
        {x, -3, 3}, {y, -3, 3}]

produces the following plot.

The blue circle is the first contour, the orange hyperbola is the second contour, and the green figure eight is the third contour.


Visual integration

The plot below is of a meromorphic function f(z). That is, the function f(z) is analytic except possibly at poles, and the colors represent the phase angles, the values of θ if you write the function values in polar form.

What is the value of the integral

\frac{1}{2\pi i} \int_C \frac{f'(z)}{f(z)}\, dz

where C is the perimeter of the square?

This hardly seems like enough information to go on. I haven’t even said what the function is. And yet all the information you need is there.

There are three important points inside the plot, the three points where the color swirls come to a point. The colors rotate around the point near the top right corner twice, and they rotate around the other two points once each, in the opposite direction.

This means that our function either has two zeros and a double pole, or two poles and a double zero. In order to know which is which we need to know the color convention. The plot was made with Mathematica, and colors go through ROYGBIV order counterclockwise as the angle increases. So we have the former case: two single zeros and one double pole. However, it turns out the integral would be the same in either case.

By the argument principle, the integral above counts the number of zeros minus the number of poles inside the region, counted with multiplicity. Each zero contributes +1 and the double pole contributes −2. The integral is zero.
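
This counting can be checked numerically without ever seeing the plot. Here's a Python sketch using a made-up function with the same zero/pole structure: two simple zeros and one double pole inside a square (the point locations, contour, and step count are all hypothetical, chosen for illustration), integrated with a simple midpoint rule.

```python
import math

# Hypothetical example: f has simple zeros at z1, z2 and a double pole at p,
# so f'/f is a sum of simple terms and the count should be 1 + 1 - 2 = 0.
z1, z2, p = 0.5 + 0.5j, -0.5 - 0.3j, 0.3 - 0.4j

def logderiv(z):
    return 1 / (z - z1) + 1 / (z - z2) - 2 / (z - p)

def contour_integral(g, corners, n=2000):
    # midpoint rule along each straight leg of a closed polygon
    total = 0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        dz = (b - a) / n
        for k in range(n):
            total += g(a + (k + 0.5) * dz) * dz
    return total

square = [-1 - 1j, 1 - 1j, 1 + 1j, -1 + 1j]
count = contour_integral(logderiv, square) / (2j * math.pi)
# count should be close to 0: zeros minus poles, with multiplicity
```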

Now suppose we cut the region along a diagonal from the top left to the bottom right. Let C be the perimeter of the right triangle below the diagonal and integrate around C counterclockwise. What’s the value of the integral?

It would be 2, because there are two zeros inside the triangle.

***

To learn more about interpreting contour plots, see this post. For more on the argument principle, read this post then this post.

In my previous posts on the argument principle, I only mentioned zeros. Everything holds for poles as well, if you flip the sign.

Sum the zeros of an analytic function without finding them first

A couple days ago I wrote about how Vieta’s formulas let you sum the zeros of a polynomial without having to first compute the zeros. This is especially handy for high-order polynomials since there is no explicit formula for the zeros.

Most functions that arise in applications are not polynomials. How could you find the sum of the zeros of an analytic function in some region without having to locate each of the zeros first? What if you don’t even know how many zeros are in the region?

Here’s a plot of the complex function we’re going to use as an example.

Complex 3D plot of Bessel function Y_1

As with yesterday’s post, the key is to use the argument principle, but this time a more general form.

Generalized argument principle

Suppose we have a simple closed curve C and a function f(z) that is analytic in and on C. In particular we’re assuming f has no poles in or on C. Assume f has n zeros labeled z1 through zn. We’re going to integrate around C to add up the zeros of f inside.

Let g(z) be a function that we want to apply to the zeros of f. We assume it is also analytic in and on C. Then

\frac{1}{2\pi i}\int_C g(z)\, \frac{f'(z)}{f(z)}\, dz = \sum_{i=1}^n g(z_i)

In words, we can sum the values of g applied to each of the zeros of f by computing the integral on the left.

In the previous post, we had g(z) = 1. That is, we were simply counting the zeros. To sum the zeros, we set g(z) = z.
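
As a sanity check on the formula with g(z) = z, here's a Python sketch using a cubic polynomial whose zeros we already know; the zeros, contour, and step count are all my own, made up for illustration.

```python
import math

# Hypothetical example: a cubic polynomial with known zeros
zeros = [0.5 + 0.5j, -0.4 + 0.1j, 0.2 - 0.6j]

def integrand(z):
    # z * f'(z)/f(z), where f'/f is a sum of terms 1/(z - zi)
    return z * sum(1 / (z - zi) for zi in zeros)

def contour_integral(g, corners, n=2000):
    # midpoint rule along each straight leg of a closed polygon
    total = 0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        dz = (b - a) / n
        for k in range(n):
            total += g(a + (k + 0.5) * dz) * dz
    return total

square = [-1 - 1j, 1 - 1j, 1 + 1j, -1 + 1j]
zero_sum = contour_integral(integrand, square) / (2j * math.pi)
# zero_sum should be close to sum(zeros) = 0.3 + 0i
```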

Example

The function plotted above is the Bessel function Y1. Here’s a flattened plot, just showing the phase of the values.

Contour plot of Bessel function Y_1

The points along the real line where the colors swirl around a point are the zeros of the function. We can see that there are zeros near 2, 5, 9, and 11.

There’s something different going on at 0. The colors swirl in the opposite direction because there’s a pole at 0. And there’s an abrupt change in color on the real line to the left of 0 because there’s a branch cut there.

We want to find the sum of the first four zeros, so we’ll create a contour that includes these zeros and excludes the singularity at 0. We’ll use a rectangle with lower left corner at 1 − i and an upper right corner at 13 + i.

We’ll modify our Mathematica code from yesterday to use a different example function, Y1, and add a factor of z to the integrand to sum zeros rather than count zeros.

    f[z_] := BesselY[1, z]
    leg[z1_, z2_] := NIntegrate[z f'[z]/f[z], {z, z1, z2}]
    (leg[1-I, 13-I] + leg[13-I, 13+I] + leg[13+I, 1+I] + leg[1+I, 1-I])/(2 Pi I)

This returns 27.972 − 5.65432*10^−16 I. We know that the zeros are all real, so the imaginary part is integration error. We can ask Mathematica for more digits, and it will report the real part as 27.97198306597801.

We can compute the sum of the first four zeros directly

    N[Sum[BesselYZero[1, n], {n, 1, 4}]]

and the result agrees with the result above to 13 significant figures.

When we were counting zeros, we knew the result had to be an integer, so the rounded integral was exact. Here our result is only as accurate as our numerical integration, though we did use prior information to know that we could discard the imaginary part of the integration result. This would not have made any difference since the imaginary part is orders of magnitude smaller than the integration error in the real part.


Bounding zeros of an analytic function

The previous post looked at the problem of finding the zeros of a cubic polynomial. Assuming we’re going to use a numerical method to calculate the zero, the hard part is knowing where to tell the numerical method to look. That post showed how to use a change of variables to guarantee that the polynomial has one, and only one, root in the unit interval.

Now suppose we’re looking at functions of a complex variable. On the real line we can say that if a function is negative here and positive there, it must be zero somewhere in between by the intermediate value theorem. But that’s not the case for a complex-valued function. Such a function can avoid going through zero because it can move in two dimensions, not just one.

For example, the function e^z is never zero. Now e^(πi) = −1 and e^0 = 1, but there’s no point on a line from πi to 0 where the exponential function is zero.

So how can we tell whether an analytic function has a zero in some region of the complex plane? More generally, can we tell how many zeros it has in some region?

There is a theorem in complex analysis called the argument principle. It says that if an analytic function f is not zero along a closed curve C, then the number of zeros of f inside C equals

\frac{1}{2\pi i} \int_C \frac{f'(z)}{f(z)}\, dz

We can evaluate this integral numerically, and it will tell us the number of zeros inside. The exact value must be an integer, so the integral doesn’t have to be computed with much accuracy. If we know the error is less than a half, then rounding the result to the nearest integer gives the exact answer.
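
Here's a minimal Python sketch of this counting-and-rounding procedure, using f(z) = z³ − 1 as the test function (my own example, with a simple midpoint rule standing in for NIntegrate); its three zeros are the cube roots of unity, all inside the square below.

```python
import math

def logderiv(z):
    # f(z) = z^3 - 1, whose zeros are the three cube roots of unity
    return 3 * z ** 2 / (z ** 3 - 1)

def contour_integral(g, corners, n=2000):
    # midpoint rule along each straight leg of a closed polygon
    total = 0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        dz = (b - a) / n
        for k in range(n):
            total += g(a + (k + 0.5) * dz) * dz
    return total

square = [-2 - 2j, 2 - 2j, 2 + 2j, -2 + 2j]
count = contour_integral(logderiv, square) / (2j * math.pi)
n_zeros = round(count.real)  # the exact answer is an integer, so round
```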

Riemann zeta example

Let’s use the argument principle to show that the Riemann zeta function ζ(z) has 3 zeros in the critical strip with imaginary part between 10 and 30. The critical strip is the part of the complex plane with real part between 0 and 1.

We can make our contour a rectangle with lower left corner at 10i and upper right corner at 1 + 30i.

We’ll use Mathematica and start by defining a function leg that numerically integrates along one leg of a rectangle.

    leg[z1_, z2_] := NIntegrate[Zeta'[z]/Zeta[z], {z, z1, z2}]

Now let’s use this function to integrate over the rectangle described above.

    (leg[0+10I, 1+10I] + leg[1+10I, 1+30I] + 
     leg[1+30I, 0+30I] + leg[0+30I, 0+10I]) / (2 Pi I)

This returns

    3. +3.77069*10^-12 I

and the nearest integer is 3, confirming that the zeta function has 3 zeros inside our box, assuming that our numerical integration error is less than 0.5.

Next, let’s show that zeta has a zero in the critical strip with imaginary part between 14 and 15.

The integration

    (leg[0+14I, 1+14I] + leg[1+14I, 1+15I] + 
     leg[1+15I, 0+15I] + leg[0+15I, 0+14I]) / (2 Pi I)

returns

    1. + 1.46221*10^-12 I

showing that there’s one zero inside this smaller box.

In fact the first three zeros in the critical strip are 1/2 + 14.1347i, 1/2 + 21.0220i, and 1/2 + 25.0109i. The next one is at 1/2 + 30.4249i, just outside the first box we used above.

Here’s a plot.

I plotted ζ(1/2 + iz) to turn the plot sideways. Here’s the code that produced the plot.

    ComplexPlot[Zeta[I z + 1/2], {z, 10 - I, 30 + I}, 
        ColorFunction -> "LocalMaxAbs", AspectRatio -> 1/3]

More on plots of a complex function and what the colors mean here.


Reading plots of a complex function

This post is about understanding the following plot. If you’d like to read more along these lines, see [1].

Hankel function phase plot

The plot was made with the following Mathematica command.

    ComplexPlot[HankelH1[3, z], {z, -8 - 4 I, 4 + 4 I}, 
        ColorFunction -> "CyclicArg", AspectRatio -> 1/2]

The plot uses color to represent the phase of the function values. If the output of the function is written in polar form as (r, θ), we’re seeing θ.

Here’s a plot of the identity function f(z) = z to show how the color rotation works.

The colors rotate from red to orange to yellow etc. (ROYGBIV) clockwise around a pole and counterclockwise around a zero.

You can see from the plot at the top that our function has a pole at 0. In fact, it’s a pole of order three because you go through the ROYGBIV cycle three times clockwise as you circle the origin.

You can also see four zeros, located near −6.4 − 0.4i, −2.2 − i, −0.4 − 2i, and −1.3 − 1.7i. In each case we cycle through ROYGBIV one time counterclockwise, so these are simple zeros. If we had a double zero, we’d cycle through the colors twice. If we had a triple zero, we’d cycle through the colors three times, just as we did at the pole, except we’d go through the colors counterclockwise.

The colors in our plot vary continuously, except on the left side. There’s a discontinuity in our colors above and below the real axis. That’s because the function we’re plotting has a branch cut from −∞ to 0. The discontinuity isn’t noticeable near the origin, but it becomes more noticeable as you move away from the origin to the left.

Here’s a 3D plot to let us see the branch cut more clearly.

Here color represents phase as before, but now height represents absolute value, the r in our polar representation. The flat plateau on top is an artifact. The function becomes infinite at 0, so it had to be cut off.

You can see a seam in the graph. There’s a jump discontinuity along the negative real axis, with the function taking different values as you approach the real axis from the second quadrant or from the third quadrant.

This came up in a previous post, Architecture and math, but I didn’t say anything about it. Here’s a plot from that post.

Why is there a little radial line on top where the graph was chopped off? And why is there a white streak below it? That’s evidence of a branch cut, though the discontinuity is small in this graph. You’ll notice color swirls around the base of the mesa. The colors swirl counterclockwise because they’re all at zeros. But you’ll notice the colors swirl clockwise as you go around the walls of the mesa. In fact they swirl around 5 times because there’s a pole of order 5 at the origin.

The book I mentioned in that post had something like this on the cover, plotting a generalization of the Hankel function. That’s why I chose the Hankel function as my example in this post.


[1] Elias Wegert. Visual Complex Functions: An Introduction with Phase Portraits

The ring of entire functions

Rings made a bad first impression on me. I couldn’t remember the definitions of all the different kinds of rings, much less have an intuition for what was important about each one. As I recall, all the examples of rings in our course were variations on the integers, often artificial variations.

Entire functions

I’m more interested in analysis than algebra, so my curiosity was piqued when I ran across an appendix on entire functions in the back of an algebra book [1]. This appendix opens with the statement

The ring E of entire functions is a most peculiar ring.

It’s interesting that something so natural from the perspective of analysis is considered peculiar from the perspective of algebra.

An entire function is a function of a complex variable that is analytic in the entire complex plane. Entire functions are functions that have a Taylor series with infinite radius of convergence.

Rings

A ring is a set of things with addition and multiplication operations, and these operations interact as you’d expect via distributive rules. You can add, subtract, and multiply, but not divide: addition is invertible but multiplication is not in general. Clearly the sum or product of entire functions is an entire function. But the reciprocal of an entire function is not an entire function because the reciprocal has poles where the original function has zeros.

So why is the ring of analytic functions peculiar to an algebraist? Osborne speaks of “the Jekyll-Hyde nature of E,” meaning that E is easy to work with in some ways but not in others. If Santa were an algebraist, he would say E is both naughty and nice.

Nice properties

On the nice side, E is an integral domain. That is, if f and g are entire functions and fg = 0, then either f = 0 or g = 0.

If we were looking at functions that were merely continuous, it would be possible for f to be zero in some places and g to be zero in the rest, so that the product fg is zero everywhere.

But analytic functions are much less flexible than continuous functions. If an analytic function is zero on a set of points with a limit point, it’s zero everywhere. If every point in the complex plane is either a zero of f or a zero of g, one of these sets of zeros must have a limit point.

Another nice property of E is that it is a Bézout domain. This means that if f and g are entire functions with no shared zeros, there exist entire functions λ and μ such that

λf + μg = 1.

This definition is analogous to (and motivated by) Bézout’s identity in number theory, which says that if a and b are relatively prime integers, then there are integers m and n such that

ma + nb = 1.
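
The integer version is constructive: the extended Euclidean algorithm produces the coefficients m and n. A short Python sketch (my own illustration, using 15 and 28 as an arbitrary coprime pair):

```python
def bezout(a, b):
    # extended Euclidean algorithm: returns (g, m, n)
    # with m*a + n*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, m, n = bezout(b, a % b)
    return g, n, m - (a // b) * n

g, m, n = bezout(15, 28)
# 15 and 28 are relatively prime, so m*15 + n*28 == 1
```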

Naughty properties

The naughty properties of E take longer to describe and involve dimension. “Nice” rings have small Krull dimension. For example Artinian rings have Krull dimension 0, and the ring of polynomials in n complex variables has dimension n. But the Krull dimension of the ring of entire functions is infinite. In fact it’s “very infinite” in the sense that it is at least

2^{\aleph_1}

and so the Krull dimension of E is at least as large as the cardinality of the complex numbers.

Wittgenstein’s ruler

Nassim Taleb described Wittgenstein’s ruler this way:

Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table you may also be using the table to measure the ruler. The less you trust the ruler’s reliability, the more information you are getting about the ruler and the less about the table.

An algebraist would say that entire functions are weird, but an analyst could say that on the contrary, ring theory, or at least Krull dimension, is weird.


[1] Basic Homological Algebra by M. Scott Osborne.