Visualizing complex functions

It’s easy to visualize a function from two real variables to one real variable: use the function value as the height of a surface over its input value. But what if you have one more dimension in the output? A complex function of a complex variable is equivalent to a function from two real variables to two real variables. You can use one of the output variables as height, but what do you do about the other one?

Annotating phase on 3D plots

You can start by expressing the function output f(z) in polar form

f(z) = r exp(iθ)

Then you could plot the magnitude r as height and write the phase angle θ on the graph. Here’s a famous example of that approach from the cover of Abramowitz and Stegun.

You can find more on that book and the function it plots here.

This approach makes it easy to visualize the magnitude, but the phase is textual rather than visual. A more visual approach is to use color to represent the phase, such as hue in the HSV color space.
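To make the hue idea concrete, here’s a minimal sketch in Python (the function name is my own) that maps a complex number’s phase to an RGB color using only the standard library:

import cmath
import colorsys

def phase_to_rgb(z):
    # Map the phase of z to a hue; magnitude is ignored
    hue = (cmath.phase(z) / (2 * cmath.pi)) % 1.0  # phase scaled to [0, 1)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)      # full saturation and value

print(phase_to_rgb(1j))  # phase pi/2 maps to a green hue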

Hue-only phase plots

You can combine color with height, but sometimes you’ll just use color alone. That is the approach taken in the book Visual Complex Functions. This is more useful than it may seem at first because the magnitude and phase are tightly coupled for an analytic function. The phase alone uniquely determines the function, up to a constant multiple. The image I posted the other day, the one I thought looked like Paul Klee meets Perry the Platypus, is an example of a phase-only plot.

[Phase-only contour plot resembling a Paul Klee painting]

If I’d added more contour lines the plot would be more informative, but it would look less like a Paul Klee painting and less like a cartoon platypus, so I stuck with the defaults. Mathematica has dozens of color schemes for phase plots. I stumbled on “StarryNightColors” and liked it. I imagine I wouldn’t have seen the connection to Perry the Platypus in a different color scheme.

Using hue and brightness

You can visualize magnitude as well as phase if you add another dimension to color. That’s what Daniel Velleman did in a paper that I read recently [1]. He uses hue to represent angle and brightness to represent magnitude. I liked his approach partly on aesthetic grounds. Phase plots using hue only tend to look psychedelic and garish. The varying brightness makes the plots more attractive. I’ll give some examples and then include Velleman’s code.

First, here’s a plot of Jacobi’s sn function [2], an elliptic function analogous to sine. I picked it because it has a zero and a pole. Zeros show up as black and poles as white. (The function is elliptic, so it’s periodic horizontally and vertically. Functions like sine are periodic horizontally but not vertically.)

f[z_] := 10 JacobiSN[I z, 1.5]; ComplexPlot[-2, -1, 2, 1, 200, 200]

You can see the poles on the left and right and the zero in the middle. Note that the hues swirl in opposite directions around zeros and poles: ROYGBIV counterclockwise around a zero and clockwise around a pole.

Next, here’s a plot of the 5th Chebyshev polynomial. I chose this because I’ve been thinking about Chebyshev polynomials lately, and because polynomials make plots that fade to white in every direction. (That is, |p(z)| → ∞ as |z| → ∞ for every nonconstant polynomial p.)

f[z_] := ChebyshevT[5, z]; ComplexPlot[-2, -2, 2, 2, 400, 400]

Finally, here’s a plot of the gamma function. I included this example because you can see the poles as little white dots on the left, and the plot has a nice dark-to-light overall pattern.

f[z_] := Gamma[z]; ComplexPlot[-3.5, -3, 7, 3, 200, 200]

Mathematica code

Here’s the Mathematica code from Velleman’s paper. Note that in the HSV scale, he uses brightness to change both the saturation (S) and value (V). Unfortunately the function f being plotted is a global variable rather than being passed in as an argument to the plotting function.

(* Map magnitude to brightness: dark near zero, light near infinity *)
brightness[x_] := If[x <= 1, 
    Sqrt[x]/2, 
    1 - 8/((x + 3)^2)]

(* Hue encodes phase; both saturation and value vary with magnitude *)
ComplexColor[z_] := If[z == 0, 
    Hue[0, 1, 0], 
    Hue[Arg[z]/(2 Pi), 
        (1 - (b = brightness[Abs[z]])^4)^0.25, 
        b]]

ComplexPlot[xmin_, ymin_, xmax_, ymax_, xsteps_, ysteps_] := Block[
    {deltax = N[(xmax - xmin)/xsteps], 
     deltay = N[(ymax - ymin)/ysteps]}, 
    Graphics[
        Raster[
            Table[
                f[x + I y], (* f is a global, as noted above *)
                {y, ymin + deltay/2, ymax, deltay}, 
                {x, xmin + deltax/2, xmax, deltax}], 
            {{xmin, ymin}, {xmax, ymax}}, 
            ColorFunction -> ComplexColor]]]
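For comparison, here’s a rough Python translation of the same scheme (a sketch of my own, not code from the paper) using NumPy and Matplotlib, with f passed in as an argument rather than left as a global:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def brightness(x):
    # Dark near zero magnitude, light near infinite magnitude
    return np.where(x <= 1, np.sqrt(x) / 2, 1 - 8 / (x + 3) ** 2)

def complex_plot(f, xmin, ymin, xmax, ymax, xsteps, ysteps):
    x = np.linspace(xmin, xmax, xsteps)
    y = np.linspace(ymin, ymax, ysteps)
    w = f(x[np.newaxis, :] + 1j * y[:, np.newaxis])
    b = brightness(np.abs(w))
    h = (np.angle(w) / (2 * np.pi)) % 1.0  # hue encodes phase
    s = (1 - b**4) ** 0.25                 # saturation, as in the paper
    plt.imshow(hsv_to_rgb(np.dstack([h, s, b])),
               origin="lower", extent=(xmin, xmax, ymin, ymax))
    plt.show()

# Example: complex_plot(lambda z: z**3 - 1, -2, -2, 2, 2, 400, 400)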



Footnotes

[1] Daniel J. Velleman. The Fundamental Theorem of Algebra: A Visual Approach. Mathematical Intelligencer, Volume 37, Number 4, 2015.

[2] I used f[z] = 10 JacobiSN[I z, 1.5]. I multiplied the argument by i because I wanted to rotate the picture 90 degrees. And I multiplied the output by 10 to get a less saturated image.

Joukowsky transformation

The Joukowsky transformation, or Joukowsky map, is a simple function that comes up in aerospace and electrical engineering calculations.

f(z) = \frac{1}{2} \left( z + \frac{1}{z} \right)

(Here z is a complex variable.) Despite its simplicity, it’s interesting to look at in detail.

Mapping lines and circles

Let z = r exp(iθ) and let w = u + iv be its image. Writing the Joukowsky transformation in terms of its real and imaginary parts makes it easier to understand how it transforms lines and circles.

r\exp(i\theta) \mapsto (u,v) = \left( \frac{1}{2} \left(r + \frac{1}{r}\right) \cos\theta, \frac{1}{2} \left(r - \frac{1}{r}\right) \sin\theta \right)

We can see how circles are mapped by holding r constant and letting θ vary. The unit circle gets crushed to the interval [-1, 1] on the real axis, traversed twice. Circles of radius r ≠ 1 get mapped to ellipses

\frac{u^2}{a^2} + \frac{v^2}{b^2} = 1

where

\begin{aligned} a &= \frac{1}{2}\left(r + \frac{1}{r}\right) \\ b &= \frac{1}{2}\left(r - \frac{1}{r}\right) \end{aligned}

Next we hold θ constant and let r vary. If sin θ = 0 and cos θ > 0 then the image is [1, ∞). If sin θ = 0 and cos θ < 0 then the image is (-∞, -1]. In both cases the image is traced out twice. The image of the line with cos θ = 0 is the vertical axis. Otherwise lines through the origin are mapped to hyperbolas with equation

\frac{u^2}{\cos^2\theta} - \frac{v^2}{\sin^2\theta} = 1
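Here’s a quick numerical check of both facts, a sketch of my own rather than anything from the original post:

import numpy as np

# Circle of radius r: image points satisfy u^2/a^2 + v^2/b^2 = 1
r = 2.0
theta = np.linspace(0, 2 * np.pi, 500)
w = 0.5 * (r * np.exp(1j * theta) + np.exp(-1j * theta) / r)
a, b = 0.5 * (r + 1 / r), 0.5 * (r - 1 / r)
assert np.allclose(w.real**2 / a**2 + w.imag**2 / b**2, 1.0)

# Ray at angle theta0: image points satisfy u^2/cos^2 - v^2/sin^2 = 1
theta0 = np.pi / 3
rs = np.linspace(0.1, 10, 500)
w = 0.5 * (rs * np.exp(1j * theta0) + np.exp(-1j * theta0) / rs)
assert np.allclose(w.real**2 / np.cos(theta0)**2 - w.imag**2 / np.sin(theta0)**2, 1.0)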

Inverse functions

If (z + 1/z)/2 = w then z^2 - 2wz + 1 = 0. The discriminant of this equation is 4(w^2 - 1), and so the Joukowsky transformation is 2-to-1 unless w = ±1, in which case z = ±1. The product of the two roots of a quadratic equation equals the constant term divided by the leading coefficient; in this case the product is 1. This says the Joukowsky transformation is 1-to-1 in any region that doesn’t contain both z and 1/z. This is the case for the interior or exterior of the unit circle, and for the upper or lower half planes. In any of those four regions, one can invert the Joukowsky transformation by solving a quadratic equation and choosing the correct root.
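As a sketch of that inversion (my own code, with hypothetical names), choosing the root outside the unit circle:

import cmath

def joukowsky(z):
    return 0.5 * (z + 1 / z)

def joukowsky_inverse(w):
    # One root of z^2 - 2wz + 1 = 0; the two roots are reciprocals
    z = w + cmath.sqrt(w * w - 1)
    return z if abs(z) >= 1 else 1 / z  # pick the root outside the unit circle

z = 2 + 1j
assert abs(joukowsky_inverse(joukowsky(z)) - z) < 1e-12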

Read more: Applied complex analysis

Doubly and triply periodic functions

A function f is periodic if there exists a constant period ω such that f(x) = f(x + ω) for all x. For example, sine and cosine are periodic with period 2π.

On the real line there’s only one direction in which a function can be periodic. But if you think of functions of a complex variable, it makes sense to look at functions that are periodic in two different directions. Sine and cosine are periodic as you move horizontally across the complex plane, but not if you move in any other direction. But you could imagine a function that’s periodic vertically as well as horizontally.

A doubly periodic function satisfies f(x) = f(x + ω1) and f(x) = f(x + ω2) for all x and for two different fixed complex periods, ω1 and ω2, with different angular components, i.e. the two periods are not real multiples of each other. For example, the two periods could be 1 and i.

How many doubly periodic functions are there? The answer depends on how much regularity you require. If you ask that the functions be differentiable everywhere as functions of a complex variable (i.e. entire), the only doubly periodic functions are constant functions [1]. But if you relax your requirements to allow functions to have singularities, there’s a wide variety of functions that are doubly periodic. These are the elliptic functions. They’re periodic in two independent directions, and meromorphic (i.e. analytic except at isolated poles). [2]

What about triply periodic functions? If you require them to be meromorphic, then the only triply periodic functions are constant functions. To put it another way, if a meromorphic function is periodic in three directions, it’s periodic in every direction, with every period, i.e. constant. If a function has three independent periods, its set of periods has a limit point, and a meromorphic function that takes the same value on a set with a limit point must be constant by the identity theorem.

Read more: Applied complex analysis

* * *

[1] Another way to put this is to say that elliptic functions must have at least one pole inside the parallelogram determined by the lines from the origin to ω1 and ω2. A doubly periodic function’s values everywhere are repeats of its values on this parallelogram. If the function were continuous over this parallelogram (i.e. with no poles) then it would be bounded over the parallelogram and hence bounded everywhere. But Liouville’s theorem says a bounded entire function must be constant.

[2] We don’t consider arbitrary singularities, only isolated poles. There are doubly periodic functions with essential singularities, but these are outside the definition of elliptic functions.

Why are differentiable complex functions infinitely differentiable?

Complex analysis is filled with theorems that seem too good to be true. One is that if a complex function is once differentiable, it’s infinitely differentiable. How can that be? Someone asked this on math.stackexchange and this was my answer.

The existence of a complex derivative means that locally a function can only rotate and expand. That is, in the limit, disks are mapped to disks. This rigidity is what makes a complex differentiable function infinitely differentiable, and even more, analytic.

For a complex derivative to exist, the limit of (f(z+h) – f(z))/h has to exist and have the same value for every way the “h” term can go to zero. In particular, h could approach 0 along any radial path, and that’s why an analytic function must map disks to disks in the limit.

By contrast, an infinitely differentiable function of two real variables could map a disk to an ellipse, stretching more in one direction than another. An analytic function can’t do that.

A smooth function of two variables could also flip a disk over, such as f(x, y) = (x, -y). An analytic function can’t do that either. That’s why complex conjugation is not an analytic function.
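You can see this direction dependence numerically. Here’s a small sketch of my own: the difference quotient of conjugation gives different values along different approach directions, so no complex derivative exists.

def diff_quotient(f, z, h):
    # (f(z+h) - f(z)) / h for a given complex step h
    return (f(z + h) - f(z)) / h

conj = lambda z: z.conjugate()

z = 1 + 1j
print(diff_quotient(conj, z, 1e-8))   # 1.0 along the real direction
print(diff_quotient(conj, z, 1e-8j))  # -1.0 along the imaginary direction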

You might think that if complex differentiability is so restrictive, there must not be many complex differentiable functions. And yet nearly every function you’ve run across in school — trig functions, logs, polynomials, classical probability distributions, etc. — is differentiable when extended to a function of a complex variable. According to this quote, “95 percent, if not 100 percent, of the functions studied by physics, engineering, and even mathematics students” are hypergeometric, a very special case of complex differentiable functions.

Contact me if you’d like help with complex analysis.

Addition formulas for Bessel functions

When trying to understand a complex formula, it helps to first ask what things are being related before asking how they are related.

This post will look at addition theorems for Bessel functions. They relate the values of Bessel functions at two points to the values at a third point.

Let a, b, and c be lengths of the sides of a triangle and let θ be the angle between the sides of length a and b. The triangle could be degenerate, i.e. θ could be π. The addition theorems here give the value of Bessel functions at c in terms of values of Bessel functions evaluated at a and b.

The first addition theorem says that

J0(c) = ∑ Jm(a) Jm(b) exp(imθ)

where the sum is over all integer values of m.

This says that the value of the Bessel function J0 at c is determined by a sort of inner product of values of Bessel functions of all integer orders at a and b. It’s not exactly an inner product because it is not positive definite. (Sometimes you’ll see the formula above written with a factor λ multiplying a, b, and c. This is because you can scale every side of a triangle by the same amount and not change the angles.)
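Here’s a numerical check of the first addition theorem using mpmath, a sketch of my own that truncates the sum at |m| ≤ 40:

from mpmath import besselj, cos, exp, mpc, pi, sqrt

a, b, theta = 2.0, 3.0, pi / 5
c = sqrt(a**2 + b**2 - 2 * a * b * cos(theta))  # law of cosines

total = mpc(0)
for m in range(-40, 41):
    total += besselj(m, a) * besselj(m, b) * exp(1j * m * theta)

print(total)          # imaginary parts cancel in conjugate pairs
print(besselj(0, c))  # agrees with the real part above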

Define the vectors x, y, and z to be the values of all Bessel functions evaluated at a, b, and c respectively. That is, for integer k, xk = Jk(a) and similar for y and z. Also, define the vector w by wk = exp(ikθ). Then the first addition theorem says

z0 = ∑ xm ym wm.

This is a little unsatisfying because it relates the value of one particular Bessel function at c to the values of all Bessel functions at a and b. We’d like to relate all Bessel function values at c to the values at a and b. That is, we’d like to relate the whole vector z to the vectors x and y.

Define the vector v by vk = exp(ikψ) where ψ is the angle between the sides of length b and c. Then the formula we’re looking for is

zn = v-n ∑ xn+m ym wm.


Why j for imaginary unit?

Electrical engineers use j for the square root of -1 while nearly everyone else uses i. The usual explanation is that EEs do this because they use i for current. But here’s one advantage to using j that has nothing to do with electrical engineering.

The symbols i, j, and k are used for unit vectors in the directions of the x, y, and z axes respectively. That means that “i” has two different meanings in the real plane, depending on whether you think of the plane as the vector space spanned by i and j or as the complex numbers. But if you use j to represent the imaginary unit, its meaning does not change. Either way it points along the y axis.

Said another way, bold face i and italic i point in different directions. But bold face j and italic j both point in the same direction.

Here’s what moving from vectors to complex numbers looks like in math notation:

x i + y j ↔ x + iy

And here’s what it looks like in electrical engineering notation:

x i + y j ↔ x + jy

I don’t expect math notation to change, nor would I want it to. I’m happy with i. But using j might make moving between vectors and complex numbers a little easier.

Related: Applied complex analysis

Fractal-like phase plots

Define f(z) = iz*exp(-z/sin(iz)) + 1 and g(z) = f(f(z)) + 2 for a complex argument z. Here’s what the phase plots of g look like.

The first image lets the real and imaginary parts of z range from -10 to 10.

This close-up plots real parts between -1 and 0, and imaginary parts between -3 and -2.

The plots were produced with this Python code:

from mpmath import cplot, sin, exp

def f(z): return 1j*z*exp(-z/sin(1j*z)) + 1

def g(z): return f(f(z)) + 2

cplot(g, [-1,0], [-3,-2], points=100000)
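# For the first, wide image above: cplot(g, [-10, 10], [-10, 10], points=100000)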

The function g came from Visual Complex Functions.

Read more: Applied complex analysis

Wallpaper and phase portraits

Suppose you want to create a background image that tiles well. You’d like it to be periodic horizontally and vertically so that there are no obvious jumps when the image repeats.

Functions like sine and cosine are periodic along the real line. But if you want to make a two-dimensional image by extending the sine function to the complex plane, the result is not periodic along the imaginary axis; it grows exponentially.

There are functions that are periodic horizontally and vertically. If you restrict your attention to functions that are analytic except at poles, these doubly-periodic functions are elliptic functions, a class of functions with remarkable properties. See this post if you’re interested in the details. Here we’re just after pretty wallpaper. I’ll give Python code for creating the wallpaper.

Here I’ll take a particular elliptic function, sn(z), one of the Jacobi elliptic functions and somewhat analogous to the sine function, and use its phase portrait. Phase portraits use hue to encode the phase of a complex number, the θ value when a complex number is written in polar coordinates. The brightness of the color indicates the magnitude, the r value in polar coordinates.

Here’s the plot of sn(z, 0.2). (The sn function takes a parameter m that I arbitrarily chose as 0.2.) The plot shows two periods, horizontally and vertically. I included two periods so you could more easily see how it repeats. If you wanted to use this image as wallpaper, you could use 1/4 of the image, one period in each direction, to get by with a smaller image.

[Phase plot of sn(z, 0.2) - 0.2]

Here’s the Python code that was used to create the image.

from mpmath import cplot, ellipfun, ellipk
sn = ellipfun('sn')
m = 0.2
x = 4*ellipk(m) # horizontal period
y = 2*ellipk(1-m) # vertical period
cplot(lambda z: sn(z, m) - 0.2, [0, 2*x], [0, 2*y], points = 100000)

I subtracted 0.2 from sn just to shift the color a little. Adding a positive number shifts the color toward red. Subtracting a positive number shifts the color toward blue. You could also multiply by some constant to increase or decrease the brightness.

You could also play around with other elliptic functions, described in the mpmath documentation here. And you can find more on cplot here. For example, you could supply your own function for how phase is mapped to color. The saturated colors used by default are good for mathematical visualization, but more subtle colors could be better for aesthetics.

If you make some interesting images, leave a comment with a link to your image and a description of how you made it.

Read more: Applied complex analysis

Product of polygon diagonals

Suppose you have a regular pentagon inscribed in a unit circle, and connect one vertex to each of the other four vertices. Then the product of the lengths of these four lines is 5.

More generally, suppose you have a regular n-gon inscribed in a unit circle. Connect one vertex to each of the others and multiply the lengths of each of these line segments. Then the product is n.

I ran across this theorem recently thumbing through Mathematical Diamonds and had a flashback. This was a homework problem in a complex variables class I took in college. The fact that it was a complex variables class gives a big hint at the solution: Put one of the vertices at 1 and then the rest are nth roots of 1. Connect all the roots to 1 and use algebra to show that the product of the lengths is n. This will be much easier than a geometric proof.

Let ω = exp(2πi/n). Then the nth roots of 1 are the powers of ω. The product of the diagonal lengths equals

|1 - ω| |1 - ω^2| |1 - ω^3| … |1 - ω^(n-1)|

You can change the absolute value signs to parentheses because the terms come in conjugate pairs. That is,

|1 - ω^k| |1 - ω^(n-k)| = (1 - ω^k)(1 - ω^(n-k)).

So the task is to prove

(1 - ω)(1 - ω^2)(1 - ω^3) … (1 - ω^(n-1)) = n.

Since

z^n - 1 = (z - 1)(z^(n-1) + z^(n-2) + … + 1)

it follows that

z^(n-1) + z^(n-2) + … + 1 = (z - ω)(z - ω^2) … (z - ω^(n-1)).

The result follows from evaluating the expression above at z = 1.
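Here’s a quick numerical check of the theorem (a sketch of my own):

import cmath

n = 7
omega = cmath.exp(2j * cmath.pi / n)

product = 1.0
for k in range(1, n):
    product *= abs(1 - omega**k)  # length of the diagonal from 1 to omega^k

print(product)  # 7.0, up to floating point error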

Related: Applied complex analysis

Anti-calculus proposition of Erdős

The “anti-calculus proposition” is a little result by Paul Erdős that contrasts functions of a real variable and functions of a complex variable.

A standard calculus result says the derivative of a function is zero where the function takes on its maximum. The anti-calculus proposition says that for analytic functions, the derivative cannot be zero at the maximum.

To be more precise, a differentiable real-valued function on a closed interval takes on its maximum where the derivative is zero or at one of the ends. It’s possible that the maximum occurs at one of the ends of the interval and the derivative is zero there.

The anti-calculus proposition says that the analogous situation cannot occur for functions of a complex variable. Suppose a function f is analytic on a closed disk and suppose that f is not constant. Then |f| must take on its maximum somewhere on the boundary by the maximum modulus theorem. Erdős’ anti-calculus proposition adds that at the point on the boundary where |f| takes on its maximum, the derivative cannot be zero.
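For a concrete illustration, consider this sketch of my own with f(z) = z^2 + 1 on the closed unit disk. The maximum of |f| on the boundary occurs at z = ±1, and the derivative 2z is nonzero there.

import numpy as np

theta = np.linspace(0, 2 * np.pi, 100001)
z = np.exp(1j * theta)  # the boundary of the unit disk
k = np.argmax(np.abs(z**2 + 1))

print(z[k])      # a maximizer, close to 1 or -1
print(2 * z[k])  # the derivative 2z there, clearly nonzero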


Doubly periodic functions

Functions like sine and cosine are periodic. For example, sin(x + 2πn) = sin(x) for all x and any integer n, and so the period of sine is 2π. But what happens if you look at sine or cosine as functions of a complex variable? They’re still periodic if you shift left or right, but not if you shift up or down. If you move up or down, i.e. in a pure imaginary direction, sines and cosines become unbounded.

Doubly periodic functions are periodic in two directions. Formally, a function f(z) of a complex variable is doubly periodic if there exist two constants ω1 and ω2 such that

f(z) = f(z + ω1) = f(z + ω2)

for all z. The ratio ω1 / ω2 cannot be real; otherwise the two periods point in the same direction. For the rest of this post, I’ll assume ω1 = 1 and ω2 = i. Such functions are periodic in the horizontal (real-axis) and vertical (imaginary-axis) directions. They repeat everywhere their behavior on the unit square.

What do doubly periodic functions look like? It depends on what restrictions we place on the functions. When we’re working with complex functions, we’re typically interested in functions that are analytic, i.e. differentiable as complex functions.

Only constant functions can be doubly periodic and analytic everywhere. Why? A doubly periodic function repeats, over the whole plane, the values it takes on the unit square. If a function is analytic over the (closed) unit square then it’s bounded over that square, and since it’s doubly periodic, it’s bounded everywhere. By Liouville’s theorem, the only bounded entire functions are constants.

This says that to find interesting doubly periodic functions, we’ll have to relax our requirements. Instead of requiring functions to be analytic everywhere, we will require them to be analytic except at isolated singularities. That is, the functions are allowed to blow up at a finite number of points. There’s a rich set of such functions, known as elliptic functions.

There are two well-known families of elliptic functions. One is the Weierstrass ℘ function (TeX symbol wp, Unicode U+2118) and its derivatives. The other is the Jacobi functions sn, cn, and dn. These functions have names resembling familiar trig functions because the Jacobi functions are in some ways analogous to trig functions.

It turns out that all elliptic functions can be written either as combinations of the Weierstrass function and its derivative or as combinations of Jacobi functions. Roughly speaking, Weierstrass functions are easier to work with theoretically and Jacobi functions are easier to work with numerically.
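As a numerical sanity check (a sketch using mpmath, echoing the wallpaper code earlier in this document), sn repeats with real period 4K(m) and imaginary period 2iK(1-m):

from mpmath import ellipfun, ellipk, mpf

sn = ellipfun('sn')
m = mpf(0.5)
w1 = 4 * ellipk(m)       # real period
w2 = 2j * ellipk(1 - m)  # imaginary period

z = mpf(0.3) + 0.4j
print(sn(z, m))
print(sn(z + w1, m))  # same value
print(sn(z + w2, m))  # same value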

Related: Applied complex analysis

A surprising theorem in complex variables

Here’s a strange theorem I ran across last week. I’ll state the theorem then give some footnotes.

Suppose f(z) and g(z) are two functions meromorphic in the plane. Suppose also that there are five distinct numbers a1, …, a5 such that the solution sets {z : f(z) = ai} and {z : g(z) = ai} are equal. Then either f(z) and g(z) are equal everywhere or they are both constant.

Notes

A complex function of a complex variable is meromorphic if it is differentiable except at isolated poles. That is, the theorem above applies to functions that are (complex) differentiable in the entire plane except at isolated poles.

The theorem is due to Rolf Nevanlinna. There’s a whole branch of complex analysis based on Nevanlinna’s work, but I’d not heard of it until last week. I have no idea why the theorem is true. It doesn’t seem that it should be true; the hypothesis seems far too weak for such a strong conclusion. But that’s par for the course in complex variables.

Update: I edited this post in response to the first comment below to make the theorem statement clearer.

More: Applied complex analysis

The gamma function

The gamma function Γ(x) is the most important function not on a calculator. It comes up constantly in math. In some areas, such as probability and statistics, you will see the gamma function more often than other functions that are on a typical calculator, such as trig functions.

The gamma function extends the factorial function to real numbers. Since factorial is only defined on non-negative integers, there are many ways you could extend it that would agree with factorial on the integers and disagree elsewhere. But everyone agrees that the gamma function is “the” way to extend factorial. Actually, the gamma function Γ(x) does not extend factorial, but Γ(x+1) does. Shifting the definition over by one makes some equations simpler, and that’s the definition that has caught on.

In a sense, Γ(x+1) is the unique way to generalize factorial. Harald Bohr and Johannes Mollerup proved that it is the only log-convex function that agrees with factorial on the non-negative integers. That’s somewhat satisfying, except why should we look for log-convex functions? Log-convexity is a very useful property to have, and a natural one for a function generalizing the factorial.

Here’s a plot of the logarithms of the first few factorial values.

The points nearly fall on a straight line, but if you look closely, the points bend upward slightly. If you have trouble seeing this, imagine a line connecting the first and last dot. This line would lie above the dots in between. This suggests that a function whose graph passes through all the dots should be convex.
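You can check the upward bend numerically. Here’s a sketch of my own: the second differences of log n! are positive, which is the discrete version of convexity.

import math

# log n! for n = 0, 1, ..., 7
logfact = [math.lgamma(n + 1) for n in range(8)]

# Positive second differences mean the points bend upward
print([logfact[n-1] - 2*logfact[n] + logfact[n+1] for n in range(1, 7)])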

Here’s a graph showing what the gamma function looks like over the real line.

The gamma function is finite except at zero and the negative integers, where it has poles. Approaching a pole from the right, it goes to +∞ at zero and the negative even integers and to -∞ at the negative odd integers; the signs flip when approaching from the left. The gamma function can also be uniquely extended to an analytic function on the complex plane. The only singularities are the poles on the real axis.
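You can see the sign behavior between the poles numerically with mpmath (a sketch of my own):

from mpmath import gamma

print(gamma(0.5))   # positive on (0, inf)
print(gamma(-0.5))  # negative on (-1, 0)
print(gamma(-1.5))  # positive on (-2, -1): the sign alternates between poles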
