Wallpaper and phase portraits

Suppose you want to create a background image that tiles well. You’d like it to be periodic horizontally and vertically so that there are no obvious jumps when the image repeats.

Functions like sine and cosine are periodic along the real line. But if you want to make a two-dimensional image by extending the sine function to the complex plane, the result is not periodic along the imaginary axis; instead the function grows exponentially in that direction.

There are functions that are periodic horizontally and vertically. If you restrict your attention to functions that are analytic except at poles, these doubly-periodic functions are elliptic functions, a class of functions with remarkable properties. See this post if you’re interested in the details. Here we’re just after pretty wallpaper. I’ll give Python code for creating the wallpaper.

Here I’ll take a particular elliptic function, sn(z), one of the Jacobi elliptic functions and somewhat analogous to the sine function, and use its phase portrait. Phase portraits use hue to encode the phase of a complex number, the θ value when a complex number is written in polar coordinates. The brightness of the color indicates the magnitude, the r value in polar coordinates.

Here’s the plot of sn(z, 0.2). (The sn function takes a parameter m that I arbitrarily chose as 0.2.) The plot shows two periods, horizontally and vertically. I included two periods so you could more easily see how it repeats. If you wanted to use this image as wallpaper, you could use 1/4 of the image, one period in each direction, to get by with a smaller image.

[Figure: phase portrait of sn(z, 0.2) − 0.2]

Here’s the Python code that was used to create the image.

    from mpmath import cplot, ellipfun, ellipk
    sn = ellipfun('sn')
    m = 0.2
    x = 4*ellipk(m) # horizontal period
    y = 2*ellipk(1-m) # vertical period
    cplot(lambda z: sn(z, m) - 0.2, [0, 2*x], [0, 2*y], points = 100000)

I subtracted 0.2 from sn just to shift the color a little. Adding a positive number shifts the color toward red. Subtracting a positive number shifts the color toward blue. You could also multiply by some constant to increase or decrease the brightness.
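For example, reusing sn, m, x, and y from the code above, these variations (a sketch; the constants are arbitrary) shift the hue and change the brightness:

    cplot(lambda z: sn(z, m) + 0.3, [0, 2*x], [0, 2*y], points = 100000)  # shift hue toward red
    cplot(lambda z: 1.5*sn(z, m), [0, 2*x], [0, 2*y], points = 100000)    # increase brightness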

You could also play around with other elliptic functions, described in the mpmath documentation here. And you can find more on cplot here. For example, you could supply your own function for how phase is mapped to color. The saturated colors used by default are good for mathematical visualization, but more subtle colors could be better for aesthetics.
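If you want to control how phase maps to color yourself, here is a minimal sketch that builds a phase portrait directly with NumPy and Matplotlib rather than going through cplot. The grid size, the muted saturation, and the brightness formula are arbitrary choices for illustration.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import hsv_to_rgb
    from mpmath import ellipfun, ellipk

    sn = ellipfun('sn')
    m = 0.2
    x, y = 4*float(ellipk(m)), 2*float(ellipk(1 - m))  # horizontal and vertical periods

    # Evaluate sn on a grid covering two periods in each direction.
    re = np.linspace(0, 2*x, 200)
    im = np.linspace(0, 2*y, 200)
    grid = re[None, :] + 1j*im[:, None]
    w = np.array([[complex(sn(complex(z), m)) for z in row] for row in grid])

    hue = (np.angle(w) / (2*np.pi)) % 1.0      # phase determines hue
    val = 1 - 1/(1 + np.abs(w)**0.3)           # magnitude determines brightness
    sat = np.full_like(hue, 0.6)               # muted saturation for softer colors
    rgb = hsv_to_rgb(np.dstack([hue, sat, val]))

    plt.imshow(rgb, origin='lower', extent=[0, 2*x, 0, 2*y])
    plt.axis('off')
    plt.show()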

If you make some interesting images, leave a comment with a link to your image and a description of how you made it.


Product of polygon diagonals

Suppose you have a regular pentagon inscribed in a unit circle, and connect one vertex to each of the other four vertices. Then the product of the lengths of these four lines is 5.

More generally, suppose you have a regular n-gon inscribed in a unit circle. Connect one vertex to each of the others and multiply the lengths of each of these line segments. Then the product is n.

I ran across this theorem recently thumbing through Mathematical Diamonds and had a flashback. This was a homework problem in a complex variables class I took in college. The fact that it was a complex variables class gives a big hint at the solution: Put one of the vertices at 1 and then the rest are nth roots of 1. Connect all the roots to 1 and use algebra to show that the product of the lengths is n. This will be much easier than a geometric proof.

Let ω = exp(2πi/n). Then the nth roots of 1 are powers of ω. The product of the diagonal lengths equals

|1 − ω| |1 − ω^2| |1 − ω^3| … |1 − ω^(n−1)|

You can change the absolute value signs to parentheses because the terms come in conjugate pairs. That is,

|1 − ω^k| |1 − ω^(n−k)| = (1 − ω^k)(1 − ω^(n−k)).

So the task is to prove

(1 − ω)(1 − ω^2)(1 − ω^3) … (1 − ω^(n−1)) = n.

Since

(z^n − 1) = (z − 1)(z^(n−1) + z^(n−2) + … + 1)

and z^n − 1 also factors as the product of (z − ω^k) over all n of the nth roots of 1, dividing both factorizations by (z − 1) shows that

(z^(n−1) + z^(n−2) + … + 1) = (z − ω)(z − ω^2) … (z − ω^(n−1)).

The result follows from evaluating the expression above at z = 1.
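Here’s a quick numerical check of the theorem, a sketch rather than a proof:

    from cmath import exp, pi

    def diagonal_product(n):
        # Product of distances from the vertex at 1 to the other n-1 vertices
        omega = exp(2j*pi/n)
        product = 1.0
        for k in range(1, n):
            product *= abs(1 - omega**k)
        return product

    for n in range(3, 9):
        print(n, diagonal_product(n))  # each product should be numerically close to n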


Anti-calculus proposition of Erdős

The “anti-calculus proposition” is a little result by Paul Erdős that contrasts functions of a real variable and functions of a complex variable.

A standard calculus result says the derivative of a function is zero where the function takes on its maximum. The anti-calculus proposition says that for analytic functions, the derivative cannot be zero at the maximum.

To be more precise, a differentiable real-valued function on a closed interval takes on its maximum where the derivative is zero or at one of the ends. It’s possible that the maximum occurs at one of the ends of the interval and the derivative is zero there.

The anti-calculus proposition says that the analogous situation cannot occur for functions of a complex variable. Suppose a function f is analytic on a closed disk and suppose that f is not constant. Then |f| must take on its maximum somewhere on the boundary by the maximum modulus theorem. Erdős’ anti-calculus proposition adds that at the point on the boundary where |f| takes on its maximum, the derivative cannot be zero.
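As a concrete numerical illustration (not part of Erdős’ argument), take f(z) = exp(z) on the unit disk. The maximum of |f| on the boundary occurs at z = 1, and f′(z) = exp(z), so the derivative there is e, not zero. A quick sketch:

    import numpy as np

    # f(z) = exp(z) on the unit disk; f'(z) = exp(z) as well.
    theta = np.linspace(0, 2*np.pi, 10000, endpoint=False)
    boundary = np.exp(1j*theta)           # points on the unit circle
    values = np.exp(boundary)

    k = np.argmax(np.abs(values))
    z_max = boundary[k]
    print("maximizer on the boundary:", z_max)           # approximately 1
    print("|f'| at the maximizer:", abs(np.exp(z_max)))  # approximately e, not 0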


Doubly periodic functions

Functions like sine and cosine are periodic. For example, sin(x + 2πn) = sin(x) for all x and any integer n, and so the period of sine is 2π. But what happens if you look at sine or cosine as functions of a complex variable? They’re still periodic if you shift left or right, but not if you shift up or down. If you move up or down, i.e. in a pure imaginary direction, sines and cosines become unbounded.

Doubly periodic functions are periodic in two directions. Formally, a function f(z) of a complex variable is doubly periodic if there exist two constants ω₁ and ω₂ such that

f(z) = f(z + ω₁) = f(z + ω₂)

for all z. The ratio ω₁ / ω₂ cannot be real; otherwise the two periods point in the same direction. For the rest of this post, I’ll assume ω₁ = 1 and ω₂ = i. Such functions are periodic in the horizontal (real-axis) and vertical (imaginary-axis) directions. They repeat everywhere their behavior on the unit square.

What do doubly periodic functions look like? It depends on what restrictions we place on the functions. When we’re working with complex functions, we’re typically interested in functions that are analytic, i.e. differentiable as complex functions.

Only constant functions can be doubly periodic and analytic everywhere. Why? A doubly periodic function takes on, over and over, the values it takes on the unit square. If the function is analytic on the (closed) unit square then it’s bounded there, and since it’s doubly periodic, it’s bounded everywhere. By Liouville’s theorem, the only functions that are bounded and analytic in the entire plane are constants.

This says that to find interesting doubly periodic functions, we’ll have to relax our requirements. Instead of requiring functions to be analytic everywhere, we will require them to be analytic except at isolated singularities. That is, the functions are allowed to blow up at a finite number of points. There’s a rich set of such functions, known as elliptic functions.

There are two well-known families of elliptic functions. One is the Weierstrass ℘ function (TeX symbol wp, Unicode U+2118) and its derivatives. The other consists of the Jacobi functions sn, cn, and dn. These have names resembling familiar trig functions because the Jacobi functions are in some ways analogous to trig functions.

It turns out that all elliptic functions can be written as combinations either of the Weierstrass function and its derivative or combinations of Jacobi functions. Roughly speaking, Weierstrass functions are easier to work with theoretically and Jacobi functions are easier to work with numerically.
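Since the Jacobi functions are easy to evaluate numerically, here is a minimal sketch using mpmath that checks the double periodicity of sn. (Note that sn’s periods are 4K(m) horizontally and 2iK(1−m) vertically rather than 1 and i; the parameter m = 0.2 and the test point are arbitrary choices.)

    from mpmath import ellipfun, ellipk, mpc, mp

    mp.dps = 30                      # work with 30 significant digits
    sn = ellipfun('sn')
    m = 0.2
    K, Kp = ellipk(m), ellipk(1 - m)

    z = mpc(0.7, 0.3)
    print(sn(z, m))                  # all three values should agree
    print(sn(z + 4*K, m))            # shift by the real period 4K(m)
    print(sn(z + 2j*Kp, m))          # shift by the imaginary period 2iK(1-m)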


A surprising theorem in complex variables

Here’s a strange theorem I ran across last week. I’ll state the theorem then give some footnotes.

Suppose f(z) and g(z) are two functions meromorphic in the plane. Suppose also that there are five distinct numbers a₁, …, a₅ such that the solution sets {z : f(z) = aᵢ} and {z : g(z) = aᵢ} are equal for each i. Then either f(z) and g(z) are equal everywhere or they are both constant.

Notes

A complex function of a complex variable is meromorphic if it is differentiable except at isolated singularities. The theorem above applies to functions that are (complex) differentiable in the entire plane except at isolated poles.

The theorem is due to Rolf Nevanlinna. There’s a whole branch of complex analysis based on Nevanlinna’s work, but I’d not heard of it until last week. I have no idea why the theorem is true. It doesn’t seem that it should be true; the hypothesis seems far too weak for such a strong conclusion. But that’s par for the course in complex variables.

Update: I edited this post in response to a reader comment to make the theorem statement clearer.


The gamma function

The gamma function Γ(x) is the most important function not on a calculator. It comes up constantly in math. In some areas, such as probability and statistics, you will see the gamma function more often than other functions that are on a typical calculator, such as trig functions.

The gamma function extends the factorial function to real numbers. Since factorial is only defined on non-negative integers, there are many ways you could extend it: any number of functions agree with factorial on the integers and disagree elsewhere. But everyone agrees that the gamma function is “the” way to extend factorial. Actually, the gamma function Γ(x) does not extend factorial, but Γ(x+1) does. Shifting the definition over by one makes some equations simpler, and that’s the definition that has caught on.
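As a quick sanity check, here’s a minimal sketch using the Python standard library showing that Γ(n+1) reproduces n! at the first few integers:

    from math import gamma, factorial

    for n in range(6):
        print(n, factorial(n), gamma(n + 1))  # the last two columns should match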

In a sense, Γ(x+1) is the unique way to generalize factorial. Harald Bohr and Johannes Mollerup proved that Γ is the only log-convex function on the positive reals satisfying f(1) = 1 and f(x+1) = x f(x), i.e. the only log-convex solution of the factorial recurrence. That’s somewhat satisfying, except why should we look for log-convex functions? Log-convexity is a very useful property to have, and a natural one for a function generalizing the factorial.

Here’s a plot of the logarithms of the first few factorial values.

The points nearly fall on a straight line, but if you look closely, the points bend upward slightly. If you have trouble seeing this, imagine a line connecting the first and last dot. This line would lie above the dots in between. This suggests that a function whose graph passes through all the dots should be convex.
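If you’d like to reproduce a plot like this, here’s a minimal sketch (assuming Matplotlib is available; the range of n is an arbitrary choice):

    import matplotlib.pyplot as plt
    from math import factorial, log

    ns = range(1, 11)
    plt.plot(list(ns), [log(factorial(n)) for n in ns], 'o')
    plt.xlabel("n")
    plt.ylabel("log n!")
    plt.show()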

Here’s a graph showing what the gamma function looks like over the real line.

The gamma function is finite except at zero and the negative integers, where it has poles. Near each pole the function blows up, going to +∞ on one side and −∞ on the other. The gamma function can also be uniquely extended to an analytic function on the complex plane, apart from these poles, which are its only singularities.
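Here’s a minimal sketch of such a graph using Matplotlib and the standard library’s math.gamma; the sampling step and the clipping threshold are arbitrary choices that keep the poles from dominating the picture:

    import matplotlib.pyplot as plt
    from math import gamma

    xs, ys = [], []
    x = -3.995
    while x < 4:
        try:
            y = gamma(x)
            if abs(y) < 10:        # clip values near the poles
                xs.append(x)
                ys.append(y)
        except ValueError:         # math.gamma raises at the poles themselves
            pass
        x += 0.01
    plt.plot(xs, ys, '.', markersize=2)
    plt.xlabel("x")
    plt.ylabel("gamma(x)")
    plt.show()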
