More fun with quatrefoils

In a comment to my previous post on quatrefoils, Jan Van lint suggested a different equation for quatrefoils:

r = a + |cos(2θ)|

Here are some examples of how these curves look for varying values of a.

As a increases, the curves get rounder. We can quantify this by looking at the angle between the tangents on either side of the cusps. By symmetry, we can pick any one of the four cusps, so we’ll work with the one at θ = π/4 for convenience.

The slopes of the tangent lines are the left and right derivatives

\frac{dy}{dx} = \frac{r'(\theta)\sin\theta + r(\theta)\cos\theta}{r'(\theta)\cos\theta - r(\theta)\sin\theta}

Now the derivative of

a + |cos(2θ)|

with respect to θ at θ = π/4 is 2 from one side and -2 from the other.

Since sine and cosine are equal at π/4, they cancel out in the ratio above, and so the two derivatives, i.e. the slopes of the two tangent lines, are (2+a)/(2-a) and (2-a)/(2+a). The slopes are reciprocals of each other, which is what we’d expect since the quatrefoils are symmetric about the line θ = π/4.

The angles of the two tangent lines are the inverse tangents of the slopes, and so the angle between the two tangent lines is

\arctan\left(\frac{2+a}{2-a}\right) - \arctan\left(\frac{2-a}{2+a}\right)

Note that as a goes to zero, so does the angle between the tangent lines.

Here’s a plot of the angle as a function of a.

You could start with a desired angle and solve the equation above numerically for the value of a that gives the angle. From the graph above, it looks like if we wanted the curves to intersect at 90° we should pick a around 2. In fact, we should pick a exactly equal to 2. There the slopes are (2+2)/(2-2) = ∞ and (2-2)/(2+2) = 0, i.e. one tangent line is perfectly vertical and the other is perfectly horizontal.
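
For example, here’s a minimal Python sketch (my own, not part of the original discussion) that uses a root finder on the arctangent expression above to solve for a given a target angle. The 60° target is just an illustration.

    from numpy import arctan, degrees
    from scipy.optimize import brentq

    def cusp_angle(a):
        # angle between the two tangent lines at the cusp, in radians
        return arctan((2 + a)/(2 - a)) - arctan((2 - a)/(2 + a))

    target = 60  # desired angle in degrees (hypothetical example)
    a = brentq(lambda a: degrees(cusp_angle(a)) - target, 1e-6, 2 - 1e-6)
    print(a)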

The word problem

Most people have heard of word problems, but not as many have heard of the word problem. If you’re imagining that the word problem is some superlatively awful word problem, I can assure you it’s not. It’s both simpler and weirder than that.

The word problem is essentially about whether you can always apply algebraic rules in an automated way. The reason it is called the word problem is that you start with a description of your algebraic system in terms of symbols (“letters”) and concatenations of symbols (“words”) subject to certain rules, also called relations.

The word problem for groups

For example, you can describe a group by saying it contains a and b, and it satisfies the relations

a² = b²

and

a⁻¹ba = b⁻¹.

A couple things are implicit here. We’ve said this is a group, and since every element in a group has an inverse, we’ve implied that a⁻¹ and b⁻¹ are in the group as well. Also from the definition of a group comes the assumption that multiplication is associative, that there’s an identity element, and that inverses work like they’re supposed to.

In the example above, you could derive everything about the group from the information given. In particular, someone could give you two words—strings made up of a, b, a⁻¹, and b⁻¹—and you could determine whether they are equal by applying the rules. But in general, this is not possible for groups.

Undecidable

The bad news is that in general this isn’t possible. In computer science terminology, the word problem is undecidable. There is no algorithm that can tell whether two words are equal given a list of relations, at least not in general. There are special cases where the word problem is solvable, but a general algorithm is not possible.

The word problem for semigroups

I presented the word problem above in the context of groups, but you could look at the word problem in more general contexts, such as semigroups. A semigroup is closed under some associative binary operation, and that’s it. There need not be any inverses or even an identity element.

Here’s a concrete example of a semigroup whose word problem has been proven to be undecidable. As before we have two symbols, a and b. And because we are in a semigroup, not a group, there are no inverses. Our semigroup consists of all finite sequences made out of a’s and b’s, subject to these five relations.

aba²b² = b²a²ba

a²bab²a = b²a³ba

aba³b² = ab²aba²

b³a²b²a²ba = b³a²b²a⁴

a⁴b²a²ba = b²a⁴

Source: Term Rewriting and All That
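
To make the rewriting idea concrete, here is a naive Python sketch (my own illustration, not from the book) that encodes the five relations as string rewrites and does a bounded breadth-first search for a chain of rewrites connecting two words. Because the word problem for this semigroup is undecidable, no finite search budget can decide equality in general; a negative result here is merely inconclusive.

    from collections import deque

    # The five relations above, written as strings over {a, b}.
    rules = [
        ("abaabb", "bbaaba"),
        ("aababba", "bbaaaba"),
        ("abaaabb", "abbabaa"),
        ("bbbaabbaaba", "bbbaabbaaaa"),
        ("aaaabbaaba", "bbaaaa"),
    ]

    def neighbors(word):
        # All words obtained by applying one relation, in either direction,
        # at some position in the word.
        for lhs, rhs in rules:
            for u, v in ((lhs, rhs), (rhs, lhs)):
                i = word.find(u)
                while i != -1:
                    yield word[:i] + v + word[i + len(u):]
                    i = word.find(u, i + 1)

    def maybe_equal(w1, w2, budget=100000):
        # Bounded BFS: True means provably equal; False only means
        # "no rewrite chain found within the budget," which proves nothing.
        seen, queue = {w1}, deque([w1])
        while queue and len(seen) < budget:
            w = queue.popleft()
            if w == w2:
                return True
            for n in neighbors(w):
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
        return False

    print(maybe_equal("abaabb", "bbaaba"))  # True, by the first relation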

Experience

When I first saw groups presented this way, as symbols and relations, I got my hopes up that a large swath of group theory could be automated. A few minutes later my naive hopes were dashed. So in my mind I thought “Well, then this is hopeless.”

But that is not true. Sometimes the word problem is solvable. It’s like many other impossibility theorems. There’s no fifth degree analog of the quadratic formula in general, but there are fifth degree polynomials whose roots can be found in closed form. There’s no program that can tell whether any arbitrary program will halt, but that doesn’t mean you can’t tell whether some programs halt.

It didn’t occur to me at the time that it would be worthwhile to explore the boundaries, learning which word problems can or cannot be solved. It also didn’t occur to me that I would run into things like the word problem in practical applications, such as simplifying symbolic expressions and optimizing their evaluation. Undecidable problems lurk everywhere, but you can often step around them.

Real-time analytics

There’s an ancient saying “Whom the gods would destroy they first make mad.” (Mad as in crazy, not mad as in angry.) I wrote a variation of this on Twitter:

Whom the gods would destroy, they first give real-time analytics.

Having more up-to-date information is only valuable up to a point. Past that point, you’re more likely to be distracted by noise. The closer you look at anything, the more irregularities you see, and the more likely you are to over-steer [1].

I don’t mean to imply that the noise isn’t real. (More on that here.) But there’s a temptation to pay more attention to the small variations you don’t understand than the larger trends you believe you do understand.

I became aware of this effect when simulating Bayesian clinical trial designs. The more often you check your stopping rule, the more often you will stop [2]. You want to monitor a trial often enough to shut it down, or at least pause it, if things change for the worse. But monitoring too often can cause you to stop when you don’t want to.
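
Here’s a toy simulation, not of the Bayesian designs I worked on but of the general phenomenon: monitor a null (no-effect) data stream with a naive z-test threshold, checking at different frequencies. The more often you look, the more often you cross the boundary by chance.

    import numpy as np

    rng = np.random.default_rng(20230101)  # arbitrary seed

    def stopped(check_every, n=1000):
        # Cumulative sums of pure noise, monitored every `check_every` steps;
        # stop if the z statistic ever exceeds 1.96 in absolute value.
        s = np.cumsum(rng.standard_normal(n))
        for k in range(check_every, n + 1, check_every):
            if abs(s[k - 1]) / np.sqrt(k) > 1.96:
                return True
        return False

    # Fraction of simulated "trials" that stop, for various check frequencies.
    for check_every in (1000, 100, 10, 1):
        rate = np.mean([stopped(check_every) for _ in range(2000)])
        print(check_every, rate)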

Flatter than glass

A long time ago I wrote about the graph below.

The graph looks awfully jagged, until you look at the vertical scale. The curve represents the numerical difference between two functions that are exactly equal in theory. As I explain in that post, the curve is literally smoother than glass, and certainly flatter than a pancake.

Notes

[1] See The Logic of Failure for a discussion of how over-steering is a common factor in disasters such as the Chernobyl nuclear failure.

[2] Bayesians are loath to talk about things like α-spending, but when you’re looking at stopping frequencies, frequentist phenomena pop up.

Naive modeling

Travel agent

In his book The Algorithm Design Manual, Steven Skiena has several sections called “War Stories” where he talks about his experience designing algorithms for clients.

Here’s an excerpt of a story about finding the best airline ticket prices.

“Look,” I said at the start of the first meeting. “This can’t be so hard. Consider a graph … The path/fare can be found with Dijkstra’s shortest path algorithm. Problem solved!” I announced, waving my hand with a flourish.

The assembled cast of the meeting nodded thoughtfully, then burst out laughing.

Skiena had greatly underestimated the complexity of the problem, but he learned, and was able to deliver a useful solution.

This reminds me of a story about a calculus professor who wrote a letter to a company that sold canned food explaining how they could use less metal for the same volume by changing the dimensions of their can. Someone wrote back thanking him for his suggestion and listing reasons why the optimization problem was far more complicated than he had imagined. If anybody has a link to that story, please let me know.
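
For what it’s worth, the textbook version of that optimization: minimize the surface area A = 2πr² + 2πrh of a cylindrical can holding a fixed volume V = πr²h. Substituting h = V/πr² and setting the derivative to zero gives

\begin{align*} A(r) &= 2\pi r^2 + \frac{2V}{r} \\ A'(r) &= 4\pi r - \frac{2V}{r^2} = 0 \implies V = 2\pi r^3 \implies h = \frac{V}{\pi r^2} = 2r \end{align*}

That is, the ideal can is exactly as tall as it is wide, which real cans mostly are not.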

Related post: Bring out your equations!

Opening Windows files from bash and eshell

I often work in a sort of amphibious environment, using Unix software on Windows. As you can well imagine, this causes headaches. But I’ve found such headaches are generally more manageable than the headaches from alternatives I’ve tried.

On the Windows command line, you can type the name of a file and Windows will open the file with the default application associated with its file extension. For example, typing foo.docx and pressing Enter will open the file by that name using Microsoft Word, assuming that is your default application for .docx files.

Unix shells don’t work that way. The first thing you type at the command prompt must be a command, and foo.docx is not a command. The Windows command line generally works this way too, but it makes an exception for files with recognized extensions; the command is inferred from the extension and the file name is an argument to that command.

WSL bash

When you’re running bash on Windows, via WSL (Windows Subsystem for Linux), you can run the Windows utility start which will open a file according to its extension. For example,

    cmd.exe /C start foo.pdf

will open the file foo.pdf with your default PDF viewer.

You can also use start to launch applications without opening a particular file. For example, you could launch Word from bash with

    cmd.exe /C start winword.exe

Emacs eshell

Eshell is a shell written in Emacs Lisp. If you’re running Windows and you do not have access to WSL but you do have Emacs, you can run eshell inside Emacs for a Unix-like environment.

If you try running

    start foo.pdf

that will probably not work because eshell does not use the Windows PATH environment variable.

I got around this by creating a Windows batch file named mystart.bat and putting it in my path. The batch file simply calls start with its argument:

    start %1

Now I can open foo.pdf from eshell with

    mystart foo.pdf

The solution above for bash

    cmd.exe /C start foo.pdf

also works from eshell.

(I just realized I said two contradictory things: that eshell does not use your path, and that it found a batch file in my path. I don’t know why the latter works. I keep my batch files in c:/bin, which is a Unix-like location, and maybe eshell looks there, not because it’s in my Windows path, but because it’s in what it would expect to be my path based on Unix conventions. I’ve searched the eshell documentation, and I don’t see how to tell what it uses for a path.)

More shell posts

Generating all primitive Pythagorean triples with linear algebra

A Pythagorean triple is a set of positive integers that can be the lengths of sides of a right triangle, i.e. numbers a, b, and c such that

a² + b² = c².

A primitive Pythagorean triple (PPT) is a Pythagorean triple whose elements are relatively prime. For example, (50, 120, 130) is a Pythagorean triple, but it’s not primitive because all the entries are divisible by 10. But (5, 12, 13) is a primitive Pythagorean triple.

A method of generating all PPTs has been known since the time of Euclid, but I recently ran across a different approach to generating all PPTs [1].

Let’s standardize things a little by assuming our triples have the form (a, b, c) where a is odd, b is even, and c is the hypotenuse [2]. In every PPT one of the sides is even and one is odd, so we will assume the odd side is listed first.

It turns out that all PPTs can be found by multiplying the column vector [3, 4, 5] repeatedly by the matrices M₀, M₁, or M₂. In [1], Romik uses the sequence of matrix multiplications needed to create a PPT as a trinary number associated with the PPT.

The three matrices are given as follows.

\begin{align*} M_0 &= \begin{bmatrix} \phantom{-}1 & \phantom{-}2 & \phantom{-}2 \\ \phantom{-}2 & \phantom{-}1 & \phantom{-}2 \\ \phantom{-}2 & \phantom{-}2 & \phantom{-}3 \end{bmatrix} \\ M_1 &= \begin{bmatrix} -1 & \phantom{-}2 & \phantom{-}2 \\ -2 & \phantom{-}1 & \phantom{-}2 \\ -2 & \phantom{-}2 & \phantom{-}3 \end{bmatrix} \\ M_2 &= \begin{bmatrix} \phantom{-}1 & -2 & \phantom{-}2 \\ \phantom{-}2 & -1 & \phantom{-}2 \\ \phantom{-}2 & -2 & \phantom{-}3 \end{bmatrix} \end{align*}

Note that all three matrices have the same entries, though with different signs. If you number the columns starting at 1 (as mathematicians commonly do and computer scientists may not) then Mₖ is the matrix whose kth column is negative. There is no 0th column, so M₀ is the matrix with no negative columns. The numbering I’ve used here differs from that used in [1].

For example, the primitive Pythagorean triple [5, 12, 13] is formed by multiplying [3, 4, 5] on the left by M₂. The PPT [117, 44, 125] is formed by multiplying [3, 4, 5] by M₁ M₁ M₂.

\begin{bmatrix} 117 \\ 44 \\ 125 \end{bmatrix} = M_1 M_1 M_2 \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}
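
Here’s a quick numerical check of the example, along with a couple of levels of the ternary tree, using NumPy (a sketch of my own, not Romik’s code):

    import numpy as np

    M0 = np.array([[ 1,  2, 2], [ 2,  1, 2], [ 2,  2, 3]])
    M1 = np.array([[-1,  2, 2], [-2,  1, 2], [-2,  2, 3]])
    M2 = np.array([[ 1, -2, 2], [ 2, -1, 2], [ 2, -2, 3]])

    v = np.array([3, 4, 5])
    print(M2 @ v)            # (5, 12, 13)
    print(M1 @ M1 @ M2 @ v)  # (117, 44, 125)

    # Enumerate the first two levels of the tree of PPTs under (3, 4, 5).
    level = [v]
    for _ in range(2):
        level = [M @ t for M in (M0, M1, M2) for t in level]
        print([tuple(int(x) for x in t) for t in level])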

More Pythagorean posts

[1] The dynamics of Pythagorean triples by Dan Romik

[2] Either a is odd and b is even or vice versa, so we let a be the odd one.

If a and b were both even, c would be even, and the triple would not be primitive. If a and b were both odd, c² would be divisible by 2 but not by 4, and so it couldn’t be a square.

Playing around with a rational rose

A “rose” in mathematics is typically a curve with polar equation

r = cos(kθ)

where k is a positive integer. If k is odd, the resulting graph has k “petals” and if k is even, the plot has 2k petals.

Sometimes the term rose is generalized to the case of non-integer k. This is the sense in which I’m using the phrase “rational rose.” I’m not referring to an awful piece of software by that name [1]. This post will look at a particular rose with k = 2/3.

My previous post looked at

r = cos(2θ/3)

and gave the plot below.

Plot of r = abs(cos(2 theta / 3))

Unlike the case where k is an integer, the petals overlap.

In this post I’d like to look at two things:

  1. The curvature in the figure above, and
  2. Differences between polar plots in Python and Mathematica

Curvature

The graph above has radius 1 since cosine ranges from -1 to 1. The curve is made of arcs that are approximately circular, with the radius of these approximating circles being roughly 1/2, sometimes bigger and sometimes smaller. So we would expect the curvature to oscillate roughly around 2. (The curvature of a circle of radius r is 1/r.)

The curvature can be computed in Mathematica as follows.

    x[t_] := Cos[2 t/3] Cos[t]  (* Cartesian form of r = cos(2θ/3) *)
    y[t_] := Cos[2 t/3] Sin[t]
    numerator = D[x[t], {t, 1}] D[y[t], {t, 2}] - 
                D[x[t], {t, 2}] D[y[t], {t, 1}]
    denominator = (D[x[t], t]^2 + D[y[t], t]^2)^(3/2)
    Simplify[numerator / denominator]

This produces

\kappa = \frac{3 \sqrt{2} \left(5 \cos \left(\dfrac{4 \theta}{3}\right)+21\right)}{\left(5 \cos \left(\dfrac{4 \theta}{3}\right)+13\right)^{3/2}}

A plot shows that the curvature does indeed oscillate roughly around 2.

The minimum curvature is 13/9, which the curve takes on at polar coordinate (1, 0), as well as at other points. That means that the curve starts out like a circle of radius 9/13 ≈ 0.7.

The maximum curvature is 3 and occurs at the origin. There the curve is approximately a circle of radius 1/3.
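
Here’s a quick numerical spot check of those two claims, evaluating the curvature formula above in Python (a sketch; the curve first passes through the origin at θ = 3π/4):

    import numpy as np

    def kappa(theta):
        c = 5*np.cos(4*theta/3)
        return 3*np.sqrt(2)*(c + 21) / (c + 13)**1.5

    print(kappa(0))          # 1.444... = 13/9, the minimum
    print(kappa(3*np.pi/4))  # 3, the maximum, attained at the origin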

Matplotlib vs Mathematica

To make the plot we’ve been focusing on, I plotted

r = cos(2θ/3)

in Mathematica, but in matplotlib I had to plot

r = |cos(2θ/3)|.

In both cases, θ runs from 0 to 8π. To highlight the differences in the way the two applications make polar plots, let’s plot over 0 to 2π with both.

Mathematica produces what you might expect.

    PolarPlot[Cos[2 t/3], {t, 0, 2 Pi}]

Mathematica plot of Cos[2 theta/3]

Matplotlib produces something very different. It handles negative r values by moving the point r = 0 to a circle in the middle of the plot. Notice the r-axis labels at about 22° running from -1 to 1.

    from numpy import linspace, cos, pi
    import matplotlib.pyplot as plt
    theta = linspace(0, 2*pi, 1000)
    plt.polar(theta, cos(2*theta/3))

Python plot of cos( 2 theta / 3 )

Note also that in Mathematica, the first argument to PolarPlot is r(θ) and the second is the limits on θ. In matplotlib, the first argument is θ and the second argument is r(θ).

Note that in this particular example, taking the absolute value of the function being plotted was enough to make matplotlib act like I expected. That’s only true when plotting over the entire range 0 to 8π. In general you have to do more work than this. If we insert absolute value in the plot above, still plotting from 0 to 2π, we do not reproduce the Mathematica plot.

    plt.polar(theta, abs(cos(2*theta/3)))

More polar coordinate posts

[1] Rational Rose was horribly buggy when I used it in the 1990s. Maybe it’s not so buggy now. But I imagine I still wouldn’t like the UML-laden style of software development it was built around.

Quatrefoils

I was reading The 99% Invisible City this evening, and there was a section on quatrefoils. Here’s an example of a quatrefoil from Wikipedia.

quatrefoil

There’s no single shape known as a quatrefoil. It’s a family of shapes that look something like the figure above.

I wondered how you might write a fairly simple mathematical equation to draw a quatrefoil. Some quatrefoils are just squares with semicircles glued on their edges. That’s no fun.

Here’s a polar equation I came up with that looks like a quatrefoil, if you ignore the interior lines.

This is the plot of r = cos(2θ/3).

Plot of r = abs(cos(2 theta / 3))
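
If you’d like to reproduce the plot, here’s a short matplotlib sketch. (As discussed in the rational rose post above, matplotlib needs the absolute value and a θ range of 0 to 8π to produce this figure.)

    from numpy import linspace, cos, pi
    import matplotlib.pyplot as plt

    theta = linspace(0, 8*pi, 2000)
    plt.polar(theta, abs(cos(2*theta/3)))
    plt.show()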

Update: Based on a suggestion in the comments, I’ve written another post on quatrefoils using an equation that has a parameter to control the shape.

Kronecker sum

I’m working on a project these days where I’ve used four different kinds of matrix product, which made me wonder if there’s another kind of product out there that I could find some use for.

In the process of looking around for other matrix products, I ran across the Kronecker sum. I’ve seen Kronecker products many times, but I’d never heard of Kronecker sums.

The Kronecker sum is defined in terms of the Kronecker product, so if you’re not familiar with the latter, you can find a definition and examples here. Essentially, you multiply each scalar element of the first matrix by the second matrix as a block matrix.

The Kronecker product of an m × n matrix A and a p × q matrix B is an mp × nq matrix K = A ⊗ B. You could think of K as an m × n matrix whose entries are p × q blocks.

So, what is the Kronecker sum? It is defined for two square matrices, an n × n matrix A and an m × m matrix B. The sizes of the two matrices need not match, but the matrices do need to be square.  The Kronecker sum of A and B is

A ⊕ B = A ⊗ Iₘ + Iₙ ⊗ B

where Iₘ and Iₙ are identity matrices of size m and n respectively.

Does this make sense dimensionally? The left side of the (ordinary) matrix addition is nm × nm, and so is the right side, so the addition makes sense.

However, the Kronecker sum is not commutative, and usually things called “sums” are commutative. Products are not always commutative, but it goes against convention to call a non-commutative operation a sum. Still, the Kronecker sum is kinda like a sum, so it’s not a bad name.

I don’t have any application in mind (yet) for the Kronecker sum, but presumably it was defined for a good reason, and maybe I’ll run across an application, maybe even on the project alluded to at the beginning.

There are several identities involving Kronecker sums, and here’s one I found interesting:

exp( A ) ⊗ exp( B ) = exp( A ⊕ B ).

If you haven’t seen the exponential of a matrix before, basically you stick your matrix into the power series for the exponential function.
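
That is, the matrix exponential is defined by the same power series as the scalar exponential:

\exp(A) = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots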

Examples

First, let’s define a couple matrices A and B.

\begin{align*} A &= \left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right) \\ B &= \left( \begin{array}{ccc} 1 & 0 & 1 \\ 1 & 2 & 0 \\ 2 & 0 & 3 \\ \end{array} \right) \end{align*}

We can compute the Kronecker sums

S = A ⊕ B

and

T = B ⊕ A

with Mathematica to show they are different.

    A = {{1, 2}, {3, 4}}
    B = {{1, 0, 1}, {1, 2, 0}, {2, 0, 3}}
    S = KroneckerProduct[A, IdentityMatrix[3]] + 
        KroneckerProduct[IdentityMatrix[2], B]
    T = KroneckerProduct[B, IdentityMatrix[2]] + 
        KroneckerProduct[IdentityMatrix[3], A]

This shows

\begin{align*} A \oplus B &= \left( \begin{array}{cccccc} 2 & 0 & 1 & 2 & 0 & 0 \\ 1 & 3 & 0 & 0 & 2 & 0 \\ 2 & 0 & 4 & 0 & 0 & 2 \\ 3 & 0 & 0 & 5 & 0 & 1 \\ 0 & 3 & 0 & 1 & 6 & 0 \\ 0 & 0 & 3 & 2 & 0 & 7 \\ \end{array} \right) \\ B \oplus A &= \left( \begin{array}{cccccc} 2 & 2 & 0 & 0 & 1 & 0 \\ 3 & 5 & 0 & 0 & 0 & 1 \\ 1 & 0 & 3 & 2 & 0 & 0 \\ 0 & 1 & 3 & 6 & 0 & 0 \\ 2 & 0 & 0 & 0 & 4 & 2 \\ 0 & 2 & 0 & 0 & 3 & 7 \\ \end{array} \right) \end{align*}

and so the two matrices are not equal.

We can compute the matrix exponentials of A and B with the Mathematica function MatrixExp to see that

\begin{align*} \exp(A) &= \left( \begin{array}{cc} 2.71828 & 7.38906 \\ 20.0855 & 54.5982 \\ \end{array} \right) \\ \exp(B) &= \left( \begin{array}{ccc} 2.71828 & 1. & 2.71828 \\ 2.71828 & 7.38906 & 1. \\ 7.38906 & 1. & 20.0855 \\ \end{array} \right) \end{align*}

(I actually used MatrixExp[N[A]] and similarly for B so Mathematica would compute the exponentials numerically rather than symbolically. The latter takes forever and it’s hard to read the result.)

Now we have

\begin{align*} \exp(A) \otimes \exp(B) &= \left( \begin{array}{cccccc} 512.255 & 0. & 606.948 & 736.673 & 0. & 872.852 \\ 361.881 & 384.002 & 245.067 & 520.421 & 552.233 & 352.431 \\ 1213.9 & 0. & 1726.15 & 1745.7 & 0. & 2482.38 \\ 1105.01 & 0. & 1309.28 & 1617.26 & 0. & 1916.22 \\ 780.631 & 828.349 & 528.646 & 1142.51 & 1212.35 & 773.713 \\ 2618.55 & 0. & 3723.56 & 3832.45 & 0. & 5449.71 \\ \end{array} \right) \\ \exp(A \oplus B) &= \left( \begin{array}{cccccc} 512.255 & 0. & 606.948 & 736.673 & 0. & 872.852 \\ 361.881 & 384.002 & 245.067 & 520.421 & 552.233 & 352.431 \\ 1213.9 & 0. & 1726.15 & 1745.7 & 0. & 2482.38 \\ 1105.01 & 0. & 1309.28 & 1617.26 & 0. & 1916.22 \\ 780.631 & 828.349 & 528.646 & 1142.51 & 1212.35 & 773.713 \\ 2618.55 & 0. & 3723.56 & 3832.45 & 0. & 5449.71 \\ \end{array} \right) \end{align*}

and so the two matrices are equal.
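
The same check can be done in Python with NumPy and SciPy; here’s a minimal sketch (scipy.linalg.expm computes the matrix exponential numerically):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1, 2], [3, 4]], dtype=float)
    B = np.array([[1, 0, 1], [1, 2, 0], [2, 0, 3]], dtype=float)

    # Kronecker sum: A ⊕ B = A ⊗ I_3 + I_2 ⊗ B
    ksum = np.kron(A, np.eye(3)) + np.kron(np.eye(2), B)

    # Check the identity exp(A) ⊗ exp(B) = exp(A ⊕ B)
    print(np.allclose(np.kron(expm(A), expm(B)), expm(ksum)))  # True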

Even though the identity

exp( A ) ⊗ exp( B ) = exp( A ⊕ B )

may look symmetrical, it’s not. The matrices on the left do not commute in general. And not only are A ⊕ B and B ⊕ A different in general, their exponentials are also different. For example

\exp(B\oplus A) = \left( \begin{array}{cccccc} 512.255 & 736.673 & 0. & 0. & 606.948 & 872.852 \\ 1105.01 & 1617.26 & 0. & 0. & 1309.28 & 1916.22 \\ 361.881 & 520.421 & 384.002 & 552.233 & 245.067 & 352.431 \\ 780.631 & 1142.51 & 828.349 & 1212.35 & 528.646 & 773.713 \\ 1213.9 & 1745.7 & 0. & 0. & 1726.15 & 2482.38 \\ 2618.55 & 3832.45 & 0. & 0. & 3723.56 & 5449.71 \\ \end{array} \right)

Related posts

Nonlinear mod 5

This post is a follow-on to the previous post on perfectly nonlinear functions. In that post we defined a way to measure the degree of nonlinearity of a function between two Abelian groups. We looked at functions that take sequences of four bits to a single bit. In formal terms, our groups were GF(2⁴) and GF(2).

In this post we’ll start out by looking at integers mod 5 to show how the content of the previous post takes a different form in different groups. Sometimes the simplicity of working in binary makes things harder to see. For example, we noted in the previous post that the distinction between addition and subtraction didn’t matter. It matters in general!

Let B be the group of integers mod 5 and A the group of pairs of integers mod 5. That is, A is the direct product B × B. We will compute the uniformity of several functions from A to B. (Recall that less uniformity means more nonlinearity.) Then we’ll conclude by looking at what happens if we work modulo other integers.

Python code

Here’s the code we’ll use to compute uniformity.

    from itertools import product
    from numpy import array

    m = 5
    r = range(m)

    def deriv(f, x, a):
        return (f(x + a) - f(x))%m

    def delta(f, a, b):
        # the set of inputs x whose derivative in direction a equals b
        return {x for x in product(r, r) if all(deriv(f, array(x), a) == b)}

    def uniformity(f):
        # largest size of a delta set over all nonzero a and all b
        u = 0
        for a in product(r, r):
            if a == (0, 0):
                continue
            for b in product(r, r):
                t = len(delta(f, array(a), array(b)))
                if t > u:
                    u = t
        return u

We didn’t call attention to it last time, but the function f(x + a) – f(x) is called a derivative of f, and that’s why we named the function above deriv.

Here are a few functions whose uniformity we’d like to compute.

    # (u, v) -> u^2 + v^2 
    def F2(x): return sum(x**2)%m

    # (u, v) -> u^3 + v^3 
    def F3(x): return sum(x**3)%m

    # (u, v) -> u^2 + v
    def G(x):  return (x[0]**2 + x[1])%m

    # (u, v) -> uv
    def H(x):  return (x[0]*x[1])%m

Now let’s look at their uniformity values.

    print( uniformity(F2) )
    print( uniformity(F3) )
    print( uniformity(G) )
    print( uniformity(H) )

Results mod 5

This prints out 5, 10, 25, and 5. This says that the functions F2 and H are the most nonlinear, in fact “perfectly nonlinear.” The function G, although nonlinear, has the same uniformity as a linear function.

The function F3 may be the most interesting, having intermediate uniformity. In some sense the sum of cubes is closer to being a linear function than the sum of squares is.

Other moduli

We can easily look at other groups by simply changing the value of m at the top of the code.

If we set the modulus to 7, we get analogous results. The uniformities of the four functions are 7, 14, 49, and 7. They’re ranked exactly as they were when the modulus was 5.

But that’s not always the case for other moduli as you can see in the table below. Working mod 8, for example, the sum of squares is more uniform than the sum of cubes, the opposite of the case mod 5 or mod 7.

    |----+-----+-----+-----+----|
    |  m |  F2 |  F3 |   G |  H |
    |----+-----+-----+-----+----|
    |  2 |   4 |   4 |   4 |  2 |
    |  3 |   3 |   9 |   9 |  3 |
    |  4 |  16 |   8 |  16 |  8 |
    |  5 |   5 |  10 |  25 |  5 |
    |  6 |  36 |  36 |  36 | 18 |
    |  7 |   7 |  14 |  49 |  7 |
    |  8 |  64 |  32 |  64 | 32 |
    |  9 |  27 |  81 |  81 | 27 |
    | 10 | 100 | 100 | 100 | 50 |
    | 11 |  11 |  22 | 121 | 11 |
    | 12 | 144 | 144 | 144 | 72 |
    |----+-----+-----+-----+----|

In every case, G is the most uniform function and H is the least. Also, G is strictly more uniform than H. But there are many different ways F2 and F3 can fit between G and H.
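
For completeness, here’s a sketch of how the table above could be regenerated: the code from earlier, wrapped in a function so that the modulus is a parameter rather than a global. (It’s unoptimized, so the larger moduli take a while.)

    from itertools import product
    from numpy import array

    def uniformities(m):
        r = range(m)

        def deriv(f, x, a):
            return (f(x + a) - f(x)) % m

        def delta(f, a, b):
            return {x for x in product(r, r) if all(deriv(f, array(x), a) == b)}

        def uniformity(f):
            u = 0
            for a in product(r, r):
                if a == (0, 0):
                    continue
                for b in product(r, r):
                    u = max(u, len(delta(f, array(a), array(b))))
            return u

        F2 = lambda x: sum(x**2) % m          # (u, v) -> u^2 + v^2
        F3 = lambda x: sum(x**3) % m          # (u, v) -> u^3 + v^3
        G  = lambda x: (x[0]**2 + x[1]) % m   # (u, v) -> u^2 + v
        H  = lambda x: (x[0]*x[1]) % m        # (u, v) -> u*v

        return [uniformity(f) for f in (F2, F3, G, H)]

    for m in range(2, 13):
        print(m, uniformities(m))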