Sphere spilling out

Center a small blue sphere on every corner of an n-dimensional unit hypercube. These are the points in ℝⁿ for which every coordinate is either a 0 or a 1. Now inflate each of these small spheres at the same time until they touch. Each sphere will have radius 1/2. For example, the spheres centered at

(0, 0, 0, …, 0)

and

(0, 0, 0, …, 1)

can’t have a radius larger than 1/2 because they’ll intersect at (0, 0, 0, …, 1/2).

Now center a little red sphere at the center of the hypercube, the point where every coordinate equals 1/2, and blow it up until it touches the blue spheres. Which is larger, the blue spheres or the red one?

The answer depends on n. In low dimensions, the blue spheres are larger. But in higher dimensions, the red sphere is larger.

The distance from the center of the hypercube to any of the vertices is √n / 2, so the radius of the red sphere is this distance minus the radius 1/2 of a blue sphere, namely (√n − 1) / 2. When n = 4, the radius of the red sphere equals the radius of the blue spheres.

The hypercube and blue spheres will fit inside a concentric hypercube whose side is length 2, since each blue sphere extends only 1/2 beyond the unit hypercube in any direction. But the red sphere will spill out of this larger hypercube when its radius (√n − 1)/2 exceeds 1, that is, when n > 9.
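A couple of lines of Mathematica make the cutoff easy to check. (A sketch; the function name redRadius is mine.)

redRadius[n_] := (Sqrt[n] - 1)/2
Table[{n, N[redRadius[n]], redRadius[n] > 1}, {n, {4, 9, 10, 100}}]
(* True in the last column means the red sphere pokes outside the hypercube of side 2 *)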

This post was motivated by a post by Aryeh Kontorovich on Twitter.

More on high-dimensional spheres

A variation on Rock, Paper, Scissors

Imagine a game of Rock, Paper, Scissors in which one player is free to play as usual but the other is required to choose each option the same number of times. That is, in 3n rounds of the game, the disadvantaged player must choose Rock n times, Paper n times, and Scissors n times.

Obviously the unrestricted player would be expected to win more games, but how many more? At least one, because the unrestricted player knows what the restricted player will do on the last round.

If n is large, then in the early rounds of the game it makes little difference that one of the players is restricted: the restriction doesn’t give the unrestricted player much exploitable information. But in the later rounds of the game, the limitations on the restricted player’s moves increase the other player’s chances of winning.

It turns out that if the unrestricted player uses an optimal strategy, he can expect to win O(√n) more rounds than he loses. More precisely, the expected advantage approaches 1.4658√n as n grows. You can find a thorough analysis of the problem in this paper.
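Here is a minimal Monte Carlo sketch in Mathematica. It does not implement the optimal strategy from the paper: the restricted player simply plays a uniformly random permutation of his 3n moves, and the unrestricted player plays whatever beats the opponent’s most plentiful remaining option. Even this crude exploit produces an advantage that grows roughly like √n.

beats = <|1 -> 2, 2 -> 3, 3 -> 1|>; (* 1 = Rock, 2 = Paper, 3 = Scissors; beats[m] is the move that beats m *)
simulate[n_] := Module[{deck, remaining = {n, n, n}, net = 0, play},
  deck = RandomSample[Flatten[Table[{1, 2, 3}, {n}]]];
  Do[
    play = beats[Ordering[remaining, -1][[1]]]; (* beat the most plentiful remaining move *)
    If[play == beats[move], net++];
    If[move == beats[play], net--];
    remaining[[move]]--,
    {move, deck}];
  net]
Mean[Table[simulate[100], {1000}]] // N (* average net wins for the unrestricted player *)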

Related posts

q-analog of rising powers

The previous post looked at the probability that a random n by n matrix over a finite field of order q is invertible. This works out to be

\prod_{i=1}^n \left(1 - \frac{1}{q^i}\right)

This function of q and n comes up in other contexts as well and has a name that we will get to shortly.

Pochhammer symbols

Leo August Pochhammer (1841–1920) defined the kth rising power of x by

(x)_k = x(x + 1)(x + 2)\cdots(x + k - 1)

Rising and falling powers come up naturally in combinatorics, finite difference analogs of calculus, and in the definition of hypergeometric functions.

q-Pochhammer symbols

The q-analog of the Pochhammer symbol is defined as

(a;q)_n = \prod_{k=0}^{n-1} (1-aq^k)=(1-a)(1-aq)(1-aq^2)\cdots(1-aq^{n-1})

Like the Pochhammer symbol, the q-Pochhammer symbol also comes up in combinatorics.

In general, q-analogs of various concepts are generalizations that reduce to the original concept in some sort of limit as q goes to 1. The relationship between the q-Pochhammer symbol and the Pochhammer symbol is

\lim_{q\to 1} \frac{(q^x;q)_n}{(1-q)^n} = (x)_n

For a simpler introduction to q-analogs, see this post on q-factorial and q-binomial coefficients.
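Here is a quick numerical check of this limit in Mathematica, evaluating the ratio at a value of q near 1. (The values x = 3 and n = 4 are arbitrary choices for illustration.)

With[{x = 3, n = 4, q = 0.9999},
 {QPochhammer[q^x, q, n]/(1 - q)^n, Pochhammer[x, n]}]
(* the first entry approaches Pochhammer[3, 4] = 360 as q -> 1 *)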

Back to our probability problem

The motivation for this post was to give a name to the function that gives the probability that a random n by n matrix over a finite field of order q is invertible. In the notation above, this function is (1/q; 1/q)ₙ.

There’s a confusing notational coincidence here. The number of elements in a finite field is usually denoted by q. The q in q-analogs such as the q-Pochhammer symbol has absolute value less than 1. It’s a coincidence that they both use the letter q, and the q in our application of q-Pochhammer symbols is the reciprocal of the q representing the order of a finite field.

I mentioned in the previous post that the probability of the matrix being invertible is a decreasing function of n. This probability decreases to a positive limit, varying with the value of q. This limit is (1/q; 1/q)∞. Here the subscript ∞ denotes that we take the limit in (1/q; 1/q)ₙ as n goes to infinity. There’s no problem here because the infinite product converges.

Mathematica and plotting

The q-Pochhammer symbol (a; q)ₙ is implemented in Mathematica as QPochhammer[a, q, n] and the special case (q; q)∞ is implemented as QPochhammer[q]. We can use the latter to make the following plot.

Plot[QPochhammer[q], {q, -1, 1}]

Recall that the q in our motivating application is the reciprocal of the q in the q-Pochhammer symbol. The plot says that for large fields, the limiting probability that an n by n matrix is invertible as n increases is near 1, but that for smaller fields the limiting probability is also smaller. For q = 2, the probability is 0.288788.

Plot[QPochhammer[1/q], {q, 2, 100}, PlotRange -> All]
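We can also watch the probability converge to its limit for a particular field size, using the three-argument form of QPochhammer. For a field of order 2, the q in the q-Pochhammer symbol is 1/2:

Table[N[QPochhammer[1/2, 1/2, n]], {n, {1, 2, 5, 10}}]
N[QPochhammer[1/2, 1/2]] (* the limit (1/2; 1/2)∞ = 0.288788... *)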

Related posts

Solvability of linear systems over finite fields

If you have n equations in n unknowns over a finite field with q elements, how likely is it that the system of equations has a solution?

The number of possible n × n matrices with entries from a field of size q is qⁿ². The set of invertible n × n matrices over a field with q elements is GLₙ(q), and the number of elements in this set is [1]

|GL_n(q)| = q^{n(n-1)/2} \prod_{i=1}^n(q^i - 1)

The probability that an n × n matrix is invertible is then

\prod_{i=1}^n \left(1 - \frac{1}{q^i}\right)

which is an increasing function of q and a decreasing function of n. More on this function in the next post.
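Here’s a quick empirical check of this formula over the field with two elements, comparing a Monte Carlo estimate against the product. (A sketch; Det with the Modulus option computes the determinant over GF(2).)

n = 8; trials = 10000;
invertibleQ[m_] := Det[m, Modulus -> 2] != 0
N[Count[Table[invertibleQ[RandomInteger[1, {n, n}]], {trials}], True]/trials]
N[Product[1 - 1/2^i, {i, 1, n}]] (* exact probability, about 0.2899 for n = 8 *)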

Related posts

[1] Robert A. Wilson. The Finite Simple Groups. Springer, 2009.

Rényi’s parking constant

Parallel parking photo by IFCAR, Public Domain

Imagine parallel parking is available along the shoulder of a road, but no parking spaces are marked.

The first person to park picks a spot on the shoulder at random. Then another car also chooses a spot along the shoulder at random, with the constraint that the second car can’t overlap the first. This process continues until no more cars can park. How many people can park this way?

Assume all cars have the same length, and we choose our units so that cars have length 1. The expected number of cars that can park is a function M(x) of the length of the parking strip x. Clearly if x < 1 then M(x) = 0. Alfréd Rényi [1] found that for x ≥ 1, M(x) satisfies the equation

M(x) = 1 + \frac{2}{x-1}\int_0^{x-1} M(t)\, dt

This is an unusual equation, difficult to work with because it defines M only implicitly. It’s not even clear that the equation has a solution. But it does, and the ratio of M(x) to x approaches a constant as x increases.

m = \lim_{x\to\infty} \frac{M(x)}{x} = 0.74759\ldots

The number m is known as Rényi’s parking constant.

This says for a long parking strip, parking sequentially at random will allow about 3/4 as many cars to park as if you were to pack the cars end-to-end.
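The process is easy to simulate. When the first car parks, its left end is uniform on [0, x − 1], and it leaves two independent strips to fill. A minimal Mathematica sketch:

park[x_] := If[x < 1, 0,
  With[{t = RandomReal[x - 1]}, 1 + park[t] + park[x - 1 - t]]]
Mean[Table[park[1000.], {100}]]/1000. (* approximately 0.747 *)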

More posts on Rényi’s work

[1] Steven R. Finch. Mathematical Constants. Cambridge University Press, 2003.

The Clausen function

I ran across the Clausen function the other day, and when I saw a plot of the function my first thought was that it looks sorta like a sawtooth wave.

Plot of Clausen function Cl_2

I wondered whether it also sounds like a sawtooth wave, and indeed it does. More on that shortly.

The Clausen function can be defined in terms of its Fourier series:

\text{Cl}_2(x) = \sum_{k=1}^\infty \frac{\sin(kx)}{k^2}

The function commonly known as the Clausen function is one of a family of functions, hence the subscript 2. The Clausen function Clₙ for each positive integer n is defined by replacing 2 with n on both sides of the defining equation.

The Fourier coefficients decay quadratically, as do those of a triangle wave, as discussed here. This implies the function Cl₂(x) cannot have a continuous derivative. In fact, the derivative of Cl₂(x) is infinite at 0. This follows quickly from the integral representation of the function.

\text{Cl}_2(x)=-\int_0^x\log \left|2\sin\frac{t}{2} \right|\, dt

The fundamental theorem of calculus shows that the derivative

\text{Cl}'_2(x)=-\log \left|2\sin\frac{x}{2} \right|

blows up at 0.

Now suppose we create an audio clip of Cl₂(440x). This creates a sound with pitch A 440, but rather than a sine wave it has an unpleasant buzzing sound, much like a sawtooth wave.

clausen2.wav

The harshness of the sound is due to the slow decay of the Fourier coefficients; the Fourier coefficients of more pleasant musical sounds decay much faster than quadratically.
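If you’d like to reproduce the sound, Mathematica doesn’t have a Clausen function built in as far as I know, but Cl₂ can be computed from the dilogarithm via Cl₂(x) = Im Li₂(e^{ix}). A sketch, where scaling the argument by 2π × 440 is my choice to pin the frequency at 440 Hz:

cl2[x_?NumericQ] := Im[PolyLog[2, Exp[I x]]]
Plot[cl2[x], {x, 0, 4 Pi}]
Play[cl2[2 Pi 440 t], {t, 0, 2}] (* two seconds of the buzzy A 440 *)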

Related posts

Limit of a doodle

Suppose you’re in a boring meeting and you start doodling. You draw a circle, and then you draw a triangle on the outside of that circle.

Next you draw a circle through the vertices of the triangle, and draw a square outside that.

Then you draw a circle through the vertices of the square, and draw a pentagon outside that.

Then you think “Will this ever stop?!”, meaning the meeting, but you could ask a similar question about your doodling: does your sequence of doodles converge to a circle of finite radius, or do they grow without bound?

An n-gon circumscribed on the outside of a circle of radius r is inscribed in a circle of radius

\frac{r}{\cos(\pi/n)}

So if you start with a unit circle, the radius of the circle through the vertices of the N-gon is

\prod_{n=3}^N \frac{1}{\cos(\pi/n)}

and the limit as N → ∞ exists: the logarithm of the product is the sum of −log cos(π/n), and since these terms are asymptotically π²/2n², the sum converges. The limit is known as the polygon circumscribing constant, and it equals 8.7000366252….
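A partial product in Mathematica gets close to this value:

Product[1/Cos[Pi/n], {n, 3., 1000000.}] (* about 8.70000; the remaining tail contributes a factor of roughly 1 + 4.9*10^-6 *)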

We can visualize the limit by making a plot with large N. The plot is less cluttered if we leave out the circles and just show the polygons. N = 30 in the plot below.

The reciprocal of the polygon circumscribing constant is known as the Kepler-Bouwkamp constant. The Kepler-Bouwkamp constant is the limiting radius if you were to reverse the process above, inscribing polygons at each step rather than circumscribing them. It would make sense to call the Kepler-Bouwkamp constant the polygon inscribing constant, but for historical reasons it is named after Johannes Kepler and Christoffel Bouwkamp.

Computing Γ(z) for complex z with tables

In the previous post I mentioned that the general strategy for computing a mathematical function using tables is to first reduce the function argument to be within the range of the tabulated values, and then to use interpolation to compute the function at values that are not directly tabulated.

The second step is always the same, applying Lagrange interpolation with enough points to get the accuracy you need. But the first step, range reduction, depends on the function being evaluated. And as the previous post concluded, evaluating more advanced functions generally requires more advanced range reduction.

Real argument

For the gamma function, the identity

\Gamma(x + 1) = x\, \Gamma(x)

can be used to reduce the problem of computing Γ(x) for any real x to the problem of computing Γ(x) for x in the interval [1, 2]. If x is large, the identity will have to be applied many times and so this would be a lot of work. However, the larger x is, the more accurate Stirling’s approximation becomes.
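Here is that reduction as a Mathematica sketch. The function name reduce is mine; it returns a pair {w, f} with Re(w) in [1, 2] such that Γ(z) = f Γ(w), so only arguments with real part in [1, 2] ever need to be looked up. It works just as well for the complex arguments of the next section, reducing the real part.

reduce[z_] := Module[{w = z, f = 1},
  While[Re[w] > 2, w--; f *= w];  (* Gamma(w + 1) = w Gamma(w) *)
  While[Re[w] < 1, f /= w; w++];
  {w, f}]
{w0, f0} = reduce[5.3 + 2. I];
{f0 Gamma[w0], Gamma[5.3 + 2. I]} (* the two values agree *)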

Complex argument

Computing Γ(x + iy) is more complex, pardon the pun. We can still use the identity above to reduce the real part x of the argument to lie in the interval [1, 2], but what about the imaginary part y?

Abramowitz and Stegun, Table 6.7, tabulates the principal branch of log Γ(x + iy) for x from 1 to 2 and for y from 0 to 10, both in increments of 0.1. Generally the logarithm of the gamma function is more useful in computation than the gamma function itself. It is also easier to interpolate, which I imagine is the main reason A&S tabulate it rather than the gamma function per se. A note below Table 6.7 says that linear interpolation gives about 3 significant figures, and eight-point interpolation gives about 8 figures.

By the Schwarz reflection principle,

\Gamma(\bar{z}) = \overline{\Gamma(z)}

and with this we can extend our range on y to [−10, 10].

Large imaginary part

What about larger y? We have two options: the duplication formula and Stirling’s approximation.

The duplication formula

\Gamma(2z) = \frac{1}{\sqrt{2\pi}}2^{2z - \frac{1}{2}} \Gamma(z) \Gamma\left(z + \tfrac{1}{2} \right )

lets us compute Γ(2z) if we can compute Γ(z) and Γ(z + ½).

Stirling’s approximation for Γ(z) is accurate when |z| is large, and |x + iy| is large when |y| is large.

More posts on using tables

Calculating trig functions from tables

It takes some skill to use tables of mathematical functions; it’s not quite as simple as it may seem. Although it’s no longer necessary to use tables, it’s interesting to look into the details of how it is done.

For example, the Handbook of Mathematical Functions edited by Abramowitz and Stegun tabulates sines and cosines in increments of one tenth of a degree, from 0 degrees to 45 degrees. What if your angle was outside the range 0° to 45° or if you needed to specify your angle more precisely than 1/10 of a degree? What if you wanted, for example, to calculate cos 203.147°?

The high-level answer is that you would use range reduction and interpolation. You’d first use range reduction to reduce the problem of working with any angle to the problem of working with an angle between 0° and 45°, then you’d use interpolation to get the necessary accuracy for a value within this range.

OK, but how exactly do you do the range reduction and how exactly do you do the interpolation? This isn’t deep, but it’s not trivial either.

Range reduction

Since sine and cosine have a period of 360°, you can add or subtract some multiple of 360° to obtain an angle between −180° and 180°.

Next, you can use parity to reduce the range further. That is, since sin(−x) = −sin(x) and cos(−x) = cos(x) you can reduce the problem to computing the sine or cosine of an angle between 0 and 180°.

The identities sin(180° − x) = sin(x) and cos(180° − x) = −cos(x) let you reduce the range further to between 0° and 90°.

Finally, the identities cos(x) = sin(90° − x) and sin(x) = cos(90° − x) can reduce the range to 0° to 45°.
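Here is the chain of reductions above as a Mathematica sketch for cosine. (The function name reduceCos is mine; it returns which function to look up in the table, a sign, and an angle between 0° and 45°.)

reduceCos[x_] := Module[{a = Mod[x, 360.], sign = 1, fun = "cos"},
  If[a > 180, a = 360 - a];            (* cosine is even *)
  If[a > 90, a = 180 - a; sign = -1];  (* cos(180° - x) = -cos(x) *)
  If[a > 45, a = 90 - a; fun = "sin"]; (* cos(x) = sin(90° - x) *)
  {fun, sign, a}]
reduceCos[203.147] (* {"cos", -1, 23.147}: cos 203.147° = -cos 23.147° *)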

Interpolation

You can fill in between the tabulated angles using interpolation, but how accurate will your result be? How many interpolation points will you need in order to get single precision, i.e. an error on the order of 10⁻⁷?

The tables tell you. As explained in this post on using a table of logarithms, the tables have a notation at the bottom that tells you how many Lagrange interpolation points to use and what kind of accuracy you’ll get. Five interpolation points will give you roughly single precision accuracy, and the notation gives a slightly sharper error bound. The post on using log tables also explains how Lagrange interpolation works.
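To see what five-point accuracy looks like, here is a sketch with exact cosine values standing in for the tabulated entries (rounding in a real table would add its own small error):

pts = Table[{d, Cos[d Degree]}, {d, 23.0, 23.4, 0.1}]; (* five consecutive "table" entries *)
InterpolatingPolynomial[pts, 23.147] - Cos[23.147 Degree] (* error far below 10^-7 *)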

Beyond trig functions

I intend to write more posts on using tables. The general pattern is always range reduction and interpolation, but it takes more advanced math to reduce the range of more advanced functions.

Update: The next post shows how to use tables to compute the gamma function for complex arguments.

Rapidly convergent series for ellipse perimeter

The previous post looked at two simple approximations for the perimeter of an ellipse. Approximations are necessary since the perimeter of an ellipse cannot be expressed as an elementary function of the axes.

Kepler noted in 1609 that you could approximate the perimeter of an ellipse as the perimeter of a circle whose radius is the mean of the semi-axes of the ellipse, where the mean could be either the arithmetic mean or the geometric mean. The previous post showed that the arithmetic mean is more accurate, and that it under-estimates the perimeter. This post will explain both of these facts.

There are several series for calculating the perimeter of an ellipse. In 1798 James Ivory came up with a series that converges more rapidly than previously discovered series. Ivory’s series is

P = \pi (a+b) \left(1 + \sum_{n=1}^\infty \left(\frac{(2n-1)!!}{2^n n!}\right)^2 \frac{h^n}{(2n-1)^2}\right)

where

h = \frac{(a-b)^2}{(a+b)^2}

If you’re not familiar with the !! notation, see this post on multifactorials.

The 0th order approximation using Ivory’s series, dropping all the infinite series terms, corresponds to Kepler’s approximation using the arithmetic mean of the semi-axes a and b. By convention the semi-major axis is labeled a and the semi-minor axis b, but the distinction is unnecessary here since Ivory’s series is symmetric in a and b.

Note that h ≥ 0 and h = 0 only if the ellipse is a circle. So the terms in the series are positive, which explains why Kepler’s approximation under-estimates the perimeter.

Using just one term in Ivory’s series gives a very good approximation

P \approx \pi(a + b)\left(1 + \frac{1}{4} \frac{(a-b)^2}{(a+b)^2}\right)

The approximation error increases as the ratio of a to b increases, but even for a highly eccentric ellipse like the orbit of the Mars Orbiter Mission, the ratio of a to b is only about 2, and the relative approximation error is about 2 × 10⁻⁴, roughly 140 times smaller than the error in Kepler’s arithmetic-mean approximation.
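Here’s a quick Mathematica check with a/b = 2. Note that Mathematica’s EllipticE takes the parameter m = 1 − b²/a², so the exact perimeter is 4a EllipticE[1 − b²/a²].

With[{a = 2., b = 1.},
 Module[{exact, kepler, ivory},
  exact = 4 a EllipticE[1 - (b/a)^2];   (* exact perimeter *)
  kepler = Pi (a + b);
  ivory = Pi (a + b) (1 + (a - b)^2/(4 (a + b)^2));
  {1 - kepler/exact, 1 - ivory/exact}]] (* roughly {0.027, 0.00019} *)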