Least common multiple of the first n positive integers

Here’s a surprising result: The least common multiple of the first n positive integers is approximately exp(n).

More precisely, let φ(n) be the log of the least common multiple of the numbers 1, 2, …, n. There are theorems that give upper and lower bounds on how far φ(n) can be from n. We won’t prove or even state these bounds here; see [1] for that. Instead, we’ll show empirically that φ(n) is approximately n.

Here’s some Python code to plot the ratio φ(n)/n. The ratio jumps up sharply after the first few values of n; in the plot below, we chop off the first 20 values of n.

from numpy import arange, empty
from sympy.core.numbers import ilcm
from sympy import log
import matplotlib.pyplot as plt

N = 5000
x = arange(N)
phi = empty(N)

# Build up the LCM of 1, 2, ..., n incrementally and take its log
M = 1
for n in range(1, N):
    M = ilcm(n, M)
    phi[n] = log(M)

# Skip the first few values of n where the ratio is erratic
a = 20
plt.plot(x[a:], phi[a:]/x[a:])
plt.xlabel("$n$")
plt.ylabel(r"$\phi(n) / n$")
plt.show()

Here’s the graph this produces.

[1] J. Barkley Rosser and Lowell Schoenfeld. Approximate formulas for some functions of prime numbers. Illinois Journal of Mathematics, Volume 6, Issue 1 (1962), 64-94. (On Project Euclid)

Subscribing by email

You can subscribe to my blog by email or RSS. I also have a brief newsletter you could sign up for. There are links to these in the sidebar of the blog:

subscription options

If you subscribe by email, you’ll get an email each morning containing the post(s) from the previous day.

I just noticed a problem with email subscription: it doesn’t show SVG images, at least when reading via Gmail; maybe other email clients display SVG correctly. Here’s what a portion of yesterday’s email looks like in Gmail:

screen shot of missing image

I’ve started using SVG for graphs, equations, and a few other images. The main advantage to SVG is that the images look sharper. Also, you can display the same image file at any resolution; no need to have different versions of the image for display at different sizes. And sometimes SVG files are smaller than their raster counterparts.

There may be a way to have web site visitors see SVG and email subscribers see PNG. If not, email subscribers can click on the link at the top of each post to open it in a browser and see all the images.

By the way, RSS readers handle SVG just fine. At least Digg Reader, the RSS reader I use, works well with SVG. The only problem I see is that centered content is always moved to the left.

* * *

The email newsletter is different from the email blog subscription. I only send out a newsletter once a month. It highlights the most popular posts and says a little about what I’ve been up to. I just sent out a newsletter this morning, so it’ll be another month before the next one comes out.

Effective sample size for MCMC

In applications we’d like to draw independent random samples from complicated probability distributions, often the posterior distribution on parameters in a Bayesian analysis. Most of the time this is impractical.

MCMC (Markov Chain Monte Carlo) gives us a way around this impasse. It lets us draw samples from practically any probability distribution. But there’s a catch: the samples are not independent. This lack of independence means that all the familiar theory on convergence of sums of random variables goes out the window.

There’s not much theory to guide assessing the convergence of sums of MCMC samples, but there are heuristics. One of these is effective sample size (ESS). The idea is to have a sort of “exchange rate” between dependent and independent samples. You might want to say, for example, that 1,000 samples from a certain Markov chain are worth about as much as 80 independent samples because the MCMC samples are highly correlated. Or you might want to say that 1,000 samples from a different Markov chain are worth about as much as 300 independent samples because although the MCMC samples are dependent, they’re weakly correlated.

Here’s the definition of ESS:

\mbox{ESS} = \frac{n}{1 + 2\sum_{k=1}^\infty \rho(k)}

where n is the number of samples and ρ(k) is the correlation at lag k.

This behaves well in the extremes. If your samples are independent, your effective sample size equals the actual sample size. If the correlation at lag k decreases extremely slowly, so slowly that the sum in the denominator diverges, your effective sample size is zero.

Any reasonable Markov chain is between the extremes. Zero lag correlation is too much to hope for, but ideally the correlations die off fast enough that the sum in the denominator not only converges but also isn’t a terribly large value.
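To make the definition concrete, here’s a minimal sketch of estimating ESS from a chain in Python. It truncates the infinite sum at the first lag whose estimated autocorrelation drops to zero or below; that’s one common heuristic, not the only way to do it.

import numpy as np

def ess(chain, max_lag=1000):
    # Estimate ESS = n / (1 + 2 * sum of lag-k autocorrelations),
    # truncating the sum at the first non-positive estimated autocorrelation.
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x.var()
    rho_sum = 0.0
    for k in range(1, min(max_lag, n - 1)):
        rho = np.dot(x[:-k], x[k:]) / ((n - k) * var)
        if rho <= 0:
            break
        rho_sum += rho
    return n / (1 + 2 * rho_sum)

# Example: an AR(1) chain with lag-1 autocorrelation 0.9
np.random.seed(0)
chain = np.zeros(10000)
for i in range(1, len(chain)):
    chain[i] = 0.9*chain[i-1] + np.random.normal()
print(ess(chain))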

I’m not sure who first proposed this definition of ESS. There’s a reference to it in Handbook of Markov Chain Monte Carlo where the authors cite a paper [1] in which Radford Neal mentions it. Neal cites B. D. Ripley [2].

***

[1] Markov Chain Monte Carlo in Practice: A Roundtable Discussion. Robert E. Kass, Bradley P. Carlin, Andrew Gelman and Radford M. Neal. The American Statistician. Vol. 52, No. 2 (May, 1998), pp. 93-100

[2] Stochastic Simulation, B. D. Ripley, 1987.

Quicksort and prime numbers

The average number of operations needed for quicksort to sort a list of n items is approximately 10 times the nth prime number.

Here’s some data to illustrate this.

|------+-----------------+---------|
|    n | avg. operations | 10*p(n) |
|------+-----------------+---------|
|  100 |          5200.2 |    5410 |
|  200 |         12018.3 |   12230 |
|  300 |         19446.9 |   19870 |
|  400 |         27272.2 |   27410 |
|  500 |         35392.2 |   35710 |
|  600 |         43747.3 |   44090 |
|  700 |         52297.8 |   52790 |
|  800 |         61015.5 |   61330 |
|  900 |         69879.6 |   69970 |
| 1000 |         78873.5 |   79190 |
| 1100 |         87984.4 |   88310 |
| 1200 |         97201.4 |   97330 |
| 1300 |        106515.9 |  106570 |
| 1400 |        115920.2 |  116570 |
| 1500 |        125407.9 |  125530 |
| 1600 |        134973.5 |  134990 |
| 1700 |        144612.1 |  145190 |
| 1800 |        154319.4 |  154010 |
| 1900 |        164091.5 |  163810 |
| 2000 |        173925.1 |  173890 |
|------+-----------------+---------|

The maximum difference between the quicksort and prime columns is about 4%. In the latter half of the table, the maximum error is about 0.4%.

What’s going on here? Why should quicksort be related to prime numbers?!

The real mystery is the prime number theorem, not quicksort. The prime number theorem tells us that the nth prime number is approximately n log n. And the number of operations in an efficient sort is proportional to n log n. The latter is easier to see than the former.

A lot of algorithms have run time proportional to n log n: mergesort, heapsort, FFT (Fast Fourier Transform), etc. All these have run time approximately proportional to the nth prime.

Now for the fine print. What exactly is the average run time for quicksort? It’s easy to say it’s O(n log n), but getting more specific requires making assumptions. I used 11.67 n log n – 1.74 n as the average number of operations, based on Knuth’s TAOCP, Volume 3. And why 10 times the nth prime and not 11.67? I chose 10 to make the example work better. For very large values of n, a larger coefficient would work better.
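Here’s a short sketch of how the table could be reproduced, assuming the logarithm in the formula above is natural (that assumption matches the table values) and using SymPy to compute the nth prime.

from math import log
from sympy import prime

def quicksort_ops(n):
    # Average operation count quoted above, with the log taken to be natural
    return 11.67*n*log(n) - 1.74*n

for n in range(100, 2001, 100):
    print(n, round(quicksort_ops(n), 1), 10*prime(n))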

 

Why do linear prediction confidence regions flare out?

Suppose you’re tracking some object based on its initial position x0 and initial velocity v0. The initial position and initial velocity are estimated from normal distributions with standard deviations σx and σv. (To keep things simple, let’s assume our object is moving in only one dimension and that the distributions around initial position and velocity are independent.)

The confidence region for the object flares out over time, something like the bell of a trumpet.

Why does the region get larger? Because there’s uncertainty in the velocity, and the velocity gets multiplied by elapsed time.

Why isn’t the confidence region a cone? Because that would ignore the uncertainty in the initial position. The result would be too small.

Why isn’t the confidence region a truncated cone? That’s not a bad approximation, though it’s a bit too large. If we ignore probability for a moment and treat confidence intervals as deterministic limits, then we get a truncated cone. For example, suppose position and velocity are each within two standard deviations of their estimates. Then we’d estimate position to be between x0 – 2σx + (v0 – 2σv) t on the low end and x0 + 2σx + (v0 + 2σv) t on the high end. This is only an approximation because we’ve ignored probability, and it’s pessimistic because it assumes extreme error values for both estimates at the same time.

So what is the confidence region? It’s somewhere between the cone and the truncated cone.

The position at time t, x0 + t v0, is the sum of two random variables. The first has variance σx² and the second has variance t² σv². Variances of independent random variables add, so the standard deviation for the sum is

√(σx² + t² σv²) = t √(σx² / t² + σv²)

Note that as t increases, the latter approaches t σv from above. Ignoring the uncertainty in initial position underestimates standard deviation, but the relative error decreases as t increases.

For large t, the width of a confidence interval for position at time t is approximately proportional to t, so the confidence intervals over time look like a cone. But for small t, the dependence on t is less linear and more curved.
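Here’s a sketch of what the exact ±2σ envelope looks like next to the truncated cone, with made-up values for the initial estimates and their standard deviations.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical values for illustration
x0, v0 = 0.0, 1.0
sigma_x, sigma_v = 1.0, 0.2

t = np.linspace(0, 20, 200)
sd = np.sqrt(sigma_x**2 + t**2 * sigma_v**2)

# Exact two-sigma envelope around the predicted position
plt.plot(t, x0 + v0*t + 2*sd, "b", label="2-sigma envelope")
plt.plot(t, x0 + v0*t - 2*sd, "b")

# Truncated cone from treating the two-sigma limits as deterministic
plt.plot(t, x0 + 2*sigma_x + (v0 + 2*sigma_v)*t, "r--", label="truncated cone")
plt.plot(t, x0 - 2*sigma_x + (v0 - 2*sigma_v)*t, "r--")

plt.xlabel("$t$")
plt.ylabel("position")
plt.legend()
plt.show()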

 

Polynomials evaluated at integers

Let p(x) = a0 + a1x + a2x² + … + anxⁿ and suppose ai is irrational for some i ≥ 1. Then a theorem by Weyl says that the fractional parts of p(n) are equidistributed as n varies over the integers. That is, the proportion of values that land in some interval is equal to the length of that interval.

Clearly it’s necessary that one of the coefficients be irrational. What may be surprising is that it is sufficient.

If the coefficients are all rational with common denominator N, then the sequence would only contain multiples of 1/N. The interval [1/(3N), 2/(3N)], for example, would never get a sample. If a0 were irrational but the rest of the coefficients were rational, we’d have the same situation, simply shifted by a0.

This is a theorem about what happens in the limit, but we can look at what happens for some large but finite set of terms. And we can use a χ² test to see how evenly our sequence is distributed compared to what one would expect from a random sequence.

In the code below, we use the polynomial p(x) = √2 x² + πx + 1 and evaluate p at 0, 1, 2, …, 99,999. We then count how many fall in the bins [0, 0.01), [0.01, 0.02), … [0.99, 1] and compute a chi-square statistic on the counts.

from math import pi, sqrt, floor

def p(x):
    return 1 + pi*x + sqrt(2)*x*x

def chisq_stat(O, E):
    return sum( [(o - e)**2/e for (o, e) in zip(O, E)] )

def frac(x):
    return x - floor(x)

N = 100000
data = [frac(p(n)) for n in range(N)]

count = [0]*100
for d in data:
    count[ int(floor(100*d)) ] += 1

expected = [N/100]*100

print(chisq_stat(count, expected))

We get a chi-square statistic of 95.4. Since we have 100 bins, there are 99 degrees of freedom, and we should compare our statistic to a chi-square distribution with 99 degrees of freedom. Such a distribution has mean 99 and standard deviation √(99*2) = 14.07, so a value of 95.4 is completely unremarkable.
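If you want an actual p-value rather than eyeballing the z-score, SciPy can supply one. This isn’t part of the calculation above, just a sanity check on the “completely unremarkable” claim.

from scipy import stats

# Probability of a chi-square(99) statistic at least as large as 95.4
print(stats.chi2(99).sf(95.4))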

If we had gotten a very large chi-square statistic, say 200, we’d have reason to suspect something was wrong. Maybe a misunderstanding on our part or a bug in our software. Or maybe the sequence was not as uniformly distributed as we expected.

If we had gotten a very small chi-square statistic, say 10, then again maybe we misunderstood something, or maybe our sequence is remarkably evenly distributed, more evenly than one would expect from a random sequence.


Leading digits of powers of 2

The first digit of a power of 2 is a 1 more often than any other digit. Powers of 2 begin with 1 about 30% of the time. This is because powers of 2 follow Benford’s law. We’ll prove this below.

When is the first digit of 2ⁿ equal to k? When 2ⁿ is between k × 10ᵖ and (k+1) × 10ᵖ for some nonnegative integer p. By taking logarithms base 10 we find that this is equivalent to the fractional part of n log₁₀ 2 being between log₁₀ k and log₁₀ (k + 1).

The map

x ↦ ( x + log₁₀ 2 ) mod 1

is ergodic. I wrote about irrational rotations a few weeks ago, and this is essentially the same thing. You could scale x by 2π and think of it as rotations on a circle instead of arithmetic mod 1 on an interval. The important thing is that log₁₀ 2 is irrational.

Repeatedly multiplying by 2 corresponds to adding log₁₀ 2 on the log scale. So powers of two correspond to iterates of the map above, starting with x = 0. Birkhoff’s Ergodic Theorem tells us that the proportion of iterates of this map that fall in the interval [a, b] equals b – a. So for k = 1, 2, 3, …, 9, the proportion of powers of 2 that start with k is equal to log₁₀ (k + 1) – log₁₀ k = log₁₀ ((k + 1) / k).

This is Benford’s law. In particular, the proportion of powers of 2 that begin with 1 is equal to log₁₀ 2 = 0.301.

Note that the only thing special about 2 is that log₁₀ 2 is irrational. Powers of 3 follow Benford’s law as well because log₁₀ 3 is also irrational. For what values of b do powers of b not follow Benford’s law? Those with log₁₀ b rational, i.e. powers of 10. Obviously powers of 10 don’t follow Benford’s law because their first digit is always 1!

[Interpret the “!” above as factorial or exclamation as you wish.]

Let’s look at powers of 2 empirically to see Benford’s law in practice. Here’s a simple program to look at first digits of powers of 2.

count = [0]*10
N = 10000

def first_digit(n):
    return int(str(n)[0])

for i in range(N):
    n = first_digit( 2**i )
    count[n] += 1

print(count)

Unfortunately this only works for moderate values of N: computing 2**i exactly produces enormous integers, and converting them to strings to read off the first digit gets slow. It ran in under a second with N set to 10,000, but for larger values of N it rapidly becomes impractical.

Here’s a much more efficient version that ran in about 2 seconds with N = 1,000,000.

from math import log10
N = 1000000
count = [0]*10

def first_digit_2_exp_e(e):
    r = (log10(2.0)*e) % 1
    for i in range(2, 11):
        if r < log10(i):
            return i-1

for i in range(N):
    n = first_digit_2_exp_e( i )
    count[n] += 1

print(count)

You could make it more efficient by caching the values of log10 rather than recomputing them. Doing this brought the run time down to about 1.4 seconds. That’s a nice improvement, but nothing like the orders-of-magnitude improvement from changing algorithms.
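Here’s one way the caching might look; the table of logarithms is computed once rather than inside every call.

from math import log10

N = 1000000
count = [0]*10

# Precompute log10(2), ..., log10(10) once
logs = [log10(i) for i in range(2, 11)]
log2 = log10(2.0)

def first_digit_2_exp_e(e):
    r = (log2*e) % 1
    for i, L in enumerate(logs):
        if r < L:
            return i + 1

for i in range(N):
    count[first_digit_2_exp_e(i)] += 1

print(count)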

Here are the results comparing the actual counts to the predictions of Benford’s law (rounded to the nearest integer).

|---------------+--------+-----------|
| Leading digit | Actual | Predicted |
|---------------+--------+-----------|
|             1 | 301030 |    301030 |
|             2 | 176093 |    176091 |
|             3 | 124937 |    124939 |
|             4 |  96911 |     96910 |
|             5 |  79182 |     79181 |
|             6 |  66947 |     66948 |
|             7 |  57990 |     57992 |
|             8 |  51154 |     51153 |
|             9 |  45756 |     45757 |
|---------------+--------+-----------|

The agreement is almost too good to believe, never off by more than 2.

Are the results correct? The inefficient version relied on integer arithmetic and so would be exact. The efficient version relies on floating point, so it’s conceivable that limits of precision caused a leading digit to be calculated incorrectly, but I doubt that happened. Floating point is precise to about 15 significant figures. We start with log10(2), multiply it by numbers up to 1,000,000, and take the fractional part. The result is good to around 9 significant figures, enough to correctly determine which pair of logarithms the fractional part falls between.

Update: See Andrew Dalke’s Python script in the comments. He shows a way to efficiently use integer arithmetic.

Extreme beta distributions

A beta probability distribution has two parameters, a and b. You can think of these as the number of successes and failures out of a + b trials. The PDF of a beta distribution is approximately normal if a and b are approximately equal and a + b is large.

If a and b are close, they don’t have to be very large for the beta PDF to be approximately normal. (In all the plots below, the solid blue line is the beta distribution and the dashed orange line is the normal distribution with the same mean and variance.)

beta(9, 11) PDF vs normal

On the other hand, when a and b are very different, the beta distribution can be skewed and far from normal. Note that a + b is the same in the example above and below.

beta(2, 18) PDF vs normal

Why the sharp corner above? The beta distribution is only defined on the interval [0, 1] and so the PDF is zero for negative values.

An application came up today that raised an interesting question: What if a + b is very large, but a and b are very different? The former works in favor of the normal approximation but the latter works against it.

The application had a low probability of success but a very large number of trials. Specifically, a + b would be on the order of a million, but a would be less than 500. Does the normal approximation hold for these kinds of numbers? Here are some plots to see.

extreme beta distribution pdfs

When a = 500 the normal approximation is very good. It’s still pretty good when a = 50, but not good at all when a = 5.
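Here’s a sketch of how plots like these could be produced with SciPy, comparing a beta(a, b) density to the normal density with the same mean and variance.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def compare(a, b):
    # Overlay the beta(a, b) PDF and the normal PDF with matching mean and variance
    mean = a/(a + b)
    std = np.sqrt(a*b / ((a + b)**2 * (a + b + 1)))
    x = np.linspace(mean - 5*std, mean + 5*std, 500)
    plt.plot(x, stats.beta.pdf(x, a, b), "b", label="beta({}, {})".format(a, b))
    plt.plot(x, stats.norm.pdf(x, mean, std), "--", label="normal")
    plt.legend()
    plt.show()

# a + b on the order of a million, a small
for a in [500, 50, 5]:
    compare(a, 1000000 - a)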

Update: Mike Anderson suggested using the skewness to quantify how far a beta is from normal. Good idea.

The skewness of a beta(a, b) distribution is

2(b − a)√(a + b + 1) / ((a + b + 2) √(ab))

Let N = a + b and assume N is large and a is small, so that N, N + 2, b − a, and N − a are all approximately equal. Then the skewness is approximately 2/√a. In other words, once the number of trials is sufficiently large, the skewness hardly depends on the number of trials and only depends on the number of successes.
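Here’s a quick check of the approximation, comparing the exact skewness formula above with 2/√a when a + b = 1,000,000.

from math import sqrt

def beta_skewness(a, b):
    # Exact skewness of a beta(a, b) distribution
    return 2*(b - a)*sqrt(a + b + 1) / ((a + b + 2)*sqrt(a*b))

for a in [5, 50, 500]:
    b = 1000000 - a
    print(a, beta_skewness(a, b), 2/sqrt(a))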

Fractional parts, invariant measures, and simulation

A function f: X → X is measure-preserving if each iteration of f sends the same amount of stuff into a given set. To be more precise, given a measure μ and any μ-measurable set E with μ(E) > 0, we have

μ(E) = μ( f⁻¹(E) ).

You can read the right side of the equation above as “the measure of the set of points that f maps into E.” You can apply this condition repeatedly to see that the measure of the set of points mapped into E after n iterations is still just the measure of E.

If X is a probability space, i.e. μ(X) = 1, then you could interpret the definition of measure-preserving to mean that the probability that a point ends up in E after n iterations is independent of n. We’ll illustrate this below with a simulation.

Let X be the half-open unit interval (0, 1] and let f be the Gauss map, i.e.

f(x) = 1/x – [1/x]

where [z] is the integer part of z. The function f is measure-preserving, though not for the usual Lebesgue measure. Instead it preserves the following measure:

\mu(E) = \frac{1}{\log 2} \int_E \frac{1}{1+x} \, dx

Let’s take as our set E an interval [a, b] and test via simulation whether the probability of landing in E after n iterations really is just the measure of E, independent of n.

We can’t just generate points uniformly in the interval (0, 1]. We have to generate the points so that the probability of a point coming from a set E is μ(E). That means the PDF of the distribution must be p(x) = 1 / (log(2) (1 + x)). The corresponding CDF is F(x) = log(1 + x) / log(2), so the inverse-CDF method amounts to generating a uniform sample u and returning F⁻¹(u) = 2ᵘ – 1; that’s what the sample function below computes.

from math import log, floor
from random import random

def gauss_map(x):
    y = 1.0/x
    return y - floor(y)

# iterate gauss map n times
def iterated_gauss_map(x, n):
    while n > 0:
        x = gauss_map(x)
        n = n - 1
    return x      

# measure mu( [a,b] )
def mu(a, b):
    return (log(1.0+b) - log(1.0+a))/log(2.0)

# Generate samples with Prob(x in E) = mu( E )
def sample():
    u = random()
    return 2.0**u - 1.0

def simulate(num_points, num_iterations, a, b):
    count = 0
    for _ in range(num_points):
        x = sample()
        y = iterated_gauss_map(x, num_iterations)
        if a < y < b:
            count += 1
    return count / num_points

# Change these parameters however you like
a, b = 0.1, 0.25
N = 1000000

exact = mu(a, b)
print("Exact probability:", exact)
print("Simulated probability after n iterations")
for n in range(1, 10):
    simulated = simulate(N, n, a, b)
    print("n =", n, ":", simulated)

Here’s the output I got:

Exact probability: 0.18442457113742736
Simulated probability after n iterations
n = 1 : 0.184329
n = 2 : 0.183969
n = 3 : 0.184233
n = 4 : 0.184322
n = 5 : 0.184439
n = 6 : 0.184059
n = 7 : 0.184602
n = 8 : 0.183877
n = 9 : 0.184834

With 1,000,000 samples, we expect the results to be the same to about 3 decimal places, which is what we see above.

Related post: Irrational rotations are ergodic.

(A transformation f is ergodic if it is measure-preserving and the only sets E with f⁻¹(E) = E are those with measure zero or full measure. Rational rotations are measure-preserving but not ergodic. The Gauss map above is ergodic.)

One practical application of functional programming

Arguments in favor of functional programming are often unconvincing. For example, the most common argument is that functional programming makes it easier to “reason about your code.” That’s true to some extent. All other things being equal, it’s easier to understand a function if all its inputs and outputs are explicit. But all other things are not equal. In order to make one function easier to understand, you may have to make something else harder to understand.

Here’s an argument from Brian Beckman for using a functional style of programming in a particular circumstance that I find persuasive. The immediate context is Kalman filtering, but it applies to a broad range of mathematical computation.

By writing a Kalman filter as a functional fold, we can test code in friendly environments and then deploy identical code with confidence in unfriendly environments. In friendly environments, data are deterministic, static, and present in memory. In unfriendly, real-world environments, data are unpredictable, dynamic, and arrive asynchronously.

If you write the guts of your mathematical processing as a function to be folded over your data, you can isolate the implementation of your algorithm from the things that make code hardest to test, i.e. the data source. You can test your code in a well-controlled “friendly” test environment and deploy exactly the same code into production, i.e. an “unfriendly” environment.
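To make the pattern concrete, here’s a toy sketch of it in Python, using a running mean in place of a Kalman filter. The update function carries its state explicitly and knows nothing about where the observations come from, so the same function can be folded over a static test list or applied to data as they arrive from a stream.

from functools import reduce

def update(state, observation):
    # One step of the fold: combine the old state with a new observation
    n, mean = state
    n += 1
    mean += (observation - mean)/n
    return (n, mean)

# Friendly environment: deterministic, static, in-memory data
print(reduce(update, [1.0, 2.0, 3.0, 4.0], (0, 0.0)))

# Unfriendly environment: the same update function applied as data arrive
state = (0, 0.0)
for observation in [1.0, 2.0, 3.0, 4.0]:  # stand-in for an asynchronous source
    state = update(state, observation)
print(state)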

Brian continues:

The flexibility to deploy exactly the code that was tested is especially important for numerical code like filters. Detecting, diagnosing and correcting numerical issues without repeatable data sequences is impractical. Once code is hardened, it can be critical to deploy exactly the same code, to the binary level, in production, because of numerical brittleness. Functional form makes it easy to test and deploy exactly the same code because it minimizes the coupling between code and environment.

I ran into this early on when developing clinical trial methods first for simulation, then for production. Someone would ask whether we were using the same code in production as in simulation.

“Yes we are.”

“Exactly the same code?”

“Well, essentially.”

“Essentially” was not good enough. We got to where we would use the exact same binary code for simulation and production, but something analogous to Kalman folding would have gotten us there sooner, and would have made it easier to enforce this separation between numerical code and its environment across applications.

Why is it important to use the exact same binary code in test and production, not just a recompile of the same source code? Brian explains:

Numerical issues can substantially complicate code, and being able to move exactly the same code, without even recompiling, between testing and deployment can make the difference to a successful application. We have seen many cases where differences in compiler flags, let alone differences in architectures, even between different versions of the same CPU family, introduce enough differences in the generated code to cause qualitative differences in the output. A filter that behaved well in the lab can fail in practice.

Emphasis added, here and in the first quote above.

Note that this post gives an argument for a functional style of programming, not necessarily for the use of functional programming languages. Whether the numerical core or the application that uses it would best be written in a functional language is a separate discussion.


Dividing projects into math, statistics, and computing

If you’ve read this blog for long, you know that my work is a combination of math, statistics, and computing.

I was looking over my records and tried to see how my work divides into these three areas. In short, it doesn’t.

The boundaries between these areas are fuzzy or arbitrary to begin with, but a few projects fell cleanly into one of the three categories. However, 85% of my income has come from projects that involve a combination of two areas or all three areas.

 

If you calculate a confidence interval using R, you could say you’re doing math, statistics, and computing. But for the accounting above I’d simply call that statistics. When I say a project uses math and computation, for example, I mean it requires math outside what is typical in programming, and programming outside what is typical in math.

Gaussian correlation inequality

The Gaussian correlation inequality was proven in 2014, but the proof only became widely known this year. You can find Thomas Royen’s remarkably short proof here.

Let X be a multivariate Gaussian random variable with mean zero and let E and F be two symmetric convex sets, both centered at the origin. The Gaussian correlation inequality says that

Prob(X in E and F) ≥ Prob(X in E) Prob(X in F).

Here’s a bit of Python code for illustrating the inequality. For symmetric convex sets we take balls of p-norm r where p ≥ 1 and r > 0. We could, for example, set one of the values of p to 1 to get a cube and set the other p to 2 to get a Euclidean ball.

from scipy.stats import norm as gaussian

def pnorm(v, p):
    return sum( [abs(x)**p for x in v] )**(1./p)

def simulate(dim, r1, p1, r2, p2, numReps):
    count_1, count_2, count_both = (0, 0, 0)
    for _ in range(numReps):
        x = gaussian.rvs(0, 1, dim)
        in_1 = (pnorm(x, p1) < r1)
        in_2 = (pnorm(x, p2) < r2)
        if in_1:
            count_1 += 1
        if in_2:
            count_2 += 1
        if in_1 and in_2:
            count_both += 1
    print("Prob in both:", count_both / numReps)
    print("Lower bound: ", count_1*count_2 * numReps**-2)


simulate(3, 1, 2, 1, 1, 1000)

When numReps is large, we expect the simulated probability of the intersection to be greater than the simulated lower bound. In the example above, the former was 0.075 and the latter 0.015075, ordered as we’d expect.

If we didn’t know that the theorem has been proven, we could use code like this to try to find counterexamples. Of course a simulation cannot prove or disprove a theorem, but if we found what appeared to be a counterexample, we could see whether it persists with different random number generation seeds and with a larger value of numReps. If so, then we could try to establish the inequality analytically. Now that the theorem has been proven, we know we’re not going to find real counterexamples, so the code is only useful as an illustration.


Irrational rotations are ergodic

In a blog post yesterday, I mentioned that the golden angle is an irrational portion of a circle, and so a sequence of rotations by the golden angle will not repeat itself. We can say more: rotations by an irrational portion of a circle are ergodic. Roughly speaking, this means that not only does the sequence not repeat itself, the sequence “mixes well” in a technical sense.

Ergodic functions have the property that “the time average equals the space average.” We’ll unpack what that means and illustrate it by simulation.

Suppose we pick a starting point x on the circle then repeatedly rotate it by the golden angle. Take an integrable function f on the circle and form the average of its values at the sequence of rotations. This is the time average. The space average is the integral of f over the circle, divided by the circumference of the circle. The ergodic theorem says that the time average equals the space average, except possibly for a set of starting values of measure zero.

More generally, let X be a measure space (like the unit circle) with measure μ, and let T be an ergodic transformation (like rotation by the golden angle). Then for almost all starting values x we have the following:

\lim_{n\to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) = \frac{1}{\mu(X)} \int_X f\, d\mu

Let’s do a simulation to see this in practice by running the following Python script.

        from numpy import pi, cos
        from scipy.constants import golden
        from scipy.integrate import quad
        
        golden_angle = 2*pi*golden**-2
        
        def T(x):
            return (x + golden_angle) % (2*pi)
        
        def time_average(x, f, T, n):
            s = 0
            for k in range(n):
                s += f(x)
                x = T(x)
            return s/n
        
        def space_average(f):
            integral = quad(f, 0, 2*pi)[0]
            return integral / (2*pi)
        
        f = lambda x: cos(x)**2
        N = 1000000
        
        print( time_average(0, f, T, N) )
        print( space_average(f) )

In this case we get 0.49999996 for the time average, and 0.5 for the space average. They’re not the same, but we only used a finite value of n; we didn’t take a limit. We should expect the two values to be close because n is large, but we shouldn’t expect them to be equal.

Update: The code and results were updated to fix a bug pointed out in the comments below.  I had written ... % 2*pi when I should have written ... % (2*pi). I assumed the modulo operator was lower precedence than multiplication, but it’s not. It was a coincidence that the buggy code was fairly accurate.

A friend of mine, a programmer with decades of experience, recently made a similar error. He’s a Clojure fan but was writing in C or some similar language. He rightfully pointed out that this kind of error simply cannot happen in Clojure. Lisps, including Clojure, don’t have operator precedence because they don’t have operators. They only have functions, and the order in which functions are called is made explicit with parentheses. The Python code x % 2*pi corresponds to (* (mod x 2) pi) in Clojure, and the Python code x % (2*pi) corresponds to (mod x (* 2 pi)).
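A two-line illustration of the precedence trap in Python:

from math import pi

x = 7.5
print(x % 2*pi)    # parsed as (x % 2) * pi
print(x % (2*pi))  # x reduced mod 2π, which is what was intended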

Related: Origin of the word “ergodic”

Saxophone with two octave keys

Last year I wrote a post about saxophone octave keys. I was surprised to discover, after playing saxophone for most of my life, that a saxophone has not one but two octave holes. Modern saxophones have one octave key, but two octave holes. Originally saxophones had a separate octave key for each octave hole; you had to use different octave keys for different notes.

I had not seen one of these old saxophones until Carlo Burkhardt sent me photos today of a Pierret Modele 5 Tenor Sax from around 1912.

Here’s a closeup of the octave keys.

two octave keys on 1912 tenor saxophone

And here’s a closeup of the bell where you can see the branding.

Pierret Modele 5 Tenor Sax circa 1912