A function *f*: *X* → *X* is **measure-preserving** if each iteration of *f* sends the same amount of stuff into a given set. More precisely, given a measure μ and any μ-measurable set *E* with μ(*E*) > 0, we have

μ(*E*) = μ( *f*^{ –1}(*E*) ).

You can read the right side of the equation above as “the measure of the set of points that *f* maps into *E*.” You can apply this condition repeatedly to see that the measure of the set of points mapped into *E* after *n* iterations is still just the measure of *E*.

If *X* is a probability space, i.e. μ(*X*) = 1, then you could interpret the definition of measure-preserving to mean that the probability that a point ends up in *E* after *n* iterations is independent of *n*. We’ll illustrate this below with a simulation.

Let *X* be the half-open unit interval (0, 1] and let *f* be the Gauss map, i.e.

*f*(*x*) = 1/*x* – [1/*x*]

where [*z*] is the integer part of *z*. The function *f* is measure-preserving, though not for the usual Lebesgue measure. Instead it preserves the Gauss measure

μ(*E*) = (1/log 2) ∫_*E* *dx* / (1 + *x*).
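As a quick numerical check (a sketch, not part of the original post): the preimage of an interval [*a*, *b*] under the Gauss map is the union over *n* ≥ 1 of the intervals [1/(*n* + *b*), 1/(*n* + *a*)], and the Gauss measures μ([*a*, *b*]) = (log(1 + *b*) – log(1 + *a*))/log 2 of these pieces sum back to μ([*a*, *b*]).

```python
from math import log

def mu(a, b):
    # Gauss measure of the interval [a, b]
    return (log(1.0 + b) - log(1.0 + a)) / log(2.0)

a, b = 0.1, 0.25

# f^{-1}([a, b]) is the disjoint union over n >= 1 of [1/(n+b), 1/(n+a)];
# sum the Gauss measures of the first 100,000 of these intervals
preimage = sum(mu(1.0/(n + b), 1.0/(n + a)) for n in range(1, 100001))

print(preimage)   # agrees with mu(a, b) up to the truncated tail
print(mu(a, b))
```

The sum telescopes, so the truncation error after 100,000 terms is on the order of 10^{–6}.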

Let’s take as our set *E* an interval [*a*, *b*] and test via simulation whether the probability of landing in *E* after *n* iterations really is just the measure of *E*, independent of *n*.

We can’t just generate points uniformly in the interval (0, 1]. We have to generate the points so that the probability of a point coming from a set *E* is μ(*E*). That means the PDF of the distribution must be *p*(*x*) = 1 / (log(2) (1 + *x*)). The corresponding CDF is *F*(*x*) = log(1 + *x*)/log 2, so the inverse-CDF method generates points with this density via *x* = 2^*u* – 1, where *u* is uniform on [0, 1].
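Before running the full simulation, the sampler can be sanity-checked on its own — effectively the *n* = 0 case (a minimal sketch; the seed is arbitrary, just for reproducibility):

```python
from math import log
from random import random, seed

seed(42)  # arbitrary seed, for reproducibility only

def mu(a, b):
    # Gauss measure of [a, b]
    return (log(1.0 + b) - log(1.0 + a)) / log(2.0)

def sample():
    # inverse-CDF method: F(x) = log(1+x)/log 2, so F^{-1}(u) = 2**u - 1
    return 2.0**random() - 1.0

a, b = 0.1, 0.25
N = 100000
freq = sum(1 for _ in range(N) if a < sample() < b) / N
print(freq)      # close to mu(a, b)
print(mu(a, b))
```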

```python
from math import log, floor
from random import random

def gauss_map(x):
    y = 1.0/x
    return y - floor(y)

# iterate the Gauss map n times
def iterated_gauss_map(x, n):
    while n > 0:
        x = gauss_map(x)
        n = n - 1
    return x

# measure mu( [a, b] )
def mu(a, b):
    return (log(1.0+b) - log(1.0+a))/log(2.0)

# generate samples with Prob(x in E) = mu(E)
def sample():
    u = random()
    return 2.0**u - 1.0

def simulate(num_points, num_iterations, a, b):
    count = 0
    for _ in range(num_points):
        x = sample()
        y = iterated_gauss_map(x, num_iterations)
        if a < y < b:
            count += 1
    return count / num_points

# Change these parameters however you like
a, b = 0.1, 0.25
N = 1000000

exact = mu(a, b)
print("Exact probability:", exact)
print("Simulated probability after n iterations")
for n in range(1, 10):
    simulated = simulate(N, n, a, b)
    print("n =", n, ":", simulated)
```

Here’s the output I got:

```
Exact probability: 0.18442457113742736
Simulated probability after n iterations
n = 1 : 0.184329
n = 2 : 0.183969
n = 3 : 0.184233
n = 4 : 0.184322
n = 5 : 0.184439
n = 6 : 0.184059
n = 7 : 0.184602
n = 8 : 0.183877
n = 9 : 0.184834
```

With 1,000,000 samples, we expect the results to be the same to about 3 decimal places, which is what we see above.
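The agreement is about what the binomial standard error predicts (a back-of-the-envelope check, not from the original post):

```python
from math import sqrt

p = 0.18442457113742736  # exact probability from the run above
N = 1000000              # number of samples per run

# standard error of a binomial proportion: sqrt(p(1-p)/N)
se = sqrt(p * (1.0 - p) / N)
print(se)  # about 0.0004, i.e. agreement to roughly three decimal places
```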

**Related post**: Irrational rotations are ergodic.

(A transformation *f* is ergodic if it is measure-preserving and the only sets *E* with *f*^{ –1}(*E*) = *E* are those with measure 0 or full measure. Rational rotations are measure-preserving but not ergodic. The Gauss map above is ergodic.)
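To see why rational rotations fail to be ergodic: every orbit is periodic, so the union of small intervals around an orbit is an invariant set of measure strictly between 0 and 1. A minimal sketch with rotation by 1/3 (an example not in the original post):

```python
def rotate(x, alpha):
    # rotation of the unit interval, viewed as a circle, by alpha
    return (x + alpha) % 1.0

# Rotation by the rational 1/3: every orbit has period 3, so the union of
# small intervals around {x, x + 1/3, x + 2/3} is invariant, with measure
# strictly between 0 and 1 -- which rules out ergodicity.
x0 = 0.05
x = x0
for _ in range(3):
    x = rotate(x, 1.0/3.0)
print(x0, x)  # the orbit returns to (essentially) its starting point
```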

Great post as usual. There is an entire book on the subject.

* Computational Ergodic Theory, Choe, Geon Ho (http://www.springer.com/de/book/9783540231219)

He has lots of Maple code there.

Also, I was wondering, why did you call $f(x) = 1/x- \lfloor 1/x \rfloor$ the Gauss map? Is it a well-known map in the literature? Any reference for this?

Viana and Oliveira call that function the Gauss map in their book “Foundations of Ergodic Theory.” It’s closely related to continued fractions. Presumably the name comes from Gauss’s work on continued fractions.
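For instance (a small sketch, not from the original discussion), the Gauss map acts as the shift on continued fraction expansions: if *x* = [0; *a*₁, *a*₂, *a*₃, …], then *f*(*x*) = [0; *a*₂, *a*₃, …].

```python
from math import floor

def gauss_map(x):
    y = 1.0/x
    return y - floor(y)

def cf_digits(x, k):
    # first k continued fraction digits of x in (0, 1):
    # each digit is floor(1/x); then apply the Gauss map and repeat
    digits = []
    for _ in range(k):
        digits.append(floor(1.0/x))
        x = gauss_map(x)
    return digits

x = 0.7
print(cf_digits(x, 4))
print(cf_digits(gauss_map(x), 3))  # the same digits with the first one dropped
```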

Gauss had his fingers in so many pies that there are probably many functions called a “Gauss map.” The one that comes to mind is the one from differential geometry, mapping a point on a surface to a point on the unit sphere.