Pick a large number n. Divide n by each of the positive integers up to n and round the results up to the nearest integer. On average, how far do you round up?
Or in terms of probability: if n is large and fixed and r is chosen at random between 1 and n, what is the expected distance from n/r up to the nearest integer at or above it?
In symbols, the question above is asking for the approximate value of

\[ \frac{1}{n} \sum_{r=1}^{n} \left( \left\lceil \frac{n}{r} \right\rceil - \frac{n}{r} \right) \]

for large n, i.e. in the limit as n goes to ∞. Here ⌈x⌉ denotes the ceiling of x, the smallest integer greater than or equal to x.
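For example, with n = 5 the five quotients are 5, 5/2, 5/3, 5/4, and 1, the round-up amounts are 0, 1/2, 1/3, 3/4, and 0, and their average is 19/60 ≈ 0.3167.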
Let’s plot this as a function of n and see what it looks like. Here’s the Python code.
    import matplotlib.pyplot as plt
    from numpy import ceil, arange

    # Average amount of round-up when dividing n by 1, 2, ..., n.
    # The r = n term is zero, so stopping the range at n - 1 doesn't change the sum.
    def f(n):
        return sum(ceil(n/r) - n/r for r in range(1, n))/n

    x = arange(1, 100)
    y = [f(n) for n in x]
    plt.plot(x, y)
    plt.show()
And here’s the result.
It appears the graph may be converging to some value, and in fact it is. Charles de la Vallée Poussin proved in 1898 that the limiting value is the Euler–Mascheroni constant γ = 0.5772…. This constant is the limiting difference between the nth harmonic number and log n, i.e.

\[ \gamma = \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} - \log n \right). \]
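Here's a quick numerical sanity check of that limit (my addition, using numpy's built-in euler_gamma constant):

    import numpy as np

    n = 10**6
    H_n = np.sum(1.0 / np.arange(1, n + 1))  # nth harmonic number
    # H_n - log(n) approaches gamma from above; the error is roughly 1/(2n).
    print(H_n - np.log(n), np.euler_gamma)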
We can add a horizontal line to our plot to see how well the graph seems to match γ. To do this we need to import the constant euler_gamma from numpy and add the line

    plt.axhline(y=euler_gamma, linestyle=":")

after the plot command. When we do, this is what we see.
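For reference, here is one way the full plotting section might look after that change; the only new lines relative to the code above are the euler_gamma import and the axhline call:

    import matplotlib.pyplot as plt
    from numpy import ceil, arange, euler_gamma

    def f(n):
        return sum(ceil(n/r) - n/r for r in range(1, n))/n

    x = arange(1, 100)
    y = [f(n) for n in x]
    plt.plot(x, y)
    plt.axhline(y=euler_gamma, linestyle=":")  # dotted reference line at gamma
    plt.show()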
It looks like the plot is converging to a value slightly less than γ. Apparently the convergence is very slow. When we go out to 10,000, the plot is closer to being centered around γ, but it still seems to lie below γ more often than above.
If we evaluate our function at n = 1,000,000, we get 0.577258… while γ = 0.577215….
At n = 10,000,000 we get 0.577218…. So taking 100 times as many terms in our sum gives us one extra correct decimal place, as we'd expect of a random process, since convergence usually goes like 1/√n.
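Evaluating f in pure Python gets slow at these sizes. A vectorized variant (my sketch, not from the post; f_vec is a made-up name) computes the same sum with numpy array operations:

    import numpy as np

    def f_vec(n):
        # Same quantity as f above, computed with whole-array operations.
        # Needs O(n) memory: about 80 MB per array at n = 10**7.
        r = np.arange(1, n + 1)
        q = n / r
        return np.sum(np.ceil(q) - q) / n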
> as we’d expect of a random process since convergence usually goes like 1/√n

And what exactly is random with respect to this process?
It’s a deterministic process, but it behaves analogously to a random process. This often happens in number theory.
I’m confused…

You said that with n1 = 1,000,000 you got 0.577258… while γ = 0.577215…,
and with n2 = 10,000,000 you got 0.577218…. “So taking 100 times as many terms in our sum gives us one extra correct decimal place”… but n2 = 10×n1, not 100×n1. Or did I misunderstand?
David. Good catch. I don’t remember now, but perhaps I meant n2 = 100,000,000.
Interesting that for highly composite numbers like 10080 the result is smaller, because many of the r divide n evenly, so there's no rounding up at all. Neat!
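This effect is easy to check with the f defined above: each divisor r of n contributes 0 to the sum, and 10080 = 2⁵·3²·5·7 has 72 divisors. A quick comparison against its neighbors (my snippet, output not shown):

    # Compare f at a highly composite number and its immediate neighbors.
    for n in (10079, 10080, 10081):
        print(n, f(n))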