József Kürschák proved in 1908 that the function

    f(m, n) = 1/m + 1/(m+1) + ⋯ + 1/n

is never an integer for 0 < m < n. In particular, the harmonic numbers

    H_n = f(1, n) = 1 + 1/2 + ⋯ + 1/n

are never integers for n > 1.
The function f(m, n) can get arbitrarily close to any integer value by taking m and n large enough, but it can never exactly equal an integer.
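To see this concretely, here is a small sketch (my addition, not part of the original analysis) that fixes m and advances n until the fractional part of f(m, n) drops below a tolerance. The loop must terminate: the partial sums grow without bound while the steps 1/n shrink to zero, so eventually some step lands within the tolerance of an integer.

```python
# Sketch: fix m and grow n until f(m, n) = 1/m + ... + 1/n is within
# tol of an integer. Once 1/n < tol, the first step that crosses an
# integer leaves a fractional part smaller than 1/n, hence below tol.
def first_near_integer(m, tol):
    total = 0.0
    n = m
    while True:
        total += 1/n
        frac = total - int(total)   # fractional part (total > 0)
        if n > m and frac < tol:
            return n, frac
        n += 1

n, frac = first_near_integer(2, 1e-3)
print(n, frac)
```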
For this post, I’d like to look at how close f(m, n) comes to an integer value when 0 < m < n ≤ N for some large N, say N = 100,000.
Computation strategy
The most naive way to approach this would be to compute f(m, n) from scratch for every pair (m, n) and keep track of which values were closest to integers. This would be wasteful since it would recompute the same terms over and over. Instead, we can take advantage of the fact that

    f(m, n+1) = f(m, n) + 1/(n+1).
Instead of working with f(m, n), it will be convenient to work with just its fractional part

    g(m, n) = f(m, n) − ⌊f(m, n)⌋

because it won't hurt to throw away the integer parts as we go. The values of m and n minimizing g(m, n) will be the values for which f(m, n) comes closest to an integer from above. The values of m and n maximizing g(m, n) will be the values for which f(m, n) comes closest to an integer from below.
We could calculate a matrix with all values of g(m, n), but this would take O(N²) memory. Instead, for each n we will calculate g(m, n), save the maximum and minimum values, then overwrite that memory with g(m, n + 1). This approach will only take O(N) memory.
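As a sanity check (my sketch, mirroring the strategy just described), the rolling update can be compared against direct summation for a small N:

```python
import numpy as np

# Verify the O(N)-memory rolling update against direct summation.
N = 50
g = np.zeros(N+1)      # g[m] holds the fractional part of f(m, n)
g[1] = 1.5             # f(1, 2) = 1 + 1/2

for n in range(3, N+1):
    g[n-1] = 1/(n-1)            # start the new sum f(n-1, n-1)
    g[1:n] += 1/n               # extend every sum by the term 1/n
    g[1:n] -= np.floor(g[1:n])  # discard integer parts

# Compare with frac(f(m, N)) computed directly for each m.
for m in range(1, N):
    direct = sum(1/k for k in range(m, N+1)) % 1
    assert abs(g[m] - direct) < 1e-12
print("rolling update matches direct summation")
```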
Floating point error
When we compute f(m, n) for large values of n, can we rely on floating point arithmetic?
If N = 100,000, then f(m, n) < 16 = 2⁴. The significand of a double-precision float has 53 bits, so we'd expect each addition to be correct to within an error of 2⁻⁴⁹, and so we'd expect our total error to be less than 2⁻⁴⁹ N.
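Putting a number on that bound (my arithmetic, not in the original post):

```python
# Pessimistic error bound: N additions, each wrong by at most half
# the spacing between adjacent doubles in [8, 16), which is 2**-49.
N = 100_000
per_add = 2.0**-49
bound = per_add * N
print(bound)   # roughly 1.8e-10
```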
Python code
The following code computes the values of g(m, n) closest to 0 and 1.
import numpy as np

N = 100_000
f_m = np.zeros(N+1)   # working memory: f_m[m] holds g(m, n) for the current n
# extremes of g(m, n) over m, for each n
min_fm = np.zeros(N+1)
max_fm = np.zeros(N+1)

n = 2
f_m[1] = 1.5          # f(1, 2) = 1 + 1/2

for n in range(3, N+1):
    f_m[n-1] = 1/(n-1)              # start the new sum f(n-1, n-1)
    f_m[1:n] += 1/n                 # extend every sum by the term 1/n
    f_m[1:n] -= np.floor(f_m[1:n])  # keep only fractional parts
    min_fm[n] = np.min(f_m[1:n])
    max_fm[n] = np.max(f_m[1:n])

print(min(min_fm[3:]))
print(max(max_fm))
This reports a minimum value of 5.2841953035454026e-11 and a maximum value of 0.9999999996613634. The minimum value is closer to 0 than our (pessimistic) error estimate, though the maximum value is further from 1 than our error estimate.
Modifying the code a bit shows that the minimum occurs at (m, n) = (27134, 73756), and this is the only input to g that comes within our error estimate of zero. So we can be confident that this is where the minimum occurs, though we can't be confident of its value. So next we turn to Mathematica to find the exact value of g(27134, 73756) as a rational number, a fraction with 32024 digits in the numerator and denominator, and convert it to a floating point number. The result agrees with our estimate in magnitude and to four significant figures.
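Incidentally, Mathematica isn't strictly necessary for the exact computation: Python's standard fractions module can evaluate g(m, n) as an exact rational too. A minimal sketch, shown on a small pair since the same code applied to (27134, 73756) runs far slower:

```python
from fractions import Fraction
import math

def g_exact(m, n):
    """Exact fractional part of f(m, n) = 1/m + ... + 1/n."""
    total = sum(Fraction(1, k) for k in range(m, n + 1))
    return total - math.floor(total)

# Small sanity check against the floating point computation.
m, n = 2, 10
exact = g_exact(m, n)                           # an exact Fraction
approx = sum(1/k for k in range(m, n + 1)) % 1  # a float
print(exact, float(exact), approx)
```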
So in summary,

    1/27134 + 1/27135 + ⋯ + 1/73756 ≈ 1

with an error on the order of 10⁻¹¹, and this is the closest value of f(m, n) to an integer for 0 < m < n ≤ 100,000.

