Reciprocals of primes

Here’s an interesting little tidbit:

For any prime p except 2 and 5, the decimal expansion of 1/p repeats with a period that divides p-1.

The period could be as large as p-1, but no larger. If it’s less than p-1, then it’s a divisor of p-1.

Here are a few examples.

1/3 = 0.33…
1/7 = 0.142857142857…
1/11 = 0.0909…
1/13 = 0.076923076923…
1/17 = 0.05882352941176470588235294117647…
1/19 = 0.052631578947368421052631578947368421…
1/23 = 0.04347826086956521739130434782608695652173913…

Here’s a table summarizing the periods above.

|-------+--------|
| prime | period |
|-------+--------|
|     3 |      1 |
|     7 |      6 |
|    11 |      2 |
|    13 |      6 |
|    17 |     16 |
|    19 |     18 |
|    23 |     22 |
|-------+--------|
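
If you'd like to reproduce the table, the period of 1/p equals the multiplicative order of 10 mod p. Here's a minimal sketch using SymPy's n_order function.

    from sympy import primerange
    from sympy.ntheory import n_order

    for p in primerange(3, 30):
        if p == 5:
            continue  # 1/5 terminates; the claim excludes 2 and 5
        period = n_order(10, p)  # multiplicative order of 10 mod p
        print(p, period, (p - 1) % period == 0)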

For a proof of the claims above, and more general results, see Periods of Fractions.

Probability of coprime sets

The latest blog post from Gödel’s Lost Letter and P=NP looks at the problem of finding relatively prime pairs of large numbers. In particular, they want a deterministic algorithm. They mention in passing that the probability of a set of k large integers being relatively prime (coprime) is 1/ζ(k) where ζ is the Riemann zeta function. This blog post will look closer at that statement.

How large is large?

What does it mean that the numbers are large? That they are large enough that the asymptotic result given by the zeta function is accurate enough. We’ll explore this with simulation code shortly.

The exact statement is that in the limit as N goes to infinity, the probability of k integers chosen uniformly from 1 to N being coprime converges to 1/ζ(k). I won’t go into the proof here, but in a nutshell, it falls out of Euler’s product formula for the zeta function.
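
The heuristic is this: a fixed prime p divides all k of the integers with probability roughly p^{-k}, these events are approximately independent across primes, and Euler's product formula turns the resulting product into a zeta value:

\prod_p \left(1 - \frac{1}{p^k}\right) = \frac{1}{\zeta(k)}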

What does it mean for a set of numbers to be coprime?

The probability above seemed wrong to me at first. The function ζ(k) decreases as k increases, and so its reciprocal increases. That means it’s more likely that a set of numbers is coprime as the size of the set increases. But the larger the set of numbers, the more likely it is that two will have a common factor, so shouldn’t the probability that they’re coprime go down with k, not up?

The resolution to the apparent contradiction is that I had the wrong idea of coprime in mind. Two integers are coprime if they have no prime factors in common. How do you generalize that to more than two integers? You could define a set of numbers to be coprime if every pair of numbers in the set is coprime. That’s the definition I had in mind. But that’s not the standard definition. The right definition here is that the greatest common divisor of the set is 1. For example, (6, 14, 21) would be coprime because no single prime divides all three numbers, even though no pair of numbers from the set is coprime.
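
Here's a quick check of that example, using Python's math.gcd and functools.reduce (a minimal sketch).

    from math import gcd
    from functools import reduce

    print(reduce(gcd, (6, 14, 21)))             # 1: the set is coprime
    print(gcd(6, 14), gcd(6, 21), gcd(14, 21))  # 2 3 7: no pair is coprime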

Python simulation

Let’s write some Python code to try this out. We’ll randomly generate some sets of large numbers and see how often they’re coprime.

How do you generate large random integers in Python? You can use the function getrandbits to generate numbers as large as you like. In the code below we’ll generate 128-bit integers.

How do you compute the greatest common divisor in Python? There’s a library function gcd, but it only takes two integers. We’ll use the reduce function to fold gcd over a list of integers.

How do you compute the zeta function in Python? SciPy doesn’t have an implementation of ζ(x) per se, but it does have an implementation of ζ(x)-1. Why not just implement ζ itself? Because for large x, ζ(x) is close to 1, so more precision can be saved by computing ζ – 1.

Putting these pieces together, here’s code to randomly generate triples of 128-bit integers, see how often they’re coprime, and compare the result to 1/ζ(3).

    from random import getrandbits
    from math import gcd
    from functools import reduce
    from scipy.special import zetac
    
    def riemann_zeta(x):
        return zetac(x) + 1
    
    k = 3
    size = 128
    N = 10000
    
    coprime_count = 0
    for _ in range(N):
        x = [getrandbits(size) for _ in range(k)]
        if reduce(gcd, x) == 1:
            coprime_count += 1
    print(coprime_count/N)
    print(1/riemann_zeta(k))

When I ran this I got

    0.8363
    0.831907372581

Simulation agrees with theory to two decimal places, which is just what we should expect from 10,000 repetitions. (We’d expect error on the order of 1/√N.)

Here’s what I got when I changed k to 4:

    0.9234
    0.923938402922

And here’s what I got for k set to 5:

    0.9632
    0.964387340429


Sums of palindromes

Every positive integer can be written as the sum of three palindromes, numbers that remain the same when their digits are reversed. For example, 389 = 11 + 55 + 323. This holds not just for base 10 but for any base b ≥ 5.

[Update: a new paper on arXiv extends this to b = 3 and 4 as well. Base 2 requires four palindromes.]

The result and algorithms for finding the palindromes were published online last August and are in the most recent print issue of Mathematics of Computation.

Javier Cilleruelo, Florian Luca and Lewis Baxter. Every positive integer is a sum of three palindromes. DOI: https://doi.org/10.1090/mcom/3221

Fibonacci / Binomial coefficient identity

Here’s an identity relating Fibonacci numbers and binomial coefficients.

F_{n+1} = {n \choose 0} + {n-1 \choose 1} + {n-2\choose 2} + \cdots

You can think of it as a finite sum or an infinite sum: a binomial coefficient is zero when its top number is a nonnegative integer smaller than its bottom number. See the general definition of binomial coefficients.
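
Here's a small sketch that checks the identity numerically, using the convention F_1 = F_2 = 1 and math.comb (Python 3.8+).

    from math import comb

    def fib(n):
        a, b = 1, 1
        for _ in range(n - 1):
            a, b = b, a + b
        return a

    for n in range(1, 20):
        total = sum(comb(n - j, j) for j in range(n // 2 + 1))
        assert total == fib(n + 1)
    print("identity checks out for n = 1 through 19")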

Source: Sam E. Ganis. Notes on the Fibonacci Sequence. The American Mathematical Monthly, Vol. 66, No. 2 (Feb., 1959), pp. 129-130


New prime number record: 50th Mersenne prime

A new record for the largest known prime was announced yesterday:

2^{77,232,917} - 1

This number has 23,249,425 digits when written in base 10.

In base 2, 2^p – 1 is a sequence of p ones. For example, 31 = 2^5 – 1, which is 11111 in binary. So in binary, the new record prime is a string of 77,232,917 ones.

77,232,917 ones

You can convert a binary number to hexadecimal (base 16) by starting at the right end and converting blocks of 4 bits into hexadecimal. For example, to convert 101101111 to hex, we break it into three blocks: 1, 0110, and 1111. These blocks convert to 1, 6, and F, and so 101101111 in binary corresponds to 16F in hex.

Now 77,232,917 = 19,308,229 * 4 + 1, so if we break our string of 77,232,917 ones into groups of four, we have a single bit followed by 19,308,229 groups of 4. This means that in hexadecimal, the new prime record is 1FFF…FFF, a 1 followed by 19,308,229 F’s.
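
You can check the pattern on a smaller Mersenne prime, say 2^13 – 1, where 13 = 3 * 4 + 1:

    >>> hex(2**13 - 1)
    '0x1fff'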

19,308,229 Fs

The new record is the 50th Mersenne prime. A Mersenne prime is a prime number that is one less than a power of 2, i.e. of the form 2^p – 1. It turns out p must be prime before 2^p – 1 can be prime. In the case of the new record, 77,232,917 is prime.

It is unknown whether there are infinitely many Mersenne primes. But now we know there are at least 50 of them.

All the recent prime number records have been Mersenne primes because there is an algorithm for testing whether a number of the special form 2^p – 1 is prime, the Lucas-Lehmer test.
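
Here's a minimal sketch of the Lucas-Lehmer test for odd prime exponents p.

    def lucas_lehmer(p):
        # 2**p - 1 is prime iff s ends up congruent to 0 mod 2**p - 1
        M = 2**p - 1
        s = 4
        for _ in range(p - 2):
            s = (s*s - 2) % M
        return s == 0

    print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
    # [3, 5, 7, 13, 17, 19] -- note 2**11 - 1 = 2047 = 23 * 89 is composite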

Mersenne primes are closely related to perfect numbers. A number n is perfect if the sum of the numbers less than n that divide n equals n. For example, 28 is divisible by 1, 2, 4, 7, and 14, and 1 + 2 + 4 + 7 + 14 = 28.

Finding a new Mersenne prime means finding a new perfect number. If m is a Mersenne prime, then m(m + 1)/2 is a perfect number. Conversely, if n is an even perfect number, n has the form m(m + 1)/2 where m is a Mersenne prime. Nobody knows whether odd perfect numbers exist. Since we don’t know whether there are infinitely many Mersenne primes, we don’t know whether there are infinitely many perfect numbers. But there are at least 50 of them.
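
Here's a quick sketch verifying that m(m + 1)/2 is perfect for the first few Mersenne primes m.

    def is_perfect(n):
        return sum(d for d in range(1, n) if n % d == 0) == n

    for m in (3, 7, 31, 127):        # the first four Mersenne primes
        n = m*(m + 1)//2
        print(m, n, is_perfect(n))   # 6, 28, 496, and 8128 are all perfect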


Distribution of Fibonacci numbers mod m

The last digits of Fibonacci numbers repeat with period 60. This is something I’ve written about before.

The 61st Fibonacci number is 2504730781961. The 62nd is 4052739537881. Since these end in 1 and 1, the 63rd Fibonacci number must end in 2, etc. and so the pattern starts over.

It’s not obvious that the cycle should have length 60, but it is fairly easy to see that there must be a cycle.

In any base, the last digits must cycle. The length of these cycles varies erratically.

In this post I want to ask a different question: how often do Fibonacci numbers take on each of the possible last digits in each base? In other words, how are the Fibonacci numbers distributed mod m for various values of m?

I haven’t tried to prove any theorems. I’ve just run some examples using the following Python code.

    def histogram(base):
        # Walk through one full cycle of consecutive Fibonacci pairs mod `base`,
        # starting from the pair (1, 2) and stopping when the pair (1, 1) recurs,
        # counting how often each residue appears along the way.
        prev = 1
        F = 2

        counts = [0]*base
        counts[F] += 1

        while F != 1 or prev != 1:
            temp = F
            F = (F + prev) % base
            counts[F] += 1
            prev = temp

        return counts

In base 10, the last digits of Fibonacci numbers have period 60. Do all digits appear in the cycle? Do they all appear equally often?

Yes and no. Here are the results for base 10:

    >>> histogram(10)
    [4, 8, 4, 8, 4, 8, 4, 8, 4, 8]

Each even digit appears 4 times and each odd digit appears 8 times.

What happens if we look at the last two digits, i.e. if we look at Fibonacci numbers mod 100?

    >>> histogram(100)
    [2, 6, 2, 2, …, 2, 6, 2, 2]

Each two-digit number n appears six times if n = 1 mod 4. Otherwise it appears two times.

What about the last three digits? Now we see some zeros. For example, no Fibonacci number is congruent to 4 or 6 mod 1000. (Or mod 8.)

    >>> histogram(1000)
    [2, 3, 2, 1, 0, 3, 0, 1, …, 2, 3, 2, 1, 0, 3, 0, 1]

Here’s something curious. The Fibonacci numbers are exactly evenly distributed mod 5.

    >>> histogram(5)
    [4, 4, 4, 4, 4]

The same is apparently true for all powers of 5. Not only are all the frequencies the same, they’re all 4’s. I’ve tested this for powers of 5 up to 5^10. And conversely, it seems the Fibonacci numbers are not evenly distributed mod m unless m is a power of 5. I’ve tested this for m up to 100,000.
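
For example, using the histogram function above, the cycle mod 25 has length 100 and every one of the 25 residues shows up exactly four times:

    >>> h = histogram(25)
    >>> len(h), sum(h), set(h)
    (25, 100, {4})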

Conjecture: The Fibonacci numbers are uniformly distributed mod m if and only if m is a power of 5.

Update: The conjecture is correct, and was proved in 1972 by Niederreiter.

 

Recent exponential sums

The exponential sum of the day draws a line between consecutive partial sums of

\sum_{n=0}^N \exp\left( 2\pi i \left( \frac{n}{m} + \frac{n^2}{d} + \frac{n^3}{y} \right ) \right )

where m, d, and y are the current month, day, and two-digit year. The four most recent images show how different these plots can be.

[Exponential sum plots for the four most recent dates]

These images are from 10/30/17, 10/31/17, 11/1/17, and 11/2/17.

Consecutive dates often produce very different images for a couple reasons. First, consecutive integers are relatively prime. From a number theoretic perspective, 30 and 31 are very different, for example. (This touches on the motivation for p-adic numbers: put a different metric on integers, one based on their prime factorization.)

The other reason consecutive dates produce qualitatively different images is that you might roll over a month or year, such as going from October (10) to November (11). You’ll see a big change when we roll over from 2017 to 2018.

The partial sums are periodic [1] with period lcm(m, d, y). The image for 10/31/17 above has the most points because 10, 31, and 17 are relatively prime. It’s also true that 11, 2, and 17 are relatively prime, but these are smaller numbers.

You could think of the month, day, and year components of the sum as three different gears. The sums repeat when all three gears return to their initial positions. In the image for yesterday, 11/1/17, the middle gear is effectively not there.

[1] The terms of the sum are periodic with period lcm(m, d, y). The partial sums are periodic if the sum over one full period is zero, and in that case the lcm is a period of the partial sums.

Poisson distribution and prime numbers

Let ω(n) be the number of distinct prime factors of n. A theorem of Landau says that for N large, if a positive integer n is selected at random from the integers less than N, then ω(n) − 1 has approximately a Poisson(log log N) distribution. This statement holds in the limit as N goes to infinity.

Apparently N has to be extremely large before the result approximately holds. I ran an experiment for N = 10,000,000 and the fit is not great.

Here’s the data:

|------+----------+-----------|
| ω(n) | Observed | Predicted |
|------+----------+-----------|
|    1 |   665134 |  620420.6 |
|    2 |  2536837 | 1724733.8 |
|    3 |  3642766 | 2397330.6 |
|    4 |  2389433 | 2221480.4 |
|    5 |   691209 | 1543897.1 |
|    6 |    72902 |  858389.0 |
|    7 |     1716 |  397712.0 |
|    8 |        1 |  157945.2 |
|    9 |        0 |   54884.8 |
|   10 |        0 |   16952.9 |
|------+----------+-----------|

And here’s a plot:

[Bar chart of actual vs. predicted counts]

I found the above theorem in these notes. It’s possible I’ve misunderstood something or there’s an error in the notes. I haven’t found a more formal reference on the theorem yet.

Update: According to the results in the comments below, it seems that the notes I cited may be wrong. The notes say “distinct prime factors”, i.e. ω(n), while numerical results suggest they meant to say Ω(n), the number of prime factors counted with multiplicity. I verified the numbers given below. Here’s a plot.

 

Here’s the Python code I used to generate the table. (To match the code for the revised graph, replace omega, which calculates ω(n), with the SymPy function primeomega, which calculates Ω(n).)

    from sympy import factorint
    from numpy import empty, log, log2
    from scipy.stats import poisson

    N = 10000000

    def omega(n):
        return len(factorint(n))

    table = empty(N, int)
    table[0] = table[1] = 0
    for n in range(2, N):
        table[n] = omega(n)

    # upper bound on the largest value of omega(n) for n < N.
    u = int(log2(N))

    for n in range(1, u+1):
        print(n, len(table[table==n]), poisson(log(log(N))).pmf(n-1)*N)


Finding numbers in pi

You can find any integer you want as a substring of the digits in π. (Probably. See footnote for details.) So you could encode a number by reporting where it appears.

If you want to encode a single digit, the best you can do is break even: it takes at least one digit to specify the location of another digit. For example, 9 is the 5th digit of π. But 7 doesn’t appear until the 13th digit. In that case it would take 2 digits to encode 1 digit.

What if we work in another base b? Nothing changes as long as we also describe positions in base b. But there’s a possibility of compression if we look for digits of base b but describe their position using base p numbers where p < b. For example, 15 is the 2nd base-100 “digit” in π.
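
Here's a minimal sketch of these look-ups, generating digits of π with SymPy; the position function below is just for illustration.

    from sympy import N, pi

    digits = str(N(pi, 110))[2:]   # decimal digits of pi, dropping the "3."

    def position(s):
        # 1-based position of the first occurrence of the string s
        p = digits.find(s)
        return p + 1 if p >= 0 else -1

    print(position("9"))   # 5
    print(position("7"))   # 13

    pairs = [digits[i:i+2] for i in range(0, 100, 2)]   # base-100 "digits"
    print(pairs.index("15") + 1)   # 2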

Blocks of digits

For the rest of this post we’ll describe blocks of k digits by their position as a base 10 number. That is, we’ll use p = 10 and b = 10k in the notation above.

There are ten 2-digit numbers that can be described by a 1-digit position in π: 14, 15, 32, …, 92. There are 57 more 2-digit numbers that can be described by a 2-digit position. The other 33 2-digit numbers require 3 digits to describe their position.

So if we encode a 2-digit number by its position in pairs of digits in π, there’s a 1/10 chance that we’ll only need 1 digit, i.e. a 10% chance of compression. The chance that we’ll break even by needing two digits is 57/100, and the chance that we’ll need three digits, i.e. more digits than what we’re encoding, is 33/100. On average, we expect to use 2.23 digits to encode 2 digits.

If we look at 3-digit numbers, there are 10 that can be described by a 1-digit position, 83 that require 2 digits, 543 that require 3 digits, and 364 that require 4 digits. So on average we’d need 3.261 digits to encode 3 digits. Our “compression” scheme makes things about 8% larger on average.

With 4-digit numbers, we need up to 5 digits to describe the position of each, and on average we need 4.2611 digits.

With 5-digit numbers, we need at least 7 digits to describe each position. As I write this, my program is still running. The positions of 99,994 five-digit numbers can be described by numbers with at most 6 digits. The remaining 6 need at least 7 digits. Assuming they need exactly seven digits, we need an average of 5.2623 digits to encode 5-digit numbers by their position in π.

Compression scheme efficiency

If we assume the numbers we wish to encode are uniformly distributed, the representation as a location in π will take more digits than the number itself, on average. But all compression algorithms expand their contents on average if by “average” you mean picking all possible inputs with the same probability. Uniform randomness is not compressible. It takes n bits to describe n random bits.

Compression methods are practical because their inputs are not completely random. For example, English prose compresses nicely because it’s not random.

If we had some reason to suspect that a number came from a block of digits in π, and one not too far out, then the scheme here could be a form of compression. The possibility of efficient compression comes from an assumption about our input.

Extensions

You could extend the ideas in this post theoretically or empirically, i.e. you could write a theorem or a program.

Suppose you look at blocks of k base-b digits whose positions are described as base-b numbers. The case b = 2 seems particularly interesting. What is the compression efficiency of the method as k varies? What is it for a particular k, and what is it in the limit as k goes to infinity?

You could look at this empirically for digits of π, or some other number, or you could try to prove it theoretically for digits of a normal number.

Footnote on normal numbers

It might not be true that you can find any integer in the digits of π, though we know it’s true for numbers of the size we’re considering here. You can find any number in the digits of π if π is a “normal number.” Roughly speaking, a number is normal if every pattern of digits appears with the frequency you’d expect. That is, 1/10 of the digits in its decimal expansion will be 7’s, for example. Or if you group the digits in pairs, thinking of the number as being written in base 100, 1/100 of the pairs will be 42’s, etc.

Almost all numbers are normal, so in a sense, the probability that π is normal is 1. Even though almost all numbers are normal, nobody has been able to prove that any familiar number is normal: π, e, √2, etc.

It’s possible that every integer appears in the decimal expansion of π, but not necessarily with the expected probability. This would be a weaker condition than normality. Maybe this has been proven for π, but I don’t know. It would be strange if every number appeared in the digits of π but with some bias.

Empirically testing the Chowla conjecture

Terry Tao’s most recent blog post looks at the Chowla conjecture theoretically. This post looks at the same conjecture empirically using Python. (Which is much easier!)

The Liouville function λ(n) is (−1)^Ω(n), where Ω(n) is the number of prime factors of n counted with multiplicity. So, for example, Ω(9) = 2 because even though 9 has only one distinct prime factor, that factor counts twice.

Given some set of k strides h_1, h_2, …, h_k, define

f(n) = λ(n + h_1) λ(n + h_2) … λ(n + h_k).

The Chowla conjecture says that the average of the first N values of f(n) tends to zero as N goes to infinity. This has been proven for k = 1 but not for larger k.

If f(n) acts like a Bernoulli random variable, our experiments might increase our confidence in the conjecture, but they wouldn’t prove anything. Unexpected results wouldn’t prove anything either, but a departure from random behavior might help find a counterexample if the conjecture is false.

We’re going to be evaluating the Liouville function repeatedly at the same arguments, so it will save some compute time if we tabulate its values. This will also let us use some compact Python notation to average f. We’ll need to tabulate the function up to N + h_k.

In the code below, maxstride is an upper bound on the strides h_k we may look at. SymPy has a function primeomega that calculates Ω(n), so we might as well use it. If you wanted to use a very large value of N, you might want to fill the array of Liouville function values using a more direct approach that avoids all the factoring implicit in calls to primeomega.

    from sympy import primeomega
    from numpy import empty

    N = 10000
    maxstride = 100

    liouville = empty(N + maxstride)
    liouville[0] = 1
    for n in range(1, len(liouville)):
        liouville[n] = (-1)**primeomega(n)

The following code looks at the Chowla conjecture for h_1 = 0 and h_2 ranging over 1, 2, …, 99.

    average = empty(maxstride-1)
    for s in range(1, maxstride):
        average[s-1] = (liouville[0:N] * liouville[s:N+s]).sum()/N

If the Liouville function is distributed like a random variable taking on −1 and 1 with equal probability, we’d expect the averages to vary like samples from a normal distribution with mean 0 and variance 1/(2N).

    print(average.mean())
    print(average.std())
    print( (2*N)**-0.5 )

This returns

    0.00141
    0.00851
    0.00707

and so the means are indeed near zero, and the standard deviation of the samples is about what we’d expect.

What if we replace Ω(n), the number of prime factors with multiplicity, with ω(n), the number of distinct prime factors? The averages above are still small, but the sample variance is about twice as big as before. I’ve seen this pattern with several different large values of N, so I suspect there’s something going on here.

(I didn’t see a function in SymPy corresponding to ω(n), but you can create your own with len(factorint(n)).)
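
For example, the tabulation step above could be replaced with something like this sketch.

    from sympy import factorint

    def omega(n):
        return len(factorint(n))   # number of distinct prime factors

    for n in range(1, len(liouville)):
        liouville[n] = (-1)**omega(n)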

Exponential sums make pretty pictures

Exponential sums are a specialized area of math that studies series with terms that are complex exponentials. Estimating such sums is delicate work. General estimation techniques are ham-fisted compared to what is possible with techniques specialized for these particular sums. Exponential sums are closely related to Fourier analysis and number theory.

Exponential sums also make pretty pictures. If you make a scatter plot of the sequence of partial sums you can get surprising shapes. This is related to the trickiness of estimating such sums: the partial sums don’t simply monotonically converge to a limit.

The exponential sum page at UNSW suggests playing around with polynomials with dates in the denominator. If we take that suggestion with today’s date, we get the curve below:

[Plot of the partial sums for f(n) = n/10 + n**2/7 + n**3/17]

These are the partial sums of exp(2πi f(n)) where f(n) = n/10 + n²/7 + n³/17.

[Update: You can get an image each day for the current day’s date here.]

Here’s the code that produced the image.

    import matplotlib.pyplot as plt
    from numpy import array, pi, exp, log

    N = 12000
    def f(n):
        return n/10 + n**2/7 + n**3/17 

    z = array( [exp( 2*pi*1j*f(n) ) for n in range(3, N+3)] )
    z = z.cumsum()

    plt.plot(z.real, z.imag, color='#333399')
    plt.gca().set_aspect(1)
    plt.show()

If we use logarithms, we get interesting spirals. Here f(n) = log(n)^4.1.

[Plot of the partial sums for f(n) = log(n)**4.1]

And we can mix polynomials with logarithms. Here f(n) = log(n) + n²/100.

[Plot of the partial sums for f(n) = log(n) + n**2/100]

In this last image, I reduced the number of points from 12,000 to 1200. With a large number of points the spiral nature dominates and you don’t see the swirls along the spiral as clearly.

 

Pascal’s triangle and Fermat’s little theorem

I was listening to My Favorite Theorem when Jordan Ellenberg said something in passing about proving Fermat’s little theorem from Pascal’s triangle. I wasn’t familiar with that, and fortunately Evelyn Lamb wasn’t either and so she asked him to explain.

Fermat’s little theorem says that for any prime p, then for any integer a,

a^p = a (mod p).

That is, a^p and a have the same remainder when you divide by p. Jordan Ellenberg picked the special case of a = 2 as his favorite theorem for the purpose of the podcast. And it’s this special case that can be proved from Pascal’s triangle.

The pth row of Pascal’s triangle consists of the coefficients of (x + y)^p when expanded using the binomial theorem. By setting x = y = 1, you can see that the numbers in the row must add up to 2^p. Also, all the numbers in the row are divisible by p except for the 1’s on each end. So the remainder when 2^p is divided by p is 2.

It’s easy to observe that the numbers in the pth row, except for the ends, are divisible by p. For example, when p = 5, the numbers are 1, 5, 10, 10, 5, 1. When p = 7, the numbers are 1, 7, 21, 35, 35, 21, 7, 1.

To prove this you have to show that the binomial coefficient C(p, k) is divisible by p when 0 < k < p. When you expand the binomial coefficient into factorials, you see that there’s a factor of p on top, and nothing can cancel it out because p is prime and the numbers in the denominator are less than p.
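
Here's a small sketch of the argument for p = 7, using math.comb.

    from math import comb

    p = 7
    row = [comb(p, k) for k in range(p + 1)]
    print(row)                   # [1, 7, 21, 35, 35, 21, 7, 1]
    print([c % p for c in row])  # [1, 0, 0, 0, 0, 0, 0, 1]
    print(sum(row) % p)          # 2, consistent with 2**p = 2 (mod p)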

By the way, you can form an analogous proof for the general case of Fermat’s little theorem by expanding

(1 + 1 + 1 + … + 1)^p

using the multinomial generalization of the binomial theorem.

Predicting when an RNG will output a given value

A few days ago I wrote about how to pick the seed of a simple random number generator so that a desired output came n values later. The number n was fixed and we varied the seed. In this post, the seed will be fixed and we’ll solve for n. In other words, we ask when a pseudorandom sequence will produce a given value.

In principle you could just run the RNG until you get the output you’re looking for, but we’ll assume such a brute force approach is not feasible or at least not fast enough.

If an LCG (linear congruential generator) has seed z, multiplier a, and modulus m, then the nth output is a^n z reduced mod m. So our task is to solve

x = a^n z mod m

for n. If we forget for a moment that we’re working with integers mod m, we see that the solution is

n = log_a (x / z)

We can actually make this work if we interpret division by z to mean multiplication by the inverse of z mod m and if we interpret the logarithm to be a discrete logarithm. For more on discrete logarithms and one algorithm for computing them, see this post.

In an earlier post I used a = 742938285 and m = 2^31 – 1 = 2147483647. We set n = 100 and solved for z to make the 100th output equal to 20170816, the date of that post. It turned out that z = 1898888478.

Now let’s set the seed z = 1898888478 and ask when the LCG sequence will take on the value x = 20170816. Of course we know that n will turn out to be 100, but let’s pretend we don’t know that. We’ll write a little Python script to find n.

I expect there’s a simple way to compute modular inverses using SymPy, but I haven’t found it, so I used some code from StackOverflow.
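
(For what it's worth, SymPy does have a mod_inverse function, and in Python 3.8+ the built-in pow accepts a negative exponent; either one-liner below should give the same inverse.)

    from sympy import mod_inverse

    m = 2**31 - 1
    z = 1898888478
    print(mod_inverse(z, m))   # SymPy's modular inverse
    print(pow(z, -1, m))       # built-in three-argument pow, Python 3.8+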

The following Python code produces n = 100, as expected.

    from sympy.ntheory import discrete_log

    # extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    def egcd(a, b):
        if a == 0:
            return (b, 0, 1)
        else:
            g, y, x = egcd(b % a, a)
            return (g, x - (b // a) * y, y)

    # modular inverse of a mod m, via the extended Euclidean algorithm
    def modinv(a, m):
        g, x, y = egcd(a, m)
        if g != 1:
            raise Exception('modular inverse does not exist')
        else:
            return x % m

    a = 742938285
    z = 1898888478
    m = 2**31 - 1
    x = 20170816
    zinv = modinv(z, m)

    # solve a**n = x * zinv (mod m) for n, i.e. a**n z = x (mod m)
    n = discrete_log(m, x*zinv, a)
    print(n)

Reverse engineering the seed of a linear congruential generator

The previous post gave an example of manipulating the seed of a random number generator to produce a desired result. This post will do something similar for a different generator.

A couple times I’ve used the following LCG (linear congruential random number generator) in examples. An LCG starts with an initial value of z and updates z at each step by multiplying by a constant a and taking the remainder by m. The particular LCG I’ve used has a = 742938285 and m = 2^31 – 1 = 2147483647.

(I used these parameters because they have maximum range, i.e. every positive integer less than m appears eventually, and because it was one of the LCGs recommended in an article years ago. That is, it’s very good as far as 32-bit LCGs go, which isn’t very far. An earlier post shows how it quickly fails the PractRand test suite.)

Let’s pick the seed so that the 100th output of the generator is today’s date in ISO form: 20170816.

We need to solve

a^100 z = 20170816 mod 2147483647.

By reducing a^100 mod 2147483647 we find we need to solve

160159497 z = 20170816 mod 2147483647

and the solution is z = 1898888478. (See How to solve linear congruences.)
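
Here's a sketch of that calculation in Python. Since m is prime, the modular inverse can be computed with Fermat's little theorem.

    a = 742938285
    m = 2**31 - 1
    x = 20170816

    c = pow(a, 100, m)        # 160159497
    zinv = pow(c, m - 2, m)   # inverse of a**100 mod m, via Fermat
    print(x * zinv % m)       # 1898888478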

The following Python code verifies that the solution works.

    a = 742938285
    z = 1898888478
    m = 2**31 - 1

    for _ in range(100):
        z = a*z % m
    print(z)

Update: In this post, I kept n = 100 fixed and solved for the seed to give a specified output n steps later. In a follow up post I do the opposite, fixing the seed and solving for n.

Least common multiple of the first n positive integers

Here’s a surprising result: The least common multiple of the first n positive integers is approximately exp(n).

More precisely, let φ(n) equal the log of the least common multiple of the numbers 1, 2, …, n. There are theorems that give upper and lower bounds on how far φ(n) can be from n. We won’t prove or even state these bounds here. See [1] for that. Instead, we’ll show empirically that φ(n) is approximately n.

Here’s some Python code to plot φ(n) over n. The ratio jumps up sharply after the first few values of n. In the plot below, we chop off the first 20 values of n.

    from numpy import arange, empty
    from sympy.core.numbers import ilcm
    from sympy import log
    import matplotlib.pyplot as plt

    N = 5000
    x = arange(N)
    phi = empty(N)

    M = 1
    for n in range(1, N):
        M = ilcm(n, M)
        phi[n] = log(M)

    a = 20
    plt.plot(x[a:], phi[a:]/x[a:])
    plt.xlabel("$n$")
    plt.ylabel(r"$\phi(n) / n$")
    plt.show()

Here’s the graph this produces.

[1] J. Barkley Rosser and Lowell Schoenfeld. Approximate formulas for some functions of prime numbers. Illinois Journal of Mathematics, Volume 6, Issue 1 (1962), 64-94. (On Project Euclid)