Münchausen numbers


The number 3435 has the following curious property:

3435 = 3^3 + 4^4 + 3^3 + 5^5.

It is called a Münchausen number, an allusion to the fictional Baron Münchausen. When each digit is raised to its own power and summed, you get the original number back. The only other Münchausen number is 1.

At least in base 10. You could look at Münchausen numbers in other bases. If you write out a number n in base b, raise each of its “digits” to its own power, take the sum, and get n back, you have a Münchausen number in base b. For example 28 is a Münchausen number in base 9 because

28 (base 10) = 31 (base 9) = 3^3 + 1^1

Daan van Berkel proved that there are only finitely many Münchausen numbers in any given base. In fact, he shows that a Münchausen number in base b cannot be greater than 2b^b, and so you could do a brute-force search to find all the Münchausen numbers in any base.

The upper bound 2b^b grows very quickly with b, and so brute force becomes impractical for large b. If you wanted to find all the hexadecimal Münchausen numbers you’d have to search 2 × 16^16 = 36,893,488,147,419,103,232 numbers. How could you do this more efficiently?
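
One possibility: since the digit-power sum doesn’t depend on the order of the digits, you could iterate over multisets of digits rather than over all numbers up to the bound. Here’s a sketch of that idea (the function names are mine, and I’ve taken 0^0 = 1, the convention under which 1 and 3435 are the only base 10 Münchausen numbers):

    from itertools import combinations_with_replacement

    def digit_power(d):
        # Taking 0^0 = 1 (a convention; the choice matters for digits of 0)
        return d**d if d > 0 else 1

    def munchausen(base):
        # 2*base**base has at most base+1 digits in the given base (for base >= 3),
        # so no Münchausen number has more digits than that.
        results = set()
        for length in range(1, base + 2):
            for digits in combinations_with_replacement(range(base), length):
                total = sum(digit_power(d) for d in digits)
                # Do the digits of the sum match the multiset we started with?
                n, ds = total, []
                while n:
                    n, r = divmod(n, base)
                    ds.append(r)
                if sorted(ds) == sorted(digits):
                    results.add(total)
        return sorted(results)

    print(munchausen(10))  # [1, 3435]

This cuts the base 16 search from about 3.7 × 10^19 numbers down to around 10^9 digit multisets.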

Prime factors, phone numbers, and the normal distribution

Telephone numbers typically have around three distinct prime factors.

The length of a phone number varies by country, but a US phone number is a 10-digit number, and 10-digit numbers are often divisible by three different prime numbers, give or take a couple. Assuming phone numbers are scattered among possible 10-digit numbers in a way that doesn’t bias their number of prime factors, these numbers will often have between 1 and 5 distinct prime factors. If a country has 9- or 11-digit phone numbers, the result is essentially the same.

Let ω(n) be the number of distinct prime factors of n. Then the Erdős–Kac theorem says roughly that ω(n) is distributed like a normal random variable with mean and variance log log n. More precisely, fix two numbers a and b.  For a given value of x, count the proportion of positive integers less than x where

(ω(n) − log log n) / √(log log n)

is between a and b. Then in the limit as x goes to infinity, this proportion approaches the probability that a standard normal random variable is between a and b.

So by that heuristic, the number of distinct prime factors of a 10-digit number is approximately normally distributed with mean and variance log log 10^11 = 3.232, and a random variable with that distribution is between 1 and 5 around 73% of the time.
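
You could reproduce that 73% figure with SciPy (a quick check, using the mean and variance above):

    from scipy.stats import norm

    mu = 3.232        # log log 10^11
    sigma = mu**0.5   # standard deviation, the square root of the variance
    print(norm.cdf(5, mu, sigma) - norm.cdf(1, mu, sigma))  # about 0.73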

My business phone number, for example, is 8324228646. Obviously this is divisible by 2. In fact it equals 2 × 3^2 × 462457147, and so it has exactly three distinct prime factors: 2, 3, and 462457147.

Here’s how you could play with this using Python.

    from sympy.ntheory import factorint

    def omega(n):
        return len(factorint(n))

I looked in SymPy and didn’t see an implementation of ω(n) directly, but it does have a function factorint that returns the prime factors of a number, along with their multiplicities, in a dictionary. So ω(n) is just the size of that dictionary.

I took the first 20 phone numbers in my contacts and ran them through omega and got results consistent with what you’d expect from the theory above. One was prime, and none had more than five factors.
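
Here’s a sketch of the same experiment with random 10-digit numbers standing in for the phone numbers (the random sampling is my substitution, not the original data):

    from collections import Counter
    from random import randint

    sample = [randint(10**9, 10**10 - 1) for _ in range(20)]
    counts = Counter(omega(n) for n in sample)  # uses omega defined above
    print(sorted(counts.items()))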

[Bar chart: number of distinct prime factors in a sample of 20 phone numbers; factor counts 1 through 5 occurred 1, 3, 5, 8, and 3 times.]

Benford’s law, chi-square, and factorials

A while back I wrote about how the leading digits of factorials follow Benford’s law. That is, if you look at just the first digit of a sequence of factorials, they are not evenly distributed. Instead, 1’s are most popular, then 2’s, etc. Specifically, the proportion of factorials starting with the digit n is roughly log10(1 + 1/n).

Someone has proved that the limiting distribution of leading digits of factorials exactly satisfies Benford’s law. But if we didn’t know this, we might use a chi-square statistic to measure how well the empirical results match expectations. As I argued in the previous post, statistical tests don’t apply here, but they can be useful anyway in giving us a way to measure the size of the deviations from theory.

Benford’s law makes a better illustration of the chi-square test than the example of prime remainders because the bins are unevenly sized, which they’re allowed to be in general. In the prime remainder post, they were all the same size.

The original post on leading digits of factorial explains why we compute the leading digits the way we do. Only one detail has changed: the original post used Python 2 and this one uses Python 3. Integer division was the default in Python 2, but now in Python 3 we have to use // to explicitly ask for integer division, floating point division being the new default.

Here’s a plot of the distribution of the leading digits for the first 500 factorials.

And here’s code to compute the chi-square statistic:

    from math import factorial, log10

    def leading_digit_int(n):
        while n > 9:
            n = n//10
        return n

    def chisq_stat(O, E):
        return sum( [(o - e)**2/e for (o, e) in zip(O, E)] )

    # Waste the 0th slot to make the code simpler.
    digits = [0]*10

    N = 500
    for i in range(N):
        digits[ leading_digit_int( factorial(i) ) ] += 1

    expected = [ N*log10(1 + 1/n) for n in range(1, 10) ]

    print( chisq_stat(digits[1:], expected) )

This gives a chi-square statistic of 7.693, very near the mean value of 8 for a chi-square distribution with eight degrees of freedom. (There are eight degrees of freedom, not nine, because if we know how many factorials start with the digits 1 through 8, we know how many start with 9.)

So the chi-square statistic suggests that the deviation from Benford’s law is just what we’d expect from random data following Benford’s law. And as we said before, this suggestion turns out to be correct.


Hypothesis testing and number theory

This post uses a hypothesis test for proportions to look at a couple conjectures in number theory. It is similar to my earlier post on the chi-square test and prime remainders. You could read this as a post on statistics or a post on number theory, depending on which you’re less familiar with.

Using statistical tests on number theory problems is kind of odd. There’s nothing random going on, so in that sense the whole enterprise is unjustified. Nevertheless, statistical tests can be suggestive. They certainly don’t prove theorems, but they can give reason to suspect a theorem is true or false. In that sense, applying statistical tests to number theory isn’t all that different from applying them to more traditional settings.

First we’ll look at the remainders of primes modulo 4. Except for 2, all primes are odd, and so they either have remainder 1 or 3 when divided by 4. Brian Hayes wrote recently that Chebyshev noticed in the 1850s that there seem to be more primes with remainder 3. Is the imbalance larger than one would expect to see from fair coin tosses?

Here’s some Python code to find the proportion of the first million primes (after 2) that have remainder 3 when divided by 4.

    from sympy import prime
    from math import sqrt

    n = 1000000
    rem3 = 0
    for i in range(2, n+2):
        if prime(i) % 4 == 3:
            rem3 += 1
    p_hat = rem3/n

This shows that of the first million odd primes, 500,202 are congruent to 3 mod 4. Would it be unusual for a coin to come up heads this many times in a million flips? To find out we’d compute the z-statistic:

z = \frac{\hat{p} - p}{\sqrt{pq/n}}

Here p is the proportion under the null hypothesis, q = 1 – p, and n is the sample size. In our case, the null hypothesis is p = 0.5 and n = 1,000,000. [1]

The code

    p = 0.5
    q = 1 - p
    z = (p_hat - p)/sqrt(p*q/n)

shows that z = 0.404, hardly a suspiciously large value. If we were looking at random values we’d see a z-score this large or larger 34% of the time. So in this case the test doesn’t suggest much.
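
The 34% figure is the normal survival function evaluated at z, e.g. with SciPy:

    from scipy.stats import norm
    print(norm.sf(z))  # about 0.34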

[1] The derivation of the z statistic is fairly quick. If the probability of success is p, then the number of successes out of n trials is binomial(n, p). For large n, this has approximately the same distribution as a normal distribution with the same mean and variance, mean np and variance npq. The proportion of successes then has approximately mean p and standard deviation √(pq/n). Subtracting the mean and dividing by the standard deviation normalizes the distribution to have mean 0 and variance 1. So under the null hypothesis, the z statistic has approximately a standard normal distribution.

Prime remainders too evenly distributed

First Brian Hayes wrote an excellent post about the remainders when primes are divided by other primes. He mostly looked at pairs of primes, but I wrote a follow-on focusing on just the first part of his post: simulating dice rolls by keeping the remainder when consecutive primes are divided by a fixed prime. For example, using a sequence of primes larger than 7 and taking their remainders by 7 creates values from 1 to 6.

The results are evenly distributed with some variation, just like dice rolls. In fact, given these results and results from a set of real dice rolls, most people would probably think the former are real because they’re more evenly distributed. A chi-squared goodness of fit test shows that the results are too evenly distributed compared to real dice rolls.

At the end of my previous post, I very briefly discuss what happens when you look at “dice” with more than six sides. Here I’ll go into a little more detail and look at a large number of examples.

In short, you either get a suspiciously good fit or a terrible fit. If you look at the remainder when dividing primes by m, you get values between 1 and m-1. You can’t get a remainder of 0 because primes aren’t divisible by m (or anything else!). If m itself is prime, then you get all the numbers between 1 and m-1, and as we’ll show below you get them in very even proportions. But if m isn’t prime, there are some remainders you can’t get.

The sequence of remainders looks random in the sense of being unpredictable. (Of course it is predictable by the algorithm that generates them, but it’s not predictable in the sense that you could look at the sequence out of context and guess what’s coming next.) The sequence is biased, and that’s the big news. Pairs of consecutive primes have correlated remainders. But I’m just interested in showing a different departure from a uniform distribution, namely that the results are too evenly distributed compared to random sequences.

The table below gives the chi-square statistic and p-value for each of several primes. For each prime p, we take remainders mod p of the next million primes after p and compute the chi-square goodness of fit statistic with p-2 degrees of freedom. (Why p-2? There are p-1 different remainders, and the chi-square test for k possibilities has k-1 degrees of freedom.)

The p-value column gives the probability of seeing a fit this good or better from uniform random data. (The p in p-value is unrelated to our use of p to denote a prime. It’s an unfortunate convention of statistics that everything is denoted p.) After the first few primes, the p-values are extremely small, indicating that such an even distribution of values would be astonishing from random data. A sketch of code for computing a row of this table appears after the table.

|-------+------------+------------|
| Prime | Chi-square | p-value    |
|-------+------------+------------|
|     3 |     0.0585 |   2.88e-02 |
|     5 |     0.0660 |   5.32e-04 |
|     7 |     0.0186 |   1.32e-07 |
|    11 |     0.2468 |   2.15e-07 |
|    13 |     0.3934 |   6.79e-08 |
|    17 |     0.5633 |   7.64e-10 |
|    19 |     1.3127 |   3.45e-08 |
|    23 |     1.1351 |   2.93e-11 |
|    29 |     1.9740 |   3.80e-12 |
|    31 |     2.0052 |   3.11e-13 |
|    37 |     2.5586 |   3.92e-15 |
|    41 |     3.1821 |   9.78e-16 |
|    43 |     4.4765 |   5.17e-14 |
|    47 |     3.7142 |   9.97e-18 |
|    53 |     3.7043 |   3.80e-21 |
|    59 |     7.0134 |   2.43e-17 |
|    61 |     5.1461 |   6.45e-22 |
|    67 |     7.1037 |   5.38e-21 |
|    71 |     7.6626 |   6.13e-22 |
|    73 |     7.5545 |   4.11e-23 |
|    79 |     8.0275 |   3.40e-25 |
|    83 |    12.1233 |   9.92e-21 |
|    89 |    11.4111 |   2.71e-24 |
|    97 |    12.4057 |   2.06e-26 |
|   101 |    11.8201 |   3.82e-29 |
|   103 |    14.4733 |   3.69e-26 |
|   107 |    13.8520 |   9.24e-29 |
|   109 |    16.7674 |   8.56e-26 |
|   113 |    15.0897 |   1.20e-29 |
|   127 |    16.4376 |   6.69e-34 |
|   131 |    19.2023 |   6.80e-32 |
|   137 |    19.1728 |   1.81e-34 |
|   139 |    22.2992 |   1.82e-31 |
|   149 |    22.8107 |   6.67e-35 |
|   151 |    22.8993 |   1.29e-35 |
|   157 |    30.1726 |   2.60e-30 |
|   163 |    26.5702 |   3.43e-36 |
|   167 |    28.9628 |   3.49e-35 |
|   173 |    31.5647 |   7.78e-35 |
|   179 |    33.3494 |   2.46e-35 |
|   181 |    36.3610 |   2.47e-33 |
|   191 |    29.1131 |   1.68e-44 |
|   193 |    29.9492 |   2.55e-44 |
|   197 |    34.2279 |   3.49e-41 |
|   199 |    36.7055 |   1.79e-39 |
|   211 |    41.0392 |   8.42e-40 |
|   223 |    39.6699 |   1.73e-45 |
|   227 |    42.3420 |   2.26e-44 |
|   229 |    37.1896 |   2.02e-50 |
|   233 |    45.0111 |   4.50e-44 |
|   239 |    43.8145 |   2.27e-47 |
|   241 |    51.3011 |   1.69e-41 |
|   251 |    47.8670 |   6.28e-48 |
|   257 |    44.4022 |   1.54e-53 |
|   263 |    51.5905 |   7.50e-49 |
|   269 |    59.8398 |   3.92e-44 |
|   271 |    59.6326 |   6.02e-45 |
|   277 |    52.2383 |   2.80e-53 |
|   281 |    52.4748 |   1.63e-54 |
|   283 |    64.4001 |   2.86e-45 |
|   293 |    59.7095 |   2.59e-52 |
|   307 |    65.2644 |   1.64e-52 |
|   311 |    63.1488 |   1.26e-55 |
|   313 |    68.6085 |   7.07e-52 |
|   317 |    63.4099 |   1.72e-57 |
|   331 |    66.3142 |   7.20e-60 |
|   337 |    70.2918 |   1.38e-58 |
|   347 |    71.3334 |   3.83e-61 |
|   349 |    75.8101 |   3.38e-58 |
|   353 |    74.7747 |   2.33e-60 |
|   359 |    80.8957 |   1.35e-57 |
|   367 |    88.7827 |   1.63e-54 |
|   373 |    92.5027 |   7.32e-54 |
|   379 |    86.4056 |   5.67e-60 |
|   383 |    74.2349 |   3.13e-71 |
|   389 |   101.7328 |   9.20e-53 |
|   397 |    86.9403 |   1.96e-65 |
|   401 |    90.3736 |   3.90e-64 |
|   409 |    92.3426 |   2.93e-65 |
|   419 |    95.9756 |   8.42e-66 |
|   421 |    91.1197 |   3.95e-70 |
|   431 |   100.3389 |   1.79e-66 |
|   433 |    95.7909 |   1.77e-70 |
|   439 |    96.2274 |   4.09e-72 |
|   443 |   103.6848 |   6.96e-68 |
|   449 |   105.2126 |   1.07e-68 |
|   457 |   111.9310 |   1.49e-66 |
|   461 |   106.1544 |   7.96e-72 |
|   463 |   116.3193 |   1.74e-65 |
|   467 |   116.2824 |   1.02e-66 |
|   479 |   104.2246 |   3.92e-79 |
|   487 |   116.4034 |   9.12e-73 |
|   491 |   127.2121 |   6.69e-67 |
|   499 |   130.9234 |   5.90e-67 |
|   503 |   118.4955 |   2.60e-76 |
|   509 |   130.9212 |   6.91e-70 |
|   521 |   118.6699 |   6.61e-82 |
|   523 |   135.4400 |   3.43e-71 |
|   541 |   135.9210 |   3.13e-76 |
|   547 |   120.0327 |   2.41e-89 |
|-------+------------+------------|
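
Here’s the sketch mentioned above of how a row of the table could be computed. This isn’t necessarily the code that produced the table, just a minimal version using SymPy for the primes and SciPy for the chi-square distribution (and it’s slow, since it walks through a million primes per row):

    from sympy import nextprime
    from scipy.stats import chi2

    def table_row(p, N=1000000):
        # Remainders mod p of the next N primes after p
        observed = [0]*p
        q = p
        for _ in range(N):
            q = nextprime(q)
            observed[q % p] += 1
        expected = N/(p - 1)  # same expected count in each of the p-1 cells
        stat = sum((o - expected)**2/expected for o in observed[1:])
        # cdf gives the probability of seeing a fit this good or better
        return stat, chi2.cdf(stat, p - 2)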

Chi-square goodness of fit test example with primes


Yesterday Brian Hayes wrote a post about the distribution of primes. He showed how you could take the remainder when primes are divided by 7 and produce something that looks like rolls of six-sided dice. Here we apply the chi-square goodness of fit test to show that the rolls are too evenly distributed to mimic randomness. This post does not assume you’ve seen the chi-square test before, so it serves as an introduction to this goodness of fit test.

In Brian Hayes’ post, he looks at the remainder when consecutive primes are divided by 7, starting with 11. Why 11? Because it’s the smallest prime bigger than 7. Since no prime is divisible by any other prime, all the primes after 7 will have a remainder between 1 and 6 inclusive when divided by 7. So the results are analogous to rolling six-sided dice.

The following Python code looks at prime remainders and (pseudo)random rolls of dice and computes the chi-square statistic for both.

First, we import some functions we’ll need.

    from sympy import prime
    from random import random
    from math import ceil

The function prime takes an argument n and returns the nth prime. The function random produces a pseudorandom number between 0 and 1. The ceiling function ceil rounds its argument up to an integer. We’ll use it to convert the output of random into dice rolls.

In this example we’ll use six-sided dice, but you could change num_sides to simulate other kinds of dice. With six-sided dice, we divide by 7, and we start our primes with the fifth prime, 11.

    num_sides = 6
    modulus = num_sides + 1

    # Find the index of the smallest prime bigger than the modulus
    index = 1
    while prime(index) <= modulus:
        index += 1

We’re going to take a million samples and count how many times we see 1, 2, …, 6. We’ll keep track of our results in an array of length 7, wasting a little bit of space since the 0th slot will always be 0. (Because the remainder when dividing a prime by a smaller number is always positive.)

    # Number of samples
    N = 1000000
    
    observed_primes = [0]*modulus
    observed_random = [0]*modulus

Next we “roll” our dice two ways, using prime remainders and using a pseudorandom number generator.

    for i in range(index, N+index):
        m = prime(i) % modulus
        observed_primes[m] += 1
        m = int(ceil(random()*num_sides))
        observed_random[m] += 1

The chi-square goodness of fit test depends on the observed number of events in each cell and the expected number. We expect 1/6th of the rolls to land in cell 1, 2, …, 6 for both the primes and the random numbers. But in a general application of the chi-square test, you could have a different expected number of observations in each cell.

    expected = [N/num_sides for i in range(modulus)]  # slot 0 wasted, like the observed lists

The chi-square test statistic sums (O − E)^2/E over all cells, where O stands for “observed” and E stands for “expected.”

    def chisq_stat(O, E):
        return sum( [(o - e)**2/e for (o, e) in zip(O, E)] )

Finally, we compute the chi-square statistic for both methods.

    ch = chisq_stat(observed_primes[1:], expected[1:])
    print(ch)

    ch = chisq_stat(observed_random[1:], expected[1:])
    print(ch)

Note that we chop off the first element of the observed and expected lists to get rid of the 0th element that we didn’t use.

When I ran this I got 0.01865 for the prime method and 5.0243 for the random method. Your results for the prime method should be the same, though you might have a different result for the random method.

Now, how do we interpret these results? Since we have six possible outcomes, our test statistic has a chi-square distribution with five degrees of freedom. It’s one less than the number of possibilities because the total counts have to sum to N; if you know how many times 1, 2, 3, 4, and 5 came up, you can calculate how many times 6 came up.

A chi-square distribution with ν degrees of freedom has expected value ν. In our case, we expect a value around 5, and so the chi-square value of 5.0243 is unremarkable. But the value of 0.01865 is remarkably small. A large chi-square statistic would indicate a poor fit, the observed numbers being suspiciously far from their expected values. But a small chi-square value suggests the fit is suspiciously good, closer to the expected values than we’d expect of a random process.

We can be precise about how common or unusual a chi-square statistic is by computing the probability that a sample from the chi-square distribution would be larger or smaller. The cdf gives the probability of seeing a value this small or smaller, i.e. a fit this good or better. The sf gives the probability of seeing a value this large or larger, i.e. a fit this bad or worse. (The scipy library uses sf for “survival function,” another name for the ccdf, the complementary cumulative distribution function.)

    from scipy.stats import chi2
    print(chi2.cdf(ch, num_sides-1), chi2.sf(ch, num_sides-1))

This says that for the random rolls, there’s about a 59% chance of seeing a better fit and a 41% chance of seeing a worse fit. Unremarkable.

But it says there’s only a 2.5 in a million chance of seeing a better fit than we get with prime numbers. The fit is suspiciously good. In a sense this is not surprising: prime numbers are not random! And yet in another sense it is surprising since there’s a heuristic that says primes act like random numbers unless there’s a good reason why in some context they don’t. This departure from randomness is the subject of research published just this year.

If you look at dice with 4 or 12 sides, you get a suspiciously good fit, but not as suspicious as with 6 sides. But with 8- or 20-sided dice you get a very bad fit, so bad that its probability underflows to 0. This is because the corresponding moduli, 9 and 21, are composite, which means some of the cells in our chi-square test will have no observations. (Suppose m has a proper factor a. Then if a prime p were congruent to a mod m, p would have to be divisible by a.)

Update: See the next post for a systematic look at different moduli.

You don’t have to use “dice” that correspond to regular solids. You could consider 10-sided “dice,” for example. For such numbers it may be easier to think of a spinner with 10 equal arc segments than a die.

Related post: Probability that a number is prime


Graphs and square roots modulo a prime

Imagine a clock with a prime number of hours. So instead of being divided into 12 hours, it might be divided into 11 or 13, for example. You can add numbers on this clock the way you’d add numbers on an ordinary clock: add the two numbers as ordinary integers and take the remainder by p, the number of hours on the clock. You can multiply them the same way: multiply as ordinary integers, then take the remainder by p. This is called arithmetic modulo p, or just mod p.

Which numbers have square roots mod p? That is, for which numbers x does there exist a number y such that y^2 = x mod p? It turns out that of the non-zero numbers, half have a square root and half don’t. The numbers that have square roots are called quadratic residues and the ones that do not have square roots are called quadratic nonresidues. Zero is usually left out of the conversation. For example, if you square the numbers 1 through 6 and take the remainder mod 7, you get 1, 4, 2, 2, 4, and 1. So mod 7 the quadratic residues are 1, 2, and 4, and the nonresidues are 3, 5, and 6.

A simple, brute force way to tell whether a number is a quadratic residue mod p is to square all the numbers from 1 to p-1 and see if any of the results equals your number. There are more efficient algorithms, but this is the easiest to understand.

At first it seems that the quadratic residues and nonresidues are scattered randomly. For example, here are the quadratic residues mod 23:

[1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]

But there are patterns, such as the famous quadratic reciprocity theorem. We can see some pattern in the residues by visualizing them on a graph. We make the numbers mod p vertices of a graph, and join two vertices a and b with an edge if a − b is a quadratic residue mod p.

Here’s the resulting graph mod 13:

And here’s the corresponding graph for p = 17:

And here’s Python code to produce these graphs:

    import networkx as nx
    import matplotlib.pyplot as plt

    def residue(x, p):
        for i in range(1, p):
            if (i*i - x) % p == 0:
                return True
        return False

    p = 17
    G = nx.Graph()
    for i in range(p):
        for j in range(i+1, p):
            if residue(i-j, p):
                G.add_edge(i, j)

    nx.draw_circular(G)
    plt.show()

However, this isn’t the appropriate graph for all values of p. The graphs above are undirected graphs. We’ve implicitly assumed that if there’s an edge between a and b then there should be an edge between b and a. And that’s true for p = 13 or 17, but not, for example, for p = 11. Why’s that?

If p leaves a remainder of 1 when divided by 4, i.e. p = 1 mod 4, then x is a quadratic residue mod p if and only if −x is a quadratic residue mod p. So if a − b is a residue, so is b − a. This means an undirected graph is appropriate. But if p = 3 mod 4, x is a residue mod p if and only if −x is a non-residue mod p. This means we should use directed graphs if p = 3 mod 4. Here’s an example for p = 7.

The code for creating the directed graphs is a little different:

    G = nx.DiGraph()
    for i in range(p):
        for j in range(p):
            if residue(i-j, p):
                G.add_edge(i, j)

Unfortunately, the way NetworkX draws directed graphs is disappointing. A directed edge from a to b is drawn with a heavy line segment on the b end. Any suggestions for a Python package that draws more attractive directed graphs?

The idea of creating graphs this way came from Roger Cook’s chapter on number theory applications in Graph Connections. (I’m not related to Roger Cook as far as I know.)

You could look at quadratic residues modulo composite numbers too. And I imagine you could also make some interesting graphs using Legendre symbols.


Curious numbers

An n-digit number is said to be curious if the last n digits of its square are the same as the original number. For example, 25^2 = 625 and 76^2 = 5776. (Curious numbers are also known as automorphic numbers.)

There are bigger curious numbers, such as 212890625 and 787109376:

212890625^2 = 45322418212890625

and

787109376^2 = 619541169787109376.

And if the square of x has the same last n digits as x, so does the cube of x and all higher powers: if x^2 ≡ x mod 10^n, then x^3 = x · x^2 ≡ x · x = x^2 ≡ x mod 10^n, and so on.

It turns out that for each n > 1, there are two curious numbers of length n.

Always two there are; no more, no less. — Yoda

There’s even a formula for the two solutions. The first is the remainder when 5^(2^n) is divided by 10^n, and the second is 10^n + 1 minus the first.

Here’s a little Python code to show that the first several solutions really are solutions.

    for i in range(2, 20):
        a = 5**(2**i) % 10**i
        b = 10**i - a + 1
        # Both values printed should be 0
        print((a**2 - a)%10**i, (b**2 - b)%10**i)

Related: Applied number theory

New prime number record

Last year I wrote a couple posts about what was then the largest known prime, 2^57885161 − 1. Now there’s a new record: P = 2^74207281 − 1.

For most of the last 500 years, the largest known prime has been a Mersenne prime, a number of the form 2^p − 1 where p is itself prime. Such numbers are not always prime. For example, 2^11 − 1 = 2047 = 23 × 89.

Euclid proved circa 300 BC that if M is a Mersenne prime, then M(M + 1)/2 is a perfect number, i.e. it is equal to the sum of the divisors less than itself. Euler proved two millennia later that all even perfect numbers must have this form. Since no odd perfect numbers have been discovered, the discovery of a new Mersenne prime implies the discovery of a new perfect number.

Using the same methods as in the posts from last year, we can say some things about how P appears in various bases.

In binary, P is written as a string of 74,207,281 ones.

In base four, P is a 1 followed by 37,103,640 threes.

In base eight, P is a 1 followed by 24,735,760 sevens.

In base 16, P is a 1 followed by 18,551,820 F’s.

In base 10, P has 22,338,618 digits, the first of which is 3 and the last of which is 1.
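
These counts are easy to double-check in Python. Here’s a quick sketch for base 10, using the same digit-count formula as the posts from last year and fast modular exponentiation for the last digit:

    from math import log10

    p = 74207281
    # Number of decimal digits of P = 2^p - 1
    print(int(p*log10(2)) + 1)   # 22338618
    # Last digit of P: 2^p mod 10, minus 1
    print(pow(2, p, 10) - 1)     # 1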


Alternating sums of factorials

Richard Guy’s Strong Law of Small Numbers says

There aren’t enough small numbers to meet the many demands made of them.

In his article by the same name [1] Guy illustrates his law with several examples of patterns that hold for small numbers but eventually fail. One of these examples is

3! – 2! + 1! = 5

4! – 3! + 2! – 1! = 19

5! – 4! + 3! – 2! + 1! = 101

6! – 5! + 4! – 3! + 2! – 1! = 619

7! – 6! + 5! – 4! + 3! – 2! + 1! = 4421

8! – 7! + 6! – 5! + 4! – 3! + 2! – 1! = 35899

If we let f(n) be the alternating factorial sum starting with n, then f(n) is prime for n = 3, 4, 5, 6, 7, 8, but not for n = 9. So the alternating sums aren’t all prime. Is f(n) usually prime? f(10) is prime, so maybe 9 is the odd one out. Let’s write some code to find out.

    from sympy import factorial, isprime

    def altfact(n):
        sign = 1
        total = 0
        while n > 0:
            total += sign*factorial(n)
            sign *= -1
            n -= 1
        return total

    numprimes = 0
    for i in range(3, 1000):
        if isprime( altfact(i) ):
            print(i)
            numprimes += 1
    print(numprimes)

You could speed up this code by noticing that

    altfact(n+1) = factorial(n+1) - altfact(n)

and tabulating the values of altfact. The code above corresponds directly to the math, though it takes a little while to run.
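
Here’s one way to implement that speedup, a sketch that carries the alternating sum and the factorial along incrementally (the variable names are mine):

    from sympy import isprime

    numprimes = 0
    af = 5      # altfact(3) = 3! - 2! + 1!
    fact = 6    # 3!
    for i in range(3, 1000):
        if isprime(af):
            print(i)
            numprimes += 1
        fact *= i + 1     # (i+1)!
        af = fact - af    # altfact(i+1) = factorial(i+1) - altfact(i)
    print(numprimes)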

So it turns out the alternating factorial sum is only prime for 15 values less than 1000. In addition to the values of n mentioned above, the other values are 15, 19, 41, 59, 61, 105, 160, and 601.

* * *

[1] The Strong Law of Small Numbers, Richard K. Guy, The American Mathematical Monthly, Vol. 95, No. 8 (Oct., 1988), pp. 697-712.


Fibonacci formula for pi

Here’s an unusual formula for pi based on the product and least common multiple of the first m Fibonacci numbers.

 

\pi = \lim_{m\to\infty} \sqrt{\frac{6 \log F_1 \cdots F_m}{\log \mbox{lcm}( F_1, \ldots, F_m )}}

Unlike the formula I wrote about a few days ago relating Fibonacci numbers and pi, this one is not as simple to prove. The numerator inside the root is easy enough to estimate asymptotically, but estimating the denominator depends on the distribution of primes.
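
Here’s a numerical sanity check of the formula, a sketch using SymPy’s fibonacci and math.lcm (Python 3.9+). The convergence is quite slow, so don’t expect many correct digits:

    from math import lcm, log, sqrt
    from sympy import fibonacci

    m = 100
    fibs = [int(fibonacci(k)) for k in range(1, m + 1)]
    numerator = sum(log(F) for F in fibs)  # log of the product F_1 ... F_m
    denominator = log(lcm(*fibs))          # log of lcm(F_1, ..., F_m)
    print(sqrt(6*numerator/denominator))   # slowly approaches pi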

Source: Yuri V. Matiyasevich and Richard K. Guy, A new formula for π, American Mathematical Monthly, Vol 93, No. 8 (October 1986), pp. 631-635.

 

Last digit of largest known prime

In my previous post, we looked at the largest known prime, P = 2^57885161 − 1, and how many digits it has in various bases. This post looks at how to find the last digit of P in base b. We assume b < P. (If b = P then the last digit is 0, and if b > P the last digit is P.)

If b is a power of 2, we showed in the previous post that the last digit of P is b-1.

If b is odd, we can find the last digit using Euler’s totient theorem. Let φ(b) be the number of positive integers less than b and relatively prime to b. Then Euler’s theorem tells us that 2^φ(b) ≡ 1 mod b since b is odd. So if r is the remainder when 57885161 is divided by φ(b), then the last digit of Q = P+1 in base b is the same as the last digit of 2^r in base b.

For example, suppose we wanted to know the last digit of P in base 15. Since φ(15) = 8, and the remainder when 57885161 is divided by 8 is 1, the last digit of Q base 15 is 2. So the last digit of P is 1.

So we know how to compute the last digit in base b if b is a power of 2 or odd. Factor out all the powers of 2 from b so that b = 2^k d and d is odd. We can find the last digit base 2^k, and we can find the last digit base d, so can we combine these to find the last digit base b? Yes, that’s exactly what the Chinese Remainder Theorem is for.

To illustrate, suppose we want to find the last digit of P in base 12. P ≡ 3 mod 4 and P ≡ 1 mod 3, so P ≡ 7 mod 12. (The numbers are small enough here to guess. For a systematic approach, see the post mentioned above.) So the last digit of P is 7 in base 12.

If you’d like to write a program to play around with this, you need to be able to compute φ(n). You may have an implementation of this function in your favorite environment. For example, it’s sympy.ntheory.totient in Python. If not, it’s easy to compute φ(n) if you can factor n. You just need to know two things about φ. First, it’s a multiplicative function, meaning that if a and b are relatively prime, then φ(ab) = φ(a) φ(b). Second, if p is a prime, then φ(p^k) = p^k − p^(k−1).
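
Here’s a sketch pulling these pieces together for an odd base (totient is SymPy’s, as mentioned above; the function name is mine). Python’s built-in three-argument pow gives an independent check via fast modular exponentiation.

    from sympy import totient

    e = 57885161

    def last_digit_odd_base(b):
        # For odd b, the last digit of Q = 2^e in base b is 2^r mod b,
        # where r = e mod phi(b). Subtract 1 to get the last digit of P.
        r = e % totient(b)
        return (pow(2, r, b) - 1) % b

    print(last_digit_odd_base(15))   # 1
    print((pow(2, e, 15) - 1) % 15)  # same answer, computed directly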

Update: There’s a new record prime as of January 2016, and it’s also a Mersenne prime. Update of this post here.

Related: Applied number theory

Playing with the largest known prime

The largest known prime at the moment is P = 2^57885161 − 1. This means that in binary, the number is a string of 57,885,161 ones.

You can convert binary numbers to other bases that are powers of two, say 2^k, by starting at the right end and converting to the new base in blocks of size k. So in base 4, we’d start at the right end and convert pairs of bits to base 4. Until we get to the left end, all the pairs are 11, and so they all become 3’s in base four. So in base 4, P would be written as a 1 followed by 28,942,580 threes.

Similarly, we could write P in base 8 by starting at the right end and converting triples of bits to base 8. Since 57885161 = 3*19295053 + 2, we’ll have two bits left over when we get to the left end, so in base 8, P would be written as 3 followed by 19,295,053 sevens. And since 57885161 = 4*14471290 + 1, the hexadecimal (base 16) representation of P is a 1 followed by 14,471,290 F’s.

What about base 10, i.e. how many digits does P have? The number of digits in a positive integer n is ⌊log10 n⌋ + 1, where ⌊x⌋ is the greatest integer less than or equal to x. For positive numbers, this just means chop off the decimals and keep the integer part.

We’ll make things slightly easier for a moment and find how many digits P+1 has.

log10(P+1) = log10(2^57885161) = 57885161 log10(2) = 17425169.76…

and so P+1 has 17425170 digits, and so does P. How do we know P and P+1 have the same number of digits? There are several ways you could argue this. One is that P+1 is not a power of 10, so the number of digits couldn’t change between P and P+1.

Update: How many digits does P have in other bases?

We can take the formula above and simply substitute b for 10: the base b representation of a positive integer n has ⌊log_b n⌋ + 1 digits.

As above, we’d rather work with Q = P+1 than P. The numbers P and Q will have the same number of digits unless Q is a power of the base b. But since Q is a power of 2, this could only happen if b is also a power of two, say b = 2^k, with k a divisor of 57885161. Since 57885161 is prime, that means b = 2 or b = Q, and we’ve already handled base 2. If b > P, then P has one digit base b. So we can concentrate on the case b < Q, in which case P and Q have the same number of digits. The number of digits in Q is then ⌊57885161 log_b 2⌋ + 1.

If b = 2^k, we can generalize the discussion above about bases 4, 8, and 16. Let q and r be the quotient and remainder when 57885161 is divided by k. If r > 0, the first digit is 2^r − 1 and the remaining q digits are b−1.
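
Here’s the digit-count formula in code (a sketch; beware floating point error when the logarithm lands very close to an integer):

    from math import log2

    e = 57885161

    def digit_count(b):
        # Number of base-b digits of P = 2^e - 1, valid for 2 < b < P.
        # log_b(2) = 1/log2(b), so this is floor(e*log_b(2)) + 1.
        return int(e/log2(b)) + 1

    print(digit_count(10))  # 17425170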

Next post: Last digit of largest known prime

Update: There’s a new record prime as of January 2016, and it’s also a Mersenne prime. Update of this post here.

Related: Applied number theory

Last digits of Fibonacci numbers

If you write out a sequence of Fibonacci numbers, you can see that the last digits repeat every 60 numbers.

The 61st Fibonacci number is 2504730781961. The 62nd is 4052739537881. Since these end in 1 and 1, the 63rd Fibonacci number must end in 2, etc. and so the pattern starts over.

It’s not obvious that the cycle should have length 60, but it is fairly easy to see that there must be a cycle. There are only 10*10 possibilities for two consecutive digits. Since the Fibonacci numbers are determined by a two-term recurrence, and since the last digit of a sum is determined by the sum of the last digits, the sequence of last digits must repeat eventually. Here “eventually” means after at most 10*10 terms.

Replace “10” by any other base in the paragraph above to show that the sequence of last digits must be cyclic in any base. In base 16, for example, the period is 24. In hexadecimal notation the 25th Fibonacci number is 12511 and the 26th is 1DA31, so the 27th must end in 2, etc.

Here’s a little Python code to find the period of the last digits of Fibonacci numbers working in any base b.

    from sympy import fibonacci as f

    def period(b):
        # Find the smallest i with F(i) = 0 and F(i+1) = 1 mod b
        for i in range(1, b*b+1):
            if f(i)%b == 0 and f(i+1)%b == 1:
                return i

This shows that in base 100 the period is 300. So in base 10 the last two digits repeat every 300 terms.

The period seems to vary erratically with base as shown in the graph below.
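
Here’s a sketch of code that could produce such a graph, reusing the period function above:

    import matplotlib.pyplot as plt

    bases = range(2, 101)
    plt.plot(bases, [period(b) for b in bases], 'o', markersize=3)
    plt.xlabel("base")
    plt.ylabel("period of last digits")
    plt.show()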

Related: Applied number theory

Numerators of harmonic numbers

Harmonic numbers

The nth harmonic number, H_n, is the sum of the reciprocals of the integers up to and including n. For example,

H_4 = 1 + 1/2 + 1/3 + 1/4 = 25/12.

Here’s a curious fact about harmonic numbers, known as Wolstenholme’s theorem:

For a prime p > 3, the numerator of H_{p-1} is divisible by p^2.

The example above shows this for p = 5. In that case, the numerator is not just divisible by p^2, it is p^2, though this doesn’t hold in general. For example, H_{10} = 7381/2520. The numerator 7381 is divisible by 11^2 = 121, but it’s not equal to 121.

Generalized harmonic numbers

The generalized harmonic numbers H_{n,m} are the sums of the reciprocals of the first n positive integers, each raised to the power m. Wolstenholme’s theorem says something about these numbers too:

For a prime p > 3, the numerator of H_{p-1,2} is divisible by p.

For example, H_{4,2} = 205/144, and the numerator is clearly divisible by 5.

Computing with Python

You can play with harmonic numbers and generalized harmonic numbers in Python using SymPy. Start with the import statement

    from sympy.functions.combinatorial.numbers import harmonic

Then you can get the nth harmonic number with harmonic(n) and generalized harmonic numbers with harmonic(n, m).

To extract the numerators, you can use the method as_numer_denom to turn the fractions into (numerator, denominator) pairs. For example, you can create a list of the numerators of the first 10 harmonic numbers with

    [harmonic(n).as_numer_denom()[0] for n in range(10)]
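
And here’s a quick empirical check of both parts of Wolstenholme’s theorem using these functions (a sketch; primerange is also SymPy’s):

    from sympy import primerange
    from sympy.functions.combinatorial.numbers import harmonic

    for p in primerange(5, 50):
        num1 = harmonic(p - 1).as_numer_denom()[0]
        num2 = harmonic(p - 1, 2).as_numer_denom()[0]
        print(p, num1 % p**2 == 0, num2 % p == 0)  # both should print True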

What about 0?

You might notice that harmonic(0) returns 0, as it should. The sum defining the harmonic numbers is empty in this case, and empty sums are defined to be zero.