Prime factors, phone numbers, and the normal distribution

Telephone numbers typically have around three distinct prime factors.

The length of a phone number varies by country, but a US phone number is a 10-digit number, and 10-digit numbers are often divisible by three different prime numbers, give or take a couple. Assuming phone numbers are scattered among possible 10-digit numbers in a way that doesn’t bias their number of prime factors, these numbers will often have between 1 and 5 prime factors. If a country has 9- or 11-digit phone numbers, the result is essentially the same.

Let ω(n) be the number of distinct prime factors of n. Then the Erdős–Kac theorem says roughly that ω(n) is distributed like a normal random variable with mean and variance log log n. More precisely, fix two numbers a and b.  For a given value of x, count the proportion of positive integers less than x where

(ω(n) – log log n) / sqrt( log log n)

is between a and b. Then in the limit as x goes to infinity, this proportion approaches the probability that a standard normal random variable is between a and b.

So by that heuristic, the number of distinct prime factors of a 10-digit number is approximately normally distributed with mean and variance log log 10^10 ≈ 3.1, and such a distribution is between 1 and 5 roughly 74% of the time.

My business phone number, for example, is 8324228646. Obviously this is divisible by 2. In fact it equals 2 × 3² × 462457147, and so it has exactly three distinct prime factors: 2, 3, and 462457147.

Here’s how you could play with this using Python.

    from sympy.ntheory import factorint

    def omega(n):
        return len(factorint(n))

I looked in SymPy and didn’t see an implementation of ω(n) directly, but it does have a function factorint that returns the prime factors of a number, along with their multiplicities, in a dictionary. So ω(n) is just the size of that dictionary.
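
As a quick check, the business number factored above has three distinct prime factors:

    print(omega(8324228646))   # 3, for the factors 2, 3, and 462457147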

I took the first 20 phone numbers in my contacts and ran them through omega and got results consistent with what you’d expect from the theory above. One was prime, and none had more than five factors.

Bar chart of the number of distinct prime factors in a sample of phone numbers, with bar heights [1, 3, 5, 8, 3]
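
By the way, the 74% figure above is easy to check numerically. Here is a small sketch using SciPy; the 10^10 cutoff is just the upper bound on 10-digit numbers.

    from numpy import log, sqrt
    from scipy.stats import norm

    m = log(log(1e10))   # Erdős–Kac mean and variance, about 3.14
    s = sqrt(m)
    print(norm.cdf(5, m, s) - norm.cdf(1, m, s))   # about 0.74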

Chi-square goodness of fit test example with primes


Yesterday Brian Hayes wrote a post about the distribution of primes. He showed how you could take the remainder when primes are divided by 7 and produce something that looks like rolls of six-sided dice. Here we apply the chi-square goodness of fit test to show that the rolls are too evenly distributed to mimic randomness. This post does not assume you’ve seen the chi-square test before, so it serves as an introduction to this goodness of fit test.

In Brian Hayes’ post, he looks at the remainder when consecutive primes are divided by 7, starting with 11. Why 11? Because it’s the smallest prime bigger than 7. Since no prime is divisible by any other prime, all the primes after 7 will have a remainder of between 1 and 6 inclusive when divided by 7. So the results are analogous to rolling six-sided dice.

The following Python code looks at prime remainders and (pseudo)random rolls of dice and computes the chi-square statistic for both.

First, we import some functions we’ll need.

    from sympy import prime
    from random import random
    from math import ceil

The function prime takes an argument n and returns the nth prime. The function random produces a pseudorandom number between 0 and 1. The ceiling function ceil rounds its argument up to an integer. We’ll use it to convert the output of random into dice rolls.

In this example we’ll use six-sided dice, but you could change num_sides to simulate other kinds of dice. With six-sided dice, we divide by 7, and we start our primes with the fifth prime, 11.

    num_sides = 6
    modulus = num_sides + 1

    # Find the index of the smallest prime bigger than the modulus
    index = 1
    while prime(index) <= modulus:
        index += 1

We’re going to take a million samples and count how many times we see 1, 2, …, 6. We’ll keep track of our results in an array of length 7, wasting a little bit of space since the 0th slot will always be 0. (Because the remainder when dividing a prime by a smaller number is always positive.)

    # Number of samples
    N = 1000000
    
    observed_primes = [0]*modulus
    observed_random = [0]*modulus

Next we “roll” our dice two ways, using prime remainders and using a pseudorandom number generator.

    for i in range(index, N+index):
        m = prime(i) % modulus
        observed_primes[m] += 1
        m = int(ceil(random()*num_sides))
        observed_random[m] += 1

The chi-square goodness of fit test depends on the observed number of events in each cell and the expected number. We expect 1/6th of the rolls to land in cell 1, 2, …, 6 for both the primes and the random numbers. But in a general application of the chi-square test, you could have a different expected number of observations in each cell.

    expected = [N/num_sides for i in range(1, modulus)]

The chi-square test statistic sums (O − E)²/E over all cells, where O stands for “observed” and E stands for “expected.”

    def chisq_stat(O, E):
        return sum( [(o - e)**2/e for (o, e) in zip(O, E)] )

Finally, we compute the chi-square statistic for both methods.

    ch = chisq_stat(observed_primes[1:], expected)
    print(ch)

    ch = chisq_stat(observed_random[1:], expected)
    print(ch)

Note that we chop off the first element of the observed lists to get rid of the 0th slot that we didn’t use. The expected list already contains just the six cells we care about.

When I ran this I got 0.01865 for the prime method and 5.0243 for the random method. Your results for the prime method should be the same, though you might have a different result for the random method.

Now, how do we interpret these results? Since we have six possible outcomes, our test statistic has a chi-square distribution with five degrees of freedom. It’s one less than the number of possibilities because the total counts have to sum to N; if you know how many times 1, 2, 3, 4, and 5 came up, you can calculate how many times 6 came up.

A chi-square distribution with ν degrees of freedom has expected value ν. In our case, we expect a value around 5, and so the chi-square value of 5.0243 is unremarkable. But the value of 0.01865 is remarkably small. A large chi-square statistic would indicate a poor fit, the observed numbers being suspiciously far from their expected values. But a small chi-square value suggests the fit is suspiciously good, closer to the expected values than we’d expect of a random process.

We can be precise about how common or unusual a chi-square statistic is by computing the probability that a sample from the chi-square distribution would be larger or smaller. The cdf gives the probability of seeing a value this small or smaller, i.e. a fit this good or better. The sf gives the probability of seeing a value this large or larger, i.e. a fit this bad or worse. (The scipy library uses sf for “survival function,” another name for the ccdf, the complementary cumulative distribution function.)

    from scipy.stats import chi2
    print(chi2.cdf(ch, num_sides-1), chi2.sf(ch, num_sides-1))

This says that for the random rolls, there’s about a 59% chance of seeing a better fit and a 41% chance of seeing a worse fit. Unremarkable.

But it says there’s only a 2.5 in a million chance of seeing a better fit than we get with prime numbers. The fit is suspiciously good. In a sense this is not surprising: prime numbers are not random! And yet in another sense it is surprising since there’s a heuristic that says primes act like random numbers unless there’s a good reason why in some context they don’t. This departure from randomness is the subject of research published just this year.

If you look at dice with 4 or 12 sides, you get a suspiciously good fit, but not as suspicious as with 6 sides. But with 8- or 20-sided dice you get a very bad fit, so bad that its probability underflows to 0. This is because the corresponding moduli, 9 and 21, are composite, which means some of the cells in our chi-square test will have no observations. (Suppose m has a proper factor a greater than 1. Then if a prime p were congruent to a mod m, p would have to be divisible by a.)

Update: See the next post for a systematic look at different moduli.

You don’t have to use “dice” that correspond to regular solids. You could consider 10-sided “dice,” for example. For such numbers it may be easier to think of a spinner than a die, a spinner divided into 10 equal arcs that it could land on.

Related post: Probability that a number is prime


Alternating sums of factorials

Richard Guy’s Strong Law of Small Numbers says

There aren’t enough small numbers to meet the many demands made of them.

In his article by the same name [1] Guy illustrates his law with several examples of patterns that hold for small numbers but eventually fail. One of these examples is

3! – 2! + 1! = 5

4! – 3! + 2! – 1! = 19

5! – 4! + 3! – 2! + 1! = 101

6! – 5! + 4! – 3! + 2! – 1! = 619

7! – 6! + 5! – 4! + 3! – 2! + 1! = 4421

8! – 7! + 6! – 5! + 4! – 3! + 2! – 1! = 35899

If we let f(n) be the alternating factorial sum starting with n!, then f(n) is prime for n = 3, 4, 5, 6, 7, 8, but not for n = 9. So the alternating sums aren’t all prime. Is f(n) usually prime? f(10) is, so maybe 9 is the odd one. Let’s write some code to find out.

    from sympy import factorial, isprime

    def altfact(n):
        sign = 1
        sum = 0
        while n > 0:
            sum += sign*factorial(n)
            sign *= -1
            n -= 1
        return sum

    numprimes = 0
    for i in range(3, 1000):
        if isprime( altfact(i) ):
            print(i)
            numprimes += 1
    print(numprimes)

You could speed up this code by noticing that

    altfact(n+1) = factorial(n+1) - altfact(n)

and tabulating the values of altfact. The code above corresponds directly to the math, though it takes a little while to run.
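
For example, a tabulated version might look like this. It’s just a sketch, using the recurrence above; the name altfact_table is mine.

    def altfact_table(N):
        # table[n] = altfact(n) for n = 0, 1, ..., N, built from the recurrence
        # altfact(n+1) = factorial(n+1) - altfact(n)
        table = [0, 1]
        for n in range(2, N + 1):
            table.append(factorial(n) - table[-1])
        return table

    table = altfact_table(999)
    print(sum(1 for n in range(3, 1000) if isprime(table[n])))   # 15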

So it turns out the alternating factorial sum is only prime for 15 values less than 1000. In addition to the values of n mentioned above, the other values are 15, 19, 41, 59, 61, 105, 160, and 601.

* * *

[1] The Strong Law of Small Numbers, Richard K. Guy, The American Mathematical Monthly, Vol. 95, No. 8 (Oct., 1988), pp. 697-712.


Last digits of Fibonacci numbers

If you write out a sequence of Fibonacci numbers, you can see that the last digits repeat every 60 numbers.

The 61st Fibonacci number is 2504730781961. The 62nd is 4052739537881. Since these end in 1 and 1, the 63rd Fibonacci number must end in 2, etc. and so the pattern starts over.

It’s not obvious that the cycle should have length 60, but it is fairly easy to see that there must be a cycle. There are only 10*10 possibilities for two consecutive digits. Since the Fibonacci numbers are determined by a two-term recurrence, and since the last digit of a sum is determined by the sum of the last digits, the sequence of last digits must repeat eventually. Here “eventually” means after at most 10*10 terms.

Replace “10” by any other base in the paragraph above to show that the sequence of last digits must be cyclic in any base. In base 16, for example, the period is 24. In hexadecimal notation the 25th Fibonacci number is 12511 and the 26th is 1DA31, so the 27th must end in 2, etc.

Here’s a little Python code to find the period of the last digits of Fibonacci numbers working in any base b.

    from sympy import fibonacci as f

    def period(b):
        # Return the first index i with F(i) ≡ 0 and F(i+1) ≡ 1 (mod b),
        # i.e. the period of the last digits of Fibonacci numbers in base b
        for i in range(1, b*b+1):
            if f(i)%b == 0 and f(i+1)%b == 1:
                return(i)

This shows that in base 100 the period is 300. So in base 10 the last two digits repeat every 300 terms.
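
For example, a few quick calls reproduce the periods mentioned above:

    print(period(10))    # 60
    print(period(16))    # 24
    print(period(100))   # 300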

The period seems to vary erratically with base as shown in the graph below.

Related: Applied number theory

Numerators of harmonic numbers

Harmonic numbers

The nth harmonic number, H_n, is the sum of the reciprocals of the integers up to and including n. For example,

H_4 = 1 + 1/2 + 1/3 + 1/4 = 25/12.

Here’s a curious fact about harmonic numbers, known as Wolstenholme’s theorem:

For a prime p > 3, the numerator of H_{p-1} is divisible by p².

The example above shows this for p = 5. In that case, the numerator is not just divisible by p², it is p², though this doesn’t hold in general. For example, H_10 = 7381/2520. The numerator 7381 is divisible by 11² = 121, but it’s not equal to 121.

Generalized harmonic numbers

The generalized harmonic numbers H_{n,m} are the sums of the reciprocals of the first n positive integers, each raised to the power m. Wolstenholme’s theorem says something about these numbers too:

For a prime p > 3, the numerator of H_{p-1,2} is divisible by p.

For example, H_{4,2} = 205/144, and the numerator is clearly divisible by 5.

Computing with Python

You can play with harmonic numbers and generalized harmonic numbers in Python using SymPy. Start with the import statement

    from sympy.functions.combinatorial.numbers import harmonic

Then you can get the nth harmonic number with harmonic(n) and generalized harmonic numbers with harmonic(n, m).

To extract the numerators, you can use the method as_numer_denom to turn the fractions into (numerator, denominator) pairs. For example, you can create a list of the numerators of the first 10 harmonic numbers with

    [harmonic(n).as_numer_denom()[0] for n in range(10)]
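
As an aside, the same machinery lets you check Wolstenholme’s theorem numerically for small primes. A minimal sketch, using SymPy’s primerange:

    from sympy import primerange

    for p in primerange(5, 50):
        num1 = harmonic(p - 1).as_numer_denom()[0]
        num2 = harmonic(p - 1, 2).as_numer_denom()[0]
        print(p, num1 % p**2 == 0, num2 % p == 0)   # expect True, True for each p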

What about 0?

You might notice that harmonic(0) returns 0, as it should. The sum defining the harmonic numbers is empty in this case, and empty sums are defined to be zero.

 

Counting primitive bit strings

A string of bits is called primitive if it is not the repetition of several copies of a smaller string of bits. For example, the string 101101 is not primitive because it can be broken down into two copies of the string 101. In Python notation, you could produce 101101 by "101"*2. The string 11001101, on the other hand, is primitive. (It contains substrings that are not primitive, but the string as a whole cannot be factored into multiple copies of a single string.)

For a given n, let’s count how many primitive bit strings there are of length n. Call this f(n). There are 2^n bit strings of length n, and f(n) of these are primitive. For example, there are f(12) primitive bit strings of length 12. The strings that are not primitive are made of copies of primitive strings: two copies of a primitive string of length 6, three copies of a primitive string of length 4, etc. This says

 2^{12} = f(12) + f(6) + f(4) + f(3) + f(2) + f(1)

and in general

2^n = \sum_{d \mid n} f(d)

Here the sum is over all positive integers d that divide n.

Unfortunately this formula is backward. It gives a formula for something well known, 2^n, as a sum of things we’re trying to calculate. The Möbius inversion formula is just what we need to turn this formula around so that the new thing is on the left and sums of old things are on the right. It tells us that

f(n) = \sum_{d \mid n} \mu\left(\frac{n}{d}\right) 2^d

where μ is the Möbius function.

We could compute f(n) with Python as follows:

    from sympy.ntheory import mobius, divisors

    def num_primitive(n):
        # Integer division: divisors(n) returns ints, and mobius expects an integer
        return sum( [mobius(n//d)*2**d for d in divisors(n)] )

The latest version of SymPy, version 0.7.6, comes with a function mobius for computing the Möbius function. If you’re using an earlier version of SymPy, you can roll your own mobius function:

    from sympy.ntheory import factorint

    def mobius(n):
        # Exponents in the prime factorization of n
        exponents = factorint(n).values()
        lenexp = len(exponents)
        m = 0 if lenexp == 0 else max(exponents)
        # 0 if n has a square factor, otherwise (-1)^(number of prime factors)
        return 0 if m > 1 else (-1)**lenexp

The version of mobius that comes with SymPy 0.7.6 may be more efficient. It could, for example, stop the factorization process early if it discovers a square factor.
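
As a quick sanity check, the counts of primitive strings should add back up to 2^n when summed over the divisors of n. A small sketch for n = 12:

    from sympy.ntheory import divisors

    n = 12
    print(num_primitive(n))                               # primitive strings of length 12
    print(sum(num_primitive(d) for d in divisors(n)))     # should equal 2**12 = 4096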

Related: Applied number theory

Number theory determinant and SymPy

Let σ(n) be the sum of the positive divisors of n and let gcd(a, b) be the greatest common divisor of a and b.

Form an n by n matrix M whose (i, j) entry is σ(gcd(i, j)). Then the determinant of M is n!.

The following code shows that the theorem is true for a few values of n and shows how to do some common number theory calculations in SymPy.

    from sympy import gcd, divisors, Matrix, factorial

    def f(i, j):
        return sum( divisors( gcd(i, j) ) )

    def test(n):
        r = range(1, n+1)
        M = Matrix( [ [f(i, j) for j in r] for i in r] )
        return M.det() - factorial(n)

    for n in range(1, 11):
        print( test(n) )

As expected, the test function returns zeros.

If we replace the function σ above by τ where τ(n) is the number of positive divisors of n, the corresponding determinant is 1. To test this, replace sum by len in the definition of f and replace factorial(n) by 1.
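
For example, the τ version of the check might look like this, with the two substitutions made; the names f2 and test2 are mine.

    def f2(i, j):
        return len( divisors( gcd(i, j) ) )

    def test2(n):
        r = range(1, n+1)
        M = Matrix( [ [f2(i, j) for j in r] for i in r] )
        return M.det() - 1

    for n in range(1, 11):
        print( test2(n) )   # again, all zeros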

In case you’re curious, both results are special cases of the following more general theorem. I don’t know whose theorem it is. I found it here.

For any arithmetic function f(m), let g(m) be defined for all positive integers m by

g(m) = \sum_{d \,\mid \,m} \mu(d) f\left(\frac{m}{d}\right)

Let M be the square matrix of order n with ij element f(gcd(i, j)). Then

\det M = \prod_{j=1}^n g(j)

Here μ is the Möbius function. The two special cases above correspond to g(m) = m and g(m) = 1.


A prime-generating formula and SymPy

Mills’ constant is a number θ such that the integer part of θ raised to the power 3^n, that is ⌊θ^(3^n)⌋, is prime for every positive integer n. We’ll see if we can verify this computationally with SymPy.

    from sympy import floor, isprime
    # In current SymPy, mpmath is a separate package: from mpmath import mp, mpf
    from sympy.mpmath import mp, mpf

    # set working precision to 200 decimal places
    mp.dps = 200

    # Mills' constant http://oeis.org/A051021
    theta = mpf("1.30637788386308069046861449260260571291678458515671364436805375996643405376682659882150140370119739570729")

    for i in range(1, 7):
        n = floor(theta**3**i)
        print(i, n, isprime(n))

Note that the line of code defining theta wraps Mills’ constant as a string and passes it as an argument to mpf. If we just wrote theta = 1.30637788386308… then theta would be truncated to an ordinary floating point number and most of the digits would be lost. Not that I did that in the first version of the code. 🙂

This code says that the first five outputs are prime but that the sixth is not. That’s because we are not using Mills’ constant to enough precision. The integer part of θ^(3^6) is 85 digits long, and we don’t have enough precision to compute the last digit correctly.

If we use the value of Mills’ constant given here to 640 decimal places, and increase mp.dps to 1000, we can correctly compute the sixth term, but not the seventh.

Mills’ formula is just a curiosity with no practical use that I’m aware of, except possibly as an illustration of impractical formulas.

A magic square filled with consecutive primes

In 1988 Martin Gardner offered a $100 prize for the first person to produce a magic square filled with consecutive primes. Later that year, Harry Nelson found 22 solutions using a Cray computer.

1,480,028,201 1,480,028,129 1,480,028,183
1,480,028,153 1,480,028,171 1,480,028,189
1,480,028,159 1,480,028,213 1,480,028,141

Gardner said that the square above is “almost certainly the one with the lowest constant possible for such a square.”

It’s easy to verify that the numbers above are consecutive primes. Here’s a little Python code for the job. The function nextprime(x, i) gives the ith prime after x. We call the function with x equal to one less than the smallest entry in the square, and it prints out all nine entries.

    from sympy import nextprime

    for i in range(1, 10):
        print( nextprime(1480028128, i) )
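
You could also check the magic property itself with a few lines, say something like this quick sketch; the entries are typed in from the square above.

    rows = [[1480028201, 1480028129, 1480028183],
            [1480028153, 1480028171, 1480028189],
            [1480028159, 1480028213, 1480028141]]

    sums  = [sum(row) for row in rows]                        # row sums
    sums += [sum(row[j] for row in rows) for j in range(3)]   # column sums
    sums += [sum(rows[i][i] for i in range(3)),               # main diagonal
             sum(rows[i][2 - i] for i in range(3))]           # anti-diagonal
    print(len(set(sums)) == 1)                                # True if all eight sums agree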

If you’re looking for more of a challenge, verify whether Gardner’s assertion was correct that the square above uses the smallest possible set of consecutive primes.

By the way, assuming Harry Nelson’s Cray was the Y-MP model that came out in 1988, here are its specs according to Wikipedia:

The Y-MP could be equipped with two, four or eight vector processors, with two functional units each and a clock cycle time of 6 ns (167 MHz). Peak performance was thus 333 megaflops per processor. Main memory comprised 128, 256 or 512 MB of SRAM.

Related posts:

Magic squares from a knight’s tour and a king’s tour.


Golden strings and the rabbit constant

Golden strings are analogous to Fibonacci numbers, except one uses concatenation rather than addition.

Start with s_1 = “1” and s_2 = “10”. Then define s_n = s_{n-1} + s_{n-2}, where “+” means string concatenation.

The first few golden strings are

  • “1”
  • “10”
  • “101”
  • “10110”
  • “10110101”

The length of s_n is F_{n+1}, the (n+1)st Fibonacci number. Also, s_n contains F_n 1’s and F_{n-1} 0’s. (Source: The Glorious Golden Ratio).

If we interpret s_n as the fractional part of a binary number, the sequence converges to the rabbit constant R = 0.7098034428612913…

It turns out that R is related to the golden ratio φ by

R = \sum_{i=1}^\infty 2^{-\lfloor i \phi \rfloor}

where ⌊i φ⌋ is the largest integer no greater than iφ.
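
Here’s a quick numerical check of that formula, a small sketch in ordinary floating point; a few dozen terms are more than enough for double precision.

    from math import floor, sqrt

    phi = (1 + sqrt(5)) / 2
    R = sum(2.0**(-floor(i*phi)) for i in range(1, 60))
    print(R)   # 0.70980344286..., matching the constant above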

Here’s a little Python code to print out the first few golden strings and an approximation to the rabbit constant.

    # In current SymPy, mpmath is a separate package: from mpmath import mp, fraction
    from sympy.mpmath import mp, fraction

    a = "1"
    b = "10"
    for i in range(10):
        b, a = b+a, b
        print(b)

    n = len(b)
    mp.dps = n
    denom = 2**n
    num = int(b, 2)

    rabbit = fraction(num, denom)
    print(rabbit)

Note that the code sets the number of decimal places, mp.dps, to the length of the string b. That’s because it takes up to n decimal places to exactly represent a rational number with denominator 2^n.


SymPy book

There’s a new book on SymPy, a Python library for symbolic math.

The book is Instant SymPy Starter by Ronan Lamy. As far as I know, this is the only book just on SymPy. It’s only about 50 pages, which is nice. It’s not a replacement for the online documentation but just a quick start guide.

The online SymPy documentation is good, but I think it would be easier to start with this book. And although I’ve been using SymPy off and on for a while, I learned a few things from the book.

 

Need a 12-digit prime?

You may have seen the joke “Enter any 12-digit prime number to continue.” I’ve seen it floating around as the punchline in several contexts.

So what do you do if you need a 12-digit prime? Here’s how to find the smallest one using Python.

>>> from sympy import nextprime
>>> nextprime(10**11)
100000000003L

The function nextprime gives the smallest prime larger than its argument. (Note that the smallest 12-digit number is 10^11, not 10^12. Great opportunity for an off-by-one mistake.)

Optionally you can provide an additional argument to nextprime to get primes further down the list. For example, this gives the second prime larger than 10^11.

>>> nextprime(10**11, 2)
100000000019L

What if you wanted the largest 12-digit prime rather than the smallest?

>>> from sympy import prevprime
>>> prevprime(10**12)
999999999989L

Finally, suppose you want to know how many 12-digit primes there are. SymPy has a function primepi that returns the number of primes less than its argument. Unfortunately, it fails for large arguments. It works for arguments as big as 2**27 but throws a memory error for 2**28.

The number of primes less than n is approximately n / log n, so there are about 32 billion primes between 10^11 and 10^12. According to Wolfram Alpha, the exact number of 12-digit primes is 33,489,857,205. So if you try 12-digit numbers at random, your chances are about 1 in 30 of getting a prime. If you’re clever enough to just pick odd numbers, your chances go up to 1 in 15.
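
The 32 billion estimate is just the difference of the n / log n approximations at the two endpoints:

    from math import log
    print(10**12/log(10**12) - 10**11/log(10**11))   # roughly 3.2e10, i.e. about 32 billion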


Recognizing numbers

I was playing around with SymPy, a symbolic math package for Python, and ran across nsimplify. It takes a floating point number and tries to simplify it: as a fraction with a small denominator, square root of a small integer, an expression involving famous constants, etc.

For example, suppose some calculation returned 4.242640687119286 and you suspect there’s something special about that number. Here’s how you might test where it came from.

>>> from sympy import *
>>> nsimplify(4.242640687119286)
3*sqrt(2)

Maybe you do a calculation numerically, find a simple expression for the result, and that suggests an analytical solution.

I think a more common application of nsimplify might be to help you remember half-forgotten formulas. For example, maybe you’re rusty on your trig identities, but you remember that cos(π/6) is something special.

>>> nsimplify(cos(pi/6))
sqrt(3)/2

Or to take a more advanced example, suppose that you vaguely remember that the gamma function takes on recognizable values at half integer values, but you don’t quite remember how. Maybe something involving π or e. You can suggest that nsimplify include expressions with π and e in its search.

>>> nsimplify(gamma(3.5), constants=[pi, E])
15*sqrt(pi)/8

You can also give nsimplify a tolerance, asking it to find a simple representation within a neighborhood of the number. For example, here’s a way to find approximations to π.

>>> nsimplify(pi, tolerance=1e-5)
355/113

With a wider tolerance, it will return a simpler approximation.

>>> nsimplify(pi, tolerance=1e-2)
22/7

Finally, here’s higher precision approximation to π that isn’t exactly simple:

>>> nsimplify(pi, tolerance=1e-7)
exp(141/895 + sqrt(780631)/895)


Rolling dice for normal samples: Python version

A handful of dice can make a decent normal random number generator, good enough for classroom demonstrations. I wrote about this a while ago.

My original post included Mathematica code for calculating how close to normal the distribution of the sum of the dice is. Here I’d like to redo the code in Python to show how to do the same calculations using SymPy. [Update: I’ll also give a solution that does not use SymPy and that scales much better.]

If you roll five dice and add up the spots, the probability of getting a sum of k is the coefficient of x^k in the expansion of

(x + x^2 + x^3 + x^4 + x^5 + x^6)^5 / 6^5.

Here’s code to find the probabilities by expanding the polynomial and taking coefficients.

    from sympy import Symbol

    sides = 6
    dice = 5
    rolls = range( dice*sides + 1 )

    # Tell SymPy that we want to use x as a symbol, not a number
    x = Symbol('x')

    # p(x) = (x + x^2 + ... + x^m)^n
    # where m = number of sides per die
    # and n = number of dice
    p = sum([x**i for i in range(1, sides + 1)])**dice

    # Extract the coefficients of p(x) and divide by sides**dice
    pmf = [sides**(-dice) * p.expand().coeff(x, i) for i in rolls]

If you’d like to compare the CDF of the dice sum to a normal CDF you could add this.

    from numpy import array, sqrt
    from scipy.stats import norm

    cdf = array(pmf).cumsum()

    # Normal CDF for comparison
    mean = 0.5*(sides + 1)*dice
    variance = dice*(sides**2 -1)/12.0
    temp = [norm.cdf(i, mean, sqrt(variance)) for i in rolls]
    norm_cdf = array(temp)

    diff = abs(cdf - norm_cdf)
    # Print the maximum error and where it occurs
    print(diff.max(), diff.argmax())

Question: Now suppose you want a better approximation to a normal distribution. Would it be better to increase the number of dice or the number of sides per dice? For example, would you be better off with 10 six-sided dice or 5 twelve-sided dice? Think about it before reading the solution.

Update: The SymPy code does not scale well. When I tried the code with 50 six-sided dice, it ran out of memory. Based on Andre’s comment, I rewrote the code using polypow. SymPy offers much more symbolic calculation functionality than NumPy, but in this case NumPy contains all we need. It is much faster and it doesn’t run out of memory.

    from numpy.polynomial.polynomial import polypow
    from numpy import ones

    sides = 6
    dice = 100

    # Create an array of polynomial coefficients for
    # x + x^2 + ... + x^sides
    p = ones(sides + 1)
    p[0] = 0

    # Extract the coefficients of p(x)**dice and divide by sides**dice
    pmf = sides**(-dice) * polypow(p, dice)
    cdf = pmf.cumsum()

That solution works for up to 398 dice. What’s up with that? With 399 dice, the largest polynomial coefficient overflows. If we divide by the number of dice before raising the polynomial to the power dice, the code becomes a little simpler and scales further.

    p = ones(sides + 1)
    p[0] = 0
    p /= sides
    pmf = polypow(p, dice)
    cdf = pmf.cumsum()

I tried this last approach on 10,000 dice with no problem.


Suffix primes

MathUpdate tweeted this afternoon that

Any number made by removing the first n digits of 646216567629137 is still prime.

and links to sequence A012885 in the Online Encyclopedia of Integer Sequences (OEIS). The OEIS heading for the sequence is

Suffixes of 357686312646216567629137 (all primes)

which implies you can start with an even larger number, cutting off the first digit each time and producing a sequence of primes.

The following Python code verifies that this is indeed the case.

    from sympy.ntheory import isprime

    x = "357686312646216567629137"

    while x:
        print(isprime(int(x)))
        x = x[1:]

Update: lucio wrote a program to show that the prime given here is the longest one with the suffix property.

    def extend_prime(n, result):
        for i in range(10):
            nn = int(str(i) + str(n))
            if nn == n: continue
            if isprime(nn):
                result.append(nn)
                extend_prime(nn, result)
        return result        

    print("Max Prefix Prime:", max(extend_prime("", [])))

One minor suggestion: by using range(1, 10) rather than range(10) above, i.e. eliminating 0, the line if nn == n: continue could be eliminated.

Instead of calling max, you could call len to find that there are 4260 suffix primes.
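
Putting those two suggestions together, the search could be written like this, a sketch that keeps the original recursive structure:

    from sympy.ntheory import isprime

    def extend_prime(n, result):
        for i in range(1, 10):
            nn = int(str(i) + str(n))
            if isprime(nn):
                result.append(nn)
                extend_prime(nn, result)
        return result

    suffix_primes = extend_prime("", [])
    print(len(suffix_primes))   # 4260
    print(max(suffix_primes))   # 357686312646216567629137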

Here’s a list of all the suffix primes, created by running the code above and sorting the output.

