The most common way to test whether a large number is prime is the **Miller-Rabin test**. If the test says a number is composite, it’s definitely composite. Otherwise the number is very likely, but not certain, to be prime. A **pseudoprime** is a composite number that slips past the Miller-Rabin test. (Actually, a *strong* pseudoprime. More on that below.)

## Miller-Rabin test

The Miller-Rabin test is actually a sequence of tests, one for each prime number: first you run the test associated with 2, then the test associated with 3, then the one associated with 5, and so on. If we knew the smallest number for which these tests fail, then for any smaller number we would know for certain that it’s prime if it passes. In other words, we can turn the Miller-Rabin test for *probable* primes into a test for *provable* primes.

## Lower bound on failure

A recent result by Yupeng Jiang and Yingpu Deng finds the smallest number for which the Miller-Rabin test fails for the first nine primes. This number is

*N* = 3,825,123,056,546,413,051

or more than 3.8 quintillion. So if a number passes the first nine Miller-Rabin tests, and it’s less than *N*, then it’s prime. Not just a probable prime, but definitely prime. For a number *n* < *N*, this will be more efficient than running previously known deterministic primality tests on *n*.

## Python implementation

Let’s play with this in Python. The SymPy library implements the Miller-Rabin test in a function `mr`.

The following shows that *N* is composite, and that it is a false positive for the first nine Miller-Rabin tests.

```python
from sympy.ntheory.primetest import mr

N = 3825123056546413051
assert(N == 149491*747451*34233211)
ps = [2, 3, 5, 7, 11, 13, 17, 19, 23]
print( mr(N, ps) )
```

This doesn’t prove that *N* is the *smallest* number with these properties; we need the proof of Jiang and Deng for that. But assuming their result is right, here’s an efficient deterministic primality test that works for all *n* less than *N*.

```python
def is_prime(n):
    N = 3825123056546413051
    assert(n < N)
    ps = [2, 3, 5, 7, 11, 13, 17, 19, 23]
    return mr(n, ps)
```

Jiang and Deng assert that *N* is also the smallest composite number to slip by the first 10 and 11 Miller-Rabin tests. We can show that *N* is indeed a strong pseudoprime for the 10th and 11th primes, but not for the 12th prime.

```python
print( mr(N, [29, 31]) )
print( mr(N, [37]) )
```

This code prints `True` for the first test and `False` for the second. That is, *N* is a strong pseudoprime for bases 29 and 31, but not for 37.

## Pseudoprimes and strong pseudoprimes

Fermat’s little theorem says that if *n* is prime, then

*a*^{n−1} = 1 mod *n*

for all 0 < *a* < *n*. This gives a necessary but not sufficient test for primality. A (Fermat) pseudoprime for base *a* is a composite number *n* for which the congruence above holds; such numbers show why the test is not sufficient.
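As a sketch, Fermat’s test amounts to a single modular exponentiation, which Python’s built-in three-argument `pow` computes efficiently. (The function name `fermat_test` below is my own, not from any library.)

```python
def fermat_test(n, a):
    # Returns False if n is definitely composite by Fermat's test for base a;
    # True only means n is a *probable* prime for that base.
    return pow(a, n - 1, n) == 1

# 341 = 11 * 31 is the smallest Fermat pseudoprime for base 2:
# fermat_test(341, 2) returns True even though 341 is composite,
# while fermat_test(341, 3) correctly returns False.
```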

The Miller-Rabin test refines Fermat’s test by looking at additional necessary conditions for a number being prime. Often a composite number will fail one of these conditions, but not always. The composite numbers that slip by are called *strong* pseudoprimes or sometimes Miller-Rabin pseudoprimes.

Miller and Rabin’s extra testing starts by factoring *n* − 1 into 2^{s}*d* where *d* is odd. If *n* is prime, then for all 0 < *a* < *n* either

*a*^{d} = 1 mod *n*

or

*a*^{2^{k}d} = −1 mod *n*

for some *k* satisfying 0 ≤ *k* < *s*. If one of these two conditions holds for a particular *a*, then *n* passes the Miller-Rabin test for the base *a*.

It wouldn’t be hard to write your own implementation of the Miller-Rabin test. You’d need a way to work with large integers and to compute modular exponents, both of which are included in Python without having to use SymPy.
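For example, a from-scratch version might look like the following. This is a sketch of the standard algorithm, not SymPy’s actual implementation, and the name `miller_rabin` is my own.

```python
def miller_rabin(n, bases):
    # Returns False if n is definitely composite,
    # True if n passes the Miller-Rabin test for every base given.
    if n < 2:
        return False
    # Factor n - 1 as 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for a in bases:
        if a % n == 0:
            continue  # base is a multiple of n; test says nothing
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue  # passes for this base
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break  # found k with a^(2^k d) = -1 mod n
        else:
            return False  # neither condition held: definitely composite
    return True
```

Note that the inner loop only needs to find *one* squaring that hits −1 mod *n*; if none does, *n* is composite.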

## Example

561 is a pseudoprime for base 2. In fact, 561 is a pseudoprime for every base relatively prime to 561, i.e. it’s a Carmichael number. But it is not a strong pseudoprime for 2 because 560 = 16 × 35, so *d* = 35 and

2^{35} = 263 mod 561,

which is not congruent to 1 or to -1. In Python,

```python
>>> pow(2, 560, 561)
1
>>> pow(2, 35, 561)
263
```

Jiang & Deng’s paper seems to be behind a paywall, but a preprint is available at arXiv: https://arxiv.org/abs/1207.0063

Your explanation of the Miller-Rabin test says that we need a^{2^kd}=-1 for *all* k while actually it requires only a^{2^kd}=-1 for *some* k.

In particular, for a=2 and n=561 the problem is that 2^{4·35} ≡ 67 (mod 561) while 2^{8·35} ≡ 1 (mod 561).

I have deterministic Miller-Rabin code in plain Python, with no imports, here: https://gist.github.com/PM2Ring/2c9f988b5537044f5e2cdfb0bf7f1df9 It works for all n < 3317044064679887385961981.

On a related note, here's a Python implementation of Bradley Berg's M-R algorithm which can prime test any 32 bit integer with a single SPRP (strong probable prime) test, with the base selected by a simple hash function. https://gist.github.com/PM2Ring/98df2ee730c4d250a52434523cc080fb It actually works beyond that range: the first false positive is 4302347941 = 46381 * 92761. My code uses trial division of small primes for efficiency reasons, but Bradley's algorithm works correctly without the trial division.

This hash-based technique was devised by Steve Worley, who discovered numerous useful SPRP sets, see https://miller-rabin.appspot.com/