Race between primes of the forms 4k + 1 and 4k + 3

The last few posts have looked at expressing an odd prime p as a sum of two squares. This is possible if and only if p is of the form 4k + 1. I illustrated an algorithm for finding the squares with p = 2²⁵⁵ − 19, a prime that is used in cryptography. It is being used in bringing this page to you if the TLS connection between my server and your browser uses Curve25519 or Ed25519.

World records

I thought about illustrating the algorithm with a larger prime too, such as a world record. But then I realized all the latest record primes have been of the form 4k + 3 and so cannot be written as a sum of squares. Why is p mod 4 equal to 3 for all the records? Are more primes congruent to 3 than to 1 mod 4? The answer to that question is subtle; more on that shortly.

More record primes are congruent to 3 mod 4 because Mersenne primes are easier to find. A Mersenne number 2ⁿ − 1 with n ≥ 2 is congruent to 3 mod 4 because 2ⁿ is divisible by 4. And Mersenne primes are easier to find because there's an algorithm, the Lucas-Lehmer test, that can test whether a Mersenne number is prime more efficiently than general-purpose primality tests. Lucas developed his test in 1878 and Lehmer refined it in 1930.
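Here's a minimal sketch of the Lucas-Lehmer test in Python. This is a toy version; serious implementations use arithmetic specialized to reduction mod 2ᵖ − 1.

def lucas_lehmer(p):
    # test whether the Mersenne number 2^p - 1 is prime, for an odd prime p
    M = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s*s - 2) % M
    return s == 0

print(lucas_lehmer(7))   # True: 2^7 - 1 = 127 is prime
print(lucas_lehmer(11))  # False: 2^11 - 1 = 2047 = 23 × 89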

Since the time Lucas first developed his test, the largest known prime has always been a Mersenne prime, with exceptions in 1951 and in 1989.

Chebyshev bias

So, are more primes congruent to 3 mod 4 than are congruent to 1 mod 4?

Define the function f(n) to be the ratio of the number of primes in each residue class.

f(n) = (# primes p < n with p = 3 mod 4) / (# primes p < n with p = 1 mod 4)

As n goes to infinity, the function f(n) converges to 1. So in that sense there are equally many primes in each category.

If we look at the difference rather than the ratio we get a more subtle story. Define the lead function to be how much the count of primes equal to 3 mod 4 leads the number of primes equal to 1 mod 4.

g(n) = (# primes p < n with p = 3 mod 4) − (# primes p < n with p = 1 mod 4)

For any n, f(n) > 1 if and only if g(n) > 0. However, as n goes to infinity the function g(n) does not converge. It oscillates between positive and negative values infinitely often. But g(n) is positive for long stretches. This phenomenon is known as Chebyshev bias.

Visualizing the lead function

We can calculate the lead function at primes with the following code.

from numpy import zeros
from sympy import primepi, primerange

N = 1_000_000
leads = zeros(primepi(N) + 1)
for index, prime in enumerate(primerange(2, N), start=1):
    # prime % 4 - 2 is +1 for primes = 3 mod 4, -1 for primes = 1 mod 4,
    # and 0 for the prime 2
    leads[index] = leads[index - 1] + prime % 4 - 2

Here is a list of the indices at which the lead function is zero, i.e. the values of n for which the two counts are tied.

[   0,     1,     3,     7,    13,    89,  2943,  2945,  2947,
 2949,  2951,  2953, 50371, 50375, 50377, 50379, 50381, 50393,
50413, 50423, 50425, 50427, 50429, 50431, 50433, 50435, 50437,
50439, 50445, 50449, 50451, 50503, 50507, 50515, 50517, 50821,
50843, 50853, 50855, 50857, 50859, 50861, 50865, 50893, 50899,
50901, 50903, 50905, 50907, 50909, 50911, 50913, 50915, 50917,
50919, 50921, 50927, 50929, 51119, 51121, 51123, 51127, 51151,
51155, 51157, 51159, 51161, 51163, 51177, 51185, 51187, 51189,
51195, 51227, 51261, 51263, 51285, 51287, 51289, 51291, 51293,
51297, 51299, 51319, 51321, 51389, 51391, 51395, 51397, 51505,
51535, 51537, 51543, 51547, 51551, 51553, 51557, 51559, 51567,
51573, 51575, 51577, 51595, 51599, 51607, 51609, 51611, 51615,
51617, 51619, 51621, 51623, 51627]

This is OEIS sequence A038691.
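Those indices can be read off the leads array computed above. A minimal sketch:

import numpy as np

# indices n where the lead function is zero, i.e. the race is tied
zero_indices = np.where(leads == 0)[0]
print(zero_indices)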

Because the lead function changes more often in some regions than others, it’s best to plot the function over multiple ranges.

The lead function is more often positive than negative. And yet it is zero infinitely often. So while the count of primes with remainder 3 mod 4 is usually ahead, the counts equal out infinitely often.

Wagon’s algorithm in Python

The last three posts have been about Stan Wagon’s algorithm for finding x and y satisfying

x² + y² = p

where p is an odd prime.

The first post in the series gives Gauss’ formula for a solution, but shows why it is impractical for large p. The bottom of this post introduces Wagon’s algorithm and says that it requires two things: finding a quadratic non-residue mod p and a variation on the Euclidean algorithm.

The next post shows how to find a quadratic non-residue.

The reason Wagon requires a non-residue is that he needs to find a square root of −1 mod p. The previous post showed how that's done.

In this post we will complete Wagon's algorithm by writing the modified version of the Euclidean algorithm.

Suppose p is an odd prime, and we’ve found x such that x² = −1 mod p as in the previous posts. The last step in Wagon’s algorithm is to apply the Euclidean algorithm to x and p and stop when the numbers are both less than √p.

When we’re working with large integers, how do we find square roots? Maybe p and even √p are too big to represent as a floating point number, so we can’t just apply the sqrt function. Maybe p is less than the largest floating point number (around 10³⁰⁸) but the sqrt function doesn’t have enough precision. Floats only have 53 bits of precision, so an integer larger than 2⁵³ cannot necessarily be represented entirely accurately.

The solution is to use the isqrt function, introduced in Python 3.8. It returns the largest integer less than or equal to the square root of its argument.
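Here's a small illustration of the precision problem. The particular value of n isn't special; it's just big enough that a float cannot represent its square root exactly.

from math import isqrt, sqrt

n = (10**20 + 1)**2
print(int(sqrt(n)) == 10**20 + 1)  # False: float sqrt loses the last digits
print(isqrt(n) == 10**20 + 1)      # True: exact integer square root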

Now we have everything necessary to finish implementing Wagon’s algorithm.

from sympy import legendre_symbol, nextprime
from math import isqrt

def find_nonresidue(p):
    # find the smallest prime that is a quadratic non-residue mod p
    q = 2
    while legendre_symbol(q, p) == 1:
        q = nextprime(q)
    return q

def my_euclidean_algorithm(a, b, stop):
    # run the Euclidean algorithm until the first component is <= stop
    while a > stop:
        a, b = b, a % b
    return (a, b)

def find_ab(p):
    assert(p % 4 == 1)
    k = p // 4                # p = 4k + 1
    c = find_nonresidue(p)
    x = pow(c, k, p)          # x is a square root of -1 mod p
    return my_euclidean_algorithm(p, x, isqrt(p))

Let’s use this to find a and b such that a² + b² = p where p = 2²⁵⁵ − 19.

>>> a, b = find_ab(p := 2**255 - 19)
>>> a
230614434303103947632580767254119327050
>>> b
68651491678749784955913861047835464643
>>> a**2 + b**2 - p
0

Finis.

Finding a square root of -1 mod p

If p is an odd prime, there is a theorem that says

x² = −1 mod p

has a solution if and only if p = 1 mod 4. When a solution x exists, how do you find it?

The previous two posts have discussed Stan Wagon’s algorithm for expressing an odd prime p as a sum of two squares. This is possible if and only if p = 1 mod 4, the same condition on p for −1 to have a square root.

In the previous post we discussed how to find c such that c does not have a square root mod p. This is most of the work for finding a square root of −1. Once you have c, set

x = cᵏ mod p

where p = 4k + 1. This works because x² = c²ᵏ and 2k = (p − 1)/2, so Euler's criterion says x² = −1 mod p when c is a non-residue.

For example, let’s find a square root of −1 mod p where p = 2²⁵⁵ − 19. We found in the previous post that c = 2 is a non-residue for this value of p.

>>> p = 2**255 - 19
>>> k = p // 4
>>> x = pow(2, k, p)

Let’s view x and verify that it is a solution.

>>> x
19681161376707505956807079304988542015446066515923890162744021073123829784752
>>> (x**2 + 1) % p
0

Sometimes you’ll see a square root of −1 mod p written as i. This makes sense in context, but it’s a little jarring at first since here i is an integer, not a complex number.

The next post completes this series, giving a full implementation of Wagon’s algorithm.

Finding a non-square mod p

The previous post briefly mentioned Stan Wagon’s algorithm for expressing an odd prime p as a sum of two squares when it is possible (i.e. when p = 1 mod 4). Wagon’s algorithm requires first finding a non-square mod p, i.e. a number c such that c ≠ d² mod p for any d in 1, 2, 3, …, p − 1.

Wagon recommends searching for c by testing consecutive primes q as candidates. You can test whether a number q is a square mod p by computing the Legendre symbol, which is implemented in SymPy as legendre_symbol(q, p). This returns 1 if q is a quadratic residue mod p and −1 if it is not.

Here’s Python code for doing the search.

from sympy import legendre_symbol, nextprime

def find_nonresidue(p):
    # test consecutive primes until one is a non-residue mod p
    q = 2
    while legendre_symbol(q, p) == 1:
        q = nextprime(q)
    return q

Let’s start with p = 2²⁵⁵ − 19. This prime comes up often in cryptography, such as Curve25519. Now p = 1 mod 4, so Wagon’s algorithm applies.

The code above returns 2, i.e. the first thing we tried worked. That was kinda disappointingly easy.

Here’s another large prime used in cryptography, in the NIST curve P-384.

p = 2³⁸⁴ − 2¹²⁸ − 2⁹⁶ + 2³² − 1

For this value of p, find_nonresidue(p) returns 19.
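Here's that computation as a quick check, assuming the find_nonresidue function defined above:

>>> p = 2**384 - 2**128 - 2**96 + 2**32 - 1
>>> find_nonresidue(p)
19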

Why test primes as candidates for non-residues? You could pick candidates at random, and any candidate works with probability 1/2, since half the integers between 1 and p − 1 are residues and half are non-residues. It’s very likely that you’d find a solution quickly, but it’s not guaranteed. In principle, you could try a thousand candidates before one works, though that’s vanishingly unlikely. If you test consecutive primes, there is a theoretical limit on how many guesses you need to make, provided the Extended Riemann Hypothesis is true.

The next post explains why we wanted to find a non-residue.

Expressing a prime as the sum of two squares

I saw where Elon Musk posted Grok’s answer to the prompt “What are the most beautiful theorems.” I looked at the list, and there were no surprises, as you’d expect from a program that works by predicting the most likely sequence of words based on analyzing web pages.

There’s only one theorem on the list that hasn’t appeared on this blog, as far as I can recall, and that’s Fermat’s theorem that an odd prime p can be written as the sum of two squares if and only if p = 1 mod 4. The “only if” direction is easy [1] but the “if” direction takes more effort to prove.

If p is a prime and p = 1 mod 4, Fermat’s theorem guarantees the existence of x and y such that

x² + y² = p

Gauss’ formula

Stan Wagon gave an algorithm for finding a pair (x, y) satisfying the equation above [2]. He also presents “a beautiful formula due to Gauss” which “does not seem to be of any value for computation.” Gauss’ formula says that if p = 4k + 1, then a solution is

\begin{align*}
x &\equiv \frac{1}{2} \binom{2k}{k} \pmod p \\
y &\equiv (2k)!\, x \pmod p
\end{align*}

For x and y we choose the residues mod p with |x| and |y| less than p/2.
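Gauss' formula is easy to verify for small primes. Here's a minimal sketch; the helper name gauss_xy is mine, and pow(2, -1, p) (modular inverse) requires Python 3.8 or later.

from math import comb, factorial

def gauss_xy(p):
    # Gauss' formula for p = 4k + 1; only practical for small p
    assert p % 4 == 1
    k = (p - 1) // 4
    x = comb(2*k, k) * pow(2, -1, p) % p  # (1/2) C(2k, k) mod p
    y = factorial(2*k) * x % p            # (2k)! x mod p
    # choose the representatives with absolute value less than p/2
    if x > p // 2: x -= p
    if y > p // 2: y -= p
    return abs(x), abs(y)

print(gauss_xy(13))  # (3, 2), and 3² + 2² = 13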

Why would Wagon say Gauss’ formula is computationally useless? The number of multiplications required is apparently on the order of p and the size of the numbers involved grows like p!.

You can get around the problem of intermediate numbers getting too large by carrying out all calculations mod p, but I don’t see a way of implementing Gauss’ formula with less than O(p) modular multiplications [3].

Wagon’s algorithm

If we want to express a large prime p as a sum of two squares, an algorithm requiring O(p) multiplications is impractical. Wagon’s algorithm is much more efficient.

You can find the details of Wagon’s algorithm in [2], but the two key components are finding a quadratic non-residue mod p (a number c such that c ≠ x² mod p for any x) and the Euclidean algorithm. Since half the numbers between 1 and p − 1 are quadratic non-residues, you’re very likely to find a non-residue after a few attempts.


[1] The square of an integer is either equal to 0 or 1 mod 4, so the sum of two squares cannot equal 3 mod 4.

[2] Stan Wagon. The Euclidean Algorithm Strikes Again. The American Mathematical Monthly, Vol. 97, No. 2 (Feb., 1990), pp. 125-129.

[3] Wilson’s theorem gives a fast way to compute (n − 1)! mod n. Maybe there’s some analogous identity that could speed up the calculation of the necessary factorials mod p, but I don’t know what it would be.


Computing large Fibonacci numbers

The previous post discussed two ways to compute the nth Fibonacci number. The first is to compute all the Fibonacci numbers up to the nth iteratively using the defining recurrence

Fₙ₊₂ = Fₙ + Fₙ₊₁

with extended precision integer arithmetic.

The second approach is to use Binet’s formula

Fₙ = round(φⁿ / √5)

where φ is the golden ratio.

It’s not clear which approach is more efficient. You might naively say that the iterative approach has run time O(n) while Binet’s formula is O(1). That doesn’t take into account how much work goes into each step, but it does suggest that eventually Binet wins.

The relative efficiency of each algorithm depends on how it is implemented. In this post I will compare using Python’s integer arithmetic and the mpmath library for floating point. Here’s my code for both methods.

from math import log10
import mpmath as mp

def fib_iterate(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def digits_needed(n):
    # number of decimal digits in the nth Fibonacci number,
    # by Binet's formula
    phi = (1 + 5**0.5) / 2
    return int(n*log10(phi) - 0.5*log10(5)) + 1

def fib_mpmath(n, guard_digits=30):
    digits = digits_needed(n)

    # Set decimal digits of working precision
    mp.mp.dps = digits + guard_digits

    sqrt5 = mp.sqrt(5)
    phi = (1 + sqrt5) / 2
    x = (phi ** n) / sqrt5

    return int(mp.nint(x))

Next, here’s some code to compare the run times.

import time

def compare(n):
    start = time.perf_counter()
    x = fib_iterate(n)
    elapsed = time.perf_counter() - start
    print(elapsed)

    start = time.perf_counter()
    y = fib_mpmath(n)
    elapsed = time.perf_counter() - start
    print(elapsed)

    if x != y:
        print("Methods produced different results.")

This code shows that the iterative approach is faster for n = 1,000 but Binet’s method is faster for n = 10,000.

>>> compare(1_000)
0.0002502090001144097
0.0009207079999669077
>>> compare(10_000)
0.0036547919999065925
0.002145750000181579

For larger n, the efficiency advantage of Binet’s formula becomes more apparent.

>>> compare(1_000_000)
11.169050417000108
2.0719056249999994

Guard digits and correctness

There is one unsettling problem with the function fib_mpmath above: how many guard digits do you need? To compute a number correctly to 100 significant figures, for example, requires more than 100 digits of working precision. How many more? It depends on the calculation. What about our calculation?

If we compute the 10,000th Fibonacci number using fib_mpmath(10_000, 2), i.e. with 2 guard digits, we get a result that is incorrect in the last digit. To compute the 1,000,000th Fibonacci number correctly, we need 5 guard digits.

We don’t need many guard digits, but we’re guessing at how many we need. How might we test whether we’ve guessed correctly? One way would be to compute the result using fib_iterate and compare results, but that defeats the purpose of using the more efficient fib_mpmath.

If floating point calculations produce an incorrect result, the error is likely to be in the least significant digits. If we knew that the last digit was correct, that would give us more confidence that all the digits are correct. More generally, we could test the result mod m. I discussed Fibonacci numbers mod m in this post.

When m = 10, the last digits of the Fibonacci numbers have a cycle of 60. So the Fibonacci numbers with index n and with index n mod 60 should end in the same digit.
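For example, using the fib_iterate function from above:

>>> n = 10_000
>>> fib_iterate(n) % 10 == fib_iterate(n % 60) % 10
True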

The post I just mentioned links to a paper by Niederreiter that says the Fibonacci numbers are evenly distributed mod m if and only if m is a power of 5, in which case the cycle length is 4m.

The following code could be used as a sanity check on the result of fib_mpmath.

def mod_check(n, Fn):
    # check Fn mod 5^k against a directly computed Fibonacci number,
    # using the fact that the Fibonacci numbers mod 5^k have period 4*5^k
    k = 3
    base = 5**k
    period = 4*base
    return Fn % base == fib_iterate(n % period) % base

With k = 3, we are checking the result of our calculation mod 125. It is unlikely that an incorrect result would be correct mod 125. It’s hard to say just how unlikely. Naively we could say there’s 1 chance in 125, but that ignores the fact that the errors are most likely in the least significant bits. The chances of an incorrect result being correct mod 125 would be much less than 1 in 125. For more assurance you could use a larger power of 5.
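For example, here's a check of the millionth Fibonacci number computed with the default number of guard digits, a sketch using the functions defined above:

>>> n = 1_000_000
>>> mod_check(n, fib_mpmath(n))
True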

Fibonacci number certificates

Suppose I give you a big number F and claim that F is a Fibonacci number. How could you confirm this?

Before I go further, let me say what this post is really about. It’s not about Fibonacci numbers so much as it is about proofs and certificates. There’s no market for large Fibonacci numbers, and certainly no need to quickly verify that a number is a Fibonacci number.

You could write a program to generate Fibonacci numbers and run it until it either produces F, in which case you know F is a Fibonacci number, or produces a number larger than F without having produced F, in which case you know F is not a Fibonacci number. But there’s a faster way.

A certificate is data that allows you to confirm a solution to a problem in less time, usually far less time, than it took to generate the solution. For example, Pratt certificates give you a way to prove that a number is prime. For a large prime, you could verify its Pratt certificate much faster than directly trying to prove the number is prime.

There is a theorem that says a number f is a Fibonacci number if and only if one of 5f² ± 4 is a perfect square. So along with F I can give you another number r as a certificate that F is a Fibonacci number. You compute

N = 5F² − r²

and if N is equal to 4 or −4, you know that F is a Fibonacci number. Otherwise it is not.
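Without a certificate, you could apply the theorem directly at the cost of computing an integer square root. A sketch; the helper name is_fibonacci is mine:

from math import isqrt

def is_fibonacci(f):
    # f is a Fibonacci number iff 5f^2 + 4 or 5f^2 - 4 is a perfect square
    for n in (5*f*f + 4, 5*f*f - 4):
        r = isqrt(n)
        if r*r == n:
            return True
    return False

print(is_fibonacci(12586269025))  # True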

Here’s a small example. Suppose I give you (12586269025, 28143753123) and claim that the first number is a Fibonacci number and the second number is its certificate. You can compute

5 × 12586269025² − 28143753123²

and get −4, verifying the claim.
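Here's that arithmetic in Python:

>>> F, r = 12586269025, 28143753123
>>> 5*F**2 - r**2
-4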

Certificates are all about the amount of computation the verifier needs to do. The prover, i.e. the person producing the certificate, has to do extra work to provide a certificate in addition to a problem solution. This trade-off is acceptable, for example, in a blockchain where a user posts one transaction but many miners will verify many transactions.


Prime gaps and Gapcoin

The previous post looked at tightly clustered primes. This post looks at the opposite, large gaps between primes.

Riecoin is a cryptocurrency that uses finding prime clusters as its proof of work task. Gapcoin uses finding prime gaps as its proof of work task.

There’s some nuance to defining prime gaps. It’s trivial to produce a gap of any size. For example, the n − 1 consecutive numbers n! + 2, n! + 3, …, n! + n contain no primes, since n! + k is divisible by k for 2 ≤ k ≤ n. It is more interesting to find gaps that are large relative to the size of the endpoints. The merit of a gap is the ratio of the gap length to the log of the initial number in the interval.
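Here's a quick check of that claim for n = 10:

from math import factorial
from sympy import primerange

n = 10
# primes between n! + 2 and n! + n inclusive: there are none
print(list(primerange(factorial(n) + 2, factorial(n) + n + 1)))  # []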

To be specific, suppose p and q are consecutive primes. The length of the gap between them is defined to be q − p and the merit of that gap is (q − p) / log p. For large p, the average gap between primes near p is log p, and so the merit function measures how large the gap is relative to what you would expect for the size of p.

The following code will compute the merit function.

>>> from sympy import nextprime, log, N
>>> merit = lambda p: (nextprime(p) - p)/log(p)

Gapcoin adjusts its mining difficulty by adjusting the minimum merit value the miner must search for. Gapcoin miners must find a prime p of the form

p = h × 2ᵃ + b

where h is the SHA256 hash of the previous block in the blockchain and b < 2ᵃ.

The prime gap with the largest known merit is [p, p + 8350] where

p = 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227

The code

>>> N(merit(p))

shows that the merit is 41.94.

This record was found by the Gapcoin network. I don’t know the backstory, but I presume the mining task wasn’t to find a world record gap. Instead, the miner got lucky and found a much larger gap than necessary.


Prime clusters and Riecoin

Prime clusters are sets of primes that appear as close together as is generally possible.

There is one pair of consecutive prime numbers, 2 and 3, but there cannot be any more: in any larger pair of consecutive numbers, one of the pair will be even. But there are a lot of twin primes, perhaps infinitely many, and so a prime cluster of size two is a pair of primes whose difference is 2.

How close together can a set of three primes be? The set {2, 3, 5} has diameter 3, i.e. the difference between the largest and smallest element is 3. And the set {3, 5, 7} has diameter 4. But in general the diameter of a set of three primes must be at least 6. If the smallest element is bigger than 3, then all the elements are odd, but they cannot be consecutive odd numbers or else one of them would be divisible by 3. But there are many prime clusters of diameter 6. For example, {13, 17, 19} and {37, 41, 43}.

Formal definition and motivation

There is some fuzziness in the discussion above regarding what is generally possible. This section will make our definitions more rigorous.

In general a pair of primes cannot be consecutive numbers because one of the pair must be even. Stated more abstractly, every pair of consecutive integers larger than {2, 3} will contain a complete residue class mod 2, i.e. one of the numbers will be congruent to 0 mod 2 and the other will be congruent to 1 mod 2.

Now let’s look at sets of three primes. The example {2, 3, 5} is exceptional because it contains a complete residue class mod 2, i.e. it contains even and odd numbers. The example {3, 5, 7} is exceptional because it contains a complete residue class mod 3.

We say that a set of k primes is a cluster if it does not contain a full residue class modulo any prime. It is enough to check primes q ≤ k, because a set of k elements trivially cannot contain all q residues modulo a prime q > k. For example, the cluster {13, 17, 19} does not contain a full set of residues mod 2 or 3, and it automatically cannot contain a full set of residues mod 5, 7, 11, ….

We say a cluster is maximally dense if it has the minimum diameter for a cluster of its number of primes.

Informally, we will call a maximally dense cluster simply a cluster. A maximally dense prime cluster is also sometimes called a prime constellation.
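Here's a sketch of a test for the cluster property; the function name is_cluster is mine, not something from Riecoin.

from sympy import isprime, primerange

def is_cluster(nums):
    # a set of k primes is a cluster if it contains no complete
    # residue class modulo any prime q <= k
    k = len(nums)
    if not all(isprime(n) for n in nums):
        return False
    for q in primerange(2, k + 1):
        if len({n % q for n in nums}) == q:
            return False
    return True

print(is_cluster([13, 17, 19]))  # True
print(is_cluster([3, 5, 7]))     # False: complete residue class mod 3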

Riecoin cryptocurrency

I’ve written several posts about the Primecoin cryptocurrency whose proof of work task is finding prime chains. The Riecoin cryptocurrency requires miners to find prime clusters.

Primecoin is far from a major cryptocurrency, with a market cap of around $2.3M. Riecoin is about four times smaller than Primecoin. There is another number-theoretic cryptocurrency, Gapcoin, whose market cap is about 10x smaller than that of Riecoin. Safe to say these three projects are of more interest to mathematicians than investment firms. All three of these prime-based cryptocurrencies were launched between 2013 and 2014.

A proof of work task should satisfy three criteria:

  1. Solutions should be difficult to find but easy to confirm.
  2. The time to find a solution should be roughly predictable.
  3. The difficulty should be adjustable.

Finding prime clusters requires a brute force search, but it’s easy to test whether a cluster has been found, and so the first property is satisfied.

The Hardy-Littlewood conjectures give an estimate of the difficulty in finding prime clusters of a given length. The difficulty can be adjusted by adjusting the required length. So the Hardy-Littlewood conjectures assist in satisfying the second and third properties. It does not matter if the conjectures are false somewhere out in asymptopia; they empirically give good estimates for the size of numbers used in mining Riecoin.

More about clusters

The definition of a maximally dense cluster depends on a function s(k) that says what the diameter of a cluster of k primes would need to be. We’ve said s(2) = 2 and s(3) = 6. More values of s are listed on the page for OEIS sequence A008407.


Efficiently testing multiple primes at once

The previous post looked at a technique for inverting multiple integers mod m at the same time, using fewer compute cycles than inverting each integer individually. This post will do something analogous for prime chains, revisiting a post from a few days ago about testing prime chains.

A prime chain is a sequence of primes in which each is twice its predecessor, plus or minus 1. In a Cunningham chain of the first kind, it’s always plus, and in a Cunningham chain of the second kind, it’s always minus.

Primecoin is a cryptocurrency that uses finding prime chains as its proof-of-work (PoW) task. The miner has a choice of finding one of three kinds of prime chain: a Cunningham chain of the first or second kind, or a bi-twin chain. The length of the necessary chain varies over time to keep the difficulty relatively constant. Other PoW blockchains do something similar.

Some people say that Primecoin has miners search for primes for PoW. That’s not quite right. Miners have to find a chain of medium-sized primes rather than finding one big prime. This leads to more predictable compute times.

There is a way to test a candidate Cunningham chain of the second kind all at once. Henri Lifchitz gives his algorithm here. Given a sequence of numbers

n₁, n₂, n₃, …, nₖ

where nᵢ = 2nᵢ₋₁ − 1 for each i and n₀ = 1 mod 4, all the numbers in the sequence are probably prime if

2^{n_{k-1} - 1} \equiv 1 \pmod{n_0 n_1 n_2 \cdots n_k}

For example, consider the chain

31029721, 62059441, 124118881

Note that 31029721 mod 4 = 1 and 31029721 = 2*15514861 − 1. The following code demonstrates that the numbers in the chain are probable primes because it prints 1.

n0 = 15514861
n1 = 2*n0 - 1    # 31029721
n2 = 2*n1 - 1    # 62059441
n3 = 2*n2 - 1    # 124118881
prod = n0*n1*n2*n3
print( pow(2, n2 - 1, prod) )    # prints 1

Next I wanted to try the algorithm on much larger numbers where its efficiency would be more apparent, as in the previous post. But when I did, the test returned a result other than 1 on a known Cunningham chain of the second kind. For example, when I change the first two lines of code above to

n1 = 49325406476*primorial(9811, False) + 1  # primorial is from sympy; 9811# = product of primes up to 9811
n0 = (n1 + 1) // 2

the code returns a large result. I verified that each of the numbers in the chain is prime using SymPy’s isprime function.

Usually a probable prime test can have false positives but never false negatives. I haven’t looked at Lifchitz’s method closely enough to tell whether it can have false negatives, but the code above suggests it can.