What is proof-of-work?

The idea of proof of work (PoW) was first explained in a paper by Cynthia Dwork and Moni Naor [1], though the term “proof of work” came later [2]. It was first proposed as a way to deter spam, but it’s better known these days through its association with cryptocurrency.

If it cost more to send email, even a fraction of a cent per message, that could be enough to deter spammers. So suppose you want to charge anyone $0.005 to send you an email message. You don’t actually want to collect their money, you just want proof that they’d be willing to spend something to email you. You’re not even trying to block robots, you just want to block cheap robots.

So instead of asking for a micropayment, you could ask the sender to solve a puzzle, something that would require around $0.005 worth of computing resources. If you’re still getting too much spam, you could raise your rate by giving them a harder puzzle.

Dwork and Naor list several possible puzzles. The key is to find a puzzle that takes a fair amount of effort to solve but whose solution is easy to verify.

Bitcoin uses hash problems for proof-of-work puzzles. Cryptographic hash functions are difficult to predict, and so you can’t do much better than brute force search if you want to come up with input whose hashed value has a specified pattern.

The goal is to add a fixed amount of additional text to a message such that when the hash function is applied, the resulting value is in some narrow range, such as requiring the first n bits to be zeros. The number n could be adjusted over time as needed to calibrate the problem difficulty. Verifying the solution requires computing only one hash, but finding the solution requires computing 2^n hashes on average.
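Here’s a minimal sketch in Python of what such a puzzle could look like, using SHA-256. The function name and message format here are made up for illustration:

    import hashlib
    from itertools import count

    def solve_puzzle(message, n):
        # Search for a nonce such that SHA-256(message + nonce)
        # has its first n bits equal to zero.
        # Expected work: about 2**n hashes. Verification: one hash.
        for nonce in count():
            digest = hashlib.sha256(f"{message}{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - n) == 0:
                return nonce

    nonce = solve_puzzle("hello", 16)   # about 2**16 ≈ 65,000 hashes on average
    print(nonce)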


[1] Cynthia Dwork and Moni Naor (1993). “Pricing via Processing, or, Combatting Junk Mail”. Advances in Cryptology, CRYPTO ’92: Lecture Notes in Computer Science No. 740. Springer: 139–147.

[2] Markus Jakobsson and Ari Juels (1999). “Proofs of Work and Bread Pudding Protocols”. Communications and Multimedia Security. Kluwer Academic Publishers: 258–272.

Continued fraction cryptography

Every rational number can be expanded into a continued fraction with positive integer coefficients. And the process can be reversed: given a sequence of positive integers, you can make them the coefficients in a continued fraction and reduce it to a simple fraction.

In 1952, Arthur Porges published a one-page article [1] pointing out that continued fractions could be used to create a cipher. To encrypt your cleartext, convert it to a list of integers, use them as continued fraction coefficients, and report the resulting fraction. To decrypt, expand the fraction into a continued fraction and convert the coefficients back to text.

We can implement this in Mathematica as follows:

    
    encode[text_] := FromContinuedFraction[ ToCharacterCode[ text ]]
    decode[frac_] := FromCharacterCode[ ContinuedFraction[ frac ]]

So, for example, suppose we want to encrypt “adobe.” If we convert each letter to its ASCII code we get {97, 100, 111, 98, 101}. When we make these numbers coefficients in a continued fraction we get

97 + \cfrac{1}{100 + \cfrac{1}{111 + \cfrac{1}{98 + \cfrac{1}{101}}}}

which reduces to 10661292093 / 109898899.
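We can verify the calculation in Python as well. Here’s a sketch assuming sympy’s continued fraction utilities, which reverse each other:

    from sympy import Rational, continued_fraction_reduce, continued_fraction_iterator

    coeffs = [ord(c) for c in "adobe"]         # [97, 100, 111, 98, 101]
    frac = continued_fraction_reduce(coeffs)   # encode
    print(frac)                                # 10661292093/109898899

    decoded = continued_fraction_iterator(Rational(10661292093, 109898899))
    print("".join(chr(int(a)) for a in decoded))   # adobe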

This isn’t a secure encryption scheme by any means, but it’s a cute one. It’s more of an encoding scheme, a (non-secret) way to convert a string of numbers into a pair of numbers.


[1] Arthur Porges. A Continued Fraction Cipher. The American Mathematical Monthly, Vol. 59, No. 4 (Apr., 1952), p. 236

Fermat’s factoring trick and cryptography

Many encryption algorithms rely on the difficulty of factoring a large number n. If you want to make n hard to factor, you want it to have only two factors. Otherwise, the more factors n has, the smaller the smallest factor must be.

So if you want n to be the product of two large primes, p and q, you want to pick these primes to be roughly the same size so that the smaller factor is as large as possible. If you’re limited on the size of n, then you want p and q to be roughly of size √n. But not too close to √n. You may see in a description of a cryptographic algorithm, such as RSA, “Pick two large primes p and q, but not too close together, …” Why is that?

The answer goes back to Fermat (1607–1665). His factoring trick is to start with an odd composite n and look for numbers a and b such that

n = a² – b²

because if you can do that, then

n = (a + b)(a – b).

This trick always works [1], but it’s only practical when the factors are close together. If they are close together, you can do a brute force search for a and b. But otherwise you’re better off doing something else.

Small example

To give an example, suppose we want to factor n = 12319. Then √n = 110.99, so we can start looking for a and b by trying a = 111. We increment a until a² – n is a square, b².

Now 111² – 12319 = 2, so that didn’t work. But 112² – 12319 = 225, which is a square, and so we take a = 112 and b = 15. This tells us p = 112+15 = 127 and q = 112 – 15 = 97.

Larger example with Python code

Now let’s do a larger example, too big to do by hand and more similar to a real application.

    from sympy import sqrt, log, ceiling, Integer

    def is_square(n):
        # sympy's sqrt returns an Integer exactly when n is a perfect square
        return type(sqrt(n)) == Integer

    def fermat_factor(n):
        # Start searching at the ceiling of √n, evaluated with
        # enough digits of precision to round correctly.
        num_digits = int(log(n, 10).evalf() + 1)
        a = ceiling( sqrt(n).evalf(num_digits) )

        # Increment a until a² - n is a perfect square b².
        counter = 0
        while not is_square(a*a - n):
            a += 1
            counter += 1

        b = sqrt(a*a - n)
        return a + b, a - b, counter

    p = 314159200000000028138418196395985880850000485810513
    q = 314159200000000028138415196395985880850000485810479
    print( fermat_factor(p*q) )

This recovers p and q in 3,580 iterations. Note that the difference between p and q is large in absolute terms, approximately 3 × 10^27, but small relative to p and q.


[1] If n = pq, then you can set a = (p + q)/2 and b = (p – q)/2.

 

Format-preserving encryption (FPE) for privacy

The idea of format-preserving encryption is to encrypt data while keeping its form, a sort of encryption in kind. An encrypted credit card number would look like a credit card number, a string of text would be replaced with a string of text, etc.

Format-preserving encryption (FPE) is useful in creating a test or demo database. You want realistic data without having accurate data (at least for sensitive data fields), and using FPE on real data might be the way to go.

If a field is supposed to contain a 10-digit number, say a phone number, you want the test data to also contain a 10-digit number. Otherwise validation software might break. And if that number is a key that links tables together, simply substituting a random number would break the relationships unless the same random replacement was used everywhere. Also, two clear numbers could be replaced with the same randomly chosen value. FPE would be a simple way to avoid these problems.
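As a toy illustration, here’s a sketch of a generic Feistel construction over 10-digit numbers. This is not NIST’s FF1 or FF3 algorithms, just the general idea of a keyed permutation that stays inside a fixed format; the key, round count, and round function are made up:

    import hashlib
    import hmac

    M = 10**5   # each half of a 10-digit number is a 5-digit number

    def F(key, i, r):
        # Round function: a keyed pseudorandom value in [0, M)
        data = f"{i}:{r}".encode()
        return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big") % M

    def encrypt(n, key, rounds=10):
        left, right = divmod(n, M)
        for i in range(rounds):
            left, right = right, (left + F(key, i, right)) % M
        return left * M + right

    def decrypt(n, key, rounds=10):
        left, right = divmod(n, M)
        for i in reversed(range(rounds)):
            left, right = (right - F(key, i, left)) % M, left
        return left * M + right

    key = b"demo key"
    c = encrypt(3105551234, key)
    assert decrypt(c, key) == 3105551234
    assert 0 <= c < 10**10   # still in the 10-digit range (zero-pad when formatting)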

FPE is a two-edged sword. It may be desirable to preserve formatting, but it could also cause problems. Using any form of encryption, format-preserving or not, to preserve the database structure could reveal information you don’t intend to reveal.

It’s quite possible to encrypt data and still compromise privacy. If you encrypt data, but not metadata, then you might keep enough information to re-identify individuals. For example, if you encrypt someone’s messages but retain the time stamp on the messages, that might be enough to identify that person.

The meaning of “format-preserving” can vary, and that could inadvertently leak information. What does it mean to encrypt English text in a format-preserving way? It could mean that English words are replaced with English words. If this is done simplistically, then the number of words in the clear text is revealed. If a set of English words is replaced with a variable number of English words, you’re still revealing that the original text was English.

FPE may not reveal anything that wasn’t already known. If you know that a field in a database contains a 9-digit number, then encrypting it as a 9-digit number doesn’t reveal anything per se. But it might be a problem if it reveals that two numbers are the same number.

What about errors? What happens if a field that is supposed to be a 9-digit number isn’t? Maybe it contains a non-digit character, or it contains more than 9 digits. The encryption software should report an error. But if it doesn’t, maybe the encrypted output is not a 9-digit number, revealing that there was an error in the input. Maybe that’s a problem, maybe not. Depends on context.

 

Three applications of Euler’s theorem

Fermat’s little theorem says that if p is a prime and a is not a multiple of p, then

a^(p-1) = 1 (mod p).

Euler’s generalization of Fermat’s little theorem says that if a is relatively prime to m, then

a^φ(m) = 1 (mod m)

where φ(m) is Euler’s so-called totient function. This function counts the number of positive integers less than m and relatively prime to m. For a prime number p, φ(p) = p-1, and so Euler’s theorem generalizes Fermat’s theorem.

Euler’s totient function is multiplicative, that is, if a and b are relatively prime, then φ(ab) = φ(a) φ(b). We will need this fact below.

This post looks at three applications of Fermat’s little theorem and Euler’s generalization:

  • Primality testing
  • Party tricks
  • RSA public key encryption

Primality testing

The contrapositive of Fermat’s little theorem is useful in primality testing: if the congruence

a^(p-1) = 1 (mod p)

does not hold, then either p is not prime or a is a multiple of p. In practice, a is much smaller than p, and so one can conclude that p is not prime.

Technically this is a test for non-primality: it can only prove that a number is not prime. For example, if 2^(p-1) is not congruent to 1 (mod p) then we know p is not a prime. But if 2^(p-1) is congruent to 1 (mod p) then all we know is that we haven’t failed the test; it’s still conceivable that p is prime. So we try another value of a, say 3, and see whether 3^(p-1) is congruent to 1 (mod p).

If we haven’t disproved that p is prime after several attempts, we have reason to believe p is probably prime [1]. There are pseudoprimes, a.k.a. Carmichael numbers, that are not prime but pass the primality test above for all values of a relatively prime to them. But these numbers are much less common than primes.

By the way, if p is a huge number, say with hundreds or thousands of digits, doesn’t it seem odd that we would want to compute numbers to the power p? Actually computing a^p would be impossible. But because we’re computing mod p, this is actually easy. We can apply the fast exponentiation algorithm and take remainders by p at every step, so we’re never working with numbers more than twice as long as p.
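Here’s a sketch of the square-and-multiply idea. In practice you’d just call Python’s built-in three-argument pow, which does the same thing:

    def pow_mod(a, k, p):
        # Square-and-multiply: compute a**k mod p while keeping
        # intermediate results smaller than p².
        result = 1
        a = a % p
        while k:
            if k & 1:
                result = result * a % p
            a = a * a % p
            k >>= 1
        return result

    # Fermat test with witness a = 2, equivalent to pow(2, n-1, n)
    n = 341   # 11 * 31, a base-2 pseudoprime
    print(pow_mod(2, n - 1, n))   # 1, so this test alone does not rule out 341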

Fifth root party trick

A few days ago I wrote about the fifth root party trick. If someone raises a two-digit number to the fifth power, you can quickly tell what the number was. Part of what makes the trick work is that in base 10, any number n and its fifth power end in the same digit. You can prove this by trying all 10 possible last digits, but if you want to generalize the trick to other bases, it helps to use Euler’s theorem. For example, you could use 9th powers in base 15.

Euler’s theorem shows why raising a to the power  φ(m) + 1 in base m keeps the last digit the same, but only if a is relatively prime to m. To extend the fifth root trick to other bases you’ll have a little more work to do.

RSA encryption

The original [2] RSA public key cryptography algorithm was a clever use of Euler’s theorem.

Search for two enormous prime numbers p and q [3]. Keep p and q private, but make n = pq public. Pick a private key d and solve for a public key e such that de = 1 (mod φ(n)).

Since you know p and q, you can compute φ(n) = (p – 1)(q – 1), and so you can compute the public key e. But someone who doesn’t know p and q, but only their product n, will have a hard time solving for d from knowing e. Or at least that’s the hope! Being able to factor n is sufficient to break the encryption scheme, but it’s not logically necessary. Maybe recovering private keys is much easier than factoring, though that doesn’t seem to be the case.

So where does Euler come in? Someone who has your public key e and wants to send you a message m computes

m^e (mod n)

and sends you the result [4]. Now, because you know d, you can take the encrypted message m^e and compute

(m^e)^d = m^(ed) = m^(kφ(n) + 1) = (m^φ(n))^k m = m (mod n)

by Euler’s theorem, since ed = kφ(n) + 1 for some integer k.

This is the basic idea of RSA encryption, but there are many practical details to implement the RSA algorithm well. For example, you don’t want p and q to be primes that make pq easier than usual to factor, so you want to use safe primes.
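Here’s a toy numerical illustration with absurdly small, made-up primes. Real keys use primes hundreds of digits long, and real implementations add padding and other safeguards:

    p, q = 61, 53
    n = p*q                    # public modulus
    phi = (p - 1)*(q - 1)      # φ(n), computable only if you can factor n
    e = 17                     # public key, relatively prime to φ(n)
    d = pow(e, -1, phi)        # private key: d*e = 1 (mod φ(n)), Python 3.8+

    m = 42                     # message, m < n
    c = pow(m, e, n)           # encrypt: m**e mod n
    assert pow(c, d, n) == m   # decrypting recovers m by Euler's theorem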


[1] Saying that a number is “probably prime” makes sense the first time you see it. But then after a while it might bother you. This post examines and resolves the difficulties in saying that a number is “probably prime.”

[2] The original RSA paper used Euler’s totient function φ(n) = (p – 1)(q – 1), but current implementations use Carmichael’s totient function λ(n) = lcm(p – 1, q – 1). Yes, same Carmichael as Carmichael numbers mentioned above, Robert Daniel Carmichael (1879–1967).

[3] How long does it take to find big primes? See here. One of the steps in the process is to weed out composite numbers that fail the primality test above based on Fermat’s little theorem.

[4] This assumes the message has been broken down into segments shorter than n. In practice, RSA encryption is used to send keys for non-public key (symmetric) encryption methods because these methods are more computationally efficient.

RSA numbers and factoring

It’s much easier to multiply numbers together than to factor them apart. That’s the basis of RSA encryption.

In particular, the RSA encryption scheme rests on the assumption that given two large primes p and q, one can quickly find the product pq but it is much harder to recover the factors p and q. For the size numbers you’ll see in math homework, say two or three digits, factoring is a little harder than multiplication. But when you get into numbers that are hundreds of digits long, factoring is orders of magnitude more difficult. Or so it seems. There’s no proof that factoring has to be as difficult as it appears to be. And sometimes products have special structure that makes factoring much easier, hence the need for so-called safe primes.

The ability to factor large products quickly is sufficient to break RSA, but we don’t know whether it’s necessary. Conceivably there is an efficient algorithm for breaking RSA encryption that does not require factoring, or that could be turned into an efficient algorithm for factoring. But no such algorithm is publicly known, and people who use RSA are betting that such an algorithm doesn’t exist.

Factoring challenges

How hard is it to factor large products? We can get some idea from contest results. In 1991, RSA Laboratories published a list of factoring challenges, the so-called RSA numbers. The smallest of these, RSA-100, was a 100-digit number that was factored shortly after the challenge was announced.

Last week, Noblis, Inc. announced that they had factored RSA-230, splitting a 230-digit number into two 115-digit primes. Because multiplication is so much easier than factoring, it’s simple to verify the result even though it was hard to produce it.

    RSA230 = 17969491597941066732916128449573246156367561808012600070888918835531726460341490933493372247868650755230855864199929221814436684722874052065257937495694348389263171152522525654410980819170611742509702440718010364831638288518852689
    p = 4528450358010492026612439739120166758911246047493700040073956759261590397250033699357694507193523000343088601688589
    q = 3968132623150957588532394439049887341769533966621957829426966084093049516953598120833228447171744337427374763106901

    if RSA230 == p*q:
        print("Check!")

A slightly larger RSA challenge problem, RSA-232, had previously been factored. The RSA numbers larger than RSA-232 have not been factored at the time of writing this post.

(RSA-232 has 232 decimal digits. But some RSA numbers are named after their length in bits and some are named after their length in digits. For example, RSA-704 is smaller than RSA-230 because the former has 704 bits and the latter has 230 decimal digits.)

Number field sieve

The expected time required to factor a semiprime (product of two primes) using the general number field sieve is O( exp(c (log n)^(1/3) (log log n)^(2/3)) ) where the value of c is a little elusive. Some sources say c = (64/9)^(1/3) ≈ 1.92, but as I understand it that value is based on a heuristic and the smallest rigorously proven value is c = 2.77.

Using the more optimistic value of c, we estimate about 10^23 operations would be necessary to factor RSA-230.
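Here’s the back-of-the-envelope arithmetic behind that estimate, using natural logarithms:

    from math import exp, log, log10

    c = 1.92            # heuristic GNFS constant (64/9)**(1/3)
    n = 10.0**230       # roughly the size of RSA-230
    L = log(n)
    ops = exp(c * L**(1/3) * log(L)**(2/3))
    print(log10(ops))   # about 23, i.e. on the order of 10**23 operations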

Shor’s quantum algorithm

In theory, Shor’s algorithm could factor semiprimes very quickly using a quantum computer. At this time, the largest number that has been factored with Shor’s algorithm is 21. So it’s not much of a threat to crypto security yet, but it has enormous potential. If a large-scale quantum computer were available, Shor’s algorithm could factor integers in O( log(n)² log(log(n)) log(log(log(n))) ) time [1].

This means that Shor’s algorithm could factor RSA-230 in about 3 million operations.


[1]  Obligatory old joke: What sound does a number theorist make when drowning? log log log …

A tale of two elliptic curves

A few days ago I blogged about the elliptic curve secp256k1 and its use in Bitcoin. This curve has a sibling, secp256r1. Note the “r” in the penultimate position rather than a “k”. Both are defined in SEC 2: Recommended Elliptic Curve Domain Parameters. Both are elliptic curves over a finite field Zₚ where p is a 256-bit prime (though a different prime for each curve).

The “k” in secp256k1 stands for Koblitz and the “r” in secp256r1 stands for random. A Koblitz elliptic curve has some special properties that make it possible to implement the group operation more efficiently. It is believed that there is a small security trade-off, that more “randomly” selected parameters are more secure. However, some people suspect that the random coefficients may have been selected to provide a back door.

Both elliptic curves are of the form y² = x³ + ax + b. In the Koblitz curve, we have

a = 0
b = 7

and in the random case we have

a = FFFFFFFF 00000001 00000000 00000000 00000000 FFFFFFFF FFFFFFFF FFFFFFFC
b = 5AC635D8 AA3A93E7 B3EBBD55 769886BC 651D06B0 CC53B0F6 3BCE3C3E 27D2604B

You can find the rest of the elliptic curve parameters in the SEC 2 report. For some help understanding what the parameters mean and how to decode them, see my earlier post.

The NSA recommended the random curve, also known as P-256, for government use as part of its Suite B of cryptography recommendations. In August 2015 the NSA announced its concern that in the future, quantum computing could render the Suite B methods insecure. As far as we know, quantum computing at scale is years, maybe decades, away. But it takes a long time to develop quality encryption methods, and so the NSA and NIST are urging people to think ahead.

Bitcoin chose to use the less popular Koblitz curve for the reasons mentioned above, namely efficiency and concerns over a possible back door in the random curve. Before Bitcoin, secp256k1 was not widely used.

Related post: RSA numbers and factoring

Schnorr groups


Schnorr groups, named after Claus Schnorr, are multiplicative subgroups of a finite field. They are designed for cryptographic applications, namely as a setting for hard discrete logarithm problems.

The application of Schnorr groups to Schnorr signatures was patented, but the patent ran out in 2008.

There has been a proposal to include Schnorr signatures in Bitcoin, but it has not been accepted at this point. (The proposal would use Schnorr signatures but over an elliptic curve rather than a Schnorr group.)

Group construction

Pick a large prime p. Then the integers mod p form a finite field. The non-zero elements form an Abelian group of order p-1 under multiplication. Now p-1 is even, hence composite, and so it can be factored into qr where q is prime. We want q to be large as well because it will be the order of our Schnorr group.

Pick any h such that h^r is not congruent to 1 mod p. Then g = h^r is the generator of our Schnorr group. Why does g have order q? A simple calculation shows

g^q \equiv h^{qr} \equiv h^{p-1} \equiv 1 \pmod p

where the last step follows from Fermat’s little theorem. This shows that the order of g divides q. Since q is prime, the order of g is either 1 or q, and by construction g is not congruent to 1 (mod p), so the order of g must be q.

Python code

Here’s a toy example using small numbers. Let p = 23, q = 11 and r = 2. Then we can pick h to be any number that is not a square root of 1 (mod 23), i.e. any number except 1 or 22. Say we pick h = 7. Then g = 7² = 49 = 3 (mod 23).

Here’s a Python script to verify our claims and print out the elements of our Schnorr group.

    from sympy import isprime
    
    p, q, r, h = 23, 11, 2, 7
    
    assert(isprime(p))
    assert(isprime(q))
    assert(p-1 == q*r)
    
    g = h**r % p
    assert(g != 1)
    assert(g**q % p == 1)
    
    for i in range(q):
        print( g**i % p )

This shows that our group elements are {1, 3, 9, 4, 12, 13, 16, 2, 6, 18, 8}.

In theory we could use the same script with much larger numbers, but in that case we wouldn’t want to print out the elements. Also, the code would take forever. We naively computed a^b (mod m) by computing a^b first, then taking the remainder modulo m. It is far more efficient to reduce modulo m along the way. That’s what Python’s three-argument pow function does.

Here’s a revised version that runs quickly with large numbers.

    from sympy import isprime
    
    p = 115792089237316195423570985008687907852837564279074904382605163141518161494337
    q = 341948486974166000522343609283189
    r = 338624364920977752681389262317185522840540224
    h = 3141592653589793238462643383279502884197
    
    assert(isprime(p))
    assert(isprime(q))
    assert(p-1 == q*r)
    
    g = pow(h, r, p)
    assert(g != 1)
    assert(pow(g, q, p) == 1)



Bitcoin key mechanism and elliptic curves over finite fields

Bitcoin uses the Elliptic Curve Digital Signature Algorithm (ECDSA) based on elliptic curve cryptography. The particular elliptic curve is known as secp256k1, which is the curve

y² = x³ + 7

over a finite field to be described shortly.

graph of elliptic curve y^2 = x^3 + 7

Addition on elliptic curves in the plane is defined geometrically in terms of where lines intersect the curve. We won’t go into the geometry here, except to say that it boils down to a set of equations involving real numbers. But we’re not working over real numbers; we’re working over a finite field.

Finite field modulus

The idea is to take the equations motivated by the geometry in the plane then use those equations to define addition when you’re not working over real numbers but over a different field. In the case of secp256k1, the field is the finite field of integers mod p where

p = 2^256 – 2^32 – 977

Here p was chosen to be relatively close to 2^256. It’s not the largest prime less than 2^256; there are a lot of primes between p and 2^256. Other factors also went into the choice of p. Note that we’re not working in the integers mod p per se; we’re working in an Abelian group whose addition law is defined by an elliptic curve over the integers mod p.
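As a quick sanity check, we can verify in Python that this formula for p matches the hexadecimal constant from the standard (the same constant appears in the code below) and that p is prime:

    from sympy import isprime

    p = 2**256 - 2**32 - 977
    assert p == 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
    assert isprime(p)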

(Update: Here’s another post about secp256k1’s sister curve, secp256r1, another curve modulo a 256-bit prime, but with different structure.)

Base point

Next, we pick a base point g on the elliptic curve. The standard defining secp256k1 says that g is

0279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798

in “compressed form” or

0479BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

in “uncompressed form”.

The base point is a specially chosen point on the elliptic curve, and so it is a pair of numbers mod p, not a single number. How do you extract x and y from these compressed or uncompressed forms?

Compressed form

The compressed form only gives x and you’re supposed to solve for y. The uncompressed form gives you x and y. However, the numbers are slightly encoded. In compressed form, the string starts with either “02” or “03” and the rest of the string is the hexadecimal representation of x. There will be two values of y satisfying

y² = x³ + 7 mod p

and the “02” or “03” tells you which one to pick. If the compressed form starts with 02, pick the even root. And if the compressed form starts with 03, pick the odd root. (The two roots add up to p, and p is odd, so one of the roots will be even and one will be odd.)
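In code, decompressing the base point might look like the following sketch. Since p = 3 (mod 4), a modular square root can be computed with a single exponentiation:

    p = 2**256 - 2**32 - 977
    x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798

    # Because p = 3 (mod 4), a square root of a mod p is a**((p+1)//4) mod p.
    y = pow((x**3 + 7) % p, (p + 1) // 4, p)
    if y % 2 != 0:       # the 02 prefix says to take the even root
        y = p - y
    print(hex(y))        # matches the y in the uncompressed form below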

Uncompressed form

The uncompressed form will always start with 04. After this follow the hexadecimal representations of x and y concatenated together.

In either case we have

x = 79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798

and

y = 483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

We can verify this with a little Python code:

    x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
    y = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
    p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
    assert((y*y - x*x*x - 7) % p == 0)

Exponentiation over elliptic curve

Starting with our base point g, define kg to be g added to itself k times. Note again that the sense of “addition” here is addition in the elliptic curve, not addition in the field of integers mod p. The key to elliptic curve cryptography is that kg can be computed efficiently, but solving for k starting from the product kg cannot. You can compute kg using the fast exponentiation algorithm, but solving for k requires computing discrete logarithms. (This is the ECDLP: Elliptic Curve Discrete Logarithm Problem.)
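Here’s a minimal sketch of double-and-add scalar multiplication on secp256k1, using the standard affine point addition formulas. This is for illustration only, with no error handling and no resistance to side-channel attacks; it needs Python 3.8+ for pow(x, -1, p):

    p = 2**256 - 2**32 - 977

    def ec_add(P, Q):
        # Add two points on y² = x³ + 7 over GF(p); None is the identity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                            # P + (-P) = identity
        if P == Q:
            m = 3 * x1 * x1 * pow(2 * y1, -1, p)   # tangent slope
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p)    # secant slope
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)

    def scalar_mult(k, P):
        # Double-and-add: computes kP in O(log k) group operations.
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
    gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
    n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    assert scalar_mult(n, (gx, gy)) is None        # the base point has order n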

Why is this called “exponentiation” and not “multiplication”? Arithmetic on the elliptic curve is commutative, and in commutative (i.e. Abelian) groups the group operation is usually denoted as addition. And repeated addition is called multiplication.

But in general group theory, the group operation is denoted as multiplication, and repeated application of the group operation is called  exponentiation. It’s conventional to use the general term “exponentiation” even though over an Abelian group it makes more sense to call it multiplication.

You undo exponentiation by taking logarithms, so the process of solving for k is called the discrete logarithm problem. The security of elliptic curve cryptography depends on the difficulty of computing discrete logarithms.

Counting bits of security

The best algorithms for solving the discrete logarithm problem in a group of size n currently require O(√n) operations. How big is n in our case?

The base point g was chosen to have a large order, and in fact its order is approximately 2^256. Specifically, the order of g written in hexadecimal is

n = FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141.

This means that we get approximately 256/2 = 128 bits of security because √(2^256) = 2^128.


A cryptographically secure random number generator

A random number generator can have excellent statistical properties and yet not be suited for use in cryptography. I’ve written a few posts to demonstrate this. For example, this post shows how to discover the seed of an LCG random number generator.

This is not possible with a secure random number generator. Or more precisely, it is not practical. It may be theoretically possible, but doing so requires solving a problem currently believed to be extremely time-consuming. (Lots of weasel words here. That’s the nature of cryptography. Security often depends on the assumption that a problem is as hard to solve as experts generally believe it is.)

Blum Blum Shub algorithm

The Blum Blum Shub algorithm for generating random bits rests on the assumption that a certain number theory problem, the quadratic residuosity problem, is hard to solve. The algorithm is simple. Let M = pq where p and q are large primes, both congruent to 3 mod 4. Pick a seed x₀ between 1 and M and relatively prime to M. Now for n ≥ 0, set

xₙ₊₁ = xₙ² mod M

and return the least significant bit of xₙ₊₁. (Yes, that’s a lot of work for just one bit. If you don’t need cryptographic security, there are much faster random number generators.)

Python implementation

Here’s some Python code to illustrate using the generator. The code is intended to be clear, not efficient.

Given two large (not necessarily prime) numbers x and y, the code below finds primes p and q for the algorithm and checks that the seed is OK to use.

    import sympy

    # super secret large numbers
    x = 3*10**200
    y = 4*10**200
    seed = 5*10**300

    def next_usable_prime(x):
        # Find the next prime congruent to 3 mod 4 following x.
        p = sympy.nextprime(x)
        while (p % 4 != 3):
            p = sympy.nextprime(p)
        return p

    p = next_usable_prime(x)
    q = next_usable_prime(y)
    M = p*q

    assert(1 < seed < M)
    assert(seed % p != 0)
    assert(seed % q != 0)

There’s a little bit of a chicken-and-egg problem here: how do you pick x, y, and seed? Well, you could use a cryptographically secure random number generator ….

Now let’s generate a long string of bits:

    # Number of random bits to generate
    N = 100000

    x = seed
    bit_string = ""
    for _ in range(N):
        x = x*x % M
        b = x % 2
        bit_string += str(b)

Testing

I did not test the output thoroughly; I didn’t use anything like DIEHARDER or PractRand as in previous posts, but just ran a couple simple tests described here.

First I look at the balance of 0’s and 1’s.

    Number of 1's: 50171
    Expected: 49683.7 to 50316.2

Then the longest run. (See this post for a discussion of expected run length.)

    Run length: 16
    Expected: 12.7 to 18.5

Nothing unusual here.

The Blums

The first two authors of Blum Blum Shub are Lenore and Manuel Blum. The third author is Michael Shub.

I had a chance to meet the Blums at the Heidelberg Laureate Forum in 2014. Manuel Blum gave a talk that year on mental cryptography that I blogged about here and followed up here. He and his wife Lenore were very pleasant to talk with.

Bayesian methods at Bletchley Park

From Nick Patterson’s interview on Talking Machines:

GCHQ in the ’70s, we thought of ourselves as completely Bayesian statisticians. All our data analysis was completely Bayesian, and that was a direct inheritance from Alan Turing. I’m not sure this has ever really been published, but Turing, almost as a sideline during his cryptoanalytic work, reinvented Bayesian statistics for himself. The work against Enigma and other German ciphers was fully Bayesian. …

Bayesian statistics was an extreme minority discipline in the ’70s. In academia, I only really know of two people who were working majorly in the field, Jimmy Savage … in the States and Dennis Lindley in Britain. And they were regarded as fringe figures in the statistics community. It’s extremely different now. The reason is that Bayesian statistics works. So eventually truth will out. There are many, many problems where Bayesian methods are obviously the right thing to do. But in the ’70s we understood that already in Britain in the classified environment.


SHA1 is no longer recommended, but hardly a failure

The web is all abuzz about how SHA1 is “broken,” “a failure,” “obsolete,” etc.

It is supposed to be computationally impractical to create two documents that have the same secure hash code, and yet Google has demonstrated that they have done just that for the SHA1 algorithm.

I’d like to make two simple observations to put this in perspective.

This is not a surprise. Cryptography experts have suspected since 2005 that SHA1 was vulnerable and recommended using other algorithms. The security community has been gleeful about Google’s announcement. They feel vindicated for telling people for years not to use SHA1.

This took a lot of work, both in terms of research and computing. Crypto researchers have been trying to break SHA1 for 22 years. And according to their announcement, these are the resources Google had to use to break SHA1:

  • Nine quintillion (9,223,372,036,854,775,808) SHA1 computations in total
  • 6,500 years of CPU computation to complete the attack first phase
  • 110 years of GPU computation to complete the second phase

While SHA1 is no longer recommended, it’s hardly a failure. I don’t imagine it would take 22 years of research and millions of CPU hours to break into my car.

I’m not saying people should use SHA1. Just a few weeks ago I advised a client not to use SHA1. Why not use a better algorithm (or at least what is currently widely believed to be a better algorithm) like SHA3? But I am saying it’s easy to exaggerate what it means to say SHA1 has failed.

 

Safe primes, Sylow theorems, and cryptography

A logarithm is the inverse of an exponential, and so we can generalize the idea of a logarithm wherever we can generalize the idea of an exponential. In particular, we can raise elements to exponents in a discrete group, and that leads to the definition of a discrete logarithm.

Diffie-Hellman public key cryptography is based on the assumption that discrete logarithms are hard to compute. There are algorithms to compute discrete logarithms that are much faster than brute force. For example, baby-step giant-step is a fairly simple algorithm. There are more efficient algorithms as well, but the best known algorithms are still much slower than raising numbers to powers. Whenever you find something that is much harder to undo than to do, it might be useful in cryptography, and that is the case with discrete logs.

Diffie-Hellman encryption requires users to compute exponentials and presumably requires attackers to compute discrete logs. I say “presumably” because it’s a fatal error in cryptography to assume an attacker has to solve the problem you think he’d have to solve. For example, you can create a simple encryption scheme by permuting the alphabet and encrypting each letter to its counterpart in the permutation. Someone might naively think “No one can break this because they’d have to try 26! permutations of the alphabet, over 400 million million million million possibilities!” Except that’s not how anyone approaches a substitution cipher. If it were, you wouldn’t see cryptograms in puzzle books.

As far as we know, discrete logarithms are hard to compute when working over integers mod p where p is a large prime, except for primes that have certain properties. We’ll look at what those properties are below and how to avoid them.

For a prime p, the integers mod p form a finite field. They are a group under addition, and the non-zero elements form a group under multiplication. It’s the multiplicative group we care about here. This group has order p-1, i.e. it has p-1 elements.

A group of prime order has no proper subgroups. But a group of composite order does. And our multiplicative group has order p-1, which is composite. (Except for p = 3, and cryptography depends on primes far, far bigger than 3.)

Sylow’s theorems tell us something about what kinds of subgroups a group must have. If s is prime and s^k is a factor of the order of our group, then the group has a subgroup of order s^k. We don’t want our multiplicative group to have any small-order subgroups because these would make it easier to compute discrete logarithms.

A safe prime p has the form 2q + 1 where q is also prime [1]. Diffie-Hellman chooses safe primes for moduli because this means the multiplicative group of order p-1 = 2q has no small subgroups. (It has two small subgroups, {1} and {1, -1}, but these can easily be avoided. The algorithm requires picking a generator g, and as long as you don’t pick g to be 1 or -1 mod p, then g generates a group of order q or 2q, and if p is gigantic, so is q.) Because q is prime, the subgroup of order q does not have any further subgroups.
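Here’s a sketch of how you might search for a safe prime with sympy. Brute force like this is far too slow for cryptographic sizes, where specialized sieving is used, but it shows the definition in action:

    from sympy import isprime, randprime

    def random_safe_prime(bits):
        # Find a prime q such that p = 2q + 1 is also prime.
        # (q is then a Sophie Germain prime and p is a safe prime.)
        while True:
            q = randprime(2**(bits - 2), 2**(bits - 1))
            p = 2*q + 1
            if isprime(p):
                return p

    print(random_safe_prime(64))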

Related post: Probability that a number is prime

[1]  If q and p = 2q + 1 are both prime, q is called a Sophie Germain prime and p is a safe prime.

Probability of secure hash collisions

A hash function maps arbitrarily long input strings to fixed-length outputs. For example, SHA-256 maps its input to a string of 256 bits. A cryptographically secure hash function h is a one-way function, i.e. given a message m it’s easy to compute h(m) but it’s not practical to go the other way, to learn anything about m from h(m). Secure hash functions are useful for message authentication codes because it is practically impossible to modify m without changing h(m).

Ideally, a secure hash is “indistinguishable from a random mapping” [1]. So if a hash function has a range of size N, how many items can we send through the hash function before we can expect two items to have the same hash value? By the pigeonhole principle, we know that if we hash N+1 items, two of them are certain to have the same hash value. But it’s likely that a much smaller number of inputs will lead to a collision, two items with the same hash value.

The famous birthday problem illustrates this. You could think of birthdays as a random mapping of people into 366 possible values [2]. In a room of fewer than 366 people, it’s possible that everyone has a different birthday. But in a group of 23 people, there are even odds that two people have the same birthday.

Variations on the birthday problem come up frequently. For example, in seeding random number generators. And importantly for this post, the birthday problem comes up in attacking hash functions.

When N is large, it is likely that hashing √N values will lead to a collision. We prove this below.

Proof

The proof below is a little informal. It could be made more formal by replacing the approximate equalities with equalities and adding the necessary little-o terms.

Suppose we’re hashing n items to a range of size N = n². The exact probability that all n items have unique hash values is given here. Taking the log of both sides gives us the first line of the proof below.

\begin{align*}
\log( \mbox{Prob(unique)}) &= \log (\Gamma(n^2)) - \log (\Gamma(n^2 - n)) - n \log( n^2 )\\
&\approx (n^2 - n) \log(n^2) - (n^2 - n) \log(n^2 - n) - n \\
&= (n^2 - n) \log\left( \frac{n^2}{n^2 -n} \right) - n \\
&= (n^2 -n) \log\left( 1 + \frac{1}{n-1} \right) - n \\
&\approx (n^2 - n) \left( \frac{1}{n-1} - \frac{1}{2}\frac{1}{(n-1)^2} \right) - n \\
&= -\frac{n}{2(n-1)} \\
&\approx -\frac{1}{2}
\end{align*}

The first approximation is based on the first three terms in the asymptotic expansion for log Γ given here, applied to both log gamma expressions. (The third terms from the two asymptotic series are the same so they cancel out.) The second line isn’t exactly what you’d get by applying the asymptotic expansion. It’s been simplified a little. The neglected terms are not a mistake but terms that can be left out because they go to zero.

The second approximation comes from using the first two terms in the power series for log(1 + x). One term isn’t enough since that would reduce to zero. The final approximation is simply taking the limit as n goes to infinity. Concretely, we’re willing to say that a billion and one divided by a billion is essentially 1.

Conclusions

So the probability of no collisions is exp(-1/2) or about 60%, which means there’s a 40% chance of at least one collision. As a rule of thumb, a hash function with range of size N can hash on the order of √N values before running into collisions.

This means that with a 64-bit hash function, there’s about a 40% chance of collisions when hashing 2^32 or about 4 billion items. If the output of the hash function is discernibly different from random, the probability of collisions may be higher. A 64-bit hash function cannot be secure since an attacker could easily hash 4 billion items. A 256-bit or 512-bit hash could in principle be secure since one could expect to hash far more items before collisions are likely. Whether a particular algorithm like SHA3-512 is actually secure is a matter for cryptologists, but it is at least feasible that a hash with a 512-bit range could be secure, based on the size of its range, while a 64-bit hash cannot be.
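Here’s the back-of-the-envelope calculation for the 64-bit case, using the birthday-bound approximation:

    from math import exp

    N = 2.0**64      # size of a 64-bit hash range
    k = 2.0**32      # number of items hashed, i.e. √N
    # P(at least one collision) is approximately 1 - exp(-k²/2N)
    print(1 - exp(-k*k/(2*N)))   # about 0.39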

Numerical calculation

We used an asymptotic argument above rather than numerically evaluating the probabilities because this way we get a more general result. But even if we were only interested in a fixed but large n, we’d run into numerical problems. This is one of those not uncommon cases where a pencil-and-paper approximation is actually more accurate than direct calculation with no (explicit) approximations.

There are numerous numerical problems with direct calculation of the collision probability. First, without taking logarithms we’d run into overflow and underflow. Second, for large enough n, n² - n = n² in floating point representation. IEEE 754 doubles have 53 bits of precision. This isn’t enough to distinguish values that differ, say, in the 128th bit. Finally, the two log gamma terms are large, nearly equal numbers. The cardinal rule of numerical analysis is to avoid subtracting nearly equal numbers. If two numbers agree to k bits, you could lose k bits of precision in carrying out their difference. See Avoiding overflow, underflow, and loss of precision for more along these lines.

 


 

Notes

[1] Cryptography Engineering by Ferguson, Schneier, and Kohno

[2] Leap year of course complicates things since February 29 birthdays are less common than other birthdays. Another complication is that birthdays are not entirely evenly distributed for the other days of the year. But these complications don’t ruin the party trick: In a room of 30 people, two people usually share a birthday.

Computing discrete logarithms with baby-step giant-step algorithm

At first “discrete logarithm” sounds like a contradiction in terms. Logarithms aren’t discrete, not as we usually think of them. But you can define and compute logarithms in modular arithmetic.

What is a logarithm? It’s the solution to an exponential equation. For example, the logarithm base 10 of 2 is the solution to the equation 10^x = 2, and so x = 0.30103. Similarly, you could look for the logarithm base 10 of 2 modulo 19. This is an integer value of x such that 10^x = 2 mod 19, and you can show that 17 is a solution.

If we work modulo an integer n, the discrete logarithm base a of a number y is an integer x such that a^x = y. For some values of a there may not be a solution, but there will be a solution if a is a generator of the integers mod n.

Brute force is the simplest algorithm for finding discrete logarithms: try x = 1, 2, 3, …, n until you find a value of x satisfying a^x = y. The problem with brute force is that the expected run time is on the order of n, and n is often very large in applications.

Discrete logarithms are expensive to compute, but we can do better than brute force. (Cryptographic applications take advantage of the fact that discrete logarithms are hard to compute.) There’s a simple algorithm by Daniel Shanks, known as the baby-step giant-step algorithm, that reduces the run time from order n to order roughly √n. (Actually O(√n log n) for reasons we’ll see soon.)

Let s be the ceiling of the square root of n. The idea is to search for x by taking giant steps forward (multiples of s) and baby (single) steps backward.

Let G be the set of pairs (a^(gs) mod n, gs) where g ranges from 1 to s. These are the giant steps.

Let B be the set of pairs (ya^b mod n, b) where b ranges from 0 to s-1. These are the baby steps.

Now look for a pair in G and a pair in B with matching first components, i.e. ya^b = a^(gs). Then x = gs - b is the required solution.

Constructing the sets G and B requires O(s) operations, and matching the two sets takes O(s log s) operations.

Here’s an example, going back to the problem above of finding the discrete log base 10 of 2 mod 19, using a little Python code to help with the calculations.

The square root of 19 is between 4 and 5 so s = 5.

     >>> [(2*10**r % 19, r) for r in range(5)]
     [(2, 0), (1, 1), (10, 2), (5, 3), (12, 4)]
     >>> [(10**(5*t) % 19, 5*t) for t in range(1,6)]
     [(3, 5), (9, 10), (8, 15), (5, 20), (15, 25)]

The number 5 appears as the first element of a pair in both B and G. We have (5, 3) in B and (5, 20) in G so x = 20 – 3 = 17.
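Here’s the same algorithm packaged as a general-purpose Python function, a sketch that assumes a solution exists. It needs Python 3.8+ for math.isqrt:

    from math import isqrt

    def bsgs(a, y, n):
        # Solve a**x = y (mod n) for x with baby steps and giant steps.
        s = isqrt(n) + 1               # ceiling of the square root of n
        # Baby steps: map y * a**b mod n back to b
        baby = {}
        v = y % n
        for b in range(s):
            baby[v] = b
            v = v * a % n
        # Giant steps: look for a**(g*s) among the baby steps
        giant = pow(a, s, n)
        v = giant
        for g in range(1, s + 1):
            if v in baby:
                return g*s - baby[v]
            v = v * giant % n
        return None                    # no solution found

    print(bsgs(10, 2, 19))             # 17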

Related: Applied number theory