Using one RNG to sample another

Suppose you have two pseudorandom bit generators. They’re both fast, but not suitable for cryptographic use. How might you combine them into one generator that is suitable for cryptography?

Coppersmith et al. [1] had a simple but effective approach which they call the shrinking generator, also called irregular decimation. The idea is to use one bit stream to sample the other. Suppose the two bit streams are aᵢ and bᵢ. If aᵢ = 1, then output bᵢ. Otherwise, throw it away. In NumPy or R notation, this is simply b[a > 0].

Examples in Python and R

For example, in Python

    >>> import numpy as np
    >>> a = np.array([1,0,1,1,0,0,0,1])
    >>> b = np.array([1,0,1,0,0,1,1,0])
    >>> b[a > 0]
    array([1, 1, 0, 0])

Here’s the same example in R.

    > a = c(1,0,1,1,0,0,0,1)
    > b = c(1,0,1,0,0,1,1,0)
    > b[a>0]
    [1] 1 1 0 0

Linear Feedback Shift Registers

Coppersmith and colleagues were concerned specifically with linear feedback shift register (LFSR) streams. These are efficient sources of pseudorandom bits because they lend themselves to hardware implementation or low-level software implementation. But the problem is that linear feedback shift registers are linear. They have an algebraic structure that enables simple cryptographic attacks. But the procedure above is nonlinear and so less vulnerable to attack.

A potential problem is that the shrinking generator outputs bits at an irregular rate, and a timing attack might reveal something about the sampling sequence a unless some sort of buffering conceals this.

Other stream sources

While the shrinking generator was developed in the context of LFSRs, it seems like it could be used to combine any two PRNGs in hopes that the combination is better than the components. At a minimum, it doesn’t seem it should make things worse [2].

There is a problem of efficiency, however, because on average the shrinking generator has to generate 4n bits to output n bits: about half the bits of a are 1, so producing n output bits consumes roughly 2n bits from each of the two streams. For very efficient generators like LFSRs this isn’t a problem, but it could be a problem for other generators.

Self-shrinking generators

A variation on the shrinking generator is the self-shrinking generator. The idea is to use half the bits of a stream to sample the other half. Specifically, look at pairs of bits, and if the first bit is a 1, return the second bit. If the first bit is a 0, return nothing.
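Here’s a short NumPy sketch of the idea (a toy illustration, not a cryptographic implementation): reshape a bit stream into pairs and keep the second bit of each pair whose first bit is 1.

    import numpy as np

    bits = np.array([1,0, 0,1, 1,1, 0,0, 1,1, 0,1])
    pairs = bits.reshape(-1, 2)         # group the stream into (first, second) pairs
    print(pairs[pairs[:, 0] == 1, 1])   # keep the second bit when the first bit is 1
    # [0 1 1]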

Use in stream ciphers

The eSTREAM competition for cryptographically secure random bit generators included one entry, Decim v2, that uses shrinking generators. The competition had two categories, methods intended for software implementation and methods intended for hardware implementation. Decim was entered in the hardware category. According to the portfolio document on the competition site,

Decim contains a unique component in eSTREAM, that of irregular decimation, and is an interesting addition to the field of stream ciphers.

That sounds like it was the only method to use irregular decimation (i.e. shrinking generators).

The first version of Decim had some flaws, but the document says “no adverse cryptanalytic results are known” for the second version. Decim v2 made it to the second round of the eSTREAM competition but was not chosen as a finalist because

… the cipher doesn’t seem to deliver such a satisfying performance profile, so while there might be some very interesting elements to the Decim construction, we feel that the current proposal doesn’t compare too well to the other submissions for the hardware profile.

That seems to imply Decim might be competitive with a different implementation or in some context with different trade-offs.

Related posts

[1] Coppersmith, D., Krawczyk, H., Mansour, Y. The Shrinking Generator. Advances in Cryptology — CRYPTO ’93. Lecture Notes in Computer Science, vol. 773, pp. 22–39. Springer, Berlin.

[2] If a and b are good sources of random bits, it seems b sampled by a should be at least as good. In fact, if a is poor quality but b is good quality, b sampled by a should still be good. However, the reverse could be a problem: if b is biased, say it outputs more 1s than 0s, then even a quality sampling stream a will produce a sample biased in favor of 1s as well.
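A quick simulation of this caveat (a sketch with made-up parameters, using NumPy’s own generator just to produce test bits):

    import numpy as np

    rng = np.random.default_rng(20250101)         # arbitrary seed
    a = rng.integers(0, 2, 10**6)                 # unbiased selector stream
    b = (rng.random(10**6) < 0.6).astype(int)     # biased stream: P(1) = 0.6
    print(b[a > 0].mean())                        # still about 0.6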

Inside the AES S-box

The AES (Advanced Encryption Standard) algorithm takes in blocks of 128 bits [1] and applies a sequence of substitutions and permutations. The substitutions employ an “S-box”, named the Rijndael S-box after its designers [2], an invertible nonlinear transformation that works on 8 bits at a time.

There are 256 = 16 × 16 possible 8-bit numbers, and so the S-box can be represented as a 16 by 16 table mapping inputs to outputs. You can find the tables representing the S-box and its inverse in this text file in org-mode format.

This post will look in detail at how the entries of that table are filled. A high-level description of the design is as follows. For an 8-bit number y,

  1. Invert in GF(2⁸)
  2. Multiply by a matrix L
  3. Add a constant c.

Next we dive into what each of these steps mean. And at the end we’ll work an example in detail.

My source is The Block Cipher Companion by Knudsen and Robshaw.

Inversion in GF(2⁸)

Step 1 takes the inverse of a number as a member of the finite field GF(2⁸), a finite field with 2⁸ elements.

The number of elements in a finite field determines the field, up to isomorphism. That is, in a sense there is only one field with 2⁸ = 256 elements. In fact there are 30 different fields with 256 elements (see this post for where that number came from). The 30 fields are isomorphic, but when we’re doing actual calculations, rather than abstract theory, we need to specify which representation we’re using.

Finite fields can be specified as polynomials modulo an irreducible polynomial. To carry out our calculations we need to specify a particular irreducible 8th degree polynomial with binary coefficients. The one that AES uses is

p(x) = x⁸ + x⁴ + x³ + x + 1.

So by taking the inverse in GF(2⁸) we mean to take an 8-bit number y, interpret it as a polynomial with binary coefficients, and find another 8-bit number y⁻¹ such that when we multiply them as polynomials and take the remainder after dividing by p(x), we get the polynomial 1.

There’s one wrinkle in this procedure: only 255 of the 256 elements of GF(2⁸) have an inverse. There is no inverse for 0, but for our purposes we’ll take the inverse of 0 to be 0.

Multiplication by L

The matrix L we need here is the 8 by 8 binary matrix whose entries are

        10001111
        11000111
        11100011
        11110001
        11111000
        01111100
        00111110
        00011111

When we say to multiply y⁻¹ by L we mean to think of y⁻¹ as a vector over the field of two elements, and carry out matrix multiplication in that context.

Additive constant

The constant that we add is 0x63. The reason an additive constant was chosen was so that a zero input would not map to a zero output. Note that “addition” here is vector addition, and is carried out over the field of two elements, just as the multiplication above. This amounts to XOR of the bits.

Manual calculation example

To make everything above more concrete, we’ll calculate one entry of the table by hand.

Let’s start with input y = 0x11 = 0b10001. We represent y as the polynomial f(x) = x⁴ + 1 and look for a polynomial g(x) such that the remainder when f(x) g(x) is divided by p(x) defined above is 1.

The process of finding g is complicated—maybe I’ll blog about it in the future—but I claim

g(x) = x⁷ + x⁵ + x⁴ + x²

which means the inverse of y in GF(2⁸), represented in binary, is 0b10110100 = 0xB4. You can find a table of inverses here.

Next we multiply the matrix L by the vector made from the bits of y⁻¹. However, there is a gotcha here. When Knudsen and Robshaw say to multiply the bits of y⁻¹ by L, they assume the bits are arranged from least significant to most significant. Since the bits of y⁻¹ are 10110100, we multiply L by the vector

[0, 0, 1, 0, 1, 1, 0, 1].

This multiplication gives us the vector

[1, 0, 0, 0, 0, 1, 1, 1].

Next we add the vector formed from the bits of 0x63, again from least significant to most significant. That means we lay out 0x63 = 0b01100011 as

[1, 1, 0, 0, 0, 1, 1, 0].

This gives us the result

[0, 1, 0, 0, 0, 0, 0, 1].

Reading the bits from least significant to most, this gives 0x82.

In sum, we’ve verified that the AES S-box takes 0x11 to 0x82, as stated in the table.
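Here’s a short Python sketch (not taken from Knudsen and Robshaw, just an illustration) that carries out the same three steps and confirms the result. The multiplication by L is written out as the equivalent bit operations, with bits ordered from least significant to most significant as above.

    # Multiply two bytes as polynomials over GF(2), reducing by
    # p(x) = x^8 + x^4 + x^3 + x + 1, i.e. 0x11B.
    def gf_mul(a, b):
        product = 0
        while b:
            if b & 1:
                product ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
        return product

    # Inverse in GF(2^8) by brute force, with 0 mapped to 0 by convention.
    def gf_inv(y):
        if y == 0:
            return 0
        return next(z for z in range(1, 256) if gf_mul(y, z) == 1)

    # Apply the matrix L and add 0x63. Row i of L says that output bit i is
    # the sum (mod 2) of input bits i, i+4, i+5, i+6, i+7 (indices mod 8).
    def affine(x):
        out = 0
        for i in range(8):
            bit = (0x63 >> i) & 1
            for k in (0, 4, 5, 6, 7):
                bit ^= (x >> ((i + k) % 8)) & 1
            out |= bit << i
        return out

    def sbox(y):
        return affine(gf_inv(y))

    print(hex(gf_inv(0x11)))   # 0xb4
    print(hex(sbox(0x11)))     # 0x82
    print(hex(sbox(0x00)))     # 0x63, since 0 maps to 0 before the affine step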

Related posts

[1] The Rijndael block cipher operates on blocks whose size is any multiple of 32 bits from 128 to 256 bits. The AES standard fixed the block size at 128 bits and adopted key sizes of 128, 192, and 256 bits.

[2] “Rijndael” is a play on the designers of the cipher, Vincent Rijmen and Joan Daemen.

Between now and quantum

The National Security Agency has stated clearly that they believe this is the time to start moving to quantum-resistant encryption. Even the most optimistic enthusiasts for quantum computing believe that practical quantum computers are years away, but so is the standardization of post-quantum encryption methods.

The NSA has also made some suggestions for what to do in the meantime [1]. Last year the agency replaced its Suite B cryptography recommendations with the CNSA: Commercial National Security Algorithm Suite.

In a nutshell: use well-established methods for now but with longer keys.

In a little larger nutshell, the recommendations are:

  • SHA-384 for secure hashing
  • AES-256 for symmetric encryption
  • RSA with 3072-bit keys for digital signatures and for key exchange
  • Diffie-Hellman (DH) with 3072-bit keys for key exchange
  • ECDSA and ECDH with curve P-384 for elliptic curve signatures and key exchange

Each of these represents a 50% or 100% increase in key length:

  • from 128 to 256 for AES
  • from 256 to 384 for hashing and ECC
  • from 2048 to 3072 for RSA and DH.

If these are just stopgap measures, why not jump straight to quantum-resistant methods? There are quantum-resistant encryption methods available, but most of them haven’t been studied that long. As Koblitz and Menezes put it,

… most quantum-resistant systems that have been proposed are complicated, have criteria for parameter selection that are not completely clear, and in some cases (such as NTRU) have a history of successful attacks on earlier versions.

Some methods do have a long history but have other drawbacks. Robert McEliece’s encryption method, for example, dates back to 1978 and has held up well, but it requires a megabyte key to achieve 128-bit security. There is a variation on McEliece’s method that has radically smaller keys, but it’s only been around for six years. In short, the dust hasn’t settled regarding post-quantum encryption methods.

Related posts

[1] People are naturally suspicious of algorithm recommendations coming from the NSA. Wouldn’t the agency like for everyone to use encryption methods that it could break? Of course. But the agency also wants US companies and government agencies to use encryption methods that foreign agencies cannot break.

There’s little downside to using established methods with longer keys. However, key length may not be the weakest link. If you’re vulnerable to timing attacks, for example, doubling your key length may create a false sense of security.

Strong primes

There are a couple different definitions of a strong prime. In number theory, a strong prime is one that is closer to the next prime than to the previous prime. For example, 11 is a strong prime because it is closer to 13 than to 7.

In cryptography, strong primes are, roughly speaking, primes whose products are likely to be hard to factor. More specifically, though still not too specific, p is a strong prime if

  1. p is large
  2. p – 1 has a large prime factor q
  3. q – 1 has a large prime factor r
  4. p + 1 has a large prime factor s

The meaning of “large” is not precise, and varies over time. In (1), large means large enough that it is suitable for use in cryptography, such as in RSA encryption. This standard increases over time due to increases in computational power and improvements in factoring technology. The meaning of “large” in (2), (3), and (4) is not precise, but makes sense in relative terms. For example in (2), the smaller the ratio (p – 1)/q the better.

Relation between the definitions

The Wikipedia article on strong primes makes the following claim without any details:

A computationally large safe prime is likely to be a cryptographically strong prime.

I don’t know whether this has been proven, or even if it’s true, but I’d like to explore it empirically. (Update: see the section on safe primes below. I misread “safe” above as “strong.” Just as well: it led to an interesting investigation.)

We’ll need some way to quantify whether a prime is strong in the cryptographic sense. This has probably been done before, but for my purposes I’ll use the sum of the logarithms of q, r, and s. We should look at these relative to the size of p, but all the p’s I generate will be roughly the same size.

Python code

I’ll generate 100-bit primes just so my script will run quickly. These primes are too small for use in practice, but hopefully the results here will be representative of larger primes.

    from sympy import nextprime, prevprime, factorint, randprime
    import numpy as np

    # largest prime factor
    def lpf(n):
        return max(factorint(n).keys())

    def log2(n):
        return np.log2(float(n))

    num_samples = 100
    data = np.zeros((num_samples, 5))

    bitsize = 100

    for i in range(num_samples):
        p = randprime(2**bitsize, 2**(bitsize+1))
        # strong in the number theory sense: closer to the next prime than the previous
        data[i,0] = 2*p > nextprime(p) + prevprime(p)
        q = lpf(p-1)
        r = lpf(q-1)
        s = lpf(p+1)
        data[i,1] = log2(q)
        data[i,2] = log2(r)
        data[i,3] = log2(s)
        data[i,4] = log2(q*r*s)

The columns of our matrix correspond to whether the prime is strong in the number theory sense, the number of bits in q, r, and s, and the total bits in the three numbers. (Technically the log base 2 rather than the number of bits.)

Results

There were 75 strong primes and 25 non-strong primes. Here were the averages:

    |-----+--------+------------|
    |     | strong | not strong |
    |-----+--------+------------|
    | q   |   63.6 |       58.8 |
    | r   |   41.2 |       37.0 |
    | s   |   66.3 |       64.3 |
    | sum |  171.0 |      160.1 |
    |-----+--------+------------|

The numbers are consistently higher for strong primes. However, the differences are small relative to the standard deviations of the values. Here are the standard deviations:

    |-----+--------+------------|
    |     | strong | not strong |
    |-----+--------+------------|
    | q   |   20.7 |       15.6 |
    | r   |   19.8 |       12.3 |
    | s   |   18.7 |       19.9 |
    | sum |   30.8 |       41.9 |
    |-----+--------+------------|

Safe primes

I realized after publishing this post that the Wikipedia quote didn’t say what I thought it did. It said that safe primes are likely to be cryptographically strong primes. I misread that as strong primes. But the investigation above remains valid. It shows weak evidence that strong primes in the number theoretical sense are also strong primes in the cryptographic sense.

Note that safe does not imply strong; it only implies the second criterion in the definition of strong. Also, strong does not imply safe.

To test empirically whether safe primes are likely to be cryptographically strong, I modified my code to generate safe primes and compute the strength as before, the sum of the logs base 2 of q, r, and s. We should expect the strength to be larger since the largest prime factor of p – 1 will always be as large as possible, namely (p – 1)/2. But there’s no obvious reason why r or s should be large.
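Here’s a sketch of the kind of modification involved (not necessarily the exact code I ran): generate safe primes p = 2q + 1 and compute the same strength measure.

    from sympy import isprime, randprime, factorint
    import numpy as np

    def lpf(n):
        return max(factorint(n).keys())

    # generate a safe prime p = 2q + 1 with q prime
    def random_safe_prime(bitsize):
        while True:
            q = randprime(2**(bitsize-1), 2**bitsize)
            p = 2*q + 1
            if isprime(p):
                return p

    strengths = []
    for _ in range(100):
        p = random_safe_prime(100)
        q = (p - 1) // 2      # the largest prime factor of p - 1, by construction
        r = lpf(q - 1)
        s = lpf(p + 1)
        strengths.append(np.log2(float(q*r*s)))

    print(np.mean(strengths), np.std(strengths))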

For 100-bit safe primes, I got an average strength of 225.4 with standard deviation 22.8, much larger than in my first experiment, and with less variance.

Related posts

Goldilocks and the three multiplications

Illustration by Arthur Rackham, 1918. Public domain.

Mike Hamburg designed an elliptic curve for use in cryptography that he calls Ed448-Goldilocks. The prefix Ed refers to the fact that it’s an Edwards curve. The number 448 refers to the fact that the curve is over a prime field where the prime p has 448 bits. But why Goldilocks?

Golden primes and Goldilocks

The prime in this case is

p = 2⁴⁴⁸ – 2²²⁴ – 1,

which has the same form as the NIST primes. Hamburg says in his paper

I call this the “Goldilocks” prime because its form defines the golden ratio φ = 2²²⁴.

That sentence puzzled me. What does this have to do with the golden ratio? The connection is that Hamburg’s prime is of the form

φ² – φ – 1.

The roots of this polynomial are the golden ratio and its conjugate. But instead of looking for real numbers where the polynomial is zero, we’re looking for integers where the polynomial takes on a prime value. (See the followup post on golden ratio primes.)

The particular prime that Hamburg uses is the “Goldilocks” prime by analogy with the fairy tale: the middle term 2²²⁴ is just the right size. He explains

Because 224 = 32*7 = 28*8 = 56*4, this prime supports fast arithmetic in radix 2²⁸ or 2³² (on 32-bit machines) or 2⁵⁶ (on 64-bit machines). With 16, 28-bit limbs it works well on vector units such as NEON. Furthermore, radix-2⁶⁴ implementations are possible with greater efficiency than most of the NIST primes.

Karatsuba multiplication

The title of this post is “Goldilocks and the three multiplications.” Where do the three multiplications come in? It’s an allusion to an algorithm for multi-precision multiplication that lets you get by with three multiplications where the most direct approach would require four. The algorithm is called Karatsuba multiplication [1].

Hamburg says “The main advantage of a golden-ratio prime is fast Karatsuba multiplication” and that if we set φ = 2²²⁴ then

\begin{align*}
(a + b\phi)(c + d\phi) &= ac + (ad+bc)\phi + bd\phi^2 \\
&\equiv (ac+bd) + (ad+bc+bd)\phi \pmod{p} \\
&= (ac + bd) + ((a+b)(c+d) - ac)\phi
\end{align*}

Note that the first line on the right side involves four multiplications, but the bottom line involves three. Since the variables represent 224-bit numbers, removing a multiplication at the expense of an extra addition and subtraction is a net savings [2].

The most important line of the calculation above, and the only one that isn’t routine, is the second. That’s where the special form of p comes in. When you eliminate common terms from both sides, the calculation boils down to showing that

bd(\phi^2 - \phi - 1) \equiv 0 \pmod{p}

which is obviously true since p = φ² – φ – 1.
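Here’s a quick numerical check of the formula (a sketch, not an optimized 448-bit implementation):

    import random

    phi = 2**224
    p = phi**2 - phi - 1                    # the Goldilocks prime 2^448 - 2^224 - 1

    a, b, c, d = [random.randrange(phi) for _ in range(4)]

    direct = ((a + b*phi) * (c + d*phi)) % p

    # three multiplications: ac, bd, and (a+b)(c+d)
    ac = a*c
    bd = b*d
    cross = (a + b)*(c + d) - ac            # equals ad + bc + bd
    karatsuba = (ac + bd + cross*phi) % p

    assert direct == karatsuba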

Curve Ed448-Goldilocks

Edwards curves only have one free parameter d (besides the choice of field) since they have the form

x² + y² = 1 + d x² y².

Hamburg chose d = -39081 for reasons explained in the paper.

Most elliptic curves currently used in ECC work over 256-bit prime fields, providing 128 bits of security. The motivation for developing Ed448 was much the same as for developing P-384. Both work over larger fields and so provide more bits of security, 224 and 192 bits respectively.

Unlike P-384, Ed448 is a safe curve, meaning that it lends itself to a secure practical implementation.

Related posts

[1] Here we’ve just applied the Karatsuba algorithm one time. You could apply it recursively to multiply two n-bit numbers in O(nᵏ) time, where k = log₂ 3 ≈ 1.585. This algorithm, discovered in 1960, was the first multiplication algorithm faster than O(n²).
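A sketch of the recursion, for nonnegative Python integers (the cutoff of 2³² for the base case is arbitrary):

    def karatsuba(x, y):
        if x < 2**32 or y < 2**32:
            return x * y                     # small enough: use built-in multiplication
        m = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)  # split x = xh*2^m + xl
        yh, yl = y >> m, y & ((1 << m) - 1)
        a = karatsuba(xh, yh)
        b = karatsuba(xl, yl)
        c = karatsuba(xh + xl, yh + yl) - a - b   # equals xh*yl + xl*yh
        return (a << 2*m) + (c << m) + b

    assert karatsuba(3**500, 7**400) == 3**500 * 7**400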

[2] Addition and subtraction are O(n) operations. And what about multiplication? That’s not an easy question. It’s no worse than O(nᵏ) with k ≈ 1.585 by virtue of the Karatsuba algorithm. In fact, it’s O(n log n), but only for impractically large numbers. See the discussion here. But in any case, multiplication is slower than addition for multiprecision numbers.

Tricks for arithmetic modulo NIST primes

The US National Institute of Standards and Technology (NIST) originally recommended 15 elliptic curves for use in elliptic curve cryptography [1]. Ten of these are over a field of size 2ⁿ. The other five are over prime fields. The sizes of these fields are known as the NIST primes.

The NIST curves over prime fields are named after the number of bits in the prime: the name is “P-” followed by the number of bits. The primes themselves are named p with a subscript for the number of bits.

The five NIST primes are

p₁₉₂ = 2¹⁹² – 2⁶⁴ – 1
p₂₂₄ = 2²²⁴ – 2⁹⁶ + 1
p₂₅₆ = 2²⁵⁶ – 2²²⁴ + 2¹⁹² + 2⁹⁶ – 1
p₃₈₄ = 2³⁸⁴ – 2¹²⁸ – 2⁹⁶ + 2³² – 1
p₅₂₁ = 2⁵²¹ – 1

The largest of these, p₅₂₁, is a Mersenne prime, and the rest are generalized Mersenne primes.

Except for p₅₂₁, the exponents of 2 in the definitions of the NIST primes are all multiples of 32 or 64. This leads to efficient tricks for arithmetic modulo these primes carried out with 32-bit or 64-bit integers. You can find pseudocode implementations for these tricks in Mathematical routines for the NIST prime elliptic curves.
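As a simple illustration of the flavor of these tricks (this one is not from the NIST routines document, just a sketch), here is reduction modulo the Mersenne prime p₅₂₁. Because 2⁵²¹ ≡ 1 (mod p₅₂₁), writing x = a·2⁵²¹ + b gives x ≡ a + b, so a shift, a mask, an addition, and at most one subtraction reduce any x < p².

    import random

    p = 2**521 - 1

    def reduce_mod_p521(x):
        # assumes 0 <= x < p*p, e.g. the product of two already-reduced values
        x = (x >> 521) + (x & p)   # fold the high bits down
        if x >= p:                 # at most one subtraction is needed
            x -= p
        return x

    x = random.randrange(p*p)
    assert reduce_mod_p521(x) == x % p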

The elliptic curve Ed448 “Goldilocks” was not part of the original set of recommended curves from NIST but has been added. It employs a multiplication trick in the same spirit as the routines referenced above, but simpler. Ed448 uses

p = 2⁴⁴⁸ – 2²²⁴ – 1

which has the special form φ² – φ – 1 where φ = 2²²⁴. This enables a trick known as Karatsuba multiplication. More on that here.

Related posts

[1] FIPS PUB 186-4. This publication is dated 2013, but the curve definitions are older. I haven’t found for certain when the curves were defined. I’ve seen one source that says 1997 and another that says 1999.

Elliptic curve P-384

The various elliptic curves used in elliptic curve cryptography (ECC) have different properties, and we’ve looked at several of them before. For example, Curve25519 is implemented very efficiently, and the parameters were transparently chosen. Curve1174 is interesting because it’s an Edwards curve and has a special addition formula.

This post looks at curve P-384. What’s special about this curve? It’s the elliptic curve that the NSA recommends everyone use until post-quantum methods have been standardized. It provides 192 bits of security, whereas more commonly used curves provide 128 bits.

Does the NSA recommend this method because they know how to get around it? Possibly, but they also need to recommend methods that they believe foreign governments cannot break.

The equation of the P-384 curve is

y² = x³ + ax + b

working over the field of integers modulo a prime p. We will go into each of the specific parameters a, b, and p, and discuss how they were chosen.

Modulus p

Consistent with the naming conventions for elliptic curves used in cryptography, the name “P-384” tells you that the curve is over a prime field where the prime is a 384-bit integer. Specifically, the order of the field is

p = 2³⁸⁴ – 2¹²⁸ – 2⁹⁶ + 2³² – 1

For a given number of bits, in this case 384, you want to pick a prime that’s relatively near the maximum size for that number of bits. In our case, our prime p is a prime near 2³⁸⁴ with a convenient bit pattern. (The special pattern allows implementation tricks that increase efficiency.)
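You can see the convenient pattern by printing p in hexadecimal. A quick sanity check in Python (mine, not part of the standard):

    from sympy import isprime

    p = 2**384 - 2**128 - 2**96 + 2**32 - 1
    assert isprime(p)
    print(hex(p))
    # ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff fffffffe
    # ffffffff 00000000 00000000 ffffffff   (spaces added for readability)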

Hasse’s theorem says that the number of points on a curve modulo a large prime is close to the prime itself, so P-384 contains approximately 2³⁸⁴ points. In fact, the number of points n on the curve is

39402006196394479212279040100143613805079739270465446667946905279627659399113263569398956308152294913554433653942643

or approximately 2³⁸⁴ – 2¹⁹⁰. The number n is prime, and it is the order of P-384 as a group.

Linear coefficient a

According to a footnote in the standard defining P-384, FIPS PUB 186-4,

The selection a ≡ -3 for the coefficient of x was made for reasons of efficiency; see IEEE Std 1363-2000.

All the NIST elliptic curves over prime fields use a = -3 because this makes it possible to use special algorithms for elliptic curve arithmetic.

Constant coefficient b

The curve P-384 has Weierstrass form

y² = x³ – 3x + b

where b is

27580193559959705877849011840389048093056905856361568521428707301988689241309860865136260764883745107765439761230575.

The parameter b is between 2³⁸³ and 2³⁸⁴ but doesn’t have any particular binary pattern:

101100110011000100101111101001111110001000111110111001111110010010011000100011100000010101101011111000111111100000101101000110010001100000011101100111000110111011111110100000010100000100010010000000110001010000001000100011110101000000010011100001110101101011000110010101100011100110001101100010100010111011010001100111010010101010000101110010001110110111010011111011000010101011101111

The specification says that b was chosen at random. How can you convince someone that you chose a parameter at random?

The standard gives a 160-bit seed s, and a hash-based algorithm that s was run through to create a 384-bit parameter c. Then b is the solution to

b²c ≡ -27 (mod p).

The algorithm going from s to c is given in Appendix D.6 and is a sort of key-stretching algorithm. The standard cites ANS X9.62 and IEEE Standard 1363-2000 as the source of the algorithm.

If b was designed to have a back door, presumably a tremendous amount of computation had to go into reverse engineering the seed s.

Koblitz and Menezes wrote a paper in which they suggest a way that the NSA might have picked seeds that lead to weak elliptic curves, but then argue against it.

It is far-fetched to speculate that the NSA would have deliberately selected weak elliptic curves in 1997 for U.S. government usage … confident that no one else would be able to discover the weakness in these curves in the ensuing decades. Such a risky move by the NSA would have been inconsistent with the Agency’s mission.

Related posts

Isogeny-based encryption

If and when large quantum computers become practical, all currently widely deployed methods for public key cryptography will break. Even the most optimistic proponents of quantum computing believe such computers are years away, maybe decades. But it also takes years, maybe decades, to develop, test, and deploy new encryption methods, and so researchers are working now to have quantum-resistant encryption methods in place by the time they are needed.

What’s special about isogeny-based encryption?

One class of quantum-resistant encryption methods is isogeny-based encryption. This class stands out for at least a couple of reasons:

  • it uses the shortest keys, and
  • it uses the most sophisticated math.

Most post-quantum encryption schemes require much longer keys to maintain current levels of protection, two or three orders of magnitude longer. Isogeny-based encryption uses the shortest keys of any proposed post-quantum encryption methods, requiring keys roughly the same size as are currently in use.

The mathematics behind isogeny-based cryptography is deep. Even a high-level description requires quite a bit of background. I’ll take a shot at exploring the prerequisites starting with this blog post.

Elliptic curves

Elliptic curve cryptography is widely used today, and partly for one of the reasons listed above: short keys. To achieve a level of security comparable to 128-bit AES, you need a 256-bit key using elliptic curve cryptography, but a 3072-bit key using RSA.

Quantum computers could solve the elliptic curve discrete logarithm problem efficiently, and so elliptic curve cryptography as currently practiced is not quantum resistant. Isogeny-based encryption is based on elliptic curves, but not as directly as current ECC methods. While current ECC methods perform computations on elliptic curves, isogeny methods are based on networks of functions between elliptic curves.

SIKE

NIST is sponsoring a competition for post-quantum encryption methods, and only one of the contestants is related to elliptic curves, and that’s SIKE. The name stands for Supersingular Isogeny Key Encapsulation. “Supersingular” describes a class of elliptic curves, and SIKE is based on isogenies between these curves.

Future posts

This post raises a lot of questions. First and foremost, what is an isogeny? That’s the next post. And what are “supersingular” elliptic curves? I hope to go over that in a future post. Then after exploring the building blocks, where does encryption come in?

Past posts

I’ve written several related blog posts leading up to this topic from two directions: post-quantum encryption and elliptic curves.

Post-quantum encryption links

Elliptic curve links

Mixing error-correcting codes and cryptography

Secret codes and error-correcting codes have nothing to do with each other. Except when they do!

Error-correcting codes

Error-correcting codes make digital communication possible. Without some way to detect and correct errors, the corruption of a single bit could wreak havoc. A simple example of an error-detecting code is a checksum. A more sophisticated example would be erasure codes, a method used by data centers to protect customer data against hard drive failures or even entire data centers going offline.

People who work in coding theory are quick to point out that they do not work in cryptography. “No, not that kind of code. Error-correcting codes, not secret codes.” The goal isn’t secrecy. The goal is to maximize the probability of correctly transmitting data while minimizing the amount of extra information added.

Codes and ciphers

You don’t hear the word “code” used in connection with cryptography much anymore. People used to refer to “codes and ciphers” in one breath. Historically, the technical distinction was that a code operated on words, while a cipher operated on characters. Codes in this sense have long been obsolete, but people still speak of codes colloquially.

David Kahn’s classic book on pre-modern cryptography is entitled The Codebreakers, not the Cipherbreakers, because the public at the time was more familiar with the term code than the term cipher. Maybe that’s still the case because, for example, Jason Fagone entitled his biography of Elizabeth Friedman The Woman Who Smashed Codes. Perhaps the author suggested The Woman Who Smashed Ciphers and an editor objected.

Code-based cryptography

If you’re accustomed to the older use of “codes,” the term “code-based cryptography” is redundant. But it means something specific in modern usage: cryptographic systems that incorporate error-correction codes. So error-correcting codes and secret “codes” do have something to do with each other after all!

Robert McEliece had this idea back in 1978. His encryption method starts with a particular error-correcting code, a binary Goppa code, and scrambles it with an invertible linear transformation. At a very high level, McEliece’s method boils down to a secret factorization, sorta like RSA but even more like oil and vinegar. The public key is the product of the Goppa code and the linear transformation, but only the owner knows the factorization of this key.

To encrypt a message with McEliece’s method, the sender adds a specific amount of random noise, noise that the Goppa code can remove. An attacker faces a challenging computational problem to recover the message without knowing how to factor the public key.
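Here’s a toy illustration of that structure (emphatically not real McEliece, which uses large binary Goppa codes; this uses the tiny Hamming(7,4) code, which can correct a single flipped bit, and all the sizes and matrices are made up for illustration):

    import numpy as np
    from sympy import Matrix

    # Generator and parity-check matrices of the Hamming(7,4) code
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    rng = np.random.default_rng(seed=1)

    # Private key: an invertible scrambling matrix S and a permutation P
    while True:
        S = rng.integers(0, 2, (4, 4))
        if Matrix(S.tolist()).det() % 2 == 1:    # invertible over GF(2)
            break
    P = np.eye(7, dtype=int)[rng.permutation(7)]

    # Public key: the scrambled generator matrix
    G_pub = S @ G @ P % 2

    def encrypt(m):
        e = np.zeros(7, dtype=int)
        e[rng.integers(7)] = 1                   # one bit error, which the code can correct
        return (m @ G_pub + e) % 2

    def decrypt(c):
        c = c @ P.T % 2                          # undo the permutation
        syndrome = H @ c % 2                     # locate and flip the single bad bit
        for j in range(7):
            if np.array_equal(syndrome, H[:, j]):
                c[j] ^= 1
        S_inv = np.array(Matrix(S.tolist()).inv_mod(2).tolist(), dtype=int)
        return c[:4] @ S_inv % 2                 # systematic code: first 4 bits are m*S

    m = np.array([1, 0, 1, 1])
    assert np.array_equal(decrypt(encrypt(m)), m)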

Post-quantum cryptography

McEliece’s method did not attract much interest at the time because it requires much larger public keys than other methods, such as RSA. However, there is renewed interest in McEliece’s approach because his scheme is apparently quantum-resistant whereas RSA and other popular public key systems are not.

If and when large quantum computers become practical, they could factor the product of large primes efficiently, and thus break RSA. They could also solve the discrete logarithm and elliptic curve discrete logarithm problems, breaking Diffie-Hellman and elliptic curve cryptosystems. All public key cryptosystems now in common use would be broken.

Why worry about this now while quantum computers don’t exist? (They exist, but only as prototypes. So far the largest number a quantum computer has been able to factor is 21.) The reason is that it takes a long time to develop, analyze, standardize, and deploy encryption methods. There’s also the matter of forward security: someone could store encrypted messages with the hope of decrypting them in the future. This doesn’t matter for cat photos transmitted over TLS, but it could matter for state secrets; governments may be encrypting documents that they wish to keep secret for decades.

NIST is sponsoring a competition to develop and standardize quantum-resistant encryption methods. Two months ago NIST announced the candidates that advanced to the second round. Seven of these methods use code-based cryptography, including the classic McEliece method and six variations: BIKE, HQC, LEDAcrypt, NTS-KEM, ROLLO, and RQC.

Related posts

Digital signatures with oil and vinegar

“Unbalanced oil and vinegar” is a colorful name for a cryptographic signature method. This post will give a high-level description of the method and explain where the name comes from.

The RSA encryption algorithm depends on the fact that computers can easily multiply enormous numbers, but they cannot efficiently factor the product of two enormous primes. Whenever you have something that’s easy to do but hard to undo, you might be able to make an encryption algorithm out of it.

The unbalanced oil and vinegar (UOV) digital signature algorithm is analogous to RSA in that it also depends on the difficulty of a kind of factoring, but what gets factored is the composition of a linear and a nonlinear operator rather than a product of prime numbers. One advantage of UOV over RSA is that UOV is quantum-resistant. That is, if large quantum computers become practical, UOV signatures will remain hard to forge (or so it is currently believed) whereas RSA signatures would be easy to forge.

Solving large systems of multivariate polynomial equations over finite fields is hard, provably NP-hard, unless there’s some special structure that makes things easier. Several proposed post-quantum digital signature algorithms are based on this, such as the LUOV variant on UOV.

The idea behind UOV is to create systems of equations that have a special structure, with some “oil” variables and some “vinegar” variables, so named because they do not mix, or rather mix in a very simple, convenient way. This special structure is kept secret, and is obscured by composition with an invertible linear operator. This operator acts like a blender, thoroughly mixing the oil and vinegar. The term “unbalanced” refers to the fact that the scheme is more secure if you do not have equal numbers of “oil” and “vinegar” variables.

Polynomials over finite fields. Polynomials over finite fields everywhere!

Someone wanting to sign a file with the UOV algorithm knows the oil-and-vinegar structure and produces a vector that is mapped to a specified value, inverting the composition of the linear operator and the polynomials. They can do this because they know the factorization into this special structure. Someone wanting to verify a UOV signature only knows the (apparently unstructured) composition. They just see a large system of multivariate polynomial equations. They can stick a signature in and verify that the output is what it’s supposed to be, but they couldn’t produce a signature because they can’t invert the system. [1]
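A small sketch of the key structural point (with made-up coefficients, and over the rationals rather than a finite field to keep it short): because there are no oil×oil terms, fixing the vinegar variables leaves a system that is merely linear in the oil variables, which the signer can solve.

    from sympy import symbols, Eq, solve

    o1, o2, v1, v2 = symbols("o1 o2 v1 v2")

    # central-map polynomials: quadratic, but with no oil*oil terms
    f1 = 3*o1*v1 + 5*o2*v2 + v1*v2 + 2*o1 + v2
    f2 = 7*o1*v2 + o2*v1 + 4*v1**2 + o2 + 1

    # to hit a target value, pick vinegar values at random ...
    f1_fixed = f1.subs({v1: 9, v2: 4})
    f2_fixed = f2.subs({v1: 9, v2: 4})

    # ... and solve the remaining *linear* system for the oil variables
    print(solve([Eq(f1_fixed, 10), Eq(f2_fixed, 20)], [o1, o2]))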

How large do these systems of polynomials need to be? On the order of a hundred equations and variables, though with more variables than polynomials. Not that large compared to linear systems, where one can efficiently solve systems with millions of equations and variables. And the polynomials are only quadratic. So in one sense the systems are small. But it takes hundreds of kilobytes [2] to describe such systems, which makes the public keys for UOV large relative to currently popular digital signature algorithms such as ECDSA. The signatures produced by UOV are small, but the public keys are large.

Related posts

[1] The system is not invertible in the sense of being one-to-one because it’s underdetermined. By inverting the system we mean producing any input that maps to the desired output. This solution is not generally unique.

[2] Representing m quadratic polynomials in n variables over a field whose elements take b bits requires about bmn²/2 bits. So 80 quadratic polynomials in 120 variables over GF(2⁸) would require 8 × 80 × 120²/2 = 4,608,000 bits = 576 kilobytes. The LUOV variation on UOV mentioned above reduces the key sizes quite a bit, but it still requires larger public keys than ECDSA.