Using PractRand to test an RNG

Yesterday I wrote about my experience using NIST STS to test an entropy extractor, a filtering procedure that produces unbiased bits from biased sources. This post will look at testing the same entropy extractor using the Practically Random (PractRand) test suite. The results were much worse this time, which speaks to the limitations of both the entropy extractor and the STS test suite.

Difficulties with PractRand

I built PractRand on Linux with no difficulty. There were multiple warnings but no errors. But when I piped a file of bits into RNG_test I got a message saying “error reading standard input”.

A couple years ago I ran PractRand using my Mac with no problems (results here) so I switched over to the machine I’d used before. The latest version of PractRand (pre95) produces compile errors, but the version I’d used before (93) compiled successfully.

Piping bits to RNG_test from the command line worked when I used the program RNG_output included with PractRand, but piping my own file of bits to the test suite failed with the same error I saw on Linux. I applied a patch posted by Daniel Lemire and was able to pipe my file to the test program. I went back to Linux to see if I could get version 93 to work there, but no luck.

Testing bits from a file is unusual; it’s faster to generate random numbers and send them directly to the test without going to disk and back, and so naturally that’s what people will do more often. But I wanted to use bits from a file because I need to do that on another project.

Test results

The entropy extractor failed 47 out of 151 tests using 128 MB of data. Five other tests reported anomalies ranging from “mildly suspicious” to “very suspicious.” The tests that failed had p-values of 10^-16 and smaller. Most of the p-values were less than can be represented by a 64-bit floating point number. PractRand must use extended precision to report just how infinitesimally small some of the p-values are.

To make sure the test failures weren’t due to some anomaly with piping the file into RNG_test, I repeated my procedure using output from Melissa O’Neill’s PCG generator. All tests passed with no anomalies.

First do no harm

What if you run the entropy extractor on a high-quality source of random bits? When I reran the experiment on three streams from PCG, the result passed all PractRand tests without any anomalies. So while the extractor can’t rescue badly biased streams, at a minimum it does no harm: it doesn’t degrade streams that are already good.

More on random number generation

Testing entropy extractor with NIST STS

Around this time last year I wrote about the entropy extractor used in μRNG. It takes three biased random bit streams and returns an unbiased bit stream, provided each input stream has at least 1/3 of a bit of min-entropy per bit.

I’ve had in the back of my mind that I should go back and run the output of the extractor through a standard test suite. I’ve been doing more RNG testing lately, and so while the software is fresh on my mind I wanted to go back and test the entropy extractor.

To create a biased bit stream, I first created an unbiased bit stream using the PCG random number generator, then took the bitwise OR of consecutive samples. In C notation, I created two 32-bit unsigned integers u and v and used u|v. The resulting bits are 1’s three out of four times since a bit is 0 only if both corresponding bits are 0’s. The min-entropy of the resulting stream is

-log2 max(0.25, 0.75) = 0.415

which is larger than 1/3, so we should be able to apply the μRNG entropy extractor. To do this, we create three bit streams. Let a, b, and c be 8-bit bytes, each from a different stream. Then we combine these into

a×b + c

and use that as one byte of output. So it takes three bytes of input to produce one byte of output, which is to be expected since we’re starting with sources that may contain only 1/3 of a bit of min-entropy per input bit.

The multiplication and addition above are carried out in the Galois field GF(2^8). This means that multiplication may be like nothing you expect, and addition is XOR, i.e. bitwise exclusive OR. The multiplication is the same as that used in AES encryption.
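
As a quick illustration, here’s a sketch of this arithmetic using the pyfinite library (the same library used in the μRNG implementation later on this page). The byte values are arbitrary examples, and pyfinite’s default generator polynomial may differ from the one AES uses.

    from pyfinite import ffield

    F = ffield.FField(8)        # GF(2^8), pyfinite's default generator polynomial

    a, b, c = 0x57, 0x83, 0x1b  # arbitrary example bytes

    # Addition in GF(2^8) is XOR, regardless of the generator polynomial.
    assert F.Add(a, c) == a ^ c

    # Multiplication is polynomial multiplication mod the generator polynomial,
    # not integer multiplication.
    print(F.Multiply(a, b), a * b)  # two very different numbers

    # One byte of extractor output from input bytes a, b, c:
    print(F.Add(F.Multiply(a, b), c))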

NIST Statistical Test Suite

There are several test suites we could use—DIEHARDER, PractRand, TestU01, etc.—and I expect I’ll write more about those before long, but for this post we’ll focus on the NIST Statistical Test Suite or STS. (Update: the extractor fails hard on PractRand.) The STS suite includes the following 15 tests.

  • Frequency (monobit)
  • Frequency test within a block
  • Runs test
  • Test for the longest run of ones in a block
  • Binary matrix rank test
  • Discrete Fourier Transform (spectral) test
  • Non-overlapping template matching test
  • Overlapping template matching test
  • Maurer’s universal statistical test
  • Linear complexity test
  • Serial test
  • Approximate entropy test
  • Cumulative sums (cumsum) test
  • Random excursions test
  • Random excursions variant test

See NIST Special Publication 800-22: A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications.

When we run the output of PCG alone through STS, it passes with flying colors. If we run the output of PCG after OR’ing pairs of unsigned integers together, the test fails spectacularly, spewing underflow warnings. But if we take three streams of biased bits that would each fail and keep the extracted output, it passes all the tests, just as with the original output of PCG.

In each case I tested one million 32-bit unsigned integers. In the biased case I sampled three million integers, one million for each of the three input streams.

Note that in our example we know the amount of bias because we deliberately created the bias. You might not know the bias, or equivalently the min-entropy, in general. As long as the min-entropy is greater than 1/3, the entropy extractor works.

More on RNG testing

Detecting a short period in an RNG

The last couple posts have been looking at the Cliff random number generator. I introduce the generator here and look at its fixed points. These turn out to be less of a problem in practice than in theory.

Yesterday I posted about testing the generator with the DIEHARDER test suite, the successor to George Marsaglia’s DIEHARD battery of RNG tests. I don’t think anyone is using the Cliff RNG for serious work, but people are definitely using DIEHARDER.

The Cliff RNG has a short period, and yet many of the DIEHARDER tests passed. However, several of the tests failed as well, and perhaps these tests failed due to the short period, in particular rgb_lagged_sum. But at least some tests can pass despite a short period.

Finding the period

Since the Cliff RNG maintains internal state as a floating point number and outputs integers, the period is a bit subtle.

The state of the Cliff RNG is a floating point number between 0 and 1, and so it has 2^53 possible values. (See Anatomy of a floating point number.) That means the maximum possible period is 2^53. We use the internal state x to create 32-bit integers n by multiplying x by 2^32 and truncating to an integer value. The period could conceivably be larger than 2^32 because an output value n could repeat but correspond to two different x’s. The actual period, at least in my experiment, was between 2^22 and 2^23.

I seeded the Cliff RNG with π – 3 (why?) and found that starting with index i = 3,075,302, the output values repeat with period p = 6,705,743. So there was a burn-in period before the state entered a cycle, but it would repeat that cycle forever. Not only are the output values repeating, the state values x repeat. (Otherwise it would be possible for the integer outputs to cycle for a while then break out.)

It’s possible that starting with other seeds, the generator has other cycle lengths, longer or shorter. But I stumbled across one cycle of period 6,705,743.
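
Here’s a minimal sketch of one way to find the burn-in and the period: iterate the generator, record the index at which each state first appears, and stop at the first repeated state. (The function name and indexing convention are mine, and a dictionary this large takes a gigabyte or more of memory; Brent’s cycle-finding algorithm would use constant memory instead.)

    from math import log, pi

    def find_cycle(x, max_iter=2**24):
        "Return (burn-in length, period) of the Cliff RNG started at x."
        seen = {}                        # state -> index of first occurrence
        for i in range(max_iter):
            if x in seen:
                return seen[x], i - seen[x]
            seen[x] = i
            x = abs(-100*log(x)) % 1
        return None

    print(find_cycle(pi - 3))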

Testing RNGs

I wrote a chapter for O’Reilly’s book Beautiful Testing in which I discuss How to test a random number generator. More specifically, how to test a non-uniform random number generator. That is, starting with a trusted uniform random number generator, transform the values to have some other probability distribution. This is the most common scenario. Few developers write their own core RNG, but it’s fairly common to have to use a core RNG to create a custom distribution.

If you do want to test a uniform random number generator, as I do in this post, there are test suites like DIEHARDER. This is one of the oldest and best known test suites. There are newer and more rigorous suites, like TestU01 that I blog about here. There’s also the NIST statistical test suite that I write about in the same post.

These tests are challenging to build and use. I run these tests for clients and help them interpret the results; if you’d like me to do that for you, let’s talk.

Testing Cliff RNG with DIEHARDER

My previous post introduced the Cliff random number generator. The post showed how to find starting seeds where the generator will start out by producing approximately equal numbers. Despite this flaw, the generator works well by some criteria.

I produced a file of a billion 32-bit integers by multiplying the output values, which were floating point numbers between 0 and 1, by 2^32 and truncating to integer. Then I ran the DIEHARDER random number generator test suite.
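
Here’s a sketch of how such a file can be produced. The seed and the file name are my own choices; the update rule and the scaling match the previous post.

    import struct
    from math import log

    x = 0.1                          # arbitrary seed in (0, 1)
    with open("cliff.bin", "wb") as f:
        for _ in range(10**6):       # a billion in the actual experiment
            x = abs(-100*log(x)) % 1
            n = int(x * 2**32)       # truncate to a 32-bit integer
            f.write(struct.pack("<I", n))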

The results were interesting. Before running the tests, I thought the tests would nearly all pass or nearly all fail, more likely the latter. But what happened was that many tests passed and some failed hard [1].

Here’s a list of the tests that passed:

  • diehard_birthdays
  • diehard_rank_32x32
  • diehard_rank_6x8
  • diehard_bitstream
  • diehard_oqso
  • diehard_dna
  • diehard_count_1s_str
  • diehard_count_1s_byt
  • diehard_runs
  • sts_monobit
  • sts_serial
  • rgb_bitdist
  • rgb_kstest_test
  • dab_dct
  • dab_filltree2
  • dab_monobit2

The tests that failed were:

  • diehard_parking_lot
  • diehard_2sphere
  • diehard_3sphere
  • diehard_squeeze
  • diehard_craps
  • marsaglia_tsang_gcd
  • rgb_lagged_sum
  • dab_bytedistrib

I’ve left out a few test results that were ambiguous as well as tests that were described as “Suspect” and “Do not use” on the DIEHARDER website.

The site I mentioned in the previous post where I ran across this generator said that it passed a spherical generation test. I assume the implementation of that test was less demanding than the version included in DIEHARDER. But the generator does well by other tests.

The lagged sum test looks for autocorrelation, and there’s a good reason it failed: the generator has a short period, as I discuss here, and so the output has perfect autocorrelation at a lag equal to the period.

This generator demonstrates how passing a few RNG tests can be misleading. Or to look at it a different way, it shows how a generator can be useful for some tasks and terrible for other tasks.

More on testing RNGs

[1] By “failed hard” I mean the test returned a p-value of zero. The p-value couldn’t actually be zero, but it was close enough that the displayed value was exactly zero.

Fixed points of the Cliff random number generator

I ran across the Cliff random number generator yesterday. Given a starting value x_0 in the open interval (0, 1), the generator proceeds by

x_{n+1} = |100 log(x_n) mod 1|

for n ≥ 0. The article linked to above says that this generator passes a test of randomness based on generating points on a sphere.

Real numbers

While the long term distribution of the generator may be good, it has a problem with its sequential behavior, namely that it has an infinite number of fixed points. If the generator ever reaches one of these points, it gets stuck forever.

Here’s a proof. Since x is between 0 and 1, log(x) is always negative. So we can replace the absolute value above with a negative. A fixed point of the generator is a solution to

x = -100 log(x) - k

for some integer k. Define

f(x, k) = -100 log(x) - x - k.

For each non-negative k, the limit of f(x, k) as x goes to 0 is ∞ and the limit as x goes to 1 is negative, so somewhere in between it is zero.

Floating point numbers

So in exact operations over the reals, there is a fixed point for each non-negative integer k. However, when implemented in finite precision arithmetic, it manages to get itself unstuck as the following Python code shows with k arbitrarily chosen to be 20.

    from numpy import log
    from scipy.optimize import bisect

    # Find the fixed point for k = 20, then iterate the generator from there.
    r = bisect(lambda x: -100*log(x) - x - 20, 0.4, 0.999)
    print(r)
    for i in range(10):
        r = abs(-100*log(r)) % 1
        print(r)

This produces

    0.8121086949937715
    0.8121086949252678
    0.8121087033605505
    0.8121076646716787
    0.8122355649800639
    0.7964876237426814
    0.7543688054472710
    0.1873898667258977
    0.4563983607272064
    0.4389252746489802
    0.3426097596058071

In infinite precision, r above would be a fixed point. In floating point precision, r is not. But it does start out nearly fixed. The first iteration only changes r in the 11th decimal place, and it doesn’t move far for the next few iterations.

Update: See the next post for how the random number generator does on the DIEHARDER test suite.

Plotting fixed points

The kth fixed point is the solution to f(x, k) = 0. The following code plots the fixed points as a function of k.

    from numpy import arange, log
    from scipy.optimize import bisect
    import matplotlib.pyplot as plt

    t = arange(300)
    y = [bisect(
            lambda x: -100*log(x) - x - k,
            0.001,
            0.999)
        for k in t]

    plt.plot(t, y)
    plt.xlabel("k")
    plt.ylabel("fixed point")
    plt.show()

fixed points of Cliff random number generator

The fixed points cluster toward zero, or they would in infinite precision arithmetic. As we showed above, the Cliff random number generator performs better in practice than in theory. Maybe the generator does well after wandering close to zero, but I wouldn’t be surprised if it has a bias toward the low end of the interval.

More RNG posts

Using one RNG to sample another

Suppose you have two pseudorandom bit generators. They’re both fast, but not suitable for cryptographic use. How might you combine them into one generator that is suitable for cryptography? In more technical terms, how can you combine two PRNGS to create a CSPRNG?

Coppersmith et al [1] had a simple but effective approach which they call the shrinking generator, also called irregular decimation. The idea is to use one bit stream to sample the other. Suppose the two bit streams are a_i and b_i. If a_i = 1, then output b_i. Otherwise, throw it away. In NumPy or R notation, this is simply b[a > 0].

Examples in Python and R

For example, in Python

    >>> import numpy as np
    >>> a = np.array([1,0,1,1,0,0,0,1])
    >>> b = np.array([1,0,1,0,0,1,1,0])
    >>> b[a > 0]
    array([1, 1, 0, 0])

Here’s the same example in R.

    > a = c(1,0,1,1,0,0,0,1)
    > b = c(1,0,1,0,0,1,1,0)
    > b[a>0]
    [1] 1 1 0 0

Linear Feedback Shift Registers

Coppersmith and colleagues were concerned specifically with linear feedback shift register (LFSR) streams. These are efficient sources of pseudorandom bits because they lend themselves to hardware implementation or low-level software implementation. But the problem is that linear feedback shift registers are linear. They have an algebraic structure that enables simple cryptographic attacks. But the procedure above is nonlinear and so less vulnerable to attack.
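
Here’s a toy software sketch of the idea: two small Galois LFSRs, one shrinking the other. The tap mask 0xB400 is a standard maximal-length choice for 16 bits; the seeds are arbitrary nonzero values.

    from itertools import islice

    def lfsr_bits(state, taps=0xB400):
        "16-bit Galois LFSR; yields one pseudorandom bit per step."
        while True:
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= taps
            yield lsb

    def shrink(a_bits, b_bits):
        "Irregular decimation: output the bits of b where the a bit is 1."
        for a, b in zip(a_bits, b_bits):
            if a:
                yield b

    a, b = lfsr_bits(0xACE1), lfsr_bits(0x1234)
    print(list(islice(shrink(a, b), 16)))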

A potential problem is that the shrinking generator outputs bits at an irregular rate, and a timing attack might reveal something about the sampling sequence a unless some sort of buffering conceals this.

Other stream sources

While the shrinking generator was developed in the context of LFSRs, it seems like it could be used to combine any two PRNGs in hopes that the combination is better than the components. At a minimum, it doesn’t seem it should make things worse [2].

There is a problem of efficiency, however, because on average the shrinking generator has to generate 4n bits to output n bits: each output bit consumes about two bits from a (since only half the a bits are 1s) and the corresponding two bits from b. For very efficient generators like LFSRs this isn’t a problem, but it could be a problem for other generators.

Self-shrinking generators

A variation on the shrinking generator is the self-shrinking generator. The idea is to use half the bits of a stream to sample the other half. Specifically, look at pairs of bits, and if the first bit is a 1, return the second bit. If the first bit is a 0, return nothing.
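
In NumPy notation, a sketch of the self-shrinking rule: pair up the bits, then keep the second bit of each pair whose first bit is 1.

    import numpy as np

    bits = np.array([1,0, 0,1, 1,1, 0,0, 1,1])
    pairs = bits.reshape(-1, 2)        # one row per pair of bits
    print(pairs[pairs[:, 0] == 1, 1])  # array([0, 1, 1])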

Use in stream ciphers

The eSTREAM competition for cryptographically secure random bit generators included one entry, Decim v2, that uses shrinking generators. The competition had two categories, methods intended for software implementation and methods intended for hardware implementation. Decim was entered in the hardware category. According to the portfolio PDF [3] on the competition site,

Decim contains a unique component in eSTREAM, that of irregular decimation, and is an interesting addition to the field of stream ciphers.

That sounds like it was the only method to use irregular decimation (i.e. shrinking generators).

The first version of Decim had some flaws, but the document says “no adverse cryptanalytic results are known” for the second version. Decim v2 made it to the second round of the eSTREAM competition but was not chosen as a finalist because

… the cipher doesn’t seem to deliver such a satisfying performance profile, so while there might be some very interesting elements to the Decim construction, we feel that the current proposal doesn’t compare too well to the other submissions for the hardware profile.

That seems to imply Decim might be competitive with a different implementation or in some context with different trade-offs.

Related posts

[1] Coppersmith, D., Krawczyk, H., Mansour, Y. The Shrinking Generator. Advances in Cryptology — CRYPTO ’93. Lecture Notes in Computer Science, vol. 773, pp. 22–39. Springer, Berlin.

[2] If a and b are good sources of random bits, it seems b sampled by a should be at least as good. In fact, if a is poor quality but b is good quality, b sampled by a should still be good. However, the reverse could be a problem. If b is biased, say it outputs more 1s than 0s, then even a good-quality sample of b will be biased in favor of 1s as well.

[3] The link to this report sometimes works but often doesn’t. There’s something unstable about the site. In case it works, here’s the URL: http://www.ecrypt.eu.org/stream/portfolio.pdf

A truly horrible random number generator

I needed a bad random number generator for an illustration, and chose RANDU, possibly the worst random number generator that was ever widely deployed. Donald Knuth comments on RANDU in the second volume of his magnum opus.

When this chapter was first written in the late 1960’s, a truly horrible random number generator called RANDU was commonly used on most of the world’s computers.

This generator starts with an odd seed x_0 and generates subsequent values via

x_{n+1} = 65539 x_n mod 2^31

I needed to generate 8-bit numbers, and I took the lowest eight bits of each value from RANDU. (To be fair, the highest 8 bits would have been better, but my goal was to find a bad RNG, so I used the lowest.)
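
Here’s a minimal sketch of that sampling procedure, using 33 as the seed (the seed mentioned below); it reproduces the hex values printed later in the post.

    def randu_low_bytes(seed, n):
        "Return the lowest eight bits of the first n RANDU values."
        x = seed
        out = []
        for _ in range(n):
            x = (65539 * x) % 2**31
            out.append(x & 0xFF)      # keep the lowest eight bits
        return out

    print([hex(v) for v in randu_low_bytes(33, 8)])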

To demonstrate the generator’s (lack of) quality I made a histogram of sorts with a 16 by 16 grid, one cell for each possible 8-bit number. I generated a large number of samples and plotted the grid. Here’s what I got:

RANDU histogram

Here’s what I get with a better random number generator:

randint histogram

I was looking for something bad, but I didn’t realize RANDU was that bad. The white stripes represent the generated values and the black stripes values that are never generated. So out of 256 possible 8-bit numbers, this generator only ever outputs 64 of them. (I used 33 as my seed. I might have gotten different vertical stripes if I had chosen a different seed, but I’d still get stripes.) Also, the frequencies of the values it does take on have suspiciously little variation.

You can see the pattern of values RANDU takes on by printing out the first few generated values in hex:

    0x63
    0x29
    0x7b
    0x71
    0x53
    0xf9
    0xeb
    0xc1

The last hex digit cycles 3, 9, b, 1, 3, 9, b, 1, … producing only four possible values.

We could prove the statements empirically demonstrated above by noting that

x_n = 65539^n x_0 mod 2^k

for k = 31, but also for smaller k. We could set k = 8 and prove that there are only 64 values, or set k = 4 and prove that there are only four final hex digits.
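
A quick empirical check of those counts, again with seed 33:

    x, values = 33, []
    for _ in range(10**5):
        x = (65539 * x) % 2**31
        values.append(x)

    print(len({v % 2**8 for v in values}))  # 64 distinct low bytes
    print(len({v % 2**4 for v in values}))  # 4 distinct final hex digits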

More random number generation posts

Implementing the ChaCha RNG in Python

My previous post talked about the ChaCha random number generator (CSPRNG) and how Google is using it in a stream cipher for encryption on low-end devices. This post talks about how to implement ChaCha in pure Python.

First of all, the only reason to implement ChaCha in pure Python is to play with it. It would be more natural and more efficient to implement ChaCha in C.

RFC 8439 gives detailed, language-neutral directions for how to implement ChaCha, including test cases for intermediate results. At its core is the function that does a “quarter round” operation on four unsigned integers. This function depends on three operations:

  • addition mod 2^32, denoted +
  • bitwise XOR, denoted ^, and
  • bit rotation, denoted <<<= n.

In C, the += operator on unsigned integers would do what the RFC denotes by +=, but in Python working with (signed) integers we need to explicitly take remainders mod 2^32. The Python bitwise XOR operator ^ can be used directly. We’ll write a function roll that corresponds to <<<=.

So the following line of pseudocode from the RFC

    a += b; d ^= a; d <<<= 16;

becomes

    a = (a+b) % 2**32; d = roll(d^a, 16)

in Python. One way to implement roll would be to use the bitstring library:

    from bitstring import Bits

    def roll(x, n):
        bits = Bits(uint=x, length=32)
        return (bits[n:] + bits[:n]).uint

Another approach, a little harder to understand but not needing an external library, would be

    def roll2(x, n):
        # Shift left by n, wrap around mod 2^32, then add in the bits that
        # rotated off the top. (Addition works because the bits don't overlap.)
        return (x << n) % (2 << 31) + (x >> (32-n))

So here’s an implementation of the ChaCha quarter round:

    def quarter_round(a, b, c, d):
        a = (a+b) % 2**32; d = roll(d^a, 16)
        c = (c+d) % 2**32; b = roll(b^c, 12)
        a = (a+b) % 2**32; d = roll(d^a,  8)
        c = (c+d) % 2**32; b = roll(b^c,  7)
        return a, b, c, d
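
We can sanity-check this against the quarter round test vector in RFC 8439, section 2.1.1:

    # Test vector from RFC 8439, section 2.1.1
    a, b, c, d = quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)
    assert (a, b, c, d) == (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)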

ChaCha has a state consisting of 16 unsigned integers. A “round” of ChaCha consists of four quarter rounds, operating on four of these integers at a time. All the details are in the RFC.
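
As a sketch of how the quarter rounds fit together: ChaCha alternates “column” rounds and “diagonal” rounds, with the index groups below taken from the RFC. Here state is a list of 16 integers and quarter_round is the function above.

    def double_round(state):
        "One column round followed by one diagonal round."
        for i in range(4):  # column round: quarter rounds down the columns
            state[i], state[i+4], state[i+8], state[i+12] = \
                quarter_round(state[i], state[i+4], state[i+8], state[i+12])
        # diagonal round: quarter rounds along the diagonals
        for i, j, k, l in [(0,5,10,15), (1,6,11,12), (2,7,8,13), (3,4,9,14)]:
            state[i], state[j], state[k], state[l] = \
                quarter_round(state[i], state[j], state[k], state[l])
        return state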

Incidentally, the inner workings of the BLAKE2 secure hash function are similar to those of ChaCha.

More RNG posts

Google Adiantum and the ChaCha RNG

The ChaCha cryptographic random number generator (CSPRNG) is in the news thanks to Google’s Adiantum project. I’ll discuss what’s going on, but first a little background.

Adiantum maidenhair fern

The name of the project comes from a genus of fern. More on that below as well.

One-time pads

The one-time pad is a provably unbreakable way to encrypt things. You create a sheet of random bits and give your counterpart an exact copy. Then when it comes time for you to send an encrypted message, you convert your message to a stream of bits, XOR your message with the random bits you exchanged previously, and send the result. The recipient then takes the XOR of the received message with the pad of random bits, and recovers the original message.
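
In Python, the whole scheme is a few lines. This is a sketch with a made-up message; secrets.token_bytes stands in for the hard part, a truly random pad.

    import secrets

    message = b"ATTACK AT DAWN"
    pad = secrets.token_bytes(len(message))                  # exchanged in advance
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))  # encrypt: XOR with pad
    recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
    assert recovered == message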

This is called a one-time pad because it’s a pad of bits that you can only use one time. If you reuse a pad, it’s no longer unbreakable.

One-time pads are impractical for a couple reasons. First, it’s hard to generate truly random bits, especially in bulk. Second, exchanging the pads is almost as difficult as exchanging messages.

Stream ciphers

So here’s a bright idea: we’ll get around both of the problems with one-time pads by using pseudorandom bits rather than random bits! Then both parties can generate their own random bits.

Many people have had this idea, and it’s not necessarily a bad one. It’s called a stream cipher. The problem is that most pseudorandom number generators are not up to the task. You need a cryptographically secure RNG, and most RNGs are far from secure. The ChaCha RNG, however, appears to be good enough to use in a stream cipher, given enough rounds of scrambling [1], and Google is using it for full disk encryption in Android devices.

Full disk encryption

If you forget your password to your computer, you may not be able to access your data, but a thief still could by removing the hard drive and accessing it from another computer. That is, unless the disk is encrypted.

Full disk encryption on a laptop, such as BitLocker on Windows or FileVault on OSX, is usually implemented via AES encryption with hardware acceleration. If you don’t have special hardware for encryption, AES can be too slow.

Adiantum: ChaCha encryption on Android

On low-end devices, ChaCha encryption can be around 5x faster than AES. So Google is using ChaCha for Android devices, using what it calls Adiantum.

You can read the technical details in [2], and you can read more about the ChaCha random number generator in [3].

So where does the name Adiantum come from? It’s a Victorian name for a genus of maidenhair ferns, symbolic of sincerity and discretion.

More on CSPRNGs

[1] Adiantum uses ChaCha with 12 rounds. TLS 1.3 uses ChaCha with 20 rounds.

[2] Adiantum: length-preserving encryption for entry-level processors by Google employees Paul Crowley and Eric Biggers.

[3] IRTF RFC 8439: ChaCha20 and Poly1305 for IETF Protocols

Entropy extractor used in μRNG

Yesterday I mentioned μRNG, a true random number generator (TRNG) that takes physical sources of randomness as input. These sources are independent but non-uniform. This post will present the entropy extractor μRNG uses to take non-uniform bits as input and produce uniform bits as output.

We will present Python code for playing with the entropy extractor. (μRNG is extremely efficient, but the Python code here is not; it’s just for illustration.) The code will show how to use the pyfinite library to do arithmetic over a finite field.

Entropy extractor

The μRNG generator starts with three bit streams—X, Y, and Z—each with at least 1/3 bit min-entropy per bit.

Min-entropy is Rényi entropy with α = ∞. For a Bernoulli random variable, one that takes on two values, one with probability p and the other with probability 1-p, the min-entropy is

-log2 max(p, 1-p).

So requiring min-entropy of at least 1/3 means the two probabilities are less than 2^(-1/3) = 0.7937.
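
A quick check of the arithmetic, including the bias p = 0.79 used in the code below:

    from math import log2

    print(2**(-1/3))    # 0.7937..., the upper bound on max(p, 1-p)
    print(-log2(0.79))  # 0.3401..., the min-entropy of a Bernoulli(0.79) stream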

Take eight bits (one byte) at a time from X, Y, and Z, and interpret each byte as an element of the finite field with 2^8 elements. Then compute

X*Y + Z

in this field. The resulting stream of bits will be independent and uniformly distributed, or very nearly so.

Purified noise

Just a quick aside. Normally you want to remove noise from data to reveal a signal. Said another way, you want to split the data into signal and noise so you can throw out the noise. Here the goal is the opposite: we want to remove any unwanted signal in order to create pure noise!

Python implementation

We will need the bernoulli class for generating our input bit streams, and the pyfinite library for doing finite field arithmetic on the bits.

    from scipy.stats import bernoulli
    from pyfinite import ffield

And we will need a couple bit manipulation functions.

    def bits_to_num(a):
        "Convert an array of bits to an integer."
    
        x = 0
        for i in range(len(a)):
            x += a[i]*2**i
        return x

    def bitCount(n):
        "Count how many bits are set to 1."
        count = 0
        while(n):
            n &= n - 1
            count += 1
        return count 

The following function generates random bytes using the entropy extractor. The input bit streams have p = 0.79, corresponding to min-entropy 0.34.

    def generate_byte():
        "Generate bytes using the entropy extractor."
    
        b = bernoulli(0.79)
    
        x = bits_to_num(b.rvs(8))
        y = bits_to_num(b.rvs(8))
        z = bits_to_num(b.rvs(8)) 

        F = ffield.FField(8)
        return F.Add(F.Multiply(x, y), z)

Note that 79% of the bits produced by the Bernoulli generator will be 1’s. But we can see that the output bytes are about half 1’s and half 0’s.

    s = 0
    N = 1000
    for _ in range(N):
        s += bitCount( generate_byte() )
    print( s/(8*N) )

This returned 0.50375 the first time I ran it and 0.49925 the second time.

For more details see the μRNG paper.

Update: RNG test suite results

I ran an experiment, creating streams of biased data and running them through the entropy extractor. The first post in the series, NIST STS, explains the set up. The last post in the series, using TestU01, summarizes the results. In a nutshell, the extractor passes STS and DIEHARDER, but fails PractRand and TestU01.

Related posts