Factored random numbers

A couple days ago Michael Nielsen posted an image of a one-page paper that gives an algorithm for generating factored random numbers, uniformly distributed from 1 to some designated N.

The algorithm does not generate random numbers then factor them. It’s more efficient than that, generating the factorization along with the final result. It does require testing for whether a number is prime, but this is more efficient than factorization.

I thought about trying to code up the algorithm in Python, but then I saw that @iconjack beat me to it.

    from sympy import isprime
    from random import random, randint

    def randfacts(N):
        while True:
            n, r, s = N, 1, []            # r is the product of the primes in s
            while n > 1:
                if r > N: break           # r overshot N, so start over
                if isprime(n := randint(1, n)):
                    r *= n
                    s.append(n)
            else:
                # the loop ran down to n == 1 without overshooting:
                # accept r (and its factorization s) with probability r/N
                if random() < r/N:
                    return r, s
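
For example, here’s one way to try it out. The bound of one million and the number of draws below are arbitrary choices, and the output will vary from run to run since the results are random.

    for _ in range(3):
        r, factors = randfacts(10**6)
        print(r, factors)   # r is the product of the primes in factors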

What is a Pentanomial GFSR random number generator?

The ISO random number generation standard, ISO 28640, speaks of a “Pentanomial GFSR method” for generating random variates. What is this? We’ll break it down, starting with GFSR.

GFSR

In short, a GFSR random number generator is what is now more commonly called a linear feedback shift register, or LFSR. The terminology “GFSR” was already dated when the ISO standard was published in 2010.

Tausworthe [1] came up with LFSR random variate generators in 1965. His paper looked at linear feedback shift registers but didn’t use the term LFSR. LFSRs are bit sequences in which the current bit is a linear combination of certain previous bits, something like a Fibonacci sequence. The algebra is carried out mod 2, so addition amounts to XOR.

As a toy example, consider

xₙ₊₅ = xₙ₊₂ + xₙ.

The first five bits would be the seed for the generator, then the recurrence above determines the rest. So, for example, we might start with 01101. Then the sequence of bits would be

011011101010000100101100111110001101110101…
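
A few lines of Python reproduce the sequence above by iterating the recurrence mod 2 starting from the seed bits.

    # toy LFSR: x_{n+5} = x_{n+2} + x_n (mod 2), seeded with 0, 1, 1, 0, 1
    bits = [0, 1, 1, 0, 1]
    for n in range(37):
        bits.append((bits[n + 2] + bits[n]) % 2)
    print(''.join(str(b) for b in bits))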

It seems the term GFSR dates to 1973 with Lewis and Payne [2]. As far as I can tell, between 1965 and 1973 Tausworthe’s idea was implemented in practice using two previous terms, and this specialization of LFSRs became known as FSRs. Then [2] generalized this to include possibly more previous terms, restoring the generality of [1].

That explains “GFSR method” but what is this “pentanomial”?

Pentanomial

Recurrence relations can be described in terms of their characteristic polynomials. For example, in the familiar Fibonacci sequence

Fₙ₊₂ = Fₙ₊₁ + Fₙ

the characteristic polynomial is

x² – x – 1.

The characteristic polynomial of the toy generator above would be

x⁵ – x² – 1.

And since -1 is the same as 1 when working mod 2, we’d usually write this polynomial as

x⁵ + x² + 1.

Between 1965 and 1973, FSRs were implemented with recurrences whose characteristic polynomial was a trinomial. Lewis and Payne looked at recurrences that depend on more previous terms. In particular, using four previous terms, which corresponds to a characteristic polynomial with five non-zero coefficients, turned out to have nice properties, hence the term “pentanomial.”

Primitive polynomials

The polynomials come before the recurrences. That is, we don’t write down the polynomial after designing the recurrence; the recurrence is designed by picking characteristic polynomials with special algebraic properties. This is because a primitive polynomial corresponds to the longest possible sequence of bits before things repeat.

The period of an LFSR corresponding to a primitive polynomial of degree n is 2ⁿ – 1. This is the maximum possible: the register has 2ⁿ possible states, and the all-zeros state must be excluded because it only ever produces more zeros. As long as at least one bit in the seed is set, the LFSR will produce a sequence of 2ⁿ – 1 bits before repeating.
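
As a sanity check, a few lines of Python confirm that the toy generator above has period 2⁵ – 1 = 31, as it should since its characteristic polynomial x⁵ + x² + 1 is primitive. The step function below is just an illustration of iterating the recurrence on the 5-bit state.

    def step(state):
        # state holds the five most recent bits (x_n, ..., x_{n+4})
        return state[1:] + ((state[0] + state[2]) % 2,)

    seed = (0, 1, 1, 0, 1)
    state, period = step(seed), 1
    while state != seed:
        state, period = step(state), period + 1
    print(period)   # 31 = 2**5 - 1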

Practicalities

In practice LFSRs are based on higher-degree polynomials than our toy example. For example, one might use

xₙ₊₆₀₇ = xₙ + xₙ₊₁₆₇ + xₙ₊₃₀₇ + xₙ₊₄₆₁

after seeding it with 607 random bits. This corresponds to the primitive pentanomial

x⁶⁰⁷ + x⁴⁶¹ + x³⁰⁷ + x¹⁶⁷ + 1.

This LFSR would have period 2⁶⁰⁷ – 1, or on the order of 10¹⁸³ bits.

LFSRs are very fast, but on their own they’re unsuitable for cryptography because their output is a linear function of the seed, and cryptanalysis can exploit that linear structure. So in order to use LFSRs as secure random number generators, they have to be combined with some sort of nonlinear processing, such as described here.


[1] Robert C. Tausworthe. Random Numbers Generated by Linear Recurrence Modulo Two. Mathematics of Computation, Vol. 19, 1965, pp. 201–209.

[2] T. G. Lewis and W. H. Payne. Generalized Feedback Shift Register Pseudorandom Number Algorithm. Journal of the ACM, Vol. 20, No. 3, July 1973, pp. 456–468.

Uniform sampling from an ellipse

There is a simple way to randomly sample points from an ellipse, but it is not uniform.

Assume your ellipse is parameterized by

x(t) = a cos(t)
y(t) = b sin(t)

with t running from 0 to 2π. The naive approach would be to take uniform samples of t and stick them into the equations above.

Rather than looking at random sampling, this post will look at even sampling because it’s easier to tell visually whether even samples are correct. Random samples could be biased in a way that isn’t easy to see. We’ll use an ellipse with a = 10 and b = 3 because it’s easier to see the unevenness of the naive approach when the ellipse is more eccentric.

Let’s choose our t values by dividing the interval from 0 to 2π into 64 intervals. Here’s what we get when we push this up onto the ellipse.

[Figure: samples clumped at the ends of the ellipse]

The spacing of the dots is uniform in the parameter t, but not in arc length, which is the geometrically relevant scale.

So how do you create evenly spaced dots on an ellipse? You unroll the ellipse into a line segment, sample from that segment, then roll the ellipse back up.

The previous post showed that the perimeter of an ellipse equals 4a E(1 – b²/a²) where E is the complete elliptic integral of the second kind. This tells us how long a segment we’ll get when we cut our ellipse and flatten it out. We take evenly spaced samples from the segment, then invert the arc length to map the points back onto the ellipse.

If we redo our example with the procedure described above we get the following.

[Figure: evenly spaced dots around an ellipse]

Here’s how the dots in the plot above were computed. Describe the position of a point on the ellipse by the length of the arc from (a, 0) to the point. Then you can find the corresponding parameter t with the following code.

    from scipy.special import ellipe, ellipeinc
    from numpy import pi
    from scipy.optimize import newton

    def E_inverse(z, m):
        # solve ellipeinc(y, m) == z for y, starting from a linear guess
        em = ellipe(m)
        t = (z/em)*(pi/2)
        f = lambda y: ellipeinc(y, m) - z
        r = newton(f, t)
        return r

    def t_from_length(length, a, b):
        # convert arc length, measured from (a, 0), into the parameter t
        m = 1 - (b/a)**2
        T = 0.5*pi - E_inverse(ellipe(m) - length/a, m)
        return T

The same procedure applies to random samples. First sample uniformly from an interval as long as the perimeter of the ellipse to get a set of arc lengths, then use t_from_length to get the values of t to stick into the parameterization of the ellipse, as in the sketch below.
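
Here’s a sketch of that procedure, reusing ellipe and t_from_length from the code above. The ellipse parameters and the number of samples below are arbitrary choices for illustration.

    import numpy as np

    a, b = 10, 3
    m = 1 - (b/a)**2
    perimeter = 4*a*ellipe(m)                  # 4a E(1 - b^2/a^2)

    rng = np.random.default_rng()
    lengths = rng.uniform(0, perimeter, 100)   # uniform arc lengths
    ts = np.array([t_from_length(s, a, b) for s in lengths])
    xs, ys = a*np.cos(ts), b*np.sin(ts)        # uniformly distributed points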

I imagine there are more efficient ways to generate uniform samples from an ellipse, but this works. If this were the bottleneck in some application I’d try out other approaches.


The quality of an RNG depends on the application

A random number generator can be good for some purposes and not for others. This isn’t surprising given the fundamentally impossible task such generators are supposed to perform. Technically a random number generator is a pseudorandom number generator, because it cannot produce truly random numbers. But random is as random does: for many purposes the output of a pseudorandom number generator is random enough. And that brings us back to purposes.

Let β be an irrational number and consider the sequence

xₙ = nβ mod 1

for n = 0, 1, 2, … That is, we start at 0 and repeatedly add β, taking the fractional part each time. This gives us a sequence of points in the unit interval. If you think of bending the interval into a circle by joining the 1 end to 0, then our sequence goes around the circle, each step advancing by a fraction β of a full revolution.

Is this a good random number generator? For some purposes yes. For others no. We’ll give an example of each.

Integration

If your purpose is Monte Carlo integration, then yes it is. Our sequence has low discrepancy. You can approximate the integral of a function f over [0, 1] by taking the average of f(xn) over the first N elements of the sequence. Doing Monte Carlo integration with this particular RNG amounts to quasi Monte Carlo (QMC) integration, which is often more efficient than Monte Carlo integration.

Here’s an example using β = e.

    import numpy as np
    
    # Integrate f with N steps of (quasi) Monte Carlo 
    def f(x): return 1 + np.sin(2*np.pi*x)
    N = 1000
    
    # Quasi Monte Carlo
    sum = 0
    x = 0
    e = np.exp(1) 
    for n in range(N):
        sum += f(x)
        x = (x + e) % 1
    print(sum/N)
    
    # Monte Carlo
    sum = 0
    np.random.seed(20220623)
    for _ in range(N):
        sum += f(np.random.random())
    print(sum/N)

This code prints

    0.99901...
    0.99568...

The exact value of the integral is 1, and so the error using QMC is between 4 and 5 times smaller than the error using MC. To put it another way, integration using our simple RNG is much more accurate than using the generally better RNG that NumPy uses.

Simulation

Now suppose we’re doing some sort of simulation that requires computing the gaps between consecutive random numbers. Let’s look at the set of gaps we get using our simple RNG.

    gapset = set()
    x = 0
    for _ in range(N):
        newx = (x + e) % 1
        gap = np.abs(newx - x)
        x = newx
        gapset.add(np.round(gap, 10))
    print( gapset )

Here we round the gaps to 10 decimal places so that floating point error doesn’t make gaps that should be equal appear to be distinct.

And here’s our output:

    {0.7182818285, 0.2817181715}

There are only two gap sizes! This is a consequence of the three-gap theorem that Greg Egan mentioned on Twitter this morning. Our situation is slightly different in that we’re looking at gaps between consecutive terms, not the gaps that the interval is divided into. That’s why we have two gaps rather than three.

If we use NumPy’s random number generator, we get 1000 different gap sizes.

[Figure: histogram of gap sizes]
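
For comparison, here’s a sketch of the analogous gap computation using NumPy’s generator, reusing N from the code above; the seed is arbitrary. With a continuous generator, essentially every gap is distinct.

    rng = np.random.default_rng(20220623)
    gapset = set()
    x = 0
    for _ in range(N):
        newx = rng.random()
        gapset.add(np.round(abs(newx - x), 10))
        x = newx
    print(len(gapset))   # 1000, almost surely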

Cryptography

Random number generators with excellent statistical properties may be completely unsuitable for use in cryptography. A lot of people don’t know this or don’t believe it. I’ve seen examples of insecure encryption systems that use random number generators with good statistical properties but bad cryptographic properties.

These systems violate Kerckhoffs’s principle that the security of an encryption system should reside in the key, not in the algorithm. Kerckhoffs assumed that encryption algorithms will eventually become known, and are difficult to change, and so the strength of the system should rely on the keys it uses. At best these algorithms provide security by obscurity: they’re easily breakable knowing the algorithm, but the algorithm is obscure. But these systems may not even provide security by obscurity because it may be possible to infer the algorithm from the output.

Fit for purpose

The random number generator in this post would be terrible for encryption because the sequence is trivially predictable. It would also fail some statistical tests, though it would pass others. It passes at least one statistical test, namely using the sequence for accurate Monte Carlo integration.

Even so, the sequence would pass a one-sided test but not a two-sided test. If you tested whether the sequence, when used in Monte Carlo integration, produced results with error below some threshold, it would pass. But if you looked at the distribution of the integration errors, you’d see that they’re smaller than would be expected from a random sequence. The sequence provides suspiciously good integration results, failing a test of randomness but suggesting the sequence might be useful in numerical integration.


Evolution of random number generators

The previous post showed that the bits of prime powers look kinda chaotic. When you plot them, they form a triangular shape because the size of the numbers is growing. The numbers grow geometrically, so the number of bits grows linearly. Here’s the plot for powers of 5:

[Figure: bits of powers of 5 plotted]

If you crop the triangle so that you just see a rectangular portion of it, things look less regular. If we look at powers of p modulo a power of 2, say 2³², then we’re chopping off the left side of the triangle.

Suppose we don’t start with 1, but start with some other number, call it a seed, and multiply by 5 every time. Would we get a chaotic pattern? Yes, and we have just invented the congruential random number generator. These RNGs have the form

xₙ₊₁ = a xₙ mod m

The RNG we implicitly described above would have a = 5 and m = 2³². This is not a good choice of a and m, but we’re on to something. We can use different values of the multiplier and modulus to make a decent RNG.

The RANDU random number generator, commonly used in the 1960s, used a = 65539 and m = 2³¹. This turned out to have notoriously bad statistical properties. We can see this with a small modification of the code used to produce the plot above.

    def out(s):
        # draw one row of bits: a solid block for 1, a space for 0
        s = s.replace('1', chr(0x2588))
        s = s.replace('0', ' ')
        print(s)

    a, m = 65539, 2**31    # RANDU parameters
    x = 7                  # seed
    for i in range(32):
        x = a * x % m
        s = format(x, '32b')
        out(s)

Here’s the plot.

[Figure: bits of the RANDU random number generator]

The pattern on the right side looks like a roll of film. This shows that the lowest order bits are very regular. You can also see that we need to start with a large seed: starting with 7 created the large triangular holes at the top of the image. There are other patterns in the image that are hard to describe, but you get the sense something is not right, especially when you compare this image to a better alternative.

A much better choice of parameters is multiplier a = 48271 and modulus m = 2³¹ – 1. These are the parameters in the MINSTD random number generator. Here’s a plot of its bits, also starting with seed 7.

[Figure: bits of the MINSTD generator]
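
For reference, the MINSTD plot can be produced the same way as the RANDU plot, assuming the out function defined above, by swapping in the MINSTD parameters.

    a, m = 48271, 2**31 - 1    # MINSTD parameters
    x = 7                      # same seed as before
    for i in range(32):
        x = a * x % m
        out(format(x, '32b'))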

The MINSTD generator is much better than RANDU, and adequate for some applications, but far from state of the art. You could do better by using a much larger multiplier and a modulus on the order of 2⁶⁴ or 2¹²⁸.

You could do even better by adding a permutation step to your congruential generator. That’s the idea behind the PCG (permuted congruential generator) family of random number generators. These generators pass all the standard statistical tests with flying colors.

There are many other approaches to random number generation, but this post shows one thread of historical development, how hypothetically someone could go from noticing that prime powers have chaotic bit patterns to inventing PCG.


Extracting independent random bits from dependent sources

Sometimes you have a poor quality source of randomness and you want to refine it into a better source. You might, for example, want to generate cryptographic keys from a hardware source that is more likely to produce 1’s than 0’s. Or maybe your source of bits is dependent, more likely to produce a 1 when it has just produced a 1.

Von Neumann’s method

If the bits are biased but independent, a simple algorithm by John von Neumann will produce unbiased bits. Assume each bit of input is a 1 with probability p and a 0 otherwise. As long as 0 < p < 1 and the bits are independent, it doesn’t matter what p is [1].

Von Neumann’s algorithm works on consecutive pairs of bits. If the pair is 00 or 11, don’t output anything. When you see 10 output a 1, and when you see 01 output a 0. This will output a 0 or a 1 with equal probability.
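
Here’s a minimal sketch of von Neumann’s algorithm. The function name and the bias used in the demonstration are arbitrary choices for illustration.

    from random import random

    def von_neumann(bits):
        # pair up consecutive bits; 10 -> 1, 01 -> 0, 00 and 11 -> nothing
        it = iter(bits)
        for b1, b2 in zip(it, it):
            if b1 != b2:
                yield b1

    # demo: independent bits that are 1 with probability 0.75
    biased = (1 if random() < 0.75 else 0 for _ in range(100_000))
    unbiased = list(von_neumann(biased))
    print(sum(unbiased) / len(unbiased))   # close to 0.5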

Manuel Blum’s method

Manuel Blum built on von Neumann’s algorithm to extract independent bits from dependent sources. Von Neumann’s algorithm can remove bias but not dependence; Blum’s algorithm can remove both, provided the dependence takes the form of a Markov chain.

Blum’s method assumes each state in the Markov chain can be exited two ways, labeled 0 and 1. You could focus on a single state and apply von Neumann’s algorithm to its exits, returning a 1 when the state exits by the 1 path and then by the 0 path, and a 0 when it does the opposite. But if there are many states, this method is inefficient since it produces nothing while the chain is in any of the other states.

Blum basically applies the same method to all states, not just one state. But there’s a catch! His paper shows that the naive implementation of this idea is wrong. He gives a pseudo-theorem and a pseudo-proof that the naive algorithm is correct before showing that it is not. Then he shows that the correct algorithm really is correct.

This is excellent exposition. It would have been easier to say nothing of the naive algorithm, or simply mention in passing that the most obvious approach is flawed. But instead he went to the effort of fully developing the wrong algorithm (with a warning up-front that it will be shown to be wrong).


[1] It matters for efficiency. The probability that a pair of input bits will produce an output bit is 2p(1 – p), so the probability is maximized when p = 0.5. If p = 0.999, for example, the method will still work correctly, but you’ll have to generate 500 pairs of input on average to produce each output bit.

ChaCha RNG with fewer rounds

ChaCha is a CSPRNG, a cryptographically secure pseudorandom number generator. When used in cryptography, ChaCha typically carries out 20 rounds of its internal scrambling process. Google’s Adiantum encryption system uses ChaCha with 12 rounds.

The runtime for ChaCha is proportional to the number of rounds, so you don’t want to do more rounds than necessary if you’re concerned with speed. Adiantum was developed for mobile devices and so Google wanted to reduce the number of rounds while maintaining a margin of cryptographic safety.
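
To make concrete what a “round” is, here’s a sketch of the ChaCha block function with the number of rounds as a parameter. It follows the public ChaCha description, with quarter rounds applied alternately to the columns and diagonals of a 16-word state and a double round counting as two rounds; it’s meant for illustration, not as vetted cryptographic code.

    def rotl32(x, n):
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    def quarter_round(s, a, b, c, d):
        s[a] = (s[a] + s[b]) & 0xFFFFFFFF; s[d] = rotl32(s[d] ^ s[a], 16)
        s[c] = (s[c] + s[d]) & 0xFFFFFFFF; s[b] = rotl32(s[b] ^ s[c], 12)
        s[a] = (s[a] + s[b]) & 0xFFFFFFFF; s[d] = rotl32(s[d] ^ s[a], 8)
        s[c] = (s[c] + s[d]) & 0xFFFFFFFF; s[b] = rotl32(s[b] ^ s[c], 7)

    def chacha_block(key, counter, nonce, rounds=20):
        # key: 8 words, counter: 1 word, nonce: 3 words, all 32 bits
        init = [0x61707865, 0x3320646E, 0x79622D32, 0x6B206574]
        init += key + [counter] + nonce
        w = init[:]
        for r in range(rounds):
            if r % 2 == 0:   # column round
                quarter_round(w, 0, 4,  8, 12)
                quarter_round(w, 1, 5,  9, 13)
                quarter_round(w, 2, 6, 10, 14)
                quarter_round(w, 3, 7, 11, 15)
            else:            # diagonal round
                quarter_round(w, 0, 5, 10, 15)
                quarter_round(w, 1, 6, 11, 12)
                quarter_round(w, 2, 7,  8, 13)
                quarter_round(w, 3, 4,  9, 14)
        # add the scrambled state to the initial state, word by word
        return [(w[i] + init[i]) & 0xFFFFFFFF for i in range(16)]

    # e.g. one 64-byte block of ChaCha(8) output, as 16 32-bit words
    print(chacha_block([0]*8, 0, [0]*3, rounds=8))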

Cryptographer Jean-Philippe Aumasson suggests 8 rounds of ChaCha is plenty. He reports that there is a known attack on ChaCha with 6 rounds that requires on the order of 2¹¹⁶ operations, but that ChaCha with 5 rounds is definitely breakable, requiring on the order of only 2¹⁶ operations.

Three rounds of ChaCha are not enough [1], but four rounds are enough to satisfy DIEHARDER, PractRand, and TestU01 [2]. This illustrates the gap between adequate statistical quality and adequate cryptographic quality: Four rounds of ChaCha are apparently enough to produce the former but five rounds are not enough to produce the latter.

There doesn’t seem to be any reason to use ChaCha with four rounds. If you don’t need cryptographic security, then there are faster RNGs you could use. If you do need security, four rounds are not enough.

ChaCha with six rounds seems like a good compromise if you want an RNG that is fast enough for general use and that also has reasonably good cryptographic quality. If you want more safety margin for cryptographic quality, you might want to go up to eight rounds.

What a difference one round makes

One thing I find interesting about random number generation and block encryption is that a single round of obfuscation can make a huge difference. ChaCha(3) fails statistical tests but ChaCha(4) is fine. ChaCha(5) is easily broken but ChaCha(6) is not.

Interesting failures

Often random number generators are either good or bad; they pass the standard test suites or they clearly fail. ChaCha(3) is interesting in that it is somewhere in between. As the results in the footnotes show, DIEHARDER hints at problems, and TestU01’s Small Crush thinks everything is fine, but the full Crush battery does find problems.

The decisive failure of the Fourier tests is understandable: low-quality generators often fail spectral tests. But the results of the simple poker test are harder to understand. What is it about simulating poker that makes ChaCha(3) fail spectacularly? And in both cases, one more round of ChaCha fixes the problems.


[1] ChaCha(3) passes DIEHARDER, though five of the tests passed with a “weak” pass. It passes TestU01 Small Crush but fails the full Crush battery:

========= Summary results of Crush =========

 Version:          TestU01 1.2.3
 Generator:        32-bit stdin
 Number of statistics:  144
 Total CPU time:   00:39:05.05
 The following tests gave p-values outside [0.001, 0.9990]:
 (eps  means a value < 1.0e-300):
 (eps1 means a value < 1.0e-15):

       Test                          p-value
 ----------------------------------------------
 25  SimpPoker, d = 64                eps
 26  SimpPoker, d = 64                eps
 27  CouponCollector, d = 4          7.7e-4
 28  CouponCollector, d = 4          7.4e-4
 29  CouponCollector, d = 16          eps
 33  Gap, r = 0                      2.7e-8
 51  WeightDistrib, r = 0             eps
 52  WeightDistrib, r = 8           5.2e-15
 53  WeightDistrib, r = 16           2.1e-5
 55  SumCollector                     eps
 69  RandomWalk1 H (L = 10000)       1.1e-4
 74  Fourier3, r = 0               1.3e-144
 75  Fourier3, r = 20               6.9e-44
 80  HammingWeight2, r = 0           2.8e-6
 83  HammingCorr, L = 300           3.2e-10
 84  HammingCorr, L = 1200          6.1e-13
 ----------------------------------------------
 All other tests were passed

ChaCha(3) also fails PractRand decisively.

RNG_test using PractRand version 0.94
RNG = chacha(3), seed = 0x7221236f
test set = core, folding = standard (32 bit)

rng=chacha(3), seed=0x7221236f
length= 128 megabytes (2^27 bytes), time= 2.0 seconds
  Test Name                         Raw       Processed     Evaluation
  [Low1/32]BCFN(2+0,13-6,T)         R= +31.0  p =  4.7e-11   VERY SUSPICIOUS
  [Low1/32]BCFN(2+1,13-6,T)         R= +27.4  p =  8.3e-10   VERY SUSPICIOUS
  [Low1/32]BCFN(2+2,13-6,T)         R= +52.7  p =  1.8e-18    FAIL !
  [Low1/32]BCFN(2+3,13-6,T)         R= +47.6  p =  9.5e-17    FAIL !
  [Low1/32]BCFN(2+4,13-7,T)         R= +26.1  p =  1.7e-8   very suspicious
  [Low1/32]DC6-9x1Bytes-1           R= +26.3  p =  1.3e-14    FAIL !
  [Low1/32]FPF-14+6/16:all          R=  +8.6  p =  2.2e-7   very suspicious
  ...and 147 test result(s) without anomalies

[2] ChaCha(4) passed TestU01 small crush and full crush. It passed PractRand using up to 512 GB.

Inverse congruence RNG

Linear congruence random number generators have the form

xₙ₊₁ = a xₙ + b mod p

Inverse congruence generators have the form

xₙ₊₁ = a xₙ⁻¹ + b mod p

where x⁻¹ means the modular inverse of x, i.e. the value y such that xy = 1 mod p. It is possible that x = 0, and 0 doesn’t have an inverse. In that case the generator returns b.

Linear congruence generators are quite common. Inverse congruence generators are much less common.

Inverse congruence generators were first proposed by J. Eichenauer and J. Lehn in 1986. (The authors call them inversive congruential generators; I find “inverse congruence” easier to say than “inversive congruential.”)

An advantage of such generators is that they have none of the lattice structure that can be troublesome for linear congruence generators. This could be useful, for example, in high-dimensional Monte Carlo integration.

These generators are slow, especially in Python. They could be implemented more efficiently in C, and would be fast enough for applications like Monte Carlo integration where random number generation isn’t typically the bottleneck.

The parameters a, b, and p have to satisfy certain algebraic properties. The parameters I’ll use in the post were taken from [1]. With these parameters the generator has maximal period p.

The generator naturally returns integers between 0 and p – 1. If that’s what you want, then the state and the random output are the same.

If you want to generate uniformly distributed real numbers, treat the integer sequence above as the generator’s state and return the state divided by p. Since p has 63 bits and a floating point number has 53 significant bits, all the bits of the result should be good.

Here’s a Python implementation for uniform floating point values. We pass in x and the state each time to avoid global variables.

    from sympy import mod_inverse

    p = 2**63 - 25
    a = 5520335699031059059
    b = 2752743153957480735

    def icg(pair):
        # pair = (previous uniform value, previous integer state)
        x, state = pair
        if state == 0:
            return (b/p, b)    # zero has no inverse; the generator returns b
        else:
            state = (a*mod_inverse(state, p) + b)%p
        return (state/p, state)

If you’re running Python 3.8 or later, you can replace

    mod_inverse(state, p)

with

    pow(state, -1, p)

and not import sympy.

Here’s an example of calling icg to print 10 floating point values.

    x, state = 1, 1
    for _ in range(10):
        (x, state) = icg((x, state))
        print(x)

If we want to generate random bits rather than floating point numbers, we have to work a little harder. If we just take 32 bits of the state, our results will be slightly biased, since the state is uniformly distributed between 0 and p – 1, not over a range whose size is a power of 2. This might not be a problem, depending on the application, since p is so near a power of 2.

To prevent this bias, I used an acceptance-rejection sampling method, trying again when the generator produces a value at or above the largest multiple of 2³² that is less than p. The code should nearly always accept.

    T = 2**32
    m = p - p%T    # largest multiple of 2**32 below p

    def icg32(pair):
        x, state = pair
        if state == 0:
            return (b % T, b)
        state = (a*mod_inverse(state, p) + b)%p
        while state >= m:  # rare: reject states in the biased tail [m, p)
            state = (a*mod_inverse(state, p) + b)%p
        return (state % T, state)

You could replace the last line above with

    return (state >> (63 - 32), state)

which takes the top 32 bits of the state rather than the bottom 32 and should be slightly more efficient.

The generator passes the four most commonly used test suites

  • DIEHARDER
  • NIST STS
  • PractRand
  • TestU01

whether you reduce the state mod 2³² or take the top 32 bits of the state. I ran the “small crush” battery of the TestU01 test suite and all tests passed. I didn’t take the time to run the larger “crush” and “big crush” batteries because they take two and three orders of magnitude longer to run.


[1] Jürgen Eichenauer-Herrmann. Inversive Congruential Pseudorandom Numbers: A Tutorial. International Statistical Review, Vol. 60, No. 2 (1992), pp. 167–176.

TestU01 small crush test suite


In recent posts I’ve written about using RNG test suites on the output of the μRNG entropy extractor. This is probably the last post in the series. I’ve looked at NIST STS, PractRand, and DIEHARDER before. In this post I’ll be looking at TestU01.

TestU01 includes three batteries of tests: Small Crush, Crush, and Big Crush. The entropy extractor failed the smallest of the three, so I didn’t go on to the larger suites. Small Crush isn’t small; it used a few hundred million 32-bit samples as input, about 0.84 GB of data. Crush uses two orders of magnitude more data, and Big Crush uses another order of magnitude more data than Crush.

SmallCrush consists of 10 tests:

  • smarsa_BirthdaySpacings
  • sknuth_Collision
  • sknuth_Gap
  • sknuth_SimpPoker
  • sknuth_CouponCollector
  • sknuth_MaxOft
  • svaria_WeightDistrib
  • smarsa_MatrixRank
  • sstring_HammingIndep
  • swalk_RandomWalk1

The test names begin with s, followed by a prefix indicating the origin of the test. For example, knuth refers to Donald Knuth’s tests in volume 2 of TAOCP and marsa refers to George Marsaglia. The remainder of the name is more descriptive, such as SimpPoker for Knuth’s simple poker test.

The output of the entropy extractor failed four of the tests, failure being defined as producing a p-value less than 10⁻³⁰⁰. The other tests passed without issue, meaning they returned p-values in the range [0.001, 0.999].

Recall from earlier posts that the μRNG entropy extractor takes three possibly biased bit streams and produces an unbiased bit stream, provided each of the input streams has min-entropy of at least 1/3. I produced biased streams by taking the bitwise OR of two consecutive values, producing a stream in which each bit is 1 with probability 0.75 and 0 with probability 0.25. The result passed all STS and DIEHARDER tests, but failed some PractRand and TestU01 SmallCrush tests. This is consistent with the generally held opinion that STS and DIEHARDER are relatively weak tests and PractRand and TestU01 are more rigorous tests.
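
As an aside, here’s a quick sketch of that biasing step: OR-ing two independent unbiased bits gives a bit that is 1 with probability 3/4. The use of the secrets module below is just for illustration.

    import secrets

    def biased_bit():
        # OR of two fair bits: P(1) = 3/4, P(0) = 1/4
        return secrets.randbits(1) | secrets.randbits(1)

    sample = [biased_bit() for _ in range(100_000)]
    print(sum(sample) / len(sample))   # roughly 0.75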

I applied the entropy extractor to PCG output without creating a biased stream, and the result passed PractRand and TestU01 SmallCrush. Presumably it would have passed STS and DIEHARDER as well. This confirms that the extractor does no harm to a high-quality stream of pseudorandom bits. It largely removes the bias from biased streams, enough to pass the two easier test suites but not enough to pass the two more demanding ones.


DIEHARDER test suite

The first well-known suite of tests for random number generators was George Marsaglia’s DIEHARD battery. The name was a pun on DieHard car batteries. Robert G. Brown took over the DIEHARD test suite and called it DIEHARDER, presumably a pun on the Bruce Willis movie.

I’ve written lately about an entropy extractor that creates a stream of unbiased bits from streams of biased bits. The output passes the NIST STS tests but fails the PractRand tests decisively. I was curious to see how it did on DIEHARDER, whether the results would be more like STS or like PractRand.

Turns out they’re more like STS. All tests pass, with the only minor anomaly being that one test out of 114 tests had a “weak” pass.

It’s remarkable that random number generators are even possible. They’re deterministic programs that produce output that for many practical purposes behaves as if it’s random. The entropy extractor convincingly imitates random output as far as the DIEHARDER tests are concerned, and as far as the STS tests are concerned, and even 2/3 of the PractRand tests. The remaining 1/3 of the PractRand tests, however, are not impressed.

It would be too simplistic to say that an RNG is “bad” because it fails a particular test. It would be even worse to assume an RNG is “good” because it passes a test or suite of tests. Good and bad depend on suitability for a given task. But all other things being equal, you can have more confidence in a generator that passes more tests.