Random sampling from a file

I recently learned about the Linux command line utility shuf from browsing The Art of Command Line. This could be useful for random sampling.

Given just a file name, shuf randomly permutes the lines of the file.

With the option -n you can specify how many lines to return. So it’s doing sampling without replacement. For example,

    shuf -n 10 foo.txt

would select 10 lines from foo.txt.

Actually, it would select at most 10 lines. You can’t select 10 lines without replacement from a file with fewer than 10 lines. If you ask for an impossible number of lines, the -n option is ignored and you simply get every line of the file in random order.

You can also sample with replacement using the -r option. In that case you can select more lines than are in the file since lines may be reused. For example, you could run

    shuf -r -n 10 foo.txt

to select 10 lines drawn with replacement from foo.txt, regardless of how many lines foo.txt has. For example, when I ran the command above on a file containing

    alpha
    beta
    gamma

I got the output

    beta
    gamma
    gamma
    beta
    alpha
    alpha
    gamma
    gamma
    beta

I don’t know how shuf seeds its random generator. Maybe from the system time. But if you run it twice you will get different results. Probably.
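
If you want the same two kinds of sampling inside a program rather than at the command line, Python’s standard library makes the same distinction. A small sketch by way of analogy (this is not how shuf is implemented):

    import random

    lines = ["alpha", "beta", "gamma"]

    # Without replacement, like shuf -n. Unlike shuf, random.sample
    # raises ValueError if you ask for more items than the list has.
    print(random.sample(lines, 2))

    # With replacement, like shuf -r -n. Any number of draws is fine.
    print(random.choices(lines, k=10))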


Cosmic rays flipping bits

A cosmic ray striking computer memory at just the right time can flip a bit, turning a 0 into a 1 or vice versa. While I knew that cosmic ray bit flips were a theoretical possibility, I didn’t know until recently that there had been documented instances on the ground [1].

Radiolab did an episode on the case of a cosmic bit flip changing the vote tally in a Belgian election in 2003. The error was caught because one candidate got more votes than was logically possible. A recount showed that the person in question got 4096 more votes in the first count than the second count. The difference of exactly 2^12 votes was a clue that there had been a bit flip. All the other counts remained unchanged when they reran the tally.

It’s interesting that the cosmic ray-induced error was discovered presumably because the software quality was high. All software is subject to cosmic bit flipping, but most of it is so buggy that you couldn’t rule out other sources of error.

Cosmic bit flipping is becoming more common because processors have become smaller and more energy efficient: the less energy it takes for a program to set a bit intentionally, the less energy it takes for radiation to set a bit accidentally.

Related post: Six sigma events

[1] Spacecraft are especially susceptible to bit flipping from cosmic rays because they are out from under the radiation shield we enjoy on Earth’s surface.

Improving on the sieve of Eratosthenes

Ancient algorithm

Eratosthenes had a good idea for finding all primes less than an upper bound N over 22 centuries ago.

Make a list of the numbers 2 to N. Circle 2, then scratch out all the larger multiples of 2 up to N. Then move on to 3. Circle it, and scratch out all the larger multiples of 3.

Every time you move on to a number that hasn’t been scratched out, that means it’s not divisible by any smaller number (other than 1), so it’s prime. Circle it and repeat. This algorithm is known as the sieve of Eratosthenes.
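
Here’s the ancient algorithm in a few lines of Python, a direct sketch of the procedure just described (marking can start at p², since smaller multiples of p have already been scratched out):

    def primes_up_to(N):
        """Sieve of Eratosthenes: return all primes <= N."""
        scratched = [False] * (N + 1)
        primes = []
        for p in range(2, N + 1):
            if not scratched[p]:
                primes.append(p)            # circle p
                for m in range(p * p, N + 1, p):
                    scratched[m] = True     # scratch out larger multiples
        return primes

    print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]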

You could turn this into an algorithm for factoring every number less than N by not just scratching off composite numbers but keeping track of what numbers they were divisible by.
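
For example, one way to do that bookkeeping is to record the smallest prime factor of each number as it’s scratched out; factoring is then repeated division. A minimal sketch:

    def smallest_prime_factors(N):
        """spf[k] = smallest prime factor of k, for 2 <= k <= N."""
        spf = list(range(N + 1))      # spf[k] == k means not yet scratched
        for p in range(2, N + 1):
            if spf[p] == p:           # p is prime
                for m in range(p * p, N + 1, p):
                    if spf[m] == m:
                        spf[m] = p
        return spf

    def factor(k, spf):
        """Factor k by peeling off smallest prime factors."""
        factors = []
        while k > 1:
            factors.append(spf[k])
            k //= spf[k]
        return factors

    spf = smallest_prime_factors(100)
    print(factor(84, spf))   # [2, 2, 3, 7]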

Not bad for an algorithm that predates Hindu-Arabic numerals by nine centuries. But as you might imagine, there have been a few improvements over the last couple millennia.

New algorithm

A paper by Helfgott published last week [1] gives a refined version of the sieve that takes less time and less space. Helfgott is not the first to improve on the sieve of Eratosthenes, but his is the latest improvement.

His paper shows that it is possible to find all primes less than N in time

O(N log N)

and space

O(N^{1/3} (log N)^{2/3}).

Furthermore, it is possible to factor all integers less than N in time

O(N log N)

and space

O(N^{1/3} (log N)^{5/3}).

He also addresses finding all primes and factoring all integers in an interval [N – Δ, N + Δ] provided

N^{1/3} (log N)^{2/3} ≤ Δ ≤ N.

In the case of such an interval, one can find all the primes in the interval in time O(Δ log N) and space O(Δ). And one can factor all integers in the interval in time and space O(Δ log N).

Helfgott’s paper gives detailed pseudocode for all the algorithms.


[1] Harald Andrés Helfgott. An improved sieve of Eratosthenes, Mathematics of Computation, https://doi.org/10.1090/mcom/3438. Published online April 23, 2019.

A truly horrible random number generator

I needed a bad random number generator for an illustration, and chose RANDU, possibly the worst random number generator that was ever widely deployed. Donald Knuth comments on RANDU in the second volume of his magnum opus.

When this chapter was first written in the late 1960’s, a truly horrible random number generator called RANDU was commonly used on most of the world’s computers.

This generator starts with an odd seed x0 and generates subsequent values via

x_{n+1} = 65539 x_n mod 2^31

I needed to generate 8-bit numbers, and I took the lowest eight bits of each value from RANDU. (To be fair, the highest 8 bits would have been better, but my goal was to find a bad RNG, so I used the lowest.)
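
Here’s a short Python sketch of the generator and the low-byte extraction, using the seed 33 mentioned below:

    def randu(seed):
        """RANDU: x_{n+1} = 65539 x_n mod 2^31. The seed should be odd."""
        x = seed
        while True:
            x = 65539 * x % 2**31
            yield x

    gen = randu(33)
    low_bytes = [next(gen) & 0xFF for _ in range(100_000)]
    print(len(set(low_bytes)))               # 64 of the 256 possible bytes
    print([hex(b) for b in low_bytes[:4]])   # ['0x63', '0x29', '0x7b', '0x71']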

To demonstrate the generator’s (lack of) quality I made a histogram of sorts with a 16 by 16 grid, one cell for each possible 8-bit number. I generated a large number of samples and plotted the grid. Here’s what I got:

RANDU histogram

Here’s what I get with a better random number generator:

randint histogram

I was looking for something bad, but I didn’t realize RANDU was that bad. The white stripes represent the generated values and the black stripes values that are never generated. So out of 256 possible 8-bit numbers, this generator only ever outputs 64 of them. (I used 33 as my seed. I might have gotten different vertical stripes if I had chosen a different seed, but I’d still get stripes.) Also, the frequencies of the values it does take on have suspiciously little variation.

You can see the pattern of values RANDU takes on by printing out the first few generated values in hex:

    0x63
    0x29
    0x7b
    0x71
    0x53
    0xf9
    0xeb
    0xc1

The last hex digit cycles 3, 9, b, 1, 3, 9, b, 1, … producing only four possible values.

We could prove the statements empirically demonstrated above by noting that

x_n = 65539^n x_0 mod 2^k

for k = 31, but also for smaller k. We could set k = 8 and prove that there are only 64 values, or set k = 4 and prove that there are only four final hex digits.
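
Since 65539 = 2^16 + 3, we have 65539 ≡ 3 (mod 2^k) for k ≤ 16, so the low k bits evolve as x_n ≡ 3^n x_0 (mod 2^k). The number of values they can take is therefore the multiplicative order of 3 mod 2^k, which a few lines of Python confirm:

    def order(a, m):
        """Multiplicative order of a mod m."""
        x, n = a % m, 1
        while x != 1:
            x = x * a % m
            n += 1
        return n

    print(order(3, 2**8))   # 64 -> only 64 possible low bytes
    print(order(3, 2**4))   # 4  -> the final hex digit cycles through 4 values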


Assumed technologies

I just had a client ship me a laptop. We never discussed what OS the computer would run. I haven’t opened the box yet, but I imagine it’s running Windows 10.

I’ve had clients assume I run Windows, but also others who assume I run Linux or Mac. I don’t recall anyone asking me whether I used a particular operating system.

When clients say “I’ll send you some code” without specifying the language, it’s usually Python. Maybe they’ve seen Python code I’ve written, but my impression is that they assume everyone knows Python.

Other tools some will assume everyone uses:

  • Bash
  • Git and GitHub
  • Skype
  • MS Office
  • Signal
  • Google apps
  • Adobe Photoshop and Illustrator
  • Markdown
  • Jupyter notebooks

When I was in college, nearly every computer I saw ran Unix or Mac. I had no idea that the vast majority of the world’s computers ran Windows. I was in a bubble. And like most people in a bubble, I didn’t know I was in a bubble. I wish I had seen something like this blog post.

Digital signatures with oil and vinegar

“Unbalanced oil and vinegar” is a colorful name for a cryptographic signature method. This post will give a high-level description of the method and explain where the name comes from.

The RSA encryption algorithm depends on the fact that computers can easily multiply enormous numbers, but they cannot efficiently factor the product of two enormous primes. Whenever you have something that’s easy to do but hard to undo, you might be able to make an encryption algorithm out of it.

The unbalanced oil and vinegar (UOV) digital signature algorithm is analogous to RSA in that it also depends on a kind of factoring. But UOV is based on the difficulty of factoring the composition of a linear operator and a nonlinear operator, not the product of two primes. One advantage of UOV over RSA is that UOV is quantum-resistant. That is, if large quantum computers become practical, UOV signatures will remain hard to forge (or so it is currently believed) whereas RSA signatures would be easy to forge.

Solving large systems of multivariate polynomial equations over finite fields is hard, provably NP-hard, unless there’s some special structure that makes things easier. Several proposed post-quantum digital signature algorithms are based on this, such as the LUOV variant on UOV.

The idea behind UOV is to create systems of equations that have a special structure, with some “oil” variables and some “vinegar” variables, so named because they do not mix, or rather mix in a very simple, convenient way. This special structure is kept secret, and is obscured by composition with an invertible linear operator. This operator acts like a blender, thoroughly mixing the oil and vinegar. The term “unbalanced” refers to the fact that the scheme is more secure if you do not have equal numbers of “oil” and “vinegar” variables.

Polynomials over finite fields. Polynomials over finite fields everywhere!

Someone wanting to sign a file with the UOV algorithm knows the oil-and-vinegar structure and produces a vector that is mapped to a specified value, inverting the composition of the linear operator and the polynomials. They can do this because they know the factorization into this special structure. Someone wanting to verify a UOV signature only knows the (apparently unstructured) composition. They just see a large system of multivariate polynomial equations. They can stick a signature in and verify that the output is what it’s supposed to be, but they couldn’t produce a signature because they can’t invert the system. [1]
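
To make this concrete, here is a toy Python sketch of just the central oil-and-vinegar map, with made-up illustrative parameters: 2 oil and 3 vinegar variables over GF(101). It shows why signing is easy for someone who knows the structure: once random vinegar values are fixed, the equations become linear in the oil variables. A real scheme also composes this map with the secret invertible linear operator, uses far larger parameters, and is nothing like this simple.

    import random

    p = 101          # toy field GF(p)
    V, O = 3, 2      # more vinegar than oil: "unbalanced"

    def random_central_map():
        """O quadratic polynomials with vinegar*vinegar and vinegar*oil
        terms, but no oil*oil terms -- the oil variables don't mix."""
        return [([[random.randrange(p) for _ in range(V)] for _ in range(V)],
                 [[random.randrange(p) for _ in range(O)] for _ in range(V)])
                for _ in range(O)]

    def evaluate(eqs, v, o):
        out = []
        for vv, vo in eqs:
            s  = sum(vv[i][j] * v[i] * v[j] for i in range(V) for j in range(V))
            s += sum(vo[i][k] * v[i] * o[k] for i in range(V) for k in range(O))
            out.append(s % p)
        return out

    def sign(eqs, target):
        """Find (v, o) mapping to target: fix random vinegar values,
        then solve the now-linear system for the oil variables."""
        while True:
            v = [random.randrange(p) for _ in range(V)]
            A = [[sum(vo[i][k] * v[i] for i in range(V)) % p for k in range(O)]
                 for _, vo in eqs]
            b = [(t - sum(vv[i][j] * v[i] * v[j]
                          for i in range(V) for j in range(V))) % p
                 for t, (vv, _) in zip(target, eqs)]
            det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
            if det == 0:
                continue                  # unlucky vinegar choice; try again
            inv = pow(det, p - 2, p)      # inverse mod p via Fermat
            o = [(A[1][1] * b[0] - A[0][1] * b[1]) * inv % p,
                 (A[0][0] * b[1] - A[1][0] * b[0]) * inv % p]
            return v, o

    eqs = random_central_map()
    v, o = sign(eqs, [42, 7])
    assert evaluate(eqs, v, o) == [42, 7]   # the "signature" verifies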

How large do these systems of polynomials need to be? On the order of a hundred equations and variables, though with more variables than equations. That’s not large compared to linear systems, where one can efficiently solve systems with millions of equations and variables. And the polynomials are only quadratic. So in one sense the systems are small. But it takes hundreds of kilobytes [2] to describe such a system, which makes the public keys for UOV large relative to currently popular digital signature algorithms such as ECDSA. The signatures produced by UOV are small, but the public keys are large.


[1] The system is not invertible in the sense of being one-to-one because it’s underdetermined. By inverting the system we mean producing any input that maps to the desired output. This solution is not generally unique.

[2] Representing m quadratic polynomials in n variables over a field of size b bits requires about bmn²/2 bits. So 80 quadratic polynomials in 120 variables over GF(2⁸) would require 8 × 80 × 120²/2 = 4,608,000 bits = 576 kilobytes. The LUOV variation on UOV mentioned above reduces the key sizes quite a bit, but it still requires larger public keys than ECDSA.

Why isn’t CPU time more valuable?

Here’s something I find puzzling: why isn’t CPU time more valuable?

I first thought about this when I was working for MD Anderson Cancer Center, maybe around 2002. Our research in adaptive clinical trial methods required bursts of CPU time. We might need hundreds of hours of CPU time for a simulation, then nothing while we figured out what to do next, then hundreds more hours to run a modification.

We were always looking for CPU resources, and we installed Condor to take advantage of idle PCs, something like the SETI@home or GIMPS projects. Then, sometimes, we had CPU power to spare. What could we do between simulations that was worthwhile but not urgent? We didn’t come up with anything.

Fast forward to 2019. You can rent CPU time from Amazon for about 2.5 cents per hour. To put it another way, renting a CPU is about 300 times cheaper per hour than hiring a minimum wage employee in the US ($7.25 per hour versus 2.5 cents). Surely it should be possible to think of something for a computer to do that produces more than 2.5 cents of value per CPU hour. But is it?

Well, there’s cryptocurrency mining. How profitable is that? The answer depends on many factors: which currency you’re mining and its value at the moment, what equipment you’re using, what you’re paying for electricity, etc. I did a quick search, and one person said he sees a 30 to 50% return on investment. I suspect that’s high, but we’ll suppose for the sake of argument there’s a 50% ROI [1]. At Amazon’s rate a CPU costs 60 cents per day to rent, so a 50% return means a profit of 30 cents per CPU per day.

Can we not think of anything for a CPU to do for a day that returns more than 30 cents of profit?! That’s mind-boggling for someone who can remember when access to CPU power was a bottleneck.

Sometimes computer time is very valuable. But the value of surplus computer time is negligible. I suppose it all has to do with bottlenecks. As soon as CPU time isn’t the bottleneck, its value plummets.

Update: According to the latest episode of the Security Now podcast, it has become unprofitable for hackers to steal CPU cycles in your browser for crypto mining, primarily because of a change in Monero. Even free cycles aren’t worth using for mining! Mining is only profitable on custom hardware.

***

[1] I imagine this person isn’t renting time from Amazon. He probably has his own hardware that he can run less expensively. But that means his profit margins are so thin that it would not be profitable to rent CPUs at 2.5 cents an hour.

Google Adiantum and the ChaCha RNG

The ChaCha cryptographic random number generator is in the news thanks to Google’s Adiantum project. I’ll discuss what’s going on, but first a little background.

Adiantum maidenhair fern

The name of the project comes from a genus of fern. More on that below as well.

One-time pads

The one-time pad is a provably unbreakable way to encrypt things. You create a sheet of random bits and give your counterpart an exact copy. Then when it comes time for you to send an encrypted message, you convert your message to a stream of bits, XOR your message with the random bits you exchanged previously, and send the result. The recipient then takes the XOR of the received message with the pad of random bits, and recovers the original message.
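
The XOR mechanics are simple enough to show in a few lines of Python. This is a minimal sketch using the standard secrets module for the random bits; securely exchanging the pad is the part the code can’t show:

    import secrets

    message = b"Attack at dawn"
    pad = secrets.token_bytes(len(message))    # random bits, as long as the message

    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, pad))
    assert recovered == message                # XOR with the pad twice cancels out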

This is called a one-time pad because it’s a pad of bits that you can only use one time. If you reuse a pad, it’s no longer unbreakable.

One-time pads are impractical for a couple reasons. First, it’s hard to generate truly random bits, especially in bulk. Second, exchanging the pads is almost as difficult as exchanging messages.

Stream ciphers

So here’s a bright idea: we’ll get around both of the problems with one-time pads by using pseudorandom bits rather than truly random bits! Then both parties can generate their own copy of the pad from a short shared key.

Many people have had this idea, and it’s not necessarily a bad one. It’s called a stream cipher. The problem is that most pseudorandom number generators are not up to the task. You need a cryptographically secure RNG, and most RNGs are far from secure. The ChaCha RNG, however, appears to be good enough to use in a stream cipher, given enough rounds of scrambling [1], and Google is using it for full disk encryption in Android devices.
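
As a sketch of the stream cipher idea (not Adiantum itself), here’s ChaCha encrypting and decrypting in Python, assuming the third-party cryptography package (pip install cryptography). Its ChaCha20 takes a 32-byte key and a 16-byte nonce, and decryption is the same keystream XOR as encryption:

    import secrets
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    key = secrets.token_bytes(32)     # short shared secret, unlike a bulky pad
    nonce = secrets.token_bytes(16)   # must never repeat for a given key

    def chacha(data):
        # Encrypting and decrypting are the same operation: XOR the
        # keystream derived from (key, nonce) into the data.
        c = Cipher(algorithms.ChaCha20(key, nonce), mode=None)
        return c.encryptor().update(data)

    ciphertext = chacha(b"Attack at dawn")
    assert chacha(ciphertext) == b"Attack at dawn"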

Full disk encryption

If you forget your password to your computer, you may not be able to access your data, but a thief still could by removing the hard drive and accessing it from another computer. That is, unless the disk is encrypted.

Full disk encryption on a laptop, such as BitLocker on Windows or FileVault on OSX, is usually implemented via AES encryption with hardware acceleration. If you don’t have special hardware for encryption, AES can be too slow.

Adiantum: ChaCha encryption on Android

On low-end devices, ChaCha encryption can be around 5x faster than AES. So Google is using ChaCha for Android devices, using what it calls Adiantum.

You can read the technical details in [2], and you can read more about the ChaCha random number generator in [3].

So where does the name Adiantum come from? Adiantum is a genus of maidenhair ferns, which in the Victorian language of flowers symbolize sincerity and discretion.


[1] Adiantum uses ChaCha with 12 rounds. TLS 1.3 uses ChaCha with 20 rounds.

[2] Adiantum: length-preserving encryption for entry-level processors by Google employees Paul Crowley and Eric Biggers.

[3] IRTF RFC 8439: ChaCha20 and Poly1305 for IETF Protocols

Hash function menagerie

Here’s an oversimplified survey of cryptographic hash functions: Everyone used to use MD5, now they use some variation on SHA.

There’s some truth to that. MD5 was very popular, and remains popular years after it was shown to be insecure. And now variations on SHA, such as SHA-1 and SHA-256, are commonly used. But there are many more cryptographic hash functions in common use.
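
For a quick look at the menagerie, you can ask Python’s hashlib which hash functions it supports:

    import hashlib

    # Algorithms guaranteed on every platform, plus whatever OpenSSL adds.
    print(sorted(hashlib.algorithms_guaranteed))
    print(sorted(hashlib.algorithms_available))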

Reversing an MD5 hash

The MD5 hashing algorithm was once considered a secure cryptographic hash, but those days are long gone [1]. It doesn’t take much computing power to create two documents with the same MD5 hash.

Hash functions are not reversible in general. MD5 is a 128-bit hash, so it maps any string, no matter how long, into 128 bits. Obviously if you hash all strings of length, say, 129 bits, some of them have to hash to the same value. (Another win for the pigeonhole principle.)

And yet in practice it may be possible to reverse a hash, given some context. In the context of short, weak passwords, it may be possible to determine what text was hashed to create a particular value. All it may take is a quick web search [2]. For example, let’s take an insecure password like “password” and run it through a bit of Python to compute its MD5 hash.

>>> import hashlib
>>> def md5_hex(s):
...     print(hashlib.md5(s.encode('utf-8')).hexdigest())
...
>>> md5_hex("password")
5f4dcc3b5aa765d61d8327deb882cf99

A Google search returns over 21,000 hits on 5f4dcc3b5aa765d61d8327deb882cf99, and the second one shows that it’s the hash of “password”.

If I try a slightly less naive password like “p@$$word” I still get several hits, indicating that the hash is part of a list of compromised passwords.

Not every hash of a short string can be reversed this way. If I hash my business phone number, for example, I get something that does not yet appear in Google searches. The hash could still be reversed easily, but it would take more than just a Google search.

See the next post for how salting can thwart web search attacks.


[1] MD5 was invented in 1992 and the first flaw was discovered in 1996. Experts started moving away from MD5 at that time. More flaws were discovered over time. In 2010 the CMU Software Engineering Institute declared that MD5 was “cryptographically broken and unsuitable for further use.” It’s still being used, though not as much. MD5 still makes a useful checksum, though it’s not cryptographically secure.

[2] The same would be true of a secure hash of an insecure password. For example, SHA-256 is better than MD5, but you could look up the SHA-256 hash values of terrible passwords too. But MD5 hashes are easier to search on. In my experiments, I got far fewer hits searching on SHA-256 hash values.

If you’re trying to reverse hash values on your own computer without a web search, the MD5 value would require much less computation than the SHA-256 value.