Naive compression of genetic data

There are special compression algorithms for genetic sequence data, but I was curious how well simply zipping a text file would work.

I downloaded a 14 MB text file containing DNA sequence data from a fruit fly and compressed it as a zip file and as a 7z file. In both cases the result was about 3.5 MB, which is essentially no compression beyond the obvious two-bits-per-letter packing described below.

The file contains millions of A’s, C’s, G’s, and T’s and nothing more [0]. (I prepared the file by removing comments and line breaks.)

    AAAATCAATATGTTGCCATT…

There are only four possible characters, so each carries two bits of information [1], but is encoded in an 8-bit ASCII character. The most obvious encoding would pack four letters into a byte. That would compress a 14 MB text file down to a 3.5 MB binary file.
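
Here's a minimal sketch of that packing in Python, assuming the file really does contain nothing but the four letters (the update below explains why that assumption fails):

    # Pack four bases per byte: A=00, C=01, G=10, T=11.
    code = {"A": 0, "C": 1, "G": 2, "T": 3}

    def pack(seq):
        out = bytearray()
        for i in range(0, len(seq), 4):
            byte = 0
            for ch in seq[i:i+4]:
                byte = (byte << 2) | code[ch]
            out.append(byte)
        return bytes(out)

    print(len(pack("AAAATCAATATGTTGCCATT")))  # 20 bases pack into 5 bytes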

Here are the exact numbers.

|----------+--------------+-------|
| file     | size (bytes) | ratio |
|----------+--------------+-------|
| original |     14401122 | 1.000 |
| zip      |      3875361 | 3.716 |
| 7z       |      3419294 | 4.212 |
|----------+--------------+-------|

So the 7z format does a little better than simply packing groups of four letters into a byte, but the zip format does not.

There is repetition in genome sequences, but apparently generic compression software is unable to exploit this repetition. You can do much better with compression algorithms designed for genetic data.

Update

The plot thickens. Imran Haque kindly let me know that I overlooked the N’s in the data. These are placeholders for unresolved bases. You can’t simply encode each base using two bits because you need five states.

The number of N’s is small—at least in this example, though I imagine they would be more common in lower quality data—and so the calculation in footnote [1] is still approximately correct [2]. There are about two bits of information in each base, but this is on average. Shannon would say you can expect to compress your text file by at least 75%. But you can’t simply represent each base with two bits as I suggested because you need to make room for the possibility of a null base.

Note that the total information of a file is not the number of symbols times the information per symbol, unless all the symbols are independent. In the case of genetic data, the symbols are not independent, and so more compression is possible.
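
For what it's worth, here's a quick sketch that reproduces the entropy figure in footnote [1] from the stated letter frequencies.

    from math import log2

    # Letter frequencies from footnote [1]
    freqs = {"A": 0.2675, "C": 0.2367, "G": 0.2394, "T": 0.2558}
    entropy = -sum(p * log2(p) for p in freqs.values())
    print(entropy)  # about 1.998 bits per base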

***

[0] Or so I thought. See the Update section.

[1] The letter frequencies are not exactly equal, but close: 26.75% A, 23.67% C, 23.94% G, and 25.58% T. The Shannon entropy is 1.998, so essentially two bits.

[2] N’s account for 0.04% of the data. Accounting for the N’s increases the Shannon entropy to 2.002.

FM signal approximation

FM radio transmits a signal by perturbing (modulating) the frequency of a carrier wave. If the carrier has frequency ω and the signal has frequency q, then the FM signal is

cos(ωt + β cos(qt)).

To understand the modulated signal, it’s useful to write it as a sum of simple sines and cosines with no modulation. I wrote about how to do this exactly using Bessel functions. Today I’ll write about an approximation that’s easier to understand and work with, assuming the modulation index β is small.

Here’s the approximation:

cos(ωt + β cos(qt)) ≈ cos ωt – ½ β ( sin (ω + q)t + sin (ω – q)t ).

This says that to a good approximation, modulation adds two small sideband terms to the carrier, one at the sum frequency ω + q and one at the difference frequency ω – q.

To establish the approximation and see how the error depends on β, subtract the right side from the left and expand as a Taylor series in β. The first non-zero term in the series is

-½ cos(qt)² cos(ωt) β²

and so if β is small, the approximation error is very small. For example, if β = 0.1, then the approximation error is on the order of 0.005.

As an example, let ω = 10, q = 2, and β = 0.1. Then

cos(10t + 0.1 cos 2t) ≈ cos 10t – 0.05 ( sin 12t + sin 8t )

and the approximation error is plotted below.

As predicted, the amplitude of the error is around 0.005, while the amplitude of the FM signal is 1.
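
If you'd like to check the numbers without plotting anything, here's a rough sketch using NumPy that evaluates the maximum error on a grid of points.

    import numpy as np

    # FM example from above: omega = 10, q = 2, beta = 0.1
    omega, q, beta = 10, 2, 0.1
    t = np.linspace(0, 2 * np.pi, 100_000)

    fm = np.cos(omega * t + beta * np.cos(q * t))
    approx = np.cos(omega * t) - 0.5 * beta * (
        np.sin((omega + q) * t) + np.sin((omega - q) * t)
    )

    # Maximum error should be on the order of beta^2 / 2 = 0.005.
    print(np.max(np.abs(fm - approx)))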


Black Swan Gratification

Psychologists say that random rewards are more addictive than steady, predictable rewards. But I believe this only applies to relatively frequent feedback. If rewards are too infrequent, there’s no emotional connection between behavior and reward. The connection becomes more intellectual and less visceral as feedback becomes less frequent and less predictable.

Nassim Taleb distinguishes between delayed gratification and random gratification in his foreword to the book Safe Haven by Mark Spitznagel.

There are activities with remote payoff and no feedback that are ignored by the common crowd. … So what this idea is about isn’t delayed gratification but the ability to operate without gratification — or rather, with random gratification.

Choosing a course of action that is certain to pay off a year from now is opting for delayed gratification. Choosing something that is likely to pay off eventually, maybe two years from now, or maybe next week, is opting for random gratification.

Random rewards encourage an addictive response to frequent feedback, and discourage a rational response to infrequent feedback.

The solution is to act on principle, rather than respond like the rats in the psychological studies alluded to above.

 

Using cryptography broken 50 years ago

Old cryptography never dies. After a method is broken, its use declines, but never goes to zero.

And when I say “broken,” I do not mean no longer recommended, but broken to the point of being trivial to decrypt. I recently ran across an anecdote from World War I showing this is nothing new. The Vigenère cipher had been broken decades before the war broke out but was widely used anyway.

In this post I’ll explain what the Vigenère cipher is, give a little history of its rise and fall, and explain how it can be broken.

Vigenère cipher

A simple substitution cipher replaces one letter with another. For example, maybe you replace A with X, B with J, C with B, etc. Simple substitution ciphers are so easy to break that they’re included in pulp puzzle books.

The Vigenère cipher is a step up from simple substitution. It combines a key of length n with the clear text in such a way that you effectively have n different simple substitution ciphers. It’s not hard to break—I’ll explain how to break it below—but it’s less vulnerable than simple substitution. It makes it harder to spot high-frequency letters like E because not all E’s are encrypted the same way.

In its simplest form, Vigenère essentially adds a key and a message mod 26. For example, if your message is “Attack the bridge at dawn” and your key is “ossifrage,” the encryption would work like this.

    
    clear:  ATTACKTHEBRIDGEATDAWN
    key:    OSSIFRAGEOSSIFRAGEOSS
    cipher: OLLIHBTNIPJALLVAZHOOF

Starting from the left, O is the 14th letter of the alphabet (counting from 0), and so A is moved ahead 14 places to O. Since the clear text and the key are involved symmetrically, you could say that the letter O is moved ahead zero places, since A is the 0th letter of the alphabet.

The next two letters of the clear text and the key happen to be repeated [1]. T and S are the 19th and 18th letters of the alphabet, and 19 + 18 = 37, which is congruent to 11 mod 26. The 11th letter of the alphabet is L.

Here’s a little Python code to carry out the encryption.

    clear = "ATTACKTHEBRIDGEATDAWN"
    key   = "OSSIFRAGE"

    A = ord('A')
    for i in range(len(clear)):
        c = ord(clear[i]) - A
        k = ord(key[i % len(key)]) - A
        e = (c + k) % 26
        print(chr(e + A), end="")
    print()
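
Decryption is the same loop with subtraction in place of addition:

    cipher = "OLLIHBTNIPJALLVAZHOOF"
    key    = "OSSIFRAGE"

    A = ord('A')
    for i in range(len(cipher)):
        c = ord(cipher[i]) - A
        k = ord(key[i % len(key)]) - A
        d = (c - k) % 26                # subtract the key mod 26
        print(chr(d + A), end="")
    print()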

A more secure version of Vigenère moves the clear text through scrambled alphabets. The simplified version above is a particularly insecure special case. However, using scrambled alphabets (“polyalphabetic substitution”) doesn’t make the method that much stronger.

World War I

According to [2],

[The Vigenère cipher] was commonly regarded as unbreakable and was widely used up through World War I, even though the Prussian cryptographer Friedrich Wilhelm Kasiski had published a method for breaking it in 1863.

David Kahn [3] gives more detail about Kasiski’s book:

Die Geheimschriften und die Dechiffrir-kunst concentrates on answering the problem that had vexed cryptanalysts for more than 300 years: how to achieve a general solution for polyalphabetic ciphers with repeating keywords. … But the 95-page volume seems to have stirred almost no comment at the time.

So armies were depending on the security of Vigenère over 50 years after it had been broken. This was worse than using DES today, around 50 years after it came out. Weaknesses have been found in DES, and its 56-bit keys are too short to resist a brute force attack from contemporary computers. But DES has not been broken as thoroughly as Vigenère had been broken by the time of WWI.

Breaking Vigenère

So how would you go about attacking Vigenère? At first it seems hard to break. You can’t naively look at letter frequencies. In the example above, the clear text has four A’s, but they are encrypted three different ways.

However, Vigenère with a key of length n is just a set of n different simple substitution ciphers. You can chop the cipher text into n pieces and break each one separately.

How would you know the key length n? You could just try brute force. Try 1, then 2, then 3, etc. For example, to see whether the example above might have a key of length 2, you could analyze the letters in even positions and the letters in odd positions separately. If n isn’t too large (and in practice it often wasn’t large) brute force is efficient.

A more sophisticated approach would be to try different alignments and see which one results in statistical properties most consistent with English text. (Or French text if you suspect your clear text was in French, etc.) This was the motivation for William Friedman’s index of coincidence.
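
Here's a rough sketch of the index-of-coincidence approach: for each candidate key length n, split the ciphertext into n interleaved pieces and average their index of coincidence. The correct key length tends to give values near that of ordinary English (about 0.066), while wrong lengths give values closer to that of random text (about 0.038).

    from collections import Counter

    def ioc(text):
        # Probability that two letters drawn at random from the text match.
        counts = Counter(text)
        n = len(text)
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    def avg_ioc(ciphertext, n):
        # Split the ciphertext into n interleaved pieces, one per key position.
        pieces = [ciphertext[i::n] for i in range(n)]
        return sum(ioc(p) for p in pieces) / n

    # The toy ciphertext above is far too short for this to be meaningful;
    # in practice you'd want at least a few hundred letters.
    ciphertext = "OLLIHBTNIPJALLVAZHOOF"
    for n in range(1, 10):
        print(n, round(avg_ioc(ciphertext, n), 3))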

In hindsight this seems fairly obvious, but it was not obvious to anyone for three centuries before Kasiski, nor to many for decades after.

Why study obsolete cryptography?

Classical encryption methods are completely obsolete. The methods described in David Kahn’s book The Codebreakers can now be broken almost instantly. So why study old cryptography?

My interest in the subject was renewed recently because I had use for it in new projects. Not directly, though I have seen obsolete encryption in use, but indirectly. Some of the ideas from classical cryptography are relevant to searching for patterns in data, even though nothing was encrypted.

There are still lessons to learn from classical cryptography. For example, even a subtle lack of randomness in keys can be exploited. This was done during WWII, and flaws in random number generators are still causing security failures today.


[1] This is a little foreshadowing of what can go wrong even with non-repeating keys: If the clear text and the key are English prose, coincidences like this will happen, and they are an exploitable weakness.

[2] James S. Kraft and Lawrence C. Washington. An Introduction to Number Theory with Cryptography, 2nd edition. CRC Press.

[3] David Kahn. The Codebreakers: The Story of Secret Writing. Scribner, 1967.

Unicode and Emoji, or The Giant Pawn Mystery

I generally despise emoji, but I reluctantly learned a few things about them this morning.

My last couple of blog posts involved chess, and I sent out a couple of tweets using chess symbols. Along the way I ran into a mystery: sometimes the black pawn is much larger than the other chess symbols. I first noticed this in Excel. Then I noticed that it sometimes happens in the Twitter app and sometimes not, and likewise sometimes on the Twitter website and sometimes not.

For example, the following screen shot is from Safari on iOS.

screenshot of tweet with giant pawn

What’s going on? I explained in a footnote to this post, but I wanted to make this its own post to make it easier to find in the future.

In a nutshell, something in the software environment is deciding that eleven of the twelve chess characters are to be taken literally, but the character for the black pawn is to be interpreted as an emojus [1] representing chess. I’m not clear on whether this is happening in the font or in an app. Probably one, both, or neither, depending on circumstances.

I erroneously thought that emoji were all outside Unicode’s BMP (Basic Multilingual Plane) so as not to be confused with ordinary characters. Alas, that is not true.

Here is a full list of Unicode characters interpreted (by …?) as emoji. There are 210 emoji characters in the BMP and 380 outside, i.e. 210 below U+FFFF and 380 above.

***

[1] I know that “emoji” is a Japanese word, not a Latin word, but to my ear the singular of “emoji” should be “emojus.”

Queens on a donut

The eight queens problem is to place eight queens on a chessboard so that no queen attacks another. Because queens are allowed to move any number of spaces horizontally, vertically, or diagonally, this means no queen can be on the same row, column, or diagonal as any other queen.

For example, the following image gives one solution to the eight queens problem. (See the previous post for how I made the chessboard images in this post.)

The eight queens problem has been generalized in many ways, first by considering square chessboards of different sizes. The n-queens problem works on an n by n board, and it can be solved for all n greater than 3.

The problem I want to consider here is the toroidal n queens problem. We first curl up our chessboard into a cylinder, then join the ends of the cylinder together to make a torus (donut).

checkerboard torus

(I found the image above on TeX StackExchange.)

When we do turn our chessboard into a torus, the rows and columns don’t change. If no two queens were on the same row (column) on the flat chessboard, there still won’t be any two queens on the same row (column) in the toroidal chessboard.

But the diagonals do change, as the following image illustrates.

If we curl our chessboard by joining the left and right edges—no need to imagine the full torus for this example, a cylinder will do—then some of the diagonals merge. The queen on the bottom row would be on the same diagonal as the queen in the third row from the top.

George Pólya proved that the toroidal n queens problem has a solution if and only if n is not a multiple of 2 or 3. Not only does the particular solution to the eight queens problem above not extend to a torus, no solution to the eight queens problem extends to a torus, because 8 is divisible by 2.

The smallest non-trivial chessboard of size n with n not divisible by 2 or 3 is n = 5. The following solution to the five queens problem remains a solution if you identify the edges, making the board into a torus.
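
Here's a small sketch that checks one such toroidal solution for n = 5. It places the queen in row i at column 2i mod n, which isn't necessarily the arrangement pictured above, but it does work whenever n isn't divisible by 2 or 3.

    n = 5
    queens = [(i, 2 * i % n) for i in range(n)]  # row i, column 2i mod n

    # On a torus, queens attack along rows, columns, and wrap-around
    # diagonals, i.e. when (r + c) mod n or (r - c) mod n coincide.
    rows  = {r for r, c in queens}
    cols  = {c for r, c in queens}
    sums  = {(r + c) % n for r, c in queens}
    diffs = {(r - c) % n for r, c in queens}

    print(all(len(s) == n for s in (rows, cols, sums, diffs)))  # True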


How to make a chessboard in Excel

I needed to make an image of a chessboard for the next blog post, and I’m not very good at image editing, so I made one using Excel.

There are Unicode characters for chess pieces—white king is U+2654, etc.—and so you can make a chessboard out of (Unicode) text.

    ♔♕♖♗♘♙♚♛♜♝♞♟

I placed the character for each piece in a cell and changed the formatting for all the cells to be centered horizontally and vertically. The following is a screenshot of the Excel file.

Chessboard: screenshot from Excel file

The trickiest part is getting the cells to be square. By default Excel uses different units for height and width, with no apparent way to change the units. But if you switch the View to Page Layout, you can set row height and column width in inches or centimeters.

Another quirk is that you may have to experiment with the font to get all the pieces the same size. In some fonts, the black pawns were larger than everything else [1].

You can download my Excel file here. You could make any chessboard configuration with this file by copying and pasting characters where you want them.

When I tested the file in Libre Office, it worked, but I had to reset the row height to match the column width.
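
If you just need a plain-text chessboard rather than an Excel file, a few lines of Python will do. Here's a rough sketch that prints the starting position from the code points; as noted above, you'd still need a font that renders all twelve pieces at the same size.

    # White pieces are U+2654 through U+2659, black pieces U+265A through
    # U+265F, each in the order king, queen, rook, bishop, knight, pawn.
    WK, WQ, WR, WB, WN, WP = (chr(cp) for cp in range(0x2654, 0x265A))
    BK, BQ, BR, BB, BN, BP = (chr(cp) for cp in range(0x265A, 0x2660))

    board = [
        [BR, BN, BB, BQ, BK, BB, BN, BR],
        [BP] * 8,
        *[["·"] * 8 for _ in range(4)],
        [WP] * 8,
        [WR, WN, WB, WQ, WK, WB, WN, WR],
    ]
    for row in board:
        print(" ".join(row))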


[1] Thanks to a reply on twitter I now understand why black’s pawn is sometimes outsized. The black pawn is used as an emoji, a sort of synecdoche representing chess. That’s why some fonts treat U+265E, black knight, entirely differently than U+265F, black pawn. The latter is interpreted not as a peer of the other pieces, but as the chess emoji. See the chess pawn entry in Emojipedia.

Initial letter frequency

I needed to know the frequencies of letters at the beginning of words for a project. The overall frequency of letters, wherever they appear in a word, is well documented. Tables of initial letter frequencies are not so easy to find, so I did a little experiment.

I downloaded the Canterbury Corpus and looked at the frequency of initial letters in a couple of the files in the corpus. I first tried a different approach, then realized a shell one-liner [1] would be simpler and less error-prone.

    cat alice29.txt | lc | grep -o '\b[a-z]' | sort | uniq -c | sort -rn

This shows that the letters in descending order of frequency at the beginning of a word are t, a, s, …, j, x, z.
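
If you don't have the shell utilities handy, here's a rough Python equivalent. It treats any maximal run of letters as a word, which is close to, but not exactly, what the \b in the grep pattern does.

    import re
    from collections import Counter

    with open("alice29.txt") as f:
        words = re.findall(r"[a-z]+", f.read().lower())

    counts = Counter(w[0] for w in words)
    for letter, count in counts.most_common():
        print(count, letter)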

The file alice29.txt is the text of Alice’s Adventures in Wonderland. Then for comparison I ran the same script on another file, lcet10.txt, a lengthy report from a workshop on electronic texts.

This technical report’s initial letter frequencies order the alphabet t, a, o, …, y, z, x. So starting with the third letter, the two files have different initial letter frequencies.

I made the following plot to visualize how the frequencies differ. The horizontal axis is sorted by overall letter frequency (based on the Google corpus summarized here).

I expected the initial letter frequencies to differ from overall letter frequencies, but I did not expect the two corpora to differ.

Apparently initial letter frequencies vary more across corpora than overall letter frequencies. The following plot shows the overall letter frequencies for both corpora, with the horizontal axis again sorted by the frequency in the Google corpus.

Here the two corpora essentially agree with each other and with the Google corpus. The tech report ranks letters in essentially the same order as the Google corpus because the orange dashed line is mostly decreasing, though there is a curious kink in the graph at c.


[1] The lc function converts its input to lower case. See this post for how to install and use the function.

Lake Wobegon Dice

Garrison Keillor’s fictional Lake Wobegon is a place “where all the children are above average.” Donald Knuth alluded to this in his exercise regarding “Lake Wobegon Dice,” a set of dice where the roll of each die is (probably) above average.

Let A be a six-sided die with a 5 on one side and 3’s on the other sides.

Let B and C be six-sided dice with 1’s on two sides and 4’s on four sides.

Then the probabilities

Pr( A > (A + B + C)/3 )
Pr( B > (A + B + C)/3 )
Pr( C > (A + B + C)/3 )

are all greater than 1/2.

To see this, list out all the possible rolls where each die is above average. The triples show the value of A, B, and C in that order. We have A above average if the rolls are

(3, 1, 1)
(3, 1, 4)
(3, 4, 1)
(5, *, *)

These outcomes have probability 5/54, 10/54, 10/54, and 1/6. The total is 34/54 = 17/27 > 1/2.

We have B above average when the rolls are

(3, 4, *)
(5, 4, 1)

which have probability 5/9 and 1/27 for a total of 16/27 > 1/2. And C is the same as B.
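
These probabilities are easy to confirm by brute force, enumerating all 6³ = 216 equally likely face combinations. Here's a quick sketch.

    from fractions import Fraction
    from itertools import product

    A_faces = [5, 3, 3, 3, 3, 3]
    B_faces = [1, 1, 4, 4, 4, 4]
    C_faces = B_faces

    wins = [0, 0, 0]
    for a, b, c in product(A_faces, B_faces, C_faces):
        avg = Fraction(a + b + c, 3)
        wins[0] += a > avg
        wins[1] += b > avg
        wins[2] += c > avg

    print([Fraction(w, 6**3) for w in wins])  # 17/27, 16/27, 16/27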

It seems paradoxical for three different dice to each have a probability better than 1/2 of being above average. The resolution is that the three events

A > (A + B + C)/3
B > (A + B + C)/3
C > (A + B + C)/3

are not exclusive. For example, in the roll (3, 4, 1) both A and B are above the average of 8/3.

I ran across this in The Art of Computer Programming, Volume 4, Pre-fascicle 5A, Exercise 3. Fascicle 5 has since been published, but I don’t have a copy. I don’t know whether this exercise made it into the published version. If it did, the page number may have changed.


Index of coincidence

Index of coincidence is a statistic developed by William Friedman for use in cryptanalysis. It measures how unevenly symbols are distributed in a message.

It’s a kind of signature that could be used, for example, to infer the language of a text, even if the text has been encrypted with a simple substitution cipher. It also has more sophisticated uses, such as inferring key length in text that has been encrypted with a Vigenère cipher.

If a discrete random variable has n possible values, each occurring with probability pi, then its index of coincidence is

I = \sum_{i=1}^n p_i^2

In cryptanalysis, the probabilities are the letter frequencies in the language of the message. The sum above is often multiplied by some constant, but this makes no difference in application because you would compare index of coincidence values, not interpret the index in the abstract.

Often the sum above is multiplied by n. This amounts to dividing the sum above by the corresponding sum for a uniform distribution.

Index of coincidence is occasionally used in applications, though not in cryptography anymore.

Example

The letter frequencies in Google’s English language corpus are given here. In this corpus the probabilities range from 0.1249 for e down to 0.0009 for z. The index of coincidence would be

0.1249² + 0.0928² + … + 0.0009² = 0.06612

You’re more likely to see this value multiplied by 26, which gives 1.719.
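
Here's a rough sketch of how you might estimate the index of coincidence from a sample of text rather than from published letter frequencies. It uses the sample estimate Σ c(c − 1) / (N(N − 1)), which approaches Σ p_i² for long texts.

    from collections import Counter

    def index_of_coincidence(text):
        # Count letters only, ignoring case.
        counts = Counter(ch for ch in text.upper() if ch.isalpha())
        n = sum(counts.values())
        # Probability that two letters drawn at random from the
        # sample (without replacement) are the same letter.
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    # A long sample of ordinary English should come out near 0.066;
    # this toy input is far too short to be representative.
    print(index_of_coincidence("Attack the bridge at dawn"))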

Relation to entropy

Index of coincidence seems a lot like entropy, and in fact it is a simple transformation of an entropy, though not Shannon entropy. Index of coincidence is closely related to Rényi entropy.

Rényi entropy of order α is defined for a random variable X to be

H_\alpha(X) = \frac{1}{1 - \alpha} \log_2 \left(\sum_{i=1}^n p_i^\alpha \right)

and so Rényi entropy of order 2 is the negative logarithm (base 2) of the index of coincidence. That is, if H is the Rényi entropy of order 2 and I is the index of coincidence, then

\begin{align*} H &= -\log_2 I \\ I &= 2^{-H} \end{align*}

Index of coincidence and Rényi entropy are sometimes defined with multiplicative constants that would slightly change the equations above.

Since 2^(-x) is a decreasing function, increasing index of coincidence corresponds to decreasing Rényi entropy. Sorting in increasing order by index of coincidence is the same as sorting in decreasing order by Rényi entropy.
