Straddling checkerboard encryption

Introduction

Computers fundamentally changed cryptography, opening up new possibilities for making and breaking codes. At first it may not have been clear which side benefited most, but now it’s clear that computers gave more power to code makers than code breakers.

We now have cryptographic primitives that cannot be attacked more efficiently than by brute force, as far as we know. The weak link is how these primitives are implemented and combined, not the primitives themselves.

Before computers there was more of a cat and mouse game between encryption and cryptanalysis. Encryption schemes that were convenient to carry out by hand could usually be broken by hand eventually. But if you only needed secrecy briefly, a simple scheme might provide that secrecy for long enough. This post will look at one such scheme, the straddling checkerboard.

Checkerboards

Perhaps the most obvious way to conveniently turn letters into numbers is to arrange the letters into a 5 × 5 grid. This has to leave out one letter, and in practice this meant combining I and J. Or if you needed digits, you could use a 6 × 6 grid and put J back in. You’d scramble the alphabet in the grid according to some key, then encrypt each letter by its coordinates.

       12345
      +-----
     1|EBISP
     2|XWLVN
     3|AOYZQ
     4|MDCKH
     5|RTUFG

This is no better than a simple substitution cipher because someone intercepting a message encrypted this way would easily guess that pairs of digits represent letters. However, if you then permuted the digits with a transposition cipher, you’d have something more formidable. This is essentially what the ADFGX cipher did, and it stumped cryptanalysts for a while.

The straddling checkerboard is a variation on the method above. Letters would be arranged in a 3 × 10 grid rather than 5 × 5. Some letters would be encrypted as a single digit and some as a pair of digits.

       1234567890
      +----------
      |  EBISPXWL
     1|VNAOYZQMDC
     2|KHRTUFGJ./

In the example above, E would be encrypted as 3, N would be encrypted as 12, and so on. This is an instance of a prefix code. In order to be able to decode the digits unambiguously, no letter could be encoded as 1 or 2; these digits always signaled the beginning of a pair.
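
Here’s a minimal Python sketch of encoding and decoding with the grid above (the data structures and names are mine, just for illustration):

    # The straddling checkerboard from the example above. Letters on the
    # top row get a single digit; letters on rows 1 and 2 get two digits.
    top  = '  EBISPXWL'   # columns 1234567890, blanks under 1 and 2
    row1 = 'VNAOYZQMDC'
    row2 = 'KHRTUFGJ./'
    cols = '1234567890'

    table = {c: cols[i] for i, c in enumerate(top) if c != ' '}
    table.update({c: '1' + cols[i] for i, c in enumerate(row1)})
    table.update({c: '2' + cols[i] for i, c in enumerate(row2)})

    def encode(text):
        return ''.join(table[c] for c in text)

    def decode(digits):
        inverse = {v: k for k, v in table.items()}
        out, i = [], 0
        while i < len(digits):
            n = 2 if digits[i] in '12' else 1   # 1 and 2 always begin a pair
            out.append(inverse[digits[i:i + n]])
            i += n
        return ''.join(out)

    print(encode('EN'))    # 312
    print(decode('312'))   # EN

The decode function shows the prefix code property at work: the leading digit of each code word tells you how long that code word is.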

Prefix codes are often used in non-secret codes, such as country codes for telephone numbers. There are more examples of prefix codes in this post.

Because 1 and 2 could not be used to encode single letters, there were 28 slots to fill. These could be filled with other symbols, and in practice period and slash were added [1].

Efficiency

The straddling checkerboard gives a more efficient encoding than does the checkerboard since typically fewer digits will be required. If efficiency were the only concern, we’d put the eight most frequent letters on the top row, something like the following [2].

       1234567890
      +----------
      |  ETAOINSR
     1|BCDFGHJKLM
     2|PQUVWXYZ./

This would be more efficient but less secure since the arrangement of the letters would be more predictable.
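
As a rough check on the efficiency gain, here’s a sketch using approximate English letter frequencies (the percentages are round textbook values, not from this post):

    # Approximate frequencies (percent) of the eight most common letters,
    # the ones on the top row of the grid above.
    top_row_freq = {'E': 12.7, 'T': 9.1, 'A': 8.2, 'O': 7.5,
                    'I': 7.0, 'N': 6.7, 'S': 6.3, 'R': 6.0}

    p1 = sum(top_row_freq.values()) / 100    # probability of a one-digit letter
    print(1 * p1 + 2 * (1 - p1))             # 1.365 digits per letter

So typical text would need about 1.37 digits per letter, compared to 2 digits per letter for the 5 × 5 checkerboard.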

Security

The straddling checkerboard presents a bit of a challenge to the cryptanalyst since it’s not known a priori whether a digit is part of a pair (if the vertical coordinates are not always 1 and 2).

The straddling checkerboard didn’t offer much security even in its day. It would have been better if there had been some further processing done on the digits, such as the way the ADFGX cipher permuted its coordinates.

The message, interpreted as a number N, could have been further encrypted as aN + b where a and b were randomly chosen numbers that were part of the key. As far as I know, nothing like this was ever done. This would have provided more security but would also require more effort and increase the chance of introducing errors.
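
A sketch of what such a second stage might have looked like, with made-up key values (this is hypothetical; as noted above, nothing like it was actually done):

    # Hypothetical extra stage: treat the checkerboard output as a number N
    # and transmit C = a*N + b, where a and b are part of the key.
    a, b = 7919, 1234         # made-up key values
    N = int('312')            # digits from the checkerboard stage
    C = a * N + b
    assert (C - b) // a == N  # the receiver inverts the transformation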


[1] David Kahn. The Codebreakers. Chapter 18.

[2] You may have expected the last letter on the first row to be H, going by the printer’s order ETAOIN SHRDLU. Peter Norvig discovered a slightly different order of letter frequencies based on the Google corpus.

Base 64 encoding remainder problem

I’ve mentioned base 64 encoding a few times here, but I’ve left out a detail. This post fills in that detail.

Base 64 encoding comes up in multiple contexts in which you want to represent binary data in text form. I’ve mentioned base 64 encoding in the context of GnuPG ASCII armor. A more common application is MIME (Multipurpose Internet Mail Extensions) encoding.

Base 64 encoding uses 64 characters (A…Z, a…z, 0…9, +, and /) to represent six bits at a time.

In the previous post I showed how ASCII armor encoded a 91,272 byte JPEG image into a text file and how it could convert the text back into binary. The number of bytes in the file was a multiple of 3, which you can quickly confirm by casting out nines.

If the number of bytes in a file is not a multiple of three, the number of bits is not a multiple of six, and so we have to do something with the remainder.

For an example, let’s start with a file containing the bits

000000 000001 000010 000011 000100 000101 000110 000111

If we run gpg --enarmor on this file, we get

ABCDEFGH
=/u99

and some extra text for human consumption. The base 64 encoding is ABCDEFGH, the equal sign is a separator, and /u99 is a checksum.

If we delete the last 8 bits from our file and run ASCII armor again, we get

ABCDEFE=
=/IPL

The second equal sign separates the base 64 encoding from the checksum, but the first equal sign is something new.

If we chop eight more bits off the end of the file we get

ABCDEA==
=izh9

What’s going on here? In each case, the first 30 bits are being encoded as ABCDE. The remaining bits are

000101 000110 000111

When we cut off the last 8 bits we were left with

000101 0001

The bits 000101 are encoded as F, and the last four bits 0001 are padded to 000100 and encoded as E. The trailing equal sign is a signal that two bits were added as padding.

When we cut off 8 more bits we were left with

00

which was padded to 000000 and encoded as A. Then two equal signs were added to signal that four bits were added as padding.

So the rule is add two or four 0s at the end to make the number of bits a multiple of six. Then add an equal sign for each pair of bits added.
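
You can watch this rule in action with Python’s base64 module, which uses the same alphabet and padding convention. The six bytes below carry the 48 bits listed at the top of this post.

    import base64

    full = bytes.fromhex('001083105187')   # 000000 000001 ... 000111
    print(base64.b64encode(full))          # b'ABCDEFGH' -- no padding needed
    print(base64.b64encode(full[:5]))      # b'ABCDEFE=' -- two bits of padding
    print(base64.b64encode(full[:4]))      # b'ABCDEA==' -- four bits of padding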

Since file sizes are multiples of bytes, and a byte is 8 bits, the number of bits in a file is always even. This means the remainder when the number of bits is divided by 6 is 0, 2, or 4. So if we add padding, we only add two or four zero bits and never an odd number of bits.

ASCII armor: send encrypted data in plain text

GnuPG ASCII armor is a way to encode binary data as text and decode the text back into binary. It could be used with any binary data, but most often it is used with cryptographic data: public keys, encrypted messages, etc.

Secure communication over an insecure channel

When people ask whether some medium “supports encryption,” or more specifically whether the medium “supports end-to-end encryption,” they’re asking how convenient it is to use the medium with encrypted data. Any medium can convey encrypted text, but the process may be more or less manual.

You can use Gmail for end-to-end encryption, for example, though I’m sure Google would rather you not. Just encrypt the data on your computer, convert the output to ASCII, paste it into an email, and send. The receiver then copies the ASCII text, converts it to binary, and then decrypts it.

ASCII armor is a way of handling the conversion to and from ASCII, with some niceties such as a checksum and human-friendly formatting, as friendly as formatting can be under the circumstances.

Aside: Coding versus Encryption

There are two kinds of encoding in this context: secret and non-secret. Coding theory is often misunderstood because it is primarily concerned with non-secret encoding, such as error-correcting codes and transmission protocols. ASCII armor belongs to coding theory, not encryption, though it is usually deployed in service of encryption.

ASCII armor is not providing protection from prying eyes. It is providing a relatively convenient way to convey naturally binary data, including a checksum to detect errors in transmission.

How ASCII armor works

At its heart, ASCII armor does base-64 encoding. But there’s more to it than that. There are optional fields, formatting rules, and a checksum.

For an example, we’ll take the first 240 binary digits of π after the “decimal” point.

Here’s some Python code to write the bits to a file. We start with the bits in hexadecimal since that’s more compact. In hex notation, π = 3.243f6a8885a3… For more details, see this post on hexadecimal floating point.

import binascii

data = '243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E'
with open('pibits', 'wb') as f:
    f.write(binascii.unhexlify(data))

We can inspect pibits in a hex editor to verify that the bits are correct, say by running xxd.

Enarmor

We can convert our binary file to ASCII using gpg, which stands for GNU Privacy Guard (GnuPG), the GNU implementation of Pretty Good Privacy. Running

gpg --enarmor pibits

produces a file pibits.asc with the following contents.

-----BEGIN PGP ARMORED FILE-----
Comment: Use "gpg --dearmor" for unpacking

JD9qiIWjCNMTGYouA3BzRKQJOCIpnzHQCC76mOxO
=0G7A
-----END PGP ARMORED FILE-----

Dearmor

If we run

gpg --dearmor pibits.asc

we get a file pibits.asc.gpg identical to the original file pibits, which we could verify using diff.

Base 64 encoding

How does the string JD9… above correspond to bits?

A hexadecimal character represents 4 bits, and a base 64 character represents 6 bits. So every three hexadecimal characters correspond to 12 bits, or two base 64 characters.

The first three hexadecimal characters in our data, 243, correspond to the first two characters in our ASCII armor data, JD, as follows. First of all,

243₁₆ = 001001000011₂

and if we split the bits into groups of six we have 001001₂ = 9₁₀ and 000011₂ = 3₁₀. ASCII armor represents base-64 numbers the same way MIME encoding does: the numbers 0 through 63 are represented by

A, B, C, … Z, a, b, c, …, z, 0, 1, 2, …, 9, +, /

So 9 is encoded as J, and 3 is encoded as D.
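
Here’s the same calculation as a quick Python sketch:

    import base64

    bits = bin(0x243)[2:].zfill(12)                    # '001001000011'
    print(int(bits[:6], 2), int(bits[6:], 2))          # 9 3
    print(base64.b64encode(bytes.fromhex('243F6A')))   # b'JD9q'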

The 240 bits of π are represented by the 60 hexadecimal characters 243…C4E, or by the 40 base 64 characters JD…xO. Note that the last character is the letter O, not the decimal 0. (Base 58 avoids this problem by not using typographically similar symbols.)

In this example we had 240 bits, which is a multiple of 6. What if the number of bits is not a multiple of 6? See this post for an answer.

Checksum

After the 40 characters encoding our data come five more characters, =0G7A. The equal sign is a separator, and 0G7A is a base 64 encoding of a 24-bit CRC checksum. The details of the checksum, and C code for implementing it, are given in RFC 4880 Section 6.1.
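
Here’s a Python translation of the C code in that RFC (the function name crc24 is mine). If everything is consistent, the last line reproduces the checksum line in the armored file above.

    import base64

    def crc24(octets):
        # Constants CRC24_INIT and CRC24_POLY from RFC 4880 section 6.1
        crc = 0xB704CE
        for octet in octets:
            crc ^= octet << 16
            for _ in range(8):
                crc <<= 1
                if crc & 0x1000000:
                    crc ^= 0x1864CFB
        return crc & 0xFFFFFF

    data = bytes.fromhex('243F6A8885A308D313198A2E03707344'
                         'A4093822299F31D0082EFA98EC4E')
    print(base64.b64encode(crc24(data).to_bytes(3, 'big')))   # b'0G7A'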


USPS tracking numbers

I noticed the other day that an app on my phone assumed that a long number was a USPS tracking number. I wondered how it decided that and did a little research. I assumed there was some structure to the number, at least a check sum if not more than that.

This turned out to be a deep rabbit hole. USPS and other carriers all have a variety of tracking numbers, either for different kinds of packages or formats that have changed over time. My impression is that much of what is publicly known about these numbers has been reverse engineered, not extracted from documentation. I decided to turn around before I spent any more time looking into this.

Then I took a more empirical approach. What if I change a few digits? That should break the checksum. It seems my app believes a positive integer is a USPS tracking number if and only if the number has 22 digits.

That’s not very clever. Or maybe it is. It’s not very clever at the deepest level. The app apparently does not care about false positives. But that might be a clever choice at a higher level. Simply assuming 22-digit numbers are tracking numbers is a good bet, and this is robust to any changes in how groups of digits are interpreted.

Update: It looks like the software checks whether the first digit is a 9. I can change other digits of a tracking number, but not the first.


Checksum polynomials

A large class of checksum algorithms have the following pattern:

  1. Think of the bits in a file as the coefficients in a polynomial P(x).
  2. Divide P(x) by a fixed polynomial Q(x) mod 2 and keep the remainder.
  3. Report the remainder as a sequence of bits.

In practice there’s a little more to the algorithm than this, such as appending the length of the file, but the above pattern is at the heart of the algorithm.
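
Here’s a sketch of step 2 in Python, representing a polynomial over GF(2) as an integer whose bits are its coefficients:

    def poly_mod2_rem(p, q):
        # Remainder of p(x) divided by q(x), coefficients mod 2.
        # Bit i of the integer is the coefficient of x^i.
        dq = q.bit_length() - 1
        while p.bit_length() - 1 >= dq:
            p ^= q << (p.bit_length() - 1 - dq)
        return p

    print(poly_mod2_rem(0b1011, 0b101))   # (x^3 + x + 1) mod (x^2 + 1) = 1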

There’s a common misconception that the polynomial Q(x) is irreducible, i.e. cannot be factored. This may or may not be the case.

CRC-32

Perhaps the most common choice of Q is

Q(x) = x³² + x²⁶ + x²³ + x²² + x¹⁶ + x¹² + x¹¹ + x¹⁰ + x⁸ + x⁷ + x⁵ + x⁴ + x³ + x² + x + 1

This polynomial is used in the cksum utility and is part of numerous standards. It’s known as the CRC-32 polynomial, though there are other polynomials occasionally used in 32-bit implementations of the CRC algorithm. And it is far from irreducible, as the following Mathematica code shows. The command

    Factor[x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + 
           x^11 + x^10 + x^8 +  x^7 + x^5 + x^4 + 
           x^3 + x^2 + x + 1, Modulus -> 2]

shows that Q can be factored as

(1 + x)⁵ (1 + x + x³ + x⁴ + x⁶) (1 + x + x² + x⁵ + x⁶)
(1 + x + x⁴ + x⁶ + x⁷) (1 + x + x⁴ + x⁵ + x⁶ + x⁷ + x⁸)

(Mathematica displays polynomials in increasing order of terms.)

Note that the factorization is valid when done over the field with 2 elements, GF(2). Whether a polynomial can be factored, and what the factors are, depends on what field you do your arithmetic in. The polynomial Q(x) above is irreducible as a polynomial with rational coefficients. It can be factored working mod 3, for example, but it factors differently mod 3 than it factors mod 2. Here’s the factorization mod 3:

(1 + 2 x² + 2 x³ + x⁴ + x⁵) (2 + x + 2 x² + x³ + 2 x⁴ + x⁶ + x⁷)
(2 + x + x³ + 2 x⁷ + x⁸ + x⁹ + x¹⁰ + 2 x¹² + x¹³ + x¹⁵ + 2 x¹⁶ + x¹⁷ + x¹⁸ + x¹⁹ + x²⁰)

CRC-64

The polynomial

Q(x) = x⁶⁴ + x⁴ + x³ + x + 1

is known as CRC-64, and is part of several standards, including ISO 3309. This polynomial is irreducible mod 2, as the following Mathematica code confirms.

    IrreduciblePolynomialQ[x^64 + x^4 + x^3 + x + 1, Modulus -> 2]

The CRC algorithm uses this polynomial mod 2, but out of curiosity I checked whether it is irreducible in other contexts. The following code tests whether the polynomial is irreducible modulo the first 100 primes.

    Table[IrreduciblePolynomialQ[x^64 + x^4 + x^3 + x + 1, 
        Modulus -> Prime[n]], {n, 1, 100}]

It is irreducible mod p for p = 2, 233, or 383, but not for any other primes up to 541. It’s also irreducible over the rational numbers.

Since Q is irreducible mod 2, the checksum essentially views its input P(x) as a member of the finite field GF(2⁶⁴).


Luhn checksum algorithm

After writing the previous post on credit card numbers, I intended to link to a previous post that discussed credit card check sums. But I couldn’t find such a post. I’ve written about other kinds of checksums, such as the checksum scheme used in Vehicle Identification Numbers, but apparently I haven’t written about credit card checksums before.

The algorithm used by credit cards, and other identification numbers, is called the Luhn algorithm. I’ll lead up to the algorithm to explain its motivation, going through a thought process Luhn might have gone through in developing his algorithm.

NB: The intermediate steps are not the Luhn algorithm. If you’re just looking for Luhn’s algorithm, skip to the end.

Step 1

A simple way to create a check sum is to add up the digits of a number and tack on the sum mod 10 on the end. For example, this process would replace 1776 with 17761 because

1 + 7 + 7 + 6 = 21

and 21 mod 10 = 1.

This simple checksum will detect if any single digit has been changed. For example, if we receive 17791 we can compute the checksum of 1779 and tell that something is wrong because the last digit should be 4.

Step 2

A common kind of error is transposing consecutive digits, and the scheme above will not detect transpositions: all digits are treated the same way, so scrambling them doesn’t change the checksum.

Luhn’s idea was to double every other digit before taking the sum. This creates an asymmetry between the contribution of consecutive digits to the check sum and makes it possible to detect all transpositions.

Which digits to double

The method described above works whether you start from the left end or the right end, and whether you start doubling with the first or second digit you encounter. But to be compatible with the Luhn algorithm used in practice, start on the right end and double the last digit.

So going back to our little example, 1776 becomes 17764 because

2*6 + 7 + 2*7 + 1 = 34.

If we accidentally transpose the first two digits, 17764 becomes 71764. But the check sum of 7176 is 8, since 2*6 + 7 + 2*1 + 7 = 28. So 71768 would be valid but 71764 is not. We’ve detected an error.

The advantage of starting at the right end is that adding zeros to the front of the number doesn’t change the checksum. So, for example, 01776 becomes 017764.

Step 3

The next refinement is strange. Instead of simply doubling every other digit, we’ll double every other digit and replace the doubled value with its digit sum, reducing it mod 9. (Note the one edge case: doubling 9 gives 18, whose digit sum is 9, not 18 mod 9 = 0.)

So we could calculate the checksum for 1776 by first doubling every other digit

1776 -> 1, 14, 7, 12

then reducing the doubled digits mod 9, i.e. replacing 14 by 5 and 12 by 3,

1, 14, 7, 12 -> 1, 5, 7, 3

and then finally taking the last digit of the sum.

1 + 5 + 7 + 3 = 16 -> 6.

What does this extra complication buy us? Nothing as far as I can tell. It weakens the algorithm. The algorithm in Step 2 could detect if 09 were transposed to 90; the algorithm here in Step 3 cannot.

Update: Nathan points out in the comments that without the mod 9 step some changes to a single digit could go undetected, namely changing the second of a pair of doubled digits by 5. For example, 33 and 38 would have the same checksum in Step 2 but not in Step 3.

Step 4

The final step is to take the ten’s complement of the checksum computed above. This adds nothing to the effectiveness of the algorithm, but it’s what Luhn did.

Luhn algorithm in practice

Let’s describe the position of digits in a number by numbering them from the right end, starting with 0: the last digit is in position 0, the one to its left is in position 1, etc.

We can summarize the Luhn algorithm in the following pseudocode.

    sum = 0
    for d in digits:
        if d is in an even position:
            sum = sum + digit sum of 2*d
        else:
            sum = sum + d
    return 10 - sum mod 10

The last line isn’t quite correct. If sum mod 10 is 0, then this returns 10 when it should return 0. A clever way around this is to change the last line to

    return (sum*9) mod 10

which will always return a single digit.
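
Here is the pseudocode as runnable Python, a sketch with names of my own choosing. Note that the digit sum of a doubled digit agrees with reduction mod 9 except that a doubled 9 gives 18, whose digit sum is 9 rather than 0.

    def luhn_check_digit(payload):
        # Check digit for a string of digits, following the steps above.
        total = 0
        for i, ch in enumerate(reversed(payload)):
            d = int(ch)
            if i % 2 == 0:    # even positions, counting from the right
                d *= 2
                if d > 9:     # digit sum, e.g. 14 -> 5, 18 -> 9
                    d -= 9
            total += d
        return (total * 9) % 10   # the trick from the last line above

    assert luhn_check_digit('1776') == 4   # 1776 -> 17764, as above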

NB: In our examples above, the checksum was added to the end. Some credit cards do this, but others put the checksum somewhere in the interior of the number.

Afterword

The Luhn algorithm was developed in 1954, and it is still widely used even though much more sophisticated error correcting codes have been developed since then.

Tradeoff between alphabet size and word size

Literal alphabets

Natural language alphabets are all within an order of magnitude of the size of the Roman alphabet. The Hebrew alphabet has a few fewer letters and the Russian alphabet has a few more. The smallest alphabet I’m aware of is Hawaiian with 13 letters.

Syllabaries are larger than alphabets, but not an order of magnitude larger. For example, the Japanese kana syllabaries use 46 basic symbols each, and the Cherokee syllabary uses 85.

There is a tradeoff between alphabet size and word size that constrains alphabets to be roughly the same size in every culture.

Metaphorical alphabets

Computer science generalizes the term alphabet to mean any fixed collection of symbols and generalizes word to mean any finite sequence of symbols from an alphabet. So, for example, the digits 0 through 9 form an alphabet in this sense, and the words are numbers.

Numbers are sometimes written using smaller or larger alphabets. Binary notation uses an alphabet of only 2 symbols, but numbers require significantly more symbols to express than they do in decimal. Hexadecimal uses an alphabet of 16 symbols and is more compact than decimal notation.

Tradeoffs

Many design problems can be formulated as a tradeoff between alphabet length and word length. I was working on such a problem recently, which motivated this blog post.

Larger alphabets mean shorter words, but a linear reduction in word size requires an exponential increase in alphabet size. For example, we could write numbers using half as many digits if we thought in base 100 and had symbols for the numbers 0 through 99. Or we could think in base 1000, increasing the number of digit symbols by a factor of 100 but reducing the length of number representations only by a factor of 3.

Word length is inversely proportional to the logarithm of alphabet size, and so if we have some cost proportional to alphabet size a and another cost proportional to word length, the optimal alphabet size for the sum of these costs would minimize

C₁ a + C₂ / log a.

We could divide this objective function by C₁ and set w = C₂/C₁ to simplify the objective to minimizing

a + w / log a

Setting the derivative with respect to a to zero shows that the optimal alphabet size a is given by solving

a (log a)² = w.

As the weight w given to the cost of word size increases, the optimal alphabet size increases as well. A wide range of weights corresponds to a narrower range of alphabet sizes.
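
Here’s a sketch that solves this equation numerically and plots the optimal alphabet size as a function of w (the function names are mine; any root finder would do):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.optimize import brentq

    def optimal_size(w):
        # Solve a (log a)^2 = w for the alphabet size a
        return brentq(lambda a: a * np.log(a)**2 - w, 1.001, 1e6)

    w = np.linspace(2, 500, 300)
    plt.plot(w, [optimal_size(x) for x in w])
    plt.xlabel('weight w on word length cost')
    plt.ylabel('optimal alphabet size a')
    plt.show()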

This plot could be misleading. The graph is not showing the cost associated with various alphabet sizes but rather the optimal alphabet size for a given word size cost. The weight w is fixed in any particular application.

An alphabet of size 10 is optimal for a word length cost weight w of about 50. Using w = 50, the following plot shows how alphabet size cost and word size costs compare. A large increase in alphabet size cost only buys a small decrease in word size cost.

Related post: The base with the largest decibel

Morse code numbers and abbreviations

Numbers in Morse code seem a little strange. Here they are:

    |-------+-------|
    | Digit | Code  |
    |-------+-------|
    |     1 | .---- |
    |     2 | ..--- |
    |     3 | ...-- |
    |     4 | ....- |
    |     5 | ..... |
    |     6 | -.... |
    |     7 | --... |
    |     8 | ---.. |
    |     9 | ----. |
    |     0 | ----- |
    |-------+-------|

They’re fairly regular, but not quite. That’s why a couple years ago I thought it would be an interesting exercise to write terse code to encode and decode digits in Morse code. There’s exploitable regularity, but it’s irregular enough to make the exercise challenging.

Design

As with so many things, this scheme makes more sense than it seems at first. When you ask “Why didn’t they just …” there’s often a non-obvious answer.

The letters largely exhausted the possibilities of up to 4 dots and dashes. Some digits would have to take five symbols, and it makes sense that they would all take 5 symbols. But why the ones above? This scheme uses a lot of dashes, and dashes take three times longer to transmit than dots.

A more efficient scheme would be to use binary notation, with a dot for 0s and a dash for 1s. That way the leading symbol would always be a dot, and usually the second symbol would be a dot as well, at least when encoding the digits 0 through 9. As a bonus, you could use the same scheme to encode larger numbers in a single Morse code entity.

The problem with this scheme is that Morse code is intended for humans to decode by ear. A binary scheme would be hard to hear. The scheme actually used is easy to hear because you only change from dot to dash at most once. As Morse code entities get longer, the patterns get simpler. Punctuation marks take six or more dots and dashes, but they have simple patterns that are easy to hear.

Code golf

When I posed my coding exercise as a challenge, the winner was Carlos Luna-Mota with the following Python code.

    S="----.....-----"
    e=lambda x:S[9-x:14-x]
    d=lambda x:9-S.find(x)
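
The trick is that S contains every digit’s code as a window of five consecutive characters. Assuming the definitions above, a quick round-trip check:

    assert all(d(e(n)) == n for n in range(10))
    print(e(3), e(0))   # ...-- -----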

Honorable mention goes to Manuel Eberl with the following code. It only does decoding, but is quite clever and short.

    d=lambda c:hash(c+'BvS+')%10

It only works in Python 2 because it depends on the specific hashing algorithm used in earlier versions of Python.

Cut numbers

If you’re mixing letters and digits, digits have to be five symbols long. But if you know that characters have to be digits in some context, this opens up the possibility of shorter encodings.

The most common abbreviations are T for 0 and N for 9. For example, a signal report is always three digits, and someone may send 5NN rather than 599 because in that context it’s clear that the N’s represent 9s.

When T abbreviates 0 it might be sent as a “long dash,” slightly longer than a dash meant to represent a T. This is not strictly according to Hoyle, but it is sometimes done.

There are more abbreviations, so called cut numbers, though these are much less common and therefore less likely to be understood.

    |-------+-------+-----+--------+----|
    | Digit | Code  |  T1 | Abbrev | T2 |
    |-------+-------+-----+--------+----|
    |     1 | .---- |  17 | .-     |  5 |
    |     2 | ..--- |  15 | ..-    |  7 |
    |     3 | ...-- |  13 | ...-   |  9 |
    |     4 | ....- |  11 | ....-  | 11 |
    |     5 | ..... |   9 | .      |  1 |
    |     6 | -.... |  11 | -....  | 11 |
    |     7 | --... |  13 | -...   |  9 |
    |     8 | ---.. |  15 | -..    |  7 |
    |     9 | ----. |  17 | -.     |  5 |
    |     0 | ----- |  19 | -      |  3 |
    |-------+-------+-----+--------+----|
    | Total |       | 140 |        | 68 |
    |-------+-------+-----+--------+----|

The space between dots and dashes is equal to one dot, and the length of a dash is the length of three dots. So the time required to send a sequence of dots and dashes equals

2(# dots) + 4(# dashes) – 1

In the table above, T1 is the time to transmit a digit, in units of dots, without abbreviation, and T2 is the time with abbreviation. Both the maximum time and the average time are cut approximately in half. Of course that’s ideal transmission efficiency, not psychological efficiency. If the abbreviations are not understood on the receiving end and the receiver asks for numbers to be repeated, the shortcut turns into a longcut.
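
The T columns in the table are easy to verify with a short function (a quick sketch):

    def t(code):
        return 2*code.count('.') + 4*code.count('-') - 1

    codes = '.---- ..--- ...-- ....- ..... -.... --... ---.. ----. -----'.split()
    cuts  = '.- ..- ...- ....- . -.... -... -.. -. -'.split()
    print(sum(map(t, codes)), sum(map(t, cuts)))   # 140 68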


Frequency shift keying (FSK) spectrum

This post will look at encoding digital data as an analog signal using frequency shift keying (FSK), first directly and then with windowing. We’ll look at the spectrum of the encoded signal and show that basic FSK uses much less bandwidth than direct encoding, but more bandwidth than FSK with windowing.

Square waves

The most natural way to encode binary data as an analog signal would be to represent 0s and 1s by a sequence of pulses that take on the values 0 and 1.

A problem with this approach is that it would require a lot of bandwidth.

In theory a square wave has infinite bandwidth: its Fourier series has an infinite number of non-zero coefficients. In practice, the bandwidth of a signal is determined by how many Fourier coefficients it has above some threshold. The threshold would depend on context, but let’s say we ignore Fourier components with amplitude smaller than 0.001.

As I wrote about here, the nth Fourier sine series coefficient for a square wave equals 4/nπ for odd n. This means we would need on the order of 1,000 terms before the coefficients drop below our threshold.

Frequency shift keying

The rate of convergence of the Fourier series for a function f depends on the smoothness of f. Discontinuities, like the jump in a square wave, correspond to slow convergence, i.e. high bandwidth. We can save bandwidth by encoding our data with smoother functions.

So instead of jumping from 0 to 1, we’ll encode a 0 as a signal of one frequency and a 1 as a signal with another frequency. By changing the frequency after some whole number of periods, the resulting function will be continuous, and so will have smaller bandwidth.

Suppose we have a one second signal f(t) that is made of half a second of a 4 Hz signal and half a second of a 6 Hz signal, possibly encoding a 0 followed by a 1.

What would the Fourier coefficients look like? If we just had a 4 Hz sine wave, the Fourier series would have only one component: the signal itself at 4 Hz. If we just had a 6 Hz sine wave, the only Fourier component would again be the signal itself. There would be no sine components at other frequencies, and no cosine components.

But our signal patched together by concatenating 4 Hz and 6 Hz signals has non-zero cosine terms for every odd n, and these coefficients decay like O(1/n²).

Our Fourier series is

f(t) = 0.25 sin 8πt + 0.25 sin 12πt + 0.0303 cos 2πt + 0.1112 cos 6πt − 0.3151 cos 10πt + 0.1083 cos 14πt + …

We need to go out to 141 Hz before the coefficients drop below 0.001. That’s a lot of coefficients, but it’s an order of magnitude fewer coefficients than we’d need for a square wave.
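
You can sanity check these coefficients numerically. The sketch below samples the signal and prints the magnitudes of its Fourier coefficients, which should match the values quoted above.

    import numpy as np

    N = 1 << 14                 # samples over one second
    t = np.arange(N) / N
    f = np.where(t < 0.5, np.sin(8*np.pi*t), np.sin(12*np.pi*t))
    c = np.fft.rfft(f) / N      # coefficient of harmonic n at index n
    for n in range(1, 8):
        print(n, round(abs(c[n]), 4))   # 0.0303, 0, 0.1112, 0.25, 0.3151, 0.25, 0.1083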

Pulse shaping

Although our function f is continuous, it is not differentiable. The left-hand derivative at 1/2 is 8π and the right-hand derivative is 12π. If we could replace f with a related function that is differentiable at 1/2, presumably the signal would require less bandwidth.

We could do this by multiplying both halves of our signal by a windowing function. This is called pulse shaping because instead of a simple sine wave, we change the shape of the wave, tapering it at the ends.

Let’s use a cosine window because that’ll be easy; in practice you’d probably use a different window [1].

Now our function is differentiable at 1/2, and its Fourier series converges more quickly. Now we can disregard components above 40 Hz. With a smoother windowing function the windowed function would have more derivatives and we could disregard more of the high frequencies.


[1] This kind of window is called a cosine window because you multiply your signal by one lobe of a cosine function, with the peak in the middle of the signal. Since we’re doing this over [0, 1/2] and again over [1/2, 1], we’re actually multiplying by |sin 2πt|.

Varicode

Varicode is a way of encoding text and control characters into binary using code words of variable length. It was developed as part of the PSK31 protocol for digital communication over amateur radio.

In the spirit of Morse code, it uses short code words for common characters and longer code words for less common characters in the expectation that this will result in shorter encodings.

If you use variable length words, you’ve got to have some way of knowing when one word ends and the next begins. Varicode solves this problem by using only code words that begin and end with 1 and that do not contain two consecutive zeros. Then 00 is inserted between code words. Since 00 cannot appear inside a code word, these bits unambiguously mark the space between code words.

Synchronization

Varicode is self-synchronizing in the sense that if you jump into a stream of bits produced by Varicode, as soon as you see two zeros, you know you’re at a code word boundary and can start reading from there. You’ve lost any bits that were transmitted before you jumped in, but you can parse everything going forward.

ASCII doesn’t have this problem or this robustness. It doesn’t have the problem of determining code word boundaries because every code word is eight bits long. But then to read a stream of ASCII bits you need to know your position mod 8. If you jump into a stream of bits, you don’t know where the next code word boundary will be, though you may be able to infer it by trying 8 possibilities and seeing which produces the most intelligible results.

Regular expression

Another way to state the rules for forming Varicode code words is to say that 1 is a valid code (the code for a space, ASCII 0x20) and that you can form new codes by prefixing 1 or 10 to a code. In terms of regular expressions, this says a Varicode code word matches

    (1|10)*1

Fibonacci numbers

How many code words are there of length n? Well, there are two ways to make a code word that long: you either put a 1 in front of a code word of length n-1, or you put a 10 in front of a code word of length n-2. So the number of code words of length n equals the number of code words of length n-1 plus the number of code words of length n-2. That is, the number of code words satisfies the same recurrence relation as the Fibonacci numbers.

It’s easy to see that there’s only one code word of length one, and only one code word of length two, so the number of code words of each length satisfies the initial conditions for the Fibonacci sequence as well, and so they are the Fibonacci numbers.
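
A brute force check that counts code words by length (a quick sketch):

    from itertools import product

    def is_codeword(w):
        return w[0] == '1' and w[-1] == '1' and '00' not in w

    for n in range(1, 10):
        count = sum(is_codeword(''.join(p)) for p in product('01', repeat=n))
        print(n, count)   # 1 1 2 3 5 8 13 21 34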

Efficiency

Varicode encodes a lot more than lower case letters—it encodes most ASCII characters—and so it would take some work to discover the relative frequencies of the characters, and these frequencies would depend on where Varicode is used. As far as I know Varicode is used primarily (only?) in PSK31, and so the relevant frequencies would be those of messages sent over PSK31, not English more generally.

You can find the code words of each letter here.

To make things easier, let’s suppose messages are limited to lower case letters and spaces, and that the letters follow the same distribution as in English in general.

We can use the letter frequencies here, except these don’t take spaces into account. If we assume words are about 5 letters long, then the probability of a character being a space is 1/6 and the probabilities of the other characters conditional on not being a space are given by the table. This means the letter probabilities need to be multiplied by 5/6.

This gives us an expected length of 3.89 bits per letter, which is effectively 5.89 bits when you consider the 00 pattern we have to add between letters.

You could represent the 26 letters and a few more characters using 5 bits, but the result would not be self-synchronizing. One way of looking at this is to say that the compression provided by variable length encoding nearly pays for the overhead required to make the code self-synchronizing.
