Base 32 and base 64 encoding

Math has a conventional way to represent numbers in bases larger than 10, and software development has a couple of variations on this theme that are only incidentally mathematical.

Math convention

By convention, math books typically represent numbers in bases larger than 10 by using letters as new digit symbols following 9. For example, base 16 would use 0, 1, 2, …, 9, A, B, C, D, E, and F as its “digits.” This works for bases up to 36; base 36 would use all the letters of the alphabet. There’s no firm convention for whether to use upper or lower case letters.
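
To make this concrete, here is a minimal Python sketch of the convention, writing a number in any base from 2 to 36 using 0–9 followed by A–Z as digit symbols. (The name to_base is hypothetical, not a standard library function.)

    def to_base(n, base):
        """Write a nonnegative integer n in the given base, 2 <= base <= 36."""
        digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        if n == 0:
            return "0"
        out = []
        while n > 0:
            n, r = divmod(n, base)
            out.append(digits[r])
        return "".join(reversed(out))

    print(to_base(255, 16))  # FF
    print(to_base(255, 36))  # 73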

Base 64 encoding

The common use for base 64 encoding isn’t to represent bits as numbers per se, but to have an efficient way to transmit bits in a context that requires text characters.

There are around 100 characters available on a keyboard, and 64 is the largest power of 2 less than 100 [1], so base 64 is the densest encoding using common characters with a base that is a power of 2.

Base 64 encoding does not follow the math convention of using the digits first and then adding more symbols; it’s free not to because there’s no intention of treating the output as numbers. Instead, the capital letters A through Z represent the numbers 0 through 25, the lower case letters a through z represent the numbers 26 through 51, and the digits 0 through 9 represent the numbers 52 through 61. The symbol + is used for 62 and / is used for 63.
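
As a sanity check, here is a short Python sketch that builds this alphabet and verifies one byte against the standard library’s base64 module.

    import base64
    from string import ascii_uppercase, ascii_lowercase, digits

    # The base 64 alphabet described above: A-Z, a-z, 0-9, +, /.
    alphabet = ascii_uppercase + ascii_lowercase + digits + "+/"
    assert len(alphabet) == 64

    # The byte 11111011 begins with the six bits 111110 = 62, which map to '+'.
    print(base64.b64encode(bytes([0b11111011])))  # b'+w=='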

Crockford’s base 32 encoding

Douglas Crockford proposed an interesting form of base 32 encoding. His encoding mostly follows the math convention: 0, 1, 2, …, 9, A, B, …, except he does not use the letters I, L, O, and U. This eliminates the possibility of confusing i, I, or l with 1, or confusing O with 0. Crockford had one more letter he could eliminate, and he chose U in order to avoid an “accidental obscenity.” [2]
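
As a quick sketch, his alphabet can be constructed in Python by dropping those four letters from the math convention’s digit set:

    from string import digits, ascii_uppercase

    # Crockford's base 32 alphabet: 0-9 and A-Z, minus I, L, O, and U.
    crockford = "".join(c for c in digits + ascii_uppercase if c not in "ILOU")
    assert len(crockford) == 32
    print(crockford)  # 0123456789ABCDEFGHJKMNPQRSTVWXYZ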

Crockford’s base 32 encoding is a compromise between efficiency and human legibility. A hexadecimal character carries 4 bits, a base 32 character carries 5, and a base 64 character carries 6. So base 32 is more efficient than hexadecimal, representing 25% more bits per character, and less efficient than base 64, representing 17% fewer bits per character, but it is more legible than base 64 because it eliminates commonly confused characters.

His encoding is also case insensitive. He recommends using only capital letters for output, but permitting upper or lower case letters in input. This is in the spirit of Postel’s law, also known as the robustness principle:

Be conservative in what you send, and liberal in what you accept.

See the next post for an explanation of Crockford’s checksum proposal.

A password generator

Here’s a Python script to generate passwords using Crockford’s base 32 encoding.

    # Requires the third-party base32_crockford package (pip install base32-crockford).
    from secrets import randbits
    from base32_crockford import encode

    def gen_pwd(numbits):
        """Print a password with numbits bits of entropy in Crockford's base 32."""
        print(encode(randbits(numbits)))

For example, gen_pwd(60) would create a 12-character password with 60 bits of entropy (occasionally shorter, since the encoding drops leading zeros), and this password would be free of commonly confused characters.
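
The character count follows from each base 32 character carrying 5 bits; a quick check:

    from math import ceil

    # Each base 32 character carries 5 bits, so b bits of entropy
    # need ceil(b / 5) characters.
    for b in (40, 60, 80):
        print(b, ceil(b / 5))  # 40 -> 8, 60 -> 12, 80 -> 16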


[1] We want to use powers of 2 because it’s easy to convert between base 2 and base 2^n: start at the right end and convert bits in groups of n. For example, to convert a binary string to hexadecimal (base 2^4 = 16), convert groups of four bits each to hexadecimal. So to convert the binary number 101111001 to hex, we break it into 1 0111 1001 and convert each piece to hex, with 1 -> 1, 0111 -> 7, and 1001 -> 9, to find 101111001 -> 179. If we used a base that is not a power of 2, the conversion would be more complicated and not so localized.
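
A small Python sketch of this grouping trick (bin_to_hex is a hypothetical helper): left-pad the bit string to a multiple of four bits, then convert each group independently.

    def bin_to_hex(bits):
        """Convert a binary string to hex by converting groups of four bits."""
        bits = bits.zfill(-(-len(bits) // 4) * 4)  # left-pad to a multiple of 4
        groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
        return "".join(f"{int(g, 2):X}" for g in groups)

    print(bin_to_hex("101111001"))  # 179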

[2] All the words on George Carlin’s infamous list include either an I or a U, and so none can result from Crockford’s base 32 encoding. If one were willing to risk accidental obscenities, it would be good to put U back in and remove S since the latter resembles 5, particularly in some fonts.

4 thoughts on “Base 32 and base 64 encoding”

  1. Radix 85 encoding is also used for similar purposes, primarily in cryptography applications, as the smallest radix that allows encoding 32 bits in 5 digits. The trade-offs relative to Base64 are that it is less portable across character sets (fewer 7- and 8-bit character sets include all the characters that are used) and it requires more complicated arithmetic. Obviously, both considerations are less significant now than they were decades ago when Base64 was standardized.

  2. I know this is a bit tangential, but footnote 2 is actually quite interesting. It seems unlikely to be a coincidence, because:
    1) I and U are certainly not the most common vowels in English;
    2) most obscenities that I can think of use the short sounds associated with these vowels (in fact, I can’t think of a single obscenity right now that uses a long vowel sound), and
    3) the short I and U sounds are phonetically similar (you don’t have to move your mouth or tongue much to go from one to the other).

    Does anyone know if there’s something psychological about these sounds that makes them conducive to use in obscenities? It’s certainly plausible that people who are in a state of mind to want to utter an obscenity are more prone to make certain sounds than others.

  3. The early vacuum tube computers at Manchester University used word lengths that were multiples of 5 bits. Five bits was the width of the teletype code of the day, so they could prepare punched-tape machine-code binaries by simply typing programs in their base 32 encoding. Here is the manual for the Manchester Mark II, written by Alan Turing in 1950; look at PDF page 6, page number 3 (scanned from the collection of Don Knuth).
    https://archive.computerhistory.org/resources/text/Knuth_Don_X4100/PDF_index/k-4-pdf/k-4-u2780-Manchester-Mark-I-manual.pdf

  4. Other confusions are possible, e.g. ‘5’ – ‘S’ and ‘B’ – ‘8’. In the late 90s I wrote a password generator for a UK Post Office project, trying to remove as many confusable characters as possible.
