Connecting powers of two and decibels

Colin Wright pointed out a pattern in my previous post that I hadn’t seen before. He wrote about it here a couple years ago.

Start with the powers of 2 from the top of the post:

2¹ = 2
2² = 4
2³ = 8
2⁴ = 16
2⁵ = 32
2⁶ = 64
2⁷ = 128
2⁸ = 256
2⁹ = 512

and sort the numbers on the right hand side in lexical order, i.e. sort them as you’d sort words. This is almost never what you want to do, but here it is [1].

128
16
2
256
32
4
512
64
8

Next, add a decimal point after the first digit in each number.

1.28
1.6
2
2.56
3.2
4
5.12
6.4
8

Now compare the decibel values listed at the bottom of the previous post.

1.25
1.6
2
2.5
3.2
4
5
6.3
8

Five of the numbers are the same and the remaining four are close. And where the numbers differ, the exact decibel value lies between the two approximate values.

Here’s a table comparing the exact value rounded to three decimals, the approximation given before, and the value obtained by sorting powers of 2 and moving the decimal point.

    |---+-----------+--------+-------|
    | n | 10^(n/10) | approx | power |
    |---+-----------+--------+-------|
    | 1 |     1.259 |   1.25 |  1.28 |
    | 2 |     1.585 |   1.60 |  1.60 |
    | 3 |     1.995 |   2.00 |  2.00 |
    | 4 |     2.512 |   2.50 |  2.56 |
    | 5 |     3.162 |   3.20 |  3.20 |
    | 6 |     3.981 |   4.00 |  4.00 |
    | 7 |     5.012 |   5.00 |  5.12 |
    | 8 |     6.310 |   6.30 |  6.40 |
    | 9 |     7.943 |   8.00 |  8.00 |
    |---+-----------+--------+-------|

So the numbers at the top and bottom of my list are practically the same, but in a different order.
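
Here’s a quick Python sketch, my own rather than anything from the original post, that reproduces the comparison: sort the powers of 2 as strings, move the decimal point after the first digit, and print the result next to the exact decibel ratio.

    powers = [2**n for n in range(1, 10)]    # 2, 4, 8, ..., 512
    lexical = sorted(powers, key=str)        # sort as strings: 128, 16, 2, ...

    for n, p in enumerate(lexical, start=1):
        shifted = p / 10**(len(str(p)) - 1)  # put the decimal point after the first digit
        exact = 10**(n/10)                   # exact value of n dB as a ratio
        print(f"{n} dB: exact {exact:.3f}, from powers of 2: {shifted}")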

Related post: 2s, 5s, and decibels.

[1] One reason I use ISO dates (YYYY-MM-DD) in my personal work is that lexical order equals chronological order. Otherwise you can get weird things like December coming before March because D comes before M or because 1 (as in 12) comes before 3. Using year-month-day and padding days and months with zeros as needed eliminates this problem.
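
As a tiny illustration (my own example), sorting ISO date strings as plain text gives chronological order, while sorting dates written with month names does not:

    iso = ["2023-12-01", "2024-03-15", "2023-03-07"]
    print(sorted(iso))    # ['2023-03-07', '2023-12-01', '2024-03-15'] -- chronological

    names = ["December 1, 2023", "March 15, 2024", "March 7, 2023"]
    print(sorted(names))  # December sorts before March, and March 15, 2024 before March 7, 2023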

100 digits worth memorizing

I was thinking today about how people memorize many digits of π, and how it would be much more practical to memorize a modest collection of numbers to low precision.

So suppose instead of memorizing 100 digits of π, you memorized 100 digits of other numbers. What might those numbers be? I decided to take a shot at it. I exclude things that are common knowledge, like multiplication tables up to 12 and familiar constants like the number of hours in a day.

There’s some ambiguity over what constitutes a digit. For example, if you say the speed of light is 300,000 km/s, is that one digit? Five digits? My first thought was to count it as one digit, but then what about Avogadro’s number, 6×10²³? I decided to write numbers in scientific notation, so the speed of light is 2 digits (3e5 km/s) and Avogadro’s number is 3 digits (6e23).

Here are 40 numbers worth remembering, with a total of 100 digits.

Powers of 2

2³ = 8
2⁴ = 16
2⁵ = 32
2⁶ = 64
2⁷ = 128
2⁸ = 256
2⁹ = 512
2¹⁰ = 1024

Squares

13² = 169
14² = 196
15² = 225

Probability

P(|Z| < 1) ≈ 0.68
P(|Z| < 2) ≈ 0.95

Here Z is a standard normal random variable.
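
If you want to check these two figures (and get more digits than are worth memorizing), here’s a quick sketch using SciPy:

    from scipy.stats import norm

    # P(|Z| < k) for a standard normal random variable Z
    for k in [1, 2]:
        print(k, norm.cdf(k) - norm.cdf(-k))   # about 0.6827 and 0.9545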

Music

Middle C ≈ 262 Hz
2^(7/12) ≈ 1.5

The second fact says that seven half steps equals one (Pythagorean) fifth.
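
Both facts are easy to check in Python, assuming A440 tuning and equal temperament (middle C is nine half steps below A4):

    A4 = 440                    # assumes concert pitch A4 = 440 Hz
    middle_C = A4 * 2**(-9/12)  # middle C is nine half steps below A4
    print(middle_C)             # about 261.6 Hz
    print(2**(7/12))            # about 1.498, close to the 3:2 ratio of a fifth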

Mathematical constants

π ≈ 3.14
√2 ≈ 1.414
1/√2 ≈ 0.707
φ ≈ 1.618
logₑ 10 ≈ 2.3
log₁₀ e ≈ 0.4343
γ ≈ 0.577
e ≈ 2.718

Here φ is the golden ratio and γ is the Euler-Mascheroni constant.

I included √2 and 1/√2 because both come up so often.

Similarly, logₑ 10 and log₁₀ e are reciprocals, but it’s convenient to know both.

The number of significant figures above is intentionally inconsistent. It’s at least as easy to remember √2 as 1.414 as it is to remember 1.41, and maybe easier. Similarly, if you’re going to memorize that log₁₀ e is 0.43, you might as well memorize 0.4343. Buy two, get two free.

Each constant is truncated just before a digit less than 5, so the figures shown are both correct and correctly rounded. (The one exception is log₁₀ e, where 0.4343 is rounded up from 0.43429….) For φ and logₑ 10 the next digit is a zero, so you get an implicit extra digit of precision.

The requirement that truncation = rounding means that you have to truncate e at either 2.7 or 2.718. If you’re going to memorize the latter, you could memorize six more digits with little extra effort since these digits are repetitive:

e ≈ 2.7 1828 1828
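
Here’s a quick sketch for checking the constants above against Python’s math module and NumPy, which exposes the Euler-Mascheroni constant as numpy.euler_gamma:

    import math
    import numpy as np

    print(math.pi)                # 3.14159...
    print(math.sqrt(2))           # 1.41421...
    print(1 / math.sqrt(2))       # 0.70710...
    print((1 + math.sqrt(5)) / 2) # golden ratio phi, 1.61803...
    print(math.log(10))           # natural log of 10, 2.30258...
    print(math.log10(math.e))     # 0.43429...
    print(np.euler_gamma)         # Euler-Mascheroni constant, 0.57721...
    print(math.e)                 # 2.71828...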

Measurements and unit conversion

c = 3e8 m/s
g = 9.8 m/s²
N_A = 6e23
Earth circumference = 4e7 m
1 AU = 1.5e8 km
1 inch = 2.54 cm

Floating point

Maximum double = 1.8e308
Epsilon = 2e-16

These numbers could change from system to system, but they rarely do. See Anatomy of a floating point number.
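
In Python you can read both values off of sys.float_info rather than memorizing more digits; the rounded figures above are the part worth keeping in your head:

    import sys

    print(sys.float_info.max)      # 1.7976931348623157e+308, about 1.8e308
    print(sys.float_info.epsilon)  # 2.220446049250313e-16, about 2e-16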

Decibels

1 dB = 10^0.1 ≈ 1.25
2 dB = 10^0.2 ≈ 1.6
3 dB = 10^0.3 ≈ 2
4 dB = 10^0.4 ≈ 2.5
5 dB = 10^0.5 ≈ 3.2
6 dB = 10^0.6 ≈ 4
7 dB = 10^0.7 ≈ 5
8 dB = 10^0.8 ≈ 6.3
9 dB = 10^0.9 ≈ 8

These numbers are handy, even if you don’t work with decibels per se.

Update: The next post points out a remarkable pattern between the first and last sets of numbers in this post.

Morse code in musical notation

Maybe this has been done before, but I haven’t seen it: Morse code in musical notation.

Here’s the Morse code alphabet, one letter per measure; in practice there would be less space between letters [1]. A dash is supposed to be three times as long as a dot, so a dot is a sixteenth note and a dash is a dotted eighth note.

Morse code is often sent at a pitch between 600 and 800 Hz. I picked the E at 660 Hz (E5) because it’s in that range.

Rhythm

Officially a dash is three times as long as a dot. But there’s also a space equal to the length of a dot between parts of a letter. So the sheet music above would be more accurate if you imagined all the sixteenth notes are staccato and the dotted eighth notes are really eighth notes followed by a sixteenth rest.

This doesn’t make much difference because individual operators have varying “fists” (personal styles of sending Morse code) and won’t exactly follow the official length and spacing rules.
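
Still, the official timing is easy to write down. Here’s a small sketch of my own that expands a letter into tone/silence durations using those rules (dot = 1 unit, dash = 3 units, a 1-unit gap between the elements of a letter):

    def letter_timing(code, unit=1):
        """Expand a Morse code string like '.-' into (is_tone, duration) pairs."""
        out = []
        for i, symbol in enumerate(code):
            out.append((True, unit if symbol == "." else 3 * unit))
            if i < len(code) - 1:
                out.append((False, unit))  # one-unit gap between elements of a letter
        return out

    print(letter_timing(".-"))  # A: [(True, 1), (False, 1), (True, 3)]

With a unit of 50 milliseconds this lines up with the 20 wpm figure quoted in the Tempo section below.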

You could rewrite the music above as follows, but it’s all an approximation.

Tempo

According to Wikipedia, “the dit length at 20 words per minute is 50 milliseconds.” So if a sixteenth note has a duration of 50 milliseconds, this would mean five quarter notes per second, or 300 beats per minute. But according to this video, the shortest duration people can distinguish is about 50 milliseconds.

That would imply that copying Morse code at 20 wpm is pushing the limits of human hearing. But copying at 20 wpm is common. Some people can copy Morse code at 50 words per minute or more, but at that speed they’re not hearing individual dits and dahs. An H, for example, four dits in a row, sounds like a single rough sound. In fact, they’re not really hearing letters at all but recognizing the shape of words.

How the image was made

I made the image above with LaTeX and Lilypond.

Adding the letters above each measure was kind of a hack. I used rehearsal markings to label the measures, but there was one problem: the software skips from letter H to letter J. That meant the labels for I and all subsequent letters were one letter ahead of where they should be, and the final letter Z was labeled AA. I tried several tricks, but Lilypond steadfastly refused to label a measure with ‘I’, even though I’ve seen such a label in the documentation.

My way around this was to make it label two consecutive measures with H, then in image editing software I turned the second H into an I. No doubt there’s a better way, but this worked.

I may play around with this and try to improve it a bit. If you have any suggestions, particularly related to Lilypond, please let me know.

[1] You could think of the musical score above as a sort of transcription of the Farnsworth method of teaching Morse code. Students learn the letters at full speed, but with extra space between the letters at first. The faster speed discourages consciously counting the dits and dahs, forcing the student to listen to the overall rhythm of the letters.

Find log normal parameters for given mean and variance

Earlier today I needed to solve for log normal parameters that yield a given mean and variance. I’m going to save the calculation here in case I need it in the future or in case a reader needs it. The derivation is simple, but in the heat of the moment I’d rather look it up and keep going with my train of thought.

NB: The parameters μ and σ² of a log normal distribution are not the mean and variance of the distribution; they are the mean and variance of its log.

If m is the mean and v is the variance then

\begin{align*} m &= \exp(\mu + \sigma^2/2) \\ v &= (\exp(\sigma^2) - 1) \exp(2\mu + \sigma^2) \end{align*}

Notice that m² = exp(2μ + σ²), which is the second factor in the expression for v.

Then

\frac{v}{m^2} = \exp(\sigma^2) -1

and so

\sigma^2 = \log\left(\frac{v}{m^2} + 1 \right)

and once you have σ² you can find μ by

\mu = \log m - \sigma^2/2

Here’s Python code to implement the above.

    from numpy import log

    def solve_for_log_normal_parameters(mean, variance):
        # mu and sigma2 are the mean and variance of the log of the distribution
        sigma2 = log(variance/mean**2 + 1)
        mu = log(mean) - sigma2/2
        return (mu, sigma2)

And here’s a little code to test the function above.

    from numpy import exp
    from scipy.stats import lognorm

    mean = 3.4
    variance = 5.6

    mu, sigma2 = solve_for_log_normal_parameters(mean, variance)

    # scipy's lognorm takes s = sigma and scale = exp(mu)
    X = lognorm(scale=exp(mu), s=sigma2**0.5)
    assert(abs(mean - X.mean()) < 1e-10)
    assert(abs(variance - X.var()) < 1e-10)

Q codes in Seveneves

The first time I heard of Q codes was when reading the novel Seveneves by Neal Stephenson. These are three-letter abbreviations used in Morse code that all begin with Q.

Since Q is always followed by U in native English words, Q can be used to begin a sort of escape sequence [1].

There are dozens of Q codes used in amateur radio [2], and more used in other contexts, but there are only 10 Q codes used in Seveneves [3]. All begin with Q, followed by R, S, or T.

    Tree[Q, {Tree[R, {A, K, N, S, T}], Tree[S, {B, L, O}], Tree[T, {H, X}]}]

Each Q code can be used both as a question and as an answer or statement. For example, QRS can mean “Would you like me to slow down?” or “Please slow down.” I’ll just give the interrogative forms below.

Here are the 10 codes that appear in Stephenson’s novel.

QRA: What is your call sign?
QRK: Is my signal intelligible?
QRN: Is static a problem?
QRS: Should I slow down?
QRT: Should I stop sending?
QSB: Is my signal fading?
QSL: Are you still there?
QSO: Could you communicate with …?
QTH: Where are you?
QTX: Will you keep your station open for talking with me?

[1] Some Q codes have a U as the second letter. I don’t know why—there are plenty of unused TLAs that begin with Q—but it is what it is.

[2] You can find a list here.

[3] There is one non-standard code in the novel: QET for “not on planet Earth.”

What use is mental math today?

Now that most people are carrying around a powerful computer in their pocket, what use is it to be able to do math in your head?

Here’s something I’ve noticed lately: being able to do quick approximations in mid-conversation is a superpower.

Zoom call

When I’m on Zoom with a client, I can’t say “Excuse me a second. Something you said gave me an idea, and I’d like to pull out my calculator app.” Instead, I can say things like “That would require collecting four times as much data. Are you OK with that?”

There’s no advantage to being able to do calculations to six decimal places on the spot like Mr. Spock, and I can’t do that anyway. But being able to come up with one significant figure or even an order-of-magnitude approximation quickly keeps the conversation flowing.

I have never had a client say something like “Could you be more precise? You said between 10 and 15, and our project is only worth doing if the answer is more than 13.2.” If they did say something like that, I’d say “I will look at this more carefully offline and get back to you with a more precise answer.”

I’m combining two closely related but separate skills here. One is the ability to do simple calculations quickly. The other is knowing what to calculate, i.e. how to work so-called Fermi problems. These problems are named after Enrico Fermi, someone who was known for being able to make rough estimates with little or no data.

A famous example of a Fermi problem is “How many piano tuners are there in New York?” I don’t know whether this goes back to Fermi himself, but it’s the kind of question he would ask. Of course nobody knows exactly how many piano tuners there are in New York, but you could guess about how many piano owners there are, how often a piano needs to be tuned, and how many tuners it would take to service this demand.
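
Here’s roughly what that estimate looks like written out, with every input a made-up round number standing in for a guess:

    # Every input below is a made-up round number, just to show the structure.
    households = 3e6              # rough guess at the number of households in New York
    pianos = households / 20      # guess: 1 in 20 households has a piano
    tunings_per_year = pianos     # guess: each piano is tuned about once a year
    tunings_per_tuner = 4 * 250   # guess: 4 tunings a day, 250 working days a year

    print(tunings_per_year / tunings_per_tuner)  # on the order of 100 tuners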

The piano tuner example is more complicated than the kinds of calculations I have to do on Zoom calls, but it may be the most well-known Fermi problem.

In my work with data privacy, for example, I often have to estimate how common some combination of personal characteristics is. Of course nobody should bet their company on guesses I pull out of the air, but it does help keep a conversation going if I can say on the spot whether something sounds like a privacy risk or not. If a project sounds feasible, then I go back and make things more precise.

Missing Morse codes

Morse codes for Latin letters are sequences of between one and four symbols, where each symbol is a dot or a dash. There are 2 possible sequences with one symbol, 4 with two symbols, 8 with three symbols, and 16 with four symbols. This makes a total of 30 sequences with up to four symbols. There are 26 letters, so what are the four missing codes?

Here they are:

    .-.- 
    ..-- 
    ---. 
    ---- 

There are various uses for these codes, such as variants of Latin letters.

The first sequence on the list, .-.- is similar to two A’s .- .- and is used for variations on A, such as ä or æ.

The sequence ..-- is like a U (..-) with an extra dash on the end, and is used for variations on U, like ü.

The sequence ---. is like O (---) with an extra dot on the end, and is used for variations on O, like ö.

The last sequence ---- is used for letters like Ch or Š. Go figure.
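
You can also find the missing codes by brute force. Here’s a short sketch that enumerates every dot/dash sequence of up to four symbols and removes the 26 letter codes:

    from itertools import product

    letters = {
        "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
        "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
        "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
        "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
        "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
        "Z": "--..",
    }

    all_codes = {"".join(p) for n in range(1, 5) for p in product(".-", repeat=n)}
    missing = sorted(all_codes - set(letters.values()))
    print(missing)   # the four codes listed above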

Sequences of length 5

Sequences of five or six symbols are used for numbers, punctuation, and a few miscellaneous tasks, but there are a few unused combinations. (“Unused” is fuzzy here. Maybe some people do or did use these sequences.)

Here are the five-symbol sequences that do not appear in the Wikipedia article on Morse code:

    ..-.-
    .-.--
    -..--
    -.-.-
    -.---
    ---.-

So out of 32 possibilities, people have found uses for 26 of them.

Sequences of length 6

Out of 64 possible sequences of six symbols, 13 have found a use.

It’s harder to distinguish longer sequences by ear, and so it’s not surprising that most sequences of six symbols are unused; the ones that are used have special patterns that are easier to hear. Here are the ones that are used.

    ..--..
    ..--.-
    .-..-.
    .-.-.-
    .--.-.
    .----.
    -....-
    -.-.-.
    -.-.--
    -.--.-
    --..-.
    --..--
    ---...

Almost periodic functions

When you see the word “almost” in a mathematical context, it might be used informally, but often it has a precise meaning. I wrote about this before in the post Common words that have a technical meaning.

Often the technical meaning of “almost” is “within any given tolerance.” That’s how it is used in the context of almost periodic functions.

A function f is periodic with period T if for all x,

f(x + T) = f(x)

For example, the sine function is periodic with period 2π.

So what does almost periodic mean? It means that for any ε > 0, there exists a T > 0 such that

|f(x + T) − f(x)| < ε

for all x. Note that the value of T depends on the value of ε. You tell me your tolerance for “almost” and in theory I could hand you back a T that meets your tolerance.

That’s in theory. Can we actually do it in practice? Let’s consider

f(t) = sin(2παt) + sin(2πβt)

where α/β is irrational. In the previous post we had α = 1/log 2 and β = 1/log 5.

The Hurwitz approximation theorem says that because β/α is irrational (it’s the reciprocal of the irrational number α/β), there are infinitely many pairs of integers p and q such that

|β/α − p/q| < 1/(√5 q²).

To put it another way, we can find p and q such that

|qβ − pα| < α/(√5 q),

i.e. we can make qβ and pα as close together as we wish by choosing q large enough, and Hurwitz promises us that there are arbitrarily large values of q that work.

If we let T = q/α then T is an exact period of the first component of f, i.e.

sin(2πα(t + q/α)) = sin(2παt).

With a little effort we can show that

sin(2πβ(t + q/α)) = sin(2πβt + 2π(qβ − pα)/α)

and we said above we can make

qβ − pα

as small as we like by choosing a large enough q in Hurwitz’ theorem. And so we can choose q large enough that

sin(2πβ(t + q/α)) − sin(2πβt)

is uniformly as small as we’d like. Since the shift by T leaves the first component of f unchanged, the same bound applies to f(t + T) − f(t), which is exactly what almost periodicity asks for.
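
Here’s a quick numerical check, a sketch of my own rather than anything from the original post. It searches for a q that makes qβ − pα small, sets T = q/α, and then measures how far f(t + T) strays from f(t) over a range of t:

    import numpy as np

    alpha = 1 / np.log(2)
    beta = 1 / np.log(5)

    def f(t):
        return np.sin(2 * np.pi * alpha * t) + np.sin(2 * np.pi * beta * t)

    # Find q making q*beta close to an integer multiple p of alpha.
    q = min(range(1, 1000), key=lambda q: abs(q * beta - round(q * beta / alpha) * alpha))
    p = round(q * beta / alpha)
    T = q / alpha

    t = np.linspace(0, 50, 100_000)
    print(q, p, abs(q * beta - p * alpha))  # q, p, and how close q*beta is to p*alpha
    print(np.max(np.abs(f(t + T) - f(t))))  # small, and shrinks as the search range grows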

2s, 5s, and decibels

Last night I ran across a delightful Twitter thread by Nathan Mishra-Linger that starts out by saying

It’s nice that 2^7 ≈ 5^3, because that lets you approximate any integer valued decibel quantity as a product of powers of 2 and 5.

The observation that

2⁷ ≈ 5³,

is more familiar if you multiply both sides by 2³. That is,

2¹⁰ ≈ 10³

This is the basis of using “kilo” to mean both 1000 and 1024 in different contexts. More on that here.

Decibel approximations

Nathan points out that this happy coincidence means that small integer decibels can be written as small powers of 2 and 5. Here’s a table based on one of the tweets in Nathan’s thread.

0 dB = 10^0.0 = 2⁰·5⁰
1 dB = 10^0.1 ≈ 2⁻²·5¹
2 dB = 10^0.2 ≈ 2³·5⁻¹
3 dB = 10^0.3 ≈ 2¹·5⁰
4 dB = 10^0.4 ≈ 2⁻¹·5¹
5 dB = 10^0.5 ≈ 2⁻³·5²
6 dB = 10^0.6 ≈ 2²·5⁰
7 dB = 10^0.7 ≈ 2⁰·5¹
8 dB = 10^0.8 ≈ 2⁵·5⁻¹
9 dB = 10^0.9 ≈ 2³·5⁰

You can show that the maximum relative error above is 1.4% with the following Python code.

    approx = [
        2** 0 * 5** 0,
        2**-2 * 5** 1,
        2** 3 * 5**-1,
        2** 1 * 5** 0,
        2**-1 * 5** 1,
        2**-3 * 5** 2,
        2** 2 * 5** 0,
        2** 0 * 5** 1,
        2** 5 * 5**-1,
        2** 3 * 5** 0,
    ]
    
    err = [abs(1 - approx[n]/10**(n/10)) for n in range(10)]
    print(max(err))

Almost in step

The last tweet in Nathan’s thread is a diagram showing how powers of 2 and 5 sort of cycle. Here’s another way to look at that.

Taking logs of both sides of

2⁷ ≈ 5³,

shows that

7 log 2 ≈ 3 log 5.

Suppose two people start out walking together, one taking lots of small steps and the other taking fewer, larger steps. Specifically, one takes 7 steps of size log 2 in the time it takes the other to take 3 steps of size log 5. Then periodically they’ll almost be in step.

This reminded me of my recent thread about how Earth and Mars line up approximately every 17 years.

Suppose you have two planets orbiting a star. One takes time log 2 to orbit, and another one, further from the star, takes time log 5 to orbit. Then after the inner planet takes 7 orbits and the outer planet takes 3, the two planets will be approximately in a straight line with their star.

Plots

One last observation. Above I said “periodically they’ll almost be in step.” There’s a rigorous way to use the words “almost” and “periodic.” The sum of a sine wave of period log 2 and a sine wave of period log 5 is an “almost periodic” function. See my next post for the rigorous definition of an almost periodic function, but for now I’ll just include a few plots showing that the function described in this paragraph is “almost periodic” in the colloquial sense.

Here’s a plot of two sine waves, one with 7 periods of length log 2 and the other with 3 periods of length log 5.

    Plot[{Sin[2 Pi t/Log[2]], Sin[2 Pi t/Log[5]]}, {t, 0, 7 Log[2] }]

Here’s their sum.

    Plot[Sin[2 Pi t/Log[2]] + Sin[2 Pi t/Log[5]], {t, 0, 7 Log[2] }]

And here are two approximate periods of the sum.

    Plot[Sin[2 Pi t/Log[2]] + Sin[2 Pi t/Log[5]], {t, 0, 14 Log[2] }]
