I was thinking today about how people memorize many digits of π, and how it would be much more practical to memorize a moderate number of values to low precision.
So suppose instead of memorizing 100 digits of π, you memorized 100 digits of other numbers. What might those numbers be? I decided to take a shot at it. I exclude things that are common knowledge, like multiplication tables up to 12 and familiar constants like the number of hours in a day.
There’s some ambiguity over what constitutes a digit. For example, if you say the speed of light is 300,000 km/s, is that one digit? Five digits? My first thought was to count it as one digit, but then what about Avogadro’s number 6×10²³? I decided to write numbers in scientific notation, so the speed of light is 2 digits (3e5 km/s) and Avogadro’s number is 3 digits (6e23).
Here are 40 numbers worth remembering, with a total of 100 digits.
Powers of 2
2³ = 8
2⁴ = 16
2⁵ = 32
2⁶ = 64
2⁷ = 128
2⁸ = 256
2⁹ = 512
2¹⁰ = 1024
Squares
13² = 169
14² = 196
15² = 225
Probability
P(|Z| < 1) ≈ 0.68
P(|Z| < 2) ≈ 0.95
Here Z is a standard normal random variable.
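These two probabilities can be checked with nothing more than the error function; a quick sketch in Python using only the standard library:

```python
from math import erf, sqrt

# For a standard normal Z, P(|Z| < z) = erf(z / sqrt(2))
def prob_within(z):
    return erf(z / sqrt(2))

print(round(prob_within(1), 2))  # 0.68
print(round(prob_within(2), 2))  # 0.95
```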
Music
Middle C ≈ 262 Hz
2^(7/12) ≈ 1.5
The second fact says that seven half steps make approximately one (Pythagorean) fifth.
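One way to see how good the approximation is: compare seven equal-tempered half steps against the just 3:2 ratio.

```python
# Seven equal-tempered half steps above a note: frequency ratio 2^(7/12)
fifth = 2 ** (7 / 12)
print(fifth)  # 1.4983..., just shy of the 3:2 perfect fifth
print(3 / 2)
```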
Mathematical constants
π ≈ 3.14
√2 ≈ 1.414
1/√2 ≈ 0.707
φ ≈ 1.618
logₑ 10 ≈ 2.3
log₁₀ e ≈ 0.4343
γ ≈ 0.577
e ≈ 2.718
Here φ is the golden ratio and γ is the Euler-Mascheroni constant.
I included √2 and 1/√2 because both come up so often.
Similarly, logₑ 10 and log₁₀ e are reciprocals, but it’s convenient to know both.
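Since the two logarithms are reciprocals, their product is exactly 1, which makes the pair easy to sanity-check:

```python
from math import e, log, log10

ln10 = log(10)       # ≈ 2.3
log10_e = log10(e)   # ≈ 0.4343
print(ln10 * log10_e)  # 1.0, up to floating point error
```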
The number of significant figures above is intentionally inconsistent. It’s at least as easy, and maybe easier, to remember √2 as 1.414 than as 1.41. Similarly, if you’re going to memorize that log10 e is 0.43, you might as well memorize 0.4343. Buy two get two free.
Each constant is truncated before a digit less than 5, so all the figures are correct and correctly rounded. For φ and logₑ 10 the next digit is a zero, so you get an implicit extra digit of precision.
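The truncation-equals-rounding rule can be verified mechanically; here is a quick check for a few of the constants above:

```python
from math import e, pi, sqrt

# Truncating each constant just before a digit less than 5 gives the
# same figure as rounding, so the memorized values are correctly rounded.
for name, x, places in [("pi", pi, 2), ("sqrt 2", sqrt(2), 3), ("e", e, 3)]:
    truncated = int(x * 10**places) / 10**places
    assert truncated == round(x, places)
    print(name, truncated)
```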
The requirement that truncation = rounding means that you have to truncate e at either 2.7 or 2.718. If you’re going to memorize the latter, you could memorize six more digits with little extra effort since these digits are repetitive:
e = 2.7 1828 1828
Measurements and unit conversion
c = 3e8 m/s
g = 9.8 m/s²
N_A = 6e23
Earth circumference = 4e7 m
1 AU = 1.5e8 km
1 inch = 2.54 cm
Maximum double = 1.8e308
Epsilon = 2e-16
These numbers could change from system to system, but they rarely do. See Anatomy of a floating point number.
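In Python the two floating point figures can be read straight off of sys.float_info:

```python
import sys

# Largest finite IEEE 754 double and machine epsilon
print(f"{sys.float_info.max:.1e}")      # 1.8e+308
print(f"{sys.float_info.epsilon:.0e}")  # 2e-16
```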
Decibels
1 dB = 10^0.1 ≈ 1.25
2 dB = 10^0.2 ≈ 1.6
3 dB = 10^0.3 ≈ 2
4 dB = 10^0.4 ≈ 2.5
5 dB = 10^0.5 ≈ 3.2
6 dB = 10^0.6 ≈ 4
7 dB = 10^0.7 ≈ 5
8 dB = 10^0.8 ≈ 6.3
9 dB = 10^0.9 ≈ 8
These numbers are handy, even if you don’t work with decibels per se.
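The table above is just 10^(n/10) rounded to friendly figures; printing the exact values shows how gentle that rounding is:

```python
# Power ratio for n dB is 10^(n/10)
for db in range(1, 10):
    print(f"{db} dB -> {10 ** (db / 10):.3f}")
```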
Update: The next post points out a remarkable pattern between the first and last sets of numbers in this post.
9 thoughts on “100 digits worth memorizing”
Great post, John.
Surely logₑ 2 ≈ 0.693 belongs on such a list, no?
That probably comes up more often than Euler’s constant. :)
Back when I was learning about 8-bit microprocessors and microcomputers, I wound up (not by intention, but by repetition) memorizing the powers of 2 up to 2^16. In more recent decades I’ve remembered that 2^32 is 4 billion and something (twice the number of digits as 2^16), the maximum unsigned 32 bit integer, and that the maximum signed integer is half that, 2 billion and change. I know when I see unexpected computer output in those ranges, some 32 bit value has overflowed or wrapped around.
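The 32-bit figures this commenter mentions are easy to reconstruct exactly:

```python
# Maximum unsigned and signed 32-bit integers
print(2**32 - 1)  # 4,294,967,295 -- "4 billion and something"
print(2**31 - 1)  # 2,147,483,647 -- "2 billion and change"
```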
The decibels listed are conversions to ratios of power. Also useful in electronics is a conversion from dB to voltage (or current), which gives half those values. It’s convenient to remember that 6dB is a (ALMOST exactly) 2 to 1 ratio of voltage, thus an 8 bit (256 values) ADC or DAC gives (approximately, close enough for most purposes) a 6 * 8 or 48dB signal-to-noise ratio. For each bit of increase in resolution, the S/N ratio increases by 6dB.
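The commenter’s rule of thumb checks out: amplitude ratios use 20·log₁₀ rather than 10·log₁₀, so doubling a voltage adds about 6 dB, and each extra bit of converter resolution adds the same.

```python
from math import log10

# Amplitude (voltage) ratios use 20*log10, so a factor of 2 in
# voltage is about 6 dB; each extra bit of ADC/DAC resolution
# doubles the number of levels and adds the same ~6 dB of S/N.
db_per_bit = 20 * log10(2)
print(db_per_bit)      # ≈ 6.02
print(8 * db_per_bit)  # ≈ 48 dB for an 8-bit converter
```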
I find it useful to remember that the speed of light is approximately six orders of magnitude (1,000,000) times the speed of sound. Confusing the two can be embarrassing.
This is a great list…certainly not all of them are relevant to my specific line of work, but it’s surprising how many are!
Personally, I’ve never had to care about the max size of a double. It just doesn’t come up. However, I’ve found that having a decent idea of the max size of a 32-bit integer is very helpful, because sometimes the decision about whether a specific field is 32-bits wide or 64-bits can have long-lasting ramifications (and if, in 2022, you’re using 32-bit timestamps you are probably in a state of sin). Thankfully, in my particular field we have the luxury of handwaving away anything that could possibly overflow a 64-bit integer as “can’t happen”.
You should make this into a poster! I would buy one :)
ln 2 comes up all the time. It is close to 0.7; if you need more accuracy, 0.7 is very nearly exactly 1% too big.
One place this occurs is translating between lg (log base 2) for discrete math, and ln (log base e) for calculus.
Example: suppose you need lg (log base 2) of 31:
lg 31 = lg(32(1 − 1/32)) = 5 + lg(1 − 1/32) = 5 + (1/ln 2) ln(1 − 1/32)
≈ 5 − (1/32)(1/0.7) = 5 − 1/22.4 ≈ 5 − 0.045 = 4.955.
Compare the exact answer, 4.954…
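The commenter’s approximation, worked in code:

```python
from math import log2

# lg 31 = 5 + lg(1 - 1/32); use ln(1 - x) ≈ -x and 1/ln 2 ≈ 1/0.7
approx = 5 - (1 / 32) * (1 / 0.7)
print(approx)    # 4.9553...
print(log2(31))  # 4.9541...
```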
Also worth knowing that π is very close to √10.
“Applied Analysis” by Cornelius Lanczos (1956, still available from Dover) is FULL of awesome tricks, and a lot of fun to read.
Lanczos was one of Einstein’s friends and coworkers.
∑ 1/n² = π²/6 ≈ 1.645
A above middle C, which is 440 Hz, is more often used to tune a piano and other instruments.