# Up and down the abstraction ladder

It’s easier to go up a ladder than to come down, literally and metaphorically.

Gian-Carlo Rota made a profound observation on the application of theory.

> One frequently notices, however, a wide gap between the bare statement of a principle and the skill required in recognizing that it applies to a particular problem.

This isn’t quite what he said. I made two small edits to generalize his actual statement. He referred specifically to the application of the inclusion-exclusion principle to problems in combinatorics. Here is his actual quote from [1].

> One frequently notices, however, a wide gap between the bare statement of the principle and the skill required in recognizing that it applies to a particular combinatorial problem.

This post will expand a little on Rota’s original statement and a couple of applications of the more general version of his statement.

## The inclusion-exclusion principle

I don’t think Rota would have disagreed with my edited version of his statement, but it’s interesting to think of his original context. The inclusion-exclusion principle seems like a simple problem-solving strategy: you may count a set of things by first over-counting, then correcting for the over-counting.

For example, a few days ago I wrote about a graph created by turning left at the nth step if n is divisible by 7 or contains a digit 7. Suppose you wanted to count how many times you turn in the first 100 steps. You could count the number of positive integers up to 100 that are divisible by 7, then the number that contain the digit 7, then subtract the number that both are divisible by 7 and contain a 7.
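Here’s a quick sketch of that calculation in Python:

```python
# Inclusion-exclusion: |A or B| = |A| + |B| - |A and B|
N = 100
div7 = sum(1 for n in range(1, N + 1) if n % 7 == 0)    # divisible by 7
has7 = sum(1 for n in range(1, N + 1) if '7' in str(n)) # contains a digit 7
both = sum(1 for n in range(1, N + 1) if n % 7 == 0 and '7' in str(n))
print(div7 + has7 - both)  # 14 + 19 - 3 = 30 turns
```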

You can carry this a step further by over-counting, then over-correcting for your over-counting, then correcting for your over-correction. This is the essence of the inclusion-exclusion formula for the probability of a union of events:

P(A1 ∪ … ∪ An) = Σ P(Ai) − Σ P(Ai ∩ Aj) + Σ P(Ai ∩ Aj ∩ Ak) − …

where the sums run over single indices, then pairs of distinct indices, then triples, and so on.

The inclusion-exclusion principle is a clever idea, but not that clever. And yet Rota discusses how this idea was developed over decades into the Möbius inversion principle, which has diverse applications, including the Euler characteristic and the calculus of finite differences.

## Bayes’ theorem

Bayesian statistics is a direct application of Bayes’ theorem. Bayes’ theorem is a fairly simple idea, and yet people make careers out of applying it.

When I started working in the Biostatistics Department at MD Anderson, a bastion of Bayesian statistics, I was surprised how subtle Bayesian statistics is. I probably first saw Bayes’ theorem as a teenager, and yet it was not easy to wrap my head around Bayesian statistics. I would think “This is simple. Why is this hard?” The core principle was simple, but the application was not trivial.

## Newtonian mechanics

When I took introductory physics, we would get stuck on homework problems and ask our professor for help. He would almost always begin by saying “Well, F = ma.”

This was infuriating. Yes, we know F = ma. But how does that let us solve this problem?!

There’s more to Newtonian mechanics than Newton’s laws, a lot more. And most of it is never made explicit. You pick it up by osmosis after you’ve worked hundreds of exercises.


[1] G. C. Rota. On the Foundations of Combinatorial Theory I. Theory of Möbius Functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 2 (1964) 340–368.

# Making documents with makefiles

I learned to use the command line utility make in the context of building C programs. The program make reads an input file to tell it how to make things. To make a C program, you compile the source files into object files, then link the object files together.

You can tell make what depends on what, so it will not do any unnecessary work. If you tell it that the file foo.o depends on the file foo.c, then it will rebuild foo.o only if foo.o is older than foo.c. Looking at file timestamps lets make skip steps that don’t need to be redone, and it also makes it easier to dictate the order in which things must be done.

There is nothing about make that is limited to compiling programs. It simply orchestrates commands in a declarative way. Because you state what the dependencies are, but let make figure out whether and when each command needs to be run, a makefile is more like a Prolog program than a Python script.

I have a client that needs a dozen customized reports, each of which requires a few steps to create, and the order of the steps is important. I created the reports by hand. Then, of course, something changed and all the reports needed to be remade. So then I wrote a Python script to manage the next build. (And the next, and the next, …). But I should have written a makefile. I’m about to have to revise the reports, and this time I’ll probably use a makefile.

Here’s a very simple example of using a makefile to create documents. For more extensive examples of how to use makefiles for a variety of tasks, see How I stopped worrying and loved Makefiles. [1]

Suppose you need to create a PDF and a Microsoft Word version of a report from a LaTeX file [2].

    all: foo.pdf foo.docx

    foo.pdf: foo.tex
        pdflatex foo.tex

    foo.docx: foo.tex
        pandoc foo.tex --from latex --to docx > foo.docx

    clean:
        rm foo.aux foo.log

(Note that in an actual makefile the command lines must be indented with a tab, not spaces.)


If the file foo.pdf does not exist, or exists but is older than foo.tex, the command make foo.pdf will create the PDF file by running pdflatex. If foo.pdf is newer than foo.tex, then running make foo.pdf will print the message

    make: 'foo.pdf' is up to date.

If you run make with no argument, it will build both foo.pdf and foo.docx as needed.

The command make clean deletes auxiliary files created by pdflatex. This shows that a make action does not have to “make” anything; it simply runs a command.

[1] The blog post title is an allusion to the 1964 movie Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.

[2] I use LaTeX for everything I can. This saves time in the long run, if not in the moment, because of consistency. I used to use Word for proposals and such, and LaTeX for documents requiring equations. My workflow became more efficient when I switched to using LaTeX for everything.

# A deck of cards

One time when I was in grad school, I was a teaching assistant for a business math class that included calculus and a smattering of other topics, including a little bit of probability. I made up examples involving a deck of cards, but then learned to my surprise that not everyone was familiar with playing cards. I was young and had a lot to learn about seeing things through the eyes of other people.

Later I learned that a “standard” deck of cards is not quite as standard as I thought. The “standard 52-card deck” is indeed standard in the English-speaking world, but there are variations used in other countries.

The Unicode values for the characters representing playing cards are laid out in a nice pattern. The last digit of the Unicode value corresponds to the point value of the card (aces low), and the second-to-last digit corresponds to the suit.

The ace of spades has code point U+1F0A1, and the aces of hearts, diamonds, and clubs have values U+1F0B1, U+1F0C1, and U+1F0D1 respectively. The three of spades is U+1F0A3 and the seven of hearts is U+1F0B7.

But there’s a surprise if you’re only aware of the standard 52-card deck: there’s a knight between the jack and the queen. The Unicode values for jacks end in B (hexadecimal B is decimal 11) as you’d expect, but queens end in D and kings end in E. The cards ending in C are knights, cards that don’t exist in the standard 52-card deck.

Here’s a little Python code that illustrates the Unicode layout.

    for i in range(4):
        for j in range(14):
            print(chr(0x1f0a1 + j + 16*i), end='')
        print()


The output is too small to read as plain text, but here’s a screen shot from running the code above in a terminal with a huge font.

Playing cards have their origin in tarot cards, which are commonly associated with the occult. But John C. Wright argues that tarot cards were based on Christian iconography before they became associated with the occult.

# On greedy algorithms and rodeo clowns

This weekend I ran across a blog post The Rodeo Clown Theory of Personal Development. The title comes from a hypothetical example of a goal you don’t know how to achieve: becoming a rodeo clown.

> Let’s say you decide you want to be a rodeo clown. And let’s say you’re me and you have no idea how to be a rodeo clown. … Can you look at the possibilities currently available to you and imagine which of them might lead to an overall increase of rodeo clowndom in your life, even infinitesimally?

Each day you ask yourself what you can do that might lead you closer to your goal. This is a greedy algorithm: at each step you make as much progress toward your goal as you can. Greedy algorithms are not always optimal, but the premise above is that you have no idea what to do. In that case, doing whatever you can imagine that might bring you a little closer to your goal is a smart thing to do.

## Crossing the continent

If you’re in Los Angeles and your goal is to get to New York, you could start by walking east. You might run into a brick wall and decide that you should not take your approach too literally. Maybe you decide to buy a ticket for a bus that’s traveling east, even if you have to walk west to the bus station. Maybe the bus takes you to Houston. This isn’t the most direct route, but it’s progress.

When you don’t know what else to do, a greedy approach, especially one tempered with common sense, could be a very sensible way to start. Maybe it’ll put you in a position to learn of a more efficient approach. In the scenario above, maybe some stranger sees you walk into a wall, asks you what you’re trying to do, and tells you where the bus station is.

## Sorting algorithms

The way most people sort a hand of playing cards is a sort of greedy algorithm. They scan for the highest card and put it first. Then they scan the remainder of the hand for the next highest card, and so on. This algorithm is known as selection sort. It’s not the most efficient algorithm, but it may be the most natural, and it works. And for a hand of five cards it’s about as good as any.

Anyone who has taken an algorithms class will know you can beat selection sort with a more sophisticated algorithm such as heap sort. But sorting is a well-understood, one-dimensional problem. That’s why people long ago came up with solutions that are better than a greedy approach. But when a problem is not well understood and is high-dimensional, greedy algorithms are more likely to succeed, even if they’re not optimal.
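The card-sorting procedure described above can be sketched in a few lines of Python:

```python
def sort_hand(cards):
    """Greedy sort: repeatedly scan for the highest remaining card."""
    hand = list(cards)
    result = []
    while hand:
        highest = max(hand)   # scan the remaining cards
        hand.remove(highest)
        result.append(highest)
    return result

print(sort_hand([7, 2, 11, 9, 4]))  # [11, 9, 7, 4, 2]
```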

## Training neural nets

Training a neural network is a very high dimensional optimization problem, maybe on the order of a billion dimensions. And yet it is solved using stochastic gradient descent, which is essentially a greedy algorithm. At each iteration, you move in the direction that gives you the most improvement toward your goal. There’s a little more nuance to it than that, but that’s the basic idea.
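As a sketch of the idea, here is plain gradient descent on a toy two-dimensional function, not a real training loop:

```python
# Gradient descent: at each step, move a little in the downhill direction.
def descend(grad, x, lr=0.1, steps=100):
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

grad = lambda x: [2 * xi for xi in x]  # gradient of f(x) = sum of squares
x = descend(grad, [3.0, -2.0])
print(x)  # both coordinates shrink toward the minimum at the origin
```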

How can gradient descent work so well in practice when in theory it could get stuck in a local minimum or a saddle point? John Carmack posted a heuristic explanation on Twitter a few weeks ago. A local critical point requires the “cooperation” of every variable in the function you’re optimizing. The more variables, the less likely this is to happen.

Carmack didn’t give a full explanation—he posted a tweet, not a journal article—but he does give an intuitive idea of what’s going on. At the global minimum, the variables cooperate for a good reason. He assumes the variables are unlikely to cooperate anywhere else, which is apparently often true when training neural nets.

Usually things get harder in higher dimensions, hence the “curse of dimensionality.” But there’s a sort of blessing of dimensionality here: you’re less likely to get stuck in a local minimum in higher dimensions.

## Back to the rodeo

Training a neural network can be a huge undertaking, using millions of dollars worth of electricity, but it’s a mathematically defined problem. Life goals such as becoming a rodeo clown are not cleanly quantified problems. They are very high dimensional in the sense that there are a vast number of things you could do at any given time. You might argue by analogy with neural nets that the fact that there are so many dimensions means that you’re unlikely to get stuck in a local minimum. This is a loose analogy, one that easily breaks down, but still one worth thinking about.

Seth Godin once said “Remarkable work often comes from making choices when everyone else feels as though there is no choice.” What looks like a local minimum may be revealed to not be once you consider more options.


# Finding strings in binary files

There’s a little program called strings that searches for what appear to be strings inside binary files. I’ll refer to it as strings(1) to distinguish the program name from the common English word strings. [1]

What does strings(1) consider to be a string? By default it is a sequence of four or more bytes that correspond to printable ASCII characters. There are command line options to change the minimum sequence length and the character encoding.

There are 98 printable ASCII characters [2] and 256 possible values for an 8-bit byte, so the probability of a byte being a printable character is

p = 98/256 = 0.3828125.

This implies that the probability of strings(1) flagging a sequence of four random bytes as a string is p⁴, or about 2%.

How long a string might you find inside a binary file?

I ran strings(1) on a photograph and found a 46-character string:

   .IEC 61966-2.1 Default RGB colour space - sRGB

Of course this isn’t random. This string is part of the metadata stored inside JPEG images.

Next I encrypted the file so that no strings would be metadata and all strings would be false positives. The longest line was 12 characters:

    Z<Bq{7fH}~G9

How does this compare to what we might expect from a random file? I wrote about the probability of long runs a dozen years ago. In a file of n bytes, the expected length of the longest run of printable characters is approximately

log(n(1 − p)) / log(1/p).

In my case, the file had n = 203,308 bytes. The expected length of the longest run of printable characters would then be 12.2 characters, and so the actual length of the longest run is in line with what theory would have predicted.
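Here’s a rough empirical check, assuming the approximation above for the expected longest run, and taking byte values 32 through 129 as a stand-in set of 98 “printable” values:

```python
import math
import random

random.seed(42)
n = 203308       # size of the encrypted file in the post
p = 98 / 256     # probability a random byte is "printable"

# Longest run of "printable" bytes among n random bytes
longest = run = 0
for _ in range(n):
    if 32 <= random.randrange(256) < 130:  # one of 98 byte values
        run += 1
        longest = max(longest, run)
    else:
        run = 0

expected = math.log(n * (1 - p)) / math.log(1 / p)
print(longest, round(expected, 1))  # expected prints as 12.2
```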

[1] Unix documentation is separated into sections, and parentheses after a name specify the documentation section. Section 1 is for programs, Section 2 is for system calls, etc. So, for example, chmod(1) is the command line utility named chmod, and chmod(2) is the system call by the same name. Since command line utilities often have names that are common words, tacking (1) on the end helps distinguish program names from English words.

[2] p could be a little larger if you consider some of the control characters to be printable.

# Extract text from a PDF

Arshad Khan left a comment on my post on the less and more utilities saying “on ubuntu if I do less on a pdf file, it shows me the text contents of the pdf.”

Apparently this is an undocumented feature of GNU less. It works, but I don’t see anything about it in the man page documentation [1].

Not all versions of less do this. On my Mac, less applied to a PDF gives a warning saying “… may be a binary file. See it anyway?” If you insist, it will dump gibberish to the command line.

A more portable way to extract text from a PDF would be to use something like the pypdf Python module:

    from pypdf import PdfReader

    reader = PdfReader("example.pdf")  # path to your PDF
    for page in reader.pages:
        print(page.extract_text())


The pypdf documentation gives several options for how to extract text. The documentation also gives a helpful discussion of why it’s not always clear what extracting text from a PDF should mean. Should captions and page numbers be extracted? What about tables? In what order should text elements be extracted?

PDF files are notoriously bad as a data exchange format. When you extract text from a PDF, you’re likely not using the file in a way its author intended, maybe even in a way the author tried to discourage.

Related post: Your PDF may reveal more than you intend

[1] Update: Several people have responded saying that less isn’t extracting the text from a PDF; lesspipe is. That would explain why it’s not a documented feature of less. But it’s not clear how lesspipe is implicitly inserting itself.

Further update: Thanks to Konrad Hinsen for pointing me to this explanation. less reads an environment variable LESSOPEN for a preprocessor to run on its arguments, and that variable is, on some systems, set to lesspipe.

# How big will a carpet be when you roll or unroll it?

If you know the dimensions of a carpet, what will the dimensions be when you roll it up into a cylinder?

If you know the dimensions of a rolled-up carpet, what will the dimensions be when you unroll it?

## Flexible carpet: solid cylinder

The edge of a rolled-up carpet can be described as an Archimedean spiral. In polar coordinates, this spiral has the equation

r = hθ / 2π

where h is the thickness of the carpet. The previous post gave an exact formula for the length L of the spiral, given the maximum value of θ which we denoted T. It also gave a simple approximation that is quite accurate when T is large, namely

L = hT² / 4π

If r1 is the radius of the carpet as a rolled up cylinder, r1 = hT / 2π and so T = 2π r1 / h. So when we unroll the carpet

L = hT² / 4π = πr1² / h.

Now suppose we know the length L and want to find the radius r1 when we roll up the carpet. Solving the equation above for r1 gives

r1 = √(hL/π).

## Stiff carpet: hollow cylinder

The discussion so far has assumed that the spiral starts from the origin, i.e. the carpet is rolled up so tightly that there’s no hole in the middle. This may be a reasonable assumption for a very flexible carpet. But if the carpet is stiff, the rolled up carpet will not be a solid cylinder but a cylinder with a cylindrical hole in the middle.

In the case of a hollow cylinder, there is an inner radius r0 and an outer radius r1. This means θ runs from T0 = 2π r0/h to T1 = 2πr1/h.

To find the length of the spiral running from T0 to T1, we find the length of a spiral that runs from 0 to T1 and subtract the length of a spiral that runs from 0 to T0:

L = πr1² / h − πr0² / h = π(r1² − r0²)/h.

This approximation is even better than before because the approximation is least accurate for small T, and we’ve subtracted off that part.

Now let’s go the other way around and find the outer radius r1 when we know the length L. We also need to know the inner radius r0. So suppose we are wrapping the carpet around a cardboard tube of radius r0. Then

r1 = √(r0² + hL/π).
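The two hollow-cylinder formulas are easy to package as functions. A sketch (any units work, as long as they’re consistent):

```python
import math

def unrolled_length(r0, r1, h):
    """Length of a carpet of thickness h rolled between radii r0 and r1."""
    return math.pi * (r1**2 - r0**2) / h

def rolled_radius(L, r0, h):
    """Outer radius of a carpet of length L, thickness h, on a core of radius r0."""
    return math.sqrt(r0**2 + h * L / math.pi)

# Round trip: a 1 cm thick, 1000 cm long carpet on a 5 cm core.
r1 = rolled_radius(1000, 5, 1)
print(r1)                         # about 18.5 cm
print(unrolled_length(5, r1, 1))  # recovers 1000
```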

# Precise answers to useless questions

I recently ran across a tweet from Allen Downey saying

> So much of 20th century statistics was just a waste of time, computing precise answers to useless questions.

He’s right. I taught mathematical statistics at GSBS [1, 2] several times, and each time I taught it I became more aware of how pointless some of the material was.

I do believe mathematical statistics is useful, even some parts whose usefulness isn’t immediately obvious, but there were other parts of the curriculum I couldn’t justify spending time on [3].

## Fun and profit

I’ll say this in partial defense of computing precise answers to useless questions: it can be fun and good for your career.

Mathematics problems coming out of statistics can be more interesting, and even more useful, than the statistical applications that motivate them. Several times in my former career a statistical problem of dubious utility led to an interesting mathematical puzzle.

Solving practical research problems in statistics is hard, and can be hard to publish. If research addresses a practical problem that a reviewer isn’t aware of, a journal may reject it. The solution to a problem in mathematical statistics, regardless of its utility, is easier to publish.

## Private sector

Outside of academia there is less demand for precise answers to useless questions. A consultant can be sure that a client finds a specific project useful because they’re willing to directly spend money on it. Maybe the client is mistaken, but they’re betting that they’re not.

Academics get grants for various lines of research, but this isn’t the same kind of feedback because the people who approve grants are not spending their own money. Imagine a grant reviewer saying “I think this research is so important, I’m not only going to approve this proposal, I’m going to write the applicant a \$5,000 personal check.”

Consulting clients may be giving away someone else’s money too, but they have a closer connection to the source of the funds than a bureaucrat has when dispensing tax money.

## Notes

[1] When I taught there, GSBS was The University of Texas Graduate School of Biomedical Sciences. I visited their website this morning, and apparently GSBS is now part of, or at least co-branded with, MD Anderson Cancer Center.

There was a connection between GSBS and MDACC at the time. Some of the GSBS instructors, like myself, were MDACC employees who volunteered to teach a class.

[2] Incidentally, there was a connection between GSBS and Allen Downey: one of my students was his former student, and he told me what a good teacher Allen was.

[3] I talk about utility a lot in this post, but I’m not a utilitarian. There are good reasons to learn things that are not obviously useful. But the appeal of statistics is precisely its utility, and so statistics that isn’t useful is particularly distasteful.

Pure math is beautiful (and occasionally useful) and applied math is useful (and occasionally beautiful). But there’s no reason to study fake applied math that is neither beautiful nor useful.

# Pairs in poker

An article by Y. L. Cheung [1] gives reasons why poker is usually played with five cards. The author gives several reasons, but here I’ll just look at one reason: pairs don’t act like you might expect if you have more than five cards.

In five-card poker, the more pairs the better. Better here means less likely. One pair is better than no pair, and two pairs is better than one pair. But in six-card or seven-card poker, a hand with no pair is less likely than a hand with one pair.

For a five-card hand, the probabilities of 0, 1, or 2 pair are 0.5012, 0.4226, and 0.0475 respectively.

For a six-card hand, the same probabilities are 0.3431, 0.4855, and 0.1214.

For a seven-card hand, the probabilities are 0.2091, 0.4728, and 0.2216.
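The five-card figures can be sanity-checked with standard counting. Here’s a sketch; note that counting by rank multiplicity alone gives 0.5071 for no pair, so the 0.5012 figure above apparently excludes straights and flushes:

```python
from math import comb

hands = comb(52, 5)
# exactly one pair: pick the pair rank and its two suits, then three other ranks
one_pair = 13 * comb(4, 2) * comb(12, 3) * 4**3 / hands
# exactly two pair: two pair ranks, suits for each, plus one card of a third rank
two_pair = comb(13, 2) * comb(4, 2)**2 * 44 / hands
# five distinct ranks (includes straights and flushes)
no_pair = comb(13, 5) * 4**5 / hands

print(round(one_pair, 4), round(two_pair, 4), round(no_pair, 4))
# 0.4226 0.0475 0.5071
```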


[1] Y. L. Cheung. Why Poker Is Played with Five Cards. The Mathematical Gazette, Vol. 73, No. 466 (Dec. 1989), pp. 313–315.

# Solar system means

Yesterday I stumbled on the fact that the size of Jupiter is roughly the geometric mean between the sizes of Earth and the Sun. That’s not too surprising: in some sense (i.e. on a logarithmic scale) Jupiter is the middle sized object in our solar system.

What I find more surprising is that a systematic search finds mean relationships that are far more accurate. The radius of Jupiter is within 5% of the geometric mean of the radii of the Earth and Sun. But all the mean relations below have an error less than 1%.

The radius of Mercury equals the geometric mean of the radii of the Moon and Mars, within 0.052%.

The radius of Mars equals the harmonic mean of the radii of the Moon and Jupiter, within 0.08%.

The radius of Uranus equals the arithmetic-geometric mean of the radii of Earth and Saturn, within 0.0018%.


Now let’s look at masses.

The mass of Earth is the geometric mean of the masses of Mercury and Neptune, within 2.75%. This is the least accurate approximation in this post.

The mass of Pluto is the harmonic mean of the masses of the Moon and Mars, within 0.7%.

The mass of Uranus is the arithmetic-geometric mean of the masses of the Moon and Saturn, within 0.54%.
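The arithmetic-geometric mean is easy to compute by iterating the two means until they agree. Here’s a quick numeric check of the radius relation for Uranus, assuming volumetric mean radii of 6371 km for Earth, 58232 km for Saturn, and 25362 km for Uranus:

```python
def agm(a, b, tol=1e-12):
    """Arithmetic-geometric mean: iterate both means to their common limit."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, (a * b) ** 0.5
    return a

earth, saturn, uranus = 6371.0, 58232.0, 25362.0
m = agm(earth, saturn)
print(m, abs(m - uranus) / uranus)  # agrees with Uranus to well under 0.1%
```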