The slang “tl;dr” stands for “too long; didn’t read.” The context is often either a bad joke or a shallow understanding.

What bothers me most about tl;dr is the mindset it implies: scanning everything but reading nothing. I find myself slipping into that mode sometimes. Skimming is a vital skill, but it can become so habitual that it crowds out reflective reading.

When I realize everything I’m reading is short and new, when my patience has atrophied to the point that I get annoyed at long tweets, I’ll read something long and old to restore my concentration and perspective.

Ruby creator Yukihiro Matsumoto gave a presentation, How Emacs Changed My Life, in which he explains how Emacs influenced him personally and how it influenced the programming language he created. Here is his summary:

Emacs taught me freedom for software.

Emacs taught me how to code.

Emacs taught me the power of Lisp.

Emacs taught me how to implement a core language.

Emacs taught me how to implement a garbage collector.

Emacs helped me to code and debug.

Emacs helped me to write and edit text/mails/documents.

I was listening to a business book in my car this afternoon. A couple times it said

Numerous studies have confirmed …

and I couldn’t help but hear

Several of my peers, who share my prejudices, were also able to do a multivariate regression and select a few variables out of hundreds to confirm the prevailing wisdom.

Maybe the prevailing wisdom is right. It often is. However, I’m not very impressed by attempts to shore up prevailing wisdom with linear regression, especially in business studies.

Suppose p(x) is a polynomial with integer coefficients. If all the coefficients are non-negative, I can tell you what p(x) is if you’ll tell me the value of p(x) at just two points.

This sounds too good to be true. Don’t you need n+1 points to determine an nth degree polynomial? Not in this case. The trick is that first I ask for p(1). Call your answer q. Next I ask you for the value of p(q). That’s enough information to find all the coefficients.

I ran across this in the answer to a question on Math Overflow. Aeryk said

If you know that the coefficients are non-negative and also integral, then the polynomial can be completely determined by the values of p(1) and p(p(1)).

I suppose this is a well-known theorem, but I’d never seen it before.

ARupinksi added this explanation:

q = p(1) gives the sum of the coefficients. Now think of p(p(1)) = p(q) written in base q; one sees that the “digits” are exactly the coefficients of p. The only possible ambiguity comes if p(q) = q^{n} for some n, but since the coefficients sum to q, one sees that p = q x^{n-1} in this case.

This explanation is correct, but terse, so I’ll expand on it a bit. First a couple examples.

Suppose you tell me that p(1) = 9 and p(9) = 1497. I need to write 1497 in base 9. A little work shows 1497 = 2 × 9^{3} + 4 × 9 + 3. This means p(x) = 2 x^{3} + 4 x + 3.

Next suppose you tell me p(1) = 5 and p(5) = 625. Since 625 = 5^{4}, p(x) = 5 x^{3}.

Here’s a slightly more formal, algorithmic explanation. Suppose

p(x) = a_{m} x^{m} + a_{m-1} x^{m-1} + … + a_{1} x + a_{0}

with non-negative integer coefficients, and let q = p(1).

We can recover the coefficients of p(x) from highest to lowest by repeatedly pulling out the largest powers of q we can. First we find the largest power of q less than p(q), say

q^{m} < p(q) ≤ q^{m+1}.

Then the quotient when p(q) is divided by q^{m} is the coefficient a_{m}. If p(q) = q^{m+1}, then a_{m} = q and we’re done. Otherwise,

p(q) = a_{m}q^{m} + r

where 0 < r < q^{m}. We repeat the process, pulling the largest power of q out of r that we can to find the next coefficient. Since the coefficients sum to q, and we have already found a_{m} ≥ 1, all subsequent coefficients must be less than q.
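The algorithm above is easy to code. Here is a sketch in Python (`recover_poly` is a hypothetical name); for simplicity it reads the base-q digits from lowest to highest rather than highest to lowest, which recovers the same coefficients:

```python
def recover_poly(q, p_of_q):
    """Recover the coefficients [a_0, a_1, ...] of a polynomial p
    with non-negative integer coefficients from q = p(1) and
    p(q) = p(p(1)).  Assumes q >= 2; if q == 1 then p(x) is some
    power of x and the two values cannot tell which one."""
    assert q >= 2
    # The base-q digits of p(q) are the coefficients of p.
    coeffs = []
    n = p_of_q
    while n > 0:
        coeffs.append(n % q)  # next base-q digit
        n //= q
    # Ambiguous case: p(q) is an exact power q^n, whose base-q digits
    # read 1, 0, ..., 0.  Since the coefficients must sum to q, the
    # polynomial is actually p(x) = q x^(n-1).
    if sum(coeffs) != q:
        coeffs = [0] * (len(coeffs) - 2) + [q]
    return coeffs

print(recover_poly(9, 1497))  # [3, 4, 0, 2], i.e. 2x^3 + 4x + 3
print(recover_poly(5, 625))   # [0, 0, 0, 5], i.e. 5x^3
```

The two calls reproduce the worked examples: the first gives 2x^{3} + 4x + 3 and the second hits the ambiguous power-of-q case and gives 5x^{3}.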

Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.

The context of the quote was the author’s experience debugging Lisp software running on the Deep Space 1 spacecraft.

For a daily dose of computer science and related topics, follow @CompSciFact on Twitter.

Suppose you want to evaluate an improper integral over the range (30, ∞).

We’d like to do a change of variables to make the range of integration finite, and we’d like the transformed integral to be easy to evaluate numerically.

The change of variables t = 1/x^{2} transforms the range of integration from (30, ∞) to (0, 1/900). I’ll explain where this choice came from and why it results in a nice function to integrate.

The integrand is essentially 1/x^{3} for large x because the exponential term approaches 1. If the integrand were exactly 1/x^{3} then the change of variables t = 1/x^{2} would make the integrand constant. So we’d expect this change of variables to give us an integrand that behaves well. In particular, the integral should be bounded at 0.

(If you’re not careful, a change of variables may just swap one kind of improper integral for another. That is, you can make the domain of integration finite, but at the cost of introducing a singularity in the transformed integrand. However, we don’t have that problem here because our change of variables was just right.)

In general, if you need to integrate a function that behaves like 1/x^{n} as x goes to infinity, try the change of variables t = 1/x^{n-1}.
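As a numerical check of this rule, here is a sketch using the illustrative integrand e^{-1/x}/x^{3} (an assumption on my part, chosen because its exponential factor approaches 1 as x → ∞; it is not necessarily the integral discussed above). Under t = 1/x^{2} we have x = t^{-1/2} and dx = -½ t^{-3/2} dt, so the integral over (30, ∞) becomes the integral of ½ e^{-√t} over (0, 1/900):

```python
import numpy as np
from scipy.integrate import quad

# Original improper integral over (30, infinity).
original, _ = quad(lambda x: np.exp(-1/x) / x**3, 30, np.inf)

# After t = 1/x^2 the range is (0, 1/900) and the transformed
# integrand 0.5 * exp(-sqrt(t)) is bounded on the whole interval.
transformed, _ = quad(lambda t: 0.5 * np.exp(-np.sqrt(t)), 0, 1/900)

print(original, transformed)  # the two values agree
```

The transformed integrand is nearly constant on (0, 1/900), so it is about as well-behaved as a numerical integrator could hope for.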

Keith Kendig compares math to the Hawaiian islands:

Hawaii may look like a group of separate islands, but actually the islands are just the highest peaks of an immense, mostly-submerged mountain range. All that water hides their underlying connectedness, their oneness. Mathematics may similarly seem like an archipelago of different areas — geometry, analysis, topology, number theory, applied math, and so on. My philosophy is that we’re really just seeing a few peaks of a huge mathematical mountain range. Our ignorance is like the water surrounding Hawaii and hiding its true mountain-rangeness. In mathematics, when we remove ignorance by making discoveries and advances, the water level in effect goes down, and when it drops far enough, separate islands are connected.

My intention was to compute the integral to 6 significant figures. (epsrel is a shortened form of epsilon relative, i.e. relative error.) To my surprise, the estimated error was larger than the value of the integral. Specifically, the integral was computed as 5.15 × 10^{-9} and the error estimate was 9.07 × 10^{-9}.

What went wrong? The integration routine quad lets you specify either a desired bound on your absolute error (epsabs) or a desired bound on your relative error (epsrel). I assumed that since I specified the relative error, the integration would stop when the relative error requirement was met. But that’s not how it works.

The quad function has default values for both epsabs and epsrel; each defaults to 1.49 × 10^{-8}.

I thought that since I did not specify an absolute error bound, the bound was not effective, or equivalently, that the absolute error target was 0. But no! It was as if I’d set the absolute error bound to 1.49 × 10^{-8}. Because my integral is small (the exact value is 5 × 10^{-9}) the absolute error requirement is satisfied before the relative error requirement and so the integration stops too soon.

The solution is to specify an absolute error target of zero. This condition cannot be satisfied, and so the relative error target will determine when the integration stops.

This correctly computes the integral as 5 × 10^{-9} and estimates the integration error as 4 × 10^{-18}.
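A minimal sketch of the fix; the toy integrand below (with exact integral 5 × 10^{-9}) stands in for the one I was actually computing:

```python
from scipy.integrate import quad

# Toy integrand: the exact integral of 1e-8 * x over [0, 1] is 5e-9.
# Setting epsabs=0 makes the absolute error target unattainable,
# so quad keeps refining until the relative target epsrel is met.
value, error = quad(lambda x: 1e-8 * x, 0, 1, epsabs=0, epsrel=1e-6)
print(value)
```

With epsabs left at its default, a result this small could satisfy the absolute target before the relative target ever comes into play.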

It makes some sense for quad to specify non-zero default values for both absolute and relative error, though I imagine most users expect small relative error rather than small absolute error, so perhaps the latter could be set to 0 by default.

Nicolas Bourbaki was the collective pseudonym of a semi-secret group of French mathematicians, best known for the formal style of mathematics it promoted. The group insisted that Bourbaki was a real person, but only as a joke.

Monsieur Nicolas Bourbaki, Canonical Member of the Royal Academy of Poldavia, Grand Master of the Order of Compacts, Conserver of Uniforms, Lord Protector of Filters, and Madame née One-to-One, have the honor of announcing the marriage of their daughter Betti …

The trivial isomorphism will be given to them by P. Adic, of the Diophantine Order, at the Principal Cohomology of the Universal Variety …

The organ will be played by Monsieur Modulo, Assistant Simplex of the Grassmannian (Lemmas will be sung by Scholia Cartanorum). …

After the congruence, Monsieur and Madame Bourbaki will receive guests in their Fundamental Domain …

The original French text and full English translation are available here.

The invitation is littered with obscure references to math in general but particularly references to Bourbaki-style math. For example, “Madame née One-to-One” is an allusion to Bourbaki’s attempt to replace the traditional term “one-to-one” with their coinage “injective.” The bride’s name alludes to Betti numbers, a kind of topological invariant. Etc.

The wedding invitation nearly cost Bourbaki member André Weil his life. Weil fled to Finland at the start of World War II. Finnish police found the wedding invitation and thought that it was an encoded message. Weil was sentenced to death as a spy but received a last-minute pardon.

I’ve taught a variety of math classes, and statistics has been the hardest to teach. The thing I find most challenging is coming up with homework problems. Most exercises are either blatantly artificial or extremely tedious. It’s hard to find moderately realistic problems that don’t take too long to work out.

The course I’ve found easiest to teach has been differential equations. The course has a flat structure: there’s a list of techniques to cover, all roughly the same level of difficulty. There are no deep analytic or philosophical issues to skirt around as there are in statistics. And it’s not hard to come up with practical applications that can be worked out fairly easily.

In the dark ages of programming, functions acted on data. To slice your bread, you passed a bread data structure to a slice function:

slice(bread);

Then came object oriented programming. Instead of having an external function slice our bread, we would ask the bread to slice itself by calling the slice method on a bread object:

bread.slice();

Obviously a vast improvement.

Now object oriented programming has become more refined. First we create a bread-slicing object and then we simply pass bread objects to the slice method on the bread-slicer:

BreadSlicer slicer = new BreadSlicer();
slicer.slice(bread);

Austin Kleon has an interesting idea for setting up a workspace: have a digital desk and an analog desk.

I have two desks in my office — one is “analog” and one is “digital.” The analog desk has nothing but markers, pens, pencils, paper, index cards, and newspaper. Nothing electronic is allowed on that desk. That’s where most of my work is born … The digital desk has my laptop, my monitor, my scanner, and my drawing tablet. This is where I edit and publish my work.

The context of this quote is a discussion of how we think differently depending on the tools we use. I wrote something along these lines a while back: Create offline, analyze online.

This evening I ran across a dialog that suggests that decimal notation is wrong.

It happened when I started learning about decimals in school. I knew then that ten has one zero, a hundred has two, a thousand three, and so on. And then this teacher starts saying that tenth doesn’t have any zero, a hundredth has only one, a thousandth has only two, and so on. … Only much later did I have enough perspective to put my finger on the problem: The decimal point is always misplaced!

The proposed solution is to put the decimal point above the units position rather than after it. Then the notation would be symmetric. For example, 1000 and 1/1000 would look like this:

Of course decimal notation isn’t likely to change, but the author makes an interesting point.

Several people have told me they like the iPad because it lets them bring the Internet into situations where a laptop would be too conspicuous. In other words, it’s a hip flask.