# Variable-speed learning

When I was in college, one of the professors seemed to lecture at a sort of quadratic pace, maybe even an exponential pace.

He would proceed very slowly at the beginning of the semester, so slowly that you didn’t see how he could possibly cover the course material by the end. But his pace would gradually increase to the point that he was going very quickly at the end. And yet the pace increased so smoothly that you were hardly aware of it. By understanding the first material thoroughly, you were able to go through the latter material quickly.

If you’ve got 15 weeks to cover 15 chapters, don’t assume the optimal pace is to cover one chapter every week.

I often read technical books the way the professor mentioned above lectured. The density of completely new ideas typically decreases as a book progresses. If your reading pace is proportional to the density of new ideas, you’ll start slow and speed up.

The preface may be the most important part of the book. For some books I've read only the preface and felt like I got a lot out of the book.

The last couple of chapters of technical books can often be ignored. It's common for authors to squeeze in something about their research at the end of a book, even if it's out of character with the rest of the book.

# Off by one character

There was a discussion on Twitter today about a mistake calculus students make:

d/dx e^x = x e^(x−1).

I pointed out that it's only off by one character:

d/de e^x = x e^(x−1).

The first equation is simply wrong. The second is correct, but a gross violation of convention, using x as a constant and e as a variable.

# It’s like this other thing except …

One of my complaints about math writing is that definitions are hardly ever subtractive, even if that’s how people think of them.

For example, a monoid is a group except without inverses. But that’s not how you’ll see it defined. Instead you’ll read that it’s a set with an associative binary operation and an identity element. A module is a vector space, except the scalars come from a ring instead of a field. But the definition from scratch is more than I want to write here. Any time you have a sub-widget or a pre-widget or a semi-widget, it’s probably best to define the widget first.

I understand the logical tidiness of saying what a thing is rather than what it is not. But it makes more pedagogical sense to describe the difference between a new concept and the most similar familiar concept. And the nearest familiar concept may have more structure rather than less.

Suppose you wanted to describe where a smaller city is by giving directions from a larger, presumably better-known city, but you could only move east. Then instead of saying Ft. Worth is 30 miles west of Dallas, you'd have to say it's 1,000 miles east of Phoenix.

Writers don’t have to choose between crisp logic and good pedagogy. They can do both. They can say, for example, that a pre-thingy is a thingy without some property, then say “That is, a pre-thingy satisfies the following axioms: …”
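Applied to the monoid example above, such a both-ways definition might read as follows (a sketch, not quoted from any particular text):

```latex
\textbf{Definition.} A \emph{monoid} is a group without inverses. That is, a
monoid is a set $M$ with a binary operation $\cdot$ and an element $e$ such
that for all $a, b, c \in M$:
\begin{enumerate}
  \item $(a \cdot b) \cdot c = a \cdot (b \cdot c)$ \quad (associativity)
  \item $e \cdot a = a \cdot e = a$ \quad (identity)
\end{enumerate}
```

The first sentence orients the reader; the axioms then make the definition self-contained.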

# Student’s future, teacher’s past

“Teachers should prepare the student for the student’s future, not for the teacher’s past.” — Richard Hamming

I ran across the above quote from Hamming this morning. It made me wonder whether I tried to prepare students for my past when I used to teach college students.

How do you prepare a student for the future? Mostly by focusing on skills that will always be useful, even as times change: logic, clear communication, diligence, etc.

Negative forecasting is more reliable here than positive forecasting. It’s hard to predict what’s going to be in demand in the future (besides timeless skills), but it’s easier to predict what’s probably not going to be in demand. The latter aligns with Hamming’s exhortation not to prepare students for your past.

# Group projects

The best teams have people with complementary skills, but similar work ethic. Academic assignments are the opposite. There’s not much variation in skills, in part because students haven’t yet developed specialized skills, and in part because students are in the same class because they have similar interests. The biggest variation is likely to be work ethic. It’s not uncommon for the hardest working person in a group to do 10x as much work as the laziest person in the group. The person doing most of the work learns that it’s best to avoid working with teams.

Working with people with complementary skills is a blast, but you’re unlikely to experience that in an academic project. You might get some small degree of specialization. Maybe one of the mechanical engineers on a project has more artistic ability than the other mechanical engineers, for example. But this is hardly like the experience of working with a team of people who are all great at different things.

# Permutations and tests

Suppose a test asks you to place 10 events in chronological order. Label these events A through J so that chronological order is also alphabetical order.

If a student answers BACDEFGHIJ, then did they make two mistakes or just one? Two events are in the wrong position, but they made one transposition error. The simplest way to grade such a test would be to count the number of events that are in the correct position. Is this the most fair way to grade?

If you decide to count how many transpositions are needed to correct a student’s answer, do you count any transposition or only adjacent transpositions? For example, if someone answered JBCDEFGHIA, then transposing the A and the J is enough to put the results in order. But reversing the first and last event seems like a bigger mistake than reversing the first two events. Counting only adjacent transpositions would penalize this mistake more: exchanging the J and the A by adjacent swaps means moving each past the eight letters between them and then swapping the two, seventeen adjacent transpositions in all. But it hardly seems that answering JBCDEFGHIA is seventeen times worse than answering BACDEFGHIJ.

Maybe counting transpositions is too much work. So we just go back to counting how many events are in the right place. But then suppose someone answers JABCDEFGHI. This is completely wrong since every event is in the wrong position. But the student obviously knows something, since the relative order of nearly all of the events is correct. From one perspective there was only one mistake: J comes last, not first.

What is the worst possible answer? Maybe getting the order exactly backward? If you have an odd number of events, then getting the order backward means one event is in the right place, and so that doesn’t receive the lowest possible score.
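The competing scoring schemes above are all easy to compute. Here is a sketch (the function names are my own) comparing three metrics — correct positions, minimum adjacent transpositions (the Kendall tau distance), and minimum arbitrary transpositions (the Cayley distance) — on the answers discussed above plus the exact reversal:

```python
from itertools import combinations

def correct_positions(answer, key):
    """Number of events in exactly the right position."""
    return sum(a == k for a, k in zip(answer, key))

def adjacent_swaps(answer, key):
    """Minimum number of adjacent transpositions needed to fix the answer
    (Kendall tau distance): count the pairs in the wrong relative order."""
    pos = {c: i for i, c in enumerate(key)}
    ranks = [pos[c] for c in answer]
    return sum(ranks[i] > ranks[j]
               for i, j in combinations(range(len(ranks)), 2))

def any_swaps(answer, key):
    """Minimum number of arbitrary transpositions (Cayley distance):
    n minus the number of cycles of the permutation."""
    pos = {c: i for i, c in enumerate(key)}
    perm = [pos[c] for c in answer]
    seen, cycles = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return len(perm) - cycles

key = "ABCDEFGHIJ"
for ans in ["BACDEFGHIJ", "JBCDEFGHIA", "JABCDEFGHI", "JIHGFEDCBA"]:
    print(ans, correct_positions(ans, key),
          adjacent_swaps(ans, key), any_swaps(ans, key))
```

Note how the metrics disagree about which answer is worst: the reversal maximizes the adjacent-swap count while JABCDEFGHI, which got the relative order almost entirely right, scores zero on correct positions.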

This is an interesting problem beyond grading exams. (As for grading exams, I’d suggest simply not using questions of this type on an exam.) In manufacturing, how serious a mistake is it to reverse two consecutive components versus two distant components? You could also ask the same question when comparing DNA sequences or other digital signals. The best way to assign a distance between the actual and desired sequence would depend entirely on context.

# Reading equations forward and backward

There is no logical difference between writing A = B and writing B = A, but there is a psychological difference.

Equations are typically applied left to right. When you write A = B you imply that it may be useful to replace A with B. This is helpful to keep in mind when learning something new: the order in which an equation is written gives a hint as to how it may be applied. However, this way of thinking can also be a limitation. Clever applications often come from realizing that you can apply an equation in the opposite of the usual direction.

For example, Euler’s reflection formula says

Γ(z) Γ(1-z) = π / sin(πz).

Reading from left to right, this says that two unfamiliar/difficult things, values of the Gamma function, are related to a more familiar/simple thing, the sine function. It would be odd to look at this formula and say “Great! Now I can compute sines if I just know values of the Gamma function.” Instead, the usual reaction would be “Great! Now I can relate the value of Gamma at two different places by using sines.”
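For a quick numerical sanity check of this right-to-left reading, here's a sketch using Python's standard library: given Γ(z), the formula produces Γ(1−z) from a sine.

```python
import math

z = 0.3

# Left to right: the product of two Gamma values collapses to a sine.
lhs = math.gamma(z) * math.gamma(1 - z)
rhs = math.pi / math.sin(math.pi * z)
print(lhs, rhs)  # the two sides agree

# Right to left: recover Gamma(1 - z) from Gamma(z) and a sine.
gamma_complement = math.pi / (math.sin(math.pi * z) * math.gamma(z))
print(gamma_complement, math.gamma(1 - z))
```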

When we see Einstein’s equation

E = mc²

the first time, we think about creating energy from matter, such as the mass lost in nuclear fission. This applies the formula from left to right, relating what we want to know, an amount of energy, to what we do know, an amount of mass. But you could also read the equation from right to left, calculating the amount of energy, say in an accelerator, necessary to create a particle of a given mass.
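In either direction the formula is a single multiplication or division. A sketch in Python, with the electron as the example particle (constants are rounded):

```python
c = 299_792_458.0   # speed of light in m/s
m_e = 9.109e-31     # electron rest mass in kg

# Left to right: energy released if the mass is converted entirely to energy.
E = m_e * c**2      # about 8.2e-14 J, roughly 0.511 MeV

# Right to left: mass that a given amount of energy can create.
m = E / c**2        # recovers the electron mass
print(E, m)
```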

Calculus textbooks typically have a list of equations, either inside the covers or in an appendix, that relate an integral on the left to a function or number on the right. This makes sense because calculus students compute integrals. But mathematicians often apply these equations in the opposite direction, replacing a number or function with an integral. To a calculus student this is madness: why replace a familiar thing with a scary thing? But integrals aren’t scary to mathematicians. Expressing a function as an integral is often progress. Properties of a function may be easier to see in integral form. Also, the integral may lend itself to some computational technique, such as reversing the order of integration in a double integral, or reversing the order of taking a limit and an integral.
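As a small numerical illustration of trading a familiar thing for an integral, the number 6 = 3! has the integral representation ∫₀^∞ t³ e^(−t) dt, a value of the Gamma function. A crude Riemann sum recovers it:

```python
import math

# Riemann-sum approximation of the integral of t^3 * exp(-t) from 0 to
# infinity, truncated at t = 50. The exact value is Gamma(4) = 3! = 6.
h = 0.001
approx = h * sum((k * h)**3 * math.exp(-k * h) for k in range(1, 50_000))
print(approx)  # close to 6
```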

Calculus textbooks also have lists of equations involving infinite sums, the summation always being on the left. Calculus students want to replace the scary thing, the infinite sum, with the familiar thing, the expression on the right. Generating functions turn this around, wanting to replace things with infinite sums. Again this would seem crazy to a calculus student, but it’s a powerful problem solving technique.
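The simplest example is the geometric series Σ xⁿ = 1/(1−x). A calculus student reads it left to right, collapsing the sum to a closed form; generating functions read it right to left, expanding 1/(1−x) into a series whose coefficients carry information. A quick numerical check of the identity:

```python
x = 0.1
partial_sum = sum(x**n for n in range(50))  # 50 terms is plenty for x = 0.1
closed_form = 1 / (1 - x)
print(partial_sum, closed_form)  # both about 1.1111
```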

Differential equation students solve differential equations. They want to replace what they find scary, a differential equation, with something more familiar, a function that satisfies the differential equation. But mathematicians sometimes want to replace a function with a differential equation that it satisfies. This is common, for example, in studying special functions. Classical orthogonal polynomials satisfy 2nd order differential equations, and the differential equation takes a different form for different families of orthogonal polynomials. Why would you want to take something as tangible and familiar as a polynomial, something you might study as a sophomore in high school, and replace it with something as abstract and mysterious as a differential equation, something you might study as a sophomore in college? Because some properties, properties that you would not have cared about in high school, are more clearly seen via the differential equations.

# Pedantic arithmetic rules

Generations of math teachers have drilled into their students that they must reduce fractions. That serves some purpose in the early years, but somewhere along the way students need to learn that reducing fractions is not only unnecessary, but can be bad for communication. For example, if the fraction 45/365 comes up in the discussion of something that happened 45 days in a year, the fraction 45/365 is clearer than 9/73. The fraction 45/365 is not simpler in a number theoretic sense, but it is psychologically simpler since it’s obvious where the denominator came from. In this context, writing 9/73 is not a simplification but an obfuscation.
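Software often takes the teachers' side here: Python's Fraction class, for example, reduces automatically, so the context-revealing form is lost the moment you construct it.

```python
from fractions import Fraction

f = Fraction(45, 365)  # 45 days out of a 365-day year
print(f)               # prints 9/73 -- the obvious denominator 365 is gone
```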

Simplifying fractions sometimes makes things clearer, but not always. It depends on context, and context is something students don’t understand at first. So it makes sense to be pedantic at some stage, but then students need to learn that clear communication trumps pedantic conventions.

Along these lines, there is an old taboo against having radicals in the denominator of a fraction. For example, 3/√5 is not allowed and should be rewritten as 3√5/5. This is an arbitrary convention now, though there once was a practical reason for it, namely that in hand calculations it’s easier to multiply by a long decimal number than to divide by it. So, for example, if you had to reduce 3/√5 to a decimal in the old days, you’d look up √5 in a table to find it equals 2.2360679775. It would be easier to compute 0.6*2.2360679775 by hand than to compute 3/2.2360679775.
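A one-line check that the two forms agree, using the table value above:

```python
root5 = 2.2360679775     # sqrt(5), as read from the table
print(3 / root5)         # dividing by the long decimal: tedious by hand
print(0.6 * root5)       # the rationalized form 3*sqrt(5)/5: one multiplication
```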

As with unreduced fractions, radicals in the denominator might be not only mathematically equivalent but psychologically preferable. If there’s a 3 in some context, and a √5, then it may be clear that 3/√5 is their ratio. In that same context someone may look at 3√5/5 and ask “Where did that factor of 5 in the denominator come from?”

A possible justification for the rules above is that they provide standard forms that make grading easier. But this is only true for the simplest exercises. With moderately complicated exercises, following a student’s work is harder than determining whether two expressions represent the same number.

One final note on pedantic arithmetic rules: If the order of operations isn’t clear, make it clear. Add a pair of parentheses if you need to. Or write division operations as one thing above a horizontal bar and another below, not using the division symbol. Then you (and your reader) don’t have to worry whether, for example, multiplication has higher precedence than division or whether both have equal precedence and are carried out left to right.

# Confidence

Zig Ziglar said that if you increase your confidence, you increase your competence. I think that’s generally true. Of course you could be an idiot and become a more confident idiot. In that case confidence just makes things worse. But otherwise when you have more confidence, you explore more options, and in effect become more competent.

There are some things you may need to learn not for the content itself but for the confidence boost. Maybe you need to learn them so you can confidently say you didn’t need to. Also, some things you need to learn before you can see uses for them. (More on that theme here.)

I’ve learned several things backward in the sense of learning the advanced material before the elementary. For example, I studied PDEs in graduate school before having mastered the typical undergraduate differential equation curriculum. That nagged at me. I kept thinking I might find some use for the undergrad tricks. When I had a chance to teach the undergrad course a couple times, I increased my confidence. I also convinced myself that I didn’t need that material after all.

My experience with statistics was similar. I was writing research articles in statistics before I learned some of the introductory material. Once again the opportunity to teach the introductory material increased my confidence. The material wasn’t particularly useful, but the experience of having taught it was.

Related post: Psychological encapsulation

See Yeats’ poem The Second Coming:

The best lack all conviction, while the worst
Are full of passionate intensity.

# Elementary vs Foundational

Euclid’s proof that there are infinitely many primes is simple and ancient. This proof is given early in any course on number theory, and even then most students would have seen it before taking such a course.
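Euclid's argument is constructive enough to run: given any finite list of primes, some prime factor of their product plus one is missing from the list. A minimal sketch (the function name is my own):

```python
def new_prime(primes):
    """Given a finite list of primes, return a prime not in the list:
    any prime factor of (product of the list + 1) works, since dividing
    by any listed prime leaves remainder 1."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    # Find the smallest prime factor of n by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no smaller factor, so n itself is prime

print(new_prime([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, which is prime
```

The returned prime need not be the product plus one itself: for [2, 7] the product plus one is 15, and the function returns its factor 3.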

There are also many other proofs of the infinitude of primes that use more sophisticated arguments. For example, here is such a proof by Paul Erdős. Another proof shows that there must be infinitely many primes because the sum of the reciprocals of the primes diverges. There’s even a proof that uses topology.

When I first saw one of these proofs, I wondered whether they were circular. When you use advanced math to prove something elementary, there’s a chance you could use a result that depends on the very thing you’re trying to prove. The proofs are not circular as far as I know, and this is curious: the fact that there are infinitely many primes is elementary but not foundational. It’s elementary in that it is presented early on and it builds on very little. But it is not foundational. You don’t continue to use it to prove more things, at least not right away. You can develop a great deal of number theory without using the fact that there are infinitely many primes.

The Fundamental Theorem of Algebra is an example in the other direction, something that is foundational but not elementary. It’s stated and used in high school algebra texts but the usual proof depends on Liouville’s theorem from complex analysis.

It’s helpful to distinguish which things are elementary and which are foundational when you’re learning something new so you can emphasize the most important things. But without some guidance, you can’t know what will be foundational until later.

The notion of what is foundational, however, is conventional. It has to do with the order in which things are presented and proved, and sometimes this changes. Sometimes in hindsight we realize that the development could be simplified by changing the order, considering something foundational that wasn’t before. One example is Cauchy’s theorem. It’s now foundational in complex analysis: textbooks prove it as soon as possible, then use it to prove things for the rest of the course. But historically, Cauchy’s theorem came after many of the results it is now used to prove.

Related: Advanced or just obscure?