Public reaction to Ebola

Ebola elicits two kinds of reactions in the US. Some think we are in imminent danger of an Ebola epidemic. Others think Ebola poses absolutely zero danger and that those who think otherwise are kooks.

Nothing can be discussed rationally. Even narrow scientific questions lead to emotionally-charged political arguments. Those who have a different opinion must be maligned.

The big question is whether the Ebola virus can spread by air. Experts say “probably not” but some are cautious. For example, Ebola researcher C. J. Peters says “We just don’t have the data to exclude it.” But people who know absolutely nothing about virology are firmly convinced one way or the other.

John Napier

Julian Havil has written a new book John Napier: Life, Logarithms, and Legacy.

I haven’t read more than the introduction yet — a review copy arrived just yesterday — but I imagine it’s good judging by who wrote it. Havil’s book Gamma is my favorite popular math book. (Maybe I should say “semi-popular.” Havil’s books have more mathematical substance than most popular books, but they’re still aimed at a wide audience. I think he strikes a nice balance.) His latest book is a scientific biography, a biography with an unusual number of equations and diagrams.

Napier is best known for his discovery of logarithms. (People debate endlessly whether mathematics is discovered or invented. Logarithms are so natural — pardon the pun — that I say they were discovered. I might describe other mathematical objects, such as Grothendieck’s schemes, as inventions.) He is also known for his work with spherical trigonometry, such as Napier’s mnemonic. Maybe Napier should be known for other things I won’t know about until I finish reading Havil’s book.

Taking responsibility for the mistakes of others

The version of Windows following 8.1 will be Windows 10, not Windows 9. Apparently this is because Microsoft knows that a lot of software naively inspects the beginning of the operating system name, concluding that it must be Windows 95 or Windows 98 if the name starts with “Windows 9.”
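The kind of check in question can be sketched in a few lines. This is hypothetical illustration code, not taken from any particular product: a naive prefix test on the OS name matches Windows 95 and Windows 98, and would also have matched a Windows 9.

```python
def is_windows_9x(os_name: str) -> bool:
    # Naive check: anything starting with "Windows 9" is assumed to be
    # the Windows 95/98 family. A hypothetical "Windows 9" would match too.
    return os_name.startswith("Windows 9")

for name in ["Windows 95", "Windows 98", "Windows 9", "Windows 10"]:
    print(name, "->", is_windows_9x(name))
```

Skipping the name "Windows 9" sidesteps the false positive entirely, at the cost of a gap in the numbering.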

Many think this is stupid. They say that Microsoft should call the next version Windows 9, and if somebody’s dumb code breaks, it’s their own fault.

People who think that way aren’t billionaires. Microsoft got where it is, in part, because they have enough business savvy to take responsibility for problems that are not their fault but that would be perceived as being their fault.

Wouldn’t trade places

Last week at the Heidelberg Laureate Forum, I was surrounded by the most successful researchers in math and computer science. The laureates had all won the Fields Medal, Abel Prize, Nevanlinna Prize, or Turing Award. Some had even won two of these awards.

I thought about my short academic career [1]. If I had been wildly successful, the most I could hope for would be to be one of these laureates. And yet I wouldn’t trade places with any of them. I’d rather do what I’m doing now than have an endowed chair at some university. Consulting suits me very well. I could see teaching again someday, maybe in semi-retirement, but I hope to never see another grant proposal.

***

[1] I either left academia once or twice, depending on whether you count my stint at MD Anderson as academic. I’d call my position there, and even the institution as a whole, quasi-academic. I did research and some teaching there, but I also did software development and project management. The institution is a hospital, a university, a business, and a state agency; it can be confusing to navigate.

The great reformulation of algebraic geometry

“Tate helped shape the great reformulation of arithmetic and geometry which has taken place since the 1950’s.” — Andrew Wiles

At the Heidelberg Laureate Forum I had a chance to interview John Tate. In his remarks below, Tate briefly comments on his early work on number theory and cohomology. Most of the post consists of his comments on the work of Alexander Grothendieck.

***

JT: My first significant work after my thesis was to determine the cohomology groups of class field theory. The creators of the theory, including my thesis advisor Emil Artin, didn’t think in terms of cohomology, but their work could be interpreted as finding the cohomology groups H⁰, H¹, and H².

I was invited to give a series of three talks at MIT on class field theory. I’d been at a party, and I came home and thought about what I’d talk about. And I got this great idea: I realized I could say what all the higher groups are. In a sense it was a disappointing answer, though it didn’t occur to me then: there’s nothing new in them; they were determined by the great work that had already been done. For that I got the Cole Prize in number theory.

Later when I gave a talk on this work people would say “This is number theory?!” because it was all about cohomology groups.

JC: Can you explain what the great reformulation was that Andrew Wiles spoke of? Was it this greater emphasis on cohomology?

JT: Well, in the class field theory situation it would have been. And there I played a relatively minor part. The big reformulation of algebraic geometry was done by Grothendieck, the theory of schemes. That was really such a great thing, that unified number theory and algebraic geometry. Before Grothendieck, going between characteristic 0, finite characteristic 2, 3, etc. was a mess.

Grothendieck’s system just gave the right framework. We now speak of arithmetic algebraic geometry, which means studying problems in number theory by using your geometric intuition. The perfect background for that is the theory of schemes. ….

Grothendieck’s ideas [about sheaves] were so simple. People had looked at such things in particular cases: Dedekind rings, Noetherian rings, Krull rings, …. Grothendieck said take any ring. … He just had an instinct for the right degree of generality. Some people make things too general, and they’re not of any use. But he just had an instinct to put whatever theory he thought about in the most general setting that was still useful. Not generalization for generalization’s sake but the right generalization. He was unbelievable.

He started schemes about the time I got serious about algebraic geometry, as opposed to number theory. But the algebraic geometers classically had affine varieties, projective varieties, … It seemed kinda weird to me. But with schemes you had a category, and that immediately appealed to me. In the classical algebraic geometry there are all these birational maps, or rational maps, and they’re not defined everywhere because they have singularities. All of that was cleared up immediately from the outset with schemes. ….

There’s a classical algebraic geometer at Harvard, Joe Harris, who works mostly over the complex numbers. I asked him whether Grothendieck made much of a difference in the classical case — I knew for number theorists he had made a tremendous difference — and Joe Harris said yes indeed. It was a revolution for classical algebraic geometry too.

Mental crypto footnotes

I spoke with Manuel Blum this afternoon about his password scheme described here. This post is a few footnotes based on that conversation.

When I mentioned that some people had reacted to the original post saying the scheme was too hard, Blum said that he has taught the scheme to a couple of children, ages 6 and 9, who can use it.

He also said that many people have asked for his slide summarizing the method, and he asked if I could post it. You can save the image below to get the full-sized slide.

This slide and my blog post both use a 3-digit password for illustration, though obviously a 3-digit password would be easy to guess by brute force. I asked Blum how long a password using his scheme would need to be so that no one with a laptop would be able to break it. He said that 12 digits should be enough. Note that this assumes the attacker has access to many of your passwords created using the scheme, which would be highly unlikely.

[slide: Blum’s mental crypto password scheme]

Proof maintenance

Leslie Lamport coined the phrase “proof maintenance” to describe the process of producing variations of a proof over time.

It’s well known that software needs to be maintained; most of the work on a program occurs after it is “finished.” Proof maintenance is common as well, but it is usually very informal.

Proofs of any significant length have an implicit hierarchical structure of sub-proofs and sub-sub-proofs etc. Sub-proofs may be labeled as lemmas, but that’s usually the extent of the organization. Also, the requirements of a lemma may not be precisely stated, and the propositions used to prove the lemma may not be explicitly referenced. Lamport recommends making the hierarchical structure more formal and fine-grained, extending the sub-divisions of the proof down to propositions that take only two or three lines to prove. See his paper How to write a 21st century proof.
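As a sketch of the hierarchical style Lamport advocates, here is a toy theorem written with his step-numbering notation. The theorem and its decomposition are my own illustration, not taken from his paper:

```
THEOREM. If n is an odd integer, then n² is odd.
  <1>1. n = 2k + 1 for some integer k.
        PROOF: Definition of odd.
  <1>2. n² = 2(2k² + 2k) + 1.
        PROOF: Expand (2k + 1)² using <1>1.
  <1>3. Q.E.D.
        PROOF: By <1>2, n² = 2m + 1 with m = 2k² + 2k, so n² is odd.
```

Each step names exactly the earlier steps it depends on, which is what lets you see how far a change to one step propagates.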

When proofs have this structure, you can see which parts of a proof need to be modified in order to produce a proof of a new related theorem. Software could help you identify these parts, just as software tools can show you the impact of changing one part of a large program.

Uses for orthogonal polynomials

When I interviewed Daniel Spielman at this year’s Heidelberg Laureate Forum, we began our conversation by looking for common mathematical ground. The first thing that came up was orthogonal polynomials. (If you’re wondering what it means for two polynomials to be orthogonal, see here.)

JC: Orthogonal polynomials are kind of a lost art, a topic that was common knowledge among mathematicians maybe 50 or 100 years ago but is now obscure.

DS: The first course I taught I spent a few lectures on orthogonal polynomials because they kept coming up as the solutions to problems in different areas that I cared about. Chebyshev polynomials come up in understanding solving systems of linear equations, such as if you want to understand how the conjugate gradient method behaves. The analysis of error correcting codes and sphere packing has a lot of orthogonal polynomials in it. They came up in a course in multi-linear algebra I had in grad school. And they come up in matching polynomials of graphs, which is something people don’t study much anymore. … They’re coming back. They come up a lot in random matrix theory. … There are certain things that come up again and again and again so you got to know what they are.
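As a small illustration of the orthogonality Spielman refers to, here is a sketch that checks it numerically for Chebyshev polynomials. The function name and node count are mine; it relies on the identity T_k(cos t) = cos(kt) and Gauss–Chebyshev quadrature, which is exact for these integrands when m + n < 2 × nodes.

```python
import math

def chebyshev_inner(m, n, nodes=32):
    """Inner product of Chebyshev polynomials T_m and T_n with weight
    1/sqrt(1 - x^2) on [-1, 1], via Gauss-Chebyshev quadrature."""
    total = 0.0
    for k in range(1, nodes + 1):
        t = (2 * k - 1) * math.pi / (2 * nodes)  # quadrature angles
        total += math.cos(m * t) * math.cos(n * t)
    return math.pi * total / nodes

print(chebyshev_inner(3, 5))  # ≈ 0: distinct T_m, T_n are orthogonal
print(chebyshev_inner(4, 4))  # ≈ π/2: the squared norm for m = n ≥ 1
```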


Mathematical beauty

Michael Atiyah quoted Hermann Weyl in the opening talk at the second Heidelberg Laureate Forum:

I believe there is, in mathematics, in contrast to the experimental disciplines, a character which is nearer to that of free creative art.

There is evidence that the relation between artistic beauty and mathematical beauty is more than an analogy. Michael Atiyah recently published a paper with Semir Zeki and others suggesting that the same part of the brain responds to both.

Love locks

If you walk across the Seine in Paris on the Pont des Arts you’ll see thousands and thousands of love locks. I saw this morning that Heidelberg has its own modest collection of love locks on the Old Bridge across the Neckar.

love locks on Old Bridge across Neckar

These may be new. If they were here last year, I didn’t notice them.

There are locks at several other points along the Old Bridge, but nowhere in large numbers.

love locks on Pont des Arts across Seine

Photo credit: Disdero via Wikimedia Commons

Common sense and statistics

College courses often begin by trying to weaken your confidence in common sense. For example, a psychology course might start by presenting optical illusions to show that there are limits to your ability to perceive the world accurately. I’ve seen at least one physics textbook that also starts with optical illusions to emphasize the need for measurement. Optical illusions, however, take considerable skill to create. The fact that they are so contrived illustrates that your perception of the world is actually pretty good in ordinary circumstances.

For several years I’ve thought about the interplay of statistics and common sense. Probability is more abstract than physical properties like length or color, and so common sense is more often misguided in the context of probability than in visual perception. In probability and statistics, the analogs of optical illusions are usually called paradoxes: St. Petersburg paradox, Simpson’s paradox, Lindley’s paradox, etc. These paradoxes show that common sense can be seriously wrong, without having to consider contrived examples. Instances of Simpson’s paradox, for example, pop up regularly in application.
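To make the point concrete, here is a minimal numeric instance of Simpson’s paradox, using the success counts from the classic kidney-stone treatment study; the code itself is just an illustrative sketch.

```python
# Treatment A has the higher success rate within each severity group,
# yet the lower success rate overall, because the treatments were
# applied to the groups in very different proportions.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},  # (successes, trials)
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

for group, arms in data.items():
    a, b = rate(*arms["A"]), rate(*arms["B"])
    print(f"{group}: A {a:.1%} vs B {b:.1%}")  # A wins in each group

# Pool the groups: sum successes and trials for each treatment.
totals = {arm: [sum(vals) for vals in zip(*(g[arm] for g in data.values()))]
          for arm in ("A", "B")}
a_all, b_all = rate(*totals["A"]), rate(*totals["B"])
print(f"overall: A {a_all:.1%} vs B {b_all:.1%}")  # B wins overall
```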

Some physicists say that you should always have an order-of-magnitude idea of what a result will be before you calculate it. This implies a belief that such estimates are usually possible, and that they provide a sanity check for calculations. And that’s true in physics, at least in mechanics. In probability, however, it is quite common for even an expert’s intuition to be way off. Calculations are more likely to find errors in common sense than the other way around.

Nevertheless, common sense is vitally important in statistics. Attempts to minimize the need for common sense can lead to nonsense. You need common sense to formulate a statistical model and to interpret inferences from that model. Statistics is a layer of exact calculation sandwiched between necessarily subjective formulation and interpretation. Even though common sense can go badly wrong with probability, it can also do quite well in some contexts. Common sense is necessary to map probability theory to applications and to evaluate how well that map works.

Prevent errors or fix errors

The other day I was driving by our veterinarian’s office and saw that the marquee said something like “Prevention is less expensive than treatment.” That’s sometimes true, but certainly not always.

This evening I ran across a couple lines from Ed Catmull that are more accurate than the vet’s quote.

Do not fall for the illusion that by preventing errors, you won’t have errors to fix. The truth is, the cost of preventing errors is often far greater than the cost of fixing them.

From Creativity, Inc.

Sum of geometric means

Let xₙ be a sequence of non-negative numbers. Then the sum of their running geometric means is bounded by e times their sum. In symbols

\sum_{n=1}^\infty \left(x_1 x_2 \cdots x_n\right)^{1/n} \leq e \sum_{n=1}^\infty x_n

The inequality is strict unless all the xₙ are zero, and the constant e on the right side is optimal. Torsten Carleman proved this theorem in 1923.
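A quick numerical sanity check, as an illustrative sketch rather than anything from Carleman’s proof: the function below assumes strictly positive terms (so logarithms can be used), and the harmonic sequence 1/n shows the ratio of the two sums creeping toward e from below, which is why the constant cannot be improved.

```python
import math

def carleman_ratio(xs):
    """Ratio of the sum of running geometric means of xs to the plain
    sum of xs. Carleman's inequality says this stays below e."""
    gm_sum = 0.0
    log_prod = 0.0
    for n, x in enumerate(xs, start=1):
        log_prod += math.log(x)           # log of x1 * ... * xn
        gm_sum += math.exp(log_prod / n)  # (x1 * ... * xn)^(1/n)
    return gm_sum / sum(xs)

# The harmonic sequence pushes the ratio toward e.
xs = [1.0 / n for n in range(1, 100001)]
print(carleman_ratio(xs))  # below e ≈ 2.71828
```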