Daniel Lemire wrote a blog post this morning that ties together a couple themes previously discussed here.
Most published math papers contain errors, and yet there have been surprisingly few “major screw-ups” as defined by Mark Dominus. Daniel Lemire’s post quotes Doron Zeilberger on why these frequent errors are often benign.
Most mathematical papers are leaves in the web of knowledge, that no one reads, or will ever use to prove something else. The results that are used again and again are mostly lemmas, that while a priori non-trivial, once known, their proof is transparent. (Zeilberger’s Opinion 91)
Those papers that are “branches” rather than “leaves” receive more scrutiny and are more likely to be correct.
Zeilberger says lemmas get reused more than theorems. This dovetails with Mandelbrot’s observation mentioned a few weeks ago.
Many creative minds overrate their most baroque works, and underrate the simple ones. When history reverses such judgments, prolific writers come to be best remembered as authors of “lemmas,” of propositions they had felt “too simple” in themselves and had to be published solely as preludes to forgotten theorems.
There are obvious analogies to software. Software that many people use has fewer bugs than software that few people use, just as theorems that people build on have fewer bugs than “leaves in the web of knowledge.” Useful subroutines and libraries are more likely to be reused than complete programs. And as Donald Knuth pointed out, re-editable code is better than black-box reusable code.
Everybody knows that software has bugs, but not everyone realizes how buggy theorems are. Bugs in software are more obvious because paper doesn’t abort. Proofs and programs are complementary forms of validation. Attempting to prove the correctness of an algorithm certainly reduces the chances of a bug, but proofs are fallible as well. As Knuth once said, “Beware of bugs in the above code; I have only proved it correct, not tried it.” Not only can programs benefit from being more proof-like, proofs can benefit from being more program-like.
I’d add the qualification that for well-used software to have fewer bugs, the users must recognize the bugs and the bugs must be fixed once recognized. Neither of those is certain, and in the case of closed-source software the latter depends on users reporting the bugs and the owners of the source fixing them.
I suspect a lot of the bugs in published math are rarely recognized and, when recognized, are not necessarily reported to the author.
For cases where the bugs are obvious but no one bothers to report them, think about popular literature — widely read, but often full of unintentional spelling and grammar errors.
Actually, spelling and grammar errors are extremely common in the electronic realm, even in material that is read extensively. Even news reporting agencies frequently make such errors, but I doubt they are often reported — most readers ignore them, if they even catch them. You might argue that this is not so important since these reports are ephemeral, but archiving is nearly universal.
Excellent post.
If I might add something: software can help debug math, and vice versa.
1) Sometimes, you can determine some mathematical expectations the software must meet. Is your merge sort taking n^3 time to sort an array? Something is wrong.
2) [More interesting] Do you claim that a given quantity grows as “1.4 * x^2”? Then let me see a plot generated from a computer simulation. You can’t simulate it? Really?
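The first check above can be automated. Here is a minimal sketch: a merge sort instrumented to count comparisons (counting is deterministic, unlike wall-clock timing), with the empirical growth exponent estimated from two input sizes. The function names are illustrative, not from any particular library. For merge sort the exponent should land near 1 (n log n growth), nowhere near 3.

```python
import math
import random

def merge_sort(a, counter):
    """Recursive merge sort that tallies element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def growth_exponent(n1, n2):
    """Estimate k in comparisons ~ n^k from two problem sizes."""
    random.seed(0)
    counts = []
    for n in (n1, n2):
        counter = [0]
        merge_sort([random.random() for _ in range(n)], counter)
        counts.append(counter[0])
    # slope on a log-log plot between the two measured points
    return math.log(counts[1] / counts[0]) / math.log(n2 / n1)

if __name__ == "__main__":
    k = growth_exponent(1024, 8192)
    print(f"measured growth exponent: {k:.2f}")  # ~1.1 for n log n
```

The same log-log slope idea answers the second question: simulate the quantity at several values of x and check that the fitted exponent is close to 2 before believing the claimed 1.4 * x^2.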