Jenga is a game where you start with a tower of wooden blocks and take turns removing blocks until someone makes the tower collapse. A style of mathematics analogous to Jenga reached the height of its popularity about 40 years ago and then fell out of fashion. I use the phrase “Jenga mathematics” to refer to generalizing a well-known theorem by weakening its hypotheses, seeing how many blocks you can pull out before the tower falls.
Many 20th century mathematicians spent their careers going over the work of 19th century mathematicians, removing every hypothesis they could. Sometimes a 20th century mathematician would get his name tacked on to a 19th century theorem due to his Jenga accomplishments.
Taken to extremes, Jenga mathematics turns theorems inside-out: proofs become hypotheses. Natural hypotheses are replaced with a laundry list of properties necessary to make the proof work. Start with some theorem of the form “Let X be a widget. Then X has a foozle.” Go back over the proof and see just what features of a widget are needed for the proof. Then restate the theorem as “Let X have the following apparently arbitrary list of properties necessary for my proof to work. Then X has a foozle.” Never mind whether anybody can think of anything other than a widget that satisfies the hypotheses of the new theorem.
Jenga mathematics is no longer fashionable. Mathematicians still value removing unneeded hypotheses, but they’re not as willing to go to extremes to do so. They are more interested in building new towers than in removing every piece possible from old towers.
Can you give some examples?
The Stone-Weierstrass theorem is an example (though I wouldn’t call the generalization useless…).
The Stone-Weierstrass theorem is an excellent example. Stone generalized the Weierstrass approximation theorem, but the essential idea belonged to Weierstrass; Stone shouldn’t get top billing.
I agree with Thomas Nyberg that Stone’s generalization is useful. Stone not only weakened the hypotheses, he simplified the proof. But the S-W theorem is starting to become inverted: some of the proof details are exposed in the hypotheses. Further generalizations are less original and more inverted.
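For reference, the two statements side by side. Weierstrass: every continuous function on a closed interval [a, b] is a uniform limit of polynomials. Stone: if X is a compact Hausdorff space and A is a subalgebra of C(X, ℝ) that contains the constants and separates points, then A is dense in C(X, ℝ) in the uniform norm:

$$A \subseteq C(X,\mathbb{R}) \text{ a subalgebra}, \quad 1 \in A, \quad A \text{ separates points} \ \implies \ \overline{A} = C(X,\mathbb{R}).$$

The hypotheses (subalgebra, constants, separating points) are exactly the ingredients the proof consumes, which is the inversion being described above.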
Many theorems from number theory were turned into abstract algebra theorems simply by applying vocabulary that didn’t exist when the theorem was originally stated. That’s valuable, but the theorem shouldn’t be named after the person who updated the language.
I’ve tried to think of egregious examples of inverted proofs, but such theorems are inherently unmemorable.
I think the notion of a “semi-locally simply connected” topological space is Jenga mathematics; it’s exactly the condition on a base space that makes the construction of covering spaces (in particular, a universal cover) work. Unless it has other consequences I don’t know about.
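For reference, the definition: a space X is semi-locally simply connected if every point x has a neighborhood U such that the inclusion-induced homomorphism

$$\pi_1(U, x) \longrightarrow \pi_1(X, x)$$

is trivial. Loops in U need not contract within U, only within X, which is why the condition looks tailor-made for the covering space construction.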
The axioms for homology theory come to mind, too. These are first verified as theorems in a simple case such as simplicial homology, and then become the defining axioms for any generic homology theory. Or, much more simply, think of the axioms for a metric, which were first known to hold for the usual Euclidean distance.
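For reference, the metric axioms distill exactly the familiar properties of Euclidean distance: a metric on a set X is a function d : X × X → [0, ∞) such that, for all x, y, z in X,

$$d(x,y) = 0 \iff x = y, \qquad d(x,y) = d(y,x), \qquad d(x,z) \le d(x,y) + d(y,z).$$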
Oooo… How about the Riemann-Roch-Atiyah-Hirzebruch (did I leave anyone out?) theorem?
I have to disagree with the last two examples. Given two homology theories, it is very hard to check that they agree. Topologists at the beginning of the twentieth century had simplicial homology, singular homology, de Rham cohomology, Čech homology and no doubt some others I am missing. Being able to prove they coincided by checking the axioms was a major breakthrough. Thus, I would say that the axiomatizing of homology was not an example of Jenga math but the discovery of a great strategy for proving that homology theories coincide.
I might be willing to agree that the Grothendieck-Riemann-Roch formula is an example of Jenga math. (Yes, I am leaving out Atiyah, Singer, Bott, Hirzebruch and possibly others.) However, if so, then it is an example of why Jenga math is sometimes worth doing. There are three major improvements from the original RR to the current version, and all of them are important for problems that mathematicians actually care about every day.
The original RR considered a holomorphic line bundle L on a complex curve X. We have a “derivative” (the proper term is a dbar-connection) on L, whose kernel consists of the holomorphic sections. The original RR expressed the holomorphic Euler characteristic (roughly, the dimension of the kernel of this operator minus the dimension of its cokernel) in terms of purely topological facts about L and X.
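Concretely, for a line bundle L of degree d on a smooth projective curve X of genus g, the classical statement reads

$$\chi(L) = \dim H^0(X, L) - \dim H^1(X, L) = d + 1 - g,$$

where the left side is the analytic quantity (kernel minus cokernel of the dbar operator) and the right side involves only the topological invariants d and g.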
This is useful, but mathematicians want to study complex varieties other than curves. Hirzebruch figured out how to permit X to be a smooth compact complex variety of any dimension, and L to be a vector bundle rather than a line bundle.
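Hirzebruch’s version, for reference: if L is a holomorphic vector bundle on a compact complex manifold X,

$$\chi(X, L) = \int_X \mathrm{ch}(L)\,\mathrm{td}(T_X),$$

with ch the Chern character of L and td the Todd class of the tangent bundle, both purely topological invariants.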
Atiyah, Singer and Bott figured out how to replace dbar by more general differential operators. I am not clear on why this is useful, but my friends in differential geometry tell me they use the more general form all the time.
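The generalization they proved is the index theorem: for an elliptic differential operator D on a compact manifold X, the analytic index

$$\operatorname{ind}(D) = \dim \ker D - \dim \operatorname{coker} D$$

is computed by a purely topological formula built from the symbol of D and characteristic classes of X (the precise expression, involving ch(σ(D)) and a Todd class, depends on the source’s conventions). Taking D to be the dbar operator recovers Hirzebruch’s theorem.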
Finally, Grothendieck figured out how to handle the case where L and X vary in a continuous family, and you want to understand how the holomorphic Euler characteristic varies. He realized that the holomorphic Euler characteristic describes, in a formal sense, the “variation when mapping to a point,” and he forced himself to write the proof so that it worked for any (proper) map. This shows up all the time in my own work, and in the work of tons of other algebraic geometers. Moreover, by forcing himself to consider the problem in such a high level of generality, he actually came up with one of the shortest and most elegant proofs.
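For reference, the statement: for a proper map f : X → Y of smooth varieties and a sheaf F on X,

$$\mathrm{ch}(f_{!}F)\,\mathrm{td}(T_Y) = f_{*}\big(\mathrm{ch}(F)\,\mathrm{td}(T_X)\big)$$

in the cohomology (or Chow ring) of Y, where f_!F is the alternating sum of the higher direct images. Taking Y to be a point, f_* becomes “integrate over X,” and the formula collapses to Hirzebruch’s version.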