In 1989, *Star Trek: The Next Generation* aired “The Royale.” In this episode we learn that Captain Picard tries his hand at proving Fermat’s Last Theorem (FLT) in his spare time. The writers must have believed that FLT would still be unresolved in the 24th century. Four years after “The Royale,” Andrew Wiles announced his proof of FLT. There was a flaw in Wiles’ first proof, but it was patched two years later, in 1995.

Richard Feynman also tried his hand at FLT. He wrote an unpublished paper in which he gave a pseudo-proof of FLT based on probability. Feynman said that the probability of FLT being false was “certainly less than 10^-200.” The argument was highly creative, sketchy, and ultimately nonsensical. Paul Nahin concludes:

Feynman’s probabilistic analysis of Fermat’s Last Theorem would have no mathematical interest at all but for the fact it was Feynman who cooked it up.

Source: Number-Crunching by Paul Nahin.

Can you give a little inkling of a sketch of Feynman’s probabilistic argument? I can’t imagine how one would get to 10^-200 … or even how one would start talking about the probability of a mathematical theorem. What’s the probability that 7 is prime? What’s the chance that 13+17=30 today, weatherman?

Considering that « […] cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet. » (“I have discovered a truly marvelous proof of this, which this margin is too narrow to contain”), that a margin is less than 1/7 of a page (say, about 3 cm of 21), and finally that Wiles’ FLT proof is about 100 pages, I came to the conjecture that Fermat’s last theorem was probably true with 1 − 1/7000 confidence. Since this bound is less tight than Feynman’s, I am afraid that this “one small step for a man” will remain buried in the annals of science, which is, very likely, unfair. Or maybe I should have used Bayes’ theorem.

I read somewhere (Scott Aaronson’s blog?) that Feynman did not believe P=NP was an open question.

You can read much of the Feynman story in the Google books preview.

Feynman’s argument (assuming I’m piecing it together correctly; this book is checked out from our library) looks like it can be adapted to the Fermat-Catalan conjecture, which states that a^m + b^n = c^k has only finitely many solutions with a, b, c coprime and 1/m + 1/n + 1/k < 1 (that is, where the exponents are not too small).
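Since the “not too small” condition is purely about the exponent triple, it can be checked mechanically. A minimal sketch (the function name is mine, not from any library), using exact rational arithmetic to avoid floating-point edge cases:

```python
# Check the Fermat-Catalan exponent condition 1/m + 1/n + 1/k < 1
# using exact rational arithmetic.
from fractions import Fraction

def fermat_catalan_exponents(m, n, k):
    """True if (m, n, k) satisfies the 'not too small' condition."""
    return Fraction(1, m) + Fraction(1, n) + Fraction(1, k) < 1

# (2, 3, 7) qualifies; (2, 3, 6) sums to exactly 1 and does not.
print(fermat_catalan_exponents(2, 3, 7))  # True
print(fermat_catalan_exponents(2, 3, 6))  # False

# One of the handful of known solutions: 1^7 + 2^3 = 3^2, with
# qualifying exponents (7, 3, 2).
assert 1**7 + 2**3 == 3**2 and fermat_catalan_exponents(7, 3, 2)
```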

Just the other day I watched this great documentary about Andrew Wiles’ work to prove FLT over at Google video. It’s a great story.

Maybe it would have been better to pick the Riemann Hypothesis. David Hilbert is supposed to have said, “If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?” Might be an open question in the 24th century!

I think Feynman used “detrahendum ex mitram”, which is I think the only sensible method, besides Bayesian techniques.

BTW, Bayesian methods were mentioned in another ST:TNG episode. I don’t know which one, but it was mentioned as an epilogue of sorts to a talk on the expected useful life of an underground radioactive waste containment facility. Naturally, without having millennia to conduct a direct experiment, they used Bayesian techniques.

@human mathematics,

The probability that 7 is prime is 1; the chance that 13 + 17 = 30 depends on the numbering system base, but it is either 100% or 0%.
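Whether 13 + 17 = 30 holds does indeed depend on the base. A quick sketch (the helper name is mine) that solves for the bases in which the digit strings balance, restricted to bases where the digit 7 is legal:

```python
# In which base b does the digit-string equation "13 + 17 = 30" hold?
# Interpret each numeral in base b; the digit 7 requires b >= 8.
def holds_in_base(b):
    return int("13", b) + int("17", b) == int("30", b)

# (b+3) + (b+7) = 3b forces b = 10, so base ten is the only one.
print([b for b in range(8, 17) if holds_in_base(b)])  # [10]
```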

You’ve hit the nail on the head of the principal philosophical objection to Bayesian statistical methods. As you rightly point out, it seems to make little sense to talk about the probability that objective factual statements are true; by definition the probability is either 1 or 0, depending on the truth of the statement.

However, assigning probabilities to factual statements can either be regarded as consistent with a philosophy which denies objective truth, or can be regarded as a measure of how much we know about whether the statement is true or false.

So regardless of how Feynman arrived at his estimate, it can be interpreted most charitably as his expression that he is extremely certain that Fermat’s Last Theorem is in fact true. Which is why I think it would have been most sensible to use the “detrahendum ex mitram” method or its more vulgar cousin to obtain the actual estimate.

Less charitably it could be interpreted as his actual estimate of the distribution of a variable which is in fact random. I say less charitably mostly because I think he would have been insulted if it were interpreted that way, however, I do know at least one famous statistician who in fact believes that all reality is subjective. That person would (I presume) have no difficulty with assigning to mathematical statements probabilities of truth which are not 0 or 1.

But for those of us who believe in objective truth, Bayesian statistical methods can be a stumbling block unless we view them either as pure calculation algorithms or as dealing (in at least some cases) with quantities which are not actually probabilities, although they are labeled and treated as such.

Hi human mathematics and @John Venier,

It is not necessarily true that the probability of a given mathematical proposition is either zero or unity. Any quantity which obeys, say, the Kolmogorov axioms is a probability. So there are different concepts or interpretations of probability, like relative frequency, logical, or subjective probability. It is at least arguable that degrees of rational commitment or betting behavior ought to obey the axioms of probability theory.

If we have a subjective interpretation of probability like that, then we can reasonably talk about non-trivial probabilities for mathematical propositions. For example, take “standard Zermelo-Fraenkel set theory is logically consistent.” This is either true or it is not, and it would be true in another possible world if it is true in this one, and false in another possible world if it is false in this one. If ZF is (in)consistent, then it is necessarily (in)consistent. At the same time, we can say that ZF is *probably* consistent; that is, we can rationally have a high degree of confidence that ZF is consistent, or that it’s a good bet that it is. That judgment is based on the fact that ZF has been used extensively by many mathematicians over a long time and no contradiction has yet been discovered. That doesn’t mean that there is no such contradiction, just that there *probably* isn’t.

I first saw Feynman give his “proof” of FLT in a lecture titled “Applications of Mathematics to Mathematics.” I don’t know whether he believed in the proof or was just pushing the mathematicians’ buttons; he was very successful in getting a strong reaction out of them. I will describe what he did notionally. The all-caps parenthetical is an invalid assumption, and I think he knew so. He took all that had been proven about FLT at the time (1967, I believe). This gave him a lower bound for any solution of a^n + b^n = c^n, along with several other constraints. Given these constraints and the density of nth powers as a function of the root, he calculated the probability (IF THE POWERS WERE RANDOMLY DISTRIBUTED WITH THIS DENSITY) that three nth powers would be related in the desired manner. He then integrated this probability over the roots and the power n, from the established lower bounds to infinity, to get his probability.
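Piecing that description together, the heuristic can be sketched numerically. Everything below is my reconstruction under the stated (invalid) randomness assumption; the density formula, cutoffs, and function names are mine, and this is not Feynman’s actual calculation:

```python
# Sketch of the Feynman-style heuristic (a reconstruction, not his actual
# computation): pretend n-th powers are randomly scattered with the right
# density.  There are about x**(1/n) perfect n-th powers below x, so the
# "probability" that a given integer near x is an n-th power is roughly
#     d(x) = (1/n) * x**(1/n - 1).
# The chance that a**n + b**n happens to be an n-th power is then
# d(a**n + b**n); summing over a, b, and n estimates the "probability"
# that FLT fails at all.

def density(x, n):
    """Heuristic probability that an integer near x is a perfect n-th power."""
    return x ** (1.0 / n - 1.0) / n

def failure_probability(n, a_min=2, a_max=500):
    """Sum of d(a**n + b**n) over a_min <= a <= b <= a_max (truncated tail)."""
    total = 0.0
    for a in range(a_min, a_max + 1):
        for b in range(a, a_max + 1):
            total += density(a ** n + b ** n, n)
    return total

# The sums shrink rapidly as n grows; for n = 3 the sum grows like the log
# of the cutoff, one reason the argument is purely heuristic.  Raising
# a_min to the enormous lower bounds established by 1967 is what would
# drive the overall estimate down toward something like Feynman's 10**-200.
for n in (3, 4, 5):
    print(n, failure_probability(n))
```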

You can view the polynomial x^n as n stacks of blocks, each stack having between 0 and x-1 blocks. FLT is a pain due to unordered “rows”; if you allow the swapping of rows in mid-air, the “row ordered” FLT is much easier to prove. (My MS thesis, http://www.integers-ejcnt.org/vol8.html “A Combinatorial Interpretation of the Poly-Bernoulli Numbers and Two Fermat Analogues”)

Something similar to Polya’s theorem for enumerating graphs http://en.wikipedia.org/wiki/P%C3%B3lya_enumeration_theorem is at play. Enumerating those row swap symmetries and mapping them back to the “row ordered” version would yield an elegant combinatorial proof.
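The block-stacking view above can be made concrete. A small sketch (names mine), assuming the natural reading that x^n counts n-tuples of stack heights drawn from {0, …, x-1}:

```python
# The "stacks of blocks" view: x**n counts the ways to build n stacks,
# each holding between 0 and x-1 blocks, i.e. n-tuples over {0, ..., x-1}.
from itertools import product

def stack_configurations(x, n):
    """All assignments of a height in 0..x-1 to each of n stacks."""
    return list(product(range(x), repeat=n))

x, n = 3, 4
assert len(stack_configurations(x, n)) == x ** n
print(len(stack_configurations(x, n)))  # 81
```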

Wile’s Proof of Fermat’s Last Theorem http://www.coolissues.com/mathematics/Wile'sproofofFLT.html