Fermat’s proof of his last theorem

Fermat famously claimed to have a proof of his last theorem that he didn’t have room to write down. Mathematicians have speculated ever since about what this proof might have been, though nearly everyone is convinced it must have been wrong.

The usual argument for Fermat being wrong is that since it took over 350 years, and some very sophisticated mathematics, to prove the theorem, it’s highly unlikely that Fermat had a simple proof. That’s a reasonable argument, but somewhat unsatisfying because it’s risky business to speculate on what a proof must require. Who knows how complex the proof of FLT in The Book (Erdős’s imagined book of perfect proofs) is?

André Weil offers what I find to be a more satisfying argument that Fermat did not have a proof, based on our knowledge of Fermat himself. Dana Mackenzie summarizes Weil’s argument as follows.

Fermat repeatedly bragged about the n = 3 and n = 4 cases and posed them as challenges to other mathematicians … But he never mentioned the general case, n = 5 and higher, in any of his letters. Why such restraint? Most likely, Weil argues, because Fermat had realized that his “truly wonderful proof” did not work in those cases.

Dana comments:

Every mathematician has had days like this. You think you have a great insight, but then you go out for a walk, or you come back to the problem the next day, and you realize that your great idea has a flaw. Sometimes you can go back and fix it. And sometimes you can’t.

The quotes above come from The Universe in Zero Words. I met Dana Mackenzie in Heidelberg a few weeks ago, and when I came home I looked for this book and his book on the formation of the moon, The Big Splat.

More on Fermat’s last theorem

NYT Book of Physics and Astronomy

I’ve enjoyed reading The New York Times Book of Physics and Astronomy, ISBN 1402793200, a collection of 129 articles written between 1888 and 2012. It’s been much more interesting than its mathematical predecessor. I’m not objective — I have more to learn from a book on physics and astronomy than a book on math — but I think other readers might also find this new book more interesting.

I was surprised by the articles on the bombing of Hiroshima and Nagasaki. New York Times reporter William Lawrence was allowed to go on the mission over Nagasaki. He was not on the plane that dropped the bomb, but was in one of the other B-29 Superfortresses that were part of the mission. Lawrence’s story was published September 9, 1945, exactly one month after the bombing. Lawrence was also allowed to tour the ruins of Hiroshima, and his article on that experience was published September 5, 1945. I was surprised by how candid these articles were and how quickly they were published. Apparently military secrecy evaporated rapidly once WWII was over.

Another thing that surprised me was that some stories were newsworthy more recently than I would have thought. I suppose I underestimated how long it took to work out the consequences of a major discovery. I think we’re also biased to think that whatever we learned as children must have been known for generations, even though the dust may have only settled shortly before we were born.

Concepts, explosions, and developments

Paul Halmos divided progress in math into three categories: concepts, explosions, and developments. This was in his 1990 article “Has progress in mathematics slowed down?” (His conclusion was no.) This three-part classification is not limited to math and could be useful in other areas.

Concepts are organizational ideas, frameworks, new vocabulary. Some of his examples were category theory and distributions (generalized functions).

Explosions solve old problems and generate a lot of attention among mathematicians and in the popular press. As Halmos puts it, “hot news not only for the Transactions, but also for the Times for a day, for Time for a week, and for student mathematics clubs for many months.” He cites the proof of the Four Color Theorem as an example. He no doubt would have cited Fermat’s Last Theorem had he written his article five years later.

Developments are “deep and in some cases even breathtaking developments (but not explosions) of the kind that might not make the Times, but could possibly get Fields medals for their discoverers.” One example he gives is the Atiyah-Singer index theorem.

The popular impression of math and science is that progress is all about explosions, though it’s more about concepts and developments.

Related: Birds and Frogs by Freeman Dyson [pdf]

Social networks in fact and fiction

SIAM News arrived this afternoon and had an interesting story on the front page: Applying math to myth helps separate fact from fiction.

In a nutshell, the authors hope to get some insight into whether a myth is based on fact by seeing whether the social network of characters in the myth looks more like a real social network or like the social network in a work of deliberate fiction. For instance, the social networks of the Iliad and Beowulf look more like actual social networks than does the social network of Harry Potter. Real social networks follow a power-law degree distribution more closely than do the social networks in works of fiction.

This could be interesting. For example, the article points out that some scholars believe Beowulf has a basis in historical events, though they don’t believe that Beowulf the character corresponds to a historical person. The network approach lends support to this position: the Beowulf social network looks more realistic when Beowulf himself is removed.

It seems, however, that an accurate historical account might still have a suspicious social network, not because the events in it were made up but because they were filtered according to what the historian thought was important.
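To make the network comparison concrete, here’s a minimal sketch of the bookkeeping involved. The scene data below is made up for illustration, and the actual study used much richer statistics than a raw tally of degrees.

    from collections import Counter
    from itertools import combinations

    # Made-up data: each entry lists the characters who share a scene.
    scenes = [
        {"Beowulf", "Hrothgar", "Wealhtheow"},
        {"Beowulf", "Grendel"},
        {"Beowulf", "Hrothgar", "Unferth"},
        {"Hrothgar", "Aeschere"},
    ]

    # An edge joins two characters who appear in a scene together.
    edges = set()
    for scene in scenes:
        edges.update(combinations(sorted(scene), 2))

    # A character's degree is the number of distinct acquaintances.
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1

    # A heavy-tailed (roughly power-law) degree distribution is one
    # signature of a realistic social network. Repeating the tally with
    # the protagonist removed shows how much the network depends on him.
    for name, d in degree.most_common():
        print(name, d)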

Baroque computers

From an interview with Neal Stephenson, giving some background for his Baroque Cycle:

Leibniz [1646-1716] actually thought about symbolic logic and why it was powerful and how it could be put to use. He went from that to building a machine that could carry out logical operations on bits. He knew about binary arithmetic. I found that quite startling. Up till then I hadn’t been that well informed about the history of logic and computing. I hadn’t been aware that anyone was thinking about those things so far in the past. I thought it all started with [Alan] Turing. So, I had computers in the 17th century.

Fourier series before Fourier

I always thought that Fourier was the first to come up with the idea of expressing general functions as infinite sums of sines and cosines. Apparently this isn’t true.

The idea that various functions can be described in terms of Fourier series … was for the first time proposed by Daniel Bernoulli (1700–1782) to solve the one-dimensional wave equation (the equation of motion of a string) about 50 years before Fourier. … However, no one contemporaneous to D. Bernoulli accepted the idea as a general method, and soon the study was forgotten.

Source: The Nonlinear World

Perhaps Fourier’s name stuck to the idea because he developed it further than Bernoulli did.
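In modern notation, the kind of expansion Bernoulli proposed looks something like the standard sine-series solution for a string of length L with fixed ends, where c is the wave speed and the coefficients a_n and b_n are determined by the string’s initial shape and velocity (a sketch of the textbook form, not Bernoulli’s own notation):

u(x,t) = \sum_{n=1}^{\infty} \sin\frac{n\pi x}{L} \left( a_n \cos\frac{n\pi c t}{L} + b_n \sin\frac{n\pi c t}{L} \right)

Each term is a standing-wave mode of the string.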

History of weather prediction

I’ve just started reading Invisible in the Storm: The Role of Mathematics in Understanding Weather, ISBN 0691152721.

The subtitle may be a little misleading. There is a fair amount of math in the book, but the ratio of history to math is pretty high. You might say the book is more about the role of mathematicians than the role of mathematics. As Roger Penrose says on the back cover, the book has “illuminating descriptions and minimal technicality.”

Someone interested in weather prediction but without a strong math background would enjoy reading the book, though someone who knows more math will recognize some familiar names and theorems and will better appreciate how they fit into the narrative.

Size of ancient and modern bureaucracies

According to The History of Rome, episode 126, Diocletian increased the size of the Roman imperial bureaucracy from around 15,000 people to around 30,000.

I wanted to compare the size of the bureaucracy that ran the Roman Empire to the size of the bureaucracy that runs Houston, TX. This page suggests that the city of Houston has about 68,000 employees. But far more people work for government in other capacities than work for the city. According to Table 1 of this page, the latest estimate is that 361,800 people in the Houston MSA work in the government sector. And about 22 million people work in the government sector nationwide.

Please don’t leave comments saying the Roman Empire and Houston are not directly comparable. Of course they’re not. But still, a very rough comparison is interesting.

Related post: Pax Romana

Oldest series for pi

Here’s an interesting bit of history from Julian Havil’s new book The Irrationals. In 1593 François Viète discovered the following infinite product for pi:

\frac{2}{\pi} = \frac{\sqrt{2}}{2}\frac{\sqrt{2+\sqrt{2}}}{2}\frac{\sqrt{2 + \sqrt{2+\sqrt{2}}}}{2} \cdots

Havil says this is “the earliest known.” I don’t know whether this is specifically the oldest product representation for pi, or more generally the oldest formula for an infinite sequence of approximations that converge to pi. Viète’s series is based on the double angle formula for cosine.
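In outline, one standard route to the formula (not necessarily Viète’s own) goes like this: iterate the identity sin x = 2 sin(x/2) cos(x/2) and use the fact that 2^n sin(x/2^n) → x to get

\frac{\sin x}{x} = \prod_{k=1}^{\infty} \cos\frac{x}{2^k}

Setting x = π/2 makes the left side 2/π, and the half-angle identity cos(θ/2) = √((1 + cos θ)/2), a rearrangement of the double angle formula for cosine, turns the factors into the nested radicals above.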

The first series for pi I remember seeing comes from evaluating the Taylor series for arc tangent at 1:

\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots

I saw this long before I knew what a Taylor series was. I imagine others have had the same experience because the series is fairly common in popular math books. However, this series is completely impractical for computing pi because it converges at a glacial pace. Viète’s formula, on the other hand, converges fairly quickly. You can see for yourself by running the following Python code:

    from math import sqrt

    # Build up Viète's product; each pass through the loop adds one more
    # nested radical and one more factor.
    prod = 1.0
    radic = 0.0

    for _ in range(10):
        radic = sqrt(2.0 + radic)   # sqrt(2), sqrt(2 + sqrt(2)), ...
        prod *= 0.5 * radic         # multiply in the next factor
        print(2.0 / prod)           # current approximation to pi

After 10 terms, Viète’s formula is correct to five decimal places.
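As a rough check on the claim about the arctangent series’ glacial convergence, the following sketch counts how many terms are needed before the partial sums agree with pi to five decimal places; it takes on the order of a couple hundred thousand terms.

    from math import pi

    # Sum 4*(1 - 1/3 + 1/5 - ...) until the partial sum matches pi
    # to five decimal places, counting the terms required.
    total = 0.0
    sign = 1.0
    terms = 0
    while abs(4.0 * total - pi) >= 5e-6:
        total += sign / (2 * terms + 1)
        sign = -sign
        terms += 1
    print(terms, 4.0 * total)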

Posts on more sophisticated and efficient series for computing pi: