Striving for simplicity, arriving at complexity

This post is a riff on a line from Mathematics without Apologies, the book I quoted yesterday.

In an all too familiar trade-off, the result of striving for ultimate simplicity is intolerable complexity; to eliminate too-long proofs we find ourselves “hopelessly lost” among the too-long definitions. [emphasis added]

It’s as if there’s some sort of conservation of complexity, but not quite in the sense of a physical conservation law. Conservation of momentum, for example, means that if one part of a system loses 5 units of momentum, other parts of the system have to absorb exactly 5 units of momentum. But perceived complexity is psychological, not physical, and the accounting is not the same. By moving complexity around we might increase or decrease the overall complexity.

The opening quote suggests that complexity is an optimization problem, not an accounting problem. It also suggests that driving the complexity of one part of a system to its minimum may disproportionately increase the complexity of another part. Striving for the simplest possible proofs, for example, could make the definitions much harder to digest. There’s a similar dynamic in programming languages and programs.

Larry Wall said that Scheme is a beautiful programming language, but every Scheme program is ugly. Perl, on the other hand, is ugly, but it lets you write beautiful programs. Scheme can be simple because it requires libraries and applications to implement functionality that is part of more complex languages. I had similar thoughts about COM. It was an elegant object system that led to hideous programs.

Scheme is a minimalist programming language, and COM is a minimalist object framework. By and large the software development community prefers complex languages and frameworks in hopes of writing smaller programs. Additional complexity in languages and frameworks isn’t felt as strongly as additional complexity in application code. (Until something breaks. Then you might have to explore parts of the language or framework that you had blissfully ignored before.)

The opening quote deals specifically with the complexity of theorems and proofs. In context, the author was saying that the price of Grothendieck’s elegant proofs was a daunting edifice of definitions. (More on that here.) Grothendieck may have taken this to extremes, but many mathematicians agree with the general approach of pushing complexity out of theorems and into definitions. Michael Spivak defends this approach in the preface to his book Calculus on Manifolds.

… the proof of [Stokes’] theorem is, in the mathematician’s sense, an utter triviality — a straight-forward calculation. On the other hand, even the statement of this triviality cannot be understood without a horde of definitions … There are good reasons why the theorems should all be easy and the definitions hard. As the evolution of Stokes’ theorem revealed, a single simple principle can masquerade as several difficult results; the proofs of many theorems involve merely stripping away the disguise. The definitions, on the other hand, serve a twofold purpose: they are rigorous replacements for vague notions, and machinery for elegant proofs. [emphasis added]

Mathematicians like to push complexity into definitions, just as software developers like to push complexity into languages and frameworks. Both strategies can make life easier on professionals while making it harder on beginners.

Related post: A little simplicity goes a long way

Problem solvers and theory builders

From Mathematics without Apologies:

It’s conventional to classify mathematicians as “problem solvers” or “theory builders,” depending on temperament. My experiences and the sources I consulted in writing this book convince me that curiosity about problems guides the growth of theories, rather than the other way around. Alexander Grothendieck and Robert Langlands … count among the most ambitious of all builders of mathematical theories, but everything they built was addressed to specific problems with ancient roots.

Related post: Examples bring a subject to life

Special function resources

This week’s resource post: some pages of notes on special functions.

See also blog posts tagged special functions.

I tweet about special functions occasionally on AnalysisFact.

Last week: HTML, LaTeX, and Unicode

Next week: Python resources

Counting primitive bit strings

A string of bits is called primitive if it is not the repetition of several copies of a smaller string of bits. For example, the string 101101 is not primitive because it can be broken down into two copies of the string 101. In Python notation, you could produce 101101 by "101"*2. The string 11001101, on the other hand, is primitive. (It contains substrings that are not primitive, but the string as a whole cannot be factored into multiple copies of a single string.)

For a given n, let’s count how many primitive bit strings there are of length n. Call this f(n). There are 2^n bit strings of length n, and f(n) of these are primitive. For example, there are f(12) primitive bit strings of length 12. The strings that are not primitive are made of copies of primitive strings: two copies of a primitive string of length 6, three copies of a primitive string of length 4, etc. This says

 2^{12} = f(12) + f(6) + f(4) + f(3) + f(2) + f(1)

and in general

2^n = \sum_{d \mid n} f(d)

Here the sum is over all positive integers d that divide n.

Unfortunately this formula is backward. It gives us a formula for something well known, 2^n, as a sum of the things we’re trying to calculate. The Möbius inversion formula is just what we need to turn this formula around so that the new thing is on the left and sums of old things are on the right. It tells us that

f(n) = \sum_{d \mid n} \mu\left(\frac{n}{d}\right) 2^d

where μ is the Möbius function.

We could compute f(n) with Python as follows:

from sympy.ntheory import mobius, divisors

def num_primitive(n):
    # Möbius inversion: f(n) = sum of mu(n/d) * 2^d over the divisors d of n
    return sum(mobius(n//d) * 2**d for d in divisors(n))

The latest version of SymPy, version 0.7.6, comes with a function mobius for computing the Möbius function. If you’re using an earlier version of SymPy, you can roll your own mobius function:

from sympy.ntheory import factorint

def mobius(n):
    # Exponents in the prime factorization of n
    exponents = factorint(n).values()
    lenexp = len(exponents)
    m = 0 if lenexp == 0 else max(exponents)
    # mu(n) is 0 if n has a square factor, otherwise (-1)^(number of distinct prime factors)
    return 0 if m > 1 else (-1)**lenexp

The version of mobius that comes with SymPy 0.7.6 may be more efficient. It could, for example, stop the factorization process early if it discovers a square factor.
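As a sanity check (my own addition, not part of the original post), we can count the primitive strings of length 12 by brute force and compare with the formula, reusing the imports and the num_primitive function defined above.

def is_primitive(s):
    # A string is primitive if it does not equal a proper prefix repeated
    n = len(s)
    return not any(s == s[:d] * (n // d) for d in divisors(n) if d < n)

brute = sum(is_primitive(format(i, "012b")) for i in range(2**12))
print(brute, num_primitive(12))  # both should print 4020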

After a coin comes up heads 10 times

Suppose you’ve seen a coin come up heads 10 times in a row. What do you believe is likely to happen next? Three common responses:

  1. Heads
  2. Tails
  3. Equal probability of heads or tails.

Each is reasonable in its own context. The last answer is correct assuming the flips are independent and heads and tails are equally likely.

But as I argued here, if you see nothing but heads, you have reason to question the assumption that the coin is fair. So there’s some justification for the first answer.

The reasoning behind the second answer is that tails are “due.” This isn’t true if you’re looking at independent flips of a fair coin, but it could be reasonable in other settings, such as sampling without replacement.

Say there are a number of coins on a table, covered by a cloth. A fixed number are on the table heads up, and a fixed number tails up. You reach under the cloth and slide a coin out. Every head you pull out increases the chances that the next coin will be tails. If there were an equal number of heads and tails under the cloth to begin with, then after pulling out 10 heads, tails is indeed more likely next time.
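Here’s a small sketch of that scenario. The specific numbers, 20 heads and 20 tails to start, are my own illustration, not from the post.

from fractions import Fraction

heads, tails = 20, 20   # hypothetical: coins initially heads up / tails up under the cloth
heads_drawn = 10        # you slide out 10 coins and they all happen to be heads

remaining_heads = heads - heads_drawn
remaining_tails = tails

p_tails_next = Fraction(remaining_tails, remaining_heads + remaining_tails)
print(p_tails_next)     # 2/3, so tails is now more likely than heads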

Related post: Long runs

How medieval astronomers made trig tables

How would you create a table of trig functions without calculators or calculus?

It’s not too hard to create a table of sines at multiples of 3°. You can use the sum-angle formula for sines

sin(α+β) = sin α cos β + sin β cos α

to bootstrap your way from known values to other values. Elementary geometry gives you the sines of 45° and 30°, and the sum-angle formula will then give you the sine of 75°. From Euclid’s construction of a 5-pointed star you can find the sine of 72°. Then the angle-difference version of the formula gives you the sine of 3° = 75° − 72°. Ptolemy figured this out in the 2nd century AD.
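Here’s a quick sketch of the first bootstrap step (my own illustration): computing sin 75° from the exact values at 45° and 30° and checking the result against NumPy.

from numpy import sin, sqrt, deg2rad

# Exact values from elementary geometry
sin45, cos45 = sqrt(2)/2, sqrt(2)/2
sin30, cos30 = 1/2, sqrt(3)/2

# sin(45° + 30°) = sin 45° cos 30° + sin 30° cos 45°
sin75 = sin45*cos30 + sin30*cos45

print(sin75)             # 0.9659258262890683
print(sin(deg2rad(75)))  # same value, as a check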

But if you want a table of trig values at every degree, you need to find the sine of 1°. If you had that, you could bootstrap your way to every other integer number of degrees. Ptolemy had an approximate solution to this problem, but it wasn’t very accurate or elegant.

The Persian astronomer Jamshīd al-Kāshī had a remarkably clever solution to the problem of finding the sine of 1°. Using the sum-angle formula you can find that

sin 3θ = 3 sin θ – 4 sin^3 θ.

Setting θ = 1° gives you a cubic equation for the unknown value of sin 1° involving the known value of sin 3°. However, the cubic formula wasn’t discovered until over a century after al-Kāshī. Instead, he used a numerical algorithm more widely useful than the cubic formula: finding a fixed point of an iteration!

Define f(x) = (sin 3° + 4x^3)/3. Then sin 1° is a fixed point of f. Start with an approximate value for sin 1° — a natural choice would be (sin 3°)/3 — and iterate. Al-Kāshī used this procedure to compute sin 1° to 16 decimal places.

Here’s a little Python code to play with this algorithm.

from numpy import sin, deg2rad

sin3deg = sin(deg2rad(3))

def f(x):
    # By the triple-angle identity, sin 1° is a fixed point of this map
    return (sin3deg + 4*x**3)/3

x = sin3deg/3  # initial approximation to sin 1°
for i in range(4):
    x = f(x)
    print(x)

This shows that after only three iterations the method has converged to floating point precision, which coincidentally is about 16 decimal places, the same as al-Kāshī’s calculation.

Source: Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry

Ergodic

Roughly speaking, an ergodic system is one that mixes well. You get the same result whether you average its values over time or over space.

This morning I ran across the etymology of the word:

In the late 1800s, the physicist Ludwig Boltzmann needed a word to express the idea that if you took an isolated system at constant energy and let it run, any one trajectory, continued long enough, would be representative of the system as a whole. Being a highly-educated nineteenth century German-speaker, Boltzmann knew far too much ancient Greek, so he called this the “ergodic property”, from ergon “energy, work” and hodos “way, path.” The name stuck.

Found here, footnote on page 479.

Other etymological footnotes:


Miscellaneous math notes

This web site started as static HTML files. Later I added a WordPress blog, but still wrote some things as static HTML pages for various reasons. Now I’ve moved most of those static pages to WordPress pages so that they’ll have the same style as the blog.

There’s not a good way to find these pages except through search. So I plan to categorize them and write a short post each Wednesday for the next few weeks listing some related pages. This post starts the series with math notes that didn’t fall into any other category.

See also posts tagged math.

Next week: Emacs resources

Googol and googolplex

Numericon gives the history of the words googol and googolplex:

… the famous googol, 10^100 (a 1 followed by 100 zeros), defined in 1929 by American mathematician Edward Kasner and named by his nine-year-old nephew, Milton Sirotta. Milton went even further and came up with the googolplex, now defined as 10^googol but initially defined by Milton as a 1, followed by writing zeros until you get tired.

Related post: There isn’t a googol of anything

Four brief reviews

Princeton University Press and No Starch Press both sent me a couple books this week. Here are a few brief words about each.

The first from Princeton was The Best Writing on Mathematics 2014. My favorite chapters were The Beauty of Bounded Gaps by Jordan Ellenberg and The Lesson of Grace in Teaching by Francis Su. The former is a very high-level overview of recent results regarding gaps in prime numbers. The latter is taken from Su’s Haimo Teaching Award lecture. A recording of the lecture and a transcript are available here.

The second book from Princeton was a new edition of Andrew Hodges’ book Alan Turing: The Enigma. This edition has a new cover and the new subtitle “The Book That Inspired the Film ‘The Imitation Game.'” Unfortunately I’m not up to reading a 768-page biography right now.

The first book from No Starch Press was a new edition of The Book of CSS3: A Developer’s Guide to the Future of Web Design by Peter Gasston. The book says from the beginning that it is intended for people who have a lot of experience with CSS, including some experience with CSS 3. I tend to ignore such warnings; many books are more accessible to beginners than they let on. But in this case I do think that someone with more CSS experience would get more out of the book. This looks like a good book, and I expect I’ll get more out of it later.

The final book was a new edition of How Linux Works: What Every Superuser Should Know by Brian Ward. I’ve skimmed through this book and would like to go back and read it carefully, a little at a time. Most Unix/Linux books I’ve seen either dwell on shell commands or dive into system APIs. This one, however, seems to live up to its title and give the reader an introduction to how Linux works.

Cyclic fractions

Somewhere along the way you may have noticed that the digits in the decimal expansion of multiples of 1/7 are all rotations of the same digits:

1/7 = 0.142857142857…
2/7 = 0.285714285714…
3/7 = 0.428571428571…
4/7 = 0.571428571428…
5/7 = 0.714285714285…
6/7 = 0.857142857142…

We can make the pattern more clear by vertically aligning the sequences of digits:

1/7 = 0.142857142857…
2/7 =   0.2857142857…
3/7 =  0.42857142857…
4/7 =     0.57142857…
5/7 =      0.7142857…
6/7 =    0.857142857…

Are there more cyclic fractions like that? Indeed there are. Another example is 1/17. The following shows that 1/17 is cyclic:

 
 1/17 = 0.05882352941176470588235294117647…
 2/17 =           0.1176470588235294117647…
 3/17 =            0.176470588235294117647…
 4/17 =     0.2352941176470588235294117647…
 5/17 =        0.2941176470588235294117647…
 6/17 =      0.352941176470588235294117647…
 7/17 =          0.41176470588235294117647…
 8/17 =               0.470588235294117647…
 9/17 =       0.52941176470588235294117647…
10/17 =  0.5882352941176470588235294117647…
11/17 =              0.6470588235294117647…
12/17 =                0.70588235294117647…
13/17 =             0.76470588235294117647…
14/17 =    0.82352941176470588235294117647…
15/17 =   0.882352941176470588235294117647…
16/17 =         0.941176470588235294117647…

The next denominator to exhibit this pattern is 19. After finding 17 and 19 by hand, I typed “7, 17, 19” into the Online Encyclopedia of Integer Sequences and found a list of denominators of cyclic fractions: OEIS A001913. These numbers are called “full reptend primes,” and according to MathWorld, “No general method is known for finding full reptend primes.”
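There is, however, an easy way to test a candidate, sketched below with SymPy (my own addition): p is a full reptend prime exactly when the decimal expansion of 1/p has period p − 1, i.e. when the multiplicative order of 10 modulo p is p − 1.

from sympy.ntheory import isprime, n_order

def is_full_reptend(p):
    # 10 must be a primitive root mod p; exclude 2 and 5, which divide 10
    return isprime(p) and p not in (2, 5) and n_order(10, p) == p - 1

print([p for p in range(2, 100) if is_full_reptend(p)])
# [7, 17, 19, 23, 29, 47, 59, 61, 97]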

Integration trick

Here’s a clever example from Paul Nahin’s new book Inside Interesting Integrals. Suppose you want to evaluate

\int_{-1}^1 \frac{\cos(x)}{\exp(1/x) + 1}\,dx

Since the range of integration is symmetric around zero, you might think to see whether the integrand is an odd function, in which case the integral would be zero. (More on such symmetry tricks here.) Unfortunately, the integrand is not odd, so that trick doesn’t work directly. However, it does help indirectly.

You can split any function f(x) into its even and odd parts.

f_e(x) = \frac{f(x) + f(-x)}{2} \\ f_o(x) = \frac{f(x) - f(-x)}{2}

The integral of a function over a symmetric interval is the integral of its even part, because its odd part integrates to zero. Here the even part works out to be simply cos(x)/2, since 1/(exp(1/x) + 1) + 1/(exp(−1/x) + 1) = 1, and so the integral evaluates to sin(1).
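As a quick numerical check (my own addition, not from Nahin’s book), we can integrate the original integrand with SciPy and compare with sin(1). Writing 1/(exp(1/x) + 1) as the logistic function expit(−1/x) avoids overflow near x = 0, and splitting the range at 0 keeps the quadrature rule from evaluating the integrand exactly at 0.

from numpy import cos, sin
from scipy.integrate import quad
from scipy.special import expit  # expit(t) = 1/(1 + exp(-t))

def integrand(x):
    # cos(x) / (exp(1/x) + 1), written to behave well for x near 0
    return cos(x) * expit(-1/x)

left, _ = quad(integrand, -1, 0)
right, _ = quad(integrand, 0, 1)

print(left + right)  # numerical value of the integral
print(sin(1))        # the claimed closed form, about 0.8414709848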

John Napier

Julian Havil has written a new book John Napier: Life, Logarithms, and Legacy.

I haven’t read more than the introduction yet — a review copy arrived just yesterday — but I imagine it’s good judging by who wrote it. Havil’s book Gamma is my favorite popular math book. (Maybe I should say “semi-popular.” Havil’s books have more mathematical substance than most popular books, but they’re still aimed at a wide audience. I think he strikes a nice balance.) His latest book is a scientific biography, a biography with an unusual number of equations and diagrams.

Napier is best known for his discovery of logarithms. (People debate endlessly whether mathematics is discovered or invented. Logarithms are so natural — pardon the pun — that I say they were discovered. I might describe other mathematical objects, such as Grothendieck’s schemes, as inventions.) He is also known for his work with spherical trigonometry, such as Napier’s mnemonic. Maybe Napier should be known for other things I won’t know about until I finish reading Havil’s book.

The great reformulation of algebraic geometry

“Tate helped shape the great reformulation of arithmetic and geometry which has taken place since the 1950’s.” — Andrew Wiles

At the Heidelberg Laureate Forum I had a chance to interview John Tate. In his remarks below, Tate briefly comments on his early work on number theory and cohomology. Most of the post consists of his comments on the work of Alexander Grothendieck.

***

JT: My first significant work after my thesis was to determine the cohomology groups of class field theory. The creators of the theory, including my thesis advisor Emil Artin, didn’t think in terms of cohomology, but their work could be interpreted as finding the cohomology groups H^0, H^1, and H^2.

I was invited to give a series of three talks at MIT on class field theory. I’d been at a party, and I came home and thought about what I’d talk about. And I got this great idea: I realized I could say what all the higher groups are. In a sense it was a disappointing answer, though it didn’t occur to me then, that there’s nothing new in them; they were determined by the great work that had already been done. For that I got the Cole prize in number theory.

Later when I gave a talk on this work people would say “This is number theory?!” because it was all about cohomology groups.

JC: Can you explain what the great reformulation was that Andrew Wiles spoke of? Was it this greater emphasis on cohomology?

JT: Well, in the class field theory situation it would have been. And there I played a relatively minor part. The big reformulation of algebraic geometry was done by Grothendieck, the theory of schemes. That was really such a great thing, that unified number theory and algebraic geometry. Before Grothendieck, going between characteristic 0, finite characteristic 2, 3, etc. was a mess.

Grothendieck’s system just gave the right framework. We now speak of arithmetic algebraic geometry, which means studying problems in number theory by using your geometric intuition. The perfect background for that is the theory of schemes. ….

Grothendieck’s ideas [about sheaves] were so simple. People had looked at such things in particular cases: Dedekind rings, Noetherian rings, Krull rings, …. Grothendieck said take any ring. … He just had an instinct for the right degree of generality. Some people make things too general, and they’re not of any use. But he just had an instinct to put whatever theory he thought about in the most general setting that was still useful. Not generalization for generalization’s sake but the right generalization. He was unbelievable.

He started schemes about the time I got serious about algebraic geometry, as opposed to number theory. But the algebraic geometers classically had affine varieties, projective varieties, … It seemed kinda weird to me. But with schemes you had a category, and that immediately appealed to me. In the classical algebraic geometry there are all these birational maps, or rational maps, and they’re not defined everywhere because they have singularities. All of that was cleared up immediately from the outset with schemes. ….

There’s a classical algebraic geometer at Harvard, Joe Harris, who works mostly over the complex numbers. I asked him whether Grothendieck made much of a difference in the classical case — I knew for number theorists he had made a tremendous difference — and Joe Harris said yes indeed. It was a revolution for classical algebraic geometry too.