Software quality: better in practice than in theory

Sir Tony Hoare

C. A. R. Hoare wrote an article How Did Software Get So Reliable Without Proof? in 1996 that still sounds contemporary for the most part.

In the 1980’s many believed that programs could not get much bigger unless we started using formal proof methods. The argument was that bugs are fairly common, and that each bug has the potential to bring a system down. Therefore the only way to build much larger systems was to rely on formal methods to catch bugs. And yet programs continued to get larger and formal methods never caught on. Hoare asks

Why have twenty years of pessimistic predictions been falsified?

Another twenty years later we can ask the same question. Systems have gotten far larger, and formal methods have not become common. Formal methods are used—more on that shortly—but have not become common.

Better in practice than in theory

It’s interesting that Hoare was the one to write this paper. He is best known for quicksort, a sorting algorithm that works better in practice than in theory! Quicksort is commonly used in practice, even though it has terrible worst-case efficiency, because its average efficiency has optimal asymptotic order [1], and in practice it works better than other algorithms with the same asymptotic order.
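For reference, here is a minimal quicksort sketch in Python, not the in-place version libraries actually use, just to show the algorithm under discussion. The random pivot is what makes the quadratic worst case unlikely in practice.

    import random

    def quicksort(xs):
        # average O(n log n); worst case O(n^2), but a random pivot makes that unlikely
        if len(xs) <= 1:
            return xs
        pivot = random.choice(xs)
        less    = [x for x in xs if x < pivot]
        equal   = [x for x in xs if x == pivot]
        greater = [x for x in xs if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]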

Economic considerations

It is logically possible that the smallest bug could bring down a system. And there have been examples, such as the Mars Climate Orbiter, where a single bug did in fact lead to complete failure. But this is rare. Most bugs are inconsequential.

Some will object “How can you be so blasé about bugs? A bug crashed a $300 million probe!” But what is the realistic alternative? Would spending an additional billion dollars on formal software verification have prevented the crash? Possibly, though not certainly, and the same money could send three more missions to Mars. (More along these lines here.)

It’s all a matter of economics. Formal verification is extremely tedious and expensive. The expense is worth it in some settings and not in others. The software that runs pacemakers is more critical than the software that runs a video game. For most software development, less formal methods have proved more cost effective at achieving acceptable quality: code reviews, unit tests, integration testing, etc.

Formal verification

I have some experience with formal software verification, including formal methods software used by NASA. When someone says that software has been formally verified, there’s an implicit disclaimer. It’s usually the algorithms that have been formally verified, not the implementation of those algorithms in software. Also, maybe not all the algorithms have been verified, but say 90%, the remaining 10% being too difficult to verify. In any case, formally verified software can and has failed. Formal verification greatly reduces the probability of encountering a bug, but it does not reduce the probability to zero.

There has been a small resurgence of interest in formal methods since Hoare wrote his paper. And again, it’s all about economics. Theorem proving technology has improved over the last 20 years. And software is being used in contexts where the consequences of failure are high. But for most software, the most economical way to achieve acceptable quality is not through theorem proving.

There are also degrees of formality. Full theorem proving is extraordinarily tedious. If I remember correctly, one research group said that they could formally verify about one page of a mathematics textbook per man-week. But there’s a continuum between full formality and no formality. For example, you could have formal assurance that your software satisfies certain conditions, even if you can’t formally prove that the software is completely correct. Where you want to be along this continuum of formality is again a matter of economics. It depends on the probability and consequences of errors, and the cost of reducing these probabilities.

Related posts

[1] The worst-case performance of quicksort is O(n²) but the average performance is O(n log n).

Photograph of C. A. R. Hoare by Rama, Wikimedia Commons, CC BY-SA 2.0 FR

Riemann hypothesis, the fine structure constant, and the Todd function

This morning Sir Michael Atiyah gave a presentation at the Heidelberg Laureate Forum with a claimed proof of the Riemann hypothesis. The Riemann hypothesis (RH) is the most famous open problem in mathematics, and yet Atiyah claims to have a simple proof.

Photo via https://twitter.com/QuinteScience/status/1044134956996468736

Simple proofs of famous conjectures

If anyone else claimed a simple proof of RH they’d immediately be dismissed as a crank. In fact, many people have sent me simple proofs of RH just in the last few days in response to my blog post, and I imagine they’re all cranks [1]. But Atiyah is not a crank. He won the Fields Medal in 1966 and the Abel prize in 2004. In other words, he was in the top echelon of mathematicians 50 years ago and has kept going from there. There has been speculation that although Atiyah is not a crank, he has gotten less careful with age. (He’s 89 years old.)

QuinteScience, source of the image above, quoted Atiyah as saying

Solve the Riemann hypothesis and you’ll become famous. But if you’re already famous, you run the risk of becoming infamous.

If Atiyah had a simple self-contained proof of RH that would be too much to believe. Famous conjectures that have been open for 150 years don’t have simple self-contained proofs. It’s logically possible, but practically speaking it’s safe to assume that the space of possible simple proofs has been very thoroughly explored by now.

But Atiyah’s claimed proof is not self-contained. It’s really a corollary, though I haven’t seen anyone else calling it that. He is claiming that a proof of RH follows easily from his work on the Todd function, which hasn’t been published. If his proof is correct, the hard work is elsewhere.

Andrew Wiles’ proof of Fermat’s last theorem was also a corollary. He proved a special case of the Taniyama–Shimura conjecture, and at the end of a series of lectures noted, almost as an afterthought, that his work implied a proof of Fermat’s last theorem. Experts realized this was where he was going before he said it. Atiyah has chosen the opposite approach, presenting his corollary first.

Connections with physics

Atiyah has spoken about connections between mathematics and physics for years. Maybe he was alluding to his work on the fine structure constant, which he claims yields RH as a corollary. And he is not the only person talking about connections between the Riemann hypothesis specifically and physics. For example, there was a paper in Physical Review Letters last year by Bender, Brody, and Müller stating a possible connection. I don’t know whether this is related to Atiyah’s work.

Fine structure constant

The fine structure constant is a dimensionless physical constant α, given by

\alpha = \frac{e^2}{4 \pi \varepsilon_0 \hbar c}

where e is the elementary charge, ε0 is vacuum permittivity, ħ is the reduced Planck constant, and c is the speed of light in a vacuum. Its value is roughly 1/137.
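As a side note, you can reproduce that value from tabulated constants. Here is a quick sketch using SciPy; this is just a numerical check, not part of Atiyah’s argument.

    from math import pi
    from scipy.constants import c, e, epsilon_0, fine_structure, hbar

    # alpha from the defining formula vs. SciPy's tabulated value
    alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
    print(alpha, fine_structure)     # both about 0.0072973...
    print(1 / alpha)                 # about 137.036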

The Todd function

The Todd function T is a function introduced by Atiyah, named after his teacher J. A. Todd. We don’t know much about this function, except that it is key to Atiyah’s proof. Atiyah says the details are to be found in his manuscript The Fine Structure Constant which has been submitted to the Proceedings of the Royal Society.

Atiyah says that his manuscript shows that on the critical line of the Riemann zeta function, the line with real part 1/2, the Todd function has a limit ж and that the fine structure constant α is exactly 1/ж. That is,

lim_{y → ∞} T(1/2 + yi) = ж = 1/α.

Now I don’t know what he means by proving that a physical constant has an exact mathematical value; the fine structure constant is something that is empirically measured. Perhaps he means that in some mathematical model of physics, the fine structure constant has a precise mathematical value, and that value is the limit of his Todd function.

Or maybe it’s something like Koide’s coincidence where a mathematical constant is within the error tolerance of a physical constant, an interesting but not necessarily important observation.

Taking risks

Michael Atiyah is taking a big risk. I’ve seen lots of criticism of Atiyah online. As far as I know, none of the critics have a Fields Medal or Abel Prize in their closet.

Atiyah’s proof is probably wrong, just because proofs of big theorems are usually wrong. Andrew Wiles’ proof of Fermat’s Last Theorem had a flaw that took a year to patch. We don’t know who Atiyah has shown his work to. If he hasn’t shown it to anyone, then it is almost certainly flawed: nobody does flawless work alone. Maybe his proof has a patchable flaw. Maybe it is flawed beyond repair, but contains interesting ideas worth pursuing further.

The worst case scenario is that Atiyah’s work on the fine structure constant and the Todd function is full of holes. He has made other big claims in the last few years that didn’t work out. Some say he should quit doing mathematics because he has made big mistakes.

I’ve made big mistakes too, and I’m not quitting. I make mistakes doing far less ambitious work than trying to prove the Riemann hypothesis. I doubt I’ll ever produce anything as deep as a plausible but flawed proof of the Riemann hypothesis.

Update

The longer paper has been leaked, presumably without permission from Atiyah or the Royal Society, and it doesn’t seem to hold up. No one is saying the proof can be patched, but there has been some discussion about whether the Todd trick could be useful.

In writing this post I wanted to encourage people to give Atiyah a chance, to wait until more was known before assuming the proof wasn’t good. I respect Atiyah as a mathematician and as a person—I read some of his work in college and I’ve had the privilege of meeting him on a couple occasions—and I hoped that he had a proof even though I was skeptical. I think no less of him for attempting to prove a big theorem. I hope that I’m swinging for the fences in my ninth decade.

Related posts

[1] I don’t call someone a crank just because they’re wrong. My idea of a crank is someone without experience in an area, who thinks he has found a simple solution to a famous problem, and who believes there is a conspiracy to suppress his work. Cranks are not just wrong, they can’t conceive that they might be wrong.

Three applications of Euler’s theorem

Fermat’s little theorem says that if p is a prime and a is not a multiple of p, then

a^(p-1) = 1 (mod p).

Euler’s generalization of Fermat’s little theorem says that if a is relatively prime to m, then

a^φ(m) = 1 (mod m)

where φ(m) is Euler’s so-called totient function. This function counts the number of positive integers less than m and relatively prime to m. For a prime number p, φ(p) = p − 1, and so Euler’s theorem generalizes Fermat’s theorem.

Euler’s totient function is multiplicative, that is, if a and b are relatively prime, then φ(ab) = φ(a) φ(b). We will need this fact below.
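As a quick numerical check of both facts, here is a sketch using SymPy’s totient function; the values 36, 7, 9, and 4 are arbitrary examples of mine.

    from math import gcd
    from sympy import totient

    m, a = 36, 7                               # arbitrary example with gcd(a, m) = 1
    assert gcd(a, m) == 1
    print(pow(a, int(totient(m)), m))          # 1, by Euler's theorem

    # multiplicativity: phi(9 * 4) = phi(9) * phi(4) since gcd(9, 4) = 1
    print(totient(9 * 4), totient(9) * totient(4))   # 12 12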

This post looks at three applications of Fermat’s little theorem and Euler’s generalization:

  • Primality testing
  • Party tricks
  • RSA public key encryption

Primality testing

The contrapositive of Fermat’s little theorem is useful in primality testing: if the congruence

a^(p-1) = 1 (mod p)

does not hold, then either p is not prime or a is a multiple of p. In practice, a is much smaller than p, and so one can conclude that p is not prime.

Technically this is a test for non-primality: it can only prove that a number is not prime. For example, if 2^(p-1) is not congruent to 1 (mod p) then we know p is not a prime. But if 2^(p-1) is congruent to 1 (mod p) then all we know is that we haven’t failed the test; it’s still conceivable that p is prime. So we try another value of a, say 3, and see whether 3^(p-1) is congruent to 1 (mod p).

If we haven’t disproved that p is prime after several attempts, we have reason to believe p is probably prime [1]. There are pseudoprimes, a.k.a. Carmichael numbers, that are not prime but pass the test above for every a relatively prime to them. But these numbers are much less common than primes.

By the way, if p is a huge number, say with hundreds or thousands of digits, doesn’t it seem odd that we would want to compute numbers to the power p? Actually computing a^p would be impossible. But because we’re computing mod p, this is actually easy. We can apply the fast exponentiation algorithm and take remainders by p at every step, so we’re never working with numbers more than twice as long as p.
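In Python the built-in three-argument pow does exactly this kind of modular exponentiation, so a sketch of the test takes only a few lines. The particular bases below are my own choice for illustration.

    from math import gcd

    def fermat_test(p, bases=(2, 3, 5, 7)):
        # False means definitely composite; True means p passed, i.e. "probably prime"
        if p < 2:
            return False
        for a in bases:
            if gcd(a, p) != 1:           # skip bases sharing a factor with p
                continue
            if pow(a, p - 1, p) != 1:    # fast modular exponentiation, built into Python
                return False
        return True

    print(fermat_test(97))     # True: 97 is prime
    print(fermat_test(91))     # False: 91 = 7 * 13
    print(fermat_test(561))    # True, yet 561 = 3 * 11 * 17 is a Carmichael number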

Fifth root party trick

A few days ago I wrote about the fifth root party trick. If someone raises a two-digit number to the fifth power, you can quickly tell what the number was. Part of what makes the trick work is that in base 10, any number n and its fifth power end in the same digit. You can prove this by trying all 10 possible last digits, but if you want to generalize the trick to other bases, it helps to use Euler’s theorem. For example, you could use 9th powers in base 15.

Euler’s theorem shows why raising a to the power  φ(m) + 1 in base m keeps the last digit the same, but only if a is relatively prime to m. To extend the fifth root trick to other bases you’ll have a little more work to do.
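Here is a quick sketch of that check. The bases 15 and 12 below are my own examples: base 15 with 9th powers turns out to work for every digit, while base 12 only works for digits relatively prime to 12, which is the extra work alluded to above.

    from math import gcd
    from sympy import totient

    for m in (10, 15, 12):
        k = int(totient(m)) + 1        # 5 for base 10, 9 for base 15, 5 for base 12
        works_for_coprime = all(pow(a, k, m) == a for a in range(m) if gcd(a, m) == 1)
        works_for_all     = all(pow(a, k, m) == a for a in range(m))
        print(m, k, works_for_coprime, works_for_all)
        # prints: 10 5 True True / 15 9 True True / 12 5 True False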

RSA encryption

The original [2] RSA public key cryptography algorithm was a clever use of Euler’s theorem.

Search for two enormous prime numbers p and q [3]. Keep p and q private, but make n = pq public. Pick a private key d and solve for a public key e such that de = 1 (mod φ(n)).

Since you know p and q, you can compute φ(n) = (p – 1)(q – 1), and so you can compute the public key e. But someone who doesn’t know p and q, but only their product n, will have a hard time solving for d from knowing e. Or at least that’s the hope! Being able to factor n is sufficient to break the encryption scheme, but it’s not logically necessary. Maybe recovering private keys is much easier than factoring, though that doesn’t seem to be the case.

So where does Euler come in? Someone who has your public key e and wants to send you a message m computes

m^e (mod n)

and sends you the result [4]. Now, because you know d, you can take the encrypted message m^e and compute

(m^e)^d = m^(ed) = m^(1 + kφ(n)) = m (mod n)

because ed = 1 + kφ(n) for some integer k, and m^φ(n) = 1 (mod n) by Euler’s theorem.

This is the basic idea of RSA encryption, but there are many practical details to implement the RSA algorithm well. For example, you don’t want p and q to be primes that make pq easier than usual to factor, so you want to use safe primes.
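Here is a toy sketch of the whole scheme with tiny primes to make the bookkeeping concrete. The small numbers are a common textbook example, and unlike the description above the sketch picks e first and solves for d, which comes to the same thing. Real implementations use enormous primes, Carmichael’s totient, and padding.

    p, q = 61, 53                     # toy primes; real ones have hundreds of digits
    n = p * q                         # 3233, made public
    phi = (p - 1) * (q - 1)           # 3120, kept private
    e = 17                            # public exponent
    d = pow(e, -1, phi)               # 2753, private exponent, since d*e = 1 (mod phi)

    m = 65                            # message, must be less than n
    c = pow(m, e, n)                  # encrypt with the public key
    print(c, pow(c, d, n))            # 2790 65 -- decryption recovers the message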

Related posts

[1] Saying that a number is “probably prime” makes sense the first time you see it. But then after a while it might bother you. This post examines and resolves the difficulties in saying that a number is “probably prime.”

[2] The original RSA paper used Euler’s totient function φ(n) = (p – 1)(q – 1), but current implementations use Carmichael’s totient function λ(n) = lcm(p – 1, q – 1). Yes, same Carmichael as Carmichael numbers mentioned above, Robert Daniel Carmichael (1879–1967).

[3] How long does it take to find big primes? See here. One of the steps in the process is to weed out composite numbers that fail the primality test above based on Fermat’s little theorem.

[4] This assumes the message has been broken down into segments shorter than n. In practice, RSA encryption is used to send keys for non-public key (symmetric) encryption methods because these methods are more computationally efficient.

News regarding ABC conjecture and Riemann Hypothesis

There have been a couple news stories regarding proofs of major theorems. First, an update on Shinichi Mochizuki’s proof of the abc conjecture, then an announcement that Sir Michael Atiyah claims to have proven the Riemann hypothesis.

Shinichi Mochizuki’s proof of the abc conjecture

Quanta Magazine has a story today saying that two mathematicians have concluded that Shinichi Mochizuki’s proof of the ABC conjecture is flawed beyond repair. The story rightly refers to a “clash of Titans” because Shinichi Mochizuki and his two critics Peter Scholze and Jakob Stix are all three highly respected.

I first wrote about the abc conjecture when Mochizuki’s proof came out in 2012. I find the proposed proof fascinating, not because I understand it, but because nobody understands it. The proof is 500 pages of dense, sui generis mathematics. Top mathematicians have been trying to digest the proof for six years now. Scholze and Stix believe they’ve found an irreparable hole in the proof, but Mochizuki has not conceded defeat.

Sometimes when a flaw is found in a proof, the flaw can later be fixed, as was the case with Andrew Wiles’ proof of Fermat’s last theorem. Other times the proof cannot be salvaged entirely, but interesting work comes out of it, as was the case with the failed attempts to prove FLT before Wiles. What will happen with Mochizuki’s proof of the abc conjecture if it cannot be repaired? I can imagine two outcomes.

  1. Because the proof is so far out of the mainstream, there must be useful pieces of it that can be salvaged.
  2. Because the proof is so far out of the mainstream, nobody will build on the work and it will be a dead end.

Michael Atiyah and the Riemann hypothesis

This morning I heard rumors that Michael Atiyah claims to have proven the Riemann hypothesis. The Heidelberg Laureate Forum twitter account confirmed that Atiyah is scheduled to announce his work at the forum on Monday.

I was a fan of Atiyah’s work when I was a grad student, and I got to talk to him at the 2013 and 2014 Heidelberg Laureate Forums. I hope that he really has a proof, but all the reaction I’ve seen has been highly skeptical. He claims to have a relatively simple proof, and long-standing conjectures rarely have simple proofs.

It would be great for the Riemann hypothesis to be settled, and it would be great for Atiyah to be the one who settles it.

Whereas Mochizuki’s proof is radically outside the mainstream, Atiyah’s proof appears to be radically inside the mainstream. He says he has a “radically new approach … based on work of von Neumann (1936), Hirzebruch (1954) and Dirac (1928).” It doesn’t seem likely that someone could prove the Riemann hypothesis using such classical mathematics. But maybe he found a new approach by using approaches that are not fashionable.

I hope that if Atiyah’s proof doesn’t hold up, at least something new comes out of it.

Update (9/23/2018): I’ve heard a rumor that Atiyah has posted a preprint to arXiv, but I don’t see it there. I did find a paper online that appeared to be his that someone posted to Dropbox.  It gives a very short proof of RH by contradiction, based on a construction using the Todd function. Apparently all the real work is in his paper The Fine Structure Constant which has been submitted to Proceedings of the Royal Society. I have not been able to find a preprint of that article.

Atiyah gave a presentation of his proof in Heidelberg this morning. From what I gather the presentation didn’t remove much mystery because what he did was show that RH follows as a corollary of his work on the Todd function that hasn’t been made public yet.

Update: More on Atiyah’s proof here.

What are these theorems?

The abc conjecture claims a deep relationship between addition and multiplication of integers. Specifically, for every ε > 0, there are only finitely many coprime triples (a, b, c) with a + b = c such that c > rad(abc)^(1 + ε).

Here coprime means a, b, and c share no common divisor larger than 1. Also, rad(abc) is the product of the distinct prime factors of a, b, and c.
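To make the inequality concrete, here is a sketch that computes rad(abc) and the quality log c / log rad(abc) for a well-known high-quality triple. The code and the choice of triple are mine, not from the articles discussed above.

    from math import log, prod
    from sympy import primefactors

    def rad(n):
        # product of the distinct prime factors of n
        return prod(primefactors(n))

    a, b = 2, 3**10 * 109             # a + b = c with an unusually small radical
    c = a + b                         # 6436343 = 23**5
    quality = log(c) / log(rad(a * b * c))
    print(rad(a * b * c), round(quality, 4))   # 15042 1.6299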

This is a very technical theorem (conjecture?) and it would have widespread consequences in number theory if true. In a sense this is the opposite of Fermat’s Last Theorem: there the method of proof had widespread application, but the result itself did not. With the abc conjecture, it remains to be seen whether the method of (attempted?) proof has application.

The Riemann hypothesis concerns the Riemann zeta function, a function ζ of a complex variable that encodes information about primes. The so-called trivial zeros of ζ are at the negative even integers. The Riemann hypothesis says that the rest of the zeros are all lined up vertically with real part 1/2 and varying imaginary parts.

Many theorems have been proved conditional on the Riemann hypothesis. If it is proven, a lot of other theorems would immediately follow.
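The zeros themselves are easy to explore numerically. Here is a small sketch using mpmath; it verifies the location of the first nontrivial zero and a couple of trivial zeros, nothing more.

    from mpmath import mp, zeta, zetazero

    mp.dps = 25
    rho = zetazero(1)                 # first nontrivial zero, about 0.5 + 14.1347i
    print(rho)
    print(abs(zeta(rho)))             # effectively zero
    print(zeta(-2), zeta(-4))         # trivial zeros at the negative even integers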

Related posts

Footnote on fifth root trick

Numberphile has a nice video on the fifth root trick: someone raises a two-digit number to the 5th power, reads the number aloud, and you tell them immediately what the number was.

Here’s the trick in a nutshell. For any number n, n^5 ends in the same last digit as n. You could prove that by brute force or by Euler’s theorem. So when someone tells you n^5, you immediately know the last digit. Now you need to find the first digit, and you can do that by learning, approximately, the powers (10k)^5 for k = 1, 2, 3, …, 9. Then you can determine the first digit by the range.

Here’s where the video is a little vague. It says that you don’t need to know the powers (10k)^5 very accurately. This is true, but just how accurately do you need to know the ranges?

If the two-digit number is a multiple of 10, you’ll recognize the zeros at the end, and the last non-zero digit is the first digit of n. For example, if n^5 = 777,600,000 then you know n is a multiple of 10, and since the last non-zero digit is 6, n = 60.

So you need to know the fifth powers of multiples of 10 well enough to distinguish (10k − 1)^5 from (10k + 1)^5. The following table shows what these numbers are.

|---+---------------+---------------|
| k | (10k - 1)^5   | (10k + 1)^5   |
|---+---------------+---------------|
| 1 |        59,049 |       161,051 |
| 2 |     2,476,099 |     4,084,101 |
| 3 |    20,511,149 |    28,629,151 |
| 4 |    90,224,199 |   115,856,201 |
| 5 |   282,475,249 |   345,025,251 |
| 6 |   714,924,299 |   844,596,301 |
| 7 | 1,564,031,349 | 1,804,229,351 |
| 8 | 3,077,056,399 | 3,486,784,401 |
| 9 | 5,584,059,449 | 6,240,321,451 |
|---+---------------+---------------|

So any fifth power less than 3 million comes from a number with first digit 1. Anything between 3 million and 25 million comes from a number with first digit 2. Etc.

You could choose the following boundaries, if you like.

|---+----------------|
| k | upper boundary |
|---+----------------|
| 1 |      3,000,000 |
| 2 |     25,000,000 |
| 3 |    100,000,000 |
| 4 |    300,000,000 |
| 5 |    800,000,000 |
| 6 |  1,700,000,000 |
| 7 |  3,200,000,000 |
| 8 |  6,000,000,000 |
|---+----------------|

Here row k gives a convenient round-number upper boundary for fifth powers whose root has first digit k; first digit 9 needs no upper boundary.

The Numberphile video says you should have someone say the number aloud, in words. So as soon as you hear “six billion …”, you know the first digit of n is 9. If you hear “five billion” or “four billion” you know the first digit is 8. If you hear “three billion” then you know to pay attention to the next number, to decide whether the first digit is 7 or 8. Once you hear the first few syllables of the number, you can stop paying attention until you hear the last syllable or two.
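Putting the whole trick into code, here is a sketch that uses the boundaries above together with the trailing-zeros rule for multiples of 10; the function and test are mine.

    # memorized upper boundaries for first digits 1 through 8; digit 9 needs none
    BOUNDS = [3_000_000, 25_000_000, 100_000_000, 300_000_000,
              800_000_000, 1_700_000_000, 3_200_000_000, 6_000_000_000]

    def fifth_root(s):
        # recover a two-digit n from s = n**5 the way the trick does it
        if s % 10 == 0:                           # multiple of 10: read the last nonzero digit
            return 10 * int(str(s).rstrip("0")[-1])
        last = s % 10                             # n and n**5 end in the same digit
        first = 1 + sum(s > b for b in BOUNDS)    # count how many boundaries s exceeds
        return 10 * first + last

    assert all(fifth_root(n**5) == n for n in range(10, 100))
    print(fifth_root(777_600_000), fifth_root(6_956_883_693))   # 60 93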

An empirical look at the Goldbach conjecture

The Goldbach conjecture says that every even number bigger than 2 is the sum of two primes. I imagine Goldbach tried out his idea on numbers up to a certain point and guessed that he could keep going. He lived in the 18th century, so he would have done all his calculation by hand. What might he have done if he could have written a Python program?

Let’s start with a list of primes, say the first 100 primes. The 100th prime is p = 541. If an even number less than p is the sum of two primes, it’s the sum of two primes less than p. So by looking at the sums of pairs of primes less than p, we’ll know whether the Goldbach conjecture is true for numbers less than p. And while we’re at it, we could keep track not just of whether a number is the sum of two primes, but also how many ways it is a sum of two primes.

    from sympy import prime
    from numpy import zeros
    
    N = 100
    p = prime(N)
    
    primes = [prime(i) for i in range(1, N+1)]
    sums = zeros(p, int)
    
    for i in range(N):
        # j >= i so we don't double count
        for j in range(i, N):
            s = primes[i] + primes[j]
            if s >= p:
                break
            sums[s] += 1
    
    # Take the even slots starting with 4
    evens = sums[4::2]
    
    print( min(evens), max(evens) )

This prints 1 and 32. The former means that every even number from 4 up to p − 1 was hit at least once, that is, every number under consideration is the sum of two primes. The latter means that at least one number less than p can be written as a sum of two primes in 32 different ways.

According to the Wikipedia article on the Goldbach conjecture, Nils Pipping manually verified the Goldbach conjecture for even numbers up to 100,000 in 1938, an amazing feat.

There are 9,592 primes less than 100,000, and so we would need to take N = 9592 in our program to reproduce Pipping’s result. This took about seven minutes.

Update: As suggested in the comments, nearly all of the time is being spent generating the list of primes. When I changed the line

    primes = [prime(i) for i in range(1, N+1)]

to

    primes = list(primerange(1, p + 1))  # also import primerange from sympy; p + 1 keeps all N primes

the runtime dropped from 7 minutes to 18 seconds.

Number of groups of prime power order

John Baez left a comment on my post on group statistics saying

It’s known that the number of groups of order p^n for prime p is p^{2n^3/27+O(n^(8/3))}. It might be fun to compare this to what Mathematica says.

Here goes. First let’s let p = 2. Mathematica’s FiniteGroupCount function tops out at n = 10. And for the range of values Mathematica does support, 2^{2n^3/27} grossly overestimates the number of groups.

    Table[{FiniteGroupCount[2^n], 2^((2./27) n^3)}, {n, 1, 11}]

    {{1, 1.05269}, {2, 1.50795}, {5, 4.}, {14, 26.7365}, 
    {51, 612.794}, {267, 65536.}, {2328, 4.45033*10^7}, 
    {56092, 2.61121*10^11}, {10494213, 1.80144*10^16}, 
    {49487365422, 1.98847*10^22}, {FiniteGroupCount[2048], 4.7789*10^29}}

OEIS has entries for the number of groups of various orders, and the entry for powers of 2 confirms the asymptotic formula above. Maybe it’s not a good approximation until n is very large. (Update: Here’s a reference for the asymptotic expression for all p.)

The OEIS entry for number of groups of order powers of 5 has an interesting result:

For a prime p >= 5, the number of groups of order p^n begins 1, 1, 2, 5, 15,
61 + 2*p + 2*gcd (p – 1, 3) + gcd (p – 1, 4),
3*p^2 + 39*p + 344 + 24*gcd(p – 1, 3) + 11*gcd(p – 1, 4) + 2*gcd(p – 1, 5), …

We can duplicate this with Mathematica.

    Table[FiniteGroupCount[5^n], {n, 0, 6}]

    {1, 1, 2, 5, 15, 77, 684}

and the last two numbers match the calculations given in OEIS.

There’s something interesting going on with Mathematica. It doesn’t seem to know, or agree with, the formula above for groups of order p^4. For example,

    Table[FiniteGroupCount[7^n], {n, 0, 6}]

    {1, 1, 2, 5, FiniteGroupCount[2401], 83, 860}

I get similar results when I use larger primes: it can’t handle the fourth power.

    Table[FiniteGroupCount[389^n], {n, 0, 6}]

    {1, 1, 2, 5, FiniteGroupCount[22898045041], 845, 469548}

The results for n = 5 and 6 agree with OEIS.

Is OEIS wrong about the number of groups of order p^4, or should Mathematica simply return 15 and there’s a gap in the software?

Also, does anybody know why the agreement with the asymptotic formula above is so bad? It’s just as bad or worse for other primes that I tried.

Related posts

How to keep unwanted content out of your Twitter stream

How do you keep things you don’t want out of your Twitter stream? You might say just don’t follow people who post things you don’t want to read, but it’s not that simple.

Some people post worthwhile original material, but they retweet things that are offensive or just not interesting. You can fix that by turning off retweets from that person. Then you’ll just see tweets they compose.

Except that until yesterday, there was no way to turn off “likes.” You’d randomly see things someone “liked” even if you turned off their retweets. Now there’s a way to simply see the content you’ve subscribed to. Not only that, you’ll see it in order! Numerous times I’ve tried to go back and find something but couldn’t because Twitter saw fit to edit and rearrange my stream since the last time I looked at it.

The way to simply see your Twitter stream in order isn’t obvious. You have to go to

Settings and privacy -> Account

and uncheck the box that says “Show the best Tweets first.”

Timeline: Show the best Tweets first

Who wouldn’t want to see the best tweets first? Sounds good to me. But by unchecking the box you’re effectively saying “Let me decide what’s best by who I choose to follow.”

I’m pleased by this new feature (actually, new ability to turn off a feature). I’ve tried to maintain a decent signal to noise ratio in my Twitter stream and Twitter has continually tried to erode it, until now.

Group statistics

I just ran across FiniteGroupData and related functions in Mathematica. That would have made some of my earlier posts easier to write had I used this instead of writing my own code.

Here’s something I find interesting. For each n, look at the groups of order at most n and count how many are Abelian versus non-Abelian. At first there are more Abelian groups, but the non-Abelian groups soon become more numerous. Also, the number of Abelian groups grows smoothly, while the number of non-Abelian groups has big jumps, particularly at powers of 2.

Counting Abelian and non-Abelian groups

Here’s the Mathematica code:

    fgc = FoldList[Plus, 0, Table[FiniteGroupCount[n], {n, 1, 300}]]
    fga = FoldList[Plus, 0, Table[FiniteAbelianGroupCount[n], {n, 1, 300}]]
    ListLogPlot[ {fgc - fga, fga }, 
        PlotLegends -> {"Non-Abelian", "Abelian"}, 
        Joined -> True, 
        AxesLabel -> {"order", "count"}]

I see the plot legend on my screen, but when saving the plot to a file the legend wasn’t included. Don’t know why. (Update: See footnote [1]). The jagged blue curve is the number of non-Abelian groups of size up to n. The smooth gold curve is the corresponding curve for Abelian groups.

Here’s the same plot carried out further to show the jumps at 512 and 1024.

Counting Abelian and non-Abelian groups

Related posts

[1] Someone from Wolfram Research saw this post and sent me a fix:

pl = ListLogPlot[...]
Export["~/Desktop/img.png", pl]

The permutation symbol

Sometimes simple notation can make a big difference. One example of this is the Kronecker delta function δ_ij which is defined to be 1 if i = j and zero otherwise. Because branching logic is built into the symbol, it can keep branching logic outside of your calculation. That is, you don’t have to write “if … else …” when doing your calculation. You let the symbol handle it.

The permutation symbol ε_ijk is similar. It has some branching logic built into its definition, which keeps branching out of your calculation, letting you handle things more uniformly. In other words, the symbol encapsulates some complexity, keeping it out of your calculation. This is analogous to how you might reduce the complexity of a computer program. [1]

Definition

The permutation symbol, sometimes called the Levi-Civita symbol, can have any number of subscripts. If any two of the subscripts are equal, the symbol evaluates to 0. Otherwise, the symbol evaluates to 1 or -1. If you can order the indices with an even number of swaps, the sign of the permutation is 1. If it takes an odd number of swaps, the sign is -1. You could think of putting the indices into a bubble sort algorithm and counting whether the algorithm does an even or odd number of swaps.

(There’s an implicit theorem here saying that the definition above makes sense. You could change one order of indices to another by different series of swaps. Two different ways of getting from one arrangement to another may use a different number of swaps, but the number of swaps in both approaches will have the same parity.)
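The definition translates directly into code. Here is a sketch that counts swaps with a bubble sort, exactly as described above; the function name is mine.

    def perm_sign(indices):
        # 0 if any index repeats, otherwise +1 or -1 from the parity of bubble-sort swaps
        a = list(indices)
        if len(set(a)) < len(a):
            return 0
        swaps = 0
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swaps += 1
        return 1 if swaps % 2 == 0 else -1

    print(perm_sign((2, 1, 3)), perm_sign((3, 1, 2)), perm_sign((1, 2, 2)))   # -1 1 0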

Incidentally, I mentioned even and odd permutations a few days ago in the context of finite simple groups. One of the families of finite simple groups are the alternating groups, the group of even permutations on a set with at least five elements. In other words, permutations whose permutation symbol is 1.

Examples

For example, ε_213 = -1 because it takes one adjacent swap, exchanging the 2 and the 1, to put the indices in order. ε_312 = 1 because you can put the indices in order with two adjacent swaps: 3 <-> 1, then 3 <-> 2. The symbol ε_122 is 0 because the last two indices are equal.

Mathematica

You can compute permutation symbols in Mathematica with the function Signature. For example,

    Signature[{3, 1, 2}]

returns 1. The function works with more indices as well. For example,

    Signature[{3, 1, 2, 5, 4}]

returns -1.

Python

SymPy has a function LeviCivita for computing the permutation symbol. It also has Eijk as an alias for LeviCivita. Both take a variable number of integers as arguments, not a list of integers as Mathematica does. If you do have a list of integers, you can use the * operator to unpack the list into separate arguments.

    from sympy import Eijk, LeviCivita
    from numpy.random import permutation

    print( LeviCivita(3, 1, 2) )
    print( Eijk(3, 1, 2, 5, 4) )
    p = permutation(5)
    assert(Eijk(*p) == LeviCivita(*p))

Product formula

When all indices are distinct, the permutation symbol can be computed from a product. For two indices,

\epsilon_{ij} = \frac{j-i}{|j-i|}

For three indices,

\epsilon_{ijk} = \frac{(j - i)}{|j - i|} \frac{(k-i)}{|k-i|} \frac{(k - j)}{|k-j|}

and in general

\epsilon_{i_1 i_2 \cdots i_n} = \prod_{p > q} \frac{i_p - i_q}{|i_p - i_q|}
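As a sanity check, here is a sketch that compares the product formula against SymPy’s LeviCivita function on all permutations of four distinct indices; the helper function is mine.

    from itertools import permutations
    from math import prod
    from sympy import LeviCivita

    def eps_product(idx):
        # product formula; assumes the indices are distinct
        return prod((idx[p] - idx[q]) / abs(idx[p] - idx[q])
                    for p in range(len(idx)) for q in range(p))

    assert all(int(eps_product(idx)) == LeviCivita(*idx)
               for idx in permutations((1, 2, 3, 4)))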

Cross products

An example use of the permutation symbol is cross products. The ith component of the cross product b × c is

(b \times c)^i = \epsilon_{ijk} b^j c^k

Here we’re using tensor notation where components are indicated by superscripts rather than subscripts, and there’s an implied summation over repeated indices. So here we’re summing over j and k, each running from 1 to 3.

Similarly, the triple product of vectors a, b and c is

a \cdot (b \times c) = \epsilon_{ijk} \,a^i b^j c^k

This is also the determinant of the matrix whose rows are the vectors a, b, and c. Determinants of larger matrices work the same way.
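In NumPy the implied summations can be written with einsum. Here is a sketch that builds the rank-3 ε tensor and checks the cross product and triple product against NumPy’s built-ins; the example vectors are arbitrary.

    import numpy as np
    from sympy import LeviCivita

    # rank-3 permutation tensor; 0-based indices give the same signs
    eps = np.array([[[int(LeviCivita(i, j, k)) for k in range(3)]
                     for j in range(3)] for i in range(3)])

    a = np.array([7.0, 8.0, 9.0])
    b = np.array([1.0, 2.0, 3.0])
    c = np.array([4.0, 5.0, 6.0])

    cross  = np.einsum("ijk,j,k->i", eps, b, c)    # (b x c)^i = eps_ijk b^j c^k
    triple = np.einsum("ijk,i,j,k", eps, a, b, c)  # a . (b x c)

    print(cross, np.cross(b, c))                   # both [-3.  6. -3.]
    print(triple, np.dot(a, np.cross(b, c)))       # both 0.0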

Relation to Kronecker delta

This post started out by talking about the more familiar Kronecker delta as an introduction to the permutation symbol. There is a nice relation between the two given below.

\epsilon_{ijk} \epsilon_{rst} = \begin{vmatrix} \delta_{ir} & \delta_{is} & \delta_{it} \\ \delta_{jr} & \delta_{js} & \delta_{jt} \\ \delta_{kr} & \delta_{ks} & \delta_{kt} \end{vmatrix}

If we set r = i and sum over i we get the special case

\epsilon_{ijk} \epsilon_{ist} = \delta_{js} \delta_{kt} - \delta_{jt} \delta_{ks}

Related posts

[1] One way of measuring the complexity of a computer program is the maximum number of logic branches in any function. If you have a moderately complex function, and you replace an if-then statement with a call to a small function that has an if-then statement, you’ve reduced the overall complexity. This is sort of what the delta and permutation functions do.

A strange sort of product rule

Let u be a real-valued function of n variables, and let v be a vector-valued function of n variables, a function from n variables to a vector of size n. Then we have the following product rule:

D(uv) = v Du + u Dv.

It looks strange that the first term on the right isn’t Du v.

The function uv is a function from n dimensions to n dimensions, so its derivative must be an n by n matrix. So the two terms on the right must be n by n matrices, and they are. But Du v is a 1 by 1 matrix, so it would not make sense on the right side.

Here’s why the product rule above looks strange: the multiplication by u is a scalar product, not a matrix product. Sometimes you can think of real numbers as 1 by 1 matrices and everything works out just fine, but not here. The product uv doesn’t make sense if you think of the output of u as a 1 by 1 matrix. Neither does the product u Dv.

If you think of v as an n by 1 matrix and Du as a 1 by n matrix, everything works. If you think of v and Du as vectors, then v Du is the outer product of the two vectors. You could think of Du as the gradient of u, but be sure you think of it horizontally, i.e. as a 1 by n matrix. And finally, D(uv) and Dv are Jacobian matrices.

Update: As Harald points out in the comments, the usual product rule applies if you write the scalar-vector product uv as the matrix product vu, where now we are thinking of u as a 1 by 1 matrix! Now the product rule looks right

D(vu) = Dv u + v Du

but the product vu looks wrong because you always write scalars on the left. But here u isn’t being treated as a scalar!
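Here is a small symbolic check of the rule with SymPy; the functions u and v are arbitrary examples of mine.

    from sympy import Matrix, sin, symbols

    x, y = symbols("x y")
    u = x * y**2                            # scalar-valued function of (x, y)
    v = Matrix([x**2, sin(y)])              # vector-valued function, written as a column

    Du  = Matrix([[u.diff(x), u.diff(y)]])  # 1 x n gradient, written horizontally
    Dv  = v.jacobian([x, y])                # n x n Jacobian of v
    Duv = (u * v).jacobian([x, y])          # n x n Jacobian of the product uv

    print(Duv - (v * Du + u * Dv))          # Matrix([[0, 0], [0, 0]])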

Koide’s coincidence

The p-norm of a vector is defined to be the pth root of the sum of its pth powers.

|| x ||_p = \left(\sum_{i=1}^n x_i^p\right )^{1/p}

Such norms occur frequently in application [1]. Yoshio Koide discovered in 1981 that if you take the masses of the electron, muon, and tau particles, the ratio of the 1 norm to the 1/2 norm is very nearly 2/3. Explicitly,

\frac{m_e + m_{\mu} + m_{\tau}}{(\sqrt{m_e}+\sqrt{m_{\mu}}+\sqrt{m_{\tau}})^2} \approx \frac{2}{3}

to at least four decimal places. Since the ratio is tantalizingly close to 2/3, some believe there’s something profound going on here and the value is exactly 2/3, but others believe it’s just a coincidence.

The value of 2/3 is interesting for two reasons. Obviously it’s a small integer ratio. But it’s also exactly the midpoint between the smallest and largest possible value. More on that below.

Is the value 2/3 within the measurement error of the constants? We’ll investigate that with a little Python code.

Python code

The masses of particles are available in the physical_constants dictionary in scipy.constants. For each constant, the dictionary contains the best estimate of the value, the units, and the uncertainty (standard deviation) in the measurement [2].

    from scipy.constants import physical_constants as pc
    from scipy.stats import norm
    
    def pnorm(v, p):
        return sum([x**p for x in v])**(1/p)
    
    def f(v):
        return pnorm(v, 1) / pnorm(v, 0.5)
    
    m_e = pc["electron mass"]
    m_m = pc["muon mass"]
    m_t = pc["tau mass"]
    
    v0 = [m_e[0], m_m[0], m_t[0]]
    
    print(f(v0))

This says that the ratio of the 1 norm and 1/2 norm is 0.666658, slightly less than 2/3. Could the value be exactly 2/3 within the resolution of the measurements? How could we find out?

Measurement uncertainty

The function f above is minimized when its arguments are all equal, and maximized when its arguments are maximally unequal. To see this, note that f(1, 1, 1) = 1/3 and f(1, 0, 0) = 1. You can prove that those are indeed the minimum and maximum values. To see if we can make f larger, we want to increase the largest value, the mass of tau, and decrease the others. If we move each value one standard deviation in the desired direction, we get

    v1 = [m_e[0] - m_e[2], 
          m_m[0] - m_m[2], 
          m_t[0] + m_t[2]]
    print(f(v1))

which returns 0.6666674, just slightly bigger than 2/3. Since the value can be bigger than 2/3, and less than 2/3, the intermediate value theorem says there are values of the constants within one standard deviation of their mean for which we get exactly 2/3.

Now that we’ve shown that it’s possible to get a value above 2/3, how likely is it? We can do a simulation, assuming each measurement is normally distributed.

    N = 1000
    above = 0
    for _ in range(N):
        r_e = norm(m_e[0], m_e[2]).rvs()
        r_m = norm(m_m[0], m_m[2]).rvs()
        r_t = norm(m_t[0], m_t[2]).rvs()
        t = f([r_e, r_m, r_t])
        if t > 2/3:
            above += 1
    print(above)

When I ran this, I got 168 values above 2/3 and the rest below. So based solely on our calculations here, not taking into account any other information that may be important, it’s plausible that Koide’s ratio is exactly 2/3.

***

[1] Strictly speaking, we should take the absolute values of the vector components. Since we’re talking about masses here, I simplified slightly by assuming the components are non-negative.

Also, what I’m calling the p “norm” is only a norm if p is at least 1. Values of p less than 1 do occur in application, even though the functions they define are not norms. I’m pretty sure I’ve blogged about such an application, but I haven’t found the post.

[2] The SciPy library obtained its values for the constants and their uncertainties from the CODATA recommended values, published in 2014.

Density of the Great Pacific Garbage Patch

The Great Pacific Garbage Patch (GPGP) is a huge region of ocean trash twice the area of Texas. I’m trying to understand how dense it is, and running into contradictory information.

This article describes a project, Ocean Cleanup, that aims to clean up half the GPGP in five years. How could you possibly clean up a garbage patch bigger than Texas in five years? That made me suspect the GPGP isn’t as dense a garbage patch as I imagined, and it’s not.

The article mentioned above says Ocean Cleanup would remove 5.5 metric tons of trash a month, and clean up half the GPGP in five years [1]. (I hope they can!) That implies the GPGP contains 660 metric tons of trash. Wikipedia says it contains 80,000 metric tons of trash. Somebody is off by two orders of magnitude! If Wikipedia is right about the mass, and if Ocean Cleanup is right that they can remove half of it in five years, then they’ll have to remove 700 tons of trash per month.

Not exactly a garbage patch

The Wikipedia article on the GPGP does say that “garbage patch” is misleading.

There has been some controversy surrounding the use of the term “garbage patch” and photos taken off the coast of Manila in the Philippines in attempts to portray the patch in the media often misrepresenting the true scope of the problem and what could be done to solve it. Angelicque White, Associate Professor at Oregon State University, who has studied the “garbage patch” in depth, warns that “the use of the phrase ‘garbage patch’ is misleading. … It is not visible from space; there are no islands of trash; it is more akin to a diffuse soup of plastic floating in our oceans.”

Density

So how dense is it? Let’s assume 80,000 metric tons over an area twice the size of Texas. The area of Texas is 700,000 km², so that’s 8 × 10^10 grams of trash over 1.4 × 10^12 square meters, or 57 milligrams per square meter.

An empty water bottle weighs about 20 grams, and an American football field covers 5300 square meters, so this would be the same density of plastic as 15 empty water bottles scattered over a football field. This is an average. No doubt the density is higher in some areas and lower in others.
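The arithmetic, as a quick sketch; the mass and area figures are the ones quoted above, and the bottle mass and field area are my round numbers.

    mass_g  = 80_000 * 1e6           # 80,000 metric tons in grams
    area_m2 = 2 * 700_000 * 1e6      # twice the area of Texas in square meters
    density = mass_g / area_m2       # grams per square meter
    print(1000 * density)            # about 57 mg per square meter

    bottle_g, field_m2 = 20, 5300
    print(density * field_m2 / bottle_g)   # about 15 bottles per football field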

***

[1] The video in the article says Ocean Cleanup would remove half the GPGP every five years, implying that the rate of clean up will decline exponentially.

Enough group theory for now

I’ve written three blog posts lately about the classification of finite simple groups. I’m done with that topic for now. I may come back and revisit it in the future. So if group theory isn’t your favorite topic, don’t worry. I don’t know what I’ll blog about next, but it’ll probably be one of the topics I often write about.

 

Overlap in the classification of finite simple groups

The previous post defined the groups PSL(n, q) where n is a positive integer and q is a prime power. These are finite simple groups for n ≥ 2 except for PSL(2, 2) and PSL(2, 3).

Overlap among PSL(n, q)

There are a couple instances where different values of n and q lead to isomorphic groups: PSL(2, 4) and PSL(2, 5) are isomorphic, and PSL(2, 7) and PSL(3, 2) are isomorphic. These are the only instances [1].

With the exceptions stated above, distinct values of n and q lead to distinct groups. Is it possible for different choices of n and q to lead to groups of the same size, even though the groups are not isomorphic to each other? Yes, PSL(3, 4) and PSL(4, 2) both have order 20160, but the groups are not isomorphic. This is the only example [2].

Overlap between PSL and alternating groups

The first post in this series mentioned that for n ≥ 5, the alternating group A_n, the group of even permutations on a set of n elements, is a simple group. Three of the alternating groups are isomorphic to PSL groups:

  • PSL(2, 4) = PSL(2, 5) = A_5
  • PSL(2, 9) = A_6
  • PSL(4, 2) = A_8

Here “=” really means isomorphic. We mentioned PSL(4, 2) above. It has the same order as PSL(3, 4). This means that A_8 and PSL(3, 4) have the same order but are not isomorphic.

I suspect that with a small number of exceptions, the order of a finite simple group determines the group. I haven’t proven that, but numerical exploration suggests it’s true. This page lists non-Abelian finite simple groups of order less than 10 billion, and there are only seven orders that correspond to more than one group, the largest example being order 25,920.

One last overlap

There is only one other duplication in the lists of groups in the CFSG theorem, and that is PSU(4, 2) = PSp(4, 3). I haven’t written about these groups yet.

Notes

[1] See The Finite Simple Groups by Robert A. Wilson

[2] In fact, aside from the groups mentioned in this post, the orders of all the finite simple groups are unique except for two non-isomorphic families with equal orders: PΩ_{2n+1}(q) and PSp_{2n}(q) for n ≥ 3 and odd prime powers q. See discussion on Math Overflow.