Stephen Covey and Pope Leo X

The second habit in Stephen Covey’s best-selling book The 7 Habits of Highly Effective People is “Begin with the end in mind.” Pope Leo X would have disagreed.

Portrait of Leo X from Wikipedia

Twyla Tharp tells the story of Pope Leo X and his frustration with Leonardo da Vinci in her book The Creative Habit. Da Vinci was experimenting with varnishes and so forth when the Pope thought he should be painting. Leo’s assessment of da Vinci:

This man will never do anything, for he begins thinking about the end before the beginning of his work.


Searching for John Francis

There was an odd story in NA Digest a couple of days ago titled “John Francis of QR found.” When I saw that someone had been found, I assumed he had been lost, as in lost at sea, like Jim Gray. But that wasn’t the case.

John Francis developed the QR algorithm, an algorithm for finding the eigenvalues and eigenvectors of a matrix. Some experts regard the QR algorithm as one of the 10 most important numerical algorithms of the 20th century. He developed the algorithm in 1959 but then left the numerical analysis community three years later. The NA Digest article doesn’t say whether Francis became a recluse or simply moved on to a job outside mathematics. No one in numerical analysis knew anything about him until a couple folks tracked him down recently. He is doing well. He remembers his earlier work clearly but was unaware of the impact it had had.

Related post: Simple legacy (how people often underestimate the importance of their most useful work)

When discoveries stay discovered

In what sense did Christopher Columbus discover America? Obviously he wasn’t the first human to set foot in the New World. Columbus wasn’t even the first European. The Norse explorer Leif Erikson seems to have arrived 500 years before Columbus. But as Stephen Mills famously stated,

There have been other people before Columbus, but when Columbus discovered the New World, it stayed discovered.

The same principle could be used to resolve priority debates in mathematical discoveries. There is some debate over whether John Tukey or Carl Friedrich Gauss first discovered the Fast Fourier Transform (FFT). But there is no doubt that after Tukey discovered it, the FFT stayed discovered. The algorithm is now used in digital signal processing applications everywhere.

Gauss and Tukey were both brilliant mathematicians. Tukey, however, also had an aptitude for creating memorable names. For example, you may have heard of “software,” a term he coined.


What does “classical” mean in science?

The word “classical” has a standard meaning in the humanities, but not in science.

Ward Cheney and Will Light give a candid definition of “classical” in the scientific sense in the introduction to their book on approximation theory:

… the “classical” portion of approximation theory — understood to be the parts of the subject that were already in place when the authors were students.

There you have it: whatever was known when you were in school is classical. Yes, this definition is entirely relative. And it describes common usage pretty well.

Kim Possible and cancer research

When I hear of naked mole rats, I think of Rufus, the animated character from Kim Possible.

rufus icon

But it turns out the real rodents might be useful in cancer research. According to a recent 60-Second Science podcast, naked mole rats live in low-oxygen environments. The core of large tumors is also a low-oxygen environment, and so maybe studying naked mole rats can tell us something about cancer. So far researchers have found three genes in common between naked mole rats and cancer cells.

MIT replaces Scheme with Python

According to an article on The Snowtide Blog, MIT has decided to teach beginning CS classes in Python rather than Scheme. (Thanks to procoders.net for the link.) The article paraphrases remarks by Gerald Sussman at the announcement. The main rationale was that the world is more complicated now than when the course was designed and SICP was written. Now programmers spend more time researching libraries than writing everything from scratch.

Here’s something I expected Sussman to say though apparently he did not: you can use Python as a functional language if you like; you just don’t have to. I don’t know why more people don’t emphasize this. Python is not a functional language, but Python does make it easy to declare functions on the fly, pass functions as arguments, etc. You’re free to write Scheme-like programs in Python if you want to, but you’re also free to use other styles when they are more appropriate.
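To make the point concrete, here is a minimal sketch of Scheme-flavored Python: functions are first-class values, so you can create them on the fly, pass them as arguments, and compose them. The function names here are just illustrations, not part of any library.

```python
from functools import reduce

def compose(f, g):
    """Return the composition f of g, a classic higher-order function."""
    return lambda x: f(g(x))

# Anonymous functions created on the fly, Scheme-style.
inc = lambda x: x + 1
double = lambda x: 2 * x

# Fold compose over a list of functions to build a pipeline.
pipeline = reduce(compose, [double, inc])   # pipeline(x) = double(inc(x))

# Pass a function as an argument to map.
squares = list(map(lambda x: x * x, range(5)))

print(pipeline(3))   # double(inc(3)) = 8
print(squares)       # [0, 1, 4, 9, 16]
```

Nothing here forces a functional style; the same program could be written with loops and mutation, which is exactly the flexibility the post describes.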

Springs, resistors, and harmonic means

Harmonic mean has come up in a couple posts this week (with numbers and functions). This post will show how harmonic means come up in physical problems involving springs and resistors.

Suppose we have two springs in series with stiffness k1 and k2:

Then the combined stiffness k of the two springs satisfies 1/k = 1/k1 + 1/k2. Think about what this says in the extremes. If one of the springs is infinitely stiff, say k2 = ∞, then k = k1. It would be as if the second spring were not there. Being infinitely stiff, we could think of it as an extension of the block it is attached to. Now suppose one of the springs has no stiffness at all, say k1 = 0. Then k = 0. One mushy spring makes the combination mushy.

Next think of two springs in parallel:

Now the combined stiffness of the two springs is k = k1 + k2. Again think of the two extremes. If one spring is infinitely stiff, say k1 = ∞, then k = ∞ and the combination is infinitely stiff. And if one spring has no stiffness, say k2 = 0, then k = k1. We could imagine the spring with no stiffness isn’t there.

The stiffness of springs in series adds harmonically. The stiffness of the combination is half the harmonic mean of the two individual stiffnesses.

Electrical resistors combine in a way that is the opposite of mechanical springs. Resistors in parallel combine like springs in series, and vice versa.

If two resistors have resistance r1 and r2, the combined resistance r of the two resistors in parallel satisfies 1/r = 1/r1 + 1/r2. If one of the resistors has infinite resistance, say r2 = ∞, then r = r1. It would be as if the second resistor were not there. All electrons would flow through the first resistor.

If the two resistors were in series, then r = r1 + r2. If one resistor has infinite resistance, so does the combination. Electrons cannot flow through the combination if they cannot flow through one of the resistors. And if one resistor has zero resistance, say r2 = 0, then r = r1. Since the second resistor offers no resistance to the flow of electrons, it may as well not be there.
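The two combination rules above can be captured in a few lines. This is a small sketch (the function names are mine, not standard): springs in series combine harmonically, springs in parallel add, and resistors do the opposite.

```python
def series_spring(k1, k2):
    """Combined stiffness of two springs in series: 1/k = 1/k1 + 1/k2.
    Handle a zero (mushy) spring explicitly to avoid division by zero."""
    if k1 == 0 or k2 == 0:
        return 0.0
    return 1.0 / (1.0 / k1 + 1.0 / k2)

def parallel_spring(k1, k2):
    """Combined stiffness of two springs in parallel: k = k1 + k2."""
    return k1 + k2

# Resistors combine the opposite way: resistors in parallel combine
# like springs in series, and resistors in series like springs in parallel.
parallel_resistor = series_spring
series_resistor = parallel_spring

print(series_spring(2.0, 2.0))      # 1.0, half the harmonic mean of 2 and 2
print(parallel_resistor(4.0, 4.0))  # 2.0
print(series_resistor(1.0, 3.0))    # 4.0
```

Note that `series_spring(2, 2)` returns 1, half the harmonic mean of the two stiffnesses, as the post says.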

These physical problems illustrate why zeros are handled specially in the definition of means.

Image credit: Wikipedia

Means and inequalities for functions

A post on Monday looked at means and inequalities for lists of non-negative numbers. This post looks at analogous means and inequalities for non-negative functions. We go from means defined in terms of sums to means defined in terms of integrals.

Let p(x) be a non-negative function that integrates to 1. (That makes p(x) a probability density, but you don’t have to think of this in terms of probability.) Think of p(x) as being a weighting function. The dependence on p(x) will be implicit. For a non-negative function f(x) and real number r ≠ 0, define Mr( f ) by

M_r(f) = \left( \int p(x) \left( f(x) \right)^r \, dx \right)^{1/r}

If r > 0 and the integral defining Mr( f ) diverges, we say Mr( f ) = ∞ and M-r( f ) = 0.

Define the geometric mean of the function f by

G(f) = \exp\left( \int p(x) \log f(x) \, dx \right)

There are a few special cases to consider when defining G(f). See Inequalities for details.

First we give several limiting theorems.

  • As r → –∞, Mr( f ) → min( f )
  • As r → +∞, Mr( f ) → max( f )
  • As r → 0+, Mr( f ) → G( f )

And now for the big theorem: If r ≤ s, then Mr( f ) ≤ Ms( f ). The conditions under which equality holds are a little complicated. Again, see Inequalities for details.

We could derive analogous results for infinite sums since sums are just a special case of integrals.

The assumption that the weight function p(x) has a finite integral is critical. We could change the definition of Mr( f ) slightly to accommodate the case that the integral of p(x) is finite but not equal to 1, and all the conclusions above would remain true. But if we allowed p(x) to have a divergent integral, the theorems would not hold. Suppose p(x) is constantly 1, and our region of integration is (0, ∞). Then Mr( f ) might be more or less than Ms( f ) depending on f. For example, let f(x) = b exp( – bx ) for some b > 0. Then M1( f ) = 1, but M∞( f ) = b, and so M1( f ) is less than or greater than M∞( f ) depending on whether b is less than or greater than 1.
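The limits and the monotonicity theorem are easy to check numerically. Here is a sketch using a midpoint Riemann sum on [0, 1] with the constant weight p(x) = 1; the function `M` and the test function f(x) = x + 1 are my own choices for illustration.

```python
import math

def M(r, f, n=100_000):
    """Approximate the power mean M_r(f) over [0, 1] with weight p(x) = 1,
    using a midpoint Riemann sum with n subintervals."""
    xs = [(i + 0.5) / n for i in range(n)]
    if r == 0:
        # Geometric mean: G(f) = exp( integral of log f(x) dx )
        return math.exp(sum(math.log(f(x)) for x in xs) / n)
    return (sum(f(x) ** r for x in xs) / n) ** (1.0 / r)

f = lambda x: x + 1  # a simple positive function on [0, 1]

m_neg1 = M(-1, f)  # harmonic mean, 1/ln 2 ~ 1.4427
m_0    = M(0, f)   # geometric mean, 4/e ~ 1.4715
m_1    = M(1, f)   # arithmetic mean, exactly 3/2
m_2    = M(2, f)   # root mean square, sqrt(7/3) ~ 1.5275

# The means increase with r, as the big theorem requires.
print(m_neg1 <= m_0 <= m_1 <= m_2)  # True
```

The four values line up in increasing order of r, in agreement with the theorem.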

Redbelt problem solving

In the movie Redbelt, Chiwetel Ejiofor plays Mike Terry, a Jiu Jitsu instructor who will fight but will not compete. He will fight in a real fight if necessary, but he won’t fight in a ring because competitions have arbitrary rules. He is a skilled fighter because he is creative, and competitions take away that creativity. At one point in the movie, someone asks Terry if he teaches people to win. He says no, he teaches people to prevail. In his mind, you can’t “win” a fight. A fight is a problem to be solved.

Mike Terry’s distinction between fights and contests makes me think of the distinction between practical and academic problem solving. Practical problem solving does not have arbitrary constraints whereas academic problems often do: you can use this technique but not that one, you can use this reference but not that one, etc. These academic limitations serve a purpose in their context, but sometimes we can imagine these constraints are still on us after we leave the classroom.

Sometimes we’ll struggle mightily to solve a problem analytically that could easily be solved numerically (or vice versa). Or we’ll imagine that a problem must be solved using a particular programming language even though it could be done more easily using a different language. It feels like “cheating” to go for the easier solution. But if you’re not in an academic setting, you can’t “cheat.” (Of course I’m not talking about violating ethical standards to solve a problem, only dismissing artificial restrictions. Where there is no law, there is no sin.)

There may be good reasons for pursuing the more difficult solution. For example, entertainment value. Sometimes we want to see whether we can do something the hard way. There’s nothing wrong with that, as long as we acknowledge that’s what we’re doing. But sometimes we do things the hard way for no good reason other than not having examined our self-imposed limitations. Maybe we’re trying to win rather than solve the problem.

I’m not saying entertainment value is the only reason to go down a more difficult road. Maybe you suspect there will be additional benefits if the more difficult approach succeeds. Again, that’s fine if this is a conscious decision and not a lack of creativity.

Related post: Try the simplest thing that could possibly work

Functional in the small, OO in the large

The most recent episode of Software Engineering Radio is an interview with Luke Hoban, program manager for F#. The interview mentioned the idea of using functional programming in the small and object oriented programming in the large. In other words, individual methods might be implemented using a functional programming paradigm, say in F#, but the code would be bundled into an object for other parts of an application to use. The grand structure would be object oriented, but the small structure would be functional.

I suppose “functional in the small, OO in the large” makes sense, but I could imagine someone arguing for a different way to combine the two approaches to programming. Any thoughts?
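One way to picture the pattern, sketched in Python rather than F#: the object is the unit the rest of the application talks to, while its methods are built from small pure functions. The class and function names here are invented for illustration.

```python
def moving_average(xs, width):
    """Pure function: simple moving average with the given window width."""
    return [sum(xs[i:i + width]) / width for i in range(len(xs) - width + 1)]

def normalize(xs):
    """Pure function: scale a list so its entries sum to 1."""
    total = sum(xs)
    return [x / total for x in xs]

class SignalProcessor:
    """OO in the large: one object exposed to other parts of the program."""
    def __init__(self, width):
        self.width = width

    def process(self, samples):
        # Functional in the small: a pipeline of pure functions.
        return normalize(moving_average(samples, self.width))

proc = SignalProcessor(width=2)
result = proc.process([1.0, 3.0, 5.0, 7.0])
print(result)  # the three window averages 2, 4, 6, scaled to sum to 1
```

The object holds configuration and presents a stable interface; the actual computation is stateless and easy to test in isolation.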

Means and inequalities

The arithmetic mean of two numbers a and b is (a + b)/2.

The geometric mean of a and b is √(ab).

The harmonic mean of a and b is 2/(1/a + 1/b).

This post will generalize these definitions of means and state a general inequality relating the generalized means.

Let x be a vector of non-negative real numbers, x = (x1, x2, x3…, xn). Define Mr( x ) to be

M_r(x) = \left( \frac{1}{n} \sum_{i=1}^n x_i^r \right)^{1/r}

unless r = 0 or r is negative and one of the xi is zero. If r = 0, define Mr( x ) to be the limit of Mr( x ) as r decreases to 0. And if r is negative and one of the xi is zero, define Mr( x ) to be zero. The arithmetic, geometric, and harmonic means correspond to M1, M0, and M-1 respectively.

Define M∞( x ) to be the limit of Mr( x ) as r goes to ∞. Similarly, define M-∞( x ) to be the limit of Mr( x ) as r goes to –∞. Then M∞( x ) equals max(x1, x2, x3…, xn) and M-∞( x ) equals min(x1, x2, x3…, xn).

In summary, the minimum, harmonic mean, geometric mean, arithmetic mean and maximum are all special cases of Mr( x ) corresponding to r = –∞, –1, 0, 1, and ∞ respectively. Of course other values of r are possible; these five are just the most familiar. Another common example is the root-mean-square (RMS) corresponding to r = 2.

A famous theorem says that the geometric mean is never greater than the arithmetic mean. This is a very special case of the following theorem.

If r ≤ s then Mr( x ) ≤ Ms( x ).

In fact we can say a little more. If r < s then Mr( x ) < Ms( x ) unless x1 = x2 = x3 = … = xn or s ≤ 0 and one of the xi is zero.

We could generalize the means Mr a bit more by introducing positive weights pi such that p1 + p2 + p3 + … + pn = 1. We could then define Mr( x ) as

M_r(x) = \left( \sum_{i=1}^n p_i x_i^r \right)^{1/r}

with the same fine print as in the previous definition. The earlier definition reduces to this new definition with pi = 1/n. The above statements about the means Mr( x ) continue to hold under this more general definition.
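The weighted definition, together with its fine print, translates directly into code. Here is a sketch (the function name is mine) covering the special cases r = 0 and r negative with a zero entry:

```python
import math

def power_mean(r, xs, ps=None):
    """Weighted power mean M_r(x) with weights p_i summing to 1.
    Defaults to equal weights p_i = 1/n, recovering the unweighted mean."""
    n = len(xs)
    if ps is None:
        ps = [1.0 / n] * n
    if r == 0:
        # Geometric mean, the limit as r decreases to 0.
        if 0 in xs:
            return 0.0
        return math.exp(sum(p * math.log(x) for p, x in zip(ps, xs)))
    if r < 0 and 0 in xs:
        # Defined to be zero when r is negative and some x_i is zero.
        return 0.0
    return sum(p * x ** r for p, x in zip(ps, xs)) ** (1.0 / r)

xs = [1.0, 4.0]
print(power_mean(-1, xs))  # harmonic mean: 2/(1/1 + 1/4) = 1.6
print(power_mean(0, xs))   # geometric mean: sqrt(1 * 4) = 2
print(power_mean(1, xs))   # arithmetic mean: (1 + 4)/2 = 2.5
```

The three printed values increase with r, a small instance of the theorem above.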

For more on means and inequalities, see Inequalities by Hardy, Littlewood, and Pólya.

Update: Analogous results for means of functions, replacing sums with integrals. Also, physical examples of harmonic mean with springs and resistors.

Related post: Old math books

Unicode function names

Keith Hill has a fun blog post on using Unicode characters in PowerShell function names. Here’s an example from his article using the square root symbol for the square root function.

PS> function √($num) { [Math]::Sqrt($num) }
PS> √ 81
9

As Keith points out, these symbols are not practical since they’re difficult to enter, but they’re fun to play around with.

Here’s another example, using the pound sterling symbol £ for a function to convert British pounds to US dollars.

PS> function £($num) { 1.44*$num }
PS> £ 300.00
432

(As I write this, a British pound is worth $1.44 USD. If you wanted to get fancy, you could call a web service in your function to get the current exchange rate.)

I read once that someone (Larry Wall?) had semi-seriously suggested using the Japanese yen currency symbol ¥ for the “zip” function in Perl 6 since the symbol looks like a zipper.

Mathematica lets you use Greek letters as variable and function names, and it provides convenient ways to enter these characters, either graphically or via their TeX representations. I think this is a great idea. It could make mathematical source code much more readable. But I don’t use it because I’ve never got into the habit of doing so.

There are some dangers to allowing Unicode characters in programming languages. Because Unicode characters are semantic rather than visual, two characters may have the same graphical representation. Here are a couple examples. The Roman letter A (U+0041) and the capital Greek letter Α (U+0391) look the same but correspond to different characters. Also, the Greek letter Ω (U+03A9) and the symbol Ω (U+2126) for Ohms (unit of electrical resistance) have the same visual representation but are different characters. (Or at least they may have the same visual representation. A font designer may choose, for example, to distinguish Omega and Ohm, but that’s not a concern to the Unicode Consortium.)
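You can see both lookalike pairs from Python, whose standard `unicodedata` module exposes character names and normalization:

```python
import unicodedata

# Latin capital A vs. Greek capital Alpha: identical glyphs, different characters.
print("A" == "\u0391")              # False
print(unicodedata.name("\u0391"))   # GREEK CAPITAL LETTER ALPHA

# Greek capital Omega vs. the Ohm sign.
print("\u03a9" == "\u2126")         # False
print(unicodedata.name("\u2126"))   # OHM SIGN

# Unicode normalization resolves the Ohm sign to the Greek letter,
# which is one way compilers could defend against lookalike identifiers.
print(unicodedata.normalize("NFC", "\u2126") == "\u03a9")  # True
```

Normalizing identifiers before comparison is exactly the kind of defense a language with Unicode names needs, or two visually identical variable names could silently refer to different things.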

The buck stops with the programmer

The programmer is the last link in the chain of a software project. Everyone higher up the organization chart can leave out details, but the programmer cannot. Anything left unspecified will be decided by the programmer. He cannot pass the buck because he has to make something work. Programmers make good decisions, bad decisions, and many arbitrary but neutral decisions.

Photo of President Truman with the sign on his desk saying 'the buck stops here'

When you see software with a silly interface, odds are the interface details were not specified and the programmer chose the path of least effort. Why should I press # after entering my five-digit zip code on a phone? It’s logically possible to determine what my zip code is as soon as I enter the fifth digit, but it makes life easier for some phone system programmer if the pound sign is a universal end-of-input signal. Or why should I select an account at the ATM if I only have one account? Again, I’m sure this made life easier for some programmer.


Some programmers are lazy. But some are unsung heroes. They understand gritty details of their company that no one else knows about. They have to: they have to specify these details to dumb machines. Most people don’t want to solve problems down to the final detail and programmers — for better and for worse — are the ones who fill in the gaps.

Some of the details that programmers fill in regard what to do when things go wrong. A client says a program is supposed to collect the user’s Social Security number (SSN). Fine. But what if the user doesn’t have an SSN? What if they enter an invalid SSN? (Is there a way to know whether an SSN is valid?)  After a few questions like that, the client will throw up his hands and leave it up to the developer. But the developer cannot throw up his hands. He has to decide something, even if he passively decides to let the software crash in case of unexpected input.
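Here is what filling in those gaps might look like in practice. This is a hypothetical sketch: the function name and the particular format rules (AAA-GG-SSSS with no all-zero groups) are illustrative decisions a programmer might settle on, not an official specification.

```python
import re

def ssn_status(text):
    """One programmer's answer to the questions the client waved off:
    distinguish a missing SSN, a malformed one, and an obviously
    invalid one, instead of crashing on unexpected input."""
    text = text.strip()
    if not text:
        return "missing"       # the user may simply not have an SSN
    m = re.fullmatch(r"(\d{3})-?(\d{2})-?(\d{4})", text)
    if m is None:
        return "malformed"     # wrong shape entirely
    area, group, serial = m.groups()
    if area == "000" or group == "00" or serial == "0000":
        return "invalid"       # right shape, but a group is all zeros
    return "ok"

print(ssn_status(""))             # missing
print(ssn_status("123-45-6789"))  # ok
print(ssn_status("000-12-3456"))  # invalid
```

Every branch here is a decision nobody above the programmer made, which is exactly the point of the post.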

Update: Changed “analyst” to “client” above. See discussion below. “Client” more accurately reflects what I meant.

Related posts:

Paper doesn’t abort
Where does the programming effort go?
Your job is trivial. (But I couldn’t do it.)