Two meanings of “argument”

The most common use of the word “argument” is to describe a disagreement. So the first time you hear “argument” to mean something you pass into a function (either a mathematical function or a programming language function), it sounds odd. How did “argument” come to mean two very different things? Here is an explanation.

It is curious to track the path by which the word “argument” came to have two different meanings, one in mathematics and the other in everyday English. According to the Oxford English Dictionary, the word derives from the Latin for “to make clear, prove”; thus it came to mean, by one thread of derivation, “the evidence offered as proof”, which is to say, “the information offered”, which led to its meaning in Lisp. But in the other thread of derivation, it came to mean “to assert in a manner against which others may make counter assertions”, which led to the meaning of the word as a disputation.

Taken from An Introduction to Programming in Emacs Lisp.

Update: As Dave Richeson points out in the comments below, there are really three meanings of “argument” being discussed.

Eclectic mix podcast

If you’re looking for a way to discover some new music, check out Eclectic Mix. The show lives up to its name, featuring all kinds of music. For example, here’s a show with Latin Giants of Jazz and here’s one with The Monks and Choirs of Kiev Pechersk Lavra.

85% functional language purity

James Hague offers this assessment of functional programming:

My real position is this: 100% pure functional programming doesn’t work. Even 98% pure functional programming doesn’t work. But if the slider between functional purity and 1980s BASIC-style imperative messiness is kicked down a few notches — say to 85% — then it really does work. You get all the advantages of functional programming, but without the extreme mental effort and unmaintainability that increases as you get closer and closer to perfectly pure.
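
Here is a small sketch of my own (not from Hague's post) of what that compromise can look like in C++: a function that is pure as far as its callers are concerned, touching no shared state and doing no I/O, but that allows itself a local mutable accumulator rather than insisting on folds or recursion.

    // Sketch (my own illustration, not Hague's): externally pure, internally imperative.
    // Callers can treat this as a pure function: same input gives same output,
    // no side effects. Inside, a plain loop mutates a local accumulator.
    #include <vector>

    double mean_of_squares(const std::vector<double>& xs) {
        if (xs.empty()) return 0.0;
        double sum = 0.0;                  // local mutation only
        for (double x : xs) sum += x * x;
        return sum / xs.size();
    }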

More functional programming posts

Idea people versus results people

I liked this quote from Hugh MacLeod the other day:

Idea-Driven People come up with Ideas (and Results), more often than Results-Driven People come up with Results (and Ideas).

His quote brings up two related fallacies.

  1. People who are good at one thing must be bad at something else.
  2. People who specialize in something must be good at it.

Neither of these is necessarily true. It’s wrong to assume that because someone is good at coming up with ideas, they must be bad at implementing them. It’s also wrong to assume that someone produces results just because they call themselves results-driven.

The first fallacy comes up all the time in hiring. Job seekers may leave credentials off their résumé to keep employers from assuming that strength in one area implies weakness in another area. When I was looking for my first programming job, some companies assumed I must be a bad programmer because I had a PhD in math. One recruiter suggested I take my degree off my résumé. I didn’t do that, and fortunately I found a job with a company that needed a programmer who could do signal processing.

Andrew Gelman addressed the second fallacy in what he calls the Pinch-Hitter Syndrome:

People whose job it is to do just one thing are not always so good at that one thing.

As he explains here,

The pinch-hitter is the guy who sits on the bench and then comes up to bat, often in a key moment of a close game. When I was a kid, I always thought that pinch hitters must be the best sluggers in baseball, because all they do (well, almost all) is hit. But … pinch hitters are generally not the best hitters.

This makes sense in light of the economic principle of comparative advantage. You shouldn’t necessarily do something just because you’re good at it. You might be able to do something else more valuable. When people in some area don’t do their job particularly well, it may be because those who can do the job better have moved on to something else.

Related post: Self-sufficiency is the road to poverty

Best management decision

In his book The Design of Design, Frederick Brooks describes his most productive decision as a manager at IBM.

My most productive single act as an IBM manager had nothing to do with product development. It was sending a promising engineer to go as a full-time IBM employee in mid-career to the University of Michigan to get a PhD. This action … had a payoff for IBM beyond my wildest dreams.

That engineer was E. F. Codd, father of relational databases.

Related post: Many hands make more work

Many hands make more work

Frederick Brooks is best known as the author of The Mythical Man-Month, a book on software project management first written in 1975 and still popular 35 years later. Brooks has a new collection of essays entitled The Design of Design that was just released this month. In his chapter on collaboration in design, Brooks notes

“Many hands make light work” — Often
But many hands make more work — Always

Collaboration may reduce the amount of work per person, but it will certainly increase the total amount of work to be done. In addition, collaboration is likely to reduce the quality of a design. Earlier in the same chapter Brooks says

Most great works have been made by one mind. The exceptions have been made by two minds.

He gives a long list of designers to support this claim: Homer, Bach, Shakespeare, Gilbert and Sullivan, Michelangelo, Watt, Edison, the Wright Brothers …

The great works Brooks alludes to may have been implemented by teams, but they were not designed by teams.

You can hear Brooks explain why he believes design work doesn’t partition well in his talk “Collaboration and Telecollaboration in Design.” There’s a link to the audio in my blog post on Brooks and conceptual integrity.

C++0x overview

Scott Meyers gives an overview of the new C++ standard in his interview on Software Engineering Radio.

On the one hand, some of the new features sound very nice. For example, C++ will gain

  • Type inference. This will make it easier to work with complex (e.g. template) type definitions.
  • Lambda expressions. These will make working with the standard library much easier.
  • Raw string literals. These will make regular expressions easier to read and write.
  • R-value references. These will make some code more efficient. (A brief sketch of all four appears below.)
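
Here is a small, illustrative sketch of my own (not taken from the interview) showing the four features in one place. It assumes a C++0x/C++11 compiler.

    // Illustrative sketch of the four features listed above (my own example,
    // not from the interview). Requires a C++0x/C++11 compiler.
    #include <algorithm>
    #include <regex>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<std::pair<std::string, int>> scores = {{"alpha", 1}, {"beta", 3}};

        // Type inference: 'auto' stands in for the verbose iterator type.
        for (auto it = scores.begin(); it != scores.end(); ++it)
            it->second += 1;

        // Lambda expression: an anonymous predicate passed straight to the standard library.
        auto n_big = std::count_if(scores.begin(), scores.end(),
            [](const std::pair<std::string, int>& p) { return p.second > 2; });

        // Raw string literal: no doubled backslashes in the regular expression.
        std::regex whitespace(R"(\s+)");

        // R-value reference, via std::move: the vector's buffer is handed over, not copied.
        std::vector<std::pair<std::string, int>> taken = std::move(scores);

        (void)n_big;
        (void)whitespace;
        return taken.size() == 2 ? 0 : 1;
    }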

On the other hand, the new standard will be about twice the size of the previous standard. A complex language is about to become a lot more complex.

The show notes have several links to much more information on the new standard.

More C++ posts

Chances a card doesn’t move in a shuffle

Take a deck of 52 cards and shuffle it well. What is the probability that at least one card will be in the same position as when you started? To answer that question, we first have to define derangements and subfactorials.

A derangement is a permutation of a set that leaves no element where it started. For example, the set {1, 2, 3} has two derangements: {2, 3, 1} and {3, 1, 2}. The arrangement {2, 1, 3}, for example, is not a derangement because it leaves 3 in place. The number of derangements of a set with n elements is !n, the subfactorial of n. (Note that factorial puts the exclamation point after the number and subfactorial puts the exclamation point in front. This is unfortunate notation, but it’s commonly used.)
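
As a quick sanity check (a sketch of my own, not from the original post), you can enumerate all permutations of a small set and count the ones with no fixed point. For n = 3 this gives 2, for n = 4 it gives 9.

    // Sketch (mine): count derangements of {0, 1, ..., n-1} by brute force.
    // Only practical for small n; the count equals !n.
    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    long long count_derangements(int n) {
        std::vector<int> p(n);
        std::iota(p.begin(), p.end(), 0);        // start from the identity permutation
        long long count = 0;
        do {
            bool has_fixed_point = false;
            for (int i = 0; i < n; ++i)
                if (p[i] == i) { has_fixed_point = true; break; }
            if (!has_fixed_point) ++count;
        } while (std::next_permutation(p.begin(), p.end()));
        return count;
    }

    int main() {
        for (int n = 1; n <= 6; ++n)
            std::printf("!%d = %lld\n", n, count_derangements(n));   // 0, 1, 2, 9, 44, 265
        return 0;
    }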

It turns out !n is given by

!n = n!\left( 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + (-1)^n \frac{1}{n!} \right)

Note that the expression in parentheses is the power series for exp(-x) evaluated at x = 1 and truncated after the nth term. It follows that for positive n, you can calculate !n by first computing n!/e and rounding the result to the nearest integer.
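
Here is a minimal sketch of that computation (my own, not from the post). It uses the recurrence !n = n·!(n−1) + (−1)^n, which follows from the series above, and checks the result against round(n!/e). With 64-bit integers it is exact only up to about n = 20.

    // Sketch (mine): subfactorial via the recurrence !n = n * !(n-1) + (-1)^n,
    // cross-checked against round(n!/e). Exact in 64-bit arithmetic up to about n = 20.
    #include <cmath>
    #include <cstdio>

    long long subfactorial(int n) {
        long long d = 1;                          // !0 = 1
        for (int k = 1; k <= n; ++k)
            d = k * d + (k % 2 == 0 ? 1 : -1);
        return d;
    }

    int main() {
        for (int n = 1; n <= 15; ++n) {
            double rounded = std::round(std::tgamma(n + 1.0) / std::exp(1.0));  // round(n!/e)
            std::printf("!%d = %lld, round(n!/e) = %.0f\n", n, subfactorial(n), rounded);
        }
        return 0;
    }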

If a permutation is randomly selected so that all permutations are equally likely, the probability that a permutation is a derangement is !n / n!. As n increases, this probability rapidly approaches 1/e. That means that for moderately large n, the probability that a random permutation will leave at least one element fixed is approximately 1 − 1/e or about 63%. So there’s about a 63% chance that at least one of the cards is in the same position after shuffling the deck.

The approximation above says that !n / n! ≈ 1/e. How good is that approximation? The error is the error in truncating the series for exp(-x). This is an alternating series, so the absolute error is bounded by the first term left out. In other words, the error is less than 1/(n+1)!. So for n = 52, the error is on the order of 10^-70. Even for n as small as 10, the error is on the order of 10^-8.

This says that the probability that at least one card stays in its original position after the deck is shuffled hardly depends on the size of the deck. For example, if the deck had 20 cards or 100 cards rather than 52, the probability of at least one card being unmoved after the cards are shuffled would be essentially the same.
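
To see this numerically, here is a small sketch of my own that computes 1 − !n/n! for a few deck sizes, using the truncated series rather than the huge factorials. The three values agree to the full precision of a double.

    // Sketch (mine): P(at least one card unmoved) = 1 - !n/n!, where !n/n! is the
    // series for exp(-1) truncated after the 1/n! term. Doubles avoid computing n!.
    #include <cmath>
    #include <cstdio>

    double prob_at_least_one_fixed(int n) {
        double s = 0.0, term = 1.0;               // term = (-1)^k / k!
        for (int k = 0; k <= n; ++k) {
            s += term;
            term *= -1.0 / (k + 1);
        }
        return 1.0 - s;                           // s is approximately !n/n!
    }

    int main() {
        const int sizes[] = {20, 52, 100};
        for (int n : sizes)
            std::printf("n = %3d: %.15f\n", n, prob_at_least_one_fixed(n));
        std::printf("1 - 1/e: %.15f\n", 1.0 - 1.0 / std::exp(1.0));
        return 0;
    }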

Extra credit

So far we have only looked at the probability that no card stays in the same place (or the complementary probability, that at least one card stays in the same place). What is the probability that exactly k cards stay in the same place after shuffling? This is given by

\frac{1}{n!} {n \choose k} \, !(n-k)

There are n-choose-k ways to pick which k cards stay put and !(n − k) derangements of the remaining cards. As n increases, this probability approaches 1/(k! e).
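
The formula simplifies: C(n, k) !(n−k)/n! = D(n−k)/k!, where D(m) = !m/m! is the truncated series above. Here is a small sketch of my own comparing the exact probability to the limit 1/(k! e) for n = 52.

    // Sketch (mine): P(exactly k cards fixed) = C(n,k) * !(n-k) / n!, which reduces
    // to D(n-k) / k!, where D(m) = !m/m! is the truncated series for exp(-1).
    #include <cmath>
    #include <cstdio>

    double truncated_series(int m) {              // D(m) = !m / m!
        double s = 0.0, term = 1.0;
        for (int j = 0; j <= m; ++j) { s += term; term *= -1.0 / (j + 1); }
        return s;
    }

    int main() {
        const int n = 52;
        double k_factorial = 1.0;
        for (int k = 0; k <= 5; ++k) {
            if (k > 0) k_factorial *= k;
            double exact = truncated_series(n - k) / k_factorial;
            double limit = 1.0 / (k_factorial * std::exp(1.0));     // 1/(k! e)
            std::printf("k = %d: exact = %.10f, limit = %.10f\n", k, exact, limit);
        }
        return 0;
    }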

See Nonplussed! by Julian Havil for more details.


Rewarding complexity

Clay Shirky wrote an insightful article recently entitled The Collapse of Complex Business Models. The last line of the article contains the observation

… when the ecosystem stops rewarding complexity, it is the people who figure out how to work simply in the present, rather than the people who mastered the complexities of the past, who get to say what happens in the future.

It’s interesting to think how ecosystems reward complexity or simplicity.

Academia certainly rewards complexity. Coming up with ever more complex models is the safest road to tenure and fame. Simplification is hard work and isn’t good for your paper count.

Political pundits are rewarded for complex analysis, though politicians are rewarded for oversimplification.

The software market has rewarded complexity, but that may be changing.
