Stochastic independence

Independence in probability can be both intuitive and mysterious. Intuitively, two events are independent if they have nothing to do with each other. Suppose I ask you to guess whether the next person walking down the street is left-handed. I stop this person, and before I ask which hand he writes with, I ask him for the last digit of his phone number. He says it’s 3. Does knowing the last digit of his phone number make you change your estimate of the chances this stranger is a southpaw? No, phone numbers and handedness are independent. Presumably about 10% of right-handers, and the same percentage of left-handers, have phone numbers ending in 3. Even if the phone company is more or less likely to assign numbers ending in 3, there’s no reason to believe it takes customer handedness into account when handing out numbers. On the other hand, if I tell you the stranger is an artist, that should change your estimate: a disproportionate number of artists are lefties.

Formally, two events A and B are independent if P(A and B) = P(A) P(B). This implies that P(A | B), the probability of A happening given that B happened, is just P(A), and similarly that P(B | A) = P(B). Knowing whether or not one of the events happened tells you nothing about the likelihood of the other. Knowing someone’s phone number doesn’t help you guess which hand they write with, unless you use the phone number to call them and ask about their writing habits.
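
To see the definition in action, here is a quick simulation sketch in R (the language behind Sweave, which comes up later in these notes). The 10% handedness rate and the uniform digit assignment are just the assumptions from the story above.

    # Simulate handedness and last phone digits independently, then
    # check that the joint probability matches the product.
    set.seed(42)
    n <- 1e6
    left   <- runif(n) < 0.10                      # assume 10% are left-handed
    digit3 <- sample(0:9, n, replace = TRUE) == 3  # last digit uniform

    c(joint = mean(left & digit3), product = mean(left) * mean(digit3))

Both numbers come out near 0.01, just as independence predicts.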

Now let’s extend the definition to more events. A set of events is mutually independent if, for every subset of two or more of the events, the probability that they all occur equals the product of their individual probabilities.

So, let’s look at three events: A, B, and C. If we know P(A and B and C) = P(A) P(B) P(C), are the three events mutually independent? Not necessarily. It is possible for the above equation to hold and yet P(A and B) is not equal to P(A) P(B). The definition of mutual independence requires something of every subset of {A, B, C} with two or more elements, not just the subset consisting of all elements. So we have to look at the subsets {A, B}, {B, C}, and {A, C} as well.
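
Here is a small example of that failure, my own construction rather than anything from the discussion above, verified by brute force in R. Take eight equally likely outcomes and let A = {1, 2, 3, 4}, B = {1, 2, 3, 5}, and C = {1, 6, 7, 8}.

    # Each event has probability 1/2. The triple intersection is {1},
    # with probability 1/8 = (1/2)^3, yet A and B are not independent.
    omega <- 1:8
    A <- c(1, 2, 3, 4); B <- c(1, 2, 3, 5); C <- c(1, 6, 7, 8)
    p <- function(s) length(s) / length(omega)

    p(intersect(intersect(A, B), C))   # 1/8, equals p(A) * p(B) * p(C)
    p(intersect(A, B))                 # 3/8, but p(A) * p(B) = 1/4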

What if A and B are independent, B and C are independent, and A and C are independent? In other words, every pair of events is independent. Is that enough for mutual independence? Surprisingly, the answer is no. It is possible to construct a simple example where

  • P(A and B) = P(A) P(B)
  • P(B and C) = P(B) P(C)
  • P(A and C) = P(A) P(C)

and yet P(A and B and C) does not equal P(A) P(B) P(C).

Here is such an example. Select a card at random from a standard deck and note its suit: (c)lubs, (d)iamonds, (h)earts, or (s)pades. Define the events A = {c, d}, B = {c, h}, and C = {c, s}. Each event has probability 1/2, and each pairwise intersection is the single suit {c}, with probability 1/4 = (1/2)(1/2), so the events are pairwise independent. But the triple intersection is also {c}, with probability 1/4 rather than (1/2)(1/2)(1/2) = 1/8, so A, B, and C are not mutually independent.
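
This is easy to verify by enumeration:

    # Four equally likely suits; each event contains two of them.
    suits <- c("c", "d", "h", "s")
    A <- c("c", "d"); B <- c("c", "h"); C <- c("c", "s")
    p <- function(s) length(s) / length(suits)

    p(intersect(A, B))                 # 1/4 = p(A) * p(B); likewise for B,C and A,C
    p(intersect(intersect(A, B), C))   # 1/4, but p(A) * p(B) * p(C) = 1/8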

Obscuring complexity

Here’s a great quote from The Logic of Failure on obscuring complexity.

By labeling a bundle of problems with a single conceptual label, we make dealing with that problem easier — provided we’re not interested in solving it. … A simple label can’t make the complex nature of the problem go away, but it can so obscure complexity that we lose sight of it. And that, of course, we find a great relief.

Revisiting six degrees and hubs

In 1967, Stanley Milgram conducted the famous experiment that told us there are six degrees of separation between any two people. He gave letters to 160 people in Nebraska and asked each to pass the letter along to an acquaintance who could eventually get it to a particular stockbroker in New York. It took about six links for the letters that arrived to reach the stockbroker.

In 2001, Duncan Watts repeated the experiment, asking 61,000 people to forward emails to eventually reach 18 targets worldwide. The emails took roughly six hops to reach their targets, giving Milgram’s original conclusion more credibility due to the larger sample.

But a secondary conclusion of Milgram’s didn’t hold up. In Milgram’s experiment, half of the letters reached the target via the same three friends of the stockbroker. These people were deemed “hubs.” In Watts’s experiment, only 5% of messages reached their targets via hubs. Watts’s thesis is that while hubs exist — some people are far more connected than others — they’re not as important in spreading ideas as once supposed.

See “Is the Tipping Point Toast?” in the February 2008 issue of Fast Company for more on Duncan Watts and his skepticism regarding the importance of hubs.

Literate programming and statistics

Sweave, mentioned in my previous post, is a tool for literate programming. Donald Knuth invented literate programming and gives this description of the technique in his book by the same name:

I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature. Hence, my title: “Literate Programming.”

Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style. Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what each variable means. He or she strives for a program that is comprehensible because its concepts have been introduced in an order that is best for human understanding, using a mixture of formal and informal methods that reinforce each other.

Knuth says the quality of his code went up dramatically when he started using literate programming. When he published the source code for TeX as a literate program and a book, he was so confident in the quality of the code that he offered cash rewards for bug reports, doubling the amount of the reward with each edition. In one edition, he goes so far as to say “I believe that the final bug in TeX was discovered and removed on November 27, 1985.” Even though TeX is a large program, this was not an idle boast. A few errors were discovered after 1985, but only after generations of Stanford students studied the source code carefully and multitudes of users around the world put TeX through its paces.

While literate programming is a fantastic idea, it has failed to gain a substantial following. And yet Sweave might catch on even though literate programming in general has not.

In most software development, documentation is an afterthought. When push comes to shove, developers are rewarded for putting buttons on a screen, not for writing documentation. Software documentation can be extremely valuable, but it’s most valuable to someone other than the author. And the benefit of the documentation may only be realized years after it was written.

But statisticians are rewarded for writing documents. In a statistical analysis, the document is the deliverable. The benefits of literate programming for a statistician are more personal and more immediate. Statistical analyses are often re-run, with just enough time between runs for the previous work to be completely flushed from short-term memory. Data is corrected or augmented, papers come back from review with requests for changes, etc. Statisticians have more self-interest in making their work reproducible than do programmers.

Patrick McPhee gives this analysis for why literate programming has not caught on.

Without wanting to be elitist, the thing that will prevent literate programming from becoming a mainstream method is that it requires thought and discipline. The mainstream is established by people who want fast results while using roughly the same methods that everyone else seems to be using, and literate programming is never going to have that kind of appeal. This doesn’t take away from its usefulness as an approach.

But statisticians are more free to make individual technology choices than programmers are. Programmers typically work in large teams and have to use the same tools as their colleagues. Statisticians often work alone. And since they deliver documents rather than code, statisticians are free to use Sweave without their colleagues’ knowledge or consent. I doubt whether a large portion of statisticians will ever be attracted to literate programming, but technological minorities can thrive more easily in statistics than in mainstream software development.

Irreproducible analysis

Journals and granting agencies are prodding scientists to make their data public. Once the data is public, other scientists can verify the conclusions. Or at least that’s how it’s supposed to work. In practice, it can be extremely difficult or impossible to reproduce someone else’s results. I’m not talking here about reproducing experiments, but simply reproducing the statistical analysis of experiments.

It’s understandable that many experiments are not practical to reproduce: the replicator needs the same resources as the original experimenter, and so expensive experiments are seldom reproduced. But in principle the analysis of an experiment’s data should be repeatable by anyone with a computer. And yet this is very often not possible.

Published analyses of complex data sets, such as microarray experiments, are seldom exactly reproducible. Authors inevitably leave out some detail of how they got their numbers. In a complex analysis, it’s difficult to remember everything that was done. And even if authors meticulously documented every step of the analysis, journals would not want to publish that much detail. Often an article provides enough clues that a persistent statistician can approximately reproduce the conclusions. But sometimes the analysis is opaque or just plain wrong.

I attended a talk yesterday where Keith Baggerly explained the extraordinary steps he and his colleagues went through in an attempt to reproduce the results in a medical article published last year by Potti et al. He called this process “forensic bioinformatics,” attempting to reconstruct the process that led to the published conclusions. He showed how he could reproduce parts of the results in the article in question by, among other things, reversing the labels on some of the groups. (For details, see “Microarrays: retracing steps” by Kevin Coombes, Jing Wang, and Keith Baggerly in Nature Medicine, November 2007, pp 1276-1277.)

While they were able to reverse-engineer many of the mistakes in the paper, some remain a mystery. In any case, they claim that the results of the paper are just wrong. They conclude “The idea … is exciting. Our analysis, however, suggests that it did not work here.”

The authors of the original article replied that there were a few errors but that these have been fixed and didn’t affect the conclusions anyway. Baggerly and his colleagues disagree. So is this just a standoff, with two sides pointing fingers at each other saying the other guys are wrong? No. There’s an important asymmetry between the two sides: the original analysis is opaque, but the critical analysis is transparent. Baggerly and company have written code to carry out every tiny step of their analysis and made the Sweave code available for anyone to download. In other words, they didn’t just publish their paper; they published the code to write their paper.

Sweave is a program that lets authors mix prose (LaTeX) with code (R) in a single file. Users do not directly paste numbers and graphs into a paper. Instead, they embed the code to produce the numbers and graphs, and Sweave replaces the code with the results of running the code. (Sweave embeds R inside LaTeX the way CGI embeds Perl inside HTML.) Sweave doesn’t guarantee reproducibility, but it is a first step.
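
For the curious, here is a minimal sketch of what a Sweave file looks like. (The chunk name and the coin-flip computation are invented for illustration.)

    \documentclass{article}
    \begin{document}

    We simulated 100 flips of a fair coin.

    <<flips, echo=FALSE>>=
    set.seed(1)
    heads <- sum(rbinom(100, 1, 0.5))
    @

    The simulation produced \Sexpr{heads} heads out of 100 flips.

    \end{document}

Running Sweave on this file executes the R chunk and substitutes the computed value for the \Sexpr expression, so the reported number cannot drift out of sync with the code that produced it.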

Tips for learning regular expressions

Here are a few realizations that helped me the most when I was learning regular expressions.

1. Regular expressions aren’t trivial. If you think they’re trivial, but you can’t get them to work, then you feel stupid. They’re not trivial, but they’re not that hard either. They just take some study.

2. Regular expressions are not command line wild cards. They contain some of the same symbols but they don’t mean the same thing. They’re just similar enough to cause confusion.

3. Regular expressions are a little programming language. Regular expressions are usually contained inside another programming language, like JavaScript or PowerShell. Think of the expressions as little bits of a foreign language, like a French quotation inside English prose. Don’t expect rules from the outside language to have any relation to the rules inside, any more than you’d expect English grammar to apply inside that French quotation.

4. Character classes are a little sub-language within regular expressions. Character classes are their own little world. Once you realize that, and stop expecting the usual regular expression rules to apply inside them, you can see that they’re not very complicated, just different. Failure to realize that they are different is a major source of bugs; the example after this list illustrates the difference.
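
A few lines of R make points 3 and 4 concrete. (Any host language would do; the patterns themselves are standard regular expression syntax.)

    # Outside a character class, "." matches any character.
    grepl("^b.t$", c("bat", "b.t", "bxt"))       # all TRUE

    # Inside a character class the rules change: "^" as the first
    # character means negation, "-" denotes a range, and "." is
    # just a literal period.
    grepl("^b[aeiou]t$", c("bat", "b.t"))        # TRUE  FALSE
    grepl("^b[^aeiou]t$", c("bat", "b.t"))       # FALSE TRUE
    grepl("^[a-z]+$", c("abc", "ABC", "a-z"))    # TRUE  FALSE FALSE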

Once you’re ready to dive into regular expressions, read Jeffrey Friedl’s book (ISBN 0596528124). It’s by far the best book on the subject. Read the first few chapters carefully, but then flip the pages quickly when he goes off into NFA engines and all that.

***

For daily tips on regular expressions, follow @RegexTip on Twitter.

Irrelevant uncertainty

Suppose I asked where you want to eat lunch. Then I told you I was about to flip a coin and asked again where you want to eat lunch. Would your answer change? Probably not, but sometimes the introduction of irrelevant uncertainty does change our behavior.

Here’s an example I’ve seen repeatedly. In adaptive clinical trials, a patient’s treatment is influenced by the data on all previous patients, and it often happens that a particular observation has no immediate impact. Suppose Mr. Smith’s outcome is not yet known. We calculate what the treatment for the next patient will be if Mr. Smith responds well and what it will be if he does not. If the two treatments are the same, why wait to learn his outcome before continuing? Some people accept this reasoning immediately, but others are quite resistant.
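
Here is a sketch of that check in R. The dose-assignment rule below is hypothetical and deliberately crude (give the next patient the dose with the better observed response rate); real adaptive designs use proper models, but the logic of the check is the same.

    # Hypothetical rule: assign the dose with the higher response rate.
    next_dose <- function(responses, doses) {
      rates <- tapply(responses, doses, mean)    # response rate per dose
      as.numeric(names(which.max(rates)))
    }

    doses      <- c(1, 1, 2, 2, 2)   # doses for patients with known outcomes
    responses  <- c(0, 0, 1, 1, 1)   # their outcomes (1 = response)
    smith_dose <- 1                  # Mr. Smith's dose; his outcome is pending

    if_responds <- next_dose(c(responses, 1), c(doses, smith_dose))
    if_not      <- next_dose(c(responses, 0), c(doses, smith_dose))
    if_responds == if_not            # TRUE: no need to wait on Mr. Smith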

Not only may a patient’s outcome be irrelevant, the outcome of an entire clinical trial may be irrelevant. I heard of a conversation with a drug company where a consultant asked what the company would do if their trial were successful. He then asked what they would do if it were not successful. Both answers were the same. He then asked why do the trial at all, but his question fell on deaf ears.

While it is irrational to wait to resolve irrelevant uncertainty, it is a human tendency. For example, businesses may delay a decision on some action pending the outcome of a presidential election, even if they would take the same action regardless of which candidate won. I see how silly this is when other people do it, but it’s not too hard for me to think of analogous situations where I act the same way.

Faith and reason

From Saint Augustine:

Heaven forbid that God should hate in us that by which he made us superior to the animals! Heaven forbid that we should believe in such a way as not to accept or seek reasons, since we could not even believe if we did not possess rational souls.

Musicians, drunks, and Oliver Cromwell

Jim Berger gives the following example illustrating the difference between frequentist and Bayesian approaches to inference in his book The Likelihood Principle.

Experiment 1:

A fine musician, specializing in classical works, tells us that he is able to distinguish whether Haydn or Mozart composed a given piece of classical music. Short excerpts from the works of both composers are selected at random, and the experiment consists of playing them for the musician to identify. The musician makes 10 correct identifications in exactly 10 trials.

Experiment 2:

A drunken man says he can predict which face of a coin will land up when it is tossed. Again, after 10 trials the man has correctly called all 10 throws.

A frequentist statistician would have as much confidence in the musician’s ability to identify composers as in the drunk’s ability to predict coin tosses. In both cases the data are 10 successes out of 10 trials. But a Bayesian statistician would combine the data with a prior distribution. Presumably most people would be inclined a priori to have more confidence in the musician’s claim than in the drunk’s claim. After applying Bayes’ theorem to analyze the data, the credibility of both claims will have increased, though the musician will continue to have more credibility than the drunk. On the other hand, if you start out believing that it is completely impossible for drunks to predict coin flips, then your posterior probability for the drunk’s claim will remain zero, no matter how much evidence you collect.
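
Here is a numerical sketch of that comparison, under a deliberately simple model of my own rather than Berger’s: each claimant either has the claimed skill, in which case every guess is correct, or is merely guessing at random with success probability 1/2.

    # Posterior probability of skill after 10 successes in 10 trials.
    posterior <- function(prior, n = 10) {
      like_skill  <- 1       # P(10 of 10 | has skill)
      like_chance <- 0.5^n   # P(10 of 10 | guessing at random)
      prior * like_skill / (prior * like_skill + (1 - prior) * like_chance)
    }

    posterior(0.5)    # musician, generous prior: about 0.999
    posterior(1e-6)   # drunk, tiny prior: about 0.001, but rising
    posterior(0)      # closed mind: exactly 0, no matter the data

The same calculation anticipates Cromwell’s rule below: the tiny prior climbs with each additional success, while the zero prior stays at zero forever.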

Dennis Lindley coined the term “Cromwell’s rule” for the advice that nothing should have zero prior probability unless it is logically impossible. The name comes from a statement by Oliver Cromwell addressed to the Church of Scotland:

I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

In probabilistic terms, “think it possible that you may be mistaken” corresponds to “don’t give anything zero prior probability.” If an event has zero prior probability, it will have zero posterior probability, no matter how much evidence is collected. If an event has tiny but non-zero prior probability, enough evidence can eventually increase the posterior probability to a large value.

The difference between a small positive prior probability and a zero prior probability is the difference between a skeptical mind and a closed mind.

Unbiased estimators can be absurd

An estimator in statistics is a way of guessing a parameter based on data. An estimator is unbiased if, on average, it gives the right answer: the expected value of the estimates equals the thing being estimated. Sounds eminently reasonable. But it might not be.

Suppose you’re estimating something like the number of car accidents per week in Texas, and you counted 308 the first week. What would you estimate is the probability of seeing no accidents over the next two weeks?

If you use a Poisson model for the number of car accidents, a very common assumption for such data, there is exactly one unbiased estimator of that probability: (-1)^X, where X is the number of accidents observed. Since you counted 308 accidents, an even number, the estimate is 1. Worse, had you counted 307 accidents, the estimate would be -1! The estimator alternates between two ridiculous values, but in the long run these values average out to the true value. Exact in the limit, useless on the way there. A slightly biased estimator would be much more practical.
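
A quick simulation confirms the math: for X drawn from a Poisson(lambda) distribution, the expected value of (-1)^X works out to exp(-2 lambda), the probability of an accident-free two weeks, even though every individual estimate is 1 or -1.

    # Check that the long-run average of (-1)^X equals exp(-2 * lambda).
    set.seed(1)
    lambda <- 0.5            # a small rate makes the limit visible
    x <- rpois(1e6, lambda)
    mean((-1)^x)             # roughly 0.368 ...
    exp(-2 * lambda)         # ... matching exp(-1) = 0.3679

    lambda <- 308            # the accident example: both values near 0,
    x <- rpois(1e6, lambda)  # yet each individual estimate is 1 or -1
    mean((-1)^x)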

See Michael Hardy’s article “An Illuminating Counterexample” for more details.