Common sense and statistics

College courses often begin by trying to weaken your confidence in common sense. For example, a psychology course might start by presenting optical illusions to show that there are limits to your ability to perceive the world accurately. I’ve seen at least one physics textbook that also starts with optical illusions to emphasize the need for measurement. Optical illusions, however, take considerable skill to create. The fact that they are so contrived illustrates that your perception of the world is actually pretty good in ordinary circumstances.

For several years I’ve thought about the interplay of statistics and common sense. Probability is more abstract than physical properties like length or color, and so common sense is more often misguided in the context of probability than in visual perception. In probability and statistics, the analogs of optical illusions are usually called paradoxes: St. Petersburg paradox, Simpson’s paradox, Lindley’s paradox, etc. These paradoxes show that common sense can be seriously wrong, without having to consider contrived examples. Instances of Simpson’s paradox, for example, pop up regularly in application.

Some physicists say that you should always have an order-of-magnitude idea of what a result will be before you calculate it. This implies a belief that such estimates are usually possible, and that they provide a sanity check for calculations. And that’s true in physics, at least in mechanics. In probability, however, it is quite common for even an expert’s intuition to be way off. Calculations are more likely to find errors in common sense than the other way around.

Nevertheless, common sense is vitally important in statistics. Attempts to minimize the need for common sense can lead to nonsense. You need common sense to formulate a statistical model and to interpret inferences from that model. Statistics is a layer of exact calculation sandwiched between necessarily subjective formulation and interpretation. Even though common sense can go badly wrong with probability, it can also do quite well in some contexts. Common sense is necessary to map probability theory to applications and to evaluate how well that map works.

Distribution of a range

Suppose you’re drawing random samples uniformly from some interval. How likely are you to see a new value outside the range of values you’ve already seen?

The problem is more interesting when the interval is unknown. You may be trying to estimate the end points of the interval by taking the max and min of the samples you’ve drawn. But in fact we might as well assume the interval is [0, 1] because the probability of a new sample falling within the previous sample range does not depend on the interval. The location and scale of the interval cancel out when calculating the probability.

Suppose we’ve taken n samples so far. The range of these samples is the difference between the largest and smallest values, i.e. the nth and 1st order statistics, and for samples from a uniform distribution this range has a beta(n-1, 2) distribution. Since a beta(a, b) distribution has mean a/(a+b), the expected value of the sample range from n samples is (n-1)/(n+1). This is also the probability that the next sample, or any particular future sample, will lie within the range of the samples seen so far.

If you’re trying to estimate the size of the total interval, this says that after n samples, the probability that the next sample will give you any new information is 2/(n+1). This is because we only learn something when a sample is less than the minimum so far or greater than the maximum so far.
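
Here’s a quick way to check this numerically. The sketch below is plain Python; the function name and trial count are my own choices for illustration. It draws n uniform samples, then estimates how often one more sample falls outside their range. The estimate should come out close to 2/(n+1).

    import random

    def prob_outside_range(n, trials=100_000):
        """Estimate the probability that one more uniform sample falls
        outside the range of the first n samples."""
        hits = 0
        for _ in range(trials):
            xs = [random.random() for _ in range(n)]
            lo, hi = min(xs), max(xs)
            x_new = random.random()
            if x_new < lo or x_new > hi:
                hits += 1
        return hits / trials

    for n in (4, 9, 19):
        print(n, prob_outside_range(n), 2 / (n + 1))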

Timid medical research

Cancer research is sometimes criticized for being timid. Drug companies run enormous trials looking for small improvements. Critics say they should run smaller trials and more of them.

Which side is correct depends on what’s out there waiting to be discovered, which of course we don’t know. We can only guess. Timid research is rational if you believe there are only marginal improvements that are likely to be discovered.

Sample size increases quickly as the size of the effect you’re trying to find decreases. To establish small differences in effect, you need very large trials.
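
To put rough numbers on that, here’s a back-of-the-envelope sketch using the standard normal-approximation formula for comparing two proportions. The response rates are made up for illustration; this is not a protocol calculation.

    from statistics import NormalDist

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        """Approximate patients per arm needed to distinguish response
        rates p1 and p2 (two-sided test, normal approximation)."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    for p2 in (0.50, 0.30, 0.20, 0.15, 0.11):
        print(p2, round(n_per_arm(0.10, p2)))
    # Detecting 10% vs 50% takes under 20 patients per arm;
    # detecting 10% vs 11% takes nearly 15,000 per arm.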

If you think there are only small improvements on the status quo available to explore, you’ll explore each of the possibilities very carefully. On the other hand, if you think there’s a miracle drug in the pipeline waiting to be discovered, you’ll be willing to risk falsely rejecting small improvements along the way in order to get to the big improvement.

Suppose there are 500 drugs waiting to be tested. All of these are only 10% effective except for one that is 100% effective. You could quickly find the winner by giving each candidate to one patient. For every drug whose patient responded, repeat the process until only one drug is left. One strike and you’re out. You’re likely to find the winner in three rounds, treating fewer than 600 patients. But if all the drugs are 10% effective except one that’s 11% effective,  you’d need hundreds of trials with thousands of patients each.
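
Here’s a small simulation of that one-strike-and-you’re-out scheme, written just to check the arithmetic; the bookkeeping details are my own. It reports how many rounds it took to isolate the effective drug and how many patients were treated along the way.

    import random

    def find_winner(n_drugs=500, p_weak=0.10, p_strong=1.0):
        """Drug 0 is the fully effective one; the rest respond 10% of
        the time. Any drug whose single patient fails is eliminated."""
        drugs = list(range(n_drugs))
        rounds = 0
        patients = 0
        while len(drugs) > 1:
            rounds += 1
            survivors = []
            for d in drugs:
                patients += 1
                p = p_strong if d == 0 else p_weak
                if random.random() < p:
                    survivors.append(d)
            drugs = survivors
        return rounds, patients

    print(find_winner())  # typically around 3 rounds and under 600 patients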

The best research strategy depends on what you believe is out there to be found. People who know nothing about cancer often believe we could find a cure soon if we just spend a little more money on research. Experts are less sanguine, except when they’re asking for money.

The mean of the mean is the mean

There’s a theorem in statistics that says

E(X̄) = μ

You could read this aloud as “the mean of the mean is the mean.” More explicitly, it says that the expected value of the average of some number of samples from some distribution is equal to the expected value of the distribution itself. The shorter reading is confusing since “mean” refers to three different things in the same sentence. In reverse order, these are:

  1. The mean of the distribution, defined by an integral.
  2. The sample mean, calculated by averaging samples from the distribution.
  3. The mean of the sample mean as a random variable.

The hypothesis of this theorem is that the underlying distribution has a mean. Let’s see where things break down if the distribution does not have a mean.

It’s tempting to say that the Cauchy distribution has mean 0. Or some might want to say that the mean is infinite. But if we take any value to be the mean of a Cauchy distribution — 0, ∞, 42, etc. — then the theorem above would be false. The mean of n samples from a Cauchy has the same distribution as the original Cauchy! The variability does not decrease with n, as it would with samples from a normal, for example. The sample mean doesn’t converge to any value as n increases. It just keeps wandering around with the same distribution, no matter how large the sample. That’s because the mean of the Cauchy distribution simply doesn’t exist.
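
A quick simulation shows the contrast. The sketch below uses NumPy; the sample size and seed are arbitrary. It tracks the running mean of normal samples, which settles down near 0, and of Cauchy samples, which never settles down.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    normal_samples = rng.standard_normal(n)
    cauchy_samples = rng.standard_cauchy(n)

    # Running mean after each additional sample.
    k = np.arange(1, n + 1)
    normal_mean = np.cumsum(normal_samples) / k
    cauchy_mean = np.cumsum(cauchy_samples) / k

    for m in (100, 10_000, 100_000):
        print(m, normal_mean[m - 1], cauchy_mean[m - 1])
    # The normal running mean shrinks toward 0 as m grows;
    # the Cauchy running mean keeps wandering.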

On replacing calculus with statistics

Russ Roberts had this to say about the proposal to replace the calculus requirement with statistics for students.

Statistics is in many ways much more useful for most students than calculus. The problem is, to teach it well is extraordinarily difficult. It’s very easy to teach a horrible statistics class where you spit back the definitions of mean and median. But you become dangerous because you think you know something about data when in fact it’s kind of subtle.

A little knowledge is a dangerous thing, more so for statistics than calculus.

This reminds me of a quote by Stephen Senn:

Statistics: A subject which most statisticians find difficult but in which nearly all physicians are expert.

Related: Elementary statistics book recommendation


Nomenclatural abomination

David Hogg calls conventional statistical notation a “nomenclatural abomination”:

The terminology used throughout this document enormously overloads the symbol p(). That is, we are using, in each line of this discussion, the function p() to mean something different; its meaning is set by the letters used in its arguments. That is a nomenclatural abomination. I apologize, and encourage my readers to do things that aren’t so ambiguous (like maybe add informative subscripts), but it is so standard in our business that I won’t change (for now).

I found this terribly confusing when I started doing statistics. The meaning is not explicit in the notation but implicit in the conventions surrounding its use, conventions that were foreign to me since I was trained in mathematics and came to statistics later. When I would use letters like f and g for functions, collaborators would say “I don’t know what you’re talking about.” Neither did I understand what they were talking about, since they used one letter for everything.

What good is an old weather forecast?

Why would anyone care about what the weather was predicted to be once you know what the weather actually was? Because people make decisions based in part on weather predictions, not just weather. Eric Floehr of ForecastWatch told me that people are starting to realize this and are increasingly interested in his historical prediction data.

This morning I thought about what Eric said when I saw a little snow. Last Tuesday ice was predicted, and schools all over the Houston area closed. As it turned out, there was only a tiny amount of ice and the streets were clear. This morning there actually is snow and ice in the area, though not much, and the schools are all open. (There’s snow out in Cypress where I live, but I don’t think there is in Houston proper.)

Aftermath of last Tuesday’s storm


Heterogeneous data

I have a quibble with the following paragraph from Introducing Windows Azure for IT Professionals:

The problem with big data is that it’s difficult to analyze it when the data is stored in many different ways. How do you analyze data that is distributed across relational database management systems (RDBMS), XML flat-file databases, text-based log files, and binary format storage systems?

If data are in disparate file formats, that’s a pain. And from an IT perspective that may be as far as the difficulty goes. But why would data be in multiple formats? Because it’s different kinds of data! That’s the bigger difficulty.

It’s conceivable, for example, that a scientific study would collect the exact same kinds of data at two locations, under as similar conditions as possible, but one site put their data in a relational database and the other put it in XML files. More likely the differences go deeper. Maybe you have lab results for patients stored in a relational database and their phone records stored in flat files. How do you meaningfully combine lab results and phone records in a single analysis? That’s a much harder problem than converting storage formats.


Some fields produce more false results than others

John Ioannidis stirred up a healthy debate when he published Why Most Published Research Findings Are False. Unfortunately, most of the discussion has been over whether the word “most” is correct, i.e. whether the proportion of false results is more or less than 50 percent. At least there is more awareness that some published results are false and that it would be good to have some estimate of the proportion.

However, a more fundamental point has been lost. At the core of Ioannidis’ paper is the assertion that the proportion of true hypotheses under investigation matters. In terms of Bayes’ theorem, the posterior probability of a result being correct depends on the prior probability of the result being correct. This prior probability is vitally important, and it varies from field to field.
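
This can be made concrete with a little arithmetic. If a fraction prior of the hypotheses tested in a field are true, tests run at significance level alpha with the given power, and we ignore bias and multiple testing, then the chance that a positive result is a true positive is prior × power / (prior × power + (1 − prior) × alpha). The numbers below are made up to show the trend, not estimates for any particular field.

    def prob_positive_is_true(prior, alpha=0.05, power=0.80):
        """Chance a statistically significant finding is a true positive,
        given the prior probability that the hypothesis is true."""
        true_pos = prior * power
        false_pos = (1 - prior) * alpha
        return true_pos / (true_pos + false_pos)

    for prior in (0.01, 0.10, 0.50):
        print(prior, round(prob_positive_is_true(prior), 2))
    # prior 0.01 -> 0.14, prior 0.10 -> 0.64, prior 0.50 -> 0.94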

In a field where it is hard to come up with good hypotheses to investigate, most researchers will be testing false hypotheses, and most of their positive results will be coincidences. In another field where people have a good idea what ought to be true before doing an experiment, most researchers will be testing true hypotheses and most positive results will be correct.

For example, it’s very difficult to come up with a better cancer treatment. Drugs that kill cancer in a petri dish or in animal models usually don’t work in humans. One reason is that these drugs may cause too much collateral damage to healthy tissue. Another reason is that treating human tumors is more complex than treating artificially induced tumors in lab animals. Of all cancer treatments that appear to be an improvement in early trials, very few end up receiving regulatory approval and changing clinical practice.

A greater proportion of physics hypotheses are correct because physics has powerful theories to guide the selection of experiments. Experimental physics often succeeds because it has good support from theoretical physics. Cancer research is more empirical because there is little reliable predictive theory. This means that a published result in physics is more likely to be true than a published result in oncology.

Whether “most” published results are false depends on context. The proportion of false results varies across fields. It is high in some areas and low in others.

Convenient and innocuous priors

Andrew Gelman has some interesting comments on non-informative priors this morning. Rather than thinking of the prior as a static thing, think of it as a way to prime the pump.

… a non-informative prior is a placeholder: you can use the non-informative prior to get the analysis started, then if your posterior distribution is less informative than you would like, or if it does not make sense, you can go back and add prior information. …

At first this may sound like tweaking your analysis until you get the conclusion you want. It’s like the old joke about consultants: the client asks what 2+2 equals and the consultant counters by asking the client what he wants it to equal. But that’s not what Andrew is recommending.

A prior distribution cannot strictly be non-informative, but there are common intuitive notions of what it means to be non-informative. It may be helpful to substitute “convenient” or “innocuous” for “non-informative.” My take on Andrew’s advice is something like this.

Start with a prior distribution that’s easy to use and that nobody is going to give you grief for using. Maybe the prior doesn’t make much difference. But if your convenient/innocuous prior leads to too vague a conclusion, go back and use a more realistic prior, one that requires more effort or risks more criticism.
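
As a toy illustration of that workflow (a sketch of my own, not something from Andrew’s post): with binomial data and a conjugate beta prior, it’s easy to see how much an informative prior sharpens the posterior compared to a flat one.

    def beta_posterior(successes, failures, a, b):
        """Posterior mean and standard deviation for a binomial likelihood
        with a beta(a, b) prior."""
        a_post = a + successes
        b_post = b + failures
        total = a_post + b_post
        mean = a_post / total
        sd = (a_post * b_post / (total ** 2 * (total + 1))) ** 0.5
        return mean, sd

    data = (3, 7)  # 3 successes out of 10 trials, made-up numbers

    print(beta_posterior(*data, 1, 1))    # flat, "convenient" prior
    print(beta_posterior(*data, 20, 80))  # informative prior centered at 0.2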

It’s odd that realistic priors can be more controversial than unrealistic priors, but that’s been my experience. It’s OK to be unrealistic as long as you’re conventional.


Elusive statistics

From Controversies in the Foundations of Statistics by Bradley Efron:

Statistics seems to be a difficult subject for mathematicians, perhaps because its elusive and wide-ranging character mitigates against the traditional theorem-proof method of presentation. It may come as some comfort then that statistics is also a difficult subject for statisticians.


Deriving distributions vs fitting distributions

Sometimes you can derive a probability distribution from a list of properties it must have. For example, there are several properties that lead inevitably to the normal distribution or the Poisson distribution.

Although such derivations are attractive, they don’t apply that often, and they’re suspect when they do apply. There’s often some effect that keeps the prerequisite conditions from being satisfied in practice, so the derivation doesn’t lead to the right result.

The Poisson may be the best example of this. It’s easy to argue that certain count data have a Poisson distribution, and yet empirically the Poisson doesn’t fit so well because, for example, you have a mixture of two populations with different rates rather than one homogeneous population. (Sums of independent Poisson random variables are Poisson. Mixtures of Poisson distributions are not.)
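
A short simulation makes the mixture effect visible; the two rates below are arbitrary. Counts drawn from a 50/50 mixture of two Poisson populations have a variance well above their mean, which no single Poisson distribution can match.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Each observation comes from one of two subpopulations with
    # different rates, mixed 50/50.
    rates = rng.choice([2.0, 10.0], size=n)
    counts = rng.poisson(rates)

    print(counts.mean(), counts.var())
    # A Poisson distribution would have variance equal to its mean;
    # here the mean is about 6 but the variance is about 22.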

The best scenario is when a theoretical derivation agrees with empirical analysis. Theory suggests the distribution should be X, and our analysis confirms that. Hurray! The theoretical and empirical strengthen each other’s claims.

Theoretical derivations can be useful even when they disagree with empirical analysis. The theoretical distribution forms a sort of baseline, and you can focus on how the data deviate from that baseline.



No data on the need to bring data

The preface to Elements of Statistical Learning opens with the popular quote

In God we trust, all others bring data. — William Edwards Deming

The footnote to the quote is better than the quote:

On the Web, this quote has been widely attributed to both Deming and Robert W. Hayden; however Professor Hayden told us that he can claim no credit for this quote, and ironically we could find no “data” confirming that Deming actually said this.

Emphasis added.

The fact that so many people attributed the quote to Deming is evidence that Deming in fact said it. It’s not conclusive: popular attributions can certainly be wrong. But it is evidence.

Another piece of evidence for the authenticity of the quote is the slightly awkward phrasing “all others bring data.” The quote is often stated in the form “all others must bring data.” The latter is better, which lends credibility to the former: a plausible explanation for why the more awkward version survives would be that it is what someone, maybe Deming, actually said.

The inconclusive evidence in support of Deming being the source of the quote is actually representative of the kind of data people are likely to bring to someone like Deming.