Here’s a homework problem from a class I taught:

… In past years, the average number of accidents per year was 15, and this year it was 10. Is it justified to claim that the accident rate has dropped?

The naive answer is that of course the rate has dropped: ten is less than fifteen. This reminds me of a joke attributed to Abraham Lincoln:

Q: If you call a tail a leg, how many legs does a dog have?

A: Four. Calling a tail a leg doesn’t make it one.

But the homework problem isn’t asking whether ten is less than fifteen. Part of the purpose of the exercise is to state the problem well. The real question is whether there is good evidence that the fundamental causes of automobile accidents have changed for the better or whether there is a fair chance that a random fluctuation caused the decrease. It turns out the latter is the case given the model (Poisson) that the question suggests using.
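To make the Poisson reasoning concrete, here is a minimal sketch of the calculation: if the long-run rate really is 15 accidents per year, how likely is it to see 10 or fewer in a given year by chance alone? (The numbers 15 and 10 come from the problem; the code is just a direct sum of the Poisson probability mass function.)

```python
import math

# One-sided Poisson check: past average is 15 accidents/year.
# How probable is a count of 10 or fewer from pure chance?
lam = 15       # historical average rate
observed = 10  # this year's count

# P(X <= observed) for X ~ Poisson(lam), summed term by term
p_value = sum(math.exp(-lam) * lam**k / math.factorial(k)
              for k in range(observed + 1))

print(f"P(X <= {observed}) = {p_value:.3f}")  # roughly 0.118
```

Since a drop to 10 or fewer happens roughly 12% of the time under the old rate, the data alone don't justify claiming the underlying rate has changed.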

Think of this example next time you hear politicians say that some measure improved during their administration: economic growth, employment, crime rates, etc. The basic question is whether in fact the measure changed. The next question is whether the change was more likely a coincidence or a genuine improvement. And if there was a real improvement, ask whether the politician deserves any credit.

(The homework exercise came from Statistical Inference, problem 8.2.)

Well said. I have a perfect example of this happening in my recent post “Agreeing to disagree”.

So you're saying that ten is not less than fifteen?

I realize you are talking about significance and degrees of confidence.

Wouldn't it make more sense to point out that a 5% decrease caused by coincidental fluctuation is less important than a 2% decrease caused directly?

I mean, you really can't say that a politician has or has not caused something based solely on whether or not you arbitrarily deem a percentage "big enough", without knowing the mechanics of the system well enough to know how and where the change came from.

The whole point of "significance" is to make coincidental relationships appear unimportant and unworthy of note. But unfortunately the reverse is also true: direct relationships can also appear unworthy of note. And you cannot decide which is which based strictly on the size of the change, without knowing the mechanics of the system.

A greater change makes direct relationships more probable, sure. But you are pretty much declaring that smaller changes have no causal relationship, not that one is less probable.

"Blind faith" mathematicians and statisticians use probability values and certainty values and significance values as proof or disproof of something… and neither can be the case. The fundamental fallacy of statistical reasoning.

Your last paragraph reminds me of so many examples of government speak. For instance, in government speak, a decrease in spending meant that the amount of budget increase for next year was cut. Dollars are never decreased, but spending is reduced because we don't increase the budget as much as we intended to.

All the analysis is saying is that while the underlying accident rate might have come down, the play of chance can't be ruled out.

I know that. I realize that. All I was saying was that the very real chance that it wasn't coincidental at all is being underplayed and trivialized. Blaise, you yourself are guilty of the fallacy I was pointing out… I think… the default position you and John have is "chance caused it" SOLELY because the rate of change was small. Why is that the default position? Why is that the assumed reality?

I, on the other hand, realize that neither declaration can be made. I don't state either as being "truth". I don't say that it's "probably true"… because then I sound somehow more informed than anyone else, my "credentials" being the only reason to believe my conclusion (appeal to authority). I would say simply, truthfully, and without bias, that we lack the information to know. The truth is simply that we cannot know… and just because a politician had a small effect doesn't mean he isn't the direct cause of improvement, especially in the face of the bureaucracy and democracy that stands in the way of any politician's policies for change. It's no wonder that a politician has a small or delayed effect in our society – it's true of everyone. It makes no rational sense to assert anything without "knowing". That is the fallacy of rhetoricians and empiricists alike…

This is a very timely post for me – in the midst of teaching the idea of statistical significance and going through examples – the most common question I am getting is “but I can see in the sample that the mean is smaller – why do I need to do anything else?”.

So of course, I talk about the very thing you discuss here, but I have never thought to connect it to the numbers put out regularly by politicians. This would be a great part of this topic to discuss in the aim of general statistical literacy.

It's worth calling out the implicit assumption that the sample size is constant.

The correct way to approach this is to use statistical process control (see e.g. http://amzn.to/puH7SK for an excellent introduction, or http://amzn.to/nS6fG2 or http://amzn.to/r89iaK for the juicy bits). The control chart techniques of statistical process control are precisely designed to help distinguish between routine variation and special cause variation (the way the distinction is worded here makes sense in English as well as with their SPC definitions). The especially powerful aspect of this approach is that by virtue of the Vysochanskii-Petunin inequality (http://bit.ly/reGy4I), the analysis performs correctly regardless of the underlying distribution. So in real world applications in which the assumptions (of e.g. the Poisson distribution) may not be sufficiently justified, SPC provides a robust approach.

Re: the Vysochanskii-Petunin inequality, of course for this we require unimodality, but that just corresponds to ensuring you are measuring one process.
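The control-chart idea above can be sketched with an individuals (XmR) chart: the natural process limits are placed at the mean plus or minus 2.66 times the average moving range, and a point outside those limits signals special cause variation. The yearly accident counts below are hypothetical, invented only to illustrate the mechanics.

```python
# Individuals (XmR) control chart sketch.
# The counts are hypothetical yearly accident totals averaging about 15.
counts = [14, 17, 15, 13, 16, 18, 14, 15, 16, 12]

mean = sum(counts) / len(counts)

# Moving ranges: absolute differences between consecutive observations
moving_ranges = [abs(b - a) for a, b in zip(counts, counts[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR natural process limits: mean +/- 2.66 * average moving range
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

new_value = 10
signal = new_value > ucl or new_value < lcl
print(f"limits: [{lcl:.1f}, {ucl:.1f}]; {new_value} is a signal: {signal}")
```

With these made-up data, 10 falls inside the limits, so an XmR chart would treat the drop as routine variation rather than a special cause, in line with the Poisson analysis in the post.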