Ignorance doesn’t change reality: statistics pitfall

Here’s an easy error to fall into in statistics. Suppose I have n samples from a normal(μ, σ²) distribution, say n = 16, and σ is unknown. What is the distribution of the average of the samples? A common mistake is to say Student-t: if σ is known, the sample mean has a normal distribution, otherwise it has a t distribution.

But that’s wrong. Your ignorance of σ does not change the distribution of the data. There’s no spooky quantum effect that changes the data based on your knowledge. A linear combination of independent normal random variables is another normal random variable, so the sample mean has a normal distribution whether or not you know its variance. Your knowledge or ignorance of σ doesn’t change the distribution of the data; it changes what you’re likely to do with the data. When the variance is unknown, you use procedures involving the sample variance rather than the distribution variance: you standardize the sample mean with the sample standard deviation s rather than with σ, and it is that studentized statistic, (x-bar − μ)/(s/√n), that follows a Student t distribution with n − 1 degrees of freedom. This doesn’t change the distribution of the data, but it does change the distribution you (implicitly) construct in your analysis of the data.
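To make this concrete, here is a minimal simulation sketch. The specific parameters (μ = 10, σ = 3), the number of replications, and the use of NumPy and SciPy are choices made purely for illustration, not anything the argument depends on.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, mu, sigma, reps = 16, 10.0, 3.0, 100_000   # made-up values for illustration

# Draw `reps` independent samples of size n from normal(mu, sigma^2).
samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)                    # sample mean of each sample

# The sample mean is normal with mean mu and standard deviation sigma/sqrt(n),
# whether or not the analyst "knows" sigma; knowledge never enters the simulation.
print(xbar.mean())                             # close to mu = 10
print(xbar.std())                              # close to sigma/sqrt(n) = 0.75

# Kolmogorov-Smirnov test against N(mu, sigma^2/n): a large p-value, i.e.
# no evidence that the sample mean follows anything other than a normal distribution.
print(stats.kstest(xbar, "norm", args=(mu, sigma / np.sqrt(n))))
```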

4 thoughts on “Ignorance doesn’t change reality: statistics pitfall”

  1. Interesting observation — I had not known folks thought that way about it. That might be important to keep in mind when teaching this stuff, that students may draw the erroneous conclusion.

    I think this all comes from teaching the case of estimating the population mean when the population variance is known (and the population distribution is known to be normal). Has anyone ever encountered this case in real life? Even disregarding the normality assumption?

    Of course it is a nice mathematical introduction to the case when the population variance is unknown, but still.

    Anyway, another illustration is that even when the population variance is known, if you construct your estimator using the sample variance you get exactly the same result as when it is unknown. I guess it looks more obvious that way around.

    By the way, people generally object to corrections for multiple looks on the grounds that there is no spooky quantum effect of having looked at the data already. Same thing for multiple testing.

  2. The sample mean has a N(μ, σ²/n) distribution. The studentized statistic (x-bar − μ)/(s/√n) has the t distribution. Student didn’t say anything about the sample mean itself, but rather the standardized mean under the null hypothesis.

    All this assumes you’re not modeling (μ, σ²) themselves with probability distributions.

  3. Must add to John’s comment. If the sample mean has a N(μ, σ²/n) distribution, then the standardized sample mean (x-bar − μ)/(σ/√n) also has a normal distribution. The t-distribution comes into play when we must estimate σ with the sample standard deviation, ‘s’. The sample mean standardized by the sample standard deviation, (x-bar − μ)/(s/√n), follows a t-distribution. No (correct) theory says that the sample mean itself follows a t-distribution!
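The distinction drawn in these comments is easy to check by simulation. Again, this is only a sketch with made-up parameters, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, mu, sigma, reps = 16, 10.0, 3.0, 100_000   # made-up values for illustration

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)                    # sample mean of each sample
s = samples.std(axis=1, ddof=1)                # sample standard deviation of each sample

z = (xbar - mu) / (sigma / np.sqrt(n))         # standardized with the true sigma
t = (xbar - mu) / (s / np.sqrt(n))             # studentized with the estimated s

print(stats.kstest(z, "norm"))                 # consistent with N(0, 1)
print(stats.kstest(t, "t", args=(n - 1,)))     # consistent with Student t, n - 1 = 15 df
print(stats.kstest(t, "norm"))                 # rejected: heavier tails than the normal
```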
