Could Fisher, Jeffreys and Neyman have agreed on testing?
I find myself firmly in Jeffreys’ camp these days. I have some data D and two hypotheses H0 and H1. If P(D|H1) >> P(D|H0), then I accept H1 and reject H0, unless there is some other evidence to consider. Fisher’s arguments about the probability of observations “at least as extreme” as a given observation no longer make sense to me. What if the density is on a circle? Then there is no tail. What if the density is zero at the mean? Then events are *less* likely as they become less extreme.
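The decision rule above is easy to sketch. Here is a minimal example, assuming two made-up point hypotheses (H0: the data come from N(0,1), H1: from N(3,1)) and a single made-up observation; none of these numbers come from the post:

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical hypotheses and observation, for illustration only
x = 2.5                                  # observed data point D
ratio = normal_pdf(x, 3.0) / normal_pdf(x, 0.0)   # P(D|H1) / P(D|H0)

# ratio = exp(3) ≈ 20, so the data favor H1 by a factor of about 20
print(ratio)
```

No tail probability is involved: only the densities at the observed point matter, which is why the circle and zero-at-the-mean objections don’t arise.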
However, I am not sure of the practical difference. For the normal distribution, you can set up a Jeffreys-style hypothesis test based on P(D|H0)/P(D|H1) < 0.05 and get some critical value different from the classic 1.96. But since the 0.05 is completely arbitrary anyway, what difference does it really make?
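To see what critical value falls out, one has to pick a concrete alternative; the post doesn’t specify one, so the sketch below assumes a point alternative H1: N(mu, 1) against H0: N(0, 1) with mu = 3, chosen only for illustration. For one observation x the likelihood ratio simplifies to exp(mu²/2 − mu·x), and setting it below 0.05 solves to x > mu/2 − ln(0.05)/mu:

```python
import math

# Assumed point alternative, not from the post
mu = 3.0

# P(x|H0)/P(x|H1) = exp(mu**2/2 - mu*x) for N(0,1) vs N(mu,1);
# requiring this ratio to fall below 0.05 gives the threshold below
critical = mu / 2 - math.log(0.05) / mu

print(critical)  # ≈ 2.50, versus the classic two-sided 1.96
```

The threshold also moves with mu, which underlines the point: the cutoff depends both on the arbitrary 0.05 and on which alternative you write down.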
John D. Cook