I’m teaching an introduction to Bayesian statistics. My first thought was to start with Bayes’ theorem, as many introductions do. But that isn’t the right starting point. Bayes’ theorem is an indispensable tool for Bayesian statistics, but it is not the foundational principle. The foundational principle of Bayesian statistics is the decision to represent uncertainty by probabilities: unknown parameters have probability distributions that represent the uncertainty in our knowledge of their values.

Once you decide to use probabilities to express parameter uncertainty, you inevitably run into the need for Bayes’ theorem to work with these probabilities. Bayes’ theorem is applied constantly in Bayesian statistics, and that is why the field takes its name from the theorem’s author, the Reverend Thomas Bayes (1702–1761). But “Bayesian” doesn’t describe Bayesian statistics quite the way that “frequentist” describes frequentist statistics. The term “frequentist” gets to the heart of how frequentist statistics interprets probability. But “Bayesian” refers to Bayes’ theorem, a **computational tool** for carrying out probability calculations in Bayesian statistics. If frequentist statistics were analogously named, it might be called “Bernoullian statistics” after Jacob Bernoulli’s law of large numbers.

The term “Bayesian” statistics might imply that frequentist statisticians dispute Bayes’ theorem. That is not the case. Bayes’ theorem is a simple mathematical result. What people dispute is the interpretation of the probabilities that Bayesians want to stick into Bayes’ theorem.
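To make concrete just how simple the undisputed mathematical result is, here is a small sketch of Bayes’ theorem applied to a hypothetical diagnostic test (the numbers — prevalence, sensitivity, specificity — are made up for illustration and are not from the post):

```python
# Bayes' theorem: P(H | D) = P(D | H) * P(H) / P(D)
# Hypothetical example: a test with 99% sensitivity and 95% specificity
# for a condition with 1% prevalence.

prior = 0.01           # P(H): prevalence of the condition
sensitivity = 0.99     # P(D | H): probability of a positive test given the condition
false_positive = 0.05  # P(D | not H): 1 - specificity

# P(D) by the law of total probability
evidence = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of the condition given a positive test
posterior = sensitivity * prior / evidence

print(round(posterior, 3))  # → 0.167
```

The arithmetic itself is uncontroversial; the philosophical dispute is over whether the prior here can legitimately represent a degree of belief about a fixed but unknown state of affairs.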

I don’t have a better name for Bayesian statistics. Even if I did, the name “Bayesian” is firmly established. It’s certainly easier to say “Bayesian statistics” than to say “that school of statistics that represents uncertainty in unknown parameters by probabilities,” even though the latter is accurate.


Every statistician, from R.A. Fisher on, uses Bayesian inference when it’s appropriate. What makes a Bayesian a Bayesian is that he or she uses Bayesian inference when it’s inappropriate as well. (And, yes, I’m a Bayesian.)

See this post by Andrew Gelman elaborating on his comment above.

Well, how about:

Laplacian Statistics?

Jeffreysian Statistics?

Coxian/Jaynesian Statistics? (Big hint as to what I’ve been reading lately…)

Thanks very much for this clarification — Bayes’ rule always seemed frequentist to me, and now I see it is not the dividing line between the two camps.

In light of Richard Cox’s theorem, which brings into equivalence Kolmogorov’s axioms for probability theory and the Aristotelian (equivalently Boolean) notions of the AND, OR, and NOT relations when extended to the continuum, if one were to use the word ‘Statistics’, I would probably go with Aristotelian Statistics, out of reverence.

Yet first-order logic is ‘generalized Aristotelian logic,’ and logicians certainly don’t refer to it that way. When a person’s name is attached to a theory, the theory is automatically weakened by the apparent arbitrariness.

Should you truly believe that Bayesian Inference is the one and only way to do logic under uncertainty, call it ‘The Theory of Inference’ or ‘Inference Theory,’ or take a tip from logicians, and call it just plain ‘Inference’. Let other theories of inference compete, in the axiomatic sense, and see if it stands the test of time.

Like all meaningful debates in binary logic, the debates over rational inference will come down to the interpretation of the conditional, which, as it happens, is not the extension of material implication. Recall that one way to interpret Gödel’s second incompleteness theorem is that material implication does not equal deductive implication. Were they equal, an unprovable proposition would vacuously be provable. Projecting this form of argument onto the probabilistic conditional is certainly how I would go about assessing an axiomatization of inference.

-Doug