Bayes isn’t magic

If a study is completely infeasible using traditional statistical methods, Bayesian methods are probably not going to rescue it. Bayesian methods can’t squeeze blood out of a turnip.

The Bayesian approach to statistics has real advantages, but sometimes these advantages are oversold. Bayesian statistics is still statistics, not magic.

8 thoughts on “Bayes isn’t magic”

  1. Well, HP must think otherwise: they are paying billions of dollars for Autonomy, our next-door neighbours at Cambridge Business Park. Autonomy was founded by a Cambridge academic, and its information retrieval technology is built entirely on Bayesian mathematics.

  2. @TriSys: I don’t know whether HP’s expenditure is justified or not.

    I work in Bayesian statistics and there are problems that I can only imagine solving with a Bayesian approach. But Bayesian methods can’t defy gravity.

    If you don’t have much information (either data or prior knowledge) then you can’t draw strong conclusions no matter what brand of statistics you practice. This should go without saying. It’s remarkable that some people have gone from an unjustified suspicion of Bayesian methods to unjustified optimism.
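
    To make that concrete, here is a minimal sketch (my illustration, not part of the original comment): with a flat prior and only five observations, the posterior for a proportion stays wide, whatever school of statistics you belong to.

    ```python
    # Conjugate Beta-Binomial update: flat Beta(1, 1) prior, 3 successes
    # in 5 trials. The 95% credible interval barely narrows.
    from scipy import stats

    successes, trials = 3, 5                  # very little data
    a, b = 1, 1                               # flat Beta(1, 1) prior
    posterior = stats.beta(a + successes, b + trials - successes)

    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
    # Roughly (0.22, 0.88): with this little information, no method
    # can pin the parameter down.
    ```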

  3. @John:

    > It’s remarkable that some people have gone from an unjustified suspicion of Bayesian methods to unjustified optimism.

    Possibly they have a poorly-informed prior.

  4. The typical version I’ve encountered is unbounded enthusiasm for modeling things you have no information about. People tend to understand the mechanics of simulating from a posterior in complex models long before they understand where the information for specific parameters comes from. Then it’s easy to fall into the trap of modeling a state (especially in state-space models) that you have no information on other than the prior. I take this to be John’s point: you can only analyze something if your study design gave you information on it, and specifically information that is not completely confounded with something else.
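
    A small worked example of what “completely confounded” means in practice (my sketch, not the commenter’s): if the data only ever see the sum of two parameters, the marginal posterior for each one separately never tightens past what the prior contributes.

    ```python
    # Parameters a and b each get a Normal(0, 1) prior, but the data
    # only see their sum: y_i ~ Normal(a + b, 1). Under the prior,
    # s = a + b and d = a - b are independent, so the likelihood
    # updates s while d keeps its prior variance of 2. The marginal
    # posterior variance of a = (s + d) / 2 is (Var(s | y) + 2) / 4.
    for n in (10, 1_000, 100_000):
        post_var_s = 1.0 / (0.5 + n)            # Normal-Normal update
        post_var_a = (post_var_s + 2.0) / 4.0   # d is untouched by data
        print(f"n = {n:>7}:  Var(a | y) = {post_var_a:.4f}")
    # Var(a | y) approaches 0.5, not 0: the data identify only a + b,
    # and the rest of the answer is pure prior.
    ```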

  5. @Krzysztof Sakrejda-Leavitt,

    Sorry to be late to the party.

    There’s one recent area where some Bayesian methods may have a leg up, and it comes in two forms. One is what used to be called “objective Bayes” techniques, where priors and hyperpriors are not chosen by hand but drawn from a broad family of candidates, with the choice of prior made part of model selection. The other is “likelihood-free” methods, for settings where the sampling distribution is not well understood or is derived empirically, either because it is too complicated or because it is very expensive to calculate. These also go by the name of approximate Bayesian computation (ABC). Until recently, model selection with ABC was a problem, but now there is the ABC-with-random-forests work by Pudlo and colleagues (http://bioinformatics.oxfordjournals.org/content/early/2015/12/23/bioinformatics.btv684.abstract).
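
    The rejection flavor of ABC fits in a few lines; here is a toy sketch (mine, with a Normal simulator standing in for the expensive black box the comment has in mind):

    ```python
    # Rejection ABC: no likelihood is ever evaluated; we only need to
    # simulate data from the model given a parameter draw.
    import numpy as np

    rng = np.random.default_rng(42)
    observed = rng.normal(3.0, 1.0, size=50)   # stand-in for real data
    obs_mean = observed.mean()                 # summary statistic

    def simulate(theta, n=50):
        # In real ABC this is a black-box simulator whose likelihood
        # is intractable or too expensive to write down.
        return rng.normal(theta, 1.0, size=n)

    accepted, epsilon = [], 0.2                # tolerance on the summary
    while len(accepted) < 500:
        theta = rng.uniform(-10.0, 10.0)       # draw from the prior
        if abs(simulate(theta).mean() - obs_mean) < epsilon:
            accepted.append(theta)             # fake data looked real

    print(f"ABC posterior mean ~= {np.mean(accepted):.2f}")
    ```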

    Still no magic, and these can be expensive to calculate. On the other hand, they are new methods.

    As far as the other comments go, even in the world of these new methods and all the enthusiasm for machine learning, it continues to amaze me how powerful and clear the state-space methods of Harvey, Durbin, Koopman, Ooms, Grunwalk, Smith, and Kitagawa (among others) can be.
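
    For readers who have not met those methods, here is a minimal sketch (my own, with made-up parameter values) of the local level model, the “hello world” of that state-space literature:

    ```python
    # Kalman filter for the local level model:
    #   y_t  = mu_t + eps_t,       eps_t ~ N(0, var_eps)
    #   mu_t = mu_{t-1} + eta_t,   eta_t ~ N(0, var_eta)
    import numpy as np

    def local_level_filter(y, var_eps=1.0, var_eta=0.1):
        m, P = 0.0, 1e6                  # vague initial state
        means, variances = [], []
        for obs in y:
            P = P + var_eta              # predict the random-walk state
            K = P / (P + var_eps)        # Kalman gain
            m = m + K * (obs - m)        # update with the observation
            P = (1.0 - K) * P
            means.append(m)
            variances.append(P)
        return np.array(means), np.array(variances)

    rng = np.random.default_rng(1)
    level = np.cumsum(rng.normal(0.0, 0.1 ** 0.5, 100))
    y = level + rng.normal(0.0, 1.0, 100)
    m, P = local_level_filter(y)
    print(f"final filtered level: {m[-1]:.2f} +/- {P[-1] ** 0.5:.2f}")
    ```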
