This weekend John Myles White and I discussed true versus publishable results in the comments to an earlier post. Methods that make stronger modeling assumptions lead to more statistical confidence, but less actual confidence. That is, they are more likely to produce positive results, but less likely to produce correct results.
JDC: If some scientists were more candid, they’d say “I don’t care whether my results are true, I care whether they’re publishable. So I need my p-value less than 0.05. Make as strong assumptions as you have to.”
JMW: My sense of statistical education in the sciences is basically Upton Sinclair’s view of the Gilded Age: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
Perhaps I should have said that some scientists know their conclusions are true. They just need the statistics to confirm what they already know.
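To make the opening point concrete, here's a small hypothetical example of my own (the data and tests are illustrative, not from the discussion above). Testing the same sample two ways, a normal-theory test, which leans on a strong distributional assumption, can clear the p &lt; 0.05 bar while an exact sign test, which assumes only that the median is zero under the null, does not.

```python
import math

data = [0.8, 1.2, -0.3, 0.9, 1.5, 0.4, -0.1, 1.1]  # hypothetical sample
n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Strong assumption: treat the standardized mean as exactly normal.
z = mean / (sd / math.sqrt(n))
p_normal = math.erfc(z / math.sqrt(2))  # two-sided p-value

# Weak assumption: only that the null median is zero (exact sign test).
k = sum(x > 0 for x in data)  # number of positive observations
tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n
p_sign = min(1.0, 2 * tail)  # two-sided p-value

print(f"normal-theory p = {p_normal:.4f}")  # below 0.05: "significant"
print(f"sign-test p     = {p_sign:.3f}")    # well above 0.05
```

The stronger test buys a much smaller p-value, but only by assuming more about the data; if that assumption is wrong, the extra confidence is illusory.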
Brian Nosek talks about this theme on the EconTalk podcast. He discusses the conflict of interest between creating publishable results and trying to find out what is actually true. However, he doesn’t just grouse about the problem; he offers specific suggestions for how to improve scientific publishing.
Related post: More theoretical power, less real power