True versus Publishable

This weekend John Myles White and I discussed true versus publishable results in the comments to an earlier post. Methods that make stronger modeling assumptions lead to more statistical confidence, but less actual confidence. That is, they are more likely to produce positive results, but less likely to produce correct results.

JDC: If some scientists were more candid, they’d say “I don’t care whether my results are true, I care whether they’re publishable. So I need my p-value less than 0.05. Make assumptions as strong as you have to.”

JMW: My sense of statistical education in the sciences is basically Upton Sinclair’s view of the Gilded Age: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

Perhaps I should have said that scientists know that their conclusions are true. They just need the statistics to confirm what they know.
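The trade-off can be sketched numerically. This is a toy example, not from the discussion above; the data and the choice of tests are illustrative assumptions. A one-sample z-test that treats the variance as known makes a stronger assumption than the t-test that estimates it, and on the same data it reports a smaller p-value.

```python
import math
from scipy import stats

# Hypothetical measurements; the numbers are made up for illustration.
x = [0.5, 1.5, 0.2, 1.0]
n = len(x)
mean = sum(x) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
se = sd / math.sqrt(n)

# Weaker assumption: the variance is unknown and estimated from the
# data, so we pay for it with the fatter tails of the t distribution.
t = mean / se
p_t = 2 * stats.t.sf(abs(t), df=n - 1)

# Stronger assumption: pretend the estimated sigma is the known true
# sigma and use a z-test. Same statistic, thinner tails, smaller p-value.
p_z = 2 * stats.norm.sf(abs(t))

print(f"t-test p-value (variance estimated):     {p_t:.3f}")
print(f"z-test p-value (variance assumed known): {p_z:.3f}")
```

On this particular sample the z-test clears p < 0.05 while the t-test does not: same data, stronger assumption, more statistical confidence.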

Brian Nosek talks about this theme on the EconTalk podcast. He discusses the conflict of interest between creating publishable results and trying to find out what is actually true. However, he doesn’t just grouse about the problem; he offers specific suggestions for how to improve scientific publishing.

Related post: More theoretical power, less real power

Posted in Science, Statistics
8 comments on “True versus Publishable”
  1. This is the essence of hypothesis testing – if you don’t like the result, you reject the data or the test ;)

  2. John Reid says:

    “Perhaps I should have said that scientists know that their conclusions are true. They just need the statistics to confirm what they know.”

    I thought scientists never knew their conclusions were true, just that they had not yet been disproved.

  3. John says:

    John: That’s a good one! :)

    Yeah, that’s how it works in theory.

  4. Jeff says:

    I think this is one of the hardest things about training graduate students in statistics. We want to encourage them to work with collaborators, but they are frequently in the difficult position of being the gatekeeper of whether results are publishable or not, based on their analyses. There is a delicate balance between being a helpful collaborator on the one hand and a critical eye on the other.

  5. John says:

    Jeff: I’ve had conversations with several folks lately about degrees vs. careers. People often enjoy earning a degree that prepares them for a career they will not enjoy. You bring up a possible example of this.

    If someone doesn’t want to be both a gatekeeper and a collaborator, they need to plan their careers accordingly. Maybe they would prefer being an actuary for an insurance company over being a biostatistician for a research hospital.

  6. Mike says:

    I realize this is heresy, especially in a statistical sense, but I think a scientist is often convinced they are right long before they have p < 0.05 level confidence.

    For example, say you are doing computer simulations. You built a model and now want to see how it behaves. You can observe the behavior of your system by making movies and watching for patterns, but it might take hundreds of simulations to have the statistics to back up your observational/physical intuition.

    Not necessarily a bad thing, but it often means 10x more work to prove your idea sufficiently to get published, even though the hypothesis/conclusion statement of your paper doesn’t change over that whole period. I guess that is what grad students are for :) But it means you are often in a situation where 10% of the work is novel science and 90% is putting error bars through the vise.
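    The scaling behind Mike’s point can be sketched with a toy stand-in for a simulation (all parameters here are assumed for illustration): the error bar on an averaged observable shrinks only like 1/sqrt(n), so each 10x increase in runs tightens it by a factor of only about 3.

    ```python
    import random
    import statistics

    random.seed(42)  # fixed seed so the sketch is reproducible

    def run_simulation():
        # Toy stand-in for one expensive simulation run: a noisy
        # measurement of a quantity whose true value is 1.0 (assumed).
        return 1.0 + random.gauss(0, 0.5)

    ses = []
    for n in (10, 100, 1000):
        runs = [run_simulation() for _ in range(n)]
        se = statistics.stdev(runs) / n ** 0.5  # standard error of the mean
        ses.append(se)
        print(f"{n:5d} runs  mean = {statistics.mean(runs):.3f}  error bar (s.e.) = {se:.3f}")
    ```

    Going from 10 runs to 1000 buys only a ~10x tighter error bar: most of the compute goes into the error bars, not the hypothesis.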

  7. ubpdqn says:

    “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” (A. Einstein)

  8. ubpdqn says:

    Or perhaps Jonathan Swift: “you cannot reason a man out of a position he didn’t reason himself into.”