Popular research areas produce more false results

The more active a research area is, the less reliable its results are.

John Ioannidis suggested in his paper Why Most Published Research Findings Are False that popular areas of research publish a greater proportion of false results. Of course popular areas produce more results, and so they naturally produce more false results in absolute terms. But Ioannidis's claim is stronger: popular areas also produce a greater proportion of false results.

Now Thomas Pfeiffer and Robert Hoffmann have produced empirical support for Ioannidis's theory in their paper Large-Scale Assessment of the Effect of Popularity on the Reliability of Research. Pfeiffer and Hoffmann review two reasons why popular areas produce a higher proportion of false results.

First, in highly competitive fields there might be stronger incentives to “manufacture” positive results by, for example, modifying data or statistical tests until formal statistical significance is obtained. This leads to inflated error rates for individual findings: actual error probabilities are larger than those given in the publications. … The second effect results from multiple independent testing of the same hypotheses by competing research groups. The more often a hypothesis is tested, the more likely a positive result is obtained and published even if the hypothesis is false.

In other words,

  1. In a popular area there’s more temptation to fiddle with the data or analysis until you get what you expect.
  2. The more people who test an idea, the more likely it is that someone will find data supporting it by chance, as the sketch below illustrates.
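
To put a rough number on the second effect: if k research groups independently test a false hypothesis, each with false-positive rate α, the chance that at least one group obtains a spurious positive result is 1 − (1 − α)^k. Here is a minimal Python sketch of that arithmetic; the significance level α = 0.05 and the group counts are illustrative assumptions, not numbers from the paper.

```python
# Chance that at least one of k independent tests of a false hypothesis
# comes back "significant" purely by chance: 1 - (1 - alpha)^k.
# alpha = 0.05 and the group counts are illustrative, not from the paper.
alpha = 0.05

for k in [1, 5, 10, 20, 50]:
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:2d} groups -> P(at least one false positive) = {p_any_false_positive:.2f}")
```

With 20 groups the chance of at least one false positive is already about 64%, and since positive results are the ones most likely to be published, that one spurious success is what shows up in print.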

The authors find evidence for both effects in the literature on protein interactions in yeast. They conclude that “The second effect is about 10 times larger than the first one.”


5 thoughts on “Popular research areas produce more false results”

  1. There is also a counterargument: in popular areas, more people are testing the published results and identifying more errors. In less popular areas there are not that many “vigilantes,” and many false results are never tested.

  2. Panos, I agree that could happen. I make that argument about errors in popular areas of mathematics here. Unfortunately, scientists don’t often try to reproduce each other’s results. It’s a lot cheaper for a mathematician to go over a colleague’s proof than for a scientist to reproduce a colleague’s experiment.

  3. True, deliberate reproduction is unusual – what funding agency would want to pay for it after all? But people often use results from other groups to further their own work. And when results from different groups don’t add up, things that should work don’t, and people start looking for why that may be. Or people may need to redo part of an experiment simply to create the raw material or base for their own work.

    In a large, active field there’s enough inadvertent duplication of effort and comparison between related results that there’s a good chance to find the faulty data. In a slow, sparsely worked field the chance is much lower.

  4. What about the idea that people working in less popular areas are inherently less concerned with scoring big prey than those in more popular fields, and may on average be more concerned with accuracy, at least partly because they are pursuing real passions? Also, it’s not as if working in an uncrowded area means that funding is not an issue; on the contrary, science budgets in the US have been disproportionately slashed for more ‘basic’ research, which includes many less crowded fields. I’m sure it would be hard to find control conditions where the same researcher(s) shifted from or to a more popular field, or where a field changed popularity status around an established researcher, but even a small amount of longitudinal data could be very revealing.
