Are tweets more accurate than science papers?

John Myles White brings up an interesting question on Twitter:

Ioannidis thinks most published biological research findings are false. Do you think >50% of tweets are false?

I’m inclined to think tweets may be more accurate than research papers, mostly because people tweet about mundane things that they understand. If someone says that there’s a long line at the Apple store, I believe them. When someone says that a food increases or decreases your risk of some malady, I’m more skeptical. I’ll wait to see such a result replicated before I put much faith in it. A lot of tweets are jokes or opinions, but those that do make factual statements are often true.

Tweets are not subject to publication pressure; few people risk losing their job if they don’t tweet. Nor is there a positive publication bias: people can tweet positive or negative conclusions. There is a bias toward tweeting what makes you look good, but that bias is not limited to Twitter.

Errors are corrected quickly on Twitter. When I make factual errors on Twitter, I usually hear about it within minutes. As the saga of Anil Potti illustrates, errors or fraud in scientific papers can take years to retract.

(My experience with Twitter may be atypical. I follow people with a relatively high signal to noise ratio, and among those I have a shorter list that I keep up with.)


6 thoughts on “Are tweets more accurate than science papers?”

  1. I am sure I write many stupid things on my blog, but you can see evidence that people check my facts and even issue bug reports against the software I share.

    This does not happen with research papers. I cannot recall a single journal reviewer who actually tried my code and reviewed it. (I don’t expect them to do it anyhow.)

    Also, I don’t have a strong incentive to tweak my experimental results when I present results on my blog. It is not like the blog post is going to appear on my c.v.

  2. I wonder how much that would improve if you restricted the set of journals. For example, I don’t think 50% of the chemistry papers I’ve read are wrong, but then I remember how many terrible journals are out there that I don’t look at.

    Oddly enough, there is apparently a lot of wrong information published in the really good journals, since they tend to push the limits of science, and, well, sometimes you are just plain wrong when you try to push things.

  3. Daniel: I agree. My blog has come under much closer scrutiny than anything I’ve published in an academic journal.

    Canageek: The harder sciences have a better accuracy rate than the soft sciences. Chemistry is much easier to quantify than psychology: there are fewer confounding effects, etc.

    By Bayes’ theorem, the probability that a result is true given that it is published depends on the probability that researchers pick true propositions to try to prove. In a difficult area of research, more researchers start by trying to prove something that isn’t true, and so they get more false positives by chance. In the hard sciences, there is better theoretical guidance for selecting hypotheses to test and thus a higher prior probability that a result will be true.
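
    To make this concrete, here is a back-of-the-envelope calculation. The significance level of 0.05, the power of 0.8, and the two prior values are my own illustrative assumptions, not estimates from any field:

    ```latex
    % Positive predictive value of a published positive result,
    % assuming significance level alpha = 0.05 and power = 0.8
    % (illustrative values only).
    % Let \pi be the prior probability that a tested hypothesis is true.
    \[
      P(\text{true} \mid \text{positive result})
        = \frac{\pi \cdot \text{power}}{\pi \cdot \text{power} + (1 - \pi)\,\alpha}
    \]
    % Hard science with strong theoretical guidance, \pi = 0.5:
    %   (0.5)(0.8) / [(0.5)(0.8) + (0.5)(0.05)] = 0.400 / 0.425 \approx 0.94
    % Difficult area with weak guidance, \pi = 0.1:
    %   (0.1)(0.8) / [(0.1)(0.8) + (0.9)(0.05)] = 0.080 / 0.125 = 0.64
    ```

    The same testing procedure yields far more false positives when most of the hypotheses tried are false, which is the point about theoretical guidance.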

  4. I am a little dubious about classifying scientific papers as “right” or “wrong”. These concepts have some accuracy, but they can also be misleading.

    The essence of science is testing usefully descriptive, falsifiable concepts, to find better abstractions describing some repeatable phenomena. But we know that these abstractions will always be approximations — they will never be the phenomena themselves.

    And to deal with scientific concepts, we introduce further abstractions describing the contexts where those abstractions hold. We also judge them based on criteria that may approximate testing, such as simplicity, popularity, authoritative approval, and so on…

    Anyways, from a scientific point of view, when we say a paper was wrong, we usually mean that we do not know what conditions are required to repeat the results, and when we say a paper was right we usually mean we have not found a better model yet and that the cases where the results do not hold are not immediately obvious.

    So my point is that the “right/wrong” model can sometimes be situational and/or temporary.

    Alternative classification systems include “useful/useless” and “relevant/irrelevant”, or maybe even the classic “reproducible/not-reproducible” (which unfortunately suffer from some of the same situational constraints as “right/wrong”, though typical use conveys their subjective character).

    That said, the “right/wrong” distinction can be both useful and relevant. Still, I suspect that we have better concepts for characterizing universal characteristics of large collections of scientific papers.

  5. Perhaps that is true overall. As Canageek noted, in part it may depend on the journal. It also depends on the author. When I write something for publication, it is reviewed and revised – often for trivial details, in my opinion – but I am forced to justify every statement. While my blog does not include a literature review and long discussions of the theoretical justifications of my hypotheses, what it does usually include is the actual code to generate the results. Since some of my work involves open data, anyone who is fairly proficient with programming can replicate my results.

    In my case, unless I am convinced something is as true as can be determined and I have done numerous replications, I don’t publish it – but then, I haven’t needed to worry about tenure review for a great many years, since I left academia.
