Early evidence-based medicine

In the 1840s, Ignaz Semmelweis, an assistant professor in the maternity ward of Vienna General Hospital, demonstrated that mortality rates dropped from 12 percent to 2 percent when doctors washed their hands between seeing patients.

His colleagues resisted his findings for a couple of reasons. First, they didn’t want to wash their hands so often. Second, Semmelweis had demonstrated an association but could not give an underlying cause. (This was a few years before the germ theory of disease, and his work helped lead to its discovery.) He was fired, had a nervous breakdown, and died in a mental hospital at age 47. (Reference: Super Crunchers)

We now know that Semmelweis was right and his colleagues were wrong. It’s tempting to think that people in the 1840s were either ignorant or lazy and that we’re different now. But human nature hasn’t changed. If someone asked you to do something you didn’t want to do and couldn’t explain exactly why you should do it, would you listen? You would naturally be skeptical, and that’s a good thing, since most published research results are false.

One thing that has changed since 1840 is the level of sophistication in interpreting data. Today Semmelweis could argue, based on the strength of his data, that his results warrant consideration despite the lack of a causal explanation. Such an argument could be evaluated more readily now that we have widely accepted ways of measuring the strength of evidence. On the other hand, even the best statistical evidence does not necessarily cause people to change their behavior.

This New York Times editorial is a typical apologia for evidence-based medicine. Let’s base medical decisions on evidence! But of course medicine does base decisions on evidence. The question is how medicine should use evidence, and that question is far more complex than it first appears.

Related: Adaptive clinical trial design

One thought on “Early evidence-based medicine”

  1. The same thing can be said about current research in education. There is a big push from many sectors to base all decisions on “scientifically-based” research — randomized trials of interventions, methods or programs. Studies that attempt to understand WHY certain methods work, what is happening in successful interventions or exactly how children are responding to particular programs are being devalued and even excluded from important publications, such as the report this year by the National Mathematics Advisory Panel. An educational program can include many elements related in a complex way and a simple p-test of effectiveness really does very little to advance our understanding of how children learn best. Such “evidence-based” analyses are certainly useful, but used in isolation they give a rather distorted picture of reality.
