Years ago, Dentyne chewing gum ran an advertising campaign with the line “four out of five dentists surveyed recommend sugarless gum for their patients who chew gum.” Of course there’s no mention of sample size. Maybe “four out of five” meant 80% of a large survey, or maybe they literally surveyed five dentists.
Even if they only talked to five dentists, you’d think that if four out of the five came to the same conclusion, their advice is probably sound. Individuals have their biases, but if a large majority comes to the same conclusion independently, perhaps some underlying truth, rather than a coincidence of prejudices, is responsible for the consensus.
However, there is a fallacy in the preceding argument. It implicitly assumes that professionals make up their minds independently and that their prejudices are independent. That may be true for some small, objective problem. Several scientists may conduct independent experiments and have independent errors. In that case, if most agree on a measurement, that measurement is likely to be accurate. But ask a group of scientists working in the same area whether their area deserves more funding. Of course they’ll agree. Their financial interests are highly correlated.
James Surowiecki’s book The Wisdom of Crowds argues that crowds can be amazingly intelligent. Crowds can also be incredibly foolish. One of the necessary conditions for crowd wisdom is independence. The book gives examples of experiments in which the average of independent estimates, such as of the weight of a cow or the number of jelly beans in a jar, was surprisingly accurate. But if there were an open debate rather than an anonymous poll, the estimates would no longer be independent. If one influential person offers a guess, other estimates will be anchored by that guess and tend to confirm it.
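The anchoring effect is easy to see in a small simulation. (The counts, noise level, and blending weight below are made-up assumptions for illustration, not numbers from the book.)

```python
import random

random.seed(42)

TRUE_COUNT = 1000  # hypothetical number of jelly beans in the jar
N = 100            # number of people polled

# Independent guesses: noisy but unbiased around the true count.
independent = [random.gauss(TRUE_COUNT, 300) for _ in range(N)]

# Anchored guesses: an influential early guess of 600 pulls everyone
# toward it; each guess blends the anchor with a private estimate.
ANCHOR = 600
anchored = [0.7 * ANCHOR + 0.3 * random.gauss(TRUE_COUNT, 300)
            for _ in range(N)]

mean_indep = sum(independent) / N
mean_anchored = sum(anchored) / N

print(f"true count: {TRUE_COUNT}")
print(f"mean of independent guesses: {mean_indep:.0f}")
print(f"mean of anchored guesses:    {mean_anchored:.0f}")
```

With independent errors, averaging cancels the noise and the mean lands near the truth. Once a shared anchor enters, polling more people no longer helps: the bias is common to everyone and survives the averaging.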
William Briggs has an excellent article this morning on scientific consensus. The context of his article is climate change, though I don’t want to open a debate here on climate change. For that matter, I don’t want to open a debate on the merits of sugarless chewing gum. I’m more interested in what the article says about how a consensus becomes self-reinforcing.
4 thoughts on “Four out of five dentists surveyed”
Not to open a debate, but…
The weasel words in that ad campaign were “…for their patients who chew gum.”
In other words, every dentist presumably recommends against chewing gum at all. But if you are going to chew gum anyway, four out of five think sugarless is less bad.
That’s the reason planning poker works so well. Instead of one person saying “I think it will take 8 hours” and everyone else adjusting up or down while anchored by the original number, all team members must select their cards beforehand and then flip them at the same time. The interesting thing is that estimates tend to vary quite a bit early on, but once a team has been working together for a long time, the estimates agree more and more. That is true consensus.
@Nemo And not only that, they don’t show the question that was asked. “If a patient is chewing a high-sugar gum, would you recommend that they switch to Trident, which is sugarless?”
Richard Dawkins is unimpressed by the current jury system where 12 (or 15 in Scotland) people meet and come to a decision. He’d prefer them to come to a decision independently. If you have a consensus, then convict. Personally, I’m not sure whether that would make things better or worse.
You have a big jar of jelly beans:
a) You ask 100 people to independently estimate how many beans are in the jar, and from their estimates compute an average of 3654.
b) You ask a single person to actually count the number of beans in the jar. They count 4632.
You have to assess which is the right answer. There are basically two issues: (1) evidence and (2) trust.
On evidence. For (a) you have a large number of people who have contributed to the measurement, but ultimately their measurement technique is weak, being based on just looking at the jar. For (b) the person has actually counted the number of beans in the jar, which is what you want to know. So (b) provides stronger evidence, despite only one person contributing to the measurement.
On trust. How can you trust (b)? If (b) knows that the jar can be given to other people who can count the beans, lying would be a poor choice, because attempts to reproduce the result would damage their reputation. So you can make a reasonable assumption of trust in a situation where the experiment can be reproduced. Clearly, if you have the beans counted by different people and always get the same answer, your confidence grows. Reproducibility is a key criterion when assessing evidence.
It is certainly the case that previous measurements can influence current results.
For example, consider the Millikan oil drop experiment http://en.wikipedia.org/wiki/Oil_drop_experiment for measuring the electron charge (e). Millikan’s value for the viscosity of air was incorrect, which led to an incorrect value for e. Subsequent experiments showed a gradual shift from his answer towards the correct value, rather than the abrupt jump you would expect from independent measurements.
Where it is critical, blind analysis of the data can be performed to eliminate the influence of previous results, as in the analysis of supernova data: http://arxiv.org/abs/0804.4142.
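That gradual drift can be mimicked with a toy simulation. (All the numbers below are invented for illustration; they are not Millikan’s actual data.) Each new lab blends its own unbiased measurement with the previously published value, out of reluctance to contradict it, so the published numbers creep toward the truth instead of jumping there.

```python
import random

random.seed(1)

TRUE_E = 1.602   # the real quantity being measured (arbitrary units)
FIRST = 1.560    # an initial published value that is slightly off

# Each new experiment reports a weighted blend of its own raw
# measurement and the previously published value.
published = [FIRST]
for _ in range(20):
    own = random.gauss(TRUE_E, 0.005)           # this lab's raw measurement
    reported = 0.8 * published[-1] + 0.2 * own  # pulled toward the prior result
    published.append(reported)

print(f"first published: {published[0]:.3f}")
print(f"after 20 rounds: {published[-1]:.3f}")
print(f"true value:      {TRUE_E:.3f}")
```

Because each result is anchored to the last, the sequence converges geometrically: the error shrinks round after round but never jumps, which is the pattern the Millikan follow-up measurements showed.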
When you ask for scientific views to be independent, what do you mean?
– Independent of evidence?
How valuable would it be to ask 100 people to estimate the number of beans in a jar if they had NOT LOOKED at the jar?
Likewise, you could ask 20 people on the street if they thought the global mean temperature had increased over the past 50 years. Their views may be independent, but would they be valuable?
If you want to answer that question, you would have to study the literature on global temperature measurements over the past 50 years. If you asked 20 “experts” who had studied the data, would you get independent answers?
No – because they have all been informed by the same set of literature.
Would their views be valuable? Yes – because they have been informed by evidence. The dependence in their views exists precisely because their views are based on the same evidence.
Scientists are people, and so are subject to all the psychological biases that everyone has: confirmation bias, submission to authority, and so on. What corrects this is nature itself: you are observing and measuring things that exist in objective reality.
So Millikan may have used a value for the viscosity of air that was wrong, and the researchers who repeated his work may initially have been reluctant to disagree with his result, and so overestimated their error bars to achieve consistency. But the electron charge did not change! Eventually the results moved away from Millikan’s value, because they were measuring something real.
In science it is observing and measuring the physical world which drives convergence of views.