I find it amusing when I hear someone say something is “just an approximation” because their “exact” answer is invariably “just an approximation” from someone else’s perspective. When someone says “mere approximation” they often mean “that’s not the kind of approximation my colleagues and I usually make” or “that’s not an approximation I understand.”

For example, I once audited a class in celestial mechanics. I was surprised when the professor spoke with disdain about some analytical technique as a “mere approximation” since his idea of “exact” only extended to Newtonian physics. I don’t recall the details, but it’s possible that the disreputable approximation introduced no more error than the decision to only consider point masses or to ignore relativity. In any case, the approximation violated the rules of the game.

Statisticians can get awfully uptight about numerical approximations. They’ll wring their hands over a numerical routine that’s only good to five or six significant figures but not even blush when they approximate some quantity by averaging a few hundred random samples. Or they’ll make a dozen gross simplifications in modeling and then squint over whether a *p*-value is 0.04 or 0.06.
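The point about sampling error dwarfing numerical error can be made concrete. Here is a minimal sketch (my own toy example, not from the post): averaging a few hundred random draws gives roughly two significant figures, while a numerical routine "only" good to five or six figures is several orders of magnitude more accurate.

```python
import math
import random

# Toy illustration: estimate the mean of a standard exponential
# distribution (true value 1.0) by averaging 300 random samples.
random.seed(42)
n = 300
samples = [random.expovariate(1.0) for _ in range(n)]
mc_estimate = sum(samples) / n

# The Monte Carlo standard error is sigma / sqrt(n) = 1 / sqrt(300),
# about 0.058 -- roughly two significant figures. A numerical routine
# accurate to five or six figures has error around 1e-6, vastly smaller.
mc_error = 1.0 / math.sqrt(n)
print(f"MC estimate: {mc_estimate:.4f}, typical error ~ {mc_error:.3f}")
```

The contrast is the point: fretting over the sixth digit of a special-function evaluation while accepting a two-digit Monte Carlo average is a question of familiarity, not accuracy.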

The problem is not accuracy but familiarity. We all like to draw a circle around our approximation of reality and distrust anything outside that circle. After a while we forget that our approximations are even approximations.

This applies to professions as well as individuals. All is well until two professional cultures clash. Then one tribe will be horrified by an approximation another tribe takes for granted. These conflicts can be a great reminder of the difference between trying to understand reality and playing by the rules of a professional game.


“What do you call a group of statisticians?”

“A quarrel.”

Joke from _The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy_ http://amzn.com/0300169698

I agree about model simplification, though I think there is also another problem these days: models that are absurdly complicated. Imagine something like “Data Structures: A String-Theoretic Approach.” An article will have many pages of highly advanced mathematics or physics, followed by one or two test cases where the theory is applied to a trivial problem.

I agree with your concern about absurdly complicated models. It’s especially annoying when a model is complicated without being any more realistic. I’d like to see a paper say something like this: “We’re using a very simple model here. I’m sure you could think of dozens of ways to make it more complicated, and we could too. But the fact is nobody really understands what’s going on well enough to justify anything more complicated, so we’ll stick with something easy to understand.”

I like this post John!

I sometimes read grant applications. The approximations and assumptions used in power analyses are often appalling. Taxpayers and philanthropists are ultimately the victims of this, either because the proposed research is underpowered to the point of futility, or wastefully overpowered.

More generally, as a result of complacency and other pressures, I feel that experts and academics run the risk of becoming technicians. Science would suffer for it.
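The power-analysis point above lends itself to a small numerical sketch. This is my own illustration, not from the comment: using the standard normal-approximation formula for a two-sided, two-sample test, the required sample size scales as the inverse square of the assumed effect size, so an optimistic assumption quietly guts a study's power.

```python
import math

def n_per_arm(effect, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-sided two-sample z-test
    with alpha = 0.05 and power = 0.80, via n = 2 (z_a + z_b)^2 / delta^2."""
    delta = effect / sigma  # standardized effect size
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / delta ** 2)

# Halving the assumed effect size quadruples the required sample size.
print(n_per_arm(0.5, 1.0))   # → 63 per arm
print(n_per_arm(0.25, 1.0))  # → 251 per arm
```

If the true effect is half what the grant assumed, a study sized at 63 per arm is badly underpowered; the sensitivity of the calculation to its inputs is exactly what makes sloppy assumptions in power analyses so costly.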

Regarding willingness to compromise, see this famous Winston Churchill story.

This reminds me of an article written by Creation Moments. They cite a study that says we believe what we hear until there is evidence to reject this new knowledge. Here’s the article: http://www.creationmoments.com/radio/transcripts/do-you-know-what-you-believe

Whoa…. great post

This post reminded me of a quote I’d seen, which turns out to be from the statistician John Tukey:

“…Far better an approximate answer to the right question, than the exact answer to the wrong question, which can always be made precise…”

Ironically, that may be an approximate paraphrasing, since I saw it worded a couple ways online (I ended up pulling it from wikipedia).

Regarding Tukey’s quote:

If the approximation is poor, an approximate solution to the right question is no better than an exact answer to the wrong question! If we consider that poor approximations may go unnoticed, then the former may be much worse than the latter. Better still is a precise answer to the right question. Technology and methods have advanced tremendously in the 50 years since Tukey made this argument.

My favorite domain for this kind of thing is econometrics, which is largely a collection of highly specialized regression methods for when the standard assumptions (Gaussian errors, homogeneous error variance, etc.) are violated. Far too often, these finicky corrections are applied to data generated by poorly designed survey instruments, censored samples, incorrectly normalized dollar amounts, autocorrelated time series, and the like.

My statistics book says that we estimate p for a sample. Then it talks about the real value of p. Problem? If you had the real value of p, you wouldn’t need the estimate.