Andrew Gelman added a couple more types of error to the standard repertoire of **type I** and **type II** errors. He suggests using **type S** error to describe a result that gets a sign backward, reporting that *A* is bigger than *B* when in fact *B* is bigger than *A*. He also suggests using **type M** error for results that get the magnitude of an effect wrong, reporting an effect much larger or smaller than it actually is.
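A small simulation makes type S and type M errors concrete. This is just a sketch with made-up numbers: a true effect of 1.0 measured with standard error 3.5, and the usual two-sided 5% significance filter.

```python
import random
import statistics

# Hypothetical numbers: a small true effect buried in noise.
random.seed(0)
true_effect = 1.0   # the real difference between A and B
se = 3.5            # standard error of each estimate
cutoff = 1.96 * se  # two-sided 5% significance threshold

# Keep only the estimates that clear the significance bar.
significant = [est for est in
               (random.gauss(true_effect, se) for _ in range(100_000))
               if abs(est) > cutoff]

# Type S: among "significant" results, how often is the sign wrong?
type_s = sum(est < 0 for est in significant) / len(significant)

# Type M: by what factor do significant results exaggerate the magnitude?
type_m = statistics.mean(abs(est) for est in significant) / true_effect
```

With numbers like these, a sizable fraction of the significant results point the wrong way, and the ones that survive the filter overstate the effect several-fold.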

Maybe we could add to this list **type R** for reification error: treating an abstraction as if it were real, forgetting that a model is a model and stretching it beyond its limits.

**Related links**:

- Just an approximation
- Floating point error is the least of my worries
- The Pretense of Knowledge

Type R error is a statistical error caused by a bug in the R language interpreter.

I was thinking that Type R error would be blindly trusting the output of your tools without cross-checking with common sense.

In all seriousness, there have been several proposals like this:

1) Florence Nightingale David mentioned choosing the test based on the sample.

2) Mosteller mentioned “correctly rejecting the null hypothesis for the wrong reason”.

3) Kimball mentioned “the error committed by giving the right answer to the wrong problem”. (Isn’t this just the “modeling error” you wrote about a few weeks ago?)

4) Marascuilo and Levin proposed “the incorrect interpretation of a correctly rejected hypothesis”.

Read about these and other types of errors (some whimsical) at

http://en.wikipedia.org/wiki/False_positive#Type_I_error

Type R error can also be doing your statistical analysis by hand in an interactive shell: you end the day with a great answer but no idea what you did to get it, and no hope of redoing it when the data get updated.

But I also agree with Bret — especially with BUGS (in the MCMC sense).

To what type of “model” would “type R” refer? Maybe I simply misunderstand, but it seems there are two types of model — theoretical models and statistical models — that could be stretched beyond their limits, and it’s not clear to me that “type R” should refer to both.

A theoretical model could be stretched when it doesn’t provide the degrees of freedom (a better term escapes me) needed to explain a particular phenomenon. In social science, for instance, this may appear as a naive application of rational choice theory. A statistical model, on the other hand, could be stretched when it is used to represent a theoretical model for which it’s not particularly well-suited; e.g., using OLS to model a binary dependent variable.
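The OLS-on-a-binary-outcome example can be shown in a few lines. This is a sketch with made-up data: fit an ordinary least-squares line to 0/1 responses and the fitted line happily produces “probabilities” outside [0, 1].

```python
# Made-up data: a binary outcome against a single predictor.
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]  # binary dependent variable

# Closed-form simple OLS fit.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    return intercept + slope * x

# The fitted line dips below 0 at one end of the data and rises
# above 1 at the other -- impossible values for a probability.
```

A logistic model, by contrast, is constrained to (0, 1) by construction, which is one reason it suits a binary dependent variable better.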

“reporting that A is bigger than B when in fact B is bigger than A”

The crucial part of a type S error, I think, is the first part: that the sign is wrong.

This would often be referred to as confusing the map with the territory.