Robustness of equal weights

In Thinking, Fast and Slow, Daniel Kahneman comments on The robust beauty of improper linear models in decision making by Robyn Dawes. According to Dawes, or at least Kahneman’s summary of Dawes, simply averaging a few relevant predictors may work as well as, or better than, a proper regression model.

One can do just as well by selecting a set of scores that have some validity for predicting the outcome and adjusting the values to make them comparable (by using standard scores or ranks). A formula that combines these predictors with equal weights is likely to be just as accurate in predicting new cases as the multiple-regression model that was optimal in the original sample. More recent research went further: formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.
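The recipe in that quote is easy to try numerically. Here is a minimal sketch in Python under conditions chosen to favor regression: a truly linear system, the right variables, and coefficients that are all positive and of comparable size. All the names and numbers below are illustrative, not from Dawes’ paper.

    # A minimal sketch: compare out-of-sample predictive validity
    # (correlation with the outcome) of a least-squares fit on a small
    # sample vs. equal weights on standardized predictors.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, p = 30, 10_000, 5
    beta = rng.uniform(0.5, 1.5, p)   # true coefficients: all positive, similar size

    def simulate(n):
        X = rng.standard_normal((n, p))
        y = X @ beta + 2.0 * rng.standard_normal(n)
        return X, y

    X_train, y_train = simulate(n_train)
    X_test, y_test = simulate(n_test)

    # Least-squares fit to the small training sample
    beta_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

    # Equal weights: standardize each predictor using training statistics, then sum
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    equal_weight_score = ((X_test - mu) / sigma).sum(axis=1)

    def validity(score, y):
        return np.corrcoef(score, y)[0, 1]

    print("regression validity:  ", validity(X_test @ beta_hat, y_test))
    print("equal-weight validity:", validity(equal_weight_score, y_test))

With a training sample this small, the two out-of-sample correlations typically come out close, which is the phenomenon Kahneman describes.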

If the data really do come from an approximately linear system, and you’ve identified the correct variables, then linear regression is optimal in some sense. If a simple-minded approach works just as well or better, then at least one of these assumptions is wrong.

  1. Maybe the system isn’t approximately linear. In that case it would not be surprising that the best fit of an inappropriate model doesn’t work better than a crude fit.
  2. Maybe the linear regression model is missing important predictors or has some extraneous predictors that are adding noise.
  3. Maybe the system is linear and you’ve identified the right variables, but the application of your model is robust to errors in the coefficients.

Regarding the first point, it can be hard to detect nonlinearities when you have several regression variables. It is especially hard to find nonlinearities when you assume that they must not exist.
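As a small illustration of the first point, here is a sketch assuming a purely multiplicative (hence nonlinear) system. The misspecified linear fit still explains most of the variance, so nothing looks amiss unless you go looking for the interaction. Again, the numbers are illustrative.

    # A minimal sketch: fit an additive linear model to data generated by
    # a product of two predictors. The linear fit's R^2 is still high,
    # so the nonlinearity is easy to miss.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    x1, x2 = rng.uniform(1, 3, n), rng.uniform(1, 3, n)
    y = x1 * x2 + 0.5 * rng.standard_normal(n)

    X = np.column_stack([np.ones(n), x1, x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1 - resid.var() / y.var()
    print(f"R^2 of the (misspecified) linear fit: {r2:.3f}")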

Regarding the last point, depending on what you use your model for, an accurate fit may not matter much. If the regression model is being used as a classifier, for example, a crude fit might classify about as well as the optimal one.
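Here is a minimal sketch of that idea, again with illustrative numbers: classify by thresholding a linear score at zero, and compare the exact coefficients against crude equal weights.

    # A minimal sketch: thresholding a crude equal-weight score classifies
    # almost as well as thresholding the score built from the exact
    # coefficients.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 10_000, 5
    beta = rng.uniform(0.5, 1.5, p)   # exact coefficients, all positive

    X = rng.standard_normal((n, p))
    label = X @ beta + rng.standard_normal(n) > 0   # class to be predicted

    accuracy_exact = np.mean((X @ beta > 0) == label)
    accuracy_equal = np.mean((X.sum(axis=1) > 0) == label)
    print(f"exact: {accuracy_exact:.3f}  equal weights: {accuracy_equal:.3f}")

The two accuracies differ by only a little, even though the equal-weight score uses none of the fitted coefficients.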

The context of Dawes’ paper, and Kahneman’s commentary on it, is a discussion of clinical judgment versus simple formulas. Neither author is discouraging regression; rather, both are saying that a simple formula can easily outperform clinical judgment in some circumstances.

4 thoughts on “Robustness of equal weights”

  1. It sounds to me like they’re rediscovering a crude version of the Stein effect. They’re basically applying extreme shrinkage in the equal-weights model and comparing it against a regression for which there is not enough shrinkage.

  2. vl: Interesting way to look at it.

    Maybe there’s a psychological justification for shrinking toward the equal-weights model. The predictive factors didn’t come out of nowhere: a human thought to include them, and there was some intuitive processing that went on before the model was constructed.

  3. You have made some good points that apply to the modeling of human decision making specifically and predictive modeling in general. I read Howard Wainer’s paper “Estimating Coefficients in Linear Models: It Don’t Make No Nevermind” when I was in graduate school. Here is the link: http://dionysus.psych.wisc.edu/lit/Articles/WainerH1976a.pdf
    It was short, well-written, included some mathematics, and had a subtitle that you will never forget. This was my introduction to overfitting and the need for cross-validation.

    [JDC: The above link seems to be broken. Here’s an alternative link.]

  4. I think one of the key elements that is omitted here, and which is mentioned in Dawes’ and Kahneman’s writings, is that weighting the coefficients in a multiple regression based on a SAMPLE may produce weights that, when applied as a predictive model to the overall population, are not accurate because of error/randomness in the sample. In the end, as Dawes showed in various examples, equal weighting may be a more accurate predictor for this reason.
