The phrase "necessary but not sufficient" refers to something that you've got to have, but it isn't enough. For example, being divisible by 2 is a necessary but not sufficient condition for being divisible by 6. Odd numbers are not divisible by 6, so being even is necessary. But evenness is not sufficient because, for example, 8 is an even number not divisible by 6.
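If you like to sanity-check such statements, here's a quick Python snippet (just an illustration of the two halves of the claim, nothing more):

    # Evenness is necessary: every multiple of 6 is even.
    print(all(n % 2 == 0 for n in range(0, 120, 6)))   # True
    # But not sufficient: 8 is even yet not divisible by 6.
    print(8 % 2 == 0 and 8 % 6 != 0)                   # True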
Wrongly believing that nice theoretical properties are sufficient for a good model is known as a reification error; it's a sophomoric mistake. I don't know of a name for wrongly believing that theoretical properties are necessary, but it's a more subtle error.
Maybe it would be helpful to use a phrase like “beneficial but not sufficient” to indicate that some property increases our confidence in a model, though it may not be necessary.
Perhaps “promising but not a guarantee”?
I come down a little differently on this: if you think that a condition is necessary but it isn't, then you're just plain wrong. I like your idea of "beneficial but not sufficient"; I've even read terms like that in mathematical texts and papers. But if you come at it from the other end, already thinking that the condition is sufficient, then you're in error…
Dave: I agree with you in some contexts. If you want your model to have properties that reality doesn't have, then the more you succeed, the more you diverge from reality.
One of the things I was thinking about was bias and consistency. It's desirable for a statistical estimator to be unbiased and consistent, but neither is necessary in practice. A slightly biased, slightly inconsistent, but efficient estimator can be better than an unbiased, consistent, but inefficient estimator.
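As a rough sketch of that tradeoff, here's a small Python simulation (using NumPy) comparing the unbiased sample variance (divisor n-1) with the slightly biased maximum likelihood version (divisor n). For normal data the biased estimator typically comes out ahead in mean squared error.

    import numpy as np

    # Sketch: compare the unbiased sample variance (ddof=1, divisor n-1)
    # with the biased MLE (ddof=0, divisor n) on simulated normal data.
    # The MLE trades a small bias for lower variance, so its mean squared
    # error ends up smaller.
    rng = np.random.default_rng(0)
    true_var = 4.0
    n, trials = 10, 100_000
    samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))

    for name, ddof in [("unbiased (n-1)", 1), ("biased MLE (n)", 0)]:
        est = samples.var(axis=1, ddof=ddof)
        bias = est.mean() - true_var
        mse = np.mean((est - true_var) ** 2)
        print(f"{name:15s}  bias = {bias:+.3f}  MSE = {mse:.3f}")

Both estimators here are consistent, so this only illustrates the bias-versus-efficiency part of the point, not the consistency part.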
Very true!