P. J. Huber gives three desiderata for a statistical method in his book *Robust Statistics*:

- It should have a reasonably good (optimal or nearly optimal) efficiency at the assumed model.
- It should be robust in the sense that small deviations from the model assumptions should impair the performance only slightly.
- Somewhat larger deviations from the model should not cause a catastrophe.

Robust statistical methods are those that strive for these properties. These are not objective properties *per se*, though there are objective ways to try to quantify robustness in various contexts.
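The classic mean-versus-median contrast illustrates the trade-off behind these desiderata. Here is a minimal simulation sketch; the contamination model, sample size, and contamination rate below are my own illustrative choices, not Huber's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 2000

def mse(estimator, sample_fn):
    # Monte Carlo mean squared error; the true location is 0.
    ests = [estimator(sample_fn()) for _ in range(reps)]
    return np.mean(np.square(ests))

# The assumed model: standard normal data.
clean = lambda: rng.standard_normal(n)

# A small deviation: 5% of observations come from a heavy-tailed
# (Cauchy) distribution instead.
contaminated = lambda: np.where(rng.random(n) < 0.05,
                                rng.standard_cauchy(n),
                                rng.standard_normal(n))

# At the assumed model, the mean is more efficient than the median.
print(mse(np.mean, clean), mse(np.median, clean))

# Under contamination, the mean's error explodes while the
# median's barely changes.
print(mse(np.mean, contaminated), mse(np.median, contaminated))
```

At the normal model the mean has the smaller mean squared error (the median pays an efficiency price of roughly π/2), but with a few heavy-tailed observations mixed in, the mean degrades badly while the median is nearly unaffected.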

(In Nassim Taleb’s trichotomy of fragile, robust, and anti-fragile, robustness is in the middle: something is fragile if it is harmed by volatility, anti-fragile if it benefits from volatility, and robust if it is not unduly harmed by volatility but does not benefit from it either. In the context of statistical methods, volatility is departure from modeling assumptions. Hardly anything benefits from modeling assumptions being violated, but some methods are harmed more by such violations than others.)

Here are several blog posts I’ve written about robustness: