Accuracy versus perceived accuracy

Commercial weather forecasters need to be accurate, but they also need to be perceived as being accurate, and sometimes the latter trumps the former.

For instance, the for-profit weather forecasters rarely predict exactly a 50% chance of rain, which might seem wishy-washy and indecisive to customers. Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.

Forecasters also exaggerate small chances of rain, such as reporting 20% when they predict 5%.

People notice one type of mistake—the failure to predict rain—more than another kind, false alarms. If it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic, whereas an unexpectedly sunny day is taken as a serendipitous bonus.
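
A quick simulation (a hypothetical sketch, not data from the book) shows why these distortions cost accuracy: applying the rounding described above to a perfectly calibrated forecast worsens its Brier score, the standard mean-squared-error measure for probability forecasts.

```python
import random

random.seed(42)

def distort(p):
    """Apply the distortions described above: inflate small chances
    of rain and avoid a wishy-washy 50%."""
    if p <= 0.05:
        return 0.20                          # report 20% instead of 5%
    if p == 0.50:
        return random.choice([0.40, 0.60])   # flip a coin
    return p

# Simulate days whose true rain probability is known (made-up values).
true_probs = [random.choice([0.05, 0.3, 0.5, 0.8]) for _ in range(100_000)]
rained = [random.random() < p for p in true_probs]

def brier(forecasts):
    """Mean squared error between forecast and outcome; lower is better."""
    return sum((f - r) ** 2 for f, r in zip(forecasts, rained)) / len(rained)

honest = brier(true_probs)
distorted = brier([distort(p) for p in true_probs])
print(honest < distorted)  # True: the honest forecast scores strictly better
```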

From The Signal and the Noise. The book gets some of its data from Eric Floehr of ForecastWatch. Read my interview with Eric here.

8 thoughts on “Accuracy versus perceived accuracy”

  1. As someone who rides their bike to work, I would say they exaggerate the chance of rain much more than that – I do not worry about rain unless the chance is above 60%. Today there is a 40% chance of rain this morning (sunny), and a 60% chance for the afternoon. There is a 90% chance tonight – so it will probably rain then.

  2. According to the book, the national weather forecast is very well calibrated: on days with a reported x% chance of rain, it really does rain about x% of the time.

    The Weather Channel forecasts are also well calibrated, except for the deliberate distortions described above. Local weather reports, however, are poorly calibrated.

    To see how accurate forecasts are for your area, see Eric Floehr’s site.
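
    Calibration in this sense is straightforward to check given a forecast archive. A minimal sketch, with made-up records standing in for real data:

```python
from collections import defaultdict

# Hypothetical (forecast probability, it rained) pairs; in practice these
# would come from a forecast archive such as ForecastWatch.
records = [(0.1, False)] * 8 + [(0.1, True)] * 2 \
        + [(0.7, True)] * 8 + [(0.7, False)] * 2

def calibration(records):
    """For each forecast value, the fraction of those days it actually
    rained. A well-calibrated forecaster has observed frequency close
    to the forecast value."""
    hits, counts = defaultdict(int), defaultdict(int)
    for p, rained in records:
        counts[p] += 1
        hits[p] += rained
    return {p: hits[p] / counts[p] for p in counts}

print(calibration(records))  # {0.1: 0.2, 0.7: 0.8}
```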

  3. This is really interesting, John. I live very close to EDDS (Stuttgart) airfield, so the aviation weather forecast for the field (the TAF) for the next 24 hours is valid for me. Once you learn to read it, it’s much more accurate than the commercial forecast.

  4. John: I wonder how this discussion changes outside the US. For example, do German commercial weather forecasts exaggerate small probabilities of rain? Unfortunately Eric’s ForecastAdvisor site only has US data.

    I also wonder whether forecasters in other fields avoid 50% probability predictions. Maybe it’s not an issue: weather forecasts are rounded to multiples of 10% for public consumption. Without such rounding, it’s unlikely anyone would compute exactly a 50% probability.

  5. I just checked my go-to app, which is WeatherPro whose data is supplied by MeteoGroup. They are a Dutch company with, according to Wikipedia, forecasting stations in the large European countries. I suspect they are also aggregating “public” data: aviation weather forecasts and the forecast published by the Deutscher Wetterdienst, the national forecaster.

    Interestingly, the app’s main screen shows the daily probability in 5% increments, and the range for the next few days is 5% to 65% with a good distribution of percentages in between. I don’t think they’re rounding up small percentages: here are the precipitation percentages for the next seven days: 15%, 5%, 10%, 65%, 20%, 40%, 35%. This, in my opinion, tends to be a pretty fair assessment.

    They also supply probabilities in 3-hour blocks (in 1% increments). I’m not sure of the usefulness of these high-resolution blocks; the data tends to change quite a bit right up until the block starts.

    I understand that our area is difficult to forecast anyway – Stuttgart lies in a natural bowl (the “Kessel”) with its own peculiar microclimate, and the weather systems are influenced by the Alps to the south.

    I just checked the DWD’s public forecast product, and it’s actually quite verbose; they don’t make any prediction for precipitation probability explicitly, just qualitative expressions.

  6. Sounds like a really good book. It hasn’t been published yet, right? How did you manage to get an advance copy?

  7. Rounding up small probabilities is a good way to handle the possibility that the model itself is incorrect; it makes the forecast more robust to model error.
    Because model error by definition cannot be modeled, if the model predicts some event to have extremely low probability, you should expect the true probability to be dominated by the modeling error, and hence to be higher than predicted.
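
    This argument can be made concrete with a simple mixture (the numbers here are assumptions for illustration): if the model is wrong with some probability eps, and when it is wrong the event occurs at some base rate, then the probability worth reporting never falls below eps times that base rate, no matter how confident the model is.

```python
def effective_prob(p_model, eps=0.05, base_rate=0.3):
    """Hypothetical sketch: with probability eps the model is wrong,
    and a wrong model tells us nothing, so the event occurs at its
    base rate. The honest probability is the mixture of the two."""
    return (1 - eps) * p_model + eps * base_rate

print(effective_prob(0.001))  # roughly 0.016, far above the model's 0.1%
```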

Comments are closed.