What good is an old weather forecast?

Why would anyone care about what the weather was predicted to be once you know what the weather actually was? Because people make decisions based in part on weather predictions, not just weather. Eric Floehr of ForecastWatch told me that people are starting to realize this and are increasingly interested in his historical prediction data.

This morning I thought about what Eric said when I saw a little snow. Ice was predicted for last Tuesday, and schools all over the Houston area closed. As it turned out, there was only a tiny amount of ice and the streets were clear. This morning there actually is snow and ice in the area, though not much, and the schools are all open. (There’s snow out in Cypress where I live, but I don’t think there is in Houston proper.)

[Photo: aftermath of last Tuesday’s storm]


9 thoughts on “What good is an old weather forecast?”

  1. I think this is a great point: for many things, what matters is not so much what the reality is but what people think the reality is (or will be).

    I heard an interesting definition of relationship awkwardness once (can’t remember where) that went something like this: ‘A relationship is awkward when one person thinks the other person views the relationship on a different level than they do.’ So for example, if you view someone as a close friend but think they see you as just a casual friend, you’ll most likely find the relationship somewhat awkward, even if they actually view you as a very close friend as well.

    I think monetary value is another area that illustrates this: a gemstone might have little intrinsic value, but the mere fact that someone thinks it is valuable makes it so.

  2. I have thought about building a web crawler that would generate a “2D” (actually N-D) dataset. One axis would be the actual weather for a given day; the other would be the prediction from each of the previous N days (typically up to a maximum of 10). That would allow some interesting calculations, like which kinds of weather situations can be predicted more stably, or what time of year shows the largest variation in predictions as the predicted day gets closer. (A toy sketch follows below.)
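
    A toy sketch of such a dataset in Python, with random numbers standing in for what the crawler would actually collect (all column names are made up):

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)

    # One row per target date, one column per forecast lead time (1-10
    # days out), plus the observed value. The random numbers below are
    # placeholders for crawled data.
    days = pd.date_range("2014-01-01", periods=365, freq="D")
    observed = 15 + 10 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 3, 365)
    data = {"observed": observed}
    for lead in range(1, 11):
        # In this toy model, forecast error simply grows with lead time.
        data[f"lead_{lead}"] = observed + rng.normal(0, 0.5 * lead, 365)
    df = pd.DataFrame(data, index=days)

    # E.g., how prediction error shrinks as the predicted day gets closer:
    lead_cols = [f"lead_{k}" for k in range(1, 11)]
    print(df[lead_cols].sub(df["observed"], axis=0).abs().mean())
    ```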

  3. “Operational” centers routinely calculate statistics of forecast quality. They subtract the forecast from the analysis fields and calculate stats.

    FYI, operational centers are mainly NCEP (run by NOAA) and AFWA (USAF). Analyses are gridded fields that have assimilated all available data as of a cut-off time. Cut-off times vary by assimilation cycle and by when the forecast is due. Real-time data dropouts are not rare. Post-hoc analyses are performed after all the data arrive. Then the difference statistics are calculated. Sometimes grids at different times are compared to see if the forecast was mostly accurate, with a slight time offset.

    BTW, the post-hoc analyses are then collected to compute climate statistics.

    Anyway, these statistics are routinely calculated and studied by operational numerical weather prediction centers. The stats are published, so you can probably find them on the web. If you can’t, just send an email request to NCEP or AFWA or whoever supplies your forecasts. (No, AccuWeather and The Weather Channel are not real operational WX centers. They are entertainers.)
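
    A minimal sketch of the kind of difference statistics this comment describes, assuming forecast and analysis are NumPy arrays on the same grid (the function name, shapes, and numbers are illustrative, not from any operational center):

    ```python
    import numpy as np

    def difference_stats(forecast, analysis):
        """Bias, MAE, and RMSE of a gridded forecast against the
        verifying analysis field (arrays on the same grid)."""
        diff = np.asarray(forecast) - np.asarray(analysis)  # forecast minus analysis
        return {"bias": diff.mean(),
                "mae": np.abs(diff).mean(),
                "rmse": np.sqrt((diff ** 2).mean())}

    # Random stand-in fields, e.g. temperature (K) on a 1-degree grid:
    rng = np.random.default_rng(2)
    analysis = rng.normal(280, 10, (181, 360))
    forecast = analysis + rng.normal(0.5, 2.0, analysis.shape)
    print(difference_stats(forecast, analysis))
    ```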

  4. I’m not sure I agree that the general public is getting more interested in this, but forecast validation has certainly been done since prehistoric times.

    There are a bunch of different approaches:

    http://www.cawcr.gov.au/projects/verification/index.php

    A subtlety that is sometimes missed is that environmental forecasts are generally probabilistic, so when evaluating them we should check the whole distribution, not just the median. Decisions need reliable forecasts, not just accurate ones.

    CRPS (the continuous ranked probability score) is a metric that measures both, in a sense:

    http://www.eumetcal.org/resources/ukmeteocal/verification/www/english/msg/ver_prob_forec/uos3b/uos3b_ko1.htm
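
    For concreteness, here is a small sketch of CRPS computed from ensemble samples via the standard identity CRPS = E|X - y| - 0.5 E|X - X'| (the ensembles below are made-up illustrations):

    ```python
    import numpy as np

    def crps_ensemble(members, obs):
        """CRPS of an ensemble forecast for one scalar observation.
        Lower is better; for a single-member 'ensemble' it reduces
        to plain absolute error."""
        x = np.asarray(members, dtype=float)
        accuracy = np.mean(np.abs(x - obs))                      # distance to the observation
        spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))  # internal spread of the ensemble
        return accuracy - spread

    # A sharp, well-centered ensemble beats a vague one:
    print(crps_ensemble([2.4, 2.5, 2.6], 2.5))  # ~0.02
    print(crps_ensemble([0.0, 2.5, 5.0], 2.5))  # ~0.56
    ```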

  5. I wouldn’t say “the general public” is interested in forecast validation, but some businesses certainly are, enough to keep ForecastWatch afloat.

    What I had in mind in this post was not quantifying forecast accuracy but analyzing customer behavior in relation to weather predictions. For example, how much does a forecast of rain affect truck sales? What do grocery shoppers buy more of when snow is in the forecast? Etc. (A toy sketch of this kind of analysis follows.)
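
    Here is such a sketch with synthetic stand-in data; in practice you would join real sales records with archived forecasts by date (all names and numbers here are made up):

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Synthetic data: daily truck sales and whether rain was forecast.
    days = pd.date_range("2014-01-01", periods=365, freq="D")
    rain_forecast = rng.random(365) < 0.3                   # rain in the forecast?
    units_sold = rng.poisson(50, 365) - 5 * rain_forecast   # toy forecast effect

    df = pd.DataFrame({"date": days,
                       "rain_forecast": rain_forecast,
                       "units_sold": units_sold})

    # Average sales on forecast-rain days vs. other days:
    print(df.groupby("rain_forecast")["units_sold"].mean())
    ```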

  6. Comparing forecasts with results is an excellent tool for developing a feel for the reliability of forecasts. Seeing which situations are forecast accurately (typically, weather associated with fast-moving fronts) and which are not (weather associated with stationary fronts) makes it possible to evaluate the next forecast.

    I spend a lot of time with my flying students doing this: what made this forecast wrong?
