Model Mania: The Limitations of Weather Models

Hello everyone!

This is the last post in our Model Mania series, which aims to explain the basics of weather modeling for those without a rigorous background in atmospheric science. It’s my hope that each of these articles is informative even if you don’t read the whole series, but if something in this post doesn’t make sense, chances are I’ve explained it earlier in the series, so I do suggest taking a look at the previous posts if you have the time. After explaining the basics of weather models, the different types of weather models, and digging a bit into the differences between the ECMWF and GFS and why both of them are important, I’d like to wrap this series up by discussing the limitations of weather models and why humans still have a role in the weather forecast process, especially in high-consequence situations.

Errors in weather model forecasts arise because we don’t know what every molecule of air in the atmosphere is doing, and even if we did, we have an imperfect understanding of how these molecules interact with each other at various scales, and even if we did, we don’t have computers powerful enough to calculate what all these molecules might be up to some time from now. While our models are really good considering how immensely complex the task of weather prediction is, they’ll never be perfect.

Most of the time, you’ll be experiencing quiet weather and you’ll never notice the errors that weather models make. If a model predicts a high temperature of 75 degrees, and it ends up being 73, the vast majority of us will never know. That’s why models are great for most everyday weather predictions. Need to know whether to grab a coat on your way out the door? Will it be warm enough to go swimming? Is it likely to rain tomorrow? These are the kinds of questions models are great at answering, which is why we’ve given you dozens of options to look at weather model data at weather.us and weathermodels.com.

Here’s a look at how ECMWF forecast high temperatures (bold first number) compared to observed high temperatures (second number) over a portion of the Midwest during the first 10 days of December 2019. If your eyes glaze over trying to compare each value, the cells are color-coded by the magnitude and direction of the model error. Warm colors (red/orange) represent cases where the model forecast was too warm, while cool colors (green/blue/purple) represent cases where the model forecast was too cold. As you can see, while there are a few big “misses” of >10F in the first couple days (attributable to errors in the model’s initial conditions and perhaps its coarse resolution failing to resolve small-scale clouds, lakes, or other features that contribute to a particular location’s high temperature), forecast skill is generally pretty high for the first 5-6 days. After day 6, the “hits” (for the purposes of high temperature forecasts, if you’re within 3F you can usually claim victory, unless your 33 was actually a 31 or vice versa) are far outnumbered by the “misses”, all of which are larger than 5F by day 10. That being said, for an area experiencing generally quiet weather (no major storm systems), the model does a really good job predicting high temperatures (as well as low temps/precip/winds/etc.) with nonzero but relatively minor errors.
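If you want to do this kind of hit/miss bookkeeping yourself, the rule of thumb above can be sketched in a few lines of Python. This is just an illustration of the article's informal scoring, not any real verification tool, and the temperature pairs below are made up:

```python
FREEZING_F = 32

def classify(forecast_f, observed_f, hit_threshold=3):
    """Label a high-temperature forecast a 'hit' or 'miss'.

    Within 3F usually counts as a hit, unless the error crosses the
    freezing line (e.g. a forecast 33F verifying at 31F), which can
    mean rain vs. snow and so still counts as a miss.
    """
    error = forecast_f - observed_f  # positive = model too warm
    crosses_freezing = (forecast_f > FREEZING_F) != (observed_f > FREEZING_F)
    if abs(error) <= hit_threshold and not crosses_freezing:
        return "hit", error
    return "miss", error

# Hypothetical (forecast, observed) pairs in degrees F
pairs = [(75, 73), (33, 31), (60, 72)]
for fcst, obs in pairs:
    label, err = classify(fcst, obs)
    print(f"forecast {fcst}F, observed {obs}F -> {label} (error {err:+d}F)")
```

The first pair is a hit (2F error in mild weather), the second is a miss despite the same 2F error because it crosses the freezing line, and the third is a plain big miss.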

However, when high-impact weather threatens, model errors start to become a lot more noticeable. A two degree error in temperatures won’t matter on a summer day, but if the model predicts 33F with heavy rain and you end up with 31F and heavy snow, you’ll be caught off guard in a potentially dangerous situation. Similarly, if model guidance is predicting that the core of a major hurricane will hit a hundred miles up the coast from your location, you may be inclined to forgo preparations thinking that the worst of the storm will miss your town. Unfortunately, at lead times of just 3-5 days, 100 mile track errors are very common for tropical cyclones, though the models won’t tell you that. Model mistakes will always happen, and when those mistakes become consequential, it’s important to turn to a human forecaster with the relevant expertise to help you make an informed decision about how to plan for the weather headed your way.

This example of two different model forecasts (GFS and ECMWF) for Hurricane Dorian shows how small (<100 mile) discrepancies in the forecast track of a hurricane can be hugely consequential for those in the path. The GFS forecast would have brought extremely dangerous conditions to parts of Florida’s West Coast, while the ECMWF’s forecast would’ve brought gusty showers. What actually happened? Both models were about 150-200 miles too far west, and the storm stayed off Florida’s East Coast. Small forecast errors count when you’re dealing with destructive storms like hurricanes!

It’s also important to remember the types of questions no model, no matter how good, can answer. Where exactly will the hurricane hit? Will a thunderstorm pop up over my town or the next town over? Where and when will a tornado form? These questions are all far beyond the scope of any model currently available, and that will likely remain the case for many years to come. This is true even though high-resolution models like the HRRR try to predict exactly where thunderstorms or snow squalls might pop up. That creates an illusion of forecast skill the models don’t actually possess, which makes their output hard to interpret productively, so remember to use caution with model output, especially when small-scale features like thunderstorms are involved!

While there’s so much about weather models I haven’t had time to cover, this brings the Model Mania series to a close. It is my hope that this series of posts has been interesting and informative, even though it skipped over many important topics. Now that you know a bit more about the different types of weather models out there, how they work, and why they’re flawed, you’re better equipped to make good use of the vast array of weather model data available at weather.us and weathermodels.com. Remember not to take model output too literally, especially when high-impact weather threatens. If you need to make high-consequence decisions based on weather information, make sure you turn to a qualified meteorologist who has the expertise to produce the most accurate forecast, even when all the models may be saying different things.

-Jack
