I understand where you're coming from here. Before they even use a model, they test it through a technique called hindcasting. By their very nature, you're going to see some variation. These are models, not real-world observations. It's important that they accurately represent the real world, but the variation comes from the fact that certain unique conditions or randomized events, even when the model accounts for them, may occur at a different time, or not occur at all. To put it another way, the equation may be different (20+20+20+20=80 compared to 10+10+29+30=79 or 11+40+10+20=81; a gross oversimplification, but you get the idea), but the end result should be a fairly close match. The models do OK from 1990 on. Does that mean you win? Or did time exist before 1990? I'm only interested because there are some memories I'd like to keep and others I'd like to forget... so I guess I'm not rooting either way, as it would be a wash.
For example, if you start a sophisticated model run in 1900, it might accurately include an event like the Dust Bowl. However, rather than having it occur in the 1930s, it may occur sometime in the '50s or '60s. Consider also things like 100-year storms. We call them that because, statistically, you'll see one every 100 years on average. Some runs may not have such a storm at all. Some might have two. Most should produce one. Same with volcanoes. Throw in things like this, plus El Niño and La Niña events, and it's obvious why individual runs will differ from observations. They'll generally make several runs, take the mean, and compare that with actual measurements. It still won't be perfect, but hey, you do what you can.
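Here's a toy sketch of that ensemble idea (my own illustration, not a real climate model): each run follows the same steady warming trend but randomly timed cooling events (stand-ins for eruptions or 100-year storms) hit different runs in different years, so individual runs disagree while the ensemble mean stays close to the underlying trend.

```python
import random
from statistics import mean

random.seed(42)

def one_run(years=100, trend=0.02, event_prob=0.01, event_cooling=0.5):
    """Return yearly temperature anomalies for one toy model run."""
    temps, anomaly = [], 0.0
    for _ in range(years):
        anomaly += trend                  # steady forced warming each year
        if random.random() < event_prob:  # ~1 random event per 100 years
            anomaly -= event_cooling      # cooling shock at a random time
        temps.append(anomaly)
    return temps

runs = [one_run() for _ in range(50)]     # an ensemble of 50 runs
finals = [r[-1] for r in runs]
print(f"spread of individual runs: {min(finals):.2f} to {max(finals):.2f}")
print(f"ensemble mean of final anomaly: {mean(finals):.2f}")
```

No single run "gets the timing right," but averaging over the ensemble washes out the randomness, which is why the mean of many runs is what gets compared to observations.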
An example of the models' predictive power comes from Hansen's relatively primitive models. They obviously can't predict when a volcano will erupt, but his runs included a hypothetical large eruption in the early 1990s, and when Pinatubo actually erupted in 1991, the short-term cooling the model projected for 1992 matched observations well. Models have come a long way since then and still have a long way to go, but they are useful.