In theoretical physics, your model must make some prediction about the real world that can then be validated by an experiment. It must pass the test of falsifiability. This test must be replicable; someone else following your procedure and your data analysis methods must be able to get the same result.
Ideally, this test gets run multiple times, and with slightly different permutations, to see if the same result can be approached in different ways.
Similar standards apply to chemistry, and that's the experimental model in a nutshell. In the early years of both physics and chemistry (and largely what's still going on in geology), you have to build a model that isolates enough factors to have predictive value.
I am not sure that GCMs (general circulation models) are at that level of rigor yet; I build models for a living, albeit simple ones meant for entertainment. I have a healthy suspicion of corner cases in models, and I know quite well that the biases of the modeler, and of the person using the model, lead to blind spots.
Has anyone made a prediction with a climate model and said, "If this prediction proves false, this particular model needs to be abandoned"? Note the term there: abandoned, not refined. You can make a critique that's hard (at least for me) to refute by saying that climate models, as currently implemented, are too heavy on tweaked input parameters and too light on predictions. The usual thing that happens when a climate model doesn't forward-verify is that someone goes back into it and tweaks parameters. I have seen references to a model for tropospheric temperature predictions that was deprecated because its results didn't match the weather balloon and satellite data in IPCC AR4. However, I don't know whether that deprecation happened because they decided they didn't have enough variables modeled, or because they felt the variables they were modeling were handled incorrectly.
That is, is it something they can fix, or is it a disproof?
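To make that distinction concrete, here is a minimal sketch (in Python, with entirely made-up numbers and a hypothetical function name) of what a pre-registered forward-verification check could look like. The point is that the forecast and the failure tolerance get written down before the observations arrive, so a miss can't be absorbed by re-tuning after the fact. Whether a failed check means "refine" or "abandon" is still a human call the code can't make.

# Hedged sketch: the forecast and the tolerance are fixed up front,
# then compared against observations that arrive later.
def forward_verify(predicted, observed, tolerance):
    """Return (passed, residuals); passed is True only if every
    prediction lands within the pre-stated tolerance."""
    residuals = [abs(p - o) for p, o in zip(predicted, observed)]
    return all(r <= tolerance for r in residuals), residuals

# Hypothetical decadal temperature-anomaly forecast (deg C), registered in year 0...
predicted = [0.15, 0.30, 0.45]
# ...and the observations that actually came in over the following decades.
observed = [0.12, 0.21, 0.28]

passed, residuals = forward_verify(predicted, observed, tolerance=0.10)
print("residuals:", residuals)
print("PASS" if passed else "FAIL - refine or abandon? The code can't tell you.")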
Of course, from the physics-reductionist perspective, it's not enough to winnow out the bad models; you also have to have a model that accurately predicts something. In physics, the number of variables is smaller and it's easier to isolate things. When you slam beams of protons into lead nuclei and look at the energy tracks of the debris, they either behave the way you expected or they don't. (As Asimov put it, advances in science aren't made by "Eureka!" moments; they're made by "Huh, that's weird. Can we get the same result a second time?")
So one argument from physics reductionists is that we should make a prediction based on the model, wait long enough to see whether the prediction is accurate, determine what happened to make it inaccurate, and refine the model. By which point, given that we're talking about models whose accuracy only shows up statistically over multi-decadal sweeps, we should have a good climate model somewhere around 2200 AD. :)
Or is this "Physics Arrogance" writ large?
Related to this is whether or not we are currently at an optimum point for climate. There is a natural tendency in the human brain to fill in the edge of the map (or the data graph) with shibboleths and monsters. Our current measurements are baselined from 1960, an unusually cool period in the 20th century, and our earlier baselines come from the tail end of the Maunder Minimum and the end of the Little Ice Age, roughly 1600 to 1880.
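To show why the choice of baseline matters, here is a toy Python example with invented temperatures (not real station data): the same series produces larger reported anomalies when the reference window happens to fall on a cool stretch.

# Toy illustration: anomalies depend on which years you average to get the reference.
temps = {1940: 14.1, 1950: 14.2, 1960: 13.9, 1970: 14.0,
         1980: 14.2, 1990: 14.4, 2000: 14.5}

def anomalies(series, base_start, base_end):
    # Anomaly = temperature minus the mean over the chosen baseline years.
    base = [t for yr, t in series.items() if base_start <= yr <= base_end]
    ref = sum(base) / len(base)
    return {yr: round(t - ref, 2) for yr, t in series.items()}

print("cool 1960-1970 baseline:", anomalies(temps, 1960, 1970))
print("wider 1940-1980 baseline:", anomalies(temps, 1940, 1980))

With these made-up numbers, the 2000 value comes out about a tenth of a degree "warmer" under the cool baseline than under the wider one, even though the underlying series never changed.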
Both of these are interesting questions, and by raising them I hope to start a discussion. I'm not raising them because I think they refute global warming, or because I think it's a grand conspiracy; I raise them because I think they need discussion.
Here's an interesting take:
http://initforthegold.blogspot.com/2008/08/climate-models-is-there-better-way.html
A lower-effort approach at showing how to reproduce the research number crunching:
http://iowahawk.typepad.com/iowahawk/2009/12/fables-of-the-reconstruction.html