For a while now, I have criticized the practice, in both climate science and economics, of using computer models to inflate our apparent certainty about natural phenomena. We take shaky assumptions and guesstimates of various constants and natural variables, plug them into computer models, and get back projections with triple-decimal precision. We then treat the output with a reverence that does not match the quality of the inputs.
I have had trouble finding precisely the right words to explain this sort of certainty laundering. But this week, courtesy of Roger Pielke, Sr., I was presented with an excellent example from climate science. This is an excerpt from a recent study trying to determine whether a high climate sensitivity to CO2 can be reconciled with the lack of ocean warming over the last ten years (bold added).
“Observations of the sea water temperature show that the upper ocean has not warmed since 2003. This is remarkable, as it is expected that the ocean would store the lion’s share of the extra heat retained by the Earth due to the increased concentrations of greenhouse gases. The observation that the upper 700 meters of the world ocean have not warmed for the last eight years gives rise to two fundamental questions:
- What is the probability that the upper ocean does not warm for eight years as greenhouse gas concentrations continue to rise?
- As the heat has not been stored in the upper ocean over the last eight years, where did it go instead?
These questions cannot be answered using observations alone, **as the available time series are too short and the data not accurate enough**. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. **When compared to the available observations, the model describes the ocean temperature rise and variability well.**”
Pielke goes on to deconstruct the study, but just compare the two bolded statements. First, the authors admit there is not sufficiently extensive and accurate observational data to test their hypothesis. But then they create a model, validate that model against this same observational data, and use the model to draw all kinds of conclusions about the problem being studied.
This is the clearest, simplest example of certainty laundering I have ever seen. If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?
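If it helps to see the circularity, here is a deliberately silly sketch in code (all of the numbers are invented for illustration): any model whose parameters are tuned on a short data series will, of course, fit that same series reasonably well when "validated" against it.

```python
# A short, noisy series of "observations" -- far too short, by the
# authors' own admission, to test any hypothesis (values invented).
obs = [0.12, -0.05, 0.08, -0.11, 0.03, -0.02, 0.07, -0.04]

# Our "model": predict every period as the mean of the observations.
# Its one parameter is tuned on the very data we will validate against.
model_prediction = sum(obs) / len(obs)

# "Validating" against the same series guarantees a respectable fit,
# which tells us nothing about how the real system operates.
errors = [abs(x - model_prediction) for x in obs]
print(f"mean absolute error: {sum(errors) / len(errors):.2f}")
```

The fit looks fine by construction; no amount of agreement here adds information the eight data points did not already contain.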
A model is no different from a hypothesis embodied in code. If I have a hypothesis that the average width of neckties in this year's Armani collection drives stock market prices, creating a computer program that predicts stock prices falling as ties get thinner does nothing to increase my certainty in that hypothesis (though it may be enough to get me media attention). The model is merely a software implementation of my original hypothesis. In fact, the model likely has to embody even more unproven assumptions than the hypothesis itself: in addition to assuming a causal relationship, it must be programmed with specific values for that relationship.
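To make the point concrete, here is a minimal sketch of my necktie "model" (the sensitivity constant is invented, as it must be, since no such relationship has been established): the prediction is nothing more than the assumption I typed in.

```python
# Toy illustration: a "model" that is nothing but a coded hypothesis.
# Both the causal claim AND the specific coefficient are assumptions
# typed in by the modeler; the output cannot be more certain than they are.

ASSUMED_SENSITIVITY = -120.0  # invented: index points per cm of tie-width change

def predict_market_move(tie_width_change_cm: float) -> float:
    """Predict a stock index move from a change in average tie width.

    This function does not discover the relationship; it merely replays
    the hypothesis (and the made-up constant) it was programmed with.
    """
    return ASSUMED_SENSITIVITY * tie_width_change_cm

# Ties get 0.5 cm thinner -> the "model" dutifully predicts a rally.
print(predict_market_move(-0.5))  # 60.0 -- exactly what the assumption dictates
```

Running this adds precisely zero evidence for the necktie theory; it just restates it with more decimal places.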
This is not just a climate problem. The White House studies on the effects of the stimulus followed exactly the same pattern. The administration had a hypothesis that government deficit spending would increase total economic activity. After they spent the money, how did they claim success? Did they measure changes in economic activity through observational data? No. They had a model that was programmed with the hypothesis that government spending increases job creation, ran the model, and pulled out a number that said, surprise, the stimulus had created millions of jobs (despite falling employment). And the press reported it as if it were a real number.
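The same trick in miniature (every number below is invented for illustration): hard-code a jobs-per-dollar multiplier, and the model will report jobs "created" no matter what actually happened to employment.

```python
# Toy illustration (all figures invented): a stimulus "evaluation" model
# whose conclusion is baked into its inputs. The assumed multiplier
# guarantees a positive jobs number regardless of observed employment.

ASSUMED_JOBS_PER_BILLION = 10_000  # the hypothesis, entered as a constant

def jobs_created(spending_billions: float) -> float:
    # Output is spending times the assumed multiplier -- the observed
    # change in employment never enters the calculation at all.
    return spending_billions * ASSUMED_JOBS_PER_BILLION

observed_employment_change = -2_500_000  # employment actually fell (invented figure)

print(jobs_created(800))  # prints 8000000 -- "millions of jobs", by construction
```

Note that `observed_employment_change` is defined but never used: the model's answer is determined entirely by the hypothesis it was handed, which is the whole problem.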