Using Computer Models To Launder Certainty

For a while now, I have criticized the practice, in both climate science and economics, of using computer models to inflate our apparent certainty about natural phenomena. We take shaky assumptions and guesstimates of constants and natural variables, plug them into computer models, and get back projections with triple-decimal precision. We then treat the output with a reverence that the quality of the inputs does not warrant.

I have had trouble explaining this sort of knowledge laundering and finding precisely the right words for it. But this week climate science furnished an excellent example, courtesy of Roger Pielke, Sr. Below is an excerpt from a recent study trying to determine whether a high climate sensitivity to CO2 can be reconciled with the lack of ocean warming over the last ten years (bold added):

“Observations of the sea water temperature show that the upper ocean has not warmed since 2003. This is remarkable, as it is expected the ocean would store the lion’s share of the extra heat retained by the Earth due to the increased concentrations of greenhouse gases. The observation that the upper 700 meters of the world ocean have not warmed for the last eight years gives rise to two fundamental questions:

  1. What is the probability that the upper ocean does not warm for eight years as greenhouse gas concentrations continue to rise?
  2. As the heat has not been stored in the upper ocean over the last eight years, where did it go instead?

These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

Pielke goes on to deconstruct the study, but just compare the two bolded statements. First, the authors say there is not sufficiently extensive and accurate observational data to test the hypothesis. BUT, they then create a model, validate that model against this same observational data, and use it to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen.  If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?

A model is no different from a hypothesis embodied in code. If I have a hypothesis that the average width of neckties in this year's Armani collection drives stock market prices, writing a computer program that predicts stock prices falling as ties get thinner does nothing to increase my confidence in the hypothesis (though it may be enough to get me media attention). The model is merely a software implementation of my original hypothesis. In fact, the model likely has to embody even more unproven assumptions than the hypothesis itself: in addition to assuming a causal relationship, it has to be programmed with specific values for that relationship.
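The necktie example can be made literal in a few lines. Everything below is invented for illustration — the function, the baseline, and especially the `TIE_SENSITIVITY` coefficient, which is exactly the kind of made-up constant a bare hypothesis never needed but a model must be fed:

```python
# Hypothesis: thinner Armani ties -> lower stock prices.
# The "model" is just that sentence in code, plus a guessed coefficient.

TIE_SENSITIVITY = 120.0   # invented: index points per cm of tie width
BASELINE_WIDTH = 8.0      # invented: a "normal" tie width in cm

def predict_market_index(tie_width_cm: float, baseline: float = 3000.0) -> float:
    """Predict a stock index from this year's tie width.

    Running this produces precise-looking numbers, but it adds no
    evidence for the hypothesis; it only restates it, now decorated
    with a spuriously exact coefficient.
    """
    return baseline + TIE_SENSITIVITY * (tie_width_cm - BASELINE_WIDTH)

print(predict_market_index(7.0))  # thinner tie -> lower "prediction"
print(predict_market_index(9.0))  # wider tie -> higher "prediction"
```

The output has three decimal places of precision and zero decimal places of evidence: every digit traces back to the two constants I typed in at the top.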

This is not just a climate problem.  The White House studies on the effects of the stimulus were absolutely identical.  They had a hypothesis that government deficit spending would increase total economic activity.  After they spent the money, how did they claim success?  Did they measure changes to economic activity through observational data?  No, they had a model that was programmed with the hypothesis that government spending increased job creation, ran the model, and pulled a number out that said, surprise, the stimulus created millions of jobs (despite falling employment).  And the press reported it like it was a real number.
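The circularity of that stimulus "measurement" can be sketched the same way. The multiplier and spending figure below are invented placeholders; the point is only that the conclusion is baked into the input, so observed employment never enters the calculation:

```python
# The model's conclusion in disguise: a programmed-in assumption that
# spending creates jobs, not a measurement of anything.
JOBS_PER_BILLION = 10_000  # invented multiplier, not an observation

def jobs_created(spending_billions: float) -> float:
    """'Measure' stimulus jobs with a model that assumes spending creates jobs."""
    return spending_billions * JOBS_PER_BILLION

# Whatever employment actually did, the model reports success:
print(jobs_created(800))  # prints 8000000
```

Note what is missing: no employment data, no counterfactual, no way for the model to report anything but success. The "result" is just the assumption multiplied by the budget.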

  • Shane

    Calculate with a calculator, measure with a tape measure and cut with a hacksaw. :)

  • Steve

    Bend to shape, pound to fit, paint to match.

  • a leap at the wheel

    This is exactly correct. This is a concept you can demonstrate to a middle schooler with a box of crayons and a few sheets of graph paper, but it gets through an entire peer review process without a hiccup.

  • William Newman

    I'd probably say something like "to kite certainty" or "to launder preordained conclusions," but yeah.

  • Don

    What I wonder, as a CS grad, is where the hell are all the CS grads on this? Perhaps other CS departments didn't cover modeling, but I know it was required for me and the first thing you realize is that a model is never MORE accurate than the data and observations it's made from (and usually far less accurate).

    I think this is the magic-box syndrome. You used to see it in 60's and 70's TV shows all the time (like Star Trek): somebody would type a question into a computer, it would flash some lights and magically spit back the answer. Nobody ever questioned what had to go on between the INPUT and the OUTPUT, and nobody seems to do so here either.

    Amazing!

  • steve

    I am sure some people in these fields (climatology, econometrics) do question the validity of these methods. However, I expect they are defunded and weeded out over time.

    I think this is little more than a modern-day example of how the Catholic Church went from opposing the worship of the Roman State to supporting indulgences and the divine right of kings over time. Let's hope the reformation won't be as bloody this time around.

  • Dr. T

    Did you see this story: http://www.ibtimes.com/articles/189386/20110729/global-warming-roy-spencer-nasa-terra-debunked-al-gore-climate-change.htm

    The scientist, Roy Spencer, is the US team leader for the Advanced Microwave Scanning Radiometer. He looked at NASA's upper atmosphere temperature data for the past eleven years and concludes that heat loss to space is greater than predicted (and greater than what is used in IPCC climate models). His secondary conclusion is that the models assign too much warming to CO2 concentrations (hardly a surprise, since the IPCC deliberately chose the highest value from a broad range of studies). Naturally, the AGW believers claim that Spencer's work is unreliable, that it was a political ploy, that it was data manipulation (very charred pot calling the kettle black), that the debate is already over, yadda, yadda. The fact that Spencer's findings fit perfectly with the lack of ocean water temperature change over the same period is completely ignored.

  • Sam L.

    A lot of people have never heard "Garbage In, Garbage Out". If it comes from a computer, they assume it's right. They've heard that figures lie, and liars figure, but not that "liars can write code, too".

  • http://jamescrain.org jhc

    Don & Sam L. - Too right, gents.

    I've been writing scientific & engineering software for 30 years, and it *still* amazes me how much faith people will put in a program's output -- even after I explain its limitations in detail. It's like the magician showing you how he does the trick: you still believe your lying eyes when he does it again.

    I think the components to this faith are
    (a) it feels better to point to some result (no matter how suspect) than to admit ignorance and
    (b) it's easier to point to some result (no matter how suspect) than it is to work through it personally.

    You can see both of those components plainly in the AGW debate.

    Don and Coyote are both right: any software is loaded with limiting assumptions. If it weren't, it wouldn't be practical to write. If software even came close to modeling the complexity of a climate system or of an entire financial market, it'd never be finished.

    My wife used to develop econometric models for one of the Bell Operating Companies so it could estimate how to price services: Call Notes, call forwarding, etc. The assumptions she and her workmates routinely made - just to get their models to run - we both found laughable. And these were all people with post-grad degrees who spent full time working on models for *one particular product*, not an entire market segment.

  • ErisGuy

    "Certainty Laundering"

    Good phrase.

    People had a dream that the scientific method--that is, the methods of science combined with public criticism--could be used for good government, and they called that dream 'technocracy.' For the most part it worked for a while, until they learned that engineers, mathematicians, and scientists can be hired to lie just like everyone else. And now all the data is bad, from false crime statistics to phony academic departments to vaporous sciences like 'global warming.'

  • steve

    ErisGuy - "...engineers, mathematicians, and scientists can be hired to lie just like everyone else."

    That is exactly right.

    Still, the scientific method has real power in it, even when facing deep corruption and steep odds. 1,000 government climate scientists vs. 2 independents, and the 1,000 are losing (at least I think they are; hard to tell sometimes).

    The same can be said for public criticism.

    Just because they don't quickly win every contest hands down doesn't mean they aren't valuable tools.

  • http://lorenzo-thinkingoutaloud.blogspot.com/ Lorenzo from Oz

    'Certainty laundering' is a good phrase. Models tell you the results of your premises operating on particular measurements. That tells you something, but it is not evidence about the world, merely about your premises. It is truly amazing how many people treat computer models as oracles.