Posts tagged ‘Russ Roberts’

Libertarian / OWS Nexus

I continue to be fascinated by the frequent intersection of classical liberals / libertarians and Occupy Wall Street, at least in the diagnosis of what ails us.  This post by Russ Roberts I linked previously is a great example.   Both groups get energized by criticisms of the corporate state and crony government.

Where they diverge, of course, is in their solutions.  The OWS folks see the root cause in the behavior and incentives of private corporations, which corrupt government actors with their money, and thus advocate solutions that increase state power over these private entities.  In contrast, libertarians like myself see the problem as too much state power to create winners and losers in the market and to shift wealth from one group to another.  Given this power, the financial incentives to harness it in one's favor are overwhelming and will never go away, so the only way to tackle the problem is to reduce the state's power to play favorites.

Trusting Experts and Their Models

Russ Roberts over at Cafe Hayek quotes from a Cathy O’Neill review of Nate Silver's recent book:

Silver chooses to focus on individuals working in a tight competition and their motives and individual biases, which he understands and explains well. For him, modeling is a man versus wild type thing, working with your wits in a finite universe to win the chess game.

He spends very little time on the question of how people act inside larger systems, where a given modeler might be more interested in keeping their job or getting a big bonus than in making their model as accurate as possible.

In other words, Silver crafts an argument which ignores politics. This is Silver’s blind spot: in the real world politics often trump accuracy, and accurate mathematical models don’t matter as much as he hopes they would....

My conclusion: Nate Silver is a man who deeply believes in experts, even when the evidence is not good that they have aligned incentives with the public.

Distrust the experts

Call me “asinine,” but I have less faith in the experts than Nate Silver: I don’t want to trust the very people who got us into this mess, while benefitting from it, to also be in charge of cleaning it up. And, being part of the Occupy movement, I obviously think that this is the time for mass movements.

I share Ms. O'Neill's distrust of "authorities," and have a real problem with debates that quickly fall into dueling appeals to authority.  She is focusing here on overt politics, but subtler pressures and signalling are important as well.  For example, since "believing" in climate alarmism is in many circles equated with a sort of positive morality (and being skeptical of such findings equated with being a bad person), there is an underlying peer pressure that is different from overt politics but just as damaging to scientific rigor.  Here is an example from the comments at Judith Curry's blog discussing research on climate sensitivity (the temperature response predicted if atmospheric CO2 levels double).

While many estimates have been made, the consensus value often used is ~3°C. Like the porridge in “The Three Bears”, this value is just right – not so great as to lack credibility, and not so small as to seem benign.

Huybers (2010) showed that the treatment of clouds was the “principal source of uncertainty in models”. Indeed, his Table I shows that whereas the response of the climate system to clouds by various models varied from 0.04 to 0.37 (a wide spread), the variation of net feedback from clouds varied only from 0.49 to 0.73 (a much narrower relative range). He then examined several possible sources of compensation between climate sensitivity and radiative forcing. He concluded:

“Model conditioning need not be restricted to calibration of parameters against observations, but could also include more nebulous adjustment of parameters, for example, to fit expectations, maintain accepted conventions, or increase accord with other model results. These more nebulous adjustments are referred to as ‘tuning’.”  He suggested that one example of possible tuning is that “reported values of climate sensitivity are anchored near the 3±1.5°C range initially suggested by the ad hoc study group on carbon dioxide and climate (1979) and that these were not changed because of a lack of compelling reason to do so”.

Huybers (2010) went on to say:

“More recently reported values of climate sensitivity have not deviated substantially. The implication is that the reported values of climate sensitivity are, in a sense, tuned to maintain accepted convention.”

Translated into simple terms, the implication is that climate modelers have been heavily influenced by the early (1979) estimate that doubling of CO2 from pre-industrial levels would raise global temperatures 3±1.5°C. Modelers have chosen to compensate their widely varying estimates of climate sensitivity by adopting cloud feedback values countering the effect of climate sensitivity, thus keeping the final estimate of temperature rise due to doubling within limits preset in their minds.

There is a LOT of bad behavior out there by modelers.  I know that to be true because I used to be a modeler myself.  What laymen do not understand is that it is way too easy to tune and tweak and plug models to get a preconceived answer -- and the more complex the model, the easier this is to do in a non-transparent way.  Here is one example, related again to climate sensitivity:

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2  (a heroic assertion in and of itself) the temperature increases we have seen in the past imply a climate sensitivity closer to 1 rather than 3 or 5 or even 10  (I show this analysis in more depth in this video).
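The arithmetic behind that claim can be sketched in a few lines.  Here is a minimal back-of-envelope version, using illustrative round numbers (roughly 280 ppm pre-industrial CO2, roughly 390 ppm today, about 0.8°C of observed warming) and granting the heroic assumption that CO2 caused all of the warming:

```python
import math

# Back-of-envelope check: if ALL observed warming is attributed to CO2,
# what equilibrium climate sensitivity (warming per CO2 doubling) does
# that imply?  All numbers are illustrative round figures.
PREINDUSTRIAL_PPM = 280.0   # assumed pre-industrial CO2 concentration
RECENT_PPM = 390.0          # assumed recent CO2 concentration
OBSERVED_WARMING_C = 0.8    # assumed observed warming, degrees C

# Warming scales roughly with the logarithm of CO2 concentration,
# so count how many "doublings" the observed rise represents.
doublings = math.log(RECENT_PPM / PREINDUSTRIAL_PPM, 2)

# Implied warming per doubling, i.e. the implied sensitivity.
implied_sensitivity = OBSERVED_WARMING_C / doublings

print(f"{doublings:.2f} doublings -> implied sensitivity "
      f"{implied_sensitivity:.1f} C per doubling")
```

Even under that maximal attribution, the implied sensitivity comes out well below 3°C per doubling (transient-versus-equilibrium effects and other forcings would shift the exact number, but not the basic point).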

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need a bit of background on aerosols.  Aerosols, in this context, are man-made pollutants, mainly combustion products, that are thought to cool the Earth's climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions was exactly what was required to combine with that model's unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.
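Kiehl's finding can be illustrated with a toy energy-balance sketch.  This is my illustration, not his actual method, and every number below is an assumed round value: given a target amount of historical warming and a fixed CO2 forcing, each model's sensitivity determines the size of the aerosol "plug" it needs to match history.

```python
# Toy energy-balance illustration of the "plug variable" idea:
#   warming ~= lambda * total_forcing,  lambda = sensitivity / F_PER_DOUBLING
# Solve for the aerosol forcing a model of a given sensitivity would need
# in order to reproduce the historical record.  All values are illustrative.
F_PER_DOUBLING = 3.7   # W/m^2, conventional forcing from doubled CO2
CO2_FORCING = 1.8      # W/m^2, rough historical CO2 forcing (assumed)
TARGET_WARMING = 0.8   # degrees C the model must reproduce (assumed)

def required_aerosol_forcing(sensitivity_c):
    """Aerosol forcing (W/m^2) that lets a model with the given
    sensitivity match the target historical warming -- the "plug"."""
    lam = sensitivity_c / F_PER_DOUBLING         # C per W/m^2
    total_forcing_needed = TARGET_WARMING / lam  # W/m^2
    return total_forcing_needed - CO2_FORCING    # negative = aerosol cooling

for s in (1.5, 3.0, 4.5):
    print(f"sensitivity {s} C -> aerosol plug "
          f"{required_aerosol_forcing(s):+.2f} W/m^2")
```

The higher the model's sensitivity, the larger the offsetting negative aerosol forcing it needs to reproduce the same historical temperatures, which is the pattern described above.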

By the way, this aerosol issue is central to recent work that is pointing to a much lower climate sensitivity to CO2 than has been reported in past IPCC reports.

Science and Complexity: The Convergence of Climate and Economics

I continue to be fascinated by the similarity between climate science and macro-economics.  Both study unbelievably complex multi-variable systems in which we would really like to isolate the effect of one variable.  Because we have only one climate and one economy (we can define smaller subsets, but they will always be subject to boundary effects from the larger system), it is really hard to design good controlled experiments that isolate single variables.  And all of this is done in a highly charged political environment where certain groups are predisposed to believe their variable is the key element.

In this post by Russ Roberts, one could easily substitute "climate" for "economy" and "temperature" for "unemployment."

Suppose the economy does well this year–growth is robust and unemployment falls. What is the reason for the improvement? Will it be because of the natural rebound of an economy after a downturn that has lasted longer than people thought? The impact of the stimulus finally kicking in? The psychological or real impact of extending the Bush tax cuts? The psychological or real impact of the November election results? The steady hand of Obama at the tiller? All of the above? Can any model of the economy pass the test and answer these questions?

The reason macroeconomics is not a science and not even scientific is that the question I pose above is not answerable. If the economy improves, there will be much talk about the reason. Data and evidence will be trotted out in support of the speaker’s viewpoint. But that is not science. We don’t have a way of distinguishing between those different theories or of giving them weights to measure their independent contribution.

I’m with Arnold Kling. This is a time for humility. It should be at the heart of our discipline. The people who yell the loudest and with the most certainty are the least trustworthy. And the reason for that goes back to Hayek. We can’t measure many of the things we would have to measure to have any reasonable amount of certainty about the chains of connection and causation.

I have heard it said that the only way nowadays to advance pure science is to work on arcana like the first microsecond of the universe or the behavior of the 9th dimension in string theory.  But there is still room for a ton of useful work on the analysis, solution, and forecasting of complex multi-variable systems, even if it is just a Gödel-like proof of where the boundaries of our potential understanding can be drawn.

By the way, I wrote my own piece about the limits of macroeconomics here.

I Warned You -- Here Comes the Corporate State

In a European-style corporate state, very large corporations (and their unions) get special protections, privileges, and exemptions, to the detriment of consumers, entrepreneurs, small businesses, and taxpayers.  Here we go, via Russ Roberts:

Nearly a million workers won't get a consumer protection in the U.S. health reform law meant to cap insurance costs because the government exempted their employers.

Thirty companies and organizations, including McDonald's (MCD) and Jack in the Box (JACK), won't be required to raise the minimum annual benefit included in low-cost health plans, which are often used to cover part-time or low-wage employees.

The Department of Health and Human Services, which provided a list of exemptions, said it granted waivers in late September so workers with such plans wouldn't lose coverage from employers who might choose instead to drop health insurance altogether.

Without waivers, companies would have had to provide a minimum of $750,000 in coverage next year, increasing to $1.25 million in 2012, $2 million in 2013 and unlimited in 2014.

"The big political issue here is the president promised no one would lose the coverage they've got," says Robert Laszewski, chief executive officer of consulting company Health Policy and Strategy Associates. "Here we are a month before the election, and these companies represent 1 million people who would lose the coverage they've got."

Actually, the real political question is why McDonald's gets special treatment while the folks who run the deli downstairs in my building, who effectively compete with McDonald's, do not get to operate under the same law, merely because they are not large enough to get the President's special attention.

Food Miles Silliness

Maybe it's because I live in Phoenix, but the local food movement has always seemed silly to me.  To argue that food grown here, with our 6 inches of annual rainfall, is somehow better for the environment than product trucked in from more suitable growing regions has always struck me as crazy.  Russ Roberts links several good articles on the local food movement, one of which included this nice snarky observation:

The result has been all kinds of absurdities. For instance, it is sinful in New York City to buy a tomato grown in a California field because of the energy spent to truck it across the country; it is virtuous to buy one grown in a lavishly heated greenhouse in, say, the Hudson Valley.