
Denying the Climate Catastrophe: 5a. Arguments For Attributing Past Warming to Man

This is part A of Chapter 5 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made (this article); B) Natural Attribution
  6. Climate Models vs. Actual Temperatures

Having established that the Earth has warmed over the past century or so (though with some dispute over how much), we turn to the more interesting -- and certainly more difficult -- question of finding causes for past warming.  Specifically, for the global warming debate, we would like to know how much of the warming was due to natural variations and how much was man-made.   Obviously this is hard to do, because no one has two thermometers that show the temperature with and without man's influence.

I like to begin each chapter with the IPCC's official position, but that is a bit hard in this case because the IPCC uses a lot of soft words rather than exact numbers.  They don't say 0.5C of the 0.8C is due to man, or anything so specific.  They use phrases like "much of the warming" to describe man's effect.  However, it is safe to say that most advocates of catastrophic man-made global warming theory will claim that most or all of the last century's warming is due to man, and that is how we have put it in our framework below:

click to enlarge

By the way, the "and more" is not a typo -- there are a number of folks who will argue that the world would have actually cooled without manmade CO2 and thus manmade CO2 has contributed more than the total measured warming.  This actually turns out to be an important argument, since the totality of past warming is not enough to be consistent with high sensitivity, high feedback warming forecasts.  But we will return to this in part C of this chapter.

Past, Mostly Abandoned Arguments for Attribution to Man

There have been and still are many different approaches to the attributions problem.  In a moment, we will discuss the current preferred approach.  However, it is worth reviewing two other approaches that have mostly been abandoned but which had a lot of currency in the media for some time, in part because both were in Al Gore's film An Inconvenient Truth.

Before we get into them, I want to take a step back and briefly discuss what is called paleo-climatology, which is essentially the study of past climate before the time when we had measurement instruments and systematic record-keeping for weather.   Because we don't have direct measurements, say, of the temperature in the year 1352, scientists must look for some alternate measure, called a "proxy,"  that might be correlated with a certain climate variable and thus useful in estimating past climate metrics.   For example, one might look at the width of tree rings, and hypothesize that varying widths in different years might correlate to temperature or precipitation in those years.  Most proxies take advantage of such annual layering, as we have in tree rings.

One such methodology uses ice cores.  Ice in certain places like Antarctica and Greenland is laid down in annual layers.  By taking a core sample, characteristics of the ice can be measured at different layers and matched to approximate years.  CO2 concentrations can actually be measured in air bubbles in the ice, and atmospheric temperatures at the time the ice was laid down can be estimated from certain oxygen isotope ratios in the ice.  The result is that one can plot a chart going back hundreds of thousands of years that estimates atmospheric CO2 and temperature.  Al Gore showed this chart in his movie, in a really cool presentation where the chart wrapped around three screens:

click to enlarge

As Gore points out, this looks to be a smoking gun for attribution of temperature changes to CO2.  From this chart, temperature and CO2 concentrations appear to be moving in lockstep.  From this, CO2 doesn't seem to be a driver of temperatures, it seems to be THE driver, which is why Gore often called it the global thermostat.

But there turned out to be a problem, which is why this analysis no longer is treated as a smoking gun, at least for the attribution issue.  Over time, scientists got better at taking finer and finer cuts of the ice cores, and what they found is that when they looked on a tighter scale, the temperature was rising (in the black spikes of the chart) on average 800 years before the CO2 levels (in red) rose.

This obviously throws a monkey wrench in the causality argument.  Rising CO2 can hardly be the cause of rising temperatures if the CO2 levels are rising after temperatures.
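To make the lead/lag mechanics concrete, here is a minimal sketch, with wholly invented synthetic series rather than actual ice core data, of how a lag between two records can be estimated: scan candidate lags and keep the one that maximizes the correlation between the aligned series.

```python
# Estimating the lead/lag between two series by scanning correlations
# at candidate lags.  Synthetic data only -- not real ice core proxies.
import math
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_lag(temp, co2, max_lag):
    """Lag (in samples) of co2 behind temp that maximizes correlation;
    positive means co2 trails temp."""
    best = (0, -2.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = temp[: len(temp) - lag], co2[lag:]
        else:
            a, b = temp[-lag:], co2[: len(co2) + lag]
        r = correlation(a, b)
        if r > best[1]:
            best = (lag, r)
    return best[0]

random.seed(1)
# Smooth-ish synthetic "temperature": a sine plus small noise.
temp = [math.sin(t / 50.0) + random.gauss(0, 0.05) for t in range(1000)]
# Synthetic "CO2" trailing temperature by 8 samples (think ~800 years
# at one sample per century).
co2 = [temp[max(0, t - 8)] for t in range(1000)]
print(best_lag(temp, co2, 20))  # prints 8
```

With real proxy data the alignment is far messier, but the principle is the same: finer-resolution sampling of the cores made the lag detectable at all.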

It is now mostly thought that what this chart represents is the liberation of dissolved CO2 from oceans as temperatures rise.  Oceans have a lot of dissolved CO2, and as the oceans get hotter, they will give up some of this CO2 to the atmosphere.

The second outdated attribution analysis we will discuss is perhaps the most famous:  The Hockey Stick.  Based on a research paper by Michael Mann when he was still a grad student, it was made famous in Al Gore's movie as well as numerous other press articles.  It became the poster child, for a few years, of the global warming movement.

So what is it?  Like the ice core chart, it is a proxy analysis attempting to reconstruct temperature history, in this case over the last 1000 years or so.  Mann originally used tree rings, though in later versions he has added other proxies, such as from organic matter laid down in sediment layers.

Before the Mann hockey stick, scientists (and the IPCC) believed the temperature history of the last 1000 years looked something like this:

click to enlarge

Generally accepted history had a warm period from about 1100-1300 called the Medieval Warm Period, which was warmer than it is today, and a cold period in the 17th and 18th centuries called the "Little Ice Age".  Temperature increases since the Little Ice Age could in part be thought of as a recovery from this colder period.  Strong anecdotal evidence existed from European sources supporting the existence of both the Medieval Warm Period and the Little Ice Age.  For example, I have taken several history courses on the high Middle Ages, and every single professor has described the warm period from 1100-1300 as creating a demographic boom that defined the era (yes, warmth was a good thing back then).  In fact, many will point to the famines in the early 14th century that resulted from the end of this warm period as having weakened the population and set the stage for the Black Death.

However, this sort of natural variation before the age where man burned substantial amounts of fossil fuels created something of a problem for catastrophic man-made global warming theory.  How does one convince the population of catastrophe if current warming is within the limits of natural variation?  Doesn't this push the default attribution of warming towards natural factors and away from man?

The answer came from Michael Mann (now Dr. Mann, though the original work was produced before he finished grad school).  It has been dubbed the hockey stick for its shape:

 

click to enlarge

The reconstructed temperatures are shown in blue, and gone are the Medieval Warm Period and the Little Ice Age, which Mann argued were local to Europe and not global phenomena.  The story that emerged from this chart is that before industrialization, global temperatures were virtually flat, oscillating within a very narrow band of a few tenths of a degree.  However, since 1900, something entirely new seems to be happening, breaking the historical pattern.  From this chart, it looks like modern man has perhaps changed the climate.  This shape, with the long flat historical trend and the sharp uptick at the end, is why it gets the name "hockey stick."

Oceans of ink and electrons have been spilled over the last 10+ years around the hockey stick, including a myriad of published books.  In general, except for a few hard core paleoclimatologists and perhaps Dr. Mann himself, most folks have moved on from the hockey stick as a useful argument in the attribution debate.  After all, even if the chart is correct, it provides only indirect evidence of the effect of man-made CO2.

Here are a few of the critiques:

  • Note that the real visual impact of the hockey stick comes from the orange data on the far right -- the blue data alone doesn't form much of a hockey stick.  But the orange data is from an entirely different source, in fact an entirely different measurement technology -- the blue data is from tree rings, and the orange is from thermometers.  Dr. Mann bristles at the accusation that he "grafted" one data set onto the other, but by drawing the chart this way, that is exactly what he did, at least visually.  Why does this matter?  Well, we have to be very careful with inflections in data that occur exactly at the point where we change measurement technologies -- we are left with the suspicion that the change in slope is due to differences in the measurement technology, rather than in the underlying phenomenon being measured.
  • In fact, well after this chart was published, we discovered that Mann and others like Keith Briffa actually truncated the tree ring temperature reconstructions (the blue line) early.  Note that the blue data ends around 1950.  Why?  Well, it turns out that many tree ring reconstructions showed temperatures declining after 1950.  Does this mean that thermometers were wrong?  No, but it does provide good evidence that the trees are not accurately following current temperature increases, and so probably did not accurately portray temperatures in the past.
  • If one looks at the graphs of all of Mann's individual proxy series that are averaged into this chart, astonishingly few actually look like hockey sticks.  So how do they average into one?  McIntyre and McKitrick in 2005 showed that Mann used some highly unusual and unprecedented-to-all-but-himself statistical methods that could create hockey sticks out of thin air.  The duo fed random data into Mann's algorithm and got hockey sticks.
  • At the end of the day, most of the hockey stick (again due to Mann's averaging methods) was due to samples from just a handful of bristle-cone pine trees in one spot in California, trees whose growth is likely driven by a number of non-temperature factors like precipitation levels and atmospheric CO2 fertilization.   Without these few trees, most of the hockey stick disappears.  In later years he added in non-tree-ring series, but the results still often relied on just a few series, including the Tiljander sediments where Mann essentially flipped the data upside down to get the results he wanted.  Taking out the bristlecone pines and the abused Tiljander series made the hockey stick go away again.
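For readers curious about the statistical critique, here is a toy sketch of the "short-centering" idea at issue: each series is centered on only the last segment of its history before the first principal component is extracted, so series whose recent values happen to drift away from their long-run mean get heavily weighted, and PC1 tends to show a step between the pre- and post-centering periods even in pure noise.  Everything below is synthetic red noise; this illustrates the critique and is emphatically not Mann's actual code or proxy network.

```python
# Toy illustration of short-centered PCA on synthetic red noise.
# All data and parameter choices here are invented for illustration.
import math
import random

def red_noise(n, rho=0.9, seed=None):
    """AR(1) series with persistence rho -- a stand-in for a noisy proxy."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0, 1)
        out.append(x)
    return out

def short_center(series, tail):
    """Subtract the mean of only the last `tail` points (the step at issue,
    instead of centering on the full record)."""
    m = sum(series[-tail:]) / tail
    return [v - m for v in series]

def first_pc(columns, iters=100):
    """First principal component scores via power iteration on the
    column-by-column covariance (columns are the individual series)."""
    n = len(columns[0])
    k = len(columns)
    w = [1.0 / math.sqrt(k)] * k
    for _ in range(iters):
        score = [sum(w[j] * columns[j][t] for j in range(k)) for t in range(n)]
        w = [sum(score[t] * columns[j][t] for t in range(n)) for j in range(k)]
        norm = math.sqrt(sum(v * v for v in w))
        w = [v / norm for v in w]
    return [sum(w[j] * columns[j][t] for j in range(k)) for t in range(n)]

n_years, n_series, tail = 400, 30, 40   # 400 "years", 30 pure-noise series
cols = [short_center(red_noise(n_years, seed=s), tail) for s in range(n_series)]
pc1 = first_pc(cols)
# By construction the final 40 "years" of pc1 average to ~zero while the
# earlier shaft sits offset from it -- a hockey-stick-like step from noise.
```

This is the flavor of what McIntyre and McKitrick demonstrated: the shape can come from the centering choice, not the data.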

There have been plenty of other efforts at proxy series that continue to show the Medieval Warm Period and Little Ice Age as we know them from the historical record:

 

click to enlarge

As an aside, Mann's hockey stick was always problematic for supporters of catastrophic man-made global warming theory for another reason.  The hockey stick implies that the world's temperatures are, in the absence of man, almost dead-flat stable.   But this is hardly consistent with the basic hypothesis, discussed earlier, that the climate is dominated by strong positive feedbacks that take small temperature variations and multiply them many times.   If Mann's hockey stick is correct, it could also be taken as evidence against the high climate sensitivities that are demanded by the catastrophe theory.

 

The Current Lead Argument for Attribution of Past Warming to Man

So we are still left wondering, how do climate scientists attribute past warming to man?  Well, to begin, in doing so they tend to focus on the period after 1940, when large-scale fossil fuel combustion began in earnest.   Temperatures have risen since 1940, but in fact nearly all of this rise occurred in the 20-year period from 1978 to 1998:

 

click to enlarge

To be fair, and better understand the thinking at the time, let's put ourselves in the shoes of scientists around the turn of the century and throw out what we know happened after that date.  Scientists then would have been looking at this picture:

click to enlarge

Sitting in the year 2000, the recent warming rate might have looked dire... nearly 2C per century...

click to enlarge

Or possibly worse if we were on an accelerating course...

click to enlarge

Scientists began to develop a hypothesis that this temperature rise was occurring too rapidly to be natural, that it had to be at least partially man-made.  I have always thought this a slightly odd conclusion, since the slope from this 20-year period looks almost identical to the slope centered around the 1930s, which was very unlikely to have much human influence.

 

click to enlarge

Nevertheless, the hypothesis that the 1978-1998 temperature rise was too fast to be natural gained great currency.  But how does one prove it?

What scientists did was to build computer models to simulate the climate.  They then ran the computer models twice.  The first time they ran them with only natural factors, or at least only the natural factors they knew about or were able to model (they left a lot out, but we will get to that in time).  These models were not able to produce the 1978-1998 warming rates.  Then, they re-ran the models with manmade CO2, and particularly with a high climate sensitivity to CO2 based on the high feedback assumptions we discussed in an earlier chapter.   With these models, they were able to recreate the 1978-1998 temperature rise.   As Dr. Richard Lindzen of MIT described the process:

What was done, was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multidecadal Oscillation), claim that such models nonetheless accurately depicted natural internal climate variability, and use the fact that these models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man.

Another way to put this argument is "we can't think of anything natural that could be causing this warming, so by default it must be man-made."  With various increases in sophistication, this remains the lead argument in favor of attribution of past warming to man.

In part B of this chapter, we will discuss what natural factors were left out of these models, and I will take my own shot at a simple attribution analysis.

The next section, Chapter 5 Part B, on natural attribution is here.

Denying the Climate Catastrophe: 4a. Actual Temperature Data

This is the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data (this article);   B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures

In our last chapter, we ended a discussion on theoretical future warming rates by saying that no amount of computer modelling was going to help us choose between various temperature sensitivities and thus warming rates.  Only observational data was going to help us determine how the Earth actually responds to increasing CO2 in the atmosphere.  So in this chapter we turn to the next part of our framework, which is our observations of Earth's temperatures, which is among the data we might use to support or falsify the theory of catastrophic man-made global warming.

click to enlarge

The IPCC position is that the world (since the late 19th century) has warmed about 0.8C.  This is a point on which many skeptics will disagree, though perhaps not as substantially as one might expect from the media.   Most skeptics, myself included, would agree that the world has certainly warmed over the last 100-150 years.  The disagreement tends to be in the exact amount of warming, with many skeptics contending that the amount of warming has been overstated due to problems with temperature measurement and aggregation methodology.

For now, we will leave those issues aside until part B of this section, where we will discuss some of these issues.  One reason to do so is to focus, at least at first, on the basic point of agreement that the Earth has indeed warmed somewhat.  But another reason to put these differences over magnitude aside is that we will find, a few chapters hence, that they essentially don't matter.  Even the IPCC's 0.8C estimate of past warming does not support its own estimates of temperature sensitivity to CO2.

Surface Temperature Record

The most obvious way to measure temperatures on the Earth is with thermometers near the ground.   We have been measuring the temperature at a few select locations for hundreds of years, but it really is only in the last century that we have fairly good coverage of the land surface.  And even then our coverage of places like the Antarctic, central Africa, parts of South America, and all of the oceans (which cover about 71% of the Earth's surface) is even today still spotty.  So coming up with some sort of average temperature for the Earth is not a straight averaging exercise -- data must be infilled and estimated, making the process complicated and subject to a variety of errors.

But the problem is more difficult than just data gaps.  How does one actually average a temperature from Denver with a temperature from San Diego?  While a few folks attempt such a straight average, scientists have developed a theory that one can more easily average what are known as temperature anomalies than one can average the temperature itself.  What is an anomaly?  Essentially, for a given thermometer, researchers will establish an average for that thermometer for a particular day of the year.  The exact time period or even the accuracy of this average is not that important, as long as the same time period is used consistently.  Then, the anomaly for any given measurement is the deviation of the measured temperature from its average.   So if the average historical temperature for this day of the year is 25C and the actual measured temperature for the day is 26C, the anomaly for today at this temperature station is +1.0C.
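The anomaly calculation itself is trivial; a minimal sketch with invented numbers looks like this:

```python
# One station, one calendar day: the anomaly is the deviation of the
# day's reading from that station's long-term average for the same day.
# All numbers below are made up for illustration.

def baseline(history):
    """Long-term average of past readings (C) for one station and one
    calendar day; the exact base period matters less than using the
    same one consistently."""
    return sum(history) / len(history)

def anomaly(measured_c, history):
    """Measured temperature minus the station's baseline for that day."""
    return measured_c - baseline(history)

past_readings = [24.0, 25.0, 26.0, 25.0]   # hypothetical past July 1 readings
print(round(anomaly(26.0, past_readings), 2))  # prints 1.0
```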

Scientists then develop programs that spatially average these temperature anomalies for the whole Earth, while also adjusting for a myriad of factors, from time-of-day changes in measurement to technology changes over time of the temperature stations to actual changes in the physical location of the measurement.  This is a complicated enough task, with enough explicit choices that must be made about techniques and adjustments, that there are many different temperature metrics floating around out there, many of which get different results from essentially the same data.  The Hadley Centre in England's HadCRUT4 global temperature metric is generally considered the gold standard, and is the one used preferentially by the IPCC.  Its metric is shown below, with the monthly temperature anomaly in dark blue and the 5-year moving average (centered on its mid-point):

click to enlarge

Again, the zero point of the chart is arbitrary and merely depends on the period of time chosen as the base or average.  Looking at the moving average, one can see the temperature anomaly bounces around -0.3C in the late 19th century and has been around +0.5C over the last several years, which is how we get to about 0.8C warming.
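As an illustration of the spatial-averaging step described above, here is a minimal sketch of area-weighting gridded anomalies: grid cells cover less area toward the poles, so they are typically weighted by the cosine of latitude.  The data is invented, and real products like HadCRUT apply many further adjustments this ignores.

```python
# Area-weighted averaging of gridded anomalies (cosine-of-latitude
# weights).  Toy grid; a real product has thousands of cells plus
# infilling and adjustment steps not shown here.
import math

def area_weighted_mean(cells):
    """cells: list of (latitude_deg, anomaly_c) pairs.  Cells with no
    data are simply absent, one crude way of handling gaps."""
    wsum = sum(math.cos(math.radians(lat)) for lat, _ in cells)
    return sum(math.cos(math.radians(lat)) * a for lat, a in cells) / wsum

grid = [(0.0, 0.5), (45.0, 0.2), (80.0, 1.5)]   # tropics, mid-lat, near-pole
# The hot near-pole cell is heavily downweighted, pulling the global
# figure well below a naive simple average.
print(round(area_weighted_mean(grid), 2))  # prints 0.48
```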

Satellite Temperature Record

There are other ways to take temperature measurements, however.  Another approach is to use satellites to measure surface temperatures (or at least near-surface temperatures).   Satellites measure temperature by measuring the thermal microwave emissions of oxygen molecules in the lower troposphere (perhaps 0-3 miles above the Earth).  Satellites have the advantage of being able to look at the entire Earth without gaps, and are not subject to siting biases for surface temperature stations (which will be discussed in part B of this chapter).

The satellite record does, however, rely on a shifting array of satellites, all of which have changing orbits for which adjustments must be made.  Of necessity, the satellite record cannot reach as far back into the past.  And the satellites are not actually measuring the temperature of the Earth, but rather a temperature a mile or two up.  Whether that matters is subject to debate, but the clincher for me is that the IPCC and most climate models have always shown that the earliest and strongest anthropogenic warming should show up in exactly this spot -- the lower troposphere -- which makes observation of this zone a particularly good way to look for a global warming signal.

Roy Spencer and John Christy have what is probably the leading satellite temperature metric, called "UAH" as shorthand for the University of Alabama in Huntsville's space science center.  The UAH record looks like this:

click to enlarge

Note that the absolute magnitude of the anomaly isn't comparable between the surface and satellite record, as they use different base periods, but changes and growth rates in the anomalies should be comparable between the two indices.
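One way to make two anomaly series comparable despite different base periods is to re-baseline both to a common reference window before comparing them.  A minimal sketch with toy numbers, not the actual indices:

```python
# Re-baselining two anomaly series to a shared reference window so
# their changes (not their arbitrary zero points) can be compared.
# Both series below are invented.

def rebaseline(series, start, end):
    """Shift a series so its mean over the window [start, end) is zero."""
    ref = sum(series[start:end]) / (end - start)
    return [v - ref for v in series]

surface = [0.1, 0.2, 0.3, 0.5, 0.6]       # hypothetical, base period A
satellite = [-0.2, -0.1, 0.0, 0.2, 0.3]   # hypothetical, base period B
s1 = rebaseline(surface, 0, 3)
s2 = rebaseline(satellite, 0, 3)
# After re-baselining, these particular toy series turn out to have
# identical changes even though their raw anomalies differed.
print(max(abs(a - b) for a, b in zip(s1, s2)) < 1e-9)  # prints True
```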

The first thing to note is that, though they are different, both the satellite and surface temperature records show warming since 1980.  For all that some skeptics may want to criticize the authors of the surface temperature databases, and there are indeed some grounds for criticism, these issues should not distract us from the basic fact that in every temperature record we have (including other technologies like radiosonde balloons), we see recent warming.

In terms of magnitude, the two indices do not show the same amount of warming -- since 1980 the satellite temperature record shows about 30% less warming than does the surface temperature record for the same period.   So which is right?  We will discuss this in more depth in part B, but the question is not made any easier by the fact that the surface records are compiled by prominent alarmist scientists while the satellite records are maintained by prominent skeptic scientists, which causes each side to accuse the other of having its thumb on the scale, so to speak.  I personally like the satellite record because of its larger coverage area and the fact that its manual adjustments (which are required of both technologies) are for a handful of instruments rather than thousands, and are thus easier to manage and get right.  But I am also increasingly of the opinion that the differences are minor, and that neither is consistent with catastrophic forecasts.

So instead of getting ourselves involved in the dueling temperature data set food fight (we will dip our toe into this in part B), let's instead apply both these data sets to several propositions we see frequently in the media.  We will quickly see the answers we reach do not depend on the data set chosen.

Test #1:  Is Global Warming Accelerating?

One frequent meme you will hear all the time is that "global warming is accelerating."  As of today it had 550,000 results on Google.  For example:

click to enlarge

So.  Is that true?  They can't print it if it's not true, right (lol)?  Let's look first at the satellite record through the end of 2015, when this presentation was put together (there is an El Nino driven spike in the two months after this chart was made, which does not affect the conclusions that follow in the least, but I will update to include it ASAP).

click to enlarge

If you want a name for this chart, I could call it the "bowl of cherries" because it has become a cherry-picker's delight.   Everyone in the debate can find a starting point and an end point in this jagged data to find any trend they want to find.  So how do we find an objective basis to define end points for this analysis?  Well, my background is more in economic analysis.  Economists have the same problem in looking at trends for things like employment or productivity because there is a business cycle that adds volatility to these numbers above and beyond any long term trend.  One way they manage this is to measure variables from peak to peak of the economic cycle.

I have done something similar.  The equivalent cyclical peaks in the temperature world are probably the very strong El Nino events.  There was one in 1998 and there is one occurring right now in late 2015/early 2016.  So I defined my period as the 18 years from peak to peak.  By this timing, the satellite record shows temperatures to be virtually dead flat for those 18 years.  This is "the pause" that you may have heard of in climate debates.   Such an extended pause is not predicted by global warming theory, particularly when the theory (as in the IPCC main case) assumes high temperature sensitivities to CO2 and low natural variation in temperatures.
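The peak-to-peak idea reduces to fitting an ordinary least-squares trend between two chosen peak years.  A minimal sketch with invented data (a perfectly flat series between the two El Nino peaks, to mimic "the pause"):

```python
# Peak-to-peak trend: fit a least-squares slope only between comparable
# cyclical peaks so the endpoints aren't cherry-picked.  The anomaly
# data below is invented, not the actual UAH record.

def ols_slope(xs, ys):
    """Ordinary least-squares slope (C per year here)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def peak_to_peak_trend(series, peak1, peak2):
    """Trend between two peak years; series maps year -> anomaly (C)."""
    years = [y for y in sorted(series) if peak1 <= y <= peak2]
    return ols_slope(years, [series[y] for y in years])

# Hypothetical anomalies: dead flat between the 1998 and 2016 peaks.
anomalies = {y: 0.25 for y in range(1998, 2017)}
print(peak_to_peak_trend(anomalies, 1998, 2016))  # prints 0.0
```

The same function run over an earlier 18-year window with rising values would, of course, return a positive slope, which is the comparison made in the next chart.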

So if global warming were indeed accelerating, we would expect the warming rate over the last 18 years to be higher than the rate over the previous 18 years.  But just the opposite is true:

click to enlarge

While "the pause" does not in and of itself disprove the theory of catastrophic manmade global warming, it does easily falsify the myriad statements you see that global warming is accelerating.  At least for the last 20 years, it has been decelerating.

By the way, this is not somehow an artifact of just the satellite record.  This is what the surface record looks like for the same periods:

click to enlarge

Though it shows (as we discussed earlier) higher overall warming rates, the surface temperature record also shows a deceleration rather than acceleration over the last 20 years.

 

Test #2:  Are Temperatures Rising Faster than Expected?

OK, let's consider another common meme, that the "earth is warming faster than predicted."

click to enlarge

Again, there are over 500,000 Google matches for this meme.  So how do we test it?  Well, certainly not against the last IPCC forecasts -- they are only a few years old.  The first real high-sensitivity or catastrophic forecast we have is from James Hansen, often called the father of global warming.

click to enlarge

In June of 1988, Hansen made a seminal presentation to Congress on global warming, including this very chart (sorry for the sucky 1980's graphics).  In his testimony, he presented his models for the Earth's temperature, which showed a good fit with history**.  Using his model, he then created three forecasts:  Scenario A, with high rates of CO2 emissions;  Scenario B, with more modest emissions; and Scenario C, with drastic worldwide emissions cuts (plus volcanoes, which tend to belch dust and chemicals that have a cooling effect).  Surprisingly, we can't even get agreement today about which forecast for CO2 production was closer to the mark (throwing in the volcanoes makes things hard to parse), but it is pretty clear that over the 30 years after this forecast, the Earth's CO2 output has been somewhere between A and B.

click to enlarge

As it turns out, it doesn't matter whether we actually followed the CO2 emissions from A or B.  The warming forecasts for scenario A and B turn out to be remarkably similar.  In the past, I used to just overlay temperature actuals onto Hansen's chart, but it is a little hard to get the zero point right and it led to too many food fights.  So let's pull the scenario A and B forecasts off the chart and compare them a different way.

click to enlarge

The left side of the chart shows Hansen's Scenario A and B, scanned right from his chart.  Scenario A implies a warming rate from 1986 to 2016 of 3.1C per century.  Scenario B is almost as high, at 2.8C per century.  But as you can see on the right, the actual warming rates we have seen over the same period are well below these forecasts.  The surface temperature record shows only about half the warming, and the satellite record only about a third the warming, that Hansen predicted.   There is no justification for saying that recent warming rates have been higher than expected or forecast -- in fact, the exact opposite has been true.
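The per-century figures are just period changes scaled to 100 years.  A minimal sketch (the 0.93C input is back-derived for illustration, not a number taken from Hansen's paper):

```python
# Converting a temperature change over a period into a per-century
# rate, so forecasts and observations sit on the same footing.

def rate_per_century(delta_c, years):
    """Scale a temperature change over `years` to degrees C per century."""
    return delta_c * 100.0 / years

# Hypothetical: ~0.93C of warming over the 30 years from 1986 to 2016
# works out to about 3.1C per century.
print(round(rate_per_century(0.93, 30), 1))  # prints 3.1
```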

We see the same thing when looking at past IPCC forecasts.  At each of its every-five-year assessments, the IPCC has included a forecast range for future temperatures.  In this case, though, we don't have to create a comparison with actuals because the most recent (5th) IPCC Assessment did it for us:

click to enlarge

The colored bands are their past forecasts.  The grey areas are the error bands on the forecast.  The black dots are global temperatures (which actually are shown with error bars, which is good practice but seldom done except perhaps when they are trying to stretch to get into the forecast range).  As you can see, temperatures have been so far below forecasts that they are dropping out of the low end of even the most generous forecast bands.  If temperatures were rising faster than expected, the black dots would be above the orange and yellow bands.  We therefore have to come to the conclusion that, at least for the last 20-30 years, temperatures have not been rising faster than expected, they have been rising slower than expected.

Day vs. Night

There is one other phenomenon we can see in the temperature data that we will come back to in later chapters:  that much of the warming over the last century has been at night, rather than in the daytime.   There are two possible explanations for this.  The first is that most anthropogenic warming models predict more night time warming than they do day time warming.  The other possibility is that a portion of the warming in the 20th century temperature record is actually spurious bias from the urban heat island effect due to siting of temperature stations near cities, since urban heat island warming shows up mainly at night.  We will discuss the latter effect in part B of this chapter.

Whatever the cause, much of the warming we have seen has occurred at night, rather than during the day.  Here is a great example from the Amherst, MA temperature station (Amherst was the first location where I gave this presentation, if that seems an odd choice).

Click to enlarge

As you can see, the warming rate since 1945 is five times higher at night than during the day.  This directly affects average temperatures, since the daily average temperature for a location in the historic record is the simple average of the daily high and daily low.  Yes, I know that this is not exactly accurate, but given technology in the past, this is the best that could be done.
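The arithmetic consequence is easy to see: with the daily mean defined as the average of high and low, a station warming only at night still shows a rising mean, at half the nighttime rate.  A toy sketch with invented numbers:

```python
# The traditional daily "average" is just (high + low) / 2, so a
# station warming only at night pulls the average up even with flat
# daytime highs.  All numbers below are invented, not Amherst data.

def daily_mean(tmax, tmin):
    """The traditional daily average used in the historic record."""
    return (tmax + tmin) / 2.0

years = range(1945, 1955)
highs = [30.0 for _ in years]                      # flat daytime highs
lows = [15.0 + 0.05 * (y - 1945) for y in years]   # nights warming 0.05C/yr
means = [daily_mean(h, l) for h, l in zip(highs, lows)]
# Over 9 years the mean rises 0.225C: half the 0.45C nighttime rise,
# despite a daytime trend of exactly zero.
print(round(means[-1] - means[0], 3))  # prints 0.225
```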

The news media likes to cite examples of heat waves and high temperature records as a "proof" of global warming.   We will discuss this later, but this is obviously a logical fallacy -- one can't prove a trend in noisy data simply by citing isolated data points in one tail of the distribution.  But it is also fallacious for another reason -- we are not actually seeing any upwards trends in high temperature records, at least for daytime highs:

Click to enlarge

To get this chart, we obviously have to eliminate newer temperature stations from the data set -- any temperature station that is only 20 years old will have all of its all-time records in the last 20 years (you would be surprised at how many otherwise reputable scientists miss simple things like this).  Looking at just the temperature stations in the US for which we have a long record, we see with the black line that there is really no upwards trend in the number of high temperature records (Tmax) being set.   The 1930s were brutally hot, and if not for some manual adjustments we will discuss in part B of this section, they would likely still show as the hottest recent era for the US.   It turns out, with the grey line (Tmin), that while there is still no upward trend, we are actually seeing more high temperature records being set with daily lows (the highest low, as it were) than we are with daily highs.  The media is, essentially, looking in the wrong place, but I sympathize because a) broiling hot daytime highs are sexier and b) it is brutally hard to talk about highest low temperatures without being confusing as hell.
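The station-filtering point can be sketched as follows: count record-setting events only for stations whose history starts early enough, and a short-record station that would otherwise inflate recent record counts drops out.  Toy data and hypothetical helper names throughout:

```python
# Counting all-time record highs fairly: a station opened 20 years ago
# sets all of its records in the last 20 years by construction, so
# short-record stations must be excluded.  Station histories below are
# invented (year, tmax) tuples.

def record_years(history):
    """Years in which a station set a new all-time daily-high record."""
    best = float("-inf")
    out = []
    for year, tmax in history:
        if tmax > best:
            best = tmax
            out.append(year)
    return out

def records_in_window(stations, start, min_first_year):
    """Count record-setting events since `start`, excluding stations
    whose history begins after `min_first_year`."""
    count = 0
    for history in stations:
        if history[0][0] > min_first_year:
            continue  # record too short to count fairly
        count += sum(1 for y in record_years(history) if y >= start)
    return count

old = [(1930, 41.0), (1936, 43.0), (1980, 42.0), (2010, 42.5)]  # long record
new = [(2000, 35.0), (2005, 36.0), (2012, 37.0)]                # short record
print(records_in_window([old, new], 2000, 1950))  # prints 0
print(records_in_window([old, new], 2000, 2100))  # prints 3 -- inflated
```

The second count is inflated entirely by the short-record station, which trivially sets a "record" every time it warms past its own brief history.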

In our next chapter, or really part B of this chapter, we will discuss some of the issues that may be leading the surface temperature record to be exaggerated, or at least inaccurate.

Chapter 4, Part B on problems with the surface temperature record continues here.

If you want to skip Part B, and get right on with the main line of the argument, you can go straight to Chapter 5, part A, which starts in on the question of how much of past warming can be attributed to man.

 

** Footnote:  The history of Wall Street is full of bankrupt people whose models exactly matched history.  I have done financial and economic modeling for decades, and it is surprisingly easy to force multi-variable models to match history.  The real test is how well the model works going forward.  Both Hansen's 1988 models and the IPCC's many models do an awesome job matching history, but quickly go off the rails in future years.  I am reminded of a simple but famous example of the perfect past correlation between certain NFL outcomes and Presidential election outcomes.   This NFL model of presidential elections perfectly matches history, but one would be utterly mad to bet future elections based on it.

Why Do Climate Change Claims Consistently Get a Fact-Checker Pass?

It is almost impossible anymore to read a media story about severe weather events without seeing some blurb about such and such event being the result of manmade climate change.  I hear writers all the time saying that it is exhausting to run the gauntlet of major media fact checkers, so why do these weather claims all get a pass?  Even the IPCC, which we skeptics think exaggerates manmade climate change effects, has refused to link current severe weather events with manmade CO2.

The California drought brings yet another tired example of this.  I think pretty much everyone in the media has operated from the assumption that the current CA drought is 1. unprecedented and 2. man-made. The problem is that neither is true.  Skeptics have been saying this for months, pointing to 100-year California drought data and to 2-3 other events in the pre-manmade-CO2 era that were at least as severe.  But now the NOAA has come forward and said roughly the same thing:

Natural weather patterns, not man-made global warming, are causing the historic drought parching California, says a study out Monday from federal scientists.

"It's important to note that California's drought, while extreme, is not an uncommon occurrence for the state," said Richard Seager, the report's lead author and professor with Columbia University's Lamont Doherty Earth Observatory. The report was sponsored by the National Oceanic and Atmospheric Administration. The report did not appear in a peer-reviewed journal but was reviewed by other NOAA scientists.

"In fact, multiyear droughts appear regularly in the state's climate record, and it's a safe bet that a similar event will happen again," he said.

The persistent weather pattern over the past several years has featured a warm, dry ridge of high pressure over the eastern north Pacific Ocean and western North America. Such high-pressure ridges prevent clouds from forming and precipitation from falling.

The study notes that this ridge — which has resulted in decreased rain and snowfall since 2011 — is almost opposite to what computer models predict would result from human-caused climate change.

There is an argument to be made that this drought was made worse by the fact that the low precipitation was mated with higher-than-average temperatures that might be partially attributable to man-made climate change.  One can see this in the Palmer drought severity index, which looks at more factors than just precipitation.  While the last 3 years were not the lowest for rainfall in CA over the last 100 years, I believe the Palmer index for the last 3 years was the lowest of any 3-year period in the last 100+ years.  The report did not address this warming or attempt to attribute some portion of it to man, but it is worth noting that temperatures this year in CA were, like the drought, not unprecedented, particularly in rural areas (urban areas are going to be warmer than 50 years ago due to the increasing urban heat island effect, which is certainly manmade but has nothing to do with CO2).

Update:  By the way, note the article is careful to give several paragraphs after this bit to opponents who disagree with the findings.  Perfectly fine.  But note that this is the courtesy that is increasingly denied to skeptics when the roles are reversed.  Maybe I should emulate climate alarmists and be shouting "false balance!  the science is settled!"

Listening to California Parks People Discuss Climate Change

Some random highlights:

  • I watched a 20 minute presentation in which a woman from LA parks talked repeatedly about the urban heat island being a result of global warming
  • I just saw that California State Parks, which is constantly short of money and has perhaps a billion dollars in unfunded maintenance needs, just spent millions of dollars to remove a road from a beachfront park based solely (they claimed) on projections that 55 inches of sea level rise would cause the road to be a problem.  Sea level has been rising 3-4mm a year for over 150 years, and even the IPCC, based on its old, much higher temperature increase forecasts, predicted about a foot of rise.
  • One presenter said that a 3-5C temperature rise over the next century represents the low end of reasonable forecasts.  Most studies of late are showing a climate sensitivity of 1.5-2.0C (I still predict 1C), with warming over the rest of the century of about 1C, or about what we saw last century.
  • I watched them brag for half an hour about spending tons of extra money on making LEED-certified buildings.  As written here any number of times, most LEED savings come through BS gaming of the rules, like putting in dedicated electric vehicle parking spots (which do not even need a charger to get credit).  In a brief moment of honesty, the presenting architect admitted that most of the LEED score for one building came from using used rather than new furniture in the building.
  • They said that LEED buildings were not any more efficient than most other commercial buildings getting built, just a matter of whether you wanted to pay for LEED certification -- it was stated that the certification was mostly for the plaque.  Which I suppose is fine for private businesses looking for PR, but why are cash-strapped public agencies doing it?

Great Moments in "Science"

You know that relative of yours, who last Thanksgiving called you anti-science because you had not fully bought into global warming alarm?

Well, it appears that the reason we keep getting called "anti-science" is because climate scientists have a really funny idea of what exactly "science" is.

Apparently, a number of folks have been trying for years to get articles published in peer reviewed journals comparing the IPCC temperature models to actual measurements, and in the process highlighting the divergence of the two.  And they keep getting rejected.

Now, the publisher of Environmental Research Letters has explained why.  Apparently, in climate science it is "an error" to attempt to compare computer temperature forecasts with the temperatures that actually occurred.  In fact, he says that trying to do so "is harmful as it opens the door for oversimplified claims of 'errors' and worse from the climate sceptics media side".  Apparently, the purpose of scientific inquiry is to win media wars, and not necessarily to discover truth.

Here is something everyone in climate should remember:  The output of models merely represents a hypothesis.  When we have complicated hypotheses in complicated systems, and where such hypotheses may encompass many interrelated assumptions, computer models are an important tool for playing out, computationally, what results those hypotheses might translate to in the physical world.  It is no different than if Newton had had a computer and took his equation F = Gm1m2/R^2 and used the computer to project future orbits for the Earth and other planets (which he and others did, but by hand).   But these projections would have no value until they were checked against actual observations.  That is how we knew we liked Newton's models better than Ptolemy's -- because they checked out better against actual measurements.
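The "check against observations" step is mechanically trivial, which makes refusing to do it all the stranger. A hedged sketch (the function and the numbers are mine, purely for illustration):

```python
def fraction_outside_envelope(observed, lower, upper):
    """Fraction of observations falling outside a model's forecast envelope.
    If the hypothesis encoded in the model is any good, this should stay
    near the expected miss rate (e.g. about 5% for a 95% envelope)."""
    pairs = list(zip(observed, lower, upper))
    misses = sum(1 for obs, lo, hi in pairs if not (lo <= obs <= hi))
    return misses / len(pairs)

# Illustrative numbers only -- not real anomalies or real forecast bands:
obs = [0.1, 0.2, 0.5, 0.15]
lo = [0.0, 0.0, 0.0, 0.0]
hi = [0.3, 0.3, 0.3, 0.3]
print(fraction_outside_envelope(obs, lo, hi))  # 0.25 -- one of four outside
```

A persistently large value here is the model equivalent of Ptolemy's epicycles failing to match the sky.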

But climate scientists are trying to create some kind of weird world where model results have some sort of independent reality, where in fact the model results should be trusted over measurements when the two diverge.  If this is science -- which it is not -- but if it were, then I would be anti-science.

Climate Alarmism In One Statement: "Limited Evidence, High Agreement"

From James Delingpole:

The draft version of the report's Summary For Policymakers made the startling admission that the economic damage caused by "climate change" would be between 0.2 and 2 percent of global GDP - significantly less than the doomsday predictions made in the 2006 Stern report (which estimated the damage at between 5 and 20 percent of global GDP).

But this reduced estimate did not suit the alarmist narrative of several of the government delegations at the recent IPCC talks in Yokohama, Japan. Among them was the British one, comprising several members of the deep green Department of Energy and Climate Change (DECC), which insisted on doctoring this section of the Summary For Policymakers in order to exaggerate the potential for more serious economic damage.

"Losses are more likely than not to be greater, rather than smaller, than this range (limited evidence, high agreement)"

There was no evidence whatsoever in the body of the report to justify this statement.

I find it fascinating that there can be "high agreement" to a statement for which there is limited or no evidence.  Fortunately these are all self-proclaimed defenders of science or I might think this was purely a political statement.

Note that the most recent IPCC reports and newly published studies on climate sensitivity tend to say that 1) warming in the next century will be 1-2C, not the much higher numbers previously forecast; 2) that warming will not be particularly expensive to manage and mitigate; and 3) we are increasingly less sure that warming is causing all sorts of negative knock-on effects like more hurricanes.  In other words, opinion is shifting to where science-based skeptics have been all along (since 2007 in my case).  No surprise or shame here.  What is shameful, though, is that as evidence points more and more to the lukewarmer skeptic position, we are still called evil heretical deniers who should be locked in jail.  It is like telling Galileo, "you were right about that whole heliocentric thing, but we still think you are evil for suggesting it."

Climate Humor from the New York Times

Though this is hilarious, I am pretty sure Thomas Lovejoy is serious when he writes

But the complete candor and transparency of the [IPCC] panel’s findings should be recognized and applauded. This is science sticking with the facts. It does not mean that global warming is not a problem; indeed it is a really big problem.

This is a howler.  Two quick examples.  First, every past IPCC report summary has had estimates for climate sensitivity, i.e., the amount of temperature increase they expect for a doubling of CO2 levels.  Coming into this IPCC report, emerging evidence from recent studies has been that the climate sensitivity is much lower than previous estimates.  So what did the "transparent" IPCC do?  For the first time, they simply left out the estimate rather than be forced to publish one that was lower than the last report's.

The second example relates to the fact that temperatures have been flat over the last 15-17 years and as a result, every single climate model has overestimated temperatures.  By a lot. In a draft version, the IPCC created this chart (the red dots were added by Steve McIntyre after the chart was made as the new data came in).

figure-1-4-models-vs-observations-annotated (1)

 

This chart was consistent with a number of peer-reviewed studies that assessed the performance of climate models.  Well, this chart was a little too much "candor" for the transparent IPCC, so they replaced it with this chart in the final draft:

figure-1-4-final-models-vs-observations

 

What a mess!  They have made the area we want to look at between 1990 and the present really tiny, and then they have somehow shifted the forecast envelopes down on several of the past reports so that suddenly current measurements are within the bands.   They also hide the bottom of the fourth assessment band (orange FAR) so you can't see that observations are out of the envelope of the last report.  No one so far can figure out how they got the numbers in this chart, and it does not match any peer-reviewed work.  Steve McIntyre is trying to figure it out.

OK, so now that we are on the subject of climate models, here is the second hilarious thing Lovejoy said:

Does the leveling-off of temperatures mean that the climate models used to track them are seriously flawed? Not really. It is important to remember that models are used so that we can understand where the Earth system is headed.

Does this make any sense at all?  Try it in a different context:  The Fed said the fact that their economic models failed to predict what actually happened over the last 15 years is irrelevant because the models are only used to see where the economy is headed.

The consistent theme of this report is declining certainty and declining chances of catastrophe, two facts that the IPCC works as hard as possible to obfuscate but which still come out pretty clearly as one reads the report.

The Key Disconnect in the Climate Debate

Much of the climate debate turns on a single logical fallacy.  This fallacy is clearly on display in some comments by UK Prime Minister David Cameron:

It’s worth looking at what this report this week says – that [there is a] 95 per cent certainty that human activity is altering the climate. I think I said this almost 10 years ago: if someone came to you and said there is a 95 per cent chance that your house might burn down, even if you are in the 5 per cent that doesn’t agree with it, you still take out the insurance, just in case.”

"Human activity altering climate" is not the same thing as an environmental catastrophe (or one's house burning down).  The statement that he is 95% certain that human activity is altering climate is one that most skeptics (including myself) are 100% sure is true.  There is evidence that human activity has been altering the climate since the dawn of agriculture.  Man's changing land uses have been demonstrated to alter climate, and certainly man's incremental CO2 is raising temperatures somewhat.

The key question is -- by how much?  This is a totally different question, and, as I have written before, is largely dependent on climate theories unrelated to greenhouse gas theory, specifically that the Earth's climate system is dominated by large positive feedbacks.  (Roy Spencer has a good summary of the issue here.)

The catastrophe is so uncertain that for the first time, the IPCC left estimates of climate sensitivity to CO2 out of its recently released summary for policy makers, mainly because it was not ready to (or did not want to) deal with a number of recent studies yielding sensitivity numbers well below catastrophic levels.  Further, the IPCC nearly entirely punted on the key question of how it can reconcile its past high sensitivity/ high feedback based temperature forecasts with past relative modest measured warming rates, including a 15+ year pause in warming which none of its models predicted.

The overall tone of the new IPCC report is one of declining certainty -- they are less confident of their sensitivity numbers and less confident of their models which have all been a total failure over the last 15 years. They have also backed off of other statements, for example saying they are far less confident that warming is leading to severe weather.

Most skeptics are sure mankind is affecting climate somewhat, but believe that this effect will not be catastrophic.  On both fronts, the IPCC is slowly catching up to us.

Hearing What You Want to Hear from the Climate Report

After over 15 years of no warming, which the IPCC still cannot explain, and with climate sensitivity numbers dropping so much in recent studies that the IPCC left climate sensitivity estimates out of their summary report rather than address the drop, the Weather Channel is running this headline on their site:

weatherch

 

The IPCC does claim more confidence that warming over the past 60 years is partly or mostly due to man (I have not yet seen the exact wording they landed on), from 90% to 95%.  But this is odd given that the warming all came from 1978 to 1998 (see for yourself in temperature data about halfway through this post).  Temperatures are flat or cooling for the other 40 years of the period.  The IPCC cannot explain these 40 years of no warming in the context of high temperature sensitivities to CO2.  And, they can't explain why they can be 95% confident of what drove temperatures in the 20 year period of 1978-1998 but simultaneously have no clue what drove temperatures in the other years.

At some point I will read the thing and comment further.

 

Appeals to Authority

A reader sends me a story of a global warming activist who clearly doesn't know even the most basic facts about global warming.  Since this article is about avoiding appeals to authority, I hate to ask you to take my word for it, but it is simply impossible to immerse oneself in the science of global warming for any amount of time without being able to immediately rattle off the four major global temperature databases (or at least one of them!)

I don't typically find it very compelling to knock a particular point of view just because one of its defenders is a moron, unless that defender has been set up as a quasi-official representative of that point of view (e.g. Al Gore).  After all, there are plenty of folks on my side of issues, including those who are voicing opinions skeptical of catastrophic global warming, who are making screwed up arguments.

However, I have found over time that this is an absolutely typical situation in the global warming advocacy world.  Every single time I have publicly debated this issue, I have understood the opposing argument, i.e., the argument for catastrophic global warming, better than my opponent.   In fact, I finally had to write a first chapter to my usual presentation.  In this preamble, I outline the case and evidence for manmade global warming so the audience can understand it before I then set out to refute it.

The problem is that the global warming alarm movement has come to rely very heavily on appeals to authority and ad hominem attacks in making its case.  What headlines do you see? 97% of scientists agree, the IPCC is 95% sure, etc.  These "studies", which Lord Monckton (with whom I often disagree but who can be very clever) calls "no better than a show of hands", dominate the news.  When have you ever seen a story in the media about the core issue of global warming, which is diagnosing whether positive feedbacks truly multiply small bits of manmade warming to catastrophic levels?  The answer is never.

Global warming advocates thus have failed to learn how to really argue the science of their theory.  In their echo chambers, they have all agreed that saying "the science is settled" over and over and then responding to criticism by saying "skeptics are just like tobacco lawyers and holocaust deniers and are paid off by oil companies" represents a sufficient argument.**  Which means that in an actual debate, they can be surprisingly easy to rip to pieces.  Which may be why most, taking Al Gore's lead, refuse to debate.

All of this is particularly ironic since it is the global warming alarmists who try to wrap themselves in the mantle of the defenders of science.  Ironic because the scientific revolution began only when men and women were willing to reject appeals to authority and try to understand things for themselves.

 

** Another very typical tactic:  They will present whole presentations without a single citation.   But make one statement in your rebuttal as a skeptic that is not backed with a named, peer-reviewed study, and they will call you out on it.  I remember in one presentation, I was presenting some material that was based on my own analysis.  "But this is not peer-reviewed," said one participant, implying that it should therefore be ignored.  I retorted that it was basic math, that the data sources were all cited, and that they were my peers -- review it.  Use your brains.  Does it make sense?  Is there a flaw?  But they don't want to do that.  Increasingly, oddly, science is about having officially licensed scientists deliver findings to them on a platter.

IPCC: We Count on Lazy Reporters

We will see the final version of the IPCC's Fifth climate assessment soon.  But here is something interesting from the last draft circulated.  First, here is their chart comparing actual temperatures to model forecasts.  As you can see, all the actuals fall outside the published ranges from all previous reports (with a couple of the most recent data points added in red by Steve McIntyre).

click to enlarge

 

A problem, though not necessarily a fatal problem if the divergence can be explained.  And the IPCC is throwing out a lot of last minute explanations, though none of them are backed with any actual science.  I discussed one of these explanations here.  Anyway, you see their data above.  This is what they actually write in the text:

“the globally-averaged surface temperatures are well within the uncertainty range of all previous IPCC projections, and generally are in the middle of the scenario ranges.”

This is completely absurd, of course, given their own data, but it has lasted through several drafts, so we will see if it makes it into the final draft.  My guess is that they will leave this issue out entirely in the summary for policy makers (the only part the media reads).  Steve McIntyre discusses the whole history of this divergence issue, along with a series of studies highlighting this divergence that have been consistently kept out of publication by climate gatekeepers.

The frustrating part is that the IPCC is running around saying they can't have a complete answer on this critical issue because it is so new.  By "new" they mean a frequent skeptics' observation and criticism of climate models for over a decade that they have only recently been forced under duress to finally consider.

Update On My Climate Model (Spoiler: It's Doing a Lot Better than the Pros)

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • An additional linear trend beginning in 1945 of an additional 0.35C per century.  This represents combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century  (Note that this is way, way below the mainstream estimates in the IPCC of the historic contribution of CO2, as it implies the maximum historic contribution is less than 0.2C)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.
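The three trends above can be written down in a few lines. Here is a sketch: the two linear slopes come straight from the bullets, but the sine wave's amplitude and phase are not stated in this post, so the values below are placeholders rather than my actual fitted numbers:

```python
import math

def simple_climate_model(year, amplitude=0.2, peak_year=1990.0):
    """Sum of the three trends described above, returned as a temperature
    anomaly in degrees C (arbitrary baseline).  The 0.4 and 0.35 C/century
    slopes are from the text; `amplitude` and `peak_year` are illustrative
    placeholders, not the values used for the actual charts."""
    base = 0.4 * (year - 1900) / 100.0              # long-term post-Little-Ice-Age trend
    post45 = 0.35 * max(0.0, year - 1945) / 100.0   # CO2 + solar effects, starting in 1945
    # 63-year cycle standing in for PDO/AMO, centered on zero:
    cycle = amplitude * math.cos(2.0 * math.pi * (year - peak_year) / 63.0)
    return base + post45 + cycle
```

Because the cycle averages to zero over a full period, only the two linear terms contribute to the long-run trend; the cycle just steepens or flattens any given 20-30 year window.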

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of Hadley CRUT4 temperature anomalies).  The two linear trends (click on any image in this post to enlarge it)

click to enlarge

 

And the cyclic trend:

click to enlarge

These two charts are simply added and then can be compared to actual temperatures.  This is the way the comparison looked in 2007 when I first created this "model":

click to enlarge

The historic match is no great feat.  The model was admittedly tuned to match history (yes, unlike the pros who all tune their models, I admit it).  The linear trends as well as the sine wave period and amplitude were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus sine wave matches history so well, particularly since it assumes such a small contribution from CO2 (yet matches history well) and since in prior IPCC reports, the IPCC and most modelers simply refused to include cyclic functions like AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated Hadley CRUT4 data through July, 2013, this is how we are doing:

click to enlarge

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun's output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of their conclusions about past warming rates and CO2 from the period 1978-1998.  They may talk about "since 1950", but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20 year window.  During that 20-year window, though, solar activity, the PDO and the AMO were also all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, they would have to reduce the amount of warming scored to CO2 between 1978 and 1998 and thus their large future warming forecasts would have become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years or so.  This is difficult to explain for the IPCC, since about none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence level of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else they must explain why they still are correct but missed the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to... ocean cycles and the sun (those things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend actual temperatures hit a 30-year flat spot after the year 2000.   So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.

BUT.  And this is a big but.  You can also see from my model that you can't assume that these factors caused the current "pause" in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not increase warming in the 20 years before that.

We shall see.  To be continued....

Update:  Seriously, on a relative basis, I am kicking ass

click to enlarge

Climate Groundhog Day

I discuss in a bit more detail at my climate blog why I feel like climate blogging has become boring and repetitious.  To prove it, I predict in advance the stories that skeptics will run about the upcoming IPCC report.

I had a reader write to ask how I could be bored when there are still hilarious stories out there of climate alarmists trying to row through the Arctic and finding to their surprise that it is full of ice.  But even this story repeats itself.  There have been such stories almost every year for the past five years.

We Are 95% Confident in a Meaningless Statement

Apparently the IPCC is set to write:

Drafts seen by Reuters of the study by the U.N. panel of experts, due to be published next month, say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s.

That is up from at least 90 percent in the last report in 2007, 66 percent in 2001, and just over 50 in 1995, steadily squeezing out the arguments by a small minority of scientists that natural variations in the climate might be to blame.

I have three quick reactions to this

  • The IPCC has always adopted words like "main cause" or "substantial cause."  They have not even had enough certainty to use the words "majority cause" -- they want to keep it looser than that.  If man causes 30% and every other cause is at 10% or less, is man the main cause?  No one knows.  So that is how we get to the absurd situation where folks are trumpeting being 95% confident in a statement that is purposely vaguely worded -- so vague that the vast majority of people who sign it would likely disagree with one another on exactly what they have agreed to.
  • The entirety of the post-1950 temperature rise occurred between 1978 and 1998 (see below a chart based on the Hadley CRUT4 database, the same one used by the IPCC):

2013 Version 3 Climate talk

Note that temperatures fell from 1945 to about 1975, and have been flat from about 1998 to 2013.  This is not some hidden fact -- it was the very fact that the warming slope was so steep in the short period from 1978-1998 that contributed to the alarm.  The current 15 years with no warming were not predicted and remain unexplained (at least in the context of the assumption of high temperature sensitivities to CO2).  The IPCC is in a quandary here, because they can't just say that natural variation counteracted warming for 15 years, because this would imply a magnitude to natural variability that might explain the 20-year rise from 1978-1998 as easily as it might explain the warming hiatus over the last 15 years (or in the 30 years preceding 1978).

  • This lead statement by the IPCC continues to be one of the great bait and switches of all time.  Most leading skeptics (excluding those of the talk show host or politician variety) accept that CO2 is a greenhouse gas and is contributing to some warming of the Earth.  This statement by the IPCC says nothing about the real issue, which is the future sensitivity of the Earth's temperatures to rising CO2 -- is it high, driven by large positive feedbacks, or more modest, driven by zero to negative feedbacks?  Skeptics don't disagree that man has caused some warming, but believe that future warming forecasts are exaggerated and that the negative effects of warming (e.g. tornadoes, fires, hurricanes) are grossly exaggerated.

It's OK not to know something -- in fact, admitting what one does not know is an important part of scientific detachment.  But what the hell does being 95% confident in a vague statement mean?  Choose which of these is science:

  • Masses are attracted to each other in proportion to the product of their masses and inversely proportional to the square of their distance of separation.
  • We are 95% certain that gravity is the main cause of my papers remaining on my desk.

Best and the Brightest May Finally Be Open To Considering Lower Climate Sensitivity Numbers

Longtime readers of this site know that I have argued for years that:

  • CO2 is indeed a greenhouse gas, and since man is increasing its atmospheric concentration, there is likely some anthropogenic contribution to warming
  • Most forecasts, including those of the IPCC, grossly exaggerate temperature sensitivity to CO2 by assuming absurd levels of net positive feedback in the climate system
  • Past temperature changes are not consistent with high climate sensitivities

Recently, there has been a whole spate of studies, based on actual observations rather than computer models, arriving at climate sensitivity numbers far below the IPCC number.  While the IPCC settled on 3C per doubling of CO2, it strongly implied that all the risk was to the upside, and many other prominent folks who typically get fawning attention in the media have proposed much higher numbers.

In fact, recent studies are coming in closer to 1.5C - 2C.  I actually still think these numbers will turn out to be high.  For several years now my money has been on a number from 0.8 to 1 C, sensitivity numbers that imply a small amount of negative feedback rather than positive feedback, a safer choice in my mind since most long-term stable natural systems are dominated by negative feedback.
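The link between a sensitivity number and the sign of feedback is simple arithmetic via the standard gain relation.  A minimal sketch, assuming the commonly cited no-feedback (Planck) response of roughly 1.2C per doubling:

```python
# Implied feedback fraction f from an equilibrium sensitivity S, using the
# standard gain relation S = S0 / (1 - f), i.e. f = 1 - S0 / S.
# S0 ~ 1.2C per doubling is an assumption (the oft-cited no-feedback response).

S0 = 1.2  # no-feedback sensitivity, C per CO2 doubling (assumed)

def implied_feedback(S):
    """Feedback fraction implied by equilibrium sensitivity S (C per doubling)."""
    return 1 - S0 / S

print(round(implied_feedback(3.0), 2))   # IPCC's 3C   -> f = 0.6 (strong positive)
print(round(implied_feedback(0.9), 2))   # 0.8-1C      -> f = -0.33 (mildly negative)
```

A sensitivity below the no-feedback value necessarily implies net negative feedback, which is the point being made above about stable natural systems.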

Anyway, in a piece that was as surprising as it is welcome, NY Times climate writer Andy Revkin recently acknowledged in the paper of record that maybe those skeptics who have argued for a lower sensitivity number kind of sort of have a point.

“Worse than we thought” has been one of the most durable phrases lately among those pushing for urgent action to stem the buildup of greenhouse gases linked to global warming.

But on one critically important metric — how hot the planet will get from a doubling of the pre-industrial concentration of greenhouse gases, a.k.a. “climate sensitivity” — some climate researchers with substantial publication records are shifting toward the lower end of the warming spectrum.

By the way, this is the only metric that matters.  All the other BS about "climate change" and "dirty weather" is meaningless without warming.  CO2 cannot change the climate or raise sea levels or any of that other stuff by any mechanism we understand or that has even been postulated, except via warming.  Anyway, to continue:

There’s still plenty of global warming and centuries of coastal retreats in the pipeline, so this is hardly a “benign” situation, as some have cast it.

But while plenty of other climate scientists hold firm to the idea that the full range of possible outcomes, including a disruptively dangerous warming of more than 4.5 degrees C. (8 degrees F.), remain in play, it’s getting harder to see why the high-end projections are given much weight.

This is also not a “single-study syndrome” situation, where one outlier research paper is used to cast doubt on a bigger body of work — as Skeptical Science asserted over the weekend. That post focused on the as-yet-unpublished paper finding lower sensitivity that was inadvisedly promoted recently by the Research Council of Norway.

In fact, there is an accumulating body of reviewed, published research shaving away the high end of the range of possible warming estimates from doubled carbon dioxide levels. Chief among climate scientists critical of the high-sensitivity holdouts is James Annan, an experienced climate modeler based in Japan who contributed to the 2007 science report from the Intergovernmental Panel on Climate Change. By 2006, he was already diverging from his colleagues a bit.

The whole thing is good.  Of course, for Revkin, this is no excuse to slow down all the actions supposedly demanded by global warming, such as substantially raising the price and scarcity of hydrocarbons.  Which to me simply demonstrates that people who have been against hydrocarbons have always been against them as an almost aesthetic choice, and climate change and global warming were mere excuses to push the agenda.  After all, as there certainly are tradeoffs to limiting economic growth and energy use and raising the price of energy, how can a reduction in postulated harms from fossil fuels NOT change the balance point one chooses in managing their use?

PS-  I thought this was a great post mortem on Hurricane Sandy and the whole notion that this one data point proves the global warming trend:

In this case several factors not directly related to climate change converged to generate the event. On Sandy’s way north, it ran into a vast high-pressure system over Canada, which prevented it from continuing in that direction, as hurricanes normally do, and forced it to turn west. Then, because it traveled about 300 miles over open water before making landfall, it piled up an unusually large storm surge. An infrequent jet-stream reversal helped maintain and fuel the storm. As if all that weren’t bad enough, a full moon was occurring, so the moon, the earth, and the sun were in a straight line, increasing the moon’s and sun’s gravitational effects on the tides, thus lifting the high tide even higher. Add to this that the wind and water, though not quite at hurricane levels, struck an area rarely hit by storms of this magnitude so the structures were more vulnerable and a disaster occurred.

The last point is a key one for me -- you have cities on the Atlantic Ocean that seemed to build and act as if they were immune from ocean storms.  From my perspective growing up on the gulf coast, where one practically expects any structure built on the coast to be swept away every thirty years or so, this is a big contributing factor no one really talks about.

She goes on to say that rising sea levels may have made the storm worse, but I demonstrated that it couldn't have added more than a few percentage points to the surge.

Trusting Experts and Their Models

Russ Roberts over at Cafe Hayek quotes from a Cathy O’Neill review of Nate Silver's recent book:

Silver chooses to focus on individuals working in a tight competition and their motives and individual biases, which he understands and explains well. For him, modeling is a man versus wild type thing, working with your wits in a finite universe to win the chess game.

He spends very little time on the question of how people act inside larger systems, where a given modeler might be more interested in keeping their job or getting a big bonus than in making their model as accurate as possible.

In other words, Silver crafts an argument which ignores politics. This is Silver’s blind spot: in the real world politics often trump accuracy, and accurate mathematical models don’t matter as much as he hopes they would....

My conclusion: Nate Silver is a man who deeply believes in experts, even when the evidence is not good that they have aligned incentives with the public.

Distrust the experts

Call me “asinine,” but I have less faith in the experts than Nate Silver: I don’t want to trust the very people who got us into this mess, while benefitting from it, to also be in charge of cleaning it up. And, being part of the Occupy movement, I obviously think that this is the time for mass movements.

Like Ms. O'Neill, I distrust "authorities" as well, and have a real problem with debates that quickly fall into dueling appeals to authority.  She is focusing here on overt politics, but subtler pressure and signalling are important as well.  For example, since "believing" in climate alarmism in many circles is equated with a sort of positive morality (and being skeptical of such findings equated with being a bad person) there is an underlying peer pressure that is different from overt politics but just as damaging to scientific rigor.  Here is an example from the comments at Judith Curry's blog discussing research on climate sensitivity (which is the temperature response predicted if atmospheric levels of CO2 double).

While many estimates have been made, the consensus value often used is ~3°C. Like the porridge in “The Three Bears”, this value is just right – not so great as to lack credibility, and not so small as to seem benign.

Huybers (2010) showed that the treatment of clouds was the “principal source of uncertainty in models”. Indeed, his Table I shows that whereas the response of the climate system to clouds by various models varied from 0.04 to 0.37 (a wide spread), the variation of net feedback from clouds varied only from 0.49 to 0.73 (a much narrower relative range). He then examined several possible sources of compensation between climate sensitivity and radiative forcing. He concluded:

“Model conditioning need not be restricted to calibration of parameters against observations, but could also include more nebulous adjustment of parameters, for example, to fit expectations, maintain accepted conventions, or increase accord with other model results. These more nebulous adjustments are referred to as ‘tuning’.”  He suggested that one example of possible tuning is that “reported values of climate sensitivity are anchored near the 3±1.5°C range initially suggested by the ad hoc study group on carbon dioxide and climate (1979) and that these were not changed because of a lack of compelling reason to do so”.

Huybers (2010) went on to say:

“More recently reported values of climate sensitivity have not deviated substantially. The implication is that the reported values of climate sensitivity are, in a sense, tuned to maintain accepted convention.”

Translated into simple terms, the implication is that climate modelers have been heavily influenced by the early (1979) estimate that doubling of CO2 from pre-industrial levels would raise global temperatures 3±1.5°C. Modelers have chosen to compensate their widely varying estimates of climate sensitivity by adopting cloud feedback values countering the effect of climate sensitivity, thus keeping the final estimate of temperature rise due to doubling within limits preset in their minds.

There is a LOT of bad behavior out there by modelers.  I know that to be true because I used to be a modeler myself.  What laymen do not understand is that it is way too easy to tune and tweak and plug models to get a preconceived answer -- and the more complex the model, the easier this is to do in a non-transparent way.  Here is one example, related again to climate sensitivity:

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2  (a heroic assertion in and of itself) the temperature increases we have seen in the past imply a climate sensitivity closer to 1 rather than 3 or 5 or even 10  (I show this analysis in more depth in this video).
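The back-of-envelope version of that analysis uses the standard logarithmic CO2-temperature relation.  A sketch with illustrative round numbers of my own (0.8C of warming, CO2 from 280 to 400 ppm), not figures taken from the video:

```python
import math

# Implied sensitivity from observed warming, via the logarithmic relation
#   dT = S * ln(C / C0) / ln(2)
# solved for S. All of past warming is attributed to CO2, the "heroic
# assertion" conceded above, which makes this an upper bound.

dT_obs = 0.8           # C of warming since pre-industrial (assumed, generous)
C0, C = 280.0, 400.0   # ppm CO2, pre-industrial vs. roughly current

S_implied = dT_obs * math.log(2) / math.log(C / C0)
print(round(S_implied, 2))  # ~1.55 C per doubling -- much nearer 1 than 3
```

Even granting CO2 every bit of the observed warming, the implied sensitivity comes out well under the model values of 3 or more.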

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.
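Kiehl's point is easy to illustrate with a toy equilibrium model.  The forcing numbers below are my own illustrative assumptions, not values from his paper:

```python
# Toy version of the "plug variable": with aerosol forcing free to vary,
# ANY sensitivity can reproduce the same historic warming.
# Equilibrium relation assumed: dT = (S / F2X) * (F_ghg + F_aerosol).

F2X = 3.7       # W/m2 forcing per CO2 doubling (standard round number)
F_GHG = 2.8     # assumed historic greenhouse forcing, W/m2 (illustrative)
DT_OBS = 0.8    # observed warming to be matched, C (illustrative)

def required_aerosol_forcing(S):
    """Aerosol forcing (W/m2) a model of sensitivity S needs to match DT_OBS."""
    return F2X * DT_OBS / S - F_GHG

for S in (1.5, 3.0, 4.5):
    print(S, round(required_aerosol_forcing(S), 2))
# The higher the sensitivity, the more aerosol cooling must be plugged in --
# and every such pairing reproduces history equally well.
```

This is why matching past temperatures tells us nothing about which model's sensitivity is correct.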

By the way, this aerosol issue is central to recent work that is pointing to a much lower climate sensitivity to CO2 than has been reported in past IPCC reports.

Worst Chart Ever?

Kevin Drum posts this chart with a straight face as "proof" that sea level rise is out-pacing forecasts.

I don't really think I need to even point out the problem to most of my readers, but the difference in ending values exists only because the starting values are different.  Likely the two are drawn from different data sources with a shifted zero value.  The slopes are the same, confirmed by the fact that the 3.2 mm per year trend is well within the IPCC forecast range, which was centered, if I remember right, around 3.3 mm per year.  It is also well under Al Gore's forecast, which was for 20 feet by 2100, or about 61 mm per year.
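For anyone checking the arithmetic on that last figure, here is where the ~61 mm per year comes from, assuming the 20 feet is spread over roughly a century:

```python
# Converting Gore's forecast of 20 feet of sea level rise by 2100 into an
# annual rate, for comparison with the ~3.2 mm/yr observed trend.
MM_PER_FOOT = 304.8
years = 100  # assumed: roughly 2000 -> 2100

gore_rate = 20 * MM_PER_FOOT / years
print(round(gore_rate, 1))  # ~61.0 mm/yr, about 19x the observed rate
```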

I Was Reading Matt Ridley's Lecture at the Royal Society for the Arts....

... and it was fun to see my charts in it!  The lecture is reprinted here (pdf) or here (html).  The charts I did are around pages 6-7 of the pdf, the ones showing the projected curve of global warming for various climate sensitivities, and backing into what that should imply for current warming.  In short, even if you don't think warming in the surface temperature record is exaggerated, there still has not been anywhere near the amount of warming one would expect for the types of higher sensitivities in the IPCC and other climate models.  Warming to date, even if not exaggerated and all attributed to man-made and not natural causes, is consistent with far less catastrophic, and more incremental, future warming numbers.

These charts come right out of the IPCC formula for the relationship between CO2 concentrations and warming, a formula first proposed by Michael Mann.  I explained these charts in depth around the 10 minute mark of this video, and returned to them to make the point about past warming around the 62 minute mark.   This is a shorter video, just three minutes, that covers the same ground.  Watching it again, I am struck by how relevant it is as a critique five years later, and by how depressing it is that this critique still has not penetrated mainstream discussion of climate.  In fact, I am going to embed it below:

The older slides Ridley uses, which are cleaner (I went back and forth on the best way to portray this stuff) can be found here.

By the way, Ridley wrote an awesome piece for Wired more generally about catastrophism which is very much worth a read.

The Real Issue in Climate

I know I hammer this home constantly, but it is often worth a reminder.  The issue in the scientific debate over catastrophic man-made global warming theory is not whether CO2 is a greenhouse gas, or even the approximate magnitude of warming from CO2 directly, but around feedbacks.   Patrick Moore, Greenpeace founder, said it very well:

What most people don't realize, partly because the media never explains it, is that there is no dispute over whether CO2 is a greenhouse gas, and all else being equal would result in a warming of the climate. The fundamental dispute is about water in the atmosphere, either in the form of water vapour (a gas) or clouds (water in liquid form). It is generally accepted that a warmer climate will result in more water evaporating from the land and sea and therefore resulting in a higher level of water in the atmosphere, partly because the warmer the air is the more water it can hold. All of the models used by the IPCC assume that this increase in water vapour will result in a positive feedback in the order of 3-4 times the increase in temperature that would be caused by the increase in CO2 alone.

Many scientists do not agree with this, or do not agree that we know enough about the impact of increased water to predict the outcome. Some scientists believe increased water will have a negative feedback instead, due to increased cloud cover. It all depends on how much, and at what altitudes, latitudes and times of day that water is in the form of a gas (vapour) or a liquid (clouds). So if a certain increase in CO2 would theoretically cause a 1.0C increase in temperature, then if water caused a 3-4 times positive feedback the temperature would actually increase by 3-4C. This is why the warming predicted by the models is so large. Whereas if there was a negative feedback of 0.5 times then the temperature would only rise 0.5C.
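Moore's arithmetic, written out with his own round numbers, makes plain that the feedback multiplier, not the CO2 effect itself, is where the entire difference between modest and catastrophic forecasts lives:

```python
# Moore's example: the same 1.0C of direct CO2 warming under different
# assumed water-vapor/cloud feedback multipliers.
dT_co2 = 1.0  # direct warming from a CO2 doubling, C (Moore's round number)

print(dT_co2 * 3.0)   # 3x positive feedback   -> 3.0 C (model-style result)
print(dT_co2 * 4.0)   # 4x positive feedback   -> 4.0 C
print(dT_co2 * 0.5)   # 0.5x negative feedback -> 0.5 C (skeptic-style result)
```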

My slightly lengthier discussions of this same issue are here and here.

A Vivid Reminder of How The Climate Debate is Broken

My Forbes column is up this week.  I really did not want to write about climate, but when Forbes contributor Steve Zwick wrote this, I had to respond:

We know who the active denialists are – not the people who buy the lies, mind you, but the people who create the lies.  Let’s start keeping track of them now, and when the famines come, let’s make them pay.  Let’s let their houses burn.  Let’s swap their safe land for submerged islands.  Let’s force them to bear the cost of rising food prices.

They broke the climate.  Why should the rest of us have to pay for it?

The bizarre threats and ad hominem attacks have to stop.  Real debate is necessary based on an assumption that our opponents may be wrong, but are still people of good will.  And we need to debate what really freaking matters:

Instead of screwing around in the media trying to assign blame for the recent US heat wave to CO2 and threatening to burn down the houses of those who disagree with us, we should be arguing about what matters.  And the main scientific issue that really matters is understanding climate feedback.  I won't repeat all of the previous posts (see here and here), but this is worth repeating:

Direct warming from the greenhouse gas effect of CO2 does not create a catastrophe, and at most, according to the IPCC, might warm the Earth another degree over the next century.  The catastrophe comes from the assumption that there are large net positive feedbacks in the climate system that multiply a small initial warming from CO2 many times.  It is this assumption that positive feedbacks dominate over negative feedbacks that creates the catastrophe.  It is telling that when prominent supporters of the catastrophic theory argue the science is settled, they always want to talk about the greenhouse gas effect (which most of us skeptics accept), NOT the positive feedback assumption.  The assumption of net positive climate feedback is not at all settled -- in fact there is as much evidence the feedback is net negative as net positive -- which may be why catastrophic theory supporters seldom if ever mention this aspect of the science in the media.

I said I would offer a counter-proposal to Mr. Zwick's that skeptics bear the costs of climate change.  I am ready to step up to the cost of any future man-made climate change if Mr. Zwick is ready to write a check for the lost economic activity and increased poverty caused by his proposals.  We are at an exciting point in history where a billion people, or more, in Asia and Africa and Latin America are at the cusp of emerging from millennia of poverty.  To do so, they need to burn every fossil fuel they can get their hands on, not be forced to use rich people's toys like wind and solar.  I am happy to trade my home for an imaginary one that Zwick thinks will be under water.  Not only is this a great way to upgrade to some oceanfront property, but I am fully confident the crazy Al Gore sea level rise predictions are a chimera, since sea levels have been rising at a fairly constant rate since the end of the Little Ice Age.  In return, perhaps Mr. Zwick can trade his job for one in Asia that disappears when he closes the tap on fossil fuels?

I encourage you to read it all, including an appearance by the summer of the shark.

Climate Bait and Switch

Cross posted from Climate Skeptic

This quote from Michael Mann [of Hockey Stick fame] is a great example of two common rhetorical tactics of climate alarmists:

And so I think we have to get away from this idea that in matters of science, it's, you know, that we should treat discussions of climate change as if there are two equal sides, like we often do in the political discourse. In matters of science, there is an equal merit to those who are denying the reality of climate change who are a few marginal individuals largely affiliated with special interests versus the, you know, thousands of scientists around the world. U.S. National Academy of Sciences founded by Abraham Lincoln back in the 19th century, all the national academies of all of the major industrial nations around the world have all gone on record as stating clearly that humans are warming the planet and changing the climate through our continued burning of fossil fuels.

Here are the two tactics at play here:

  1. He is attempting to marginalize skeptics so that debating their criticisms is not necessary.  He argues that skeptics are not people of goodwill; or that they say what they say because they are paid by nefarious interests to do so; or that they are vastly outnumbered by real scientists ("real" being defined as those who agree with Dr. Mann).  This is an oddly self-defeating argument, though the media never calls folks like Mann on it.  If skeptics' arguments are indeed so threadbare, then one would imagine that throwing as much sunlight on them as possible would reveal their bankruptcy to everyone, but instead most alarmists are begging the media, as in this quote, to bury and hide skeptics' arguments.  I LOVE to debate people when I know I am right, and have pre-debate trepidation only when I know my position to be weak.
  2. There is an enormous bait and switch going on in the last sentence.  Note the proposition is stated as "humans are warming the planet and changing the climate through our continued burning of fossil fuels."  I, and many other skeptics, don't doubt the first part and would quibble with the second only because so much poor science occurs in attributing specific instances of climate change to human action.  What most skeptics disagree with is an entirely different proposition: that humans are warming the planet to catastrophic levels that justify immensely expensive and coercive government actions to correct.  Skeptics generally accept a degree or so of warming from each doubling of CO2 concentrations but reject the separate theory that the climate is dominated by positive feedback effects that multiply this warming 3x or more.  Mann would never be caught dead in public trying to debate this second theory of positive feedback, despite the fact that most of the warming in IPCC forecasts comes from this second theory, because it is FAR from settled.  Again, the media is either uninterested or intellectually unable to call him on this.
I explained the latter points in much more detail at Forbes.com.

Fritz Vahrenholt Climate Book

A lot of folks have asked me if I am going to comment on this

One of the fathers of Germany’s modern green movement, Professor Dr. Fritz Vahrenholt, a social democrat and green activist, decided to author a climate science skeptical book together with geologist/paleontologist Dr. Sebastian Lüning. Vahrenholt’s skepticism started when he was asked to review an IPCC report on renewable energy. He found hundreds of errors. When he pointed them out, IPCC officials simply brushed them aside. Stunned, he asked himself, “Is this the way they approached the climate assessment reports?”

I have not seen the book nor the Der Spiegel feature, but I can say that, contrary to the various memes running around, many science-based skeptics became such by exactly this process -- looking at the so-called settled science and realizing a lot of it was really garbage.  Not because we were paid off in oil money or mesmerized by Rush Limbaugh, but because the actual detail behind many of the IPCC conclusions is really a joke.

For tomorrow, I am working on an article I have been trying to write literally for years.  One of the confusing parts of the climate debate is that there really are portions of the science that are pretty solid.  When skeptics point to other parts of the science that are not well done, defenders tend to run back to the solid parts and point to those.  That is why Michael Mann frequently answers his critics by saying that skeptics are dumb because they don't accept greenhouse gas theory.  But most skeptics do indeed accept greenhouse gas theory; what they don't accept is the separate theory that the climate is dominated by positive feedbacks that amplify small warming from CO2 into a catastrophe.

This is an enormous source of confusion in the debate, facilitated by a scientifically illiterate press and by alarmists who explicitly attempt this bait and switch so they can avoid arguing the tough points.  Even the author linked above is confused on this:

Skeptic readers should not think that the book will fortify their existing skepticism of CO2 causing warming. The authors agree it does, but have major qualms about the assumed positive CO2-related feedbacks and believe the sun plays a far greater role in the whole scheme of things.

This is in fact exactly the position that most skeptics (at least the science-based, non-talk-show-host ones) hold.  Look for my Forbes piece tomorrow.

Katrina Flashback

It is December, 2005.  The Gulf Coast had just been pounded, in succession, by Katrina, Rita, and Wilma.  Everyone was talking about how global warming seemed to be intensifying hurricanes.  In a speech just after Katrina, Al Gore said:

 When the corpses of American citizens are floating in toxic floodwaters five days after a hurricane strikes, it is time not only to respond directly to the victims of the catastrophe but to hold the processes of our nation accountable, and the leaders of our nation accountable, for the failures that have taken place....

There are scientific warnings now of another onrushing catastrophe. We were warned of an imminent attack by Al Qaeda; we didn't respond. We were warned the levees would break in New Orleans; we didn't respond. Now, the scientific community is warning us that the average hurricane will continue to get stronger because of global warming. A scientist at MIT has published a study well before this tragedy showing that since the 1970s, hurricanes in both the Atlantic and the Pacific have increased in duration, and in intensity, by about 50 percent....

Two thousand scientists, in 100 countries, engaged in the most elaborate, well-organized scientific collaboration in the history of humankind, have produced long-since a consensus that we will face a string of terrible catastrophes unless we act to prepare ourselves and deal with the underlying causes of global warming....

At about the same time, the IPCC was in the process of preparing its fourth report, later released in 2007.  It said, in part:

Several peer-reviewed studies show a clear global trend toward increased intensity of the strongest hurricanes over the past two or three decades. The strongest trends are in the North Atlantic Ocean and the Indian Ocean. According to the 2007 Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR4), it is “more likely than not” (better than even odds) that there is a human contribution to the observed trend of hurricane intensification since the 1970s. In the future, “it is likely [better than 2 to 1 odds] that future tropical cyclones (typhoons and hurricanes) will become more intense, with larger peak wind speeds and more heavy precipitation associated with ongoing increases of tropical [sea surface temperatures].”

So what happened?  Since Wilma in 2005, we have gone six full years without a category 3+ hurricane making landfall in the US, the longest such span since 1900.  And the clock is still counting.  While alarmists of all stripes were breathlessly predicting hurricane after hurricane in late 2005, the reality was that we wouldn't see another in the US for over six years.

Of course, US landfall is in fact a terrible indicator of hurricane activity.  It's relevant to us, but it is a pretty random metric.  I said this when there were a lot of landfalls and I say it again now that there have been so few.

A better metric is accumulated cyclone energy (ACE), a sort of time integral of the strength of all large cyclonic storms worldwide.  Here are the most recent ACE figures:

As it turns out, the total strength of hurricanes and hurricane-like storms has been falling almost since the exact day of Al Gore's 2005 speech (another Gore effect!).  In fact, of late, it has hit numbers close to all-time lows.

Of course this chart will go back up some day, and then back down, and then up ... because hurricane activity has always been cyclical over decadal time scales.
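For readers unfamiliar with the metric: ACE is computed by summing the squares of each storm's six-hourly maximum sustained winds in knots, counting only readings at tropical-storm strength (35 kt) or above, and scaling by 10^-4.  A sketch with made-up wind data, not a real storm record:

```python
# Accumulated cyclone energy (ACE) for one storm from its 6-hourly
# maximum sustained wind readings (knots). Only tropical-storm-strength
# readings (>= 35 kt) count toward the index.

def ace(six_hourly_winds_kt):
    return 1e-4 * sum(v**2 for v in six_hourly_winds_kt if v >= 35)

storm = [30, 40, 55, 70, 90, 85, 60, 45, 30]  # kt at 6-hour intervals (made up)
print(round(ace(storm), 2))  # 3.05 for this hypothetical storm
```

Summing this quantity over every storm worldwide in a season is what produces the global ACE chart referenced above.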

The media loves to trumpet end-of-the-world predictions from folks like Al Gore and Paul Ehrlich, but they never go back five years later and back-check their predictions.  And despite their horrendous record for accuracy, the media eagerly publishes the next one.  Here is a proposed editorial rule for the MSM -- no breathless publication of anyone's next prediction without first revisiting the last one.

The Missing Heat

It is possible for the theory that the climate has a high sensitivity to CO2 (i.e., that a doubling of CO2 concentrations will lead to global temperature increases of 2.5C or higher) to be correct while still having ten years of flat to declining surface temperatures.  That is because Earth's great surface heat reservoir is the oceans, not the atmosphere, and so the extra heat from the greenhouse effect could be going into the oceans rather than into near-surface air.

However, it is NOT possible, at least as we (and by "we" I mean everyone, skeptics and alarmists alike) understand the climate, for CO2 to be holding a lot of extra heat without it showing up either in surface temperatures or ocean heat content.  The greenhouse effect does not turn off -- its effects may be masked in the chaotic weather systems, perhaps for years, but if the climate sensitivity to CO2 is really as high as the IPCC says, there has to be new heat going somewhere.

That is why a number of folks, including Roger Pielke, have argued for years that the best way to monitor whether we are truly seeing an additional forcing or heat input to the climate is to look at ocean heat content.  Understand, changes in ocean heat content would not tell us where the heat is coming from (e.g. anthropogenic CO2 vs. solar activity).  But it is pretty much impossible for us to imagine a new heat input to the Earth's surface, like greenhouse gas forcing from anthropogenic CO2, without observing its effect in ocean heat content.

I will turn the story over to Jo Nova, who has a good post on the new tools we have had to measure ocean heat content since 2003.  In short, though, we have seen no rise in measured ocean heat content since we started measuring with technology dedicated to the task.  This means, if those who believe the climate has a high sensitivity to CO2 are right, something like 50,000 quintillion joules of energy have gone missing since 2003.  This is the "missing heat," and though climate scientists sometimes discuss it in private, they almost never do so in public.  Ocean heat is the dinosaur fossil that the creationists simply don't want to acknowledge.
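To put that figure in perspective, here is a rough back-of-envelope of my own (the round numbers for Earth's surface area and the elapsed time are my assumptions, not from the post): spreading 50,000 quintillion (5e22) joules over the whole planet for roughly eight years implies a continuous flux of about 0.4 W/m².

```python
# Back-of-envelope: what continuous flux does 5e22 J of "missing heat"
# over ~8 years imply?  Assumed round figures, for illustration only.

MISSING_HEAT_J = 5e22              # 50,000 quintillion joules (from the post)
EARTH_AREA_M2 = 5.1e14             # approximate total surface area of Earth
SECONDS = 8 * 365.25 * 24 * 3600   # roughly eight years, 2003 onward

flux_w_per_m2 = MISSING_HEAT_J / (EARTH_AREA_M2 * SECONDS)
print(round(flux_w_per_m2, 2))     # → 0.39
```

That magnitude is comparable to the whole planetary energy imbalance that high-sensitivity forecasts require, which is why its failure to appear in the ocean measurements is such an awkward fact.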

Read the whole thing.  It is very simple and well-written.

PS- Note that in the chart above the y-axis is mislabeled a bit: it shows not absolute heat content but the change in heat content from some base period.  Scientists call this the "anomaly."  This is typical of many climate charts.
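For readers unfamiliar with the convention, an anomaly series is just the raw readings with the average over a chosen base period subtracted out.  A tiny sketch (the numbers are invented):

```python
# Convert a raw series into anomalies: subtract the mean over a chosen
# base period from every value.  Numbers are invented for illustration.

def anomalies(values, base_start, base_end):
    """Anomalies relative to the mean of values[base_start:base_end]."""
    base = values[base_start:base_end]
    baseline = sum(base) / len(base)
    return [round(v - baseline, 3) for v in values]

raw = [10.0, 10.2, 10.1, 10.4, 10.6]     # e.g. yearly heat-content readings
print(anomalies(raw, 0, 3))              # baseline = mean of first three
# → [-0.1, 0.1, 0.0, 0.3, 0.5]
```

Note that the shape of the curve is unchanged -- only the zero point moves -- so trends read the same either way, but the absolute level cannot be recovered from the chart alone.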

Krugman Unintended Irony: Anyone Who Does Not Unquestioningly Believe Authorities is Anti-Science

here.

It's a wonder how, back when over "97 percent to 98 percent" of scientific authorities accepted the Ptolemaic view of the solar system, we ever got past that.  Though I can certainly understand why, in the current economy, a die-hard Keynesian might be urging an appeal to authority rather than thinking for oneself.

When, by the way, did the children of the sixties not only lose, but reverse their anti-authoritarian streak?

Postscript:  I have always really hated the nose-counting approach to measuring the accuracy of a scientific hypothesis.  If we want to label something as anti-science, how about using straw polls of scientists as a substitute for fact-based arguments?

Yes indeed, the share of people in the newly made-up profession of "climate science" -- who are allowed by the UN to control the content of the IPCC reports, and whose funding is dependent on global warming being scary -- who are wholeheartedly all-in for catastrophic man-made global warming theory is probably very high.  The share among people in traditional scientific fields like physics, geology, chemistry, oceanography, and meteorology who nevertheless study climate-related topics would be very different.

Decide for yourself -- see my video on global warming.  Am I anti-science?