Posts tagged ‘temperature’

Scott Sumner Explains a Lot of Climate Alarmism, Without Discussing Climate

Scott Sumner is actually discussing discrimination, and how discrimination is often "proven" in social studies:

The economy operates in very subtle ways, and often when I read academic studies of issues like discrimination, the techniques seem incredibly naive to me. They might put in all the attributes of male and female labor productivity they can think of, and then simply assume that any unexplained residual must be due to "discrimination." And they do this in cases where there is no obvious reason to assume discrimination. It would be like a scientist assuming that magicians created a white rabbit out of thin air, at the snap of their fingers, because they can't think of any other explanation of how it got into the black hat!

Most alarming climate forecasts are based on the period from 1978 to 1998.  During this 20-year period world temperatures rose about a half degree C.  People may say they are talking about temperature increases since 1950, but most if not all of those increases occurred from 1978-1998.  Temperatures were mostly flat or down before and since.

A key, if not the key, argument for CO2-driven catastrophic warming that is based on actual historic data (rather than on theory or models) is that temperatures rose in this 20-year period farther and faster than would be possible by any natural causes, and thus must have been driven by man-made CO2.  Essentially what scientists said was, "we have considered every possible natural cause of warming that we can think of, and these are not enough to cause this warming, so the warming must be unnatural."  I was struck by just how similar this process is to what Mr. Sumner describes.  Most skeptics, by the way, agree that some of this warming may have been driven by manmade CO2, but at the same time argue that there were many potential natural effects (e.g. ocean cycles) that were not considered in this original analysis.

Reconciling Seemingly Contradictory Climate Claims

At Real Science, Steven Goddard claims this is the coolest summer on record in the US.

The NOAA reports that both May and June were the hottest on record.

It used to be that the media would reconcile such claims, and one might learn something interesting from that reconciliation, but now all we have are mostly-crappy fact checks with Pinocchio counts.  Both these claims have truth on their side, though the NOAA report is more comprehensively correct.  Still, we can learn something by putting these analyses in context and by reconciling them.

The NOAA temperature data for the globe does indeed show May and June as the hottest on record.  However, one should note a few things:

  • The two monthly records do not change the trend over the last 10-15 years, which has basically been flat.  We are hitting records because we are sitting on a plateau that is higher than the rest of the last century (at least in the NOAA data).  It only takes small positive excursions to reach all-time highs.
  • There are a number of different temperature databases that measure the temperature in different ways (e.g. satellite vs. ground stations) and then adjust those raw readings using different methodologies.  While the NOAA database is showing all-time highs, other databases, such as satellite-based ones, are not.
  • The NOAA database has been criticized for manual adjustments to temperatures in the past which increase the warming trend.  Without these adjustments, temperatures during certain parts of the 1930's (think: Dust Bowl) would be higher than today.  This was discussed here in more depth.  As is usual when looking at such things, some of these adjustments are absolutely appropriate and some can be questioned.  However, blaming the whole of the warming signal on such adjustments is just wrong -- satellite databases, which have no similar adjustment issues, have shown warming, at least between 1979 and 1999.

The Time article linked above illustrated the story of these record months with a video partially on wildfires.  This is a great example of how temperatures are indeed rising but media stories about knock-on effects, such as hurricanes and fires, can be full of it.  2014 has actually been a low fire year so far in the US.

So the world is undeniably on the warm side of average (I won't say "warmer than normal" because what is "normal"?).  So how does Goddard get this as the coolest summer on record for the US?

Well, the first answer, and it is an important one to remember, is that US temperatures do not have to follow global temperatures, at least not tightly.  While the world warmed 0.5-0.7 degrees C from 1979-1999, the US temperatures moved much less.  Other times, the US has warmed or cooled more than the world has.  The US is well under 5% of the world's surface area.  It is certainly possible to have isolated effects in such an area.  Remember the same holds true the other way -- heat waves in one part of the world don't necessarily mean the world is warming.

But we can also learn something that is seldom discussed in the media by looking at Goddard's chart:

click to enlarge

First, I will say that I am skeptical of any chart that uses "all USHCN" stations, because the number of stations and their locations change so much.  At some level this is an apples-to-oranges comparison -- I would be much more comfortable with a chart that looks at only USHCN stations with, say, at least 80 years of continuous data.  In other words, this chart may be an artifact of the mess that is the USHCN database.

However, it is possible that this is correct even with a better data set and against a backdrop of warming temperatures.  Why?  Because this is a metric of high temperatures.  It looks at the number of times a data station reads a high temperature over 90F.  At some level this is a clever chart, because it takes advantage of a misconception most people, including most people in the media, have -- that global warming plays out in higher daytime high temperatures.

But in fact this does not appear to be the case.  Most of the warming we have seen over the last 50 years has manifested itself as higher nighttime lows and higher winter temperatures.  Both of these raise the average, but neither will change Goddard's metric of days above 90F.  So it is perfectly possible Goddard's chart is right even if the US is seeing a warming trend over the same period.  This is also why we have not seen any more local all-time daily high temperature records set recently than in past decades.  But we have seen a lot of new records for high low temperature, if that term makes sense.  Also, this explains why the ratio of daily high records to daily low records has risen -- not necessarily because there are a lot of new high records, but because we are setting fewer low records.  We can argue about daytime temperatures, but nighttime temperatures are certainly warmer.
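A toy illustration of the point, with made-up numbers: give a synthetic station warming nighttime lows but flat daytime highs, and the average temperature trends up while a count of 90F days goes nowhere:

```python
import numpy as np

# 60 years of a synthetic station: flat daily highs, lows warming 2F total
rng = np.random.default_rng(4)
years = np.arange(60)
highs = rng.normal(85, 8, size=(60, 365))                     # no trend
lows = rng.normal(60, 8, size=(60, 365)) + (years / 30)[:, None]

avg_trend = np.polyfit(years, ((highs + lows) / 2).mean(axis=1), 1)[0] * 10
over90_early = (highs[:10] > 90).mean()
over90_late = (highs[-10:] > 90).mean()

print(f"average temp trend: {avg_trend:.2f} F/decade")
print(f"share of days over 90F: {over90_early:.3f} early vs {over90_late:.3f} late")
```

The average warms by a measurable amount per decade, yet the share of 90F days in the last ten years is statistically indistinguishable from the first ten, which is exactly how Goddard's metric can stay flat against a real warming trend.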

This chart shows an example with low and high temperatures over time at Amherst, MA  (chosen at random because I was speaking there).  Note that recently, most warming has been at night, rather than in daily highs.

On The Steven Goddard Claim of "Fabricated" Temperature Data

Steven Goddard of the Real Science blog has a study that claims that US real temperature data is being replaced by fabricated data.  Christopher Booker has a sympathetic overview of the claims.

I believe that there is both wheat and chaff in this claim, and I would like to try to separate the two as best I can.  I don't have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal to noise ratio issue here that mainstream climate scientists have always seemed insufficiently concerned about.  Specifically, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal.  When the entire signal one is trying to measure is equal to the manual adjustments one is making to measurements, it probably makes sense to put a LOT of scrutiny on the adjustments.  (This is a post from 7 years ago discussing these adjustments.  Note that these adjustments are less than current ones in the data base as they have been increased, though I cannot find a similar chart any more from the NOAA discussing the adjustments)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend.  These changes in adjustments have not been well-explained.  In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below.  Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000).  History has been cooled and modern temperatures have been warmed from where they were being shown previously by the NOAA.  This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when those folks preparing the chart all believe that temperatures are going up, so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data -- for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (ie if you average 24 hours starting and stopping at noon).  This is called the Time of Observation bias, or TOBS.  When I first encountered this, I was just sure it had to be BS.  For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month.  Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue.  I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments).  I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data.  Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA's own specification.
    • Stations move over time.  A simple example is if it is on the roof of a building and that building is demolished, it has to move somewhere else.  In an extreme example the station might move to a new altitude or a slightly different micro-climate.  There are adjustments in the data base for these sort of changes.  Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development (here is one example -- this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson).  Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole.  The effect is undeniable -- my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project where every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases and further actually measure most of the Earth's surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990's.
  6. As is usual in such debates, I find words like "fabrication", "lies",  and "myth" to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.
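The time-of-observation effect in point 4 is the least intuitive of these, so here is roughly the Monte Carlo experiment described above, as a sketch (my own toy setup with a sinusoidal daily cycle; NOAA's actual procedure is more involved):

```python
import numpy as np

# Hourly temps: random day-to-day weather plus a cycle peaking at 3 pm
rng = np.random.default_rng(0)
n_days = 10000
hours = np.arange(n_days * 24)
daily_mean = np.repeat(rng.normal(15, 4, n_days), 24)       # deg C
temps = daily_mean + 8 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)

def mean_of_daily_max(reset_hour):
    """Average daily maximum when the max/min thermometer is read
    and reset at `reset_hour` each day."""
    t = temps[reset_hour:reset_hour + (n_days - 1) * 24]
    return t.reshape(-1, 24).max(axis=1).mean()

midnight = mean_of_daily_max(0)
afternoon = mean_of_daily_max(17)   # a 5 pm observer
print(f"midnight reset: {midnight:.2f} C, 5 pm reset: {afternoon:.2f} C")
```

The afternoon series runs warm because one hot afternoon can set the recorded maximum for two consecutive observation "days."  A station that later moves its observation time toward morning or midnight therefore shows a spurious cooling step, which is what the TOBS adjustment tries to back out; whether NOAA has the magnitude right is the open question.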

Postscript:  Not exactly on topic, but one thing that is never, ever mentioned in the press but is generally true about temperature trends -- almost all of the warming we have seen is in nighttime temperatures, rather than daytime.  Here is an example from Amherst, MA (because I just presented up there).  This is one reason why, despite claims in the media, we are not hitting any more all-time daytime highs than we would expect from a normal distribution.  If you look at temperature stations for which we have 80+ years of data, fewer than 10% of the 100-year highs were set in the last 10 years.  We are setting an unusual number of records for high low temperature, if that makes sense.

click to enlarge

 

Great Moments in "Science"

You know that relative of yours, who last Thanksgiving called you anti-science because you had not fully bought into global warming alarm?

Well, it appears that the reason we keep getting called "anti-science" is because climate scientists have a really funny idea of what exactly "science" is.

Apparently, a number of folks have been trying for years to get articles published in peer reviewed journals comparing the IPCC temperature models to actual measurements, and in the process highlighting the divergence of the two.  And they keep getting rejected.

Now, the publisher of Environmental Research Letters has explained why.  Apparently, in climate science it is "an error" to attempt to compare computer temperature forecasts with the temperatures that actually occurred.  In fact, he says that trying to do so "is harmful as it opens the door for oversimplified claims of 'errors' and worse from the climate sceptics media side".  Apparently, the purpose of scientific inquiry is to win media wars, and not necessarily to discover truth.

Here is something everyone in climate should remember:  The output of models merely represents a hypothesis.  When we have complicated hypotheses in complicated systems, and where such hypotheses may encompass many interrelated assumptions, computer models are an important tool for playing out, computationally, what results those hypotheses might translate to in the physical world.  It is no different than if Newton had had a computer and took his equation GMm/R^2 and used the computer to project future orbits for the Earth and other planets (which he and others did, but by hand).   But these projections would have no value until they were checked against actual observations.  That is how we knew we liked Newton's models better than Ptolemy's -- because they checked out better against actual measurements.
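To make the point concrete, here is roughly what "Newton with a computer" looks like: a few lines numerically integrating GMm/R^2 and checking the resulting orbital period against the observed year (my own toy integration, obviously not anything Newton ran):

```python
import math

# Leapfrog integration of an Earth-like orbit under GMm/R^2 gravity,
# checking the model's predicted period against the observed year.
GM = 1.32712440018e20         # sun's gravitational parameter, m^3/s^2
r = [1.495978707e11, 0.0]     # start 1 AU from the sun, on the x axis
v = [0.0, 29784.0]            # roughly Earth's orbital speed, m/s
dt = 3600.0                   # one-hour time steps

def accel(r):
    d = math.hypot(r[0], r[1])
    return [-GM * r[0] / d ** 3, -GM * r[1] / d ** 3]

t, period = 0.0, None
a = accel(r)
while period is None:
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]    # half kick
    r = [r[i] + dt * v[i] for i in range(2)]          # drift
    a = accel(r)
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]    # half kick
    t += dt
    # one full orbit: back across the +x axis from below, late in the year
    if t > 3.0e7 and r[1] >= 0 and r[1] - dt * v[1] < 0:
        period = t

print(f"model period: {period / 86400:.1f} days (observed: ~365.25)")
```

The model spits out roughly 365 days, and that agreement with observation -- not the elegance of the equations -- is what earned Newton's model its keep.  The same comparison is exactly what the journal is refusing to publish for climate models.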

But climate scientists are trying to create some kind of weird world where model results have some sort of independent reality, and where in fact the model results should be trusted over measurements when the two diverge.  If that is science -- and it is not -- then I guess I would have to be anti-science.

California Food Sales Tax Rules Are Madness

We have invested a fair amount of time trying to get the sales tax treatment of food items in our California stores correct.  But the rules are insane.   Beyond all the crazy rules (e.g. if a customer buys a refrigerated burrito it may be non-taxable, but if he puts it in the microwave in the store to heat it up it becomes taxable for sure) is the fact that sometimes customer intent matters (e.g. will they consume it at one of the picnic tables on site, or take it back to their home or camp site).

When searching for more resources on the topic, I found this flow chart on deciding if CA sales tax applies to food:

click to enlarge

Here is more, from the same article

Under California law, foods eaten on the premises of an eatery are taxed while the same items taken to go are not: "Sales of food for human consumption are generally exempt from tax unless sold in a heated condition (except hot bakery items or hot beverages, such as coffee, sold for a separate price), served as meals, consumed at or on the seller's facilities, ordinarily sold for consumption on or near the seller's parking facility, or sold for consumption where there is an admission charge." Exactly which types of foods do and do not fall under the scope of this provision is the frustrating devil in the detail.

Eskenazi notes a few of the ridiculous results of drawing an artificial distinction between hot and cold foods. "A hot sandwich to go would be taxable," for example, "While a prepackaged, cold one would not -- but a cold sandwich becomes taxable if it has hot gravy poured onto it. Cold foods to go are generally not taxable -- but hot foods that have cooled are taxable (meaning a cold sandwich slathered in "hot" gravy that has cooled to room temperature is taxable)."

 

Climate Alarmists Coming Around to At Least One Skeptic Position

As early as 2009 (and many other more prominent skeptics were discussing it much earlier) I reported on why ocean heat content was potentially a much better measure of greenhouse gas effects on the Earth than surface air temperatures.  Roger Pielke, in particular, has been arguing this for as long as I can remember.

The simplest explanation for why this is true is that greenhouse gases increase the energy retained at the surface of the Earth, and that extra energy is what we would really like to measure.  But in fact the vast, vast majority of the heat retention capacity of the Earth's surface is in the oceans, not in the air.  Air temperatures may be more immediately sensitive to changes in heat flux, but they are also sensitive to a lot of other noise that tends to mask long-term signals.    The best analog I can think of is to imagine that you have two assets, a checking account and your investment portfolio.  Looking at surface air temperatures to measure long-term changes in surface heat content is a bit like trying to infer long-term changes in your net worth by looking only at your volatile checking account balance rather than at the changing size of your investment portfolio.

Apparently, the alarmists are coming around to this point:

Has global warming come to a halt? For the last decade or so the average global surface temperature has been stabilising at around 0.5°C above the long-term average. Can we all relax and assume global warming isn't going to be so bad after all?

Unfortunately not. Instead we appear to be measuring the wrong thing. Doug McNeall and Matthew Palmer, both from the Met Office Hadley Centre in Exeter, have analysed climate simulations and shown that both ocean heat content and net radiation (at the top of the atmosphere) continue to rise, while surface temperature goes in fits and starts. "In my view net radiation is the most fundamental measure of global warming since it directly represents the accumulation of excess solar energy in the Earth system," says Palmer, whose findings are published in the journal Environmental Research Letters.

First, of course, we welcome past ocean heat content deniers to the club.  But second, those betting on ocean heat content to save their bacon and keep alarmism alive should consider why skeptics latched onto the metric with such passion.   In fact, ocean heat content may be rising more than surface air temperatures, but it has been rising MUCH less than would be predicted from high-sensitivity climate models.

Just When You Thought You Would Never See Any Of That Stuff From Science Fiction Novels...

Via the New Scientist

NEITHER dead nor alive, knife-wound or gunshot victims will be cooled down and placed in suspended animation later this month, as a groundbreaking emergency technique is tested out for the first time....

The technique involves replacing all of a patient's blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. "If a patient comes to us two hours after dying you can't bring them back to life. But if they're dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed," says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique.

The benefits of cooling, or induced hypothermia, have been known for decades. At normal body temperature – around 37 °C – cells need a regular oxygen supply to produce energy. When the heart stops beating, blood no longer carries oxygen to cells. Without oxygen the brain can only survive for about 5 minutes before the damage is irreversible.

However, at lower temperatures, cells need less oxygen because all chemical reactions slow down. This explains why people who fall into icy lakes can sometimes be revived more than half an hour after they have stopped breathing.

via Alex Tabarrok

A Bad Chart From My Allies

I try to make it a habit to criticize bad analyses from "my side" of certain debates.  This practice keeps one from falling for poorly constructed but ideologically tempting arguments.

Here is my example this week, from climate skeptic Steven Goddard.  I generally enjoy his work, and have quoted him before, but this is a bad chart (this is global temperatures as measured by satellite and aggregated by RSS).

click to enlarge

 

 

He is trying to show that the last 17+ years have no temperature trend.  Fine.  But by trying to put a trend line on the earlier period as well, he creates a mess that understates warming in the earlier years.    He ends up with 17 years with a zero trend and 20 years with a 0.05C per decade trend.  Add these up and one would expect 0.1C total warming.   But in fact over this entire period there was, by this data set, 0.3C-0.4C of warming.  He left most of the warming out in the step between the two lines.
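To see the arithmetic, here is a toy series with made-up numbers in the same shape as the chart (not the actual RSS data):

```python
# Toy series: 20 years warming at 0.05C/decade, an abrupt 0.25C step,
# then 17 flat years. Summing the two segment trends misses the step.
temps = {}
for year in range(1979, 2016):
    if year < 1999:
        temps[year] = 0.005 * (year - 1979)   # 0.05C per decade
    else:
        temps[year] = 0.005 * 20 + 0.25       # step up, then flat

trend_sum = 0.05 * 2.0 + 0.0 * 1.7    # what the two trend lines add to
actual = temps[2015] - temps[1979]    # what the series actually did
print(f"trend lines: {trend_sum:.2f}C, actual change: {actual:.2f}C")
```

The two trend lines add to 0.1C while the series actually moved 0.35C; the missing 0.25C is hiding in the step between the lines.  Unless there is a physical reason for a discontinuity in 1998, the step belongs in the trend.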

Now there are times this might be appropriate.  For example, in the measurement of ocean heat content, there is a step change that occurs right at the point where the measurement approach changed from ship records to the ARGO floats.  One might argue that it is wrong to make a trend through the transition point because the step change was an artifact of the measurement change.  But in this case there was no such measurement change.  And while there was a crazy El Nino year in 1998, I have heard no argument from any quarter as to why there might have been some fundamental change in the climate system around 1997.

So I call foul.  Take the trend line off the blue portion and the graph is much better.

Global Warming Updates

I have not been blogging climate much because none of the debates ever change.  So here are some quick updates:

  • 67% to 90% of all warming in climate forecasts still comes from assumptions of strong positive feedback in the climate system, rather than from CO2 warming per se (ie models still assume that about 1 degree of CO2 warming is multiplied 3-10 times by positive feedbacks)
  • Studies are still mixed about the direction of feedbacks, with as many showing negative as positive feedback.  No study that I have seen supports positive feedbacks as large as those used in many climate models
  • As a result, climate models are systematically exaggerating warming (from Roy Spencer, click to enlarge).  Note that the conformance through 1998 is nothing to get excited about -- most models were rewritten after that date and likely had plugs and adjustments to force the historical match.

click to enlarge

 

  • To defend the forecasts, modellers are increasingly blaming the miss on natural effects like solar cycles -- natural effects that the same modellers insisted were inherently trivial contributions when skeptics used them to explain part of the temperature rise from 1978-1998.
  • By the way, 1978-1998 is still the only period since 1940 when temperatures actually rose, such that increasingly all catastrophic forecasts rely on extrapolations from this one 20-year period. Seriously, look for yourself.
  • Alarmists are still blaming every two or three sigma weather pattern on CO2-driven global warming (polar vortex, sigh).
  • Even when weather is moderate, media hyping of weather events has everyone convinced weather is more extreme, when it is not. (effect explained in context of Summer of the Shark)
  • My temperature forecast from 2007 is still doing well.   Back in '07 I regressed temperature history to a linear trend plus a sine wave.

click to enlarge
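For anyone who wants to reproduce that kind of fit: once you fix the cycle period, the model a + b*t + c*sin(wt) + d*cos(wt) is linear in its coefficients and falls out of ordinary least squares.  A sketch on synthetic data (the series below is made up for illustration; the original fit used the actual temperature history):

```python
import numpy as np

# Synthetic anomaly: 0.5C/century trend plus a ~62-year cycle and noise
rng = np.random.default_rng(1)
t = np.arange(1900, 2008, dtype=float)
y = (0.005 * (t - 1900)
     + 0.15 * np.sin(2 * np.pi * (t - 1900) / 62)
     + rng.normal(0, 0.08, t.size))

period = 62.0   # assumed up front, roughly the ocean-cycle length
w = 2 * np.pi * (t - 1900) / period
X = np.column_stack([np.ones_like(t), t - 1900, np.sin(w), np.cos(w)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"recovered trend: {coef[1] * 100:.2f} C/century, "
      f"cycle amplitude: {np.hypot(coef[2], coef[3]):.2f} C")
```

The regression recovers both the underlying linear trend and the cycle amplitude from noisy data, which is all the 2007 forecast was: extrapolate the trend and let the sine wave carry the ocean cycles.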

Temperature Trends and the Farmer Error

My dad grew up in farm country in Iowa.  He told me stories of the early days of commodity futures when a number of farmers lost a lot of money betting the wrong way.  The error they made is that they would look at their local weather and assume everyone was experiencing the same.  For example, some guy in Iowa would be experiencing a drought and facing a poor corn crop, and would buy corn futures assuming the crop would be bad everywhere.  Unfortunately, this was often not the case.

A few climate sites have monthly contests to predict the next month's average global temperature anomaly.   Apparently, everyone really missed in the January betting.  Since most of the participants were American, they assumed that really cold weather in the US would translate to falling global temperatures.  They were wrong.  The global temperature anomaly in January actually rose a bit.

This is a variation of the same effect I often point out in the opposite direction -- that heat waves in even seemingly large areas do not necessarily mean anything for global temperatures.  The US is only about 2% of the global surface area (land and ocean) and since the cold spell was in the eastern half of the US, it therefore affected perhaps 1% of the globe.  And remember, on average, some area representing 1% of the globe should constantly be seeing a 100-year high or low for that particular day.  It's just how averages work.
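That last point is just order statistics, and easy to check by simulation with made-up independent regions (counting both tails, the expected figure is about 2%; either tail alone is 1%):

```python
import numpy as np

# 100 years of one calendar day's temperature in many independent
# regions: what fraction sees a 100-year record (high or low) this year?
rng = np.random.default_rng(2)
temps = rng.normal(size=(20000, 100))          # regions x years
is_high = temps[:, -1] >= temps.max(axis=1)    # this year is the max
is_low = temps[:, -1] <= temps.min(axis=1)     # this year is the min
frac = (is_high | is_low).mean()
print(f"{frac:.1%} of regions at a 100-year extreme this year")
```

With 100 years of history, any given year has a 1-in-100 chance of being the record high for that day and another 1-in-100 chance of being the record low, so roughly 2% of independent regions sit at some 100-year extreme at any moment without anything unusual going on.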

No particular point here, except to emphasize just how facile it is to try to draw conclusions about global temperature trends from regional weather events.

Congratulations to Nature Magazine for Catching up to Bloggers

The journal Nature has finally caught up to the fact that ocean cycles may influence global surface temperature trends.  Climate alarmists refused to acknowledge this when temperatures were rising and the cycles were in their warm phase, but now they are grasping at these cycles for an explanation of the 15+ year hiatus in warming, as a way to avoid abandoning high climate sensitivity assumptions (ie the sensitivity of global temperatures to CO2 concentrations, which IMO is exaggerated by implausible assumptions of positive feedback).

Here is the chart from Nature:

click to enlarge

 

I cannot find my first use of this chart, but here is a version I was using over 5 years ago.  I know I was using it long before that.

click to enlarge

 

It will be interesting to see if they find a way to blame cycles for cooling in the last 10-15 years but not for the warming in the 80's and 90's.

Next step -- alarmists have the same epiphany about the sun, and blame non-warming on a low solar cycle without simultaneously giving previous high solar cycles any credit for warming.  For Nature's benefit, here is another chart they might use (from the same 2008 blog post).  The number 50 below is selected arbitrarily, but does a good job of highlighting solar activity in the second half of the 20th century vs. the first half.

click to enlarge

 

Explaining the Flaw in Kevin Drum's (and Apparently Science Magazine's) Climate Chart

I won't repeat the analysis, you need to see it here.  Here is the chart in question:

la-sci-climate-warming

My argument is that the smoothing and relatively low sampling intervals in the early data very likely mask variations similar to what we are seeing in the last 100 years -- ie they greatly exaggerate the smoothness of history and create a false impression that recent temperature changes are unprecedented (also the grey range bands are self-evidently garbage, but that is another story).

Drum's response was that "it was published in Science."  Apparently, this sort of appeal to authority is what passes for data analysis in the climate world.

Well, maybe I did not explain the issue well.  So I found a political analysis that may help Kevin Drum see the problem.  This is from an actual blog post by Dave Manuel (this seems to be such a common data analysis fallacy that I found an example on the first page of my first Google search).  It is an analysis of average GDP growth by President.  I don't know this Dave Manuel guy and can't comment on the data quality, but let's assume the data is correct for a moment.  Quoting from his post:

Here are the individual performances of each president since 1948:

1948-1952 (Harry S. Truman, Democrat), +4.82%

1953-1960 (Dwight D. Eisenhower, Republican), +3%

1961-1964 (John F. Kennedy / Lyndon B. Johnson, Democrat), +4.65%

1965-1968 (Lyndon B. Johnson, Democrat), +5.05%

1969-1972 (Richard Nixon, Republican), +3%

1973-1976 (Richard Nixon / Gerald Ford, Republican), +2.6%

1977-1980 (Jimmy Carter, Democrat), +3.25%

1981-1988 (Ronald Reagan, Republican), +3.4%

1989-1992 (George H. W. Bush, Republican), +2.17%

1993-2000 (Bill Clinton, Democrat), +3.88%

2001-2008 (George W. Bush, Republican), +2.09%

2009 (Barack Obama, Democrat), -2.6%

Let's put this data in a chart:

click to enlarge

 

Look, a hockey stick, right?  Obama is the worst, right?

In fact there is a big problem with this analysis, even if the data is correct.  And I bet Kevin Drum can get it right away, even though it is the exact same problem as on his climate chart.

The problem is that a single year of Obama's is compared to four or eight years for other presidents.  These earlier presidents may well have had individual down economic years - in fact, Reagan's first year was almost certainly a down year for GDP.  But that kind of volatility is masked because the data points for the other presidents represent much more time, effectively smoothing variability.

Now, this chart has a difference in sampling frequency of 4-8x between the previous presidents and Obama.  This made a huge difference here, but it is a trivial difference compared to the roughly million-fold greater sampling frequency of modern temperature data vs. historical data obtained from proxies (such as ice cores and tree rings).  And, unlike this chart, the method of sampling for temperature is very different across time -- thermometers today are far more reliable and linear measurement devices than trees or ice.  In our GDP example, this problem roughly equates to comparing the GDP under Obama (with all the economic data we collect today) to, say, the economic growth rate under Henry VIII.  Or perhaps under Ramses II.  If I showed that GDP growth in a single month under Obama was less than the average over 66 years under Ramses II, and tried to draw some conclusion from that, I think someone might challenge my analysis.  Unless of course it appears in Science; then it must be beyond question.
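The smoothing effect is easy to demonstrate in a few lines of code.  The growth figures below are invented for illustration only -- they are not the actual GDP numbers:

```python
# Toy illustration of how multi-year averaging masks individual bad years.
# These growth figures are invented for demonstration, not real GDP data.
reagan_years = [-1.9, 4.6, 7.3, 4.2, 3.5, 3.5, 4.2, 3.7]  # hypothetical 8-year term
obama_year = [-2.6]                                        # a single year, no smoothing

def term_average(growth):
    return sum(growth) / len(growth)

# The eight-year average is solidly positive even though the term
# contains a contraction year; the one-year "average" is just that year.
print(round(term_average(reagan_years), 2))  # 3.64
print(term_average(obama_year))              # -2.6
```

The longer the averaging window, the more completely any single down year disappears from the comparison -- which is exactly the problem with pitting one year of data against four- or eight-year averages.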

If You Don't Like People Saying That Climate Science is Absurd, Stop Publishing Absurd Un-Scientific Charts

Kevin Drum can't believe the folks at the National Review are still calling global warming science a "myth".  As is usual for global warming supporters, he wraps himself in the mantle of science while implying that those who don't toe the line on the declared consensus are somehow anti-science.

Readers will know that as a lukewarmer, I have as little patience with outright CO2 warming deniers as I do with those declaring a catastrophe  (for my views read this and this).  But if you are going to simply be thunderstruck that some people don't trust climate scientists, then don't post a chart that is a great example of why people think that a lot of global warming science is garbage.  Here is Drum's chart:

la-sci-climate-warming

 

The problem is that his chart is a splice of multiple data series with very different time resolutions.  The series up to about 1850 has data points taken at best every 50 years, and likely at intervals of 100-200 years or more.  It is smoothed so that temperature shifts shorter than about 200 years simply will not show up.

In contrast, the data series after 1850 is sampled every day or even every hour -- a sampling interval 6 orders of magnitude (over a million times) more frequent.  By definition it is smoothed on a time scale far shorter than the rest of the data.

In addition, these two data sets use entirely different measurement techniques.  The modern data comes from thermometers and satellites, measurement approaches that we understand fairly well.  The earlier data comes from some sort of proxy analysis (ice cores, tree rings, sediments, etc.)  While we know these proxies generally change with temperature, there are still a lot of questions as to their accuracy and, perhaps more importantly for us here, whether they vary linearly or have any sort of attenuation of the peaks.  For example, recent warming has not shown up as strongly in tree ring proxies, raising the question of whether they may also be missing rapid temperature changes or peaks in earlier data for which we don't have thermometers to back-check them (this is an oft-discussed problem called proxy divergence).

The problem is not the accuracy of the data for the last 100 years, though we could quibble that it is perhaps exaggerated by a few tenths of a degree.  The problem is with the historic data and using it as a valid comparison to recent data.  Even a 100-year increase of about a degree would, in the data series before 1850, be at most a single data point.  If the sampling is on 200-year intervals, there is a 50-50 chance a 100-year spike would be missed entirely in the historic data.  And even if it were captured as a single data point, it would be smoothed out at this data scale.

Do you really think that there was never a 100-year period in those last 10,000 years where the temperatures varied by more than 0.1F, as implied by this chart?  This chart has a data set that is smoothed to signals no finer than about 200 years and compares it to recent data with no such filter.  It is like comparing the annualized GDP increase for the last quarter to the average annual GDP increase for the entire 19th century.   It is easy to demonstrate how silly this is.  If you cut the chart off at say 1950, before much anthropogenic effect will have occurred, it would still look like this, with an anomalous spike at the right (just a bit shorter).  If you believe this analysis, you have to believe that there is an unprecedented spike at the end even without anthropogenic effects.
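The 50-50 claim above is easy to check with a toy simulation: a flat 10,000-year record with one 100-year, 1-degree spike, sampled at 200-year intervals the way a coarse proxy might be.  The numbers are illustrative, not reconstructed proxy data:

```python
# A flat 10,000-year record with one 100-year, 1-degree spike,
# sampled every 200 years as a coarse proxy record might be.
temps = [0.0] * 10000
for y in range(5000, 5100):   # the spike: 100 years at +1.0 degree
    temps[y] = 1.0

# Try every possible alignment of the 200-year sampling grid and count
# how often the spike shows up in the sampled record at all.
hits = sum(1 for offset in range(200) if max(temps[offset::200]) > 0)

print(hits / 200)  # 0.5 -- the spike is missed entirely half the time
```

And even in the lucky alignments, the spike survives as a single point, which any subsequent smoothing would flatten away.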

There are several other issues with this chart that make it laughably bad for someone to use in the context of arguing that he is the true defender of scientific integrity:

  • The grey range band is, if anything, an even bigger scientific absurdity than the main data line.  Are they really trying to argue that no year, decade, or even whole century deviated from a 0.7F baseline anomaly by more than 0.3F for the entire 4,000-year period from 7,500 years ago to 3,500 years ago?  I will bet just about anything that the error bars on this analysis should be more than 0.3F, to say nothing of the range of variability around the mean.  Any natural scientist worth his or her salt would laugh this out of the room.  It is absurd.  But here it is presented as climate science in the exact same article in which the author expresses dismay that anyone would distrust climate science.
  • A more minor point, but one that disguises the sampling frequency problem a bit: the last dark brown shaded area on the right, labelled "the last 100 years," is actually at least 300 years wide.  Based on the scale, a hundred years should be about one dot on the x axis.  This means that 100 years is less than the width of the red line, and the last 60 years -- the real anthropogenic period -- is less than half the width of the red line.  We are talking about a temperature change whose duration is half the width of the red line, which hopefully gives you some idea why I say the data sampling and smoothing processes would disguise any past periods similar to the most recent one.

Update:  Kevin Drum posted a defense of this chart on Twitter.  Here it is:  "It was published in Science."  Well folks, there is the climate debate in a nutshell.  A 1,000-word dissection of what appears to be wrong with a particular analysis, answered with a five-word appeal to authority.

Update #2:  I have explained the issue with a parallel flawed analysis from politics where Drum is more likely to see the flaws.

Climate Humor from the New York Times

Though this is hilarious, I am pretty sure Thomas Lovejoy is serious when he writes

But the complete candor and transparency of the [IPCC] panel’s findings should be recognized and applauded. This is science sticking with the facts. It does not mean that global warming is not a problem; indeed it is a really big problem.

This is a howler.  Two quick examples.  First, every past IPCC report summary has had estimates for climate sensitivity, ie the amount of temperature increase they expect for a doubling of CO2 levels.  Coming into this IPCC report, emerging evidence from recent studies has been that the climate sensitivity is much lower than previous estimates.  So what did the "transparent" IPCC do?  They, for the first time, just left out the estimate rather than be forced to publish one that was lower than the last report.

The second example relates to the fact that temperatures have been flat over the last 15-17 years and as a result, every single climate model has overestimated temperatures.  By a lot. In a draft version, the IPCC created this chart (the red dots were added by Steve McIntyre after the chart was made as the new data came in).

figure-1-4-models-vs-observations-annotated (1)

 

This chart was consistent with a number of peer-reviewed studies that assessed the performance of climate models.  Well, this chart was a little too much "candor" for the transparent IPCC, so they replaced it with this chart in the final draft:

figure-1-4-final-models-vs-observations

 

What a mess!  They have made the area we want to look at between 1990 and the present really tiny, and then they have somehow shifted the forecast envelopes down on several of the past reports so that suddenly current measurements are within the bands.   They also hide the bottom of the fourth assessment band (orange FAR) so you can't see that observations are out of the envelope of the last report.  No one so far can figure out how they got the numbers in this chart, and it does not match any peer-reviewed work.  Steve McIntyre is trying to figure it out.

OK, so now that we are on the subject of climate models, here is the second hilarious thing Lovejoy said:

Does the leveling-off of temperatures mean that the climate models used to track them are seriously flawed? Not really. It is important to remember that models are used so that we can understand where the Earth system is headed.

Does this make any sense at all?  Try it in a different context:  The Fed said the fact that their economic models failed to predict what actually happened over the last 15 years is irrelevant because the models are only used to see where the economy is headed.

The consistent theme of this report is declining certainty and declining chances of catastrophe, two facts that the IPCC works as hard as possible to obfuscate but which still come out pretty clearly as one reads the report.

Hearing What You Want to Hear from the Climate Report

After over 15 years of no warming, which the IPCC still cannot explain, and with climate sensitivity numbers dropping so much in recent studies that the IPCC left climate sensitivity estimates out of their summary report rather than address the drop, the Weather Channel is running this headline on their site:

weatherch

 

The IPCC does claim more confidence that warming over the past 60 years is partly or mostly due to man (I have not yet seen the exact wording they landed on), from 90% to 95%.  But this is odd given that the warming all came from 1978 to 1998 (see for yourself in temperature data about halfway through this post).  Temperatures are flat or cooling for the other 40 years of the period.  The IPCC cannot explain these 40 years of no warming in the context of high temperature sensitivities to CO2.  And, they can't explain why they can be 95% confident of what drove temperatures in the 20 year period of 1978-1998 but simultaneously have no clue what drove temperatures in the other years.

At some point I will read the thing and comment further.

 

Some Responsible Press Coverage of Record Temperatures

The Phoenix New Times blog had a fairly remarkable story on a record-hot Phoenix summer.  The core of the article is a chart from the NOAA.  There are three things to notice in it:

  • The article actually acknowledges that higher temperatures were due to higher night-time lows rather than higher daytime highs  Any mention of this is exceedingly rare in media stories on temperatures, perhaps because the idea of a higher low is confusing to communicate
  • It actually attributes urban warming to the urban heat island effect
  • It makes no mention of global warming

Here is the graphic:

hottest-summer

 

This puts me in the odd role of switching sides, so to speak, and observing that greenhouse warming could very likely manifest itself as rising nighttime lows (rather than rising daytime highs).  I can only assume the surrounding area of Arizona did not see the same sort of records, which would support the theory that this is a UHI effect.

Phoenix has a huge urban heat island effect, which my son actually measured.  At 9-10 in the evening, we measured a temperature differential of 8-12F from city center to rural areas outside the city.  By the way, this is a fabulous science fair project if you know a junior high or high school student trying to do something different than growing bean plants under different color lights.

Update On My Climate Model (Spoiler: It's Doing a Lot Better than the Pros)

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • An additional linear trend beginning in 1945 of 0.35C per century.  This represents the combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century.  (Note that this is way, way below the mainstream IPCC estimates of the historic contribution of CO2, as it implies a maximum historic contribution of less than 0.2C.)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.
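For concreteness, here is a minimal sketch of such a model in code.  The two linear slopes and the 63-year period come from the description above; the sine amplitude and peak year are placeholder guesses, not the values actually fitted to Hadley CRUT4:

```python
import math

# Sketch of the three-component model described above.  The two linear
# slopes (0.4C, plus 0.35C per century after 1945) and the 63-year cycle
# period come from the text; the amplitude and peak year below are
# illustrative placeholders, not the fitted values.
AMPLITUDE = 0.2    # degrees C -- assumed
PEAK_YEAR = 1995   # year of a cycle peak -- assumed

def model_anomaly(year):
    base = 0.004 * (year - 1900)              # long-term trend: 0.4C per century
    extra = 0.0035 * max(0, year - 1945)      # additional 0.35C per century after 1945
    cycle = AMPLITUDE * math.cos(2 * math.pi * (year - PEAK_YEAR) / 63)  # PDO/AMO stand-in
    return base + extra + cycle
```

Because the cycle averages to zero over its 63-year period, any two points one full period apart differ only by the linear trends, which is what makes the cycle net out to nothing in the long run.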

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of Hadley CRUT4 temperature anomalies).  The two linear trends (click on any image in this post to enlarge it)

click to enlarge

 

And the cyclic trend:

click to enlarge

These two charts are simply added together and can then be compared to actual temperatures.  This is the way the comparison looked in 2007, when I first created this "model":

click to enlarge

The historic match is no great feat.  The model was admittedly tuned to match history (yes, unlike the pros who all tune their models, I admit it).  The linear trends as well as the sine wave period and amplitude were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus sine wave matches history so well, particularly since it assumes such a small contribution from CO2 (yet matches history well) and since in prior IPCC reports, the IPCC and most modelers simply refused to include cyclic functions like AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated Hadley CRUT4 data through July, 2013, this is how we are doing:

click to enlarge

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun's output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of their conclusions about past warming rates and CO2 from the period 1978-1998.  They may talk about "since 1950", but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20 year window.  During that 20-year window, though, solar activity, the PDO and the AMO were also all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, they would have to reduce the amount of warming scored to CO2 between 1978 and 1998 and thus their large future warming forecasts would have become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years.  This is difficult for the IPCC to explain, since essentially none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence level of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else explain why the models are still correct despite missing the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to... ocean cycles and the sun (those things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend actual temperatures hit a 30-year flat spot after the year 2000.   So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.

BUT.  And this is a big but.  You can also see from my model that you can't assume these factors caused the current "pause" in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not increase warming in the 20 years before that.

We shall see.  To be continued....

Update:  Seriously, on a relative basis, I am kicking ass

click to enlarge

The Magic Theory

Catastrophic Anthropogenic Climate Change is the magic theory -- every bit of evidence proves it.  More rain, less rain, harder rain, drought, floods, more tornadoes, fewer tornadoes, hotter weather, colder weather, more hurricanes, fewer hurricanes -- they all prove the theory.  It is a theory that is impossible not to confirm.  An example:

It will take climate scientists many months to complete studies into whether manmade global warming made the Boulder flood more likely to occur, but the amount by which this event has exceeded past events suggests that manmade warming may have played some role by making the event worse than it otherwise would have been...

An increase in the frequency and intensity of extreme precipitation events is expected to take place even though annual precipitation amounts are projected to decrease in the Southwest. Colorado sits right along the dividing line between the areas where average annual precipitation is expected to increase, and the region that is expected to become drier as a result of climate change.

That may translate into more frequent, sharp swings between drought and flood, as has recently been the case. Last year, after all, was Colorado's second-driest on record, with the warmest spring and warmest summer on record, leading to an intense drought that is only just easing.

Generally one wants to point to a data trend to prove a theory, but look at that last paragraph.  Global warming is truly unique because it can be verified by there being no trend.

I hate to make this point for the five millionth time, but here goes:  It is virtually impossible (and takes far more data, by orders of magnitude, than we possess) to prove a shift in the mean of any phenomenon simply by highlighting occasional tail-of-the-distribution events.  The best way to prove a mean shift is to actually, you know, track the mean.  The problem is that the trend lines for all these phenomena -- droughts, wet weather, tornadoes, hurricanes -- show no trend, so the only tool supporters of the theory have at their disposal is to scream "global warming" as loud as they can every time there is a tail-of-the-distribution event.

Let's do some math:  They claim this flood was a one in one thousand year event.  That strikes me as false precision, since we have only been observing this phenomenon with any reliability for 100 years, but I will accept their figure for now.  Let's say this was indeed a one in 1,000 year flood and that it occurred over, say, half the area of Colorado (again a generous assumption; it was actually less than that).

Colorado is about 270,000 km^2, so half would be 135,000 km^2.  The land area of the world (we really should include oceans for this, but we will give these folks every break) is about 150,000,000 km^2.  That means half of Colorado is a bit less than 1/1000 of the world's land area.

Our intuition tells us that a 1 in 1000 year storm is so rare that to have one means something weird or unusual or even unnatural must be going on.  But by the math above, since this storm covered 1/1000 of the land surface of the Earth, we should see one such storm on average every year somewhere in the world.  This is not some "biblical" unprecedented event - it is freaking expected, somewhere, every year.  Over the same area we should also see a 1 in 1000 year drought, a 1 in 1000 year temperature high, and a one in one thousand year temperature low -- every single damn year.  Good news if you are a newspaper and feed off of this stuff, but bad news for anyone trying to draw conclusions about the shifts in means and averages from such events.
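The arithmetic in the last two paragraphs can be written out directly:

```python
# Back-of-envelope from the text: a 1-in-1000-year event over a region
# covering roughly 1/1000 of the world's land area.
half_colorado_km2 = 135_000
world_land_km2 = 150_000_000

regions = world_land_km2 / half_colorado_km2   # ~1,111 regions this size on land
expected_per_year = regions / 1000             # each has a 1/1000 annual chance

print(round(expected_per_year, 1))  # ~1.1 such events somewhere every year
```

And that is for just one category of event; run the same numbers for droughts, heat records, and cold records and "once in a millennium" weather somewhere becomes an annual certainty.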

Climate Theory vs. Climate Data

This is a pretty amazing statement by Justin Gillis in the New York Times.

This month, the world will get a new report from a United Nations panel about the science of climate change. Scientists will soon meet in Stockholm to put the finishing touches on the document, and behind the scenes, two big fights are brewing....

In the second case, we have mainstream science that says if the amount of carbon dioxide in the atmosphere doubles, which is well on its way to happening, the long-term rise in the temperature of the earth will be at least 3.6 degrees Fahrenheit, but more likely above 5 degrees. We have outlier science that says the rise could come in well below 3 degrees.

In this case, the drafters of the report lowered the bottom end in a range of temperatures for how much the earth could warm, treating the outlier science as credible.

The interesting part is that "mainstream science" is based mainly on theory and climate models that over the last 20 years have not made accurate predictions (overestimating warming significantly).  "Outlier science" is in a lot of cases based on actual observations of temperatures along with other variables like infrared radiation returning to space.  The author, through his nomenclature, is essentially disparaging observational data that is petulantly refusing to match up to model predictions.  But of course skeptics are anti-science.

We Are 95% Confident in a Meaningless Statement

Apparently the IPCC is set to write:

Drafts seen by Reuters of the study by the U.N. panel of experts, due to be published next month, say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s.

That is up from at least 90 percent in the last report in 2007, 66 percent in 2001, and just over 50 in 1995, steadily squeezing out the arguments by a small minority of scientists that natural variations in the climate might be to blame.

I have three quick reactions to this

  • The IPCC has always adopted words like "main cause" or "substantial cause."  They have not even had enough certainty to use the phrase "majority cause" -- they want to keep it looser than that.  If man causes 30% and every other cause is at 10% or less, is man the main cause?  No one knows.  So that is how we get to the absurd situation where folks are trumpeting being 95% confident in a statement that is purposely vaguely worded -- so vague that the vast majority of people who sign it would likely disagree with one another on exactly what they have agreed to.
  • The entirety of the post-1950 temperature rise occurred between 1978 and 1998 (see below a chart based on the Hadley CRUT4 database, the same one used by the IPCC).

2013 Version 3 Climate talk

Note that temperatures fell from 1945 to about 1975, and have been flat from about 1998 to 2013.  This is not some hidden fact - it was the very fact that the warming slope was so steep in the short period from 1978-1998 that contributed to the alarm.  The current 15 years with no warming was not predicted and remains unexplained (at least in the context of the assumption of high temperature sensitivities to CO2).  The IPCC is in a quandary here, because they can't just say that natural variation counteracted warming for 15 years, because this would imply a magnitude to natural variability that might have explained the 20 year rise from 1978-1998 as easily as it might explain the warming hiatus over the last 15 years (or in the 30 years preceding 1978).

  • This lead statement by the IPCC continues to be one of the great bait and switches of all time.  Most leading skeptics (excluding those of the talk show host or politician variety) accept that CO2 is a greenhouse gas and is contributing to some warming of the Earth.  This statement by the IPCC says nothing about the real issue, which is the future sensitivity of the Earth's temperature to rising CO2 -- is it high, driven by large positive feedbacks, or more modest, driven by zero to negative feedbacks?  Skeptics don't disagree that man has caused some warming, but believe that future warming forecasts are exaggerated and that the negative effects of warming (e.g. tornadoes, fires, hurricanes) are grossly exaggerated.

It's OK not to know something -- in fact, an important part of scientific detachment is admitting what one does not know.  But what the hell does being 95% confident in a vague statement mean?  Choose which of these is science:

  • Masses are attracted to each other in proportion to the product of their masses and inversely proportional to the square of their distance of separation.
  • We are 95% certain that gravity is the main cause of my papers remaining on my desk.

This Is How We Get In Pointless Climate Flame Wars

The other day I posted a graph from Roy Spencer comparing climate model predictions to actual measurements in the tropical mid-troposphere (the zone on Earth where climate models predict the most warming due to large assumed water vapor positive feedbacks).  The graph is a powerful indictment of the accuracy of climate models.

Spencer has an article (or perhaps a blog post) in the Financial Post with the same results, and includes a graph that does a pretty good job of simplifying the messy spaghetti graph in the original version.  Except for one problem.  Nowhere is it correctly labelled.  One would assume looking at it that it is a graph of global surface temperatures, which is what most folks are used to seeing in global warming articles. But in fact it is a graph of temperatures in the mid-troposphere, between 20 degrees North and 20 degrees South latitude.  He mentions that it is for tropical troposphere in the text of the article, but it is not labelled as such on the graph.  There is a very good reason for that narrow focus, but now the graph will end up on Google image search, and people will start crying "bullsh*t" because they will compare the numbers to global surface temperature data and it won't match.

I respect Spencer's work but he did not do a good job with this.

Climate Model Fail

Dr. Roy Spencer has compared the output of 73 climate models to actual recent temperature measurements.  He has focused on temperatures in the mid-troposphere in the tropics -- this is not the same as global surface temperatures but is of course related.  The reason for this focus is 1) we have some good space-based data sources for temperatures in this region that don't suffer the same biases and limitations as surface thermometers, and 2) this is the zone that catastrophic anthropogenic global warming theory says should be seeing the most warming, due to the positive feedback effects of water vapor.  The lines are the model results for temperatures; the dots are the actuals.

click to enlarge

As Spencer writes in an earlier post:

I continue to suspect that the main source of disagreement is that the models’ positive feedbacks are too strong…and possibly of even the wrong sign.

The lack of a tropical upper tropospheric hotspot in the observations is the main reason for the disconnect in the above plots, and as I have been pointing out this is probably rooted in differences in water vapor feedback. The models exhibit strongly positive water vapor feedback, which ends up causing a strong upper tropospheric warming response (the “hot spot”), while the observation’s lack of a hot spot would be consistent with little water vapor feedback.

The warming from manmade CO2 without positive feedbacks would be about 1.3C per doubling of CO2 concentrations, a fraction of the 3-10C predicted by these climate models.  If the climate, like most other long-term stable natural systems, is dominated by negative feedbacks, the sensitivity would be likely less than 1C.  Either way, the resulting predicted warming from manmade CO2 over the rest of this century would likely be less than 1 degree C.

More on declining estimates of climate sensitivity based on actual temperature observations rather than computer models here.

Update on Climate Temperature Sensitivity (Good News, the Numbers are Falling)

I have not had the time to write much about climate of late, but after several years of arguing over emails (an activity with which I quickly grew bored), the field is heating up again, as it were.

As I have said many times, the key missing science in the whole climate debate centers around climate sensitivity, or the expected temperature increase from a doubling of CO2 concentrations in the atmosphere  (as reference, CO2 in the industrial age has increased from about 270 ppm to close to 400 ppm, or about half a doubling).

In my many speeches and this video (soon to be updated, if I can just find the time to finish it), I have argued that climate computer models have exaggerated climate sensitivity.  This Wikipedia page is a pretty good rehash of the alarmist position on climate sensitivity.  According to this standard alarmist position, here is the distribution of studies which represent the potential values for sensitivity - note that virtually none are below 2°C.

[Chart: frequency distribution of climate sensitivity, based on model simulations (NASA)]

The problem is that these are all made with computer models.  They are not based on observational data.  Yes, all these models nominally backcast history reasonably correctly (look at that chart above and think about that statement for a minute, see if you can spot the problem).  But many an investor has been bankrupted by models that correctly backcast history.  The guys putting together tranches of mortgages for securities all had models.   What has been missing is any validation of these numbers with actual, you know, observations of nature.

Way back 6 or 7 years ago I began taking these numbers and projecting them backwards.  In other words, if climate sensitivity is really, say, 4°C, then what should that imply about historical temperature increases since the pre-industrial age?  Let's do a back-of-the-envelope calculation with the 4°C example.  We are at just about half of a doubling of CO2 concentrations, but since the temperature response to CO2 is logarithmic, this implies we should have seen about 57% of the temperature increase that we would expect from a full doubling of CO2.  Applied to the 4°C sensitivity figure, this means that if sensitivity really is 4°C, we should have seen a 2.3°C global temperature increase over the last 150 years or so.  Which we certainly have not -- instead we have seen 0.8°C from all causes, only one of which is CO2.
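That 57% figure falls straight out of the logarithm.  A quick sketch using the post's own 270 ppm and 400 ppm numbers:

```python
import math

# Fraction of a full doubling's warming already "due", given CO2 has risen
# from ~270 ppm (pre-industrial) to ~400 ppm (figures from the post).
fraction = math.log(400 / 270, 2)    # log base 2, i.e. fraction of a doubling
expected_if_4c = 4.0 * fraction      # warming expected at a 4°C sensitivity
print(f"{fraction:.0%} of a doubling -> {expected_if_4c:.1f}°C expected, vs ~0.8°C observed")
```

The 2.3°C expected under a 4°C sensitivity is roughly triple the ~0.8°C actually observed from all causes combined.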

So these high sensitivity models are over-predicting history.  Even a 2°C sensitivity over-predicts the amount of warming we have seen historically.  So how do they make the numbers fit?  The models are tuned and tweaked with a number of assumptions.  Time delays are one -- the oceans act as a huge flywheel on world temperatures and tend to add large lags to getting to the ultimate sensitivity figure.  But even this was not enough for high sensitivity models to back-cast accurately.  To make their models accurately predict history, their authors have had to ignore every other source of warming (which is why they have been so vociferous in downplaying the sun and ocean cycles, at least until they needed these to explain the lack of warming over the last decade).  Further, they have added man-made cooling factors, particularly from sulfate aerosols, that offset some of the man-made warming with man-made cooling.

Which brings us back to the problem I hinted at with the chart above and its distribution of sensitivities.  Did you spot the problem?  All these models claim to accurately back-cast history, but how can a model with a 2°C sensitivity and an 11°C sensitivity both accurately model the last 100 years?  One way they do it is by using a plug variable, and many models use aerosol cooling as the plug.  Why?   Well, unlike natural cooling factors, it is anthropogenic, so they can still claim catastrophe once we clean up the aerosols.  Also, for years the values of aerosol cooling were really uncertain, so ironically the lack of good science on them allowed scientists to assume a wide range of values.  Below is from a selection of climate models, and shows that the higher the climate sensitivity in the model, the higher the negative forcing (cooling) effect assumed from aerosols.  This has to be, or the models would not back-cast.

[Chart: assumed aerosol cooling vs. climate sensitivity across a selection of climate models]

The reason these models have such high sensitivities is that they assume the climate is dominated by net positive feedback, meaning there are processes in the climate system that take small amounts of initial warming from CO2 and multiply them many times.  The generally accepted value for sensitivity without these feedbacks is 1.2°C or 1.3°C (via work by Michael Mann over a decade ago).  So all the rest of the warming, in fact the entire catastrophe that is predicted, comes not from CO2 itself but from this positive feedback that multiplies the modest 1.2°C many times.

I have argued, as have many other skeptics, that this assumption of net positive feedback is not based on good science, and in fact most long-term stable natural systems are dominated by negative feedback (note that you can certainly identify individual processes, like ice albedo, that are certainly a positive feedback, but we are talking about the net effect of all such processes combined).  Based on a skepticism about strong positive feedback, and the magnitude of past warming in relation to CO2 increases, I have always argued that the climate sensitivity is perhaps 1.2°C and maybe less, but that we should not expect more than a degree of warming from CO2 in the next century, hardly catastrophic.
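Running the same back-of-the-envelope in reverse gives an upper bound consistent with this view: even attributing every bit of observed warming to CO2 yields a sensitivity near the no-feedback value.

```python
import math

# Invert the calculation: if ALL of the ~0.8°C observed warming were from
# CO2 (an upper bound -- the post argues much of it was natural), the
# implied sensitivity per doubling would be:
observed_warming = 0.8
fraction_of_doubling = math.log(400 / 270, 2)
implied_sensitivity = observed_warming / fraction_of_doubling
print(f"implied sensitivity <= {implied_sensitivity:.1f}°C per doubling")
```

An upper bound of roughly 1.4°C per doubling sits right around the no-feedback value, well below the 3°C+ of the high-feedback models -- and any warming attributable to natural causes pushes the CO2-only figure lower still.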

One of the interesting things you might notice from the Wikipedia page is that they do not reference any sensitivity study more recent than 2007 (except for a literature review in 2008).  One reason might be that over the last 5 years there have been a series of studies that have begun to lower the expected value of the sensitivity number.   What many of these studies have in common is that they are based on actual observational data over the last 100 years, rather than computer models  (by the way, for those of you who like to fool with Wikipedia, don't bother on climate pages -- the editors of these pages will revert any change attempting to bring balance to their articles in a matter of minutes).  These studies include a wide range of natural effects, such as ocean cycles, left out of the earlier models.  And, as real numbers have been put on aerosol concentrations and their effects, much lower values have been assigned to aerosol cooling, thus reducing the amount of warming that could be coming from CO2.

Recent studies based on observational approaches are coming up with much lower numbers.  ECS, or equilibrium climate sensitivity (what we would expect in temperature increases if we waited hundreds or thousands of years for all time delays to be overcome), has been coming in between 1.6°C and 2.0°C.  Values for TCS, or transient climate sensitivity (what we might expect to see in our lifetimes), have been coming in around 1.3°C per doubling of CO2 concentrations.

Matt Ridley has the layman's explanation:

Yesterday saw the publication of a paper in a prestigious journal, Nature Geoscience, from a high-profile international team led by Oxford scientists. The contributors include 14 lead authors of the forthcoming Intergovernmental Panel on Climate Change scientific report; two are lead authors of the crucial chapter 10: professors Myles Allen and Gabriele Hegerl.

So this study is about as authoritative as you can get. It uses the most robust method, of analysing the Earth’s heat budget over the past hundred years or so, to estimate a “transient climate response” — the amount of warming that, with rising emissions, the world is likely to experience by the time carbon dioxide levels have doubled since pre-industrial times.

The most likely estimate is 1.3C. Even if we reach doubled carbon dioxide in just 50 years, we can expect the world to be about two-thirds of a degree warmer than it is now, maybe a bit more if other greenhouse gases increase too….

Judith Curry discusses these new findings

Discussion of Otto, one of the recent studies

Nic Lewis discusses several of these results

This is still tough work, likely with a lot of room for improvement, because it is really hard to disaggregate multiple drivers in such a complex system.  There may, for example, be causative variables we don't even know about, which by definition were not included in the studies.  However, it is nice to see that folks are out there trying to solve the problem with real observations of Nature, and not via computer auto-eroticism.

Postscript:  Alarmists have certainly not quit the field.  The current emerging hypothesis to defend high sensitivities is to say that the heat is going directly into the deep oceans.  At some level this is sensible -- the vast majority of the heat-carrying capacity (80-90%) of the Earth's surface is in the oceans, not in the atmosphere, and so they are the best place to measure warming.  Skeptics have said this for years.  But in the top 700 meters or so of the ocean, as measured by ARGO floats, ocean heating over the last 10 years (since these more advanced measuring devices were launched) has been only about 15% of what we might predict with high sensitivity models.  So when alarmists say today that the heat is going into the oceans, they mean the deep oceans -- i.e., that the heat from global warming is not going into the air or the first 700 meters of ocean but directly into ocean layers beneath that.  Again, this is marginally possible by some funky dynamics, but just like the aerosol defense that has fallen apart of late, this defense of high sensitivity forecasts is completely unproven.  But the science is settled, of course.
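The heat-capacity point is easy to sanity-check.  A rough sketch with standard physical constants; the ocean area and the 700 m depth are round numbers of my own, not figures from the post:

```python
# Back-of-the-envelope heat capacities: top 700 m of ocean vs. the whole
# atmosphere.  All figures are standard round numbers, not from the post.
ocean_area_m2 = 3.6e14                # ~71% of Earth's surface
depth_m = 700.0                       # the ARGO-sampled layer discussed above
rho_water, c_water = 1000.0, 4186.0   # density kg/m^3, specific heat J/(kg*K)
atm_mass_kg, c_air = 5.1e18, 1004.0   # total atmospheric mass, J/(kg*K)

ocean_cap = ocean_area_m2 * depth_m * rho_water * c_water   # J/K
atm_cap = atm_mass_kg * c_air                               # J/K
print(f"top-700m ocean ~ {ocean_cap / atm_cap:.0f}x the atmosphere's heat capacity")
```

Even this single layer dwarfs the atmosphere's heat capacity by a factor of roughly 200, which is why small errors in ocean heat accounting can swamp any atmospheric signal.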

This Shouldn't Be Necessary, But Here Is Some Information on CO2 and Tornadoes

Well, I have zero desire to score political points off the tragedy in Oklahoma, but unfortunately others are more than eager to do so.  As a result, it is necessary to put a few facts on the table to refute the absurd claim that this tornado is somehow attributable to CO2.

  1. I really should not have to say this, but there is no mechanism by which CO2 has ever been accused of causing tornadoes except via the intervening step of warming.  Without warming, CO2 can't be the cause (even with warming, the evidence is weak, since tornadoes are caused more by temperature differentials than by temperature per se).  So it is worth noting that there have been no unusually warm temperatures in the area of late, and in fact the US has had one of its coolest springs in several decades.
  2. I should also not have to say this, but major tornadoes occurred in Oklahoma at much lower CO2 levels.

    [Chart: historical tornado data]

  3. In fact, if anything, the trend in major tornadoes in the US over the last several decades is down.
  4. And this is actually a really, really low tornado year so far.  So it's hard to construct an argument that global warming reduced tornadoes in general but caused this one in particular.

[Chart: annual counts of EF3-EF5 tornadoes]


Much more at this link

Update:  In 1975, tornado outbreaks blamed in Newsweek on global cooling

Best and the Brightest May Finally Be Open To Considering Lower Climate Sensitivity Numbers

For years, readers of this site know that I have argued that:

  • CO2 is indeed a greenhouse gas, and since man is increasing its atmospheric concentration, there is likely some anthropogenic contribution to warming
  • Most forecasts, including those of the IPCC, grossly exaggerate temperature sensitivity to CO2 by assuming absurd levels of net positive feedback in the climate system
  • Past temperature changes are not consistent with high climate sensitivities

Recently, there has been a whole spate of studies, based on actual observations rather than computer models, arriving at climate sensitivity numbers far below the IPCC number.   While the IPCC settled on 3°C per doubling of CO2, it strongly implied that all the risk was to the upside, and many other prominent folks who typically get fawning attention in the media have proposed much higher numbers.

In fact, recent studies are coming in closer to 1.5°C - 2°C.  I actually still think these numbers will turn out to be high.  For several years now my money has been on a number from 0.8°C to 1°C, sensitivity numbers that imply a small amount of negative feedback rather than positive feedback -- a safer choice in my mind, since most long-term stable natural systems are dominated by negative feedback.

Anyway, in an article that was as surprising as it is welcome, NY Times climate writer Andy Revkin recently acknowledged in the paper of record that maybe those skeptics who have argued for a lower sensitivity number kind of sort of have a point.

“Worse than we thought” has been one of the most durable phrases lately among those pushing for urgent action to stem the buildup of greenhouse gases linked to global warming.

But on one critically important metric — how hot the planet will get from a doubling of the pre-industrial concentration of greenhouse gases, a k a “climate sensitivity” — some climate researchers with substantial publication records are shifting toward the lower end of the warming spectrum.

By the way, this is the only metric that matters.  All the other BS about "climate change" and "dirty weather" is meaningless without warming.  CO2 cannot change the climate or raise sea levels or any of that other stuff by any mechanism we understand, or that has even been postulated, except via warming.  Anyway, to continue:

There’s still plenty of global warming and centuries of coastal retreats in the pipeline, so this is hardly a “benign” situation, as some have cast it.

But while plenty of other climate scientists hold firm to the idea that the full range of possible outcomes, including a disruptively dangerous warming of more than 4.5 degrees C. (8 degrees F.), remain in play, it’s getting harder to see why the high-end projections are given much weight.

This is also not a “single-study syndrome” situation, where one outlier research paper is used to cast doubt on a bigger body of work — as Skeptical Science asserted over the weekend. That post focused on the as-yet-unpublished paper finding lower sensitivity that was inadvisedly promoted recently by the Research Council of Norway.

In fact, there is an accumulating body of reviewed, published research shaving away the high end of the range of possible warming estimates from doubled carbon dioxide levels. Chief among climate scientists critical of the high-sensitivity holdouts is James Annan, an experienced climate modeler based in Japan who contributed to the 2007 science report from the Intergovernmental Panel on Climate Change. By 2006, he was already diverging from his colleagues a bit.

The whole thing is good.  Of course, for Revkin, this is no excuse to slow down all the actions supposedly demanded by global warming, such as substantially raising the price and scarcity of hydrocarbons.  Which to me simply demonstrates that people who have been against hydrocarbons have always been against them as an almost aesthetic choice, and climate change and global warming were mere excuses to push the agenda.  After all, as there certainly are tradeoffs to limiting economic growth and energy use and raising the price of energy, how can a reduction in postulated harms from fossil fuels NOT change the balance point one chooses in managing their use?

PS - I thought this was a great post-mortem on Hurricane Sandy and the whole notion that this one data point proves the global warming trend:

In this case several factors not directly related to climate change converged to generate the event. On Sandy’s way north, it ran into a vast high-pressure system over Canada, which prevented it from continuing in that direction, as hurricanes normally do, and forced it to turn west. Then, because it traveled about 300 miles over open water before making landfall, it piled up an unusually large storm surge. An infrequent jet-stream reversal helped maintain and fuel the storm. As if all that weren’t bad enough, a full moon was occurring, so the moon, the earth, and the sun were in a straight line, increasing the moon’s and sun’s gravitational effects on the tides, thus lifting the high tide even higher. Add to this that the wind and water, though not quite at hurricane levels, struck an area rarely hit by storms of this magnitude so the structures were more vulnerable and a disaster occurred.

The last one is a key for me -- you have cities on the Atlantic Ocean that seemed to build and act as if they were immune from ocean storms.  From my perspective growing up on the Gulf Coast, where one practically expects any structure one builds on the coast to be swept away every thirty years or so, this is a big contributing factor no one really talks about.

She goes on to say that rising sea levels may have made the storm worse, but I demonstrated that it couldn't have added more than a few percentage points to the surge.