Posts tagged ‘warming’

Reconciling Seemingly Contradictory Climate Claims

At Real Science, Steven Goddard claims this is the coolest summer on record in the US.

The NOAA reports that both May and June were the hottest on record.

It used to be that the media would reconcile such claims, and one might learn something interesting from that reconciliation, but now all we have are mostly-crappy fact checks with Pinocchio counts.  Both these claims have truth on their side, though the NOAA report is more comprehensively correct.  Still, we can learn something by putting these analyses in context and by reconciling them.

The NOAA temperature data for the globe does indeed show May and June as the hottest on record.  However, one should note a few things:

  • The two monthly records do not change the trend over the last 10-15 years, which has been basically flat.  We are hitting records because we are sitting on a plateau that is higher than the rest of the last century (at least in the NOAA data).  It only takes small positive excursions to reach all-time highs.
  • There are a number of different temperature databases that measure the temperature in different ways (e.g. satellite vs. ground stations) and then adjust those raw readings using different methodologies.  While the NOAA database is showing all-time highs, other databases, such as satellite-based ones, are not.
  • The NOAA database has been criticized for manual adjustments to past temperatures which increase the warming trend.  Without these adjustments, temperatures during certain parts of the 1930s (think: Dust Bowl) would be higher than today.  This was discussed here in more depth.  As is usual when looking at such things, some of these adjustments are absolutely appropriate and some can be questioned.  However, blaming the whole of the warming signal on such adjustments is just wrong -- satellite databases, which have no similar adjustment issues, have shown warming, at least between 1979 and 1999.

The Time article linked above illustrated the story of these record months with a video partly about wildfires.  This is a great example of how temperatures are indeed rising but media stories about knock-on effects, such as hurricanes and fires, can be full of it.  2014 has actually been a low fire year so far in the US.

So the world is undeniably on the warm side of average (I won't say "warmer than normal" because what is "normal"?).  How, then, does Goddard get this as the coolest summer on record for the US?

Well, the first answer, and it is an important one to remember, is that US temperatures do not have to follow global temperatures, at least not tightly.  While the world warmed 0.5-0.7 degrees C from 1979-1999, the US temperatures moved much less.  Other times, the US has warmed or cooled more than the world has.  The US is well under 5% of the world's surface area.  It is certainly possible to have isolated effects in such an area.  Remember the same holds true the other way -- heat waves in one part of the world don't necessarily mean the world is warming.

But we can also learn something that is seldom discussed in the media by looking at Goddard's chart:


First, I will say that I am skeptical of any chart that uses "all USHCN" stations because the number of stations and their locations change so much.  At some level this is an apples to oranges comparison -- I would be much more comfortable to see a chart that looks at only USHCN stations with, say, at least 80 years of continuous data.  In other words, this chart may be an artifact of the mess that is the USHCN database.

However, it is possible that this is correct even with a better data set and against a backdrop of warming temperatures.  Why?  Because this is a metric of high temperatures.  It looks at the number of times a data station reads a high temperature over 90F.  At some level this is a clever chart, because it takes advantage of a misconception most people, including most people in the media, have -- that global warming plays out in higher daytime high temperatures.

But in fact this does not appear to be the case.  Most of the warming we have seen over the last 50 years has manifested itself as higher nighttime lows and higher winter temperatures.  Both of these raise the average, but neither will change Goddard's metric of days above 90F.  So it is perfectly possible Goddard's chart is right even if the US is seeing a warming trend over the same period.  This is why we have not seen any more local all-time daily high temperature records set recently than in past decades, though we have seen a lot of new records for high low temperatures, if that term makes sense.  It also explains why the ratio of daily high records to daily low records has risen -- not necessarily because there are a lot of new high records, but because we are setting fewer low records.  We can argue about daytime temperatures, but nighttime temperatures are certainly warmer.
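To see why a days-over-90F count can stay flat while the average warms, here is a toy simulation (the station numbers are made up purely for illustration, not real data): applying warming only to nighttime lows raises the mean temperature but leaves the count of 90F-plus days exactly where it was.

```python
import random

random.seed(1)

# Simulate a station's daily (low, high) temperatures in degrees F.
# Illustrative numbers only: lows around 60, highs around 85.
baseline = [(random.gauss(60, 5), random.gauss(85, 5)) for _ in range(10_000)]

# Apply 2 degrees F of warming to the nighttime lows only.
warmed = [(lo + 2.0, hi) for lo, hi in baseline]

def mean_temp(days):
    """Average of the daily midpoint temperatures."""
    return sum((lo + hi) / 2 for lo, hi in days) / len(days)

def days_above_90(days):
    """Goddard-style metric: count of days whose HIGH exceeds 90F."""
    return sum(1 for _lo, hi in days if hi > 90)

print(mean_temp(warmed) - mean_temp(baseline))          # about +1.0F: the average rose
print(days_above_90(warmed) - days_above_90(baseline))  # 0: the 90F+ count is unchanged
```

The highs are untouched, so the days-above-90F metric cannot move, yet the station's average temperature shows a clear warming trend.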

This chart shows an example with low and high temperatures over time at Amherst, MA  (chosen at random because I was speaking there).  Note that recently, most warming has been at night, rather than in daily highs.

Another Plea to Global Warming Alarmists on the Phrase "Climate Denier"

Stop calling me and other skeptics "climate deniers".  No one denies that there is a climate.  It is a stupid phrase.

I am willing, even at the risk of the obvious parallel that is being drawn to the Holocaust deniers, to accept the "denier" label, but it has to be attached to a proposition I actually deny, or that can even be denied.

As help in doing so, here are a few reminders (these would also apply to many mainstream skeptics -- I am not an outlier):

  • I don't deny that climate changes over time -- who could?  So I am not a climate change denier.
  • I don't deny that the Earth has warmed over the last century (something like 0.7C).  So I am not a global warming denier.
  • I don't deny that man's CO2 has some incremental effect on warming, and perhaps climate change (in fact, man affects climate with many activities beyond just CO2 -- land use, with cities on the one hand and irrigated agriculture on the other, has measurable effects on the climate).  So I am not a man-made climate change or man-made global warming denier.

What I deny is the catastrophe -- the proposition that man-made global warming** will cause catastrophic climate changes whose adverse effects will outweigh both the benefits of warming as well as the costs of mitigation.  I believe that warming forecasts have been substantially exaggerated (in part due to positive feedback assumptions) and that tales of current climate change trends are greatly exaggerated, based more on noting individual outlier events than on real data about trends (see hurricanes, for example).

Though it loses some of this nuance, I would probably accept "man-made climate catastrophe denier" as a title.

** Postscript -- as a reminder, there is absolutely no science suggesting that CO2 can change the climate except through the intermediate step of warming.  If you believe it is possible for CO2 to change the climate without there being warming (in the air, in the oceans, somewhere), then you have no right to call anyone else anti-science, and you should go review your subject before you continue to embarrass yourself and your allies.

On The Steven Goddard Claim of "Fabricated" Temperature Data

Steven Goddard of the Real Science blog has a study that claims that US real temperature data is being replaced by fabricated data.  Christopher Booker has a sympathetic overview of the claims.

I believe that there is both wheat and chaff in this claim, and I would like to try to separate the two as best I can.  I don't have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal-to-noise issue here that mainstream climate scientists have always seemed insufficiently concerned about.  Specifically, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal.  When the entire signal one is trying to measure is equal to the manual adjustments one is making to measurements, it probably makes sense to put a LOT of scrutiny on the adjustments.  (This is a post from 7 years ago discussing these adjustments.  Note that these adjustments are smaller than the current ones in the database, as they have since been increased, though I can no longer find a similar NOAA chart discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend.  These changes in adjustments have not been well-explained.  In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below.  Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000).  History has been cooled and modern temperatures have been warmed from where they were previously being shown by the NOAA.  This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data -- for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (i.e., if you average 24 hours starting and stopping at noon rather than midnight).  This is called Time of Observation, or TOBS.  When I first encountered this, I was just sure it had to be BS.  For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month.  Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue.  I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments).  I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data.  Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA's own specification.
    • Stations move over time.  A simple example: if a station is on the roof of a building and that building is demolished, it has to move somewhere else.  In an extreme example the station might move to a new altitude or a slightly different micro-climate.  There are adjustments in the database for these sorts of changes.  Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development (here is one example -- this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson).  Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole.  The effect is undeniable -- my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project, where every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases, and they actually measure most of the Earth's surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990s.
  6. As is usual in such debates, I find words like "fabrication", "lies",  and "myth" to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.
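The Time of Observation experiment described in point 4 can be Monte Carloed in a few lines rather than a spreadsheet.  This is a sketch under made-up assumptions (a sinusoidal diurnal cycle plus random warm and cool spells, not real station data): resetting a max/min thermometer near the afternoon peak lets one hot afternoon count toward two consecutive days' maxes, biasing the average of daily highs warm.

```python
import math
import random

random.seed(0)

def hourly_temps(n_days, mean=70, diurnal=10, noise=6):
    """Hourly temps: sinusoidal diurnal cycle (peak mid-afternoon)
    plus a random day-to-day offset simulating warm/cool spells."""
    temps = []
    for _ in range(n_days):
        daily_offset = random.gauss(0, noise)
        for h in range(24):
            cycle = diurnal * math.sin(2 * math.pi * (h - 9) / 24)
            temps.append(mean + daily_offset + cycle)
    return temps

def mean_daily_max(temps, obs_hour):
    """Average of the daily maxes when each 24h 'day' ends at obs_hour,
    i.e., when the max/min thermometer is read and reset at that hour."""
    maxes = []
    start = obs_hour
    while start + 24 <= len(temps):
        maxes.append(max(temps[start:start + 24]))
        start += 24
    return sum(maxes) / len(maxes)

temps = hourly_temps(3650)                 # ten years of simulated hours
midnight = mean_daily_max(temps, 0)        # observation at local midnight
afternoon = mean_daily_max(temps, 17)      # observation at 5 pm, near the daily peak
print(afternoon - midnight)                # positive: afternoon resets read warm
```

In the simulation, a hot afternoon just before a 5 pm reset can dominate both the window that ends at 5 pm and the one that begins there, so the afternoon-observation average runs warmer than the midnight one; this is the bias TOBS is meant to remove, though its real-world magnitude is exactly what is in dispute.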

Postscript:  Not exactly on topic, but one thing that is never, ever mentioned in the press but is generally true about temperature trends -- almost all of the warming we have seen is in nighttime temperatures, rather than daytime.  Here is an example from Amherst, MA (because I just presented up there).  This is one reason why, despite claims in the media, we are not hitting any more all-time daytime highs than we would expect from a normal distribution.  If you look at temperature stations for which we have 80+ years of data, fewer than 10% of the 100-year highs were set in the last 10 years.  We are setting an unusual number of records for high low temperature, if that makes sense.


Great Moments in "Science"

You know that relative of yours, who last Thanksgiving called you anti-science because you had not fully bought into global warming alarm?

Well, it appears that the reason we keep getting called "anti-science" is because climate scientists have a really funny idea of what exactly "science" is.

Apparently, a number of folks have been trying for years to get articles published in peer reviewed journals comparing the IPCC temperature models to actual measurements, and in the process highlighting the divergence of the two.  And they keep getting rejected.

Now, the publisher of Environmental Research Letters has explained why.  Apparently, in climate science it is "an error" to attempt to compare computer temperature forecasts with the temperatures that actually occurred.  In fact, he says that trying to do so "is harmful as it opens the door for oversimplified claims of 'errors' and worse from the climate sceptics media side".  Apparently, the purpose of scientific inquiry is to win media wars, and not necessarily to discover truth.

Here is something everyone in climate should remember:  The output of models merely represents a hypothesis.  When we have complicated hypotheses in complicated systems, and where such hypotheses may encompass many interrelated assumptions, computer models are an important tool for playing out, computationally, what results those hypotheses might translate to in the physical world.  It is no different than if Newton had had a computer and took his equation Gmm/R^2 and used the computer to project future orbits for the Earth and other planets (which he and others did, but by hand).   But these projections would have no value until they were checked against actual observations.  That is how we knew we liked Newton's models better than Ptolemy's -- because they checked out better against actual measurements.
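To make the Newton example concrete, here is a minimal sketch (a simple leapfrog integrator with standard textbook constants; the numbers are illustrative, not a precise ephemeris) that projects Earth's orbit one year forward from the inverse-square law.  The projection only earns trust because its final position can be checked against where the Earth is actually observed to be.

```python
import math

# Leapfrog integration of Earth's orbit under F = G*M*m/r^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # 1 astronomical unit, m

# Start Earth at 1 AU with roughly its mean orbital speed.
x, y = AU, 0.0
vx, vy = 0.0, 2.978e4
dt = 3600.0            # one-hour time steps

def accel(x, y):
    """Inverse-square acceleration toward the Sun at the origin."""
    r = math.hypot(x, y)
    a = -G * M_SUN / r**3
    return a * x, a * y

ax, ay = accel(x, y)
for _ in range(int(365.25 * 24)):          # project one year ahead
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx; y += dt * vy                 # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick

# The test of the model: after one year the projected Earth should be
# back near its starting point, matching observation.
print(math.hypot(x - AU, y) / AU)  # fractional miss after one simulated year (small)
```

The model (hypothesis) generates a prediction; the comparison against the observed position is what validates or falsifies it, exactly the step the journal publisher above calls "an error."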

But climate scientists are trying to create some kind of weird world where model results have some sort of independent reality, where in fact the model results should be trusted over measurements when the two diverge.  If this is science -- which it is not -- but if it were, then I would be anti-science.

Skeptics -- Don't Be That Guy Who Gets Us All Tarred as Anti-Science

Alarmists have adopted the seemingly farcical but oddly effective technique of finding the most absurd skeptic argument they can, beating the crap out of this straw man, and then claiming that this proves that all skeptics are anti-science.

Don't believe me?  Kevin Drum did it yesterday, bravely taking on a claim -- that atmospheric CO2 concentrations have not increased in the last century -- that I have never seen a skeptic make, and I am pretty active in the community.  Having beaten up on this odd, outlier position, he then claims this tars everyone who does not agree with him:

Nonetheless, there you have it. In the tea party precincts of the conservative movement, even the simplest version of reality doesn't matter. If cheese denial is how you demonstrate you're part of the tribe, then anyone who denies cheese is a hero. The fact that you happen to be happily munching away on a slice of pizza at the time doesn't faze you at all.

Awesome.  So by this logic, everything Kevin Drum says about the environment is wrong because some moron environmental activists signed a petition against dihydrogen monoxide in a Penn and Teller Bullshit! episode.

So, as a public service, I wanted to link to Roy Spencer's list of 10 skeptic arguments that don't hold water.  There are quality scientific arguments against catastrophic man-made warming theory.  You don't need to rely on ones that are wrong.

I agree with all of these.  I will say that I used to believe a version of #5, but I have been convinced as to why it is wrong.  However, it is still true that CO2 has a diminishing-returns effect on warming, such that each additional molecule has less effect than the last.  That is why climate sensitivity is most often expressed as degrees of warming per doubling of CO2 concentration, meaning an increase from 400 to 800 ppm has the same effect as one from 800 to 1600 ppm.
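The per-doubling convention falls straight out of the logarithmic response.  A quick sketch (the 1.8C-per-doubling sensitivity here is purely an illustrative placeholder, not a claim about the true value):

```python
import math

def warming(c_start, c_end, sensitivity_per_doubling=1.8):
    """Warming in degrees C for a CO2 change, assuming a logarithmic
    response.  The 1.8C/doubling figure is illustrative only."""
    return sensitivity_per_doubling * math.log2(c_end / c_start)

print(warming(400, 800))     # one doubling
print(warming(800, 1600))    # the next doubling: exactly the same warming
print(warming(400, 500))     # an extra 100 ppm early on...
print(warming(1500, 1600))   # ...matters more than 100 ppm later
```

Because the response depends only on the ratio of concentrations, every doubling contributes the same increment, and each marginal ppm contributes less than the one before it.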

Postscript:  Drum chose to lampoon a position that is such an outlier it did not even make Spencer's list.  Spencer assumes even the craziest skeptics accept that CO2 is increasing, such that the bad science he is refuting in #7 relates to the causes of that increase.

Climate Alarmists Coming Around to At Least One Skeptic Position

As early as 2009 (and many other more prominent skeptics were discussing it much earlier) I reported on why measuring ocean heat content was potentially a much better measure of greenhouse gas effects on the Earth than measuring surface air temperatures.  Roger Pielke, in particular, has been arguing this for as long as I can remember.

The simplest explanation for why this is true is that greenhouse gases increase the energy added to the surface of the Earth, so that is what we would really like to measure: that extra energy.  But in fact the vast, vast majority of the heat retention capacity of the Earth's surface is in the oceans, not in the air.  Air temperatures may be more immediately sensitive to changes in heat flux, but they are also sensitive to a lot of other noise that tends to mask long-term signals.  The best analog I can think of is to imagine that you have two assets, a checking account and an investment portfolio.  Looking at surface air temperatures to measure long-term changes in surface heat content is a bit like trying to infer long-term changes in your net worth by looking only at your checking account, whose balance is very volatile, rather than at the changing size of your investment portfolio.
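The checking-account analogy can be put in rough numbers.  Using standard order-of-magnitude reference values for the mass and specific heat of the oceans and atmosphere (good enough for a comparison, not precise figures), the oceans hold roughly a thousand times the heat capacity of the air:

```python
# Rough heat-capacity comparison, using standard reference values.
OCEAN_MASS = 1.4e21          # kg, total mass of the oceans
OCEAN_SPECIFIC_HEAT = 3990   # J/(kg*K), seawater

ATMOS_MASS = 5.1e18          # kg, total mass of the atmosphere
ATMOS_SPECIFIC_HEAT = 1005   # J/(kg*K), air at constant pressure

ocean_capacity = OCEAN_MASS * OCEAN_SPECIFIC_HEAT
atmos_capacity = ATMOS_MASS * ATMOS_SPECIFIC_HEAT

print(ocean_capacity / atmos_capacity)  # roughly 1000: the "investment portfolio"
```

On these figures, a given amount of retained energy that would barely move ocean temperatures can swing air temperatures substantially, which is why the air is the noisy "checking account" of the system.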

Apparently, the alarmists are coming around to this point:

Has global warming come to a halt? For the last decade or so the average global surface temperature has been stabilising at around 0.5°C above the long-term average. Can we all relax and assume global warming isn't going to be so bad after all?

Unfortunately not. Instead we appear to be measuring the wrong thing. Doug McNeall and Matthew Palmer, both from the Met Office Hadley Centre in Exeter, have analysed climate simulations and shown that both ocean heat content and net radiation (at the top of the atmosphere) continue to rise, while surface temperature goes in fits and starts. "In my view net radiation is the most fundamental measure of global warming since it directly represents the accumulation of excess solar energy in the Earth system," says Palmer, whose findings are published in the journal Environmental Research Letters.

First, of course, we welcome past ocean heat content deniers to the club.  But second, those betting on ocean heat content to save their bacon and keep alarmism alive should consider why skeptics latched onto the metric with such passion.   In fact, ocean heat content may be rising more than surface air temperatures, but it has been rising MUCH less than would be predicted from high-sensitivity climate models.

Climate Alarmism In One Statement: "Limited Evidence, High Agreement"

From James Delingpole:

The draft version of the report's Summary For Policymakers made the startling admission that the economic damage caused by "climate change" would be between 0.2 and 2 percent of global GDP - significantly less than the doomsday predictions made in the 2006 Stern report (which estimated the damage at between 5 and 20 percent of global GDP).

But this reduced estimate did not suit the alarmist narrative of several of the government delegations at the recent IPCC talks in Yokohama, Japan. Among them was the British one, comprising several members of the deep green Department of Energy and Climate Change (DECC), which insisted on doctoring this section of the Summary For Policymakers in order to exaggerate the potential for more serious economic damage.

"Losses are more likely than not to be greater, rather than smaller, than this range (limited evidence, high agreement)"

There was no evidence whatsoever in the body of the report to justify this statement.

I find it fascinating that there can be "high agreement" to a statement for which there is limited or no evidence.  Fortunately these are all self-proclaimed defenders of science or I might think this was purely a political statement.

Note that the most recent IPCC reports and newly published studies on climate sensitivity tend to say that 1) warming in the next century will be 1-2C, not the much higher numbers previously forecast; 2) that warming will not be particularly expensive to manage and mitigate; and 3) we are increasingly less sure that warming is causing all sorts of negative knock-on effects like more hurricanes.  In other words, opinion is shifting to where science-based skeptics have been all along (since 2007 in my case).  No surprise or shame here.  What is shameful, though, is that as evidence points more and more to the lukewarmer skeptic position, we are still called evil heretical deniers who should be locked in jail.  It is like telling Galileo, "you were right about that whole heliocentric thing, but we still think you are evil for suggesting it."

Ideological Turing Tests, Climate, and Minimum Wage

Yesterday I was interviewed for a student radio show, I believe from the USC Annenberg school.  I have no quarrel with the staff I worked with; they were all friendly and intelligent.

What depressed me though, as I went through my usual bullet points describing the "lukewarmer" position that is increasingly common among skeptics, was that most of what I said seemed to be new to the interviewer.   It was amazing to see that someone presumably well-exposed to the climate debate would actually not have any real idea what one of the two positions really entailed (see here and here for what I outlined).  This gets me back to the notion I wrote about a while ago about people relying on their allies to tell them everything they need to know about their opponent's position, without ever actually listening to the opponents.

This topic comes up in the blogosphere from time to time, often framed as being able to pass an ideological Turing test.  Can, say, a Republican write a defense of the minimum wage that a reader of the Daily Kos would accept, or will it just come out sounding like a straw man?  I feel like I could do it pretty well, despite being a libertarian opposed to the minimum wage.  For example:

There is a substantial power imbalance between minimum wage workers and employers, such that employers are able to pay such workers far less than their labor is worth, and far less than they would be willing to pay if they had to.  The minimum wage corrects this power imbalance and prevents employers from unfairly exploiting it.  It forces employers to pay employees something closer to a living wage, though at $7.25 an hour the minimum wage is still too low to be humane and needs to be raised.  When companies pay below a living wage, they not only exploit workers but taxpayers as well, since they are accepting a form of corporate welfare when taxpayers (through food stamps and Medicaid and the like) help sustain their underpaid workers.

Opponents of the minimum wage will sometimes argue that higher minimum wages reduce employment.  However, since in most cases employers of low-skilled workers are paying workers less than they are willing and able to pay, raising the minimum wage has little effect on employment.  Studies of the fast food industry by Card and Krueger demonstrated that raising the minimum wage had little effect on employment levels.  And any loss of employment from higher minimum wages would be more than offset by the Keynesian stimulative effect to the economy as a whole of increasing wages among lower income workers, who tend to consume nearly 100% of incremental income.

Despite the fact that I disagree with this position, I feel I understand it pretty well -- far better, I would say, than most global warming alarmists or even media members bother to try to understand the skeptic position.  (I must say that looking back over my argument, it strikes me as more cogent and persuasive than most of the stuff on Daily Kos, so to pass a true Turing test I might have to make it a bit more incoherent).

Back in my consulting days at McKinsey & Company, we had this tradition (in hindsight I would call it almost an affectation) of giving interviewees business cases** to discuss and solve in our job interviews.  If I were running a news outlet, I would require interviewees to take an ideological Turing test -- take an issue and give me the argument for each side in the way that each side would present it.

This, by the way, is probably why Paul Krugman is my least favorite person in journalism.  He knows very well that his opponents have a fairly thoughtful and (to them) well-intentioned argument, but pretends to his readers that no such position exists.  Which is ironic, because in some sense Krugman started the dialog on ideological Turing tests, arguing that liberals can do it easily for conservative positions but conservatives fail at it for liberal positions.

 

** Want an example?  Many of these cases were just strategic choices in some of our consulting work.  But some were more generic, meant to test how one might break down and attack a problem.  One I used from time to time was, "what is the size of the window glass market in Mexico?"  Most applicants were ready for this kind of BS, but I do treasure the look on a few faces of students who had not been warned about such questions.  The point of course was to think it through out loud, i.e., "well, there are different sectors, like buildings and autos.  Each would have both a new and replacement market.  Within buildings there is residential and commercial.  Taking one of these, the new residential market would be driven by new home construction times some factor representing windows per house.  One might need to understand whether Mexican houses used pre-manufactured windows or constructed them from components on the building site," etc.

We Are All Lukewarmers Now

Matt Ridley has another very good editorial in the WSJ that again does a great job of outlining what I think of as the core skeptic position.  Read the whole thing, but a few excerpts:

The United Nations' Intergovernmental Panel on Climate Change will shortly publish the second part of its latest report, on the likely impact of climate change. Government representatives are meeting with scientists in Japan to sex up—sorry, rewrite—a summary of the scientists' accounts of storms, droughts and diseases to come. But the actual report, known as AR5-WGII, is less frightening than its predecessor seven years ago.

The 2007 report was riddled with errors about Himalayan glaciers, the Amazon rain forest, African agriculture, water shortages and other matters, all of which erred in the direction of alarm. This led to a critical appraisal of the report-writing process from a council of national science academies, some of whose recommendations were simply ignored.

Others, however, hit home. According to leaks, this time the full report is much more cautious and vague about worsening cyclones, changes in rainfall, climate-change refugees, and the overall cost of global warming.

It puts the overall cost at less than 2% of GDP for a 2.5 degrees Centigrade (or 4.5 degrees Fahrenheit) temperature increase during this century. This is vastly less than the much heralded prediction of Lord Stern, who said climate change would cost 5%-20% of world GDP in his influential 2006 report for the British government.

It is certainly a strange branch of science where major reports omit a conclusion because that conclusion is not what they wanted to see:

The IPCC's September 2013 report abandoned any attempt to estimate the most likely "sensitivity" of the climate to a doubling of atmospheric carbon dioxide. The explanation, buried in a technical summary not published until January, is that "estimates derived from observed climate change tend to best fit the observed surface and ocean warming for [sensitivity] values in the lower part of the likely range." Translation: The data suggest we probably face less warming than the models indicate, but we would rather not say so.

Readers of this site will recognize this statement.

None of this contradicts basic physics. Doubling carbon dioxide cannot on its own generate more than about 1.1C (2F) of warming, however long it takes. All the putative warming above that level would come from amplifying factors, chiefly related to water vapor and clouds. The net effect of these factors is the subject of contentious debate.
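For reference, the roughly 1.1C no-feedback figure comes from a standard back-of-envelope calculation, sketched below.  The forcing formula is the common logarithmic approximation, and the Planck sensitivity value is an assumed round number, not something taken from the quoted article:

```python
import math

# Radiative forcing from a doubling of CO2, using the standard
# logarithmic approximation: dF = 5.35 * ln(C/C0) in W/m^2
forcing = 5.35 * math.log(2.0)          # ~3.7 W/m^2 for a doubling

# Planck (no-feedback) climate sensitivity parameter, roughly 0.3 K per W/m^2
planck_sensitivity = 0.3                # K / (W/m^2), assumed round value

warming_no_feedback = forcing * planck_sensitivity
print(round(forcing, 2))                # -> 3.71
print(round(warming_no_feedback, 1))    # -> 1.1
```

Everything above 1.1C in the model forecasts has to come from the amplifying factors, which is why the feedback debate matters so much.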

I have reluctantly accepted the lukewarmer title, though I think it is a bit lame.

In climate science, the real debate has never been between "deniers" and the rest, but between "lukewarmers," who think man-made climate change is real but fairly harmless, and those who think the future is alarming. Scientists like Judith Curry of the Georgia Institute of Technology and Richard Lindzen of MIT have moved steadily toward lukewarm views in recent years.

When I make presentations, I like to start with the following (because it gets everyone's attention):  "Yes, I am a denier.  But to say 'denier' implies that one is denying some specific proposition.  What is that proposition?  It can't be 'global warming' because propositions need verbs; otherwise it is like saying one denies weather.  I don't deny that the world has warmed over the last century.  I don't deny that natural factors play a role in this (though many alarmists seem to).  I don't even deny that man has contributed incrementally to this warming.  What I deny is the catastrophe.  Specifically, I deny that man's CO2 will warm the Earth enough to create a catastrophe.  I define 'catastrophe' as an outcome where the costs of immediately reducing CO2 output, with the associated loss in economic growth, would be substantially less than the cost of future adaptation and abatement."

A Bad Chart From My Allies

I try to make it a habit to criticize bad analyses from "my side" of certain debates.  I find this to be a good habit that keeps one from falling for poorly constructed but ideologically tempting arguments.

Here is my example this week, from climate skeptic Steven Goddard.  I generally enjoy his work, and have quoted him before, but this is a bad chart (this is global temperatures as measured by satellite and aggregated by RSS).

click to enlarge

 

 

He is trying to show that the last 17+ years have no temperature trend.  Fine.  But by trying to put a trend line on the earlier period, he creates a mess that understates warming in the earlier years.  He ends up with 17 years with a zero trend and 20 years with a 0.05C-per-decade trend.  Add these up and one would expect about 0.1C of total warming.  But in fact, over this entire period there was, by this data set, 0.3C-0.4C of warming.  He left most of the warming out in the step between the two lines.
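The arithmetic is easy to check, using the illustrative numbers from the paragraph above:

```python
# Warming implied by the two trend lines on the chart
years_flat, trend_flat = 17, 0.0      # recent segment: zero trend (C/decade)
years_early, trend_early = 20, 0.05   # earlier segment: 0.05 C/decade

warming_from_trends = (years_flat * trend_flat + years_early * trend_early) / 10.0
actual_warming = 0.35                 # midpoint of the 0.3-0.4C range in the data

# Whatever the trend lines don't explain is hiding in the step between them
step_change = actual_warming - warming_from_trends
print(warming_from_trends)            # -> 0.1
print(round(step_change, 2))          # -> 0.25
```

In other words, roughly 70% of the warming in the data set ends up in the unexplained step rather than on either trend line.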

Now there are times this might be appropriate.  For example, in the measurement of ocean heat content, there is a step change that occurs right at the point where the measurement approach changed from ship records to the ARGO floats.  One might argue that it is wrong to make a trend through the transition point because the step change was an artifact of the measurement change.  But in this case there was no such measurement change.  And while there was a crazy El Nino year in 1998, I have heard no argument from any quarter as to why there might have been some fundamental change in the climate system around 1997.

So I call foul.  Take the trend line off the blue portion and the graph is much better.

Want to Make Your Reputation in Academia? Here is an Important Class of Problem For Which We Have No Solution Approach

Here is the problem:  There exists a highly dynamic system with an enormous number of variables.  One input is changed.  How much, and in what ways, did that change affect the system?

Here are two examples:

  • The government makes a trillion dollars in deficit spending to try to boost the economy.  Did it do so?  By how much? (This Reason article got me thinking about it)
  • Man's actions increase the amount of CO2 in the atmosphere.  We are fairly confident that this has some warming effect, but how much?  There are big policy differences between the response to a lot and a little.

The difficulty, of course, is that there is no way to do a controlled study, and while one's studied variable is changing, so are thousands, even millions of others.  These two examples have a number of things in common:

  • We know feedbacks play a large role in the answer, but the system is so hard to pin down that we are not even sure of the sign, much less the magnitude, of the feedback.  Do positive feedbacks such as ice melting and increased water vapor multiply CO2 warming many times, or is that warming offset by negative feedback from things like cloud formation?  Similarly in the economy, does deficit spending get multiplied many times as the money gets respent over and over, or is it offset by declines in other categories of spending like business investment?
  • In both examples, we have recent cases where the system has not behaved as expected (at least by some).  The economy remained at best flat after the recent stimulus.  We have not seen global temperatures increase for 15-20 years despite a lot of CO2 production.  Are these evidence that the hypothesized relationship between cause and effect does not exist (or is small), or simply evidence that other effects independently drove the system in the opposite direction, such that, for example, the economy would have been even worse without the stimulus or the world would have cooled without CO2 additions?
  • In both examples, we use computer models not only to predict the future, but to explain the past.  When the government said that the stimulus had worked, they did so based on a computer model whose core assumptions were that stimulus works.  In both fields, we get this sort of circular proof, with the output of computer models that assume a causal relationship being used to prove the causal relationship.

So, for those of you who may think that we are at the end of math (or science), here is a class of problem that is clearly, just from these two examples, enormously important.  And we cannot solve it -- we can't even come close, despite the hubris of Paul Krugman or Michael Mann, who may argue differently.  We are explaining fire with phlogiston.

I have no idea where the solution lies.  Perhaps all we can hope for is a Gödel to tell us the problem is impossible to solve, so stop trying.  Perhaps the seeds of a solution exist but are buried in another discipline (God knows the climate science field often lacks even the most basic connection to math and statistics knowledge).

Maybe I am missing something, but who is even working on this?  By "working on it" I do not mean trying to build incrementally "better" economics or climate models.  Plenty of folks doing that.  But who is working on new approaches to tease out relationships in complex multi-variable systems?

Global Warming Updates

I have not been blogging climate much because none of the debates ever change.  So here are some quick updates

  • 67% to 90% of all warming in climate forecasts still comes from assumptions of strong positive feedback in the climate system, rather than from CO2 warming per se (ie models still assume the roughly 1 degree of direct CO2 warming is multiplied 3-10 times by positive feedbacks)
  • Studies are still mixed about the direction of feedbacks, with as many showing negative as positive feedback.  No study that I have seen supports positive feedbacks as large as those used in many climate models
  • As a result, climate models are systematically exaggerating warming (from Roy Spencer, click to enlarge).  Note that the conformance through 1998 is nothing to get excited about -- most models were rewritten after that date and likely had plugs and adjustments to force the historical match.

click to enlarge
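The feedback arithmetic behind that first bullet is simple enough to sketch, using the standard linear feedback relation dT = dT0 / (1 - f).  The 1.1C no-feedback figure below is the conventional value, assumed here for illustration:

```python
# Standard linear-feedback relation: total warming = no-feedback warming / (1 - f)
dT0 = 1.1  # C, approximate no-feedback warming from a doubling of CO2

for f in (0.67, 0.9):                       # feedback fractions implied by the models
    dT = dT0 / (1 - f)                      # total warming after feedback
    share_from_feedback = 1 - dT0 / dT      # fraction of warming due to feedback alone
    print(round(dT, 1), round(share_from_feedback, 2))
# -> 3.3 0.67
# -> 11.0 0.9
```

So a feedback fraction of 0.67 turns 1.1C into about 3.3C, and a fraction of 0.9 turns it into 11C, which is exactly the "1 degree multiplied 3-10 times" framing, with 67%-90% of the forecast warming coming from the feedback rather than the CO2 itself.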

 

  • To defend the forecasts, modellers are increasingly blaming the miss on natural effects like solar cycles -- natural effects that the same modellers insisted were inherently trivial contributions when skeptics used them to explain part of the temperature rise from 1978-1998.
  • By the way, 1978-1998 is still the only period since 1940 when temperatures actually rose, such that increasingly all catastrophic forecasts rely on extrapolations from this one 20-year period. Seriously, look for yourself.
  • Alarmists are still blaming every two- or three-sigma weather pattern on CO2-driven global warming (polar vortex, sigh).
  • Even when weather is moderate, media hyping of weather events has everyone convinced weather is more extreme, when it is not. (effect explained in context of Summer of the Shark)
  • My temperature forecast from 2007 is still doing well.  Back in '07 I regressed temperature history to a linear trend plus a sine wave.

click to enlarge
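For the curious, a linear-plus-sine regression of this sort is straightforward to reproduce.  Here is a sketch on synthetic data; the ~66-year period and all the numbers below are my illustrative assumptions, not the parameters of the original 2007 fit.  With the period fixed, the problem reduces to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2014)
period = 66.0                     # assumed ocean-cycle period, years
omega = 2 * np.pi / period

# Synthetic "temperature anomaly": linear warming + cycle + noise
truth = 0.005 * (years - 1900) + 0.15 * np.sin(omega * years)
temps = truth + rng.normal(0, 0.05, years.size)

# Design matrix: intercept, linear term, sin and cos (cos term absorbs the phase)
X = np.column_stack([np.ones(years.size),
                     years - 1900,
                     np.sin(omega * years),
                     np.cos(omega * years)])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
intercept, slope, c_sin, c_cos = coef
amplitude = np.hypot(c_sin, c_cos)

print(round(slope * 100, 2))      # recovered linear trend, C per century (~0.5)
print(round(amplitude, 2))        # recovered cycle amplitude, C (~0.15)
```

The point of the exercise is that a modest linear trend plus a multi-decade cycle can reproduce both the 1978-1998 warming and the subsequent flat period without invoking high sensitivity.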

Congratulations to Nature Magazine for Catching up to Bloggers

The journal Nature has finally caught up to the fact that ocean cycles may influence global surface temperature trends.  Climate alarmists refused to acknowledge this when temperatures were rising and the cycles were in their warm phase, but now are grasping at these cycles to explain the 15+ year hiatus in warming, as a way to avoid abandoning high climate sensitivity assumptions (ie the sensitivity of global temperatures to CO2 concentrations, which IMO is exaggerated by implausible assumptions of positive feedback).

Here is the chart from Nature:

click to enlarge

 

I cannot find my first use of this chart, but here is a version I was using over 5 years ago.  I know I was using it long before that.

click to enlarge

 

It will be interesting to see if they find a way to blame cycles for cooling in the last 10-15 years but not for the warming in the 80's and 90's.

Next step -- alarmists have the same epiphany about the sun, and blame non-warming on a low solar cycle without simultaneously giving previous high solar cycles any credit for warming.  For Nature's benefit, here is another chart they might use (from the same 2008 blog post).  The number 50 below is selected arbitrarily, but does a good job of highlighting solar activity in the second half of the 20th century vs. the first half.

click to enlarge

 

Global Warming: The Unfalsifiable Hypothesis

This is hilarious.  Apparently the polar vortex proves whatever hypothesis you are trying to prove, either cooling or warming:

Steven Goddard of the Real Science blog has the goods on Time magazine.  From the 1974 Time article “Another Ice Age?”:

Scientists have found other indications of global cooling. For one thing there has been a noticeable expansion of the great belt of dry, high-altitude polar winds —the so-called circumpolar vortex—that sweep from west to east around the top and bottom of the world.

And guess what Time is saying this week?  Yup:

But not only does the cold spell not disprove climate change, it may well be that global warming could be making the occasional bout of extreme cold weather in the U.S. even more likely. Right now much of the U.S. is in the grip of a polar vortex, which is pretty much what it sounds like: a whirlwind of extremely cold, extremely dense air that forms near the poles. Usually the fast winds in the vortex—which can top 100 mph (161 km/h)—keep that cold air locked up in the Arctic. But when the winds weaken, the vortex can begin to wobble like a drunk on his fourth martini, and the Arctic air can escape and spill southward, bringing Arctic weather with it. In this case, nearly the entire polar vortex has tumbled southward, leading to record-breaking cold.

If You Don't Like People Saying That Climate Science is Absurd, Stop Publishing Absurd Un-Scientific Charts

Kevin Drum can't believe the folks at the National Review are still calling global warming science a "myth".  As is usual for global warming supporters, he wraps himself in the mantle of science while implying that those who don't toe the line on the declared consensus are somehow anti-science.

Readers will know that as a lukewarmer, I have as little patience with outright CO2 warming deniers as I do with those declaring a catastrophe  (for my views read this and this).  But if you are going to simply be thunderstruck that some people don't trust climate scientists, then don't post a chart that is a great example of why people think that a lot of global warming science is garbage.  Here is Drum's chart:

la-sci-climate-warming

 

The problem is that his chart is a splice of multiple data series with very different time resolutions.  The series up to about 1850 has data points taken at best every 50 years, and likely at intervals of 100-200 years or more.  It is smoothed so that temperature shifts shorter than about 200 years won't show up at all.

In contrast, the data series after 1850 has data sampled every day or even hour.  It has a sampling interval 6 orders of magnitude (over a million times) more frequent.  It by definition is smoothed on a time scale substantially shorter than the rest of the data.

In addition, these two data sets use entirely different measurement techniques.  The modern data comes from thermometers and satellites, measurement approaches that we understand fairly well.  The earlier data comes from some sort of proxy analysis (ice cores, tree rings, sediments, etc.)  While we know these proxies generally change with temperature, there are still a lot of questions as to their accuracy and, perhaps more importantly for us here, whether they vary linearly or have any sort of attenuation of the peaks.  For example, recent warming has not shown up as strongly in tree ring proxies, raising the question of whether they may also be missing rapid temperature changes or peaks in earlier data for which we don't have thermometers to back-check them (this is an oft-discussed problem called proxy divergence).

The problem is not the accuracy of the data for the last 100 years, though we could quibble that it is perhaps exaggerated by a few tenths of a degree.  The problem is with the historic data and using it as a valid comparison to recent data.  Even a 100-year increase of about a degree would, in the data series before 1850, be at most a single data point.  If the sampling is on 200-year intervals, there is a 50-50 chance a 100-year spike would be missed entirely in the historic data.  And even if it were in the data as a single data point, it would be smoothed out at this data scale.

Do you really think that there was never a 100-year period in those last 10,000 years where the temperatures varied by more than 0.1F, as implied by this chart?  This chart has a data set that is smoothed to signals no finer than about 200 years and compares it to recent data with no such filter.  It is like comparing the annualized GDP increase for the last quarter to the average annual GDP increase for the entire 19th century.   It is easy to demonstrate how silly this is.  If you cut the chart off at say 1950, before much anthropogenic effect will have occurred, it would still look like this, with an anomalous spike at the right (just a bit shorter).  If you believe this analysis, you have to believe that there is an unprecedented spike at the end even without anthropogenic effects.
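A toy simulation makes the point concrete.  All the numbers here are made up for illustration: a 1-degree, 100-year spike in a 10,000-year record, viewed at annual resolution versus a proxy-style 200-year sampling with light smoothing:

```python
import numpy as np

years = np.arange(10_000)
annual = np.zeros(10_000)

# Case 1: a 100-year, 1-degree spike that happens to align with a sample point
annual[5000:5100] = 1.0
sampled = annual[::200]                        # one reading every 200 years
smoothed = np.convolve(sampled, np.ones(3) / 3, mode="same")
print(annual.max(), sampled.max(), round(smoothed.max(), 2))
# annual data sees the full 1.0; the sampled-and-smoothed series shows only ~0.33

# Case 2: the same spike shifted 50 years falls between sample points
annual[:] = 0.0
annual[5050:5150] = 1.0
sampled = annual[::200]
print(sampled.max())                           # -> 0.0, the spike is missed entirely
```

So at this resolution a modern-sized warming episode either disappears or is attenuated to a fraction of its true size, which is exactly why splicing high-resolution modern data onto the end of such a series produces a misleading "unprecedented" spike.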

There are several other issues with this chart that make it laughably bad for someone to use in the context of arguing that he is the true defender of scientific integrity:

  • The grey range band is if anything an even bigger scientific absurdity than the main data line.  Are they really trying to argue that there were no years, or decades, or even whole centuries that deviated from a 0.7F baseline anomaly by more than 0.3F for the entire 4000-year period from 7500 years ago to 3500 years ago?  I will bet just about anything that the error bars on this analysis should be more than 0.3F, to say nothing of the range of variability around the mean.  Any natural scientist worth his or her salt would laugh this out of the room.  It is absurd.  But here it is presented as climate science in the exact same article in which the author expresses dismay that anyone would distrust climate science.
  • A more minor point, but one that disguises the sampling frequency problem a bit, is that the last dark brown shaded area on the right that is labelled "the last 100 years" is actually at least 300 years wide.  Based on the scale, a hundred years should be about one dot on the x axis.  This means that 100 years is less than the width of the red line, and the last 60 years or the real anthropogenic period is less than half the width of the red line.  We are talking about a temperature change whose duration is half the width of the red line, which hopefully gives you some idea why I say the data sampling and smoothing processes would disguise any past periods similar to the most recent one.

Update:  Kevin Drum posted a defense of this chart on Twitter.  Here it is:  "It was published in Science."  Well folks, there is the climate debate in a nutshell:  a 1,000-word dissection of what appears to be wrong with a particular analysis, retorted by a five-word appeal to authority.

Update #2:  I have explained the issue with a parallel flawed analysis from politics where Drum is more likely to see the flaws.

Want to Save The Ice in the Arctic?

I wrote below about Chinese pollution, but here is one other thought.  Shifting Chinese focus from reducing CO2 with unproven 21st century technology to reducing particulates with 1970s technology would be a great boon for its citizens.  But it could well have one other effect:

It might reverse the warming in the Arctic.

The reduction of Arctic ice sheet size in the summer, and the warming of the Arctic over the last several decades, is generally attributed to greenhouse warming.  But there are reasons to doubt that CO2 is the whole story.  One is that the sea ice extent in Antarctica has actually been growing at the same time the Arctic sea ice cover has been shrinking.  Maybe there is another explanation, one that affects only the northern hemisphere and not the southern?

I don't know if you have snow right now, or even ever get snow.  If you do, find some black dust, like coal dust or dark dirt, and sprinkle it on a patch of snow.  Then come back tomorrow.  What will you find?  The patch of snow you sprinkled with dark dust melted a lot in comparison to the rest of the snow.  This is an albedo effect.  Snow takes a while to melt because it reflects rather than absorbs solar radiation.  Putting black dust on it changes that equation, and suddenly solar radiation is absorbed as heat, and the snow melts.  Fast.  I know this because I run a sledding hill in the wintertime, where snow falls on a black cinder hill.  The snow will last until even the smallest patch of black cinders is exposed.  Once exposed, that small hole will grow like a cancer, as it absorbs solar energy and pumps it into the surrounding ground.

By the way, if you have no snow, Accuweather.com did the experiment for you.  See here.  Very nice pictures that make the story really clear.

So consider this mess:

china_pollution_ap971430398958_620x350

Eventually that mess blows away.  Where does it end up?  Well, a lot of it ends up deposited in the Arctic, on top of the sea ice and Greenland ice sheet.

There is a growing hypothesis that this black carbon deposited on the ice from China is causing much of the sea ice to melt faster.  And as the ice sheet melts faster, this lowers the albedo of the arctic, and creates warming.  In this hypothesis, warming follows from ice melting, rather than vice versa.

How do we test this?  Well, the best way would be to go out and actually measure the deposits and calculate the albedo changes from them.  My sense is that this work is starting to be done (example), but it has been slow, because nearly everyone interested in Arctic ice of late is a strong global warming proponent with incentives not to find an alternative explanation for melting ice.

But here are two quick mental experiments we can do:

  1. We already mentioned one test.  Wind patterns cause most pollution to remain within the hemisphere (northern or southern) where it was generated.  So we would expect black carbon ice melting to be limited to the Arctic and not be seen in the Antarctic.  This fits observations.
  2. In the winter, as the sea ice is growing, we would expect new ice to be free of particulate deposits and any new deposits to be quickly covered in snow.  This would mean that winter ice extents should be about the same as they were historically, with most of the ice extent reduction coming in the summer.  Again, this is exactly what we see.

This is by no means a proof -- there are other explanations for the same data.  But I am convinced we would see at least a partial sea ice recovery in the Arctic if China could get their particulate emissions under control.

Update:  Melt ponds in Greenland are black with coal dust

 

Global Warming Folly

I have not written much about climate of late because my interest, err, runs hot and cold.  As most readers know, I am in the lukewarmer camp, meaning that I accept that CO2 is a greenhouse gas but believe that catastrophic warming forecasts are greatly exaggerated (in large part by scientifically unsupportable assumptions of strong net positive feedback in the climate system).  If what I just said is in any way news to you, read this and this for background.

Anyway, one thing I have been saying for about 8 years is that when the history of the environmental movement is written, the global warming obsession will be considered a great folly.  This is because global warming has sucked all the air out of almost everything else in the environmental movement.  For God's sake, the other day the Obama Administration OK'd the wind industry killing more protected birds in a month than the oil industry has killed in its entire history.  Every day the rain forest in the Amazon is cleared away a bit further to make room for ethanol-making crops.

This picture is a great example of what I mean.  Here is a recent photo from China:

20131211_china1

 

You might reasonably say, well that pollution is from the burning of fossil fuels, and the global warming folks want to reduce fossil fuel use, so aren't they trying to fight this?  And the answer is yes, tangentially.   But here is the problem:  It is an order of magnitude or more cheaper to eliminate polluting byproducts of fossil fuel combustion than it is to eliminate fossil fuel combustion altogether.

What do I mean?  China gets a lot of pressure to reduce its carbon emissions, since it is the largest emitter in the world.  So it might build a wind project, or some solar, or some expensive high speed rail to reduce fossil fuel use.  Let's say any one of these actions reduces smog and sulfur dioxide and particulate pollution (as seen in this photo) by X through reduction in fossil fuel use.  Now, let's take whatever money we spent in, say, a wind project to get X improvement and instead invest it in emissions control technologies that the US has used for decades (coal plant scrubbers, gasoline blending changes, etc) -- invest in making fossil fuel use cleaner, not in eliminating it altogether.  This same money invested in this way would get 10X, maybe even up to 100X improvement in these emissions.

By pressuring China on carbon, we have unwittingly helped enable their pollution problem.  We are trying to get them to do 21st-century things that the US can't even figure out how to do economically, when what they really need to be doing is 1970s things that would be relatively easy to do and would have a much bigger impact on their citizens' well-being.

Climate Humor from the New York Times

Though this is hilarious, I am pretty sure Thomas Lovejoy is serious when he writes

But the complete candor and transparency of the [IPCC] panel’s findings should be recognized and applauded. This is science sticking with the facts. It does not mean that global warming is not a problem; indeed it is a really big problem.

This is a howler.  Two quick examples.  First, every past IPCC report summary has had estimates for climate sensitivity, ie the amount of temperature increase they expect for a doubling of CO2 levels.  Coming into this IPCC report, emerging evidence from recent studies has been that the climate sensitivity is much lower than previous estimates.  So what did the "transparent" IPCC do?  They, for the first time, just left out the estimate rather than be forced to publish one that was lower than the last report.

The second example relates to the fact that temperatures have been flat over the last 15-17 years and as a result, every single climate model has overestimated temperatures.  By a lot. In a draft version, the IPCC created this chart (the red dots were added by Steve McIntyre after the chart was made as the new data came in).

figure-1-4-models-vs-observations-annotated (1)

 

This chart was consistent with a number of peer-reviewed studies that assessed the performance of climate models.  Well, this chart was a little too much "candor" for the transparent IPCC, so they replaced it with this chart in the final draft:

figure-1-4-final-models-vs-observations

 

What a mess!  They have made the area we want to look at between 1990 and the present really tiny, and then they have somehow shifted the forecast envelopes down on several of the past reports so that suddenly current measurements are within the bands.   They also hide the bottom of the fourth assessment band (orange FAR) so you can't see that observations are out of the envelope of the last report.  No one so far can figure out how they got the numbers in this chart, and it does not match any peer-reviewed work.  Steve McIntyre is trying to figure it out.

OK, so now that we are on the subject of climate models, here is the second hilarious thing Lovejoy said:

Does the leveling-off of temperatures mean that the climate models used to track them are seriously flawed? Not really. It is important to remember that models are used so that we can understand where the Earth system is headed.

Does this make any sense at all?  Try it in a different context:  The Fed said the fact that their economic models failed to predict what actually happened over the last 15 years is irrelevant because the models are only used to see where the economy is headed.

The consistent theme of this report is declining certainty and declining chances of catastrophe, two facts that the IPCC works as hard as possible to obfuscate but which still come out pretty clearly as one reads the report.

The Key Disconnect in the Climate Debate

Much of the climate debate turns on a single logical fallacy.  This fallacy is clearly on display in some comments by UK Prime Minister David Cameron:

It’s worth looking at what this report this week says – that [there is a] 95 per cent certainty that human activity is altering the climate. I think I said this almost 10 years ago: if someone came to you and said there is a 95 per cent chance that your house might burn down, even if you are in the 5 per cent that doesn’t agree with it, you still take out the insurance, just in case.”

"Human activity altering climate" is not the same thing as an environmental catastrophe (or one's house burning down).  The statement that he is 95% certain that human activity is altering climate is one that most skeptics (including myself) are 100% sure is true.  There is evidence that human activity has been altering the climate since the dawn of agriculture.  Man's changing land uses have been demonstrated to alter climate, and certainly man's incremental CO2 is raising temperatures somewhat.

The key question is -- by how much?  This is a totally different question, and, as I have written before, is largely dependent on climate theories unrelated to greenhouse gas theory, specifically the theory that the Earth's climate system is dominated by large positive feedbacks.  (Roy Spencer has a good summary of the issue here.)

The catastrophe is so uncertain that for the first time, the IPCC left estimates of climate sensitivity to CO2 out of its recently released summary for policy makers, mainly because it was not ready to (or did not want to) deal with a number of recent studies yielding sensitivity numbers well below catastrophic levels.  Further, the IPCC nearly entirely punted on the key question of how it can reconcile its past high sensitivity/ high feedback based temperature forecasts with past relative modest measured warming rates, including a 15+ year pause in warming which none of its models predicted.

The overall tone of the new IPCC report is one of declining certainty -- they are less confident of their sensitivity numbers and less confident of their models which have all been a total failure over the last 15 years. They have also backed off of other statements, for example saying they are far less confident that warming is leading to severe weather.

Most skeptics are sure mankind is affecting climate somewhat, but believe that this effect will not be catastrophic.  On both fronts, the IPCC is slowly catching up to us.

Hearing What You Want to Hear from the Climate Report

After over 15 years of no warming, which the IPCC still cannot explain, and with climate sensitivity numbers dropping so much in recent studies that the IPCC left climate sensitivity estimates out of their summary report rather than address the drop, the Weather Channel is running this headline on their site:

weatherch

 

The IPCC does claim more confidence that warming over the past 60 years is partly or mostly due to man, up from 90% to 95% (I have not yet seen the exact wording they landed on).  But this is odd given that the warming all came from 1978 to 1998 (see for yourself in the temperature data about halfway through this post).  Temperatures were flat or cooling for the other 40 years of the period.  The IPCC cannot explain these 40 years of no warming in the context of high temperature sensitivities to CO2.  And they can't explain why they can be 95% confident of what drove temperatures in the 20-year period of 1978-1998 but simultaneously have no clue what drove temperatures in the other years.

At some point I will read the thing and comment further.

 

Appeals to Authority

A reader sends me a story of a global warming activist who clearly doesn't know even the most basic facts about global warming.  Since this article is about avoiding appeals to authority, I hate to ask you to take my word for it, but it is simply impossible to immerse oneself in the science of global warming for any amount of time without being able to immediately rattle off the four major global temperature databases (or at least one of them!)

I don't typically find it very compelling to knock a particular point of view just because one of its defenders is a moron, unless that defender has been set up as a quasi-official representative of that point of view (e.g. Al Gore).  After all, there are plenty of folks on my side of issues, including those who are voicing opinions skeptical of catastrophic global warming, who are making screwed up arguments.

However, I have found over time this to be an absolutely typical situation in the global warming advocacy world.  Every single time I have publicly debated this issue, I have understood the opposing argument, ie the argument for catastrophic global warming, better than my opponent.   In fact, I finally had to write a first chapter to my usual presentation.  In this preamble, I outline the case and evidence for manmade global warming so the audience could understand it before I then set out to refute it.

The problem is that the global warming alarm movement has come to rely very heavily on appeals to authority and ad hominem attacks in making its case.  What headlines do you see?  97% of scientists agree, the IPCC is 95% sure, etc.  These "studies", which Lord Monckton (with whom I often disagree, but who can be very clever) calls "no better than a show of hands", dominate the news.  When have you ever seen a story in the media about the core issue of global warming, which is diagnosing whether positive feedbacks truly multiply small bits of manmade warming to catastrophic levels?  The answer is never.

Global warming advocates thus have failed to learn how to really argue the science of their theory.  In their echo chambers, they have all agreed that saying "the science is settled" over and over and then responding to criticism by saying "skeptics are just like tobacco lawyers and holocaust deniers and are paid off by oil companies" represents a sufficient argument.**  Which means that in an actual debate, they can be surprisingly easy to rip to pieces.  Which may be why most, taking Al Gore's lead, refuse to debate.

All of this is particularly ironic since it is the global warming alarmists who try to wrap themselves in the mantle of the defenders of science.  Ironic because the scientific revolution began only when men and women were willing to reject appeals to authority and try to understand things for themselves.

 

** Another very typical tactic:  They will present whole presentations without a single citation.   But make one statement in your rebuttal as a skeptic that is not backed with a named, peer-reviewed study, and they will call you out on it.  I remember in one presentation, I was presenting some material that was based on my own analysis.  "But this is not peer-reviewed," said one participant, implying that it should therefore be ignored.  I retorted that it was basic math, that the data sources were all cited, and that they were my peers -- review it.  Use your brains.  Does it make sense?  Is there a flaw?  But they don't want to do that.  Increasingly, oddly, science is about having officially licensed scientists deliver findings to them on a platter.

Great Moments in Predictions -- Al Gore's Ice Forecast

Via Icecap (I still don't think they have permalinks that work)

In his Dec. 10, 2007 “Earth has a fever” speech, Gore referred to a prediction by U.S. climate scientist Wieslaw Maslowski that the Arctic’s summer ice could “completely disappear” by 2013 due to global warming caused by carbon emissions.

Gore said that on Sept. 21, 2007, “scientists reported with unprecedented alarm that the North Polar icecap is, in their words, ‘falling off a cliff.’ One study estimated that it could be completely gone during summer in less than 22 years. Another new study to be presented by U.S. Navy researchers later this week warns that it could happen in as little as seven years, seven years from now.”

Maslowski told members of the American Geophysical Union in 2007 that the Arctic’s summer ice could completely disappear within the decade. “If anything,” he said, “our projection of 2013 for the removal of ice in summer...is already too conservative.”

The former vice president also warned that rising temperatures were “a planetary emergency and a threat to the survival of our civilization.”

However, instead of completely melting away, the polar icecap is now at its highest level for this time of year since 2006.

Some Responsible Press Coverage of Record Temperatures

The Phoenix New Times blog had a fairly remarkable story on a record-hot Phoenix summer.  The core of the article is a chart from the NOAA.  There are three things to notice in it:

  • The article actually acknowledges that higher temperatures were due to higher night-time lows rather than higher daytime highs.  Any mention of this is exceedingly rare in media stories on temperatures, perhaps because the idea of a higher low is confusing to communicate.
  • It actually attributes urban warming to the urban heat island effect
  • It makes no mention of global warming

Here is the graphic:

hottest-summer

 

This puts me in the odd role of switching sides, so to speak, and observing that greenhouse warming could very likely manifest itself as rising nighttime lows (rather than rising daytime highs).  I can only assume the surrounding area of Arizona did not see the same sort of records, which would support the theory that this is a UHI effect.

Phoenix has a huge urban heat island effect, which my son actually measured.  At 9-10 in the evening, we measured a temperature differential of 8-12F from city center to rural areas outside the city.  By the way, this is a fabulous science fair project if you know a junior high or high school student trying to do something different than growing bean plants under different color lights.

Update On My Climate Model (Spoiler: It's Doing a Lot Better than the Pros)

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • A second linear trend, beginning in 1945, that adds another 0.35C per century.  This represents the combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century.  (Note that this is way, way below the mainstream IPCC estimates of the historic contribution of CO2, as it implies the maximum historic contribution is less than 0.2C.)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.
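For concreteness, the three components can be sketched in a few lines of code.  The 0.4C/century base trend, the extra 0.35C/century after 1945, and the 63-year period come from the bullets above; the cycle amplitude and peak year are not stated in the post, so the values below are illustrative assumptions:

```python
import numpy as np

def coyote_model(years, cycle_amp=0.2, cycle_peak=2000.0):
    """Temperature anomaly (deg C) from three components: base trend,
    post-1945 trend, and a zero-mean 63-year cycle.

    cycle_amp and cycle_peak are assumed values for illustration; the two
    linear trends and the 63-year period are taken from the post.
    """
    years = np.asarray(years, dtype=float)
    base = 0.004 * (years - 1900)                     # 0.4C per century, long-term
    post45 = 0.0035 * np.clip(years - 1945, 0, None)  # extra 0.35C/century after 1945
    cycle = cycle_amp * np.cos(2 * np.pi * (years - cycle_peak) / 63.0)
    return base + post45 + cycle

anomalies = coyote_model(np.arange(1900, 2014))
```

Summed and re-centered against the Hadley CRUT4 baseline, these components produce the charts that follow.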

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of Hadley CRUT4 temperature anomalies).  The two linear trends (click on any image in this post to enlarge it):

click to enlarge

 

And the cyclic trend:

click to enlarge

These two charts are simply added and then can be compared to actual temperatures.  This is the way the comparison looked in 2007 when I first created this "model":

click to enlarge

The historic match is no great feat.  The model was admittedly tuned to match history (yes, unlike the pros who all tune their models, I admit it).  The linear trends as well as the sine wave period and amplitude were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus a sine wave matches history so well, particularly since it assumes such a small contribution from CO2 and since, in prior IPCC reports, the IPCC and most modelers simply refused to include cyclic functions like the AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease, in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated Hadley CRUT4 data through July, 2013, this is how we are doing:

click to enlarge

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun's output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of their conclusions about past warming rates and CO2 from the period 1978-1998.  They may talk about "since 1950", but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20 year window.  During that 20-year window, though, solar activity, the PDO and the AMO were also all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, they would have to reduce the amount of warming scored to CO2 between 1978 and 1998 and thus their large future warming forecasts would have become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years.  This is difficult for the IPCC to explain, since essentially none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence level of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else explain why they are still correct despite missing the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to... ocean cycles and the sun (those things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend actual temperatures hit a 30-year flat spot after the year 2000.   So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.
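That a steady underlying trend can be temporarily masked by a cycle is easy to verify with a toy calculation.  The 0.75C-per-century trend is the post's figure; the cycle amplitude below is an assumed illustrative value, with the cycle peaking near 2000:

```python
import numpy as np

years = np.arange(2000, 2031, dtype=float)
trend = 0.0075 * (years - 2000)                        # 0.75C per century underlying trend
cycle = 0.2 * np.cos(2 * np.pi * (years - 2000) / 63)  # assumed amplitude, peak at 2000
temps = trend + cycle

slope = np.polyfit(years, temps, 1)[0]  # least-squares trend in deg C per year
print(slope)  # flat to slightly negative, despite the +0.0075/yr underlying trend
```

A 63-year cycle entering its cooling half just after 2000 more than offsets the linear trend for roughly three decades, which is exactly the flat spot in the model chart above.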

BUT.  And this is a big but.  You can also see from my model that you can't assume that these factors caused the current "pause" in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not increase warming during the 20 years before that.

We shall see.  To be continued....

Update:  Seriously, on a relative basis, I am kicking ass

click to enlarge

Trend That is Not A Trend: Rolling Stone Wildfire Article

Rolling Stone brings us an absolutely great example of an article that claims a trend without actually showing the trend data, and where the actual data point to a trend in the opposite direction as the one claimed.

I won't go into the conclusions of the article.  Suffice it to say it is as polemical as anything I have read of late and could be subtitled "the Tea Party and Republicans suck."  Apparently Republicans are wrong to criticize government wildfire management and do so only because they suck, and the government should not spend any effort to fight wildfires that threaten private property but does so only because Republicans, who suck, make them.  Or something.

What I want to delve into is the claim by the author that wildfires are increasing due to global warming, and only evil Republicans (who suck) could possibly deny this obvious trend (numbers in parenthesis added so I can reference passages below):

 But the United States is facing an even more basic question: How should we manage fire, given the fact that, thanks to climate change, the destruction potential for wildfires across the nation has never been greater? In the past decade alone, at least 10 states – from Alaska to Florida – have been hit by the largest or most destructive wildfires in their respective histories (1). Nationally, the cost of fighting fires has increased from $1.1 billion in 1994 to $2.7 billion in 2011.(2)

The line separating "fire season" from the rest of the year is becoming blurry. A wildfire that began in Colorado in early October continued smoldering into May of this year. Arizona's first wildfire of 2013 began in February, months ahead of the traditional firefighting season(3). A year-round fire season may be the new normal. The danger is particularly acute in the Intermountain West, but with drought and record-high temperatures in the Northwest, Midwest, South and Southeast over the past several years, the threat is spreading to the point that few regions can be considered safe....

For wildland firefighters, the debate about global warming was over years ago. "On the fire lines, it is clear," fire geographer Michael Medler told a House committee in 2007. "Global warming is changing fire behavior, creating longer fire seasons and causing more frequent, large-scale, high-severity wildfires."...(4)

Scientists have cited climate change as a major contributor in some of the biggest wildfires in recent years, including the massive Siberian fires during a record heat wave in 2010 and the bushfires that killed 173 people in Australia in 2009.(5)...

The problem is especially acute in Arizona, where average annual temperatures have risen nearly three-quarters of a degree Fahrenheit each decade since 1970, making it the fastest­-warming state in the nation. Over the same period, the average annual number of Arizona wildfires on more than 1,000 acres has nearly quadrupled, a record unsurpassed by any other state and matched only by Idaho. One-quarter of Arizona's signature ponderosa pine and mixed-conifer forests have burned in just the past decade. (6)...

At a Senate hearing in June, United States Forest Service Chief Thomas Tidwell testified that the average wildfire today burns twice as many acres as it did 40 years ago(7). "In 2012, over 9.3 million acres burned in the United States," he said – an area larger than New Jersey, Connecticut and Delaware combined. Tidwell warned that the outlook for this year's fire season was particularly grave, with nearly 400 million acres – more than double the size of Texas – at a moderate-to-high risk of burning.(8)

These are the 8 statements I can find to support an upward trend in fires.  And you will note, I hope, that none of them include the most obvious data - what has the actual trend been in number of US wildfires and acres burned.  Each of these is either a statement of opinion or a data point related to fire severity in a particular year, but none actually address the point at hand:  are we getting more and larger fires?

Maybe the data does not exist.  But in fact it does, and I will say there is absolutely no way, no way, the author has not seen the data.  The reason it is not in this article is that it does not fit the "reporter's" point of view, so it was left out.  Here is where the US government tracks fires by year, at the National Interagency Fire Center.   To save you clicking through, here is the data as of this moment:

click to enlarge fires 2013 to date

 

Well, what do you know?  The number of fires and the acres burned in 2013 are not some sort of record high -- in fact they are, respectively, the lowest and second-lowest numbers of the last 10 years.  Indeed, both the number of fires and the total acres burned are running a third below average.

The one thing this does not address is the size of fires.  The author implies that there are more fires burning more acres, which we see is clearly wrong, but perhaps the fires are getting larger?  Well, 2012 was indeed an outlier year in that fires were larger than average, but 2013 has returned to the trend, which has actually been flat to down, again exactly the opposite of the author's contention (data below is just math from the chart above).

Click to enlarge

 

In the rest of the post, I will briefly walk through his 8 statements highlighted above and show why they exhibit many of the classic fallacies in trying to assert a trend where none exists.  In the postscript, I will address one other inconsistency from the article as to the cause of these fires, which is a pretty hilarious example of how to turn any data into support for your hypothesis, even if it is unrelated.  Now to his 8 statements:

(1) Again, no trend here, this is simply a single data point.  He says that 10 states have, in one year or another over the last decade, set a record for one of two variables related to fires.  With 50 states and 2 variables, we have 100 measurements that can potentially hit a record in any one year.  So if we have measured fires and fire damage for about 100 years (about the age of the US Forest Service), then we would expect on average 10 new records every decade, exactly what the author found.  Further, at least one of these -- costliness of the fires -- should be increasing over time due to higher property valuations and inflation, factors I am betting the author did not adjust for.
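The expected-records arithmetic in (1) can be checked by simulation.  With 100 stable (trendless) series observed for 100 years, each series' single all-time record is equally likely to fall in any year, so about 10 of the 100 current records should date from the most recent decade even with no trend at all:

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_years, trials = 100, 100, 2000

recent_records = 0
for _ in range(trials):
    data = rng.standard_normal((n_series, n_years))  # stable climate: no trend
    record_year = data.argmax(axis=1)                # year each all-time record was set
    recent_records += int((record_year >= n_years - 10).sum())

print(recent_records / trials)  # averages close to 10 records in the final decade
```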

(2)  This cost increase over 17 years represents a 5.4% per year inflation.  It is very possible this is entirely due to changes in firefighting unit costs and methods rather than any change in underlying fire counts.
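The 5.4% figure in (2) is just the compound annual growth rate implied by the two budget numbers in the quote:

```python
# $1.1 billion in 1994 growing to $2.7 billion in 2011: 17 years of compounding
start, end, years = 1.1, 2.7, 2011 - 1994
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 5.4% per year
```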

(3) This is idiotic, a desperate reach by an author with an axe to grind.  Wildfires in Arizona often occur out of fire season.   Having a single fire in the winter means nothing.

(4) Again, we know the data does not support the point.  If the data does not support your point, find some "authority" that will say it is true.  There is always someone somewhere who will say anything is true.

(5) It is true that there are scientists who have blamed global warming for these fires.  Left unmentioned is that there are also scientists who think it is impossible to parse the effect of a 0.5C increase in global temperatures from all the other potential causes of individual weather events and disasters.  If there is no data to support a trend in the mean, it is absolutely irresponsible to claim causality from isolated data points in the tails of the distribution.

(6) The idea that temperatures in Arizona have risen nearly three-quarters of a degree F per decade for four decades is madness.  Not even close.  This would be 3F, and there is simply no basis in any reputable data base I have seen to support this.  It is potentially possible to find a few AZ urban thermometers showing temperature increases of this magnitude, but they would be measuring mostly urban heat island effects, not the rural temperatures that drive wildfires (more discussion here).  The statement that "the average annual number of Arizona wildfires on more than 1,000 acres has nearly quadrupled" is so awkwardly worded we have to suspect the author is reaching here.  In fact, since wildfires average about 100 acres, a 1,000-acre fire is going to be rare.  My bet is that this is volatility in small numbers (e.g. 1 to 4) rather than a real trend.  His final statement that "One-quarter of Arizona's signature ponderosa pine and mixed-conifer forests have burned in just the past decade" is extremely disingenuous.  The reader will be forgiven for thinking that a quarter of the trees in Arizona have burned.  But in fact this only means there have been fires in a quarter of the forests -- a single tree burning in one forest would likely count that whole forest as having burned for this metric.

(7) This may well be true, but means nothing really.  It is more likely, particularly given the evidence of the rest of the article, to be due to forest management processes than global warming.

(8)  This is a data point, not a trend.  Is this a lot or a little?  And remember, no matter how much he says is at risk (and remember this man is testifying to get more budget money out of Congress, so he is going to exaggerate) the actual acreage burning is flat to down.

Postscript:  The article contains one of the most blatant data bait and switches I have ever seen.  The following quote is taken as-is in the article and has no breaks or editing and nothing left out.   Here is what you are going to see.  All the way up to the last paragraph, the author tells a compelling story that the fires are due to a series of USFS firefighting and fuel-management policies.  Fair enough.   His last paragraph says that Republicans are the big problem for opposing... opposing what?  Changes to the USFS fire management practices?  No, for opposing the Obama climate change plan. What??  He just spent paragraphs building a case that this is a fire and fuel management issue, but suddenly Republicans suck for opposing the climate change bill?

Like most land in the West, Yarnell is part of an ecosystem that evolved with fire. "The area has become unhealthy and unnatural," Hawes says, "because fires have been suppressed." Yarnell is in chaparral, a mix of small juniper, oak and manzanita trees, brush and grasses. For centuries, fires swept across the chaparral periodically, clearing out and resetting the "fuel load." But beginning in the early 1900s, U.S. wildfire policy was dominated by fire suppression, formalized in 1936 as "the 10 a.m. rule" – fires were to be extinguished by the morning after they were spotted; no exceptions. Back in the day, the logic behind the rule appeared sound: If you stop a fire when it's small, it won't become big. But wildland ecosystems need fire as much as they need rain, and it had been some 45 years since a large fire burned around Yarnell. Hawes estimates that there could have been up to five times more fuel to feed the Yarnell Hill fire than was natural.

The speed and intensity of a fire in overgrown chaparral is a wildland firefighter's nightmare, according to Rick Heron, part of another Arizona crew that worked on the Yarnell Hill fire. Volatile resins and waxy leaves make manzanita "gasoline in plant form," says Heron. He's worked chaparral fires where five-foot-tall manzanitas produced 25-foot-high flames. Then there are the decades of dried-up grasses, easily ignitable, and the quick-burning material known as "fine" or "flash" fuels. "That's the stuff that gets you," says Heron. "The fine, flashy fuels are just insane. It doesn't look like it's going to be a problem. But when the fire turns on you, man, you can't outdrive it. Let alone outrun it."

Beginning with the Forest Service in 1978, the 10 a.m. rule was gradually replaced by a plan that gave federal agencies the discretion to allow fires to burn where appropriate. But putting fire back in the landscape has proved harder to do in practice, where political pressures often trump science and best-management practices. That was the case last year when the Forest Service once again made fire suppression its default position. Fire managers were ordered to wage an "aggressive initial attack" on fires, and had to seek permission to deviate from this practice. The change was made for financial reasons. Faced with skyrocketing costs of battling major blazes and simultaneous cuts to the Forest Service firefighting budget, earlier suppression would, it was hoped, keep wildfires small and thus reduce the cost of battling big fires.

Some critics think election-year politics may have played a role in the decision. "The political liability of a house burning down is greater than the political liability of having a firefighter die," says Kierán Suckling, head of the Tucson-based Center for Biological Diversity. "If they die, you just hope that the public narrative is that they were American heroes."

The problem will only get worse as extremist Republicans and conservative Democrats foster a climate of malign neglect. Even before President Obama unveiled a new climate-change initiative days before the fire, House Speaker John Boehner dismissed the reported proposal as "absolutely crazy." Before he was elected to the Senate last November, Jeff Flake, then an Arizona congressman, fought to prohibit the National Science Foundation from funding research on developing a new model for international climate-change analysis, part of a program he called "meritless." The biggest contributor to Flake's Senate campaign was the Club for Growth, whose founder, Stephen Moore, called global warming "the biggest myth of the last one hundred years."

By the way, the Yarnell firefighters did not die due to global warming or even the 10am rule.  They died due to stupidity.  Whether their own or their leaders may never be clear, but I have yet to meet a single firefighter that thought they had any business being where they were and as out of communication as they were.