Posts tagged ‘USHCN’

Example of Climate Work That Needs to be Checked and Replicated

When someone shouts "but it's in the peer-reviewed literature" at me as an argument-ender, I usually respond that peer review is not the finish line that settles the science on a particular point.  It is merely the starting point, where a proposition enters the public domain and can be checked, verified, replicated, criticized, and potentially disproved or modified.

The CRU scandal should, in my mind, be taken exactly the same way.  Unlike what more fire-breathing skeptics have been saying, this is not the final nail in the coffin of catastrophic man-made global warming theory.  It is merely a starting point, a chance to finally move government funded data and computer code into the public domain where it has always belonged, and start tearing it down or confirming it.

To this end, I would like to share a post from a year ago, showing the kind of contortions that skeptics have been going through for years to demonstrate that there appear to be problems in key data models -- contortions and questions that could have been answered in hours rather than years if the climate scientists hadn't been so afraid of scrutiny and kept their inner workings secret.  This post is from July, 2007.  It is not one of my core complaints with global warming alarmists, as I think the Earth has indeed warmed over the last 150 years, though perhaps by less than the current metrics say.  But I think some folks are confused why simple averages of global temperatures can be subject to hijinks.  The answer is that the averages are not simple:

A few posts back, I showed how nearly 85% of the reported warming in the US over the last century is actually due to adjustments and added fudge-factors by scientists rather than actual measured higher temperatures.  I want to discuss some further analysis Steve McIntyre has done on these adjustments, but first I want to offer a brief analogy.

Let's say you had two compasses to help you find north, but the compasses are reading incorrectly.  After some investigation, you find that one of the compasses is located next to a strong magnet, which you have good reason to believe is strongly biasing that compass's readings.  In response, would you

  1. Average the results of the two compasses and use this mean to guide you, or
  2. Ignore the output of the poorly sited compass and rely solely on the other unbiased compass?

Most of us would quite rationally choose #2.  However, Steve McIntyre shows us a situation involving two temperature stations in the USHCN network in which government researchers apparently have gone with solution #1.  Here is the situation:

He compares the USHCN station at the Grand Canyon (which appears to be a good rural setting) with the Tucson USHCN station I documented here, located in a parking lot in the center of a rapidly growing million-person city.   Unsurprisingly, the Tucson data shows lots of warming and the Grand Canyon data shows none.  So how might you correct the Tucson and Grand Canyon data, assuming they should be seeing about the same amount of warming?  Would you average them, effectively adjusting the two temperature readings towards each other, or would you assume the Grand Canyon data is cleaner with fewer biases and adjust Tucson only?   Is there anyone who would not choose the second option, as with the compasses?

The GISS data set, created by NASA's Goddard Institute for Space Studies, takes the USHCN data set and somehow uses nearby stations to correct for anomalous stations.  I say somehow, because, incredibly, these government scientists, whose research is funded by taxpayers and is being used to make major policy decisions, refuse to release their algorithms or methodology details publicly. They keep it all secret!  Their adjustments are a big black box that none of us are allowed to look into (and remember, these adjustments account for the vast majority of reported warming in the last century).

We can, however, reverse engineer some of these adjustments, and McIntyre does.  What he finds is that the GISS appears to be averaging the good and bad compass, rather than throwing out or adjusting only the biased reading.  You can see this below.  First, here are the USHCN data for these two stations with only the Time of Observation adjustment made (more on what these adjustments are in this article).
[Chart: Grand_12]

As I said above, no real surprise - little warming out in undeveloped nature, lots of warming in a large and rapidly growing modern city.  Now, here is the same data after the GISS has adjusted it:

[Chart: Grand_15]

You can see that Tucson has been adjusted down a degree or two, but Grand Canyon has been adjusted up a degree or two (with the earlier mid-century spike adjusted down).  OK, so it makes sense that Tucson has been adjusted down, though there is a very good argument to be made that it should have been adjusted down more, say by at least 3 degrees**.  But why does the Grand Canyon need to be adjusted up by about a degree and a half?  What is biasing it colder by 1.5 degrees, which is a lot?  The answer:  Nothing.  The explanation:  Obviously, the GISS is doing some sort of averaging, which is bringing the Grand Canyon and Tucson from each end closer to a mean.

This is clearly wrong, like averaging the two compasses.  You don't average a measurement known to be of good quality with one known to be biased.  The Grand Canyon should be held about the same, and Tucson adjusted down even more toward it, or else thrown out.  Let's look at two cases.  In one, we will use the GISS approach to combine these two stations -- this adds 1.5 degrees to GC and subtracts 1.5 degrees from Tucson.  In the second, we will take an approach that applies all of the adjustment to just the biased station (Tucson) -- this would add 0 degrees to GC and subtract 3 degrees from Tucson.  The first approach, used by the GISS, results in a mean warming in these two stations that is 1.5 degrees higher than the more logical second approach. No wonder the GISS produces the highest historical global warming estimates of any source!  Steve McIntyre has much more.
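
To make the arithmetic concrete, here is a minimal sketch (Python, with made-up illustrative trend numbers rather than the actual station data) of the two ways of combining the stations described above:

    # Hypothetical century warming trends, in degrees (illustrative only)
    grand_canyon_trend = 0.0   # good rural site: little measured warming
    tucson_trend = 3.0         # urban site: lots of measured warming

    # Approach 1 (what GISS appears to do): blend both toward a common mean
    mean_blended = ((grand_canyon_trend + 1.5) + (tucson_trend - 1.5)) / 2

    # Approach 2: treat the rural site as the reference, adjust only Tucson
    mean_reference = ((grand_canyon_trend + 0.0) + (tucson_trend - 3.0)) / 2

    print(mean_blended)                    # 1.5
    print(mean_reference)                  # 0.0
    print(mean_blended - mean_reference)   # 1.5 degrees of extra reported warming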

** I got to three degrees by applying all of the adjustments for GC and Tucson to Tucson.  Here is another way to get to about this amount.   We know from studies that urban heat islands can add 8-10 degrees to nighttime urban temperatures over surrounding undeveloped land.  Assuming no daytime effect, which is conservative, we might conclude that 8-10 degrees at night adds about 3 degrees to the entire 24-hour average.
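
That back-of-the-envelope footnote works out roughly like this (a sketch, assuming the full heat-island effect applies for about 8 nighttime hours and, conservatively, nothing during the day):

    uhi_night_effect = 9.0   # degrees, midpoint of the 8-10 degree range cited above
    night_hours = 8          # assumed hours per day the full effect applies
    daytime_effect = 0.0     # conservative assumption of no daytime effect

    effect_on_daily_mean = (uhi_night_effect * night_hours
                            + daytime_effect * (24 - night_hours)) / 24
    print(round(effect_on_daily_mean, 1))   # about 3 degrees on the 24-hour average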

Postscript: Steve McIntyre comments (bold added):

These adjustments are supposed to adjust for station moves - the procedure is described in Karl and Williams 1988 [check], but, like so many climate recipes, is a complicated statistical procedure that is not based on statistical procedures known off the island. (That's not to say that the procedures are necessarily wrong, just that the properties of the procedure are not known to statistical civilization.) When I see this particular outcome of the Karl methodology, my impression is that, net of the pea moving under the thimble, the Grand Canyon values are being blended up and the Tucson values are being blended down. So that while the methodology purports to adjust for station moves, I'm not convinced that the methodology can successfully estimate ex post the impact of numerous station moves and my guess is that it ends up constructing a kind of blended average.

LOL.  McIntyre, by the way, is the same gentleman who helped call foul on the Mann hockey stick for bad statistical procedure.

A Junior High Science Project That Actually Contributes A Small Bit to Science

Cross-posted from Climate Skeptic

Tired of build-a-volcano junior high science fair projects, my son and I tried to identify something he could easily do himself (well, mostly, you know how kids' science projects are) but that would actually contribute a small bit to science.  This year, he is doing a project on urban heat islands and urban biases on temperature measurement.   The project has two parts:  1) drive across Phoenix taking temperature measurements at night, to see if there is a variation and 2) participate in the surfacestations.org survey of US Historical Climate Network temperature measurement sites, analyzing a couple of sites for urban heat biases.

The results of #1 are really cool (warm?) but I will save posting them until my son has his data in order.  Here is a teaser:  While the IPCC claims that urban heat islands have a negligible effect on surface temperature measurement, we found a nearly linear 5 degree F temperature gradient in the early evening between downtown Phoenix and the countryside 25 miles away.  I can't wait to try this for myself near a USHCN site, say from the Tucson site out to the countryside.
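
For anyone who wants to try the same transect, the analysis itself is nothing more than a straight-line fit of temperature against distance from downtown.  A minimal sketch (Python, with invented readings standing in for our actual data):

    import numpy as np

    # Hypothetical transect readings: (miles from downtown, temperature in deg F)
    readings = [(0, 95.0), (5, 94.2), (10, 93.1), (15, 92.0), (20, 90.9), (25, 90.1)]

    distance = np.array([r[0] for r in readings], dtype=float)
    temperature = np.array([r[1] for r in readings])

    # Least-squares linear fit: temperature = slope * distance + intercept
    slope, intercept = np.polyfit(distance, temperature, 1)

    total_gradient = slope * (distance.max() - distance.min())
    print(f"gradient: {total_gradient:.1f} deg F over {distance.max():.0f} miles")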

For #2, he has posted two USHCN temperature measurement site surveys here and here.  The fun part for him is that his survey of the Miami, AZ site has already led to a post in response at Climate Audit.  It turns out his survey adds data to an ongoing discussion there about GISS temperature "corrections."

[Photo: Miami_az_mmts]

Out-of-the-mouth-of-babes moment:  My son says, "gee, dad, doesn't that metal building reflect a lot of heat on the thermometer-thing."  You can bet it does.  This is so obvious even a 14-year-old can see it, but don't tell the RealClimate folks who continue to argue that they can adjust the data for station quality without ever seeing the station.

This has been a very good science project, and I would encourage others to try it.  There are lots of US temperature stations left to survey, particularly in the middle of the country.  In a later post I will show you how we did the driving temperature transects of Phoenix.

Why the NASA Temperature Adjustments Matter

NASA's GISS was recently forced to restate its historical temperature database for the US when Steve McIntyre (climate gadfly) found discontinuities in the data that seemed to imply a processing error.  Which indeed turned out to be the case (story here).

The importance of this is NOT the actual change to the measurements, though it was substantial.  The importance, which the media reporting on this has entirely missed, is that it highlights why NASA and other government-funded climate scientists have got to release their detailed methodologies and software for scrutiny.  The adjustments they are making to historical temperatures are often larger(!) than the measured historical warming (here, here, here), so the adjustment methodology is critical.

This post from Steve McIntyre really shows how hard government-funded climate scientists like James Hansen are working to avoid scientific scrutiny.  Note the contortions and detective work McIntyre and his readers must go through to try to back into what NASA and Hansen are actually doing.  Read in this context, you should be offended by this article.  Here is an excerpt (don't worry if you can't follow the particular discussion, just get a sense of how hard NASA is making it to replicate their adjustment process):

If I average the data so adjusted, I get the NASA-combined version up to rounding of 0.05 deg C. Why these particular values are chosen is a mystery to say the least. Version 1 runs on average a little warmer than version 0 where they diverge (and they are identical after 1980). So why version 0 is adjusted down more than version 1 is hard to figure out.

Why is version 2 adjusted down prior to 1990 and not after? Again it's hard to figure out. I'm wondering whether there isn't another problem in splicing versions as with the USHCN data. One big version of Hansen's data was put together for Hansen and Lebedeff 1987 and the next publication was Hansen et al 1999 - maybe different versions got involved. But that's just a guess. It could be almost anything.... It would be interesting to check their source code and see how they get this adjustment, that's for sure.

A basic tenet of science is that you publish enough information such that others can replicate your work.  Hansen and NASA are not doing this, which is all the more insane given that we as taxpayers pay for their work.

Hansen cites the fact that Phil Jones gets somewhat similar results as evidence of the validity of his calculations. In fairness to Hansen, while they have not archived code, they have archived enough data versions to at least get a foothold on what they are doing. In contrast, Phil Jones at CRU maintains lockdown anti-terrorist security on his data versions and has even refused FOI requests for his data. None of these sorts of analyses are possible on CRU data, which may or may not have problems of its own.

Contributing to Science, Follow-up

My photo survey of the Tucson USHCN climate station is still creating a lot of discussion.  Discussion, for example, is here, here, and here.

And you too can have the satisfaction of contributing to science.  All you need is a camera (a GPS of some sort is also helpful).  I wrote a post with instructions on how to find temperature stations near you and how to document them for science here.  Believe it or not, for all the work and money spent on global warming, this is something that no one had done -- actually go document these sites to check their quality and potential biases.

Cities and Global Warming

OK, I lied.  I have one more post I want to make on global warming now that Steve McIntyre's site is back up.  I suspect I tend to bury the lede in my warming posts, because I try to be really careful to set up the conclusion in a fact-based way.  However, for this post, I will try a different approach.  Steven McIntyre has reshuffled the data in a study on urbanization and temperature that is relied on by the last IPCC report to get this chart for US Temperature data.
[Chart: Peters27]

Conclusion?  For this particular set of US temperature data, all the 20th century warming was observed in urban areas, and none was observed in rural areas less affected by urban heat islands, asphalt, cars, air conditioning, etc.

If it can be generalized, this is an amazing conclusion -- it would imply that the sum of US measured warming over the last century could be almost 100% attributed to urban heat islands (a different and more localized effect than CO2 greenhouse gas warming).  Perhaps more importantly, outside of the US nearly all of the historical temperature measurement is in urban areas -- no one has 100-year temperature records for the Chinese countryside.  However much this effect might be over-stating US temperature increases, it would probably be even more pronounced in measurements in other parts of the world.

OK, so how did he get this chart?  Did he cherry-pick the data?  First, a bit of background.

The 2003 Peterson study on urban effects on temperature was adopted as a key study for the last IPCC climate report.  In that report, Peterson concluded:

Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures.

This study (which runs counter to both common sense and the preponderance of past studies) was latched onto by the IPCC to allow them to ignore urban heat island effects on historical temperatures and claim that most all past warming in the last half-century was due to CO2.  Peterson's methodology was to take a list of several hundred US temperature stations (how he picked these is unclear; they are a mix of USHCN and non-USHCN sites) and divide them between "urban" and "rural" using various inputs, including satellite photos of night lights.  Then he compared the temperature changes over the last century for the two groups, and declared them substantially identical.

However, McIntyre found a number of problems with his analysis.  First, looking at Peterson's data set, he saw that the raw temperature measurement did show an urbanization effect of about 0.7C over the last century, a very large number.  It turns out that Peterson never showed these raw numbers in his study, only the numbers after he applied layers of "corrections" to them, many of which appear to McIntyre to be statistically dubious.  I discussed the weakness of this whole "adjustment" issue here.

Further, though, McIntyre found obviously rural sites lurking in the urban data, and vice versa, such that Peterson was really comparing a mixed bag with a mixed bag.  For example, Snoqualmie Falls showed as urban -- I have been to Snoqualmie Falls several times, and while it is fairly close to Seattle, it is not urban.  So McIntyre did a simple sort.  He took from Peterson's urban data set only large cities, which he defined as having a major league sports franchise  (yes, a bit arbitrary, but not bad).  He then compared this narrower urban data set from Peterson against Peterson's rural set and got the chart above.  The chart is entirely from Peterson's data set, with no cherry-picking except to clean up the urban list.
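
The comparison McIntyre ran reduces to grouping stations and averaging trends within each group.  Here is a hedged sketch of that bookkeeping, using invented station trends rather than Peterson's actual data set:

    # Invented century temperature trends (deg C), tagged by classification.
    # In McIntyre's re-sort, "urban" is restricted to genuinely large cities.
    stations = [
        ("Tucson AZ",       "urban", 1.1),
        ("Phoenix AZ",      "urban", 0.9),
        ("Seattle WA",      "urban", 0.8),
        ("Snoqualmie WA",   "rural", 0.1),
        ("Grand Canyon AZ", "rural", 0.0),
        ("Miami AZ",        "rural", 0.2),
    ]

    def mean_trend(group):
        trends = [t for _, g, t in stations if g == group]
        return sum(trends) / len(trends)

    urban, rural = mean_trend("urban"), mean_trend("rural")
    print(f"urban mean trend: {urban:.2f} C, rural mean trend: {rural:.2f} C")
    print(f"implied urbanization effect: {urban - rural:.2f} C")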

Postscript:  Please don't get carried away.  Satellite measurements of the troposphere, which are fairly immune to these urbanization effects, show the world has been warming, though far less than the amount shown in surface temperature databases.

Update: To reinforce the point about global sites, Brazil apparently only has six (6) sites in the worldwide database.  That is about 1/200 of the number of sites in the continental US, which has about the same land area.  And of those six, McIntyre compares urban vs. rural sites.  Guess what he finds?  And, as a follow up from the postscript, while satellites show the Northern Hemisphere is warming, it shows that the Southern Hemisphere is not.

A Temperature Adjustment Example

I won't go back into all the details, but I have posted before about just how large the manual adjustments to temperature numbers are (the "noise") as compared to the magnitude of measured warming (the "signal").  This issue of manual temperature corrections is the real reason the NASA temperature restatements are important (not the absolute value of the restatement).

Here is a quick visual example.  Both charts below are from James Hansen and the GISS and are for the US only.  Both use basically the same temperature measurement network (the USHCN).  The one on the left was Hansen's version of US temperatures in 1999.  The one on the right he published in 2001.
[Chart: Hansen_1999_v_2001]

The picture at the right is substantially different than the one on the left.  Just look at 1932 and 1998.  Between the first and second chart, none of the underlying temperature measurements changed.  What changed were the adjustments to the underlying measurements applied by the NOAA and by the GISS.  For some reason, temperatures after 1980 have been raised and temperatures in the middle of the century were lowered.

For scientists to apply a negative temperature adjustment to measurements, as they did for the early 1930's, it means they think there was some warming bias in 1932 that does not exist today.  When scientists raise current temperatures, they are saying there is some kind of cooling bias that exists today that did not exist in the 1930's.  Both of these adjustments are basically implying the same thing:  that temperature measurement was more biased upwards, say by asphalt and urbanization and poor sitings, in 1932 than it is today.  Does this make any freaking sense at all?
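
A tiny numeric illustration of why the sign of the adjustments matters (a sketch with made-up numbers, not the actual GISS values): lowering an early-century reading and raising a late-century reading both push the computed trend in the same direction.

    raw_1932 = 54.0   # hypothetical measured annual mean, deg F
    raw_1998 = 54.2   # hypothetical measured annual mean, deg F

    adj_1932 = -0.5   # downward adjustment applied to the old reading
    adj_1998 = +0.3   # upward adjustment applied to the recent reading

    raw_trend = raw_1998 - raw_1932
    adjusted_trend = (raw_1998 + adj_1998) - (raw_1932 + adj_1932)

    # roughly 0.2 measured vs. 1.0 reported: most of the trend is adjustments
    print(round(raw_trend, 1), round(adjusted_trend, 1))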

Of course, there may be some other bias at work here that I don't know about.  But I and everyone else in the world are forced to guess because the NOAA and the GISS insist on keeping their adjustment software and details a secret, and continue to resist outside review.

Read much more about this from Steve McIntyre.

Some Final Thoughts on The NASA Temperature Restatement

I got a lot of traffic this weekend from folks interested in the US historical temperature restatement at NASA-GISS.  I wanted to share some final thoughts and also respond to a post at RealClimate.org (the #1 web cheerleader for catastrophic man-made global warming theory).

  1. This restatement does not mean that the folks at GISS are necessarily wrong when they say the world has been warming over the last 20 years.  We know from the independent source of satellite measurements that the Northern Hemisphere has been warming (though not so much in the Southern Hemisphere).  However, surface temperature measurements, particularly as "corrected" and aggregated at the GISS, have always been much higher than the satellite readings.  (GISS vs Satellite)  This incident may start to give us an insight into how to bring those two sources into agreement. 
  2. For years, Hansen's group at GISS, as well as other leading climate scientists such as Mann and Briffa (creators of historical temperature reconstructions), have flouted the rules of science by holding the details of their methodologies and algorithms secret, making full scrutiny impossible.  The best possible outcome of this incident will be if new pressure is brought to bear on these scientists to stop saying "trust me" and open their work to their peers for review.  This is particularly important for activities such as Hansen's temperature data base at GISS.  While measurement of temperature would seem straightforward, in actual fact the signal to noise ratio is really low.  Upward "adjustments" and fudge factors added by Hansen to the actual readings dwarf measured temperature increases, such that, for example, most reported warming in the US is actually from these adjustments, not measured increases.
  3. In a week when Newsweek chose to argue that climate skeptics need to shut up, this incident actually proves why two sides are needed for a quality scientific debate.  Hansen and his folks missed this Y2K bug because, as a man-made global warming cheerleader, he expected to see temperatures going up rapidly, so he did not think to question the data.  Mr. Hansen is world-famous, is a friend of luminaries like Al Gore, and gets grants in quarter-million-dollar chunks from various global warming believers.  His whole outlook and his incentives made him want the higher temperatures to be true.  It took other people with different hypotheses about climate to see the recent temperature jump for what it was: an error.

The general response at RealClimate.org has been:  Nothing to see here, move along.

Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.

I really, really don't think it matters exactly how the bug was found, except to the extent that RealClimate.org would like to rewrite history and convince everyone this was just a normal adjustment made by the GISS themselves rather than a mistake found by an outsider.  However, just for the record, the GISS, at least for now until they clean up history a bit, admits the bug was spotted by Steven McIntyre.  Whatever the bug turned out to be, McIntyre initially spotted it as a discontinuity that seemed to exist in GISS data around the year 2000.  He therefore hypothesized it was a Y2K bug, but he didn't know for sure because Hansen and the GISS keep all their code as a state secret.  And McIntyre himself says he became aware of the discontinuity during a series of posts that started from a picture of a weather station at Anthony Watts' blog.  I know because I was part of the discussion, talking to these folks online in real time.  Here is McIntyre explaining it himself.

In sum, the post on RealClimate says:

Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).

A bit of background - surface temperature readings have read higher than satellite readings of the troposphere, when the science of greenhouse gases says the opposite should be true.  Global warming hawks like Hansen and the GISS have pounded on the satellite numbers, investigating them 8 ways to Sunday, and have on a number of occasions trumpeted upward corrections to satellite numbers that are far smaller than these downward corrections to surface numbers. 

But yes, IF this is the only mistake in the data, then this is a mostly correct statement from RealClimate.org.  However, here is my perspective:

  • If a mistake of this magnitude can be found by outsiders without access to Hansen's algorithms or computer code, just by inspection of the resulting data, then what would we find if we could actually inspect the code?  And this Y2K bug is by no means the only problem.  I have pointed out several myself, including adjustments for urbanization and station siting that make no sense, and averaging in rather than dropping bad measurement locations.
  • If we know significant problems exist in the US temperature monitoring network, what would we find looking at China?  Or Africa?  Or South America?  In the US and a few parts of Europe, we actually have a few temperature measurement points that were rural in 1900 and rural today.  But not one was measuring rural temps in these other continents 100 years ago.  All we have are temperature measurements in urban locations where we can only guess at how to adjust for the urbanization.  The problem in these locations, and why I say this is a low signal to noise ratio measurement, is that small percentage changes in our guesses for how much the urbanization correction should be make enormous changes (even changing the sign) in historic temperature change measurements.  A small numeric illustration follows this list.
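
To illustrate the signal-to-noise point in that second bullet, here is a sketch with invented numbers showing how a modest change in the assumed urbanization correction can flip the sign of the "true" climate trend one backs out of an urban record:

    measured_urban_trend = 1.0   # deg C per century at a hypothetical urban station

    # Two plausible guesses for how much of that is urban heat island growth
    uhi_guess_low = 0.8
    uhi_guess_high = 1.1

    signal_if_low_guess = measured_urban_trend - uhi_guess_low     # about +0.2
    signal_if_high_guess = measured_urban_trend - uhi_guess_high   # about -0.1

    # A 0.3 C difference in the correction guess changes the sign of the result
    print(round(signal_if_low_guess, 2), round(signal_if_high_guess, 2))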

Here are my recommendations:

  1. NOAA and GISS both need to release their detailed algorithms and computer software code for adjusting and aggregating USHCN and global temperature data.  Period.  There can be no argument.  Folks at RealClimate.org who believe that all is well should be begging for this to happen to shut up the skeptics.  The only possible reason for not releasing this scientific information that was created by government employees with taxpayer money is if there is something to hide.
  2. The NOAA and GISS need to acknowledge that their assumptions of station quality in the USHCN network are too high, and that they need to incorporate actual documented station condition (as done at SurfaceStations.org) in their temperature aggregations and corrections.  In some cases, stations like Tucson need to just be thrown out of the USHCN.  Once the US is done, a similar effort needs to be undertaken on a global scale, and the effort needs to include people whose incentives and outlook are not driven by making temperatures read as high as possible.
  3. This is the easiest of all.  Someone needs to do empirical work (not simulated, not on the computer, but with real instruments) understanding how various temperature station placements affect measurements.  For example, how do the readings of an instrument in an open rural field compare to an identical instrument surrounded by asphalt a few miles away?  These results can be used for step #2 above.  This is cheap, simple research a couple of graduate students could do, but climatologists all seem focused on building computer models rather than actually doing science.
  4. Similar to #3, someone needs to do a definitive urban heat island study, to find out how much temperature readings are affected by urban heat, again to help correct in #2.  Again, I want real research here, with identical instruments placed in various locations and various radii from an urban center (not goofy proxies like temperature vs. wind speed -- that's some scientist who wants to get a result without ever leaving his computer terminal).  Most studies have shown the number to be large, but a couple of recent studies show smaller effects, though now these studies are under attack not just for sloppiness but outright fabrication.  This can't be that hard to study, if people were willing to actually go into the field and take measurements.  The problem is everyone is trying to do this study with available data rather than by gathering new data.

Postscript:  The RealClimate post says:

However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake.

If catastrophic man-made global warming theory is correct, then man faces a tremendous lose-lose.  Either shut down growth, send us back to the 19th century, making us all substantially poorer and locking a billion people in Asia into poverty they are on the verge of escaping, or face catastrophic and devastating changes in the planet's weather.

Now take two people.  One in his heart really wants this theory not to be true, and hopes we don't have to face this horrible lose-lose tradeoff.  The other has a deeply felt wish that this theory is true, and hopes man does face this horrible future.  Which person do you like better?  And recognize, RealClimate is holding up the latter as the only moral man. 

Update:  Don't miss Steven McIntyre's take on the whole thing.  And McIntyre responds to Hansen here.

Breaking News: Recent US Temperature Numbers Revised Downwards Today

This is really big news, and a fabulous example of why two-way scientific discourse is still valuable, in the same week that both Newsweek and Al Gore tried to make the case that climate skeptics were counter-productive and evil. 

Climate scientist Michael Mann (famous for the hockey stick chart) once made the statement that the 1990's were the warmest decade in a millennium and that "there is a 95 to 99% certainty that 1998 was the hottest year in the last one thousand years."  (By the way, Mann now denies he ever made this claim, though you can watch him say these exact words in the CBC documentary Global Warming: Doomsday Called Off.)

Well, it turns out, according to the NASA GISS database, that 1998 was not even the hottest year of the last century.  This is because many temperatures from recent decades that appeared to show substantial warming have been revised downwards.  Here is how that happened (if you want to skip the story, make sure to look at the numbers at the bottom).

One of the most cited and used historical surface temperature databases is that of NASA/Goddard's GISS.  This is not some weird skeptics site.  It is considered one of the premier world temperature data bases, and it is maintained by anthropogenic global warming true believers.  It has consistently shown more warming than any other data base, and is thus a favorite source for folks like Al Gore.  These GISS readings in the US rely mainly on the US Historical Climate Network (USHCN) which is a network of about 1000 weather stations taking temperatures, a number of which have been in place for over 100 years.

Frequent readers will know that I have been a participant in an effort led by Anthony Watts at SurfaceStations.org to photo-document these temperature stations as an aid to scientists in evaluating the measurement quality of each station.  The effort has been eye-opening, as it has uncovered many very poor instrument sitings that would bias temperature measurements upwards, as I found in Tucson and Watts has documented numerous times on his blog.

One photo on Watts' blog got people talking - a station in MN with a huge jump in temperature about the same time some air conditioning units were installed nearby.  Others disagreed, and argued that such a jump could not be from the air conditioners, since a lot of the jump happened with winter temperatures when the AC was dormant.  Steve McIntyre, the Canadian statistician who helped to expose massive holes in Michael Mann's hockey stick methodology, looked into it.  After some poking around, he began to suspect that the GISS data base had a year 2000 bug in one of their data adjustments.

One of the interesting aspects of these temperature data bases is that they do not just use the raw temperature measurements from each station.  Both the NOAA (which maintains the USHCN stations) and the GISS apply many layers of adjustments, which I discussed here.  One of the purposes of Watts' project is to help educate climate scientists that many of the adjustments they make to the data back in the office do not necessarily represent the true condition of the temperature stations.  In particular, GISS adjustments imply instrument sitings are in more natural settings than they were in, say, 1905, an outrageous assumption on its face that is totally in conflict with the condition of the stations in Watts' data base.  Basically, surface temperature measurements have a low signal to noise ratio, and climate scientists have been overly casual about how they try to tease out the signal.

Anyway, McIntyre suspected that one of these adjustments had a bug, and had had this bug for years.  Unfortunately, it was hard to prove.  Why?  Well, that highlights one of the great travesties of climate science.  Government scientists who developed the GISS temperature data base at taxpayer expense refuse to publicly release their temperature adjustment algorithms or software (in much the same way Michael Mann refused to release the details of his hockey stick methodology for scrutiny).  Using the data, though, McIntyre made a compelling case that the GISS data base had systematic discontinuities that bore all the hallmarks of a software bug.
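
For those curious what "spotting a discontinuity" looks like in practice, here is a rough sketch of the kind of check an outsider can run on published data alone.  The adjustment series below is fabricated purely to show the shape of the problem; a real analysis would use the published GISS and USHCN values for an actual station.

    # Fabricated example: difference between the adjusted and raw series (deg C)
    # for one station, by year.  The step at 2000 is inserted here to show what
    # a splicing bug would look like in this kind of check.
    adjustment = {year: 0.30 for year in range(1990, 2000)}
    adjustment.update({year: 0.55 for year in range(2000, 2007)})

    years = sorted(adjustment)
    for prev, curr in zip(years, years[1:]):
        step = adjustment[curr] - adjustment[prev]
        if abs(step) > 0.1:   # threshold for a suspiciously large one-year jump
            print(f"discontinuity of {step:+.2f} C between {prev} and {curr}")
    # -> discontinuity of +0.25 C between 1999 and 2000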

Today, the GISS admitted that McIntyre was correct, and has started to republish its data with the bug fixed.  And the numbers are changing a lot.  Before today, GISS would have said 1998 was the hottest year on record (Mann, remember, said with up to 99% certainty it was the hottest year in 1000 years) and that 2006 was the second hottest.  Well, no more.  Here are the new rankings for the 10 hottest years in the US, starting with #1:

1934, 1998, 1921, 2006, 1931, 1999, 1953, 1990, 1938, 1939

Three of the top 10 are in the last decade.  Four of the top ten are in the 1930's, before either the IPCC or the GISS really think man had any discernible impact on temperatures.  Here is the chart for all the years in the data base:
[Chart: New_giss]

There are a number of things we need to remember:

  • This is not the end but the beginning of the total reexamination that needs to occur of the USHCN and GISS data bases.  The poor corrections for site location and urbanization are still huge issues that bias recent numbers upwards.  The GISS also has issues with how it aggregates multiple stations, apparently averaging known good stations with bad stations, a process that by no means eliminates biases.  As a first step, we must demand that NOAA and GISS release their methodology and computer algorithms to the general public for detailed scrutiny by other scientists.
  • The GISS today makes it clear that these adjustments only affect US data and do not change any of their conclusions about worldwide data.  But consider this:  For all of its faults, the US has the most robust historical climate network in the world.  If we have these problems, what would we find in the data from, say, China?  And the US and parts of Europe are the only major parts of the world that actually have 100 years of data at rural locations.  No one was measuring temperature reliably in rural China or Paraguay or the Congo in 1900.  That means much of the world is relying on urban temperature measurement points that have substantial biases from urban heat.
  • All of these necessary revisions to surface temperatures will likely not make warming trends go away completely.  What they may do is bring the warming down to match the much lower satellite measured warming numbers we have, and make current warming look more like past natural warming trends (e.g. early in this century) rather than a catastrophe created by man.  In my global warming book, I argue that future man-made warming probably will exist, but will be more like a half to one degree over the coming decades than the media-hyped numbers that are ten times higher.

So how is this possible?  How can the global warming numbers used in critical policy decisions and scientific models be so wrong with so basic of an error?  And how can this error have gone undetected for the better part of a decade?  The answer to the latter question is that the global warming and climate community resist scrutiny.  This week's Newsweek article and statements by Al Gore are basically aimed at suppressing any scientific criticism or challenge to global warming research.  That is why NASA can keep its temperature algorithms secret, with no outside complaint, something that would cause howls of protest in any other area of scientific inquiry.

As to the first question, I will leave the explanation to Mr. McIntyre:

While acolytes may call these guys "professionals", the process of data adjustment is really a matter of statistics and even accounting. In these fields, Hansen and Mann are not "professionals" - Mann admitted this to the NAS panel explaining that he was "not a statistician". As someone who has read their works closely, I do not regard any of these people as "professional". Much of their reluctance to provide source code for their methodology arises, in my opinion, because the methods are essentially trivial and they derive a certain satisfaction out of making things appear more complicated than they are, a little like the Wizard of Oz. And like the Wizard of Oz, they are not necessarily bad men, just not very good wizards.

For more, please see my Guide to Anthropogenic Global Warming or, if you have less time, my 60-second argument for why one should be skeptical of catastrophic man-made global warming theory.

Update:
Nothing new, just thinking about this more, I cannot get over the irony that in the same week Newsweek makes the case that climate science is settled and there is no room for skepticism, skeptics discover a gaping hole and error in the global warming numbers.

Update #2:  I know people get upset when we criticize scientists.  I get a lot of "they are not biased, they just made a mistake."  Fine.  But I have zero sympathy for a group of scientists who refuse to let other scientists review their methodology, and then find that they have been making a dumb methodology mistake for years that has corrupted the data of nearly every climate study in the last decade.

Update #3:  I labeled this "breaking news," but don't expect to see it in the NY Times anytime soon.  We all know this is one of those asymmetric story lines, where if the opposite had occurred (i.e., things found to be even worse/warmer than thought) it would be on the front page immediately, but a lowered threat will never make the news.

Oh, and by the way.  This is GOOD news.  Though many won't treat it that way.  I understand this point fairly well because, in a somewhat parallel situation, I seem to be the last anti-war guy who treats progress in Iraq as good news.

Update #4: I should have mentioned that the hero of the Newsweek story is catastrophic man-made global warming cheerleader James Hansen, who runs the GISS and is most responsible for the database in question as well as the GISS policy not to release its temperature aggregation and adjustment methodologies.  From IBD, via CNN Money:

Newsweek portrays James Hansen, director of NASA's Goddard Institute for Space Studies, as untainted by corporate bribery.

Hansen was once profiled on CBS' "60 Minutes" as the "world's leading researcher on global warming." Not mentioned by Newsweek was that Hansen had acted as a consultant to Al Gore's slide-show presentations on global warming, that he had endorsed John Kerry for president, and had received a $250,000 grant from the foundation headed by Teresa Heinz Kerry.

Update #5: My letter to the editor at Newsweek.  For those worried that this is some weird skeptic's fevered dream, Hansen and company kind of sort of recognize the error in the first paragraph under background here.  Their US temperature chart with what appears is the revised data is here.

Update #6: Several posts are calling this a "scandal."  It is not a scandal.  It is a mistake from which we should draw two lessons:

  1. We always need to have people of opposing opinions looking at a problem.  Man-made global warming hawks expected to see a lot of warming after the year 2000, so they never questioned the numbers.  It took folks with different hypotheses about climate to see the jump in the numbers for what it was - a programming error.
  2. Climate scientists are going to have to get over their need to hold their adjustments, formulas, algorithms and software secret.  It's just not how science is done.  James Hansen saying "trust me, the numbers are right, I don't need to tell you how I got them" reminds me of the mathematician Fermat saying he had a proof of his last theorem, but it wouldn't fit in the margin.  How many man-hours of genius mathematicians were wasted because Fermat refused to show his proof (which was most likely wrong, given how the theorem was eventually proved)?

Final Update:  Some parting thoughts, and recommendations, here.

Steve McIntyre Comments on Historical Temperature Adjustments

Steve McIntyre, the statistician who called into question much of the methodology behind the Mann Hockey Stick chart, has some observations on adjustments to US temperature records I discussed here and here.

Eli Rabett and Tamino have both advocated faith-based climate science in respect to USHCN and GISS adjustments. They say that the climate "professionals" know what they're doing; yes, there are problems with siting and many sites do not meet even minimal compliance standards, but, just as Mann's "professional" software was able to extract a climate signal from the North American tree ring data, so Hansen's software is able to "fix" the defects in the surface sites. "Faith-based" because they do not believe that Hansen has any obligation to provide anything other than a cursory description of his software or, for that matter, the software itself. But if they are working with data that includes known bad data, then critical examination of the adjustment software becomes integral to the integrity of the record - as there is obviously little integrity in much of the raw data.

While acolytes may call these guys "professionals", the process of data adjustment is really a matter of statistics and even accounting. In these fields, Hansen and Mann are not "professionals" - Mann admitted this to the NAS panel explaining that he was "not a statistician". As someone who has read their works closely, I do not regard any of these people as "professional". Much of their reluctance to provide source code for their methodology arises, in my opinion, because the methods are essentially trivial and they derive a certain satisfaction out of making things appear more complicated than they are, a little like the Wizard of Oz. And like the Wizard of Oz, they are not necessarily bad men, just not very good wizards.

He goes on to investigate a specific site the "professionals" hold up as a positive example, demonstrating that they appear to have a Y2K error in their algorithm.  This is difficult to do because, like Mann, the government scientists maintaining a government temperature data base, taken from government sites and paid for with taxpayer funds, refuse to release their methodology or algorithms for inspection.

In the case cited, the "professionals" also make adjustments that imply the site has experienced decreasing urbanization over the last 100 years, something I am not sure one can say about any site in the US except perhaps for a few Colorado ghost towns.  The "experts" also fail to take the basic step of actually analyzing the site itself which, if visited, would reveal recently installed air conditioning units venting hot air on the temperature instrument.

A rebuttal, arguing that poor siting of temperature instruments is OK and does not affect the results, is here.  I find rebuttals of this sort really distressing.  I studied physics for a while, before switching to engineering, and really small procedural mistakes in measurement could easily invalidate one's results.  I find it amazing that climate scientists seek to excuse massive mistakes in measurement.  I'm sorry, but in no other branch of science are results considered "settled" when the experimental noise is greater than the signal.  I would really, really, just for once, love to see an anthropogenic global warming promoter say "well, I don't think the siting will change the results, but you are right, we really need to go back and take another pass at correcting historical temperatures based on more detailed analysis of the individual sites."

More Thoughts on Historic Temperature Adjustments

A few posts back, I showed how nearly 85% of the reported warming in the US over the last century is actually due to adjustments and added fudge-factors by scientists rather than actual measured higher temperatures.  I want to discuss some further analysis Steve McIntyre has done on these adjustments, but first I want to offer a brief analogy.

Let's say you had two compasses to help you find north, but the compasses are reading incorrectly.  After some investigation, you find that one of the compasses is located next to a strong magnet, which you have good reason to believe is strongly biasing that compass's readings.  In response, would you

  1. Average the results of the two compasses and use this mean to guide you, or
  2. Ignore the output of the poorly sited compass and rely solely on the other unbiased compass?

Most of us would quite rationally choose #2.  However, Steve McIntyre shows us a situation involving two temperature stations in the USHCN network in which government researchers apparently have gone with solution #1.  Here is the situation:

He compares the USHCN station at the Grand Canyon (which appears to be a good rural setting) with the Tucson USHCN station I documented here, located in a parking lot in the center of a rapidly growing million-person city.   Unsurprisingly, the Tucson data shows lots of warming and the Grand Canyon data shows none.  So how might you correct the Tucson and Grand Canyon data, assuming they should be seeing about the same amount of warming?  Would you average them, effectively adjusting the two temperature readings towards each other, or would you assume the Grand Canyon data is cleaner with fewer biases and adjust Tucson only?   Is there anyone who would not choose the second option, as with the compasses?

The GISS data set, created by NASA's Goddard Institute for Space Studies, takes the USHCN data set and somehow uses nearby stations to correct for anomalous stations.  I say somehow, because, incredibly, these government scientists, whose research is funded by taxpayers and is being used to make major policy decisions, refuse to release their algorithms or methodology details publicly.  They keep it all secret!  Their adjustments are a big black box that none of us are allowed to look into (and remember, these adjustments account for the vast majority of reported warming in the last century).

We can, however, reverse engineer some of these adjustments, and McIntyre does.  What he finds is that the GISS appears to be averaging the good and bad compass, rather than throwing out or adjusting only the biased reading.  You can see this below.  First, here are the USHCN data for these two stations with only the Time of Observation adjustment made (more on what these adjustments are in this article).
[Chart: Grand_12]

As I said above, no real surprise - little warming out in undeveloped nature, lots of warming in a large and rapidly growing modern city.  Now, here is the same data after the GISS has adjusted it:

[Chart: Grand_15]

You can see that Tucson has been adjusted down a degree or two, but Grand Canyon has been adjusted up a degree or two (with the earlier mid-century spike adjusted down).  OK, so it makes sense that Tucson has been adjusted down, though there is a very good argument to be made that it should have been adjusted down more, say by at least 3 degrees**.  But why does the Grand Canyon need to be adjusted up by about a degree and a half?  What is biasing it colder by 1.5 degrees, which is a lot?  The answer:  Nothing.  The explanation:  Obviously, the GISS is doing some sort of averaging, which is bringing the Grand Canyon and Tucson from each end closer to a mean.

This is clearly wrong, like averaging the two compasses.  You don't average a measurement known to be of good quality with one known to be biased.  The Grand Canyon should be held about the same, and Tucson adjusted down even more toward it, or else thrown out.  Let's look at two cases.  In one, we will use the GISS approach to combine these two stations -- this adds 1.5 degrees to GC and subtracts 1.5 degrees from Tucson.  In the second, we will take an approach that applies all the adjustment to just the biased station (Tucson) -- this would add 0 degrees to GC and subtract 3 degrees from Tucson.  The first approach, used by the GISS, results in a mean warming in these two stations that is 1.5 degrees higher than the more logical second approach.  No wonder the GISS produces the highest historical global warming estimates of any source!  Steve McIntyre has much more.

** I got to three degrees by applying all of the adjustments for GC and Tucson to Tucson.  Here is another way to get to about this amount.   We know from studies that urban heat islands can add 8-10 degrees to nighttime urban temperatures over surrounding undeveloped land.  Assuming no daytime effect, which is conservative, we might conclude that 8-10 degrees at night adds about 3 degrees to the entire 24-hour average.

Postscript: Steve McIntyre comments (bold added):

These adjustments are supposed to adjust for station moves - the procedure is described in Karl and Williams 1988 [check], but, like so many climate recipes, is a complicated statistical procedure that is not based on statistical procedures known off the island.  (That's not to say that the procedures are necessarily wrong, just that the properties of the procedure are not known to statistical civilization.)  When I see this particular outcome of the Karl methodology, my impression is that, net of the pea moving under the thimble, the Grand Canyon values are being blended up and the Tucson values are being blended down. So that while the methodology purports to adjust for station moves, I'm not convinced that the methodology can successfully estimate ex post the impact of numerous station moves and my guess is that it ends up constructing a kind of blended average.

LOL.  McIntyre, by the way, is the same gentleman who helped call foul on the Mann hockey stick for bad statistical procedure.

An Interesting Source of Man-Made Global Warming

The US Historical Climate Network (USHCN) reports about a 0.6C temperature increase in the lower 48 states since about 1940.  There are two steps to reporting these historic temperature numbers.  First, actual measurements are taken.  Second, adjustments are made after the fact by scientists to the data.  Would you like to guess how much of the 0.6C temperature rise is from actual measured temperature increases and how much is due to adjustments of various levels of arbitrariness?  Here it is, for the period from 1940 to present in the US:

Actual Measured Temperature Increase: 0.1C
Adjustments and Fudge Factors: 0.5C
Total Reported Warming: 0.6C

Yes, that is correct.  Nearly all the reported warming in the USHCN data base, which is used for nearly all global warming studies and models, is from human-added fudge factors, guesstimates, and corrections.

I know what you are thinking - this is some weird skeptic's urban legend.  Well, actually it comes right from the NOAA web page which describes how they maintain the USHCN data set.  Below is the key chart from that site showing the sum of all the plug factors and corrections they add to the raw USHCN measurements:
[Chart: Ushcn_corrections]
I hope you can see the significance.  Before we get into whether these measurements are right or wrong or accurate or guesses, it is very useful to understand that almost all the reported warming in the US over the last 70 years is attributable to the plug figures and corrections a few government scientists add to the data in the back room.  It kind of reduces one's confidence, does it not, in the basic conclusion about catastrophic warming?

Anyway, let's look at the specific adjustments.  The lines in the chart below should add up to the overall adjustment line in the chart above (I run through that arithmetic in a short sketch after the list).
[Chart: Ushcn_corrections2]

  • Black line is a time of observation adjustment, adding about 0.3C since 1940
  • Light Blue line is a missing data adjustment that does not affect the data much since 1940
  • Red line is an adjustment for measurement technologies, adding about 0.05C since 1940
  • Yellow line is station location quality adjustment, adding about 0.2C since 1940
  • Purple line is an urban heat island adjustment, subtracting about 0.05C since 1950.
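
As a sanity check, those components do roughly stack up to the overall net adjustment since 1940.  A quick sketch, using my eyeball readings of the chart rather than NOAA's published figures:

    # Approximate contribution of each adjustment since 1940, deg C,
    # as read off the NOAA chart above (eyeball estimates, not official numbers)
    adjustments_since_1940 = {
        "time of observation":      +0.30,
        "missing data":              0.00,
        "measurement technology":   +0.05,
        "station location quality": +0.20,
        "urban heat island":        -0.05,
    }

    net_adjustment = sum(adjustments_since_1940.values())
    measured_increase = 0.1
    print(f"net adjustment: {net_adjustment:+.2f} C")                       # about +0.5 C
    print(f"reported warming: {measured_increase + net_adjustment:.2f} C")  # about 0.6 C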

Let's take each of these in turn.  The time of observation adjustment is defined as follows:

The Time of Observation Bias (TOB) arises when the 24-hour daily summary period at a station begins and ends at an hour other than local midnight. When the summary period ends at an hour other than midnight, monthly mean temperatures exhibit a systematic bias relative to the local midnight standard.

0.3C seems absurdly high for this adjustment, but I can't prove it.  However, if I understand the problem, a month might be picking up a few extra hours from the next month and losing a few hours to the previous month.  How is a few-hour time shift really biasing a 720+ hour month by so large a number?  I will look to see if I can find a study digging into this.
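
For what it's worth, here is a toy illustration (not NOAA's method, just a sketch of the mechanism usually cited for the bias) of how the observation-time window by itself can shift a monthly mean: when a max/min thermometer is reset at 5 PM instead of midnight, a hot afternoon can get counted against two successive observational days.  The daily pattern below is deliberately exaggerated (alternating hot and cool days) so the effect is visible.

    import math

    DAYS = 31

    # Hourly temperature: a daily offset (alternating hot/cool days) plus a
    # sinusoid peaking mid-afternoon and bottoming out in the early morning.
    def temp(day, hour):
        offset = 5.0 if day % 2 == 0 else -5.0
        return 70.0 + offset + 10.0 * math.sin((hour - 9) * math.pi / 12)

    hourly = [temp(d, h) for d in range(DAYS + 1) for h in range(24)]

    def monthly_mean(reset_hour):
        """Mean of daily (max+min)/2 over 24-hour windows ending at reset_hour."""
        daily_means = []
        for d in range(DAYS):
            window = hourly[d * 24 + reset_hour: d * 24 + reset_hour + 24]
            daily_means.append((max(window) + min(window)) / 2)
        return sum(daily_means) / len(daily_means)

    # 5 PM observer vs. midnight standard; comes out positive (a warm bias)
    # because the 5 PM windows double-count hot afternoons.
    bias = monthly_mean(17) - monthly_mean(0)
    print(f"time-of-observation bias in this toy example: {bias:+.2f} degrees")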

I will skip over the missing data and measurement technology adjustments, since they are small.

The other two adjustments are fascinating.  The yellow line says that siting has improved on USHCN sites such that, since 1900, their locations average 0.2C cooler due to being near more grass and less asphalt today than in 1900. 

During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites.

OK, without a bit of data, does that make a lick of sense?  Siting today in our modern world has GOT to be worse than it was in 1900 or even 1940.  In particular, the very short cable length of the newer MMTS sensors that are standard for USHCN temperature measurement guarantees that readings today are going to be close to buildings and paving.  Now, go to SurfaceStations.org and look at pictures of actual installations, or look at the couple of installations in the Phoenix area I have taken pictures of here.  Do these look more grassy and natural than measurement sites were likely to be in 1900?  Or go to Anthony Watts' blog and scroll down his posts on horrible USHCN sites.

The fact is that not only is NOAA getting this correction wrong, but it probably has the SIGN wrong.  The NOAA has never conducted the site by site survey that we discussed above.  Their statement that locations are improving is basically a leap of faith, rather than a fact-based conclusion.  In fact, NOAA scientists who believe that global warming is a problem tend to overlay this bias on the correction process.  Note the quote above -- temperatures that don't increase as they expect are treated as an error to be corrected, rather than a measurement that disputes their hypothesis.

Finally, let's look at the urban heat island adjustment.  The NOAA is claiming that the sum total of urban heat island effects on its network since 1900 is just 0.1C, and less than 0.05C since 1940.  We are talking about the difference between a rural America with horses and dirt roads and a modern urban society with asphalt and air conditioning and cars.  This ridiculously small adjustment reflects two biases among anthropogenic global warming advocates:  1) that urban heat island effects are negligible and 2) that the USHCN network is all rural.  Both are absurd.  Study after study has shown urban heat island effects as high as 6-10 degrees.  Just watch your local news if you live in a city -- you will see actual temperatures and forecasts lower by several degrees in the outlying areas than in the center of town.  As to the locations all being rural, you just have to go to surfacestations.org and see where these stations are.  Many of these sites might have been rural in 1940, but they have been engulfed by cities and towns since.

To illustrate both these points, let's take the case of the Tucson site I visited.  In 1900, Tucson was a dusty one-horse town (Arizona was not even a state yet).  In 1940, it was still pretty small.  Today, it is a city of over one million people, and the USHCN station is dead in the center of town, located right on an asphalt parking lot.  The adjustment NOAA makes for all these changes?  Less than one degree.  I don't think this is fraud, but it is willful blindness.

So, let's play around with numbers.  Say that instead of a +0.2C site quality adjustment we used -0.1C, which is still probably generous.  And say that instead of a -0.05C urban adjustment we used -0.2C.  The resulting total adjustment from 1940 to date would be about +0.05C, and the total measured temperature increase in the US would fall from 0.6C to 0.15C.  And this is without even changing the very large time of observation adjustment, and using some pretty conservative assumptions on my part.  Wow!  This would put US warming more in the range of what satellite data imply, and would make it virtually negligible.  It means that the full amount of reported US warming may well be within the error bars for the measurement network and the correction factors.
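
For anyone who wants to check the arithmetic, here it is spelled out.  The adjustment values are just the ones discussed above (plus my alternative assumptions); this is simple bookkeeping, not a reconstruction of NOAA's adjustment procedure:

```python
# Back-of-the-envelope arithmetic for the thought experiment above.
official    = {"site_quality": +0.20, "urban_heat_island": -0.05}  # deg C since 1940, per the text
alternative = {"site_quality": -0.10, "urban_heat_island": -0.20}  # my alternative assumptions

reported_us_warming = 0.60  # deg C since 1940, after the official adjustments

swing = sum(alternative.values()) - sum(official.values())
print("change in net adjustment: {:+.2f} C".format(swing))                        # -0.45 C
print("re-adjusted US warming:   {:.2f} C".format(reported_us_warming + swing))   #  0.15 C
```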

While anthropogenic global warming enthusiasts are quick to analyze the reliability of any temperature measurement that shows lower global warming numbers (e.g. satellite), they have historically resisted calls to face up to the poor quality of surface temperature measurement and the arbitrariness of current surface temperature correction factors.  As the NOAA tellingly states:

The U.S. Historical Climatology Network (USHCN, Karl et al. 1990)
is a high-quality moderate sized data set of monthly averaged maximum,
minimum, and mean temperature and total monthly precipitation developed
to assist in the detection of regional climate change. The USHCN is
comprised of 1221 high-quality stations from the U.S. Cooperative
Observing Network within the 48 contiguous United States.

Does it sound defensive to anyone else when they use "high-quality" in both of the first two sentences?  Does anyone think this is high quality?  Or this?  Or this?  It's time to better understand this network as well as its limitations.

My 60-second climate skepticism argument is here.  The much longer paper explaining the breadth of skeptics' issues with catastrophic man-made global warming is available for free here.

PS- This analysis focuses only on the US.  However, is there anyone out there who thinks that measurement in China and India and Russia and Africa is less bad?

Update:  This pdf has an overview of urban heat islands, including this analysis showing the magnitude of the Phoenix nighttime UHI as well as the fact that this UHI has grown substantially over the last 30 years.

[Image: Uhi1]

Update 2: Steve McIntyre looks at temperature adjustments for a couple of California stations.  In one case he finds a station that has not moved in over one hundred years getting an adjustment that implies an urban heat island reduction over the past 100 years.

Contributing to Science

I got to make a real contribution to science this weekend, and I will explain below how you can too.  First, some background.

A while back, Steve McIntyre was playing around with graphing temperature data from the US Historical Climate Network (USHCN).  This is the data that is used in most global warming studies and initializes most climate models.  Not every climate station is in this database -- in fact, only about 20 per state are included, with locations supposedly selected in rural areas less subject to biases over time from urban development (urban areas are hotter, due to pavement and energy use, for reasons unrelated to the greenhouse effect).  The crosses on the map below show each station.

He showed this graph of the USHCN data for temperature change since 1900 (data corrected for time of day of measurement).  Redder means measured temperatures have increased since 1900, bluer means they have decreased.
[Image: Usgrid80]

He mentioned that Tucson was the number one warming site -- you can see it in the deepest red.  My first thought was, "wow, that is right next door to me."   My second thought was "how can Tucson, with a million people, count as rural?"   Scientists who study global warming apply all kinds of computer and statistical tricks to this data, supposedly to weed out measurement biases and problems.  However, a number of folks have been arguing that scientists really need to evaluate biases site by site.  Anthony Watts has taken this idea and created SurfaceStations.org, a site dedicated to surveying and photographing these official USHCN stations.

So, with his guidance, I went down to Tucson to see for myself.  My full report is here, but this is what I found:
[Image: Tucson1]

The measurement station is in the middle of an asphalt parking lot!  This is against all best practices, and even a layman can see how that would bias measurements high.  Watts finds other problems with the installation from my pictures that I missed, and comments here that it is the worst station he has seen yet.  That, by the way, is the great part about this exercise.  Amateurs like me don't need to be able to judge the installation; they just need to take good pictures that the experts can use to analyze problems.

As a final note on Tucson, during the time period between 1950 and today, when Tucson saw most of this measured temperature increase, the population of Tucson increased from under 200,000 to over 1,000,000.  That's a lot of extra urban heat, in addition to the local effects of this parking lot.

The way that scientists test for anomalies without actually visiting or looking at the sites is to run statistical checks against other nearby sites.  Two such sites are Mesa and Wickenburg.  Mesa immediately set off alarm bells for me.  Mesa is a suburb of Phoenix, and is often listed among the fastest-growing cities in the country.  Sure enough, the Mesa temperature measurements were discontinued in the late 1980s, but they were surely biased upwards by urban growth up to that time.
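
To make the idea concrete, here is a sketch of the simplest flavor of such a neighbor check: compare a target station to the average of nearby references and look for a trend in the difference.  This is only an illustration of the concept with made-up numbers, not the actual GISS or NOAA homogenization code -- and of course the check is only as good as the neighbors, which is exactly the problem when the "reference" is a fast-growing suburb like Mesa:

```python
# Illustrative neighbor-difference check with invented station data.
# A target station drifting warm relative to its neighbors (urban growth,
# a new parking lot) shows up as a trend in the difference series.
import numpy as np

years = np.arange(1950, 1990)
rng = np.random.default_rng(1)

# Hypothetical annual mean temperature anomalies (deg C) for three neighbors.
neighbors = np.vstack([rng.normal(0.0, 0.3, years.size) for _ in range(3)])

# Hypothetical target: tracks the neighbors plus a slow local warm drift.
target = neighbors.mean(axis=0) + 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

diff = target - neighbors.mean(axis=0)
trend_per_decade = np.polyfit(years, diff, 1)[0] * 10

print(f"target minus neighbors trend: {trend_per_decade:+.2f} C/decade")
# A clearly positive trend here is a red flag that the target station is
# warming for local reasons its neighbors do not share.
```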

So, I then went to visit Wickenburg.  Though it has been growing of late, Wickenburg would still be considered by most to be a small town.  So perhaps the Wickenburg measurement is without bias?  Well, here is the site:

[Image: Wickenburg_facing_sw]

That white coffee-can-looking thing on a pole in the center is the temperature instrument.  Again, we have it surrounded by a sea of black asphalt, but we also have two building walls that reflect heat onto the instrument.  Specs for the USHCN say that instruments should be installed in an open area away from buildings and on natural ground.  Oops.  Oh, and by the way, let's look in the other direction...

[Image: Wickenburg_facing_se]

What are those silver things just behind the unit?  They are the cooling fans for the building's AC.  Basically, all the heat the AC removes from the building gets dumped out about 25 feet from this temperature instrument.

Remember, these are the few select stations being used to determine how much global warming the US is experiencing.  Pretty scary.  Another example is here.

Believe it or not, for all the work and money spent on global warming, this is something that no one had done -- actually going out to document these sites and check their quality and potential biases.  And you too can have the satisfaction of contributing to science.  All you need is a camera (a GPS of some sort is also helpful).  I wrote a post with instructions on how to find temperature stations near you and how to document them for science here.

For those interested, my paper on the skeptics' arguments against catastrophic man-made global warming is here.  If that is too long, the 60-second climate skeptic pitch is here.

Does this Make Sense?

I am just finishing up my paper "A Skeptical Layman's Guide to Anthropogenic Global Warming," and one thing I encounter a lot with sources and websites that are strong supporters of Anthropogenic Global Warming (AGW) theory is that they will often say such-and-such argument by skeptics was just disproved by so-and-so. 

For example, skeptics often argue that historical temperature records do not correct enough for the effects of urbanization on long-term measurement points.  The IPCC, in fact, has taken the position that what is called the urban heat island effect is trivial, and does not account for much or any of measured warming over the last 100 years.  To this end, one of the pro-AGW sites (either RealClimate.org or the New Scientist, I can't remember which) said that "Parker in 2006 has disproved the urban heat island effect."

Now, if you were going to set out to do such a thing, how would you do it?  The logical way, to me, would be to draw a line from the center of the city to the rural areas surrounding it, take a bunch of identical thermometers, and have people record temperatures every couple of miles along this line.  Then you could draw a graph of temperature vs. nearness to the city center, and see what you would find.
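
For concreteness, here is roughly what the analysis of such a transect would look like.  The readings below are hypothetical numbers I made up purely for illustration:

```python
# Illustrative urban-transect analysis with invented readings:
# identical thermometers every couple of miles from the city center outward,
# then temperature regressed against distance.
import numpy as np

distance_miles = np.arange(0, 22, 2)                      # 0 = city center
temp_f = np.array([91.0, 90.2, 89.5, 88.1, 87.4, 86.8,
                   86.0, 85.7, 85.5, 85.4, 85.3])         # hypothetical evening readings (deg F)

slope, intercept = np.polyfit(distance_miles, temp_f, 1)
uhi_estimate = temp_f[0] - temp_f[-1]

print(f"slope: {slope:.2f} F per mile from city center")
print(f"center-to-countryside difference: {uhi_estimate:.1f} F")
```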

Is that what Parker did?  Uh, no.  I turn it over to Steve McIntyre, one of the two men who helped highlight all the problems with the Mann hockey stick several years ago.

If you are not a climate scientist (or a realclimate reader), you
would almost certainly believe, from your own experience, that cities
are warmer than the surrounding countryside - the "urban heat island".
From that, it's easy to conclude that as cities become bigger and as
towns become cities and villages become towns, that there is a
widespread impact on urban records from changes in landscape, which
have to be considered before you can back out what portion is due to
increased GHG.

One of the main IPCC creeds is that the urban heat island effect has
a negligible impact on large-scale averages such as CRU or GISS. The
obvious way of proving this would seem to be taking measurements on an
urban transect and showing that there is no urban heat island. Of
course, Jones and his associates can't do that because such transects
always show a substantial urban heat island. So they have to resort to
indirect methods to provide evidence of "things unseen", such as Jones
et al 1990, which we've discussed in the past.

The newest entry in the theological literature is Parker (2004, 2006),
who, once again, does not show the absence of an urban heat island by
direct measurements, but purports to show the absence of an effect on
large-scale averages by showing that the temperature trends on calm
days is comparable to that on windy days. My first reaction to this,
and I'm sure that others had the same reaction was: well, so what? Why
would anyone interpret that as evidence one way or the other on UHI?

He goes on to take the study apart in detail, but I think most of you can see that the methodology makes absolutely zero sense unless one is desperately trying to toe the party line and win points with AGW supporters by finding some fig leaf to cover up this urban heat island problem.  By the way, plenty of people have performed the analysis the logical way we discussed first, and have shown huge heat island effects:

[Image: Uhi]  (Click for a larger view)

The bottom axis, by the way, is a "sky-view" metric I had not seen before; it is a measurement of urban geometry.  Effectively, the more urbanized the area and the more tall buildings around you creating a canyon effect, the lower the sky-view fraction.  Note that no one gets a number for the urban heat island effect of less than 1 degree C, and many hover around 6 degrees (the temperature difference between an urban location and the surrounding rural countryside).  Just a bit higher than the 0.2C assumed by the IPCC.  Why would they assume such a low number in the face of strong evidence?  Because assuming a higher number would reduce historical warming numbers, silly.

Oh, and the IPCC argues that the measurement points it uses around the world are all rural locations, so urban heat island corrections are irrelevant.  Below are some sample photos of USHCN sites, which are these supposedly rural sites that are used in the official historical warming numbers.  By the way, these US sites are probably better than what you would find anywhere else in the world.  (All pictures from surfacestations.org.)  As always, you can click for a larger view.

[Image: Marysville_issues1]

[Image: Forestgrove]

[Image: Tahoe_city3]

[Image: Petaluma_east]

You can help with the effort of documenting all the US Historical Climate Network (USHCN) stations.  See my post here -- I have already done two, and it's fun!

Climate Scavenger Hunt (No Climate Expertise Required)

Anthony Watts is offering an opportunity to help out climate science and participate in something of a climate scavenger hunt.  What is considered the most "trustworthy" temperature history of the US comes from a series of temperature measurement points called the US Historical Climate Network (USHCN).  There are perhaps 20-25 such measurement points in each state, usually in smaller towns and more remote spots.  Some of these stations are well located, while others are not -- having been encroached upon by the urban heat islands of growing towns, or having been placed carelessly (see here and here for examples of inexcusably bad installations that are currently part of the US historical temperature record).

Historically, climate scientists have applied statistical corrections to try to take these biasing effects into account.  Unfortunately, these statistical methods are blind to installation quality.  Watts is trying to correct that by creating a photo database of these installations, with comments by reviewers about the installation and potential local biases.

He has created an online database at surfacestations.org, which he explains here.  Your faithful blogger Coyote actually contributed one of the early entries, and it was fun  -- a lot like geocaching but with more of a sense of accomplishment, because it was contributing to science.

So why is it a scavenger hunt?  Well, my son had a doubleheader in Prescott, AZ, which I saw was near the Prescott USHCN station.  Here is what I began with, from the official listing:

PRESCOTT (34.57°N, 112.44°W; 1586 m)

That looks easy -- latitude and longitude.  Well, I stuck it in Google Maps and found this.  It turns out on satellite view that there is nothing there.  So I then asked around at the state climatologist's office -- do you know the address of this station?  Nope.  So I zoomed out a bit and started doing some local business searches in Google Maps around the original lat/long.  I was looking for government property -- fire stations, ranger stations, airports, etc.  These are typically the locations of such stations.  The municipal water treatment plant to the east looked good.  So we drove by, found it in about ten minutes, and took our pictures.  My entry is here.
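
For anyone trying the same thing, a trivial helper like the following can confirm that a candidate spotted on the satellite view is actually close to the listed coordinates.  Only the Prescott listing (34.57N, 112.44W) comes from the USHCN metadata quoted above; the candidate coordinates are made up for illustration:

```python
# Great-circle distance check between the listed station coordinates and a
# candidate location eyeballed from the satellite view.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/long points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

listed = (34.57, -112.44)         # USHCN listing for PRESCOTT (west longitude is negative)
candidate = (34.5745, -112.4312)  # hypothetical spot picked off the satellite view

print(f"{distance_km(*listed, *candidate):.2f} km from the listed coordinates")
```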

Not only was it fun, but this is important work.  In trying to find some stations in several states, I actually called the offices of the local state climatologists (most states have one).  I have yet to find one that had any idea where these installations were beyond the lat/long points in the database.  If we are going to make trillion-dollar political choices based on the output of this network, it is probably a good idea to understand it better.