Posts tagged ‘temperature’

Irony of Phone Design

My last phone was a Droid Turbo (or some variant of that).  It was a tank (and btw the battery was so large it would last a week).  It was also butt-ugly, but you could drop that thing from an airplane and it would probably keep working.  I never bothered with a case.

My new phone is a Galaxy S8.  It is probably, looks-wise, the acme of phone design right now and the polar opposite in attractiveness from the Droid Turbo.  But it is literally almost all glass.  The front is glass.  The back is glass.  The sides, due to the curved bezel, are mostly glass.  If you drop this thing you are going to hit -- wait for it -- glass.  I was changing cases on it and dropped it from a height of no more than three feet, and both the front and back glass shattered.  So you MUST put this expensive phone in a relatively bulky case.  You can have a slim case that may or may not protect the screen and sort of retains some of the feel of the curved bezel, or a bulky case that probably will protect the phone but makes the entire phone design moot.

My point is that companies seem to be designing phones for how good they look and feel in the Verizon store**, rather than how they will actually look bundled up in a large case in real life.  Once you provide reasonable life-protection for the S8, all its expensive design features are covered up.

One thing I have learned during this experience is that the vast majority of the millennials who rate cell phones on review sites like Engadget are wildly over-influenced by aesthetics.  For example, they all seem to downgrade phones that have larger bezels and metal rather than glass packaging, regardless of reliability.  I am still looking for a site that publishes a good list of drop test results and ratings.  I don't think I will buy another phone without seeing these results (I was considering a Pixel 2 until I saw its horrible drop results).  I would also like to see someone who grades phone aesthetics in the sort of cases we are all going to put on them.  Honestly, if I had time I would probably start my own review site focused on real-world use, emphasizing characteristics like reliability, repair costs, drop test results, and battery life.

 

** For a long, long, long time, TV manufacturers ruined TV pictures so they would look better in a store.  Every TV you could buy, at least in the pre-LCD era, had super-high color temperatures shifted way up into the blues.  The colors looked like crap in a dark room watching a movie, but the picture appeared brighter in the TV showroom.  Back in the day, one of the first things one would do with a good TV if one was a movie snob was to get the TV color calibrated or look for a TV that had a color temperature setting.

Keeping Cocktails Cold Without Dilution

For many of you, this will be a blinding glimpse of the obvious, but I see so many dumb approaches to cooling cocktails being pushed that I had to try to clear a few things up.

First, a bit of physics.  Ice cubes cool your drink in two ways.   First and perhaps most obviously, the ice is colder than your drink.  Put any object that is 32 degrees in a liquid that is 72 degrees and the warmer liquid will transfer heat to the cooler object.  The object you dropped in will warm and the liquid will cool and their temperatures will tend to equilibrate.  The exact amount that the liquid will cool depends on their relative masses, the heat carrying capacity of each material, and the difference in their temperatures.

However, for all but the most unusual substances, this cooling effect will be minor in comparison with the second effect, the phase change of the ice.  Phase changes in water consume and liberate a lot of heat. I probably could look up the exact amounts, but the heat absorbed by water going from 32 degree ice to 33 degree water is way more than the heat absorbed going from that now 33 degree water to room temperature.
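To put rough numbers on this (a quick Python sketch; the physical constants are standard textbook values, while the 30 gram cube and 72 degree drink are made-up illustrative figures):

    # Heat budget for one 30 g ice cube in a 72 F (22 C) drink.
    LATENT_HEAT_FUSION = 334.0    # joules per gram to melt ice at 32 F / 0 C
    SPECIFIC_HEAT_WATER = 4.18    # joules per gram per degree C

    ice_g = 30.0
    melt = ice_g * LATENT_HEAT_FUSION                    # heat absorbed melting the cube
    warm = ice_g * SPECIFIC_HEAT_WATER * (22.0 - 0.0)    # heat absorbed warming the meltwater

    print(f"melting: {melt:,.0f} J   warming meltwater: {warm:,.0f} J")
    print(f"the phase change absorbs {melt / warm:.1f}x as much heat")

The melt step absorbs roughly 3.6 times as much heat as warming that same water all the way back up to drink temperature, which is why ice that cannot melt does so little.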

Your drink needs to be constantly chilled, even if it starts cold, because most glasses are not very good insulators.  Pick up the glass -- is the glass cold from the drink?  If so, this means the glass is a bad insulator.  If it were a good insulator, the glass would be room temperature on the outside even if the drink were cold.  The glass will absorb some heat from the air, but air is not really a great conductor of heat unless it is moving.  But when you hold the glass in your hand, you are making a really good contact between your drink and an organic body that is essentially circulating near-100 degree fluid around it.  Your body is pumping heat into your cocktail.

Given this, let's analyze two common approaches to supposedly cooling cocktails without excessive dilution:

  1. Cold rocks.   You put these things in the freezer and put them in your drink to keep it cold.  Well, this certainly will not dilute the drink, but it also will not keep it very cold for long.   Remember, the equilibration of temperatures between the drink and the object in it is not the main source of heat absorption -- the phase change is -- and the rocks are not going to change phase in your drink.  Perhaps if you cooled the rocks in liquid nitrogen?  I don't know.
  2. Large round ice balls.  There is nothing more attractive in my cocktail than a perfect round ice ball.  A restaurant here in town called the Gladly has a way of making these beautiful round flaw-free ice balls that look like they are Steuben glass.  The theory is that with a smaller surface-to-volume ratio, the ice ball will melt slower (a quick sketch of the arithmetic follows this list).  Which is probably true, but all this means is that the heat transfer is slower and the cooling is less.   But again, the physics should be roughly the same -- it is going to cool mostly in proportion to how much it melts.  If it melts less, it cools less.  I have a sneaking suspicion that bars have bought into this ice ball thing to mask tiny cocktails -- I have been to several bars which have come up with ice balls or cylinders that are maybe 1 mm smaller in diameter than the glass, so that a large glass holds about an ounce of cocktail.
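To see how much difference shape makes, here is a quick surface-area comparison (the 60 mL ball and the four 15 mL cubes are made-up illustrative sizes):

    import math

    V = 60.0                                  # cm^3 of ice in total
    r = (3 * V / (4 * math.pi)) ** (1 / 3)    # radius of a single sphere
    sphere_area = 4 * math.pi * r ** 2

    s = (V / 4) ** (1 / 3)                    # side of each of four equal cubes
    cube_area = 4 * 6 * s ** 2

    print(f"sphere: {sphere_area:.0f} cm^2   cubes: {cube_area:.0f} cm^2")
    # ~74 vs ~146 cm^2: the ball presents about half the surface area,
    # so it melts -- and therefore cools -- more slowly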

I will not claim to be an expert but I like my bourbon drinks cold and have adopted this strategy -- perhaps you have others.

  1. Keep the bottles chilled.   I keep Vodka in the freezer and bourbon and a few key mixers in the refrigerator.   It is much easier to keep something cool than to cool it the first time, and this is a good dilution-free approach to the initial cooling.  I don't know if this sort of storage is problematic for the liquor -- I have never found any issues.
  2. Keep your drinking glass in the freezer.  Again, it will warm in your hand but an initially warm glass is going to pump heat into whatever you pour into it.
  3. Use a special glass.   I have gone through two generations on this.  My first generation was to use a double wall glass with an air gap. This works well and you can find many choices on Amazon.  Then my wife found some small glasses at Tuesday Morning that were double wall but have water in the gap.  You put them in the freezer and not only does the glass get cold but the water in the middle freezes.  Now I can get some phase change cooling in my cocktail without dilution.  You have to get used to holding a really cold glass but in Phoenix we have no complaints about such things.

Things I don't know but might work:  I can imagine you could design encapsulated ice cubes, such as water in a glass sphere.  Don't know if anyone makes these.  There are similar products with gel in them that freezes, and double wall glasses with gel.  I do not know if the phase change in the gel is better or worse for heat absorption than phase change of water.  I have never found those cold packs made of gel as satisfactory as an ice pack, but that may be just a function of size.  Anyone know?

Update:  I believe this is what I have, though since we bought them at Tuesday Morning their provenance is hard to trace.  They are small, but if you are sipping straight bourbon or scotch this is way more than enough.

Postscript:  I was drinking Old Fashioneds for a while but switched to a straight mix of Bourbon and Cointreau.  Apparently there is no name for this cocktail that I can find, though it's a bit like a Bourbon Sidecar without the lemon juice.  For all your cocktails, I would seriously consider getting a jar of these; they are amazing.  The Luxardo cherries are nothing like the crappy bright red maraschino cherries you see sold in grocery stores.

So Where Is The Climate Science Money Actually Going If Not To Temperature Measurement?

You are likely aware that the US, and many other countries, are spending billions and billions of dollars on climate research.  After drug development, it probably has become the single most lucrative academic sector.

Let me ask a question.  If you were concerned (as you should be) about lead in soil and drinking water and how it might or might not be getting into the bloodstream of children, what would you spend money on?  Sure, better treatments and new technologies for filtering and cleaning up lead.  But wouldn't the number one investment be in more and better measurement of environmental and human lead concentrations, and how they might be changing over time?

So I suppose if one were worried about the global rise in temperatures, one would look at better and more complete measurement of these temperatures.  Hah!  You would be wrong.

There are three main global temperature histories: the combined CRU-Hadley record (HADCRU), the NASA-GISS (GISTEMP) record, and the NOAA record. All three global averages depend on the same underlying land data archive, the Global Historical Climatology Network (GHCN). Because of this reliance on GHCN, its quality deficiencies will constrain the quality of all derived products.

The number of weather stations providing data to GHCN plunged in 1990 and again in 2005. The sample size has fallen by over 75% from its peak in the early 1970s, and is now smaller than at any time since 1919.

Well, perhaps they have focused on culling a large poor quality network into fewer, higher quality locations?  If they have been doing this, there is little or no record of that being the case.  To outsiders, it looks like stations just keep turning off.   And in fact, by certain metrics, the quality of the network is falling:

The collapse in sample size has increased the relative fraction of data coming from airports to about 50 percent (up from about 30 percent in the 1970s). It has also reduced the average latitude of source data and removed relatively more high-altitude monitoring sites.

Airports, located in the middle of urban centers by and large, are terrible temperature measurement points, subject to a variety of biases such as the urban heat island effect.  My son and I measured a difference of over 10 degrees Fahrenheit between the Phoenix airport and the outlying countryside in an old school project.  Folks who compile the measurements claim that they have corrected for these biases, but many of us have reasons to doubt that (consider this example, where an obviously biased station was still showing in the corrected data as the #1 warming site in the country).  I understand why we have spent 30 years correcting screwed-up, biased stations -- we need some stations with long histories and these are what we have (though many long-lived stations have been allowed to expire) -- but why haven't we been building a new, better-sited network?

Ironically, there has been one major investment effort to improve temperature measurement, and that is through satellite measurements.  We now use satellites for official measures of cloud cover, sea ice extent, and sea level, but the global warming establishment has largely ignored satellite measurement of temperatures.  For example, James Hansen (Al Gore's mentor and often called the father of global warming) strongly defended 100+ year old surface temperature measurement technology over satellites.  Oddly, Hansen was head, for years, of NASA's Goddard Institute for Space Studies (GISS), so one wonders why he resisted space technology in this one particular area.  Cynics among us would argue that it is because satellites give the "wrong" answer, showing a slower warming rate than the heavily manually adjusted surface records.

Global Temperature Update

I just updated my climate presentation with data through December of 2016, so given "hottest year evah" claims, I thought I would give a brief update with the data that the media seldom provides.  This is only a small part of my presentation, which I will reproduce for YouTube soon (though you can see it here at Claremont-McKenna).  In this post I will address four questions:

  • Is the world still warming?
  • Is global warming accelerating?
  • Is global warming "worse than expected"?
  • Coyote, How Is Your Temperature Prediction Model Doing?

Is the world still warming:  Yes

We will use two data sets.  The first is the land surface data set from the Hadley Center in England, the primary data set used by the IPCC.  Rather than average world absolute temperature, all these charts show the variation or "anomaly" of that absolute temperature from some historical average (the zero point of which is arbitrary).  The theory is that it is easier and more accurate to aggregate anomalies across the globe than it is to average the absolute temperature.  In all my temperature charts, unless otherwise noted, the dark blue is the monthly data and the orange is a centered 5-year moving average.

You can see the El Nino / PDO-driven spike last year.  Ocean cycles like El Nino are complicated, but in short, oceans hold an order of magnitude or two more heat than the atmosphere.  There are decadal cycles where oceans will liberate heat from their depths into the atmosphere, creating surface warming, and cycles where oceans bury more heat, cooling the surface.
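As an aside, for anyone who wants to reproduce this kind of chart, here is roughly how an anomaly series and a centered 5-year moving average get computed.  This is a Python sketch on random stand-in data; the 1961-1990 baseline is one common convention, not necessarily the Hadley Center's:

    import numpy as np
    import pandas as pd

    # random noise standing in for real monthly absolute temperatures
    months = pd.date_range("1950-01", "2016-12", freq="MS")
    temps = pd.Series(np.random.normal(14.0, 0.3, len(months)), index=months)

    # one "normal" per calendar month, averaged over the baseline period
    base = temps["1961":"1990"]
    normals = base.groupby(base.index.month).mean()
    anomaly = temps - temps.index.month.map(normals).values

    # centered 5-year (60-month) moving average
    smooth = anomaly.rolling(window=60, center=True).mean()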

The other major method for aggregating global temperatures is using satellites.  I use the data from the University of Alabama in Huntsville.

On this scale, the El Nino peaks in 1999 and 2017 are quite obvious.  Which method, surface or satellites, gets a better result is a matter of debate.  Satellites are able to measure a larger area, but they are not actually measuring the surface; they are measuring temperatures in the lower troposphere (the troposphere's depth varies but ranges from the surface to 5-12 miles above the surface).  However, since most climate models and the IPCC show man-made warming being greatest in the lower troposphere, it seems a good place to measure.  Surface temperature records, on the other hand, are measuring exactly where we live, but can be widely spaced and are subject to a variety of biases, such as the urban heat island effect.  The station below in Tucson, located in a parking lot and surrounded by buildings, was an official part of the global warming record until my picture became widely circulated and embarrassed them into closing it.

This argument about dueling data sets goes on constantly, and I have not even mentioned the issues of manual adjustments in the surface data set that are nearly the size of the entire global warming signal.  But we will leave these all aside with the observation that all data sources show a global warming trend.

Is Global Warming Accelerating?  No

Go into Google and search "global warming accelerating".  Or just click that link.  There are a half-million results about global warming accelerating.  Heck, Google even has one of those "fact" boxes at the top that says it is:

It is interesting by the way that Google is using political advocacy groups for its "facts" nowadays.

Anyway, if global warming is so obviously accelerating that Google can list it as a fact at the top of its search page, it should be obvious from the data, right?  Well let's look.  First, here is the satellite data since I honestly believe it to be of higher quality than the surface records:

This is what I call the cherry-picking chart.  Everyone can find a peak for one end of their time scale and a valley for the other and create whatever story they want.  In economic analysis, to deal with the noise and cyclicality, one will sometimes see economic growth measured peak-to-peak, meaning from one cyclical peak to the next, as a simple way to filter out some of the cyclicality.  I have done the same here, taking my time period as the roughly 18 years from the peak of the 1999 El Nino to the peak of the recent El Nino in 2017.  The exact data used for the trend is shown in darker blue.  You can decide if I have been fair.
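Mechanically, the peak-to-peak trend is just an ordinary least-squares slope computed over a window bounded by the two El Nino peaks.  A minimal Python sketch (the function and the example dates are mine; the actual peak months would be picked off the data):

    import numpy as np

    def trend_c_per_century(years, temps, start, end):
        """OLS slope over [start, end], converted to degrees C per century."""
        window = (years >= start) & (years <= end)
        return np.polyfit(years[window], temps[window], 1)[0] * 100.0

    # e.g., with fractional years and monthly anomalies:
    # trend_c_per_century(years, anomaly, 1999.0, 2017.0)   # Nino peak to Nino peak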

The result for this time period is a Nino-to-Nino warming trend of 0.11C per century.  Now let's look at the years before this.

So the trend for 36 years is 1.2C per century, but the trend for the last half of this period is just 0.11C per century.  That does not look like acceleration to me.  One might argue that it may again accelerate in the future, but I cannot see how so many people blithely treat it as a fact that global warming has been accelerating when it clearly has not.  But maybe it's just because I picked those darn satellites.  Maybe the surface temperatures show acceleration?

Nope.  Though the slowdown is less dramatic, the surface temperature data nevertheless shows the same total lack of acceleration.

Is Global Warming "Worse Than Expected"?  No

The other meme one hears a lot is that global warming is "worse than expected".  Again, try the Google search I linked.  Even more results, over a million this time.

To tackle this one, we have to figure out what was "expected".  Al Gore had his crazy forecasts in his movie.  One sees all kinds of apocalyptic forecasts in the media.  The IPCC has forecasts, but it tends to change them every five years and seldom goes back and revisits them, so those are hard to use.  But we have one from James Hansen, often called the father of global warming and Al Gore's mentor, from way back in 1988.  His seminal testimony in that year in front of Congress really put man-made global warming on the political map.  Here is the forecast he presented:

Unfortunately, in his scenarios, he was moving two different variables (CO2 levels and volcanoes), so it is hard to tell which one applies best to the actual history since then, but we are almost certainly between his A and B forecasts.  A lot of folks have spent time trying to compare actual temperatures to these lines, but it is very hard.  The historical temperature record Hansen was using has been manually adjusted several times since, so the historical data does not match, and it is hard to get the right zero point.  But we can eliminate the centering issues altogether if we just look at slopes -- that is all we really care about anyway.  So I have reproduced Hansen's data in the chart on the left and calculated the warming slopes in his forecast:

As it turns out, it really does not matter whether we choose the A or B scenario from Hansen, because both have about the same slope -- between 2.8C and 3.1C per century of warming from 1986 (which appears to be the actual zero date of Hansen's forecast) and today.  Compare this to 1.8C of actual warming in the surface temperature record for this same period, and 1.2C in the satellite record.  While we have seen warming, it is well under the rates predicted by Hansen.

This is consistent with what the IPCC found in their last assessment when they evaluated past forecasts.  The colored areas are the IPCC forecast ranges from past forecasts, the grey area was the error bar (the IPCC is a bit inconsistent about when it shows error bars, seemingly including error bands only when it helps their case).  The IPCC came to the same result as I did above:  that warming had continued but was well under the pace that was "expected" from past forecasts.

By the way, the reason that many people may think that global warming is accelerating is that media mentions of global warming and severe weather events have been accelerating, leaving the impression that things are changing faster than they truly are.  I wrote an article about this effect here at Forbes.  In it I began:

The media has two bad habits that make it virtually impossible for consumers of, say, television news to get a good understanding of trends:

  1. They highlight events in the tail ends of the normal distribution and authoritatively declare that these data points represent some sort of trend or shift in the mean
  2. They mistake increases in their own coverage of certain phenomena for an increase in the frequency of the phenomena themselves.

Coyote, How Is Your Temperature Prediction Model Doing?  Great, thanks for asking

Ten years ago, purely for fun, I attempted to model past temperatures using only three inputs:  a decadal cyclical sine wave, a long-term natural warming trend out of the little ice age (of 0.36C per century), and a man-made warming trend really kicking in around 1950 (of 0.5C per century).  I used this regression as an attribution model, to see how much of past warming might be due to man (I concluded about half of 20th century warming may be due to manmade effects).  But I keep running it to test its accuracy, again just for fun, as a predictive tool.  Here is where we are as of December of 2016 (in this case the orange line is my forecast line):

Still hanging in there:  Despite the "hottest year evah" news, temperatures in December were exactly on my prediction line.  Here is the same forecast with the 5-year centered moving average added in light blue:

In Case You Were Tempted To Have Any Respect for Arizona's State-run Universities: Professor Says Human Extinction in 10 Years is "A Lock"

From New Zealand:

There's no point trying to fight climate change - we'll all be dead in the next decade and there's nothing we can do to stop it, a visiting scientist claims.

Guy McPherson, a biology professor at the University of Arizona, says the human destruction of our own habitat is leading towards the world's sixth mass extinction.

Instead of fighting, he says we should just embrace it and live life while we can.

"It's locked down, it's been locked in for a long time - we're in the midst of our sixth mass extinction," he told Paul Henry on Thursday.

....

"I can't imagine there will be a human on the planet in 10 years," he says.

"We don't have 10 years. The problem is when I give a number like that, people think it's going to be business as usual until nine years [and] 364 days."

He says part of the reason he's given up while other scientists fight on is because they're looking at individual parts, such as methane emissions and the melting ice in the Arctic, instead of the entire picture.

"We're heading for a temperature within that span that is at or near the highest temperature experienced on Earth in the last 2 billion years."

Instead of trying to fix the climate, Prof McPherson says we should focus on living while we can.

"I think hope is a horrible idea. Hope is wishful thinking. Hope is a bad idea - let's abandon that and get on with reality instead. Let's get on with living instead of wishing for the future that never comes.

Uncertainty Intervals and the Olympics

If I had to pick one topic or way of thinking that engineers and scientists have developed but other folks are often entirely unfamiliar with, I might pick the related ideas of error, uncertainty, and significance.  A good science or engineering education will spend a lot of time on assessing the error bars for any measurement, understanding how those errors propagate through a calculation, and determining which digits of an answer are significant and which ones are, as the British might say, just wanking.

It is quite usual to see examples of the media getting notions of error and significance wrong.  But yesterday I saw a story where someone actually dusted these tools off and explained why the Olympics don't time events to the millionths of a second, despite clocks that are supposedly that accurate:

Modern timing systems are capable of measuring down to the millionth of a second—so why doesn’t FINA, the world swimming governing body, increase its timing precision by adding thousandths-of-seconds?

As it turns out, FINA used to. In 1972, Sweden’s Gunnar Larsson beat American Tim McKee in the 400m individual medley by 0.002 seconds. That finish led the governing body to eliminate timing by a significant digit. But why?

In a 50 meter Olympic pool, at the current men’s world record 50m pace, a thousandth-of-a-second constitutes 2.39 millimeters of travel. FINA pool dimension regulations allow a tolerance of 3 centimeters in each lane, more than ten times that amount. Could you time swimmers to a thousandth-of-a-second? Sure, but you couldn’t guarantee the winning swimmer didn’t have a thousandth-of-a-second-shorter course to swim. (Attempting to construct a concrete pool to any tighter a tolerance is nearly impossible; the effective length of a pool can change depending on the ambient temperature, the water temperature, and even whether or not there are people in the pool itself.)

By this standard, even timing to the hundredth of a second is not significant.  And all this is even before talk of currents in the Olympic pool distorting times.
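The arithmetic is easy to check.  A few lines of Python, using an approximate world-record 50m time of about 20.9 seconds:

    tolerance_m = 0.03                             # FINA's allowed lane-length tolerance
    speed_m_per_s = 50.0 / 20.9                    # ~2.39 m/s, i.e. 2.39 mm per millisecond
    print(f"{tolerance_m / speed_m_per_s:.4f} s")  # ~0.0125 s of timing uncertainty

That 0.0125 seconds of course-length uncertainty is itself larger than a hundredth of a second, which is the point.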

US Temperature Trends, In Context

There was some debate a while back about a temperature chart some conservative groups were passing around.

Obviously, on this scale, global warming does not look too scary.  The question is, is this scale at all relevant?  I could re-scale the 1929 stock market drop to a chart that goes from Dow 0 to, say, Dow 100,000 and the drop would hardly be noticeable.  That re-scaling wouldn't change the fact that the 1929 stock market crash was incredibly meaningful and had large impacts on the economy.  Kevin Drum wrote about the temperature chart above,

This is so phenomenally stupid that I figured it had to be a joke of some kind.

Mother Jones has banned me from commenting on Drum's site, so I could not participate in the conversation over this chart.  But I thought about it for a while, and I think the chart's author perhaps has a point but pulled it off poorly.  I am going to take another shot at it.

First, I always show the historic temperature anomaly on the zoomed-in scale that you are used to seeing, e.g.  (as usual, click to enlarge)


The problem with this chart is that it is utterly without context just as much as the previous chart.  Is 0.8C a lot or a little?  Going back to our stock market analogy, it's a bit like showing the recent daily fluctuations of the Dow on a scale from 16,300 to 16,350.  The variations will look huge, much larger than either their percentage variation or their meaningfulness to all but the most panicky investors.

So I have started including the chart below as well.  Note that it is in Fahrenheit (vs. the anomaly chart above in Celsius) because US audiences have a better intuition for Fahrenheit, and it is only for the US vs. the global chart above.  It shows the range of variation in US monthly averages, with the orange being the monthly average daily maximum temperature across the US, the dark blue showing the monthly average daily minimum temperature, and the green the monthly mean.  The dotted line is the long-term linear trend.


Note that these are the US averages -- the full range of daily maximums and minimums for the US as a whole would be wider and the full range of individual location temperatures would be wider still.   A couple of observations:

  • It is always dangerous to eyeball charts, but you should be able to see what is well known to climate scientists (and not just some skeptic fever dream) -- that much of the increase over the last 30 years (and even 100 years) of average temperatures has come not from higher daytime highs but from higher nighttime minimum temperatures.  This is one reason skeptics often roll their eyes at attribution of 15 degree summer daytime record heat waves to global warming, since the majority of the global warming signal can actually be found in winter and nighttime temperatures.
  • The other reason skeptics roll their eyes at attribution of 15 degree heat waves to 1 degree long-term trends is that this one degree trend is trivial compared to the natural variation found in intra-day temperatures, between seasons, or even across years.  It is for this context that I think this view of temperature trends is useful as a supplement to traditional anomaly charts (in my standard presentation, I show this chart scale once and the standard anomaly chart scale, shown further up, about 30 times, so that utility has limits).

Revisiting James Hansen's 1988 Global Warming Forecast to Congress

(Cross-posted from Climate Skeptic)

I want to briefly revisit Hansen's 1988 Congressional forecast.  Yes, I and many others have churned over this ground many times, but I think I now have a better approach.   The typical approach has been to overlay some actual temperature data set on top of Hansen's forecast (e.g. here).  The problem is that with revisions to all of these data sets, particularly the GISS reset in 1999, none of these data sets match what Hansen was using at the time.  So we often get into arguments on where the forecast and actuals should be centered, etc.

This might be a better approach.  First, let's start with Hansen's forecast chart (click to enlarge).


Folks have argued for years over which CO2 scenario best matches history.  I would argue it is somewhere between A and B, but you will see in a moment that it almost does not matter.    It turns out that both A and B have nearly the same regressed slope.

The approach I took this time was not to worry about matching exact starting points or reconciling different anomaly base periods.  I merely took the slope of the A and B forecasts and compared it to the slope over the last 30 years of a couple of different temperature databases (Hadley CRUT4 and the UAH v6 satellite data).

The only real issue is the start year.  The analysis is not very sensitive to the year, but I tried to find a logical start.  Hansen's chart is frustrating because his forecasts never converge exactly, even 20 years in the past.  However, they are nearly identical in 1986, a logical base year if Hansen was giving the speech in 1988, so I started there.  I didn't do anything fancy on the trend lines, just let Excel calculate the least squares regression.  This is what we get (as usual, click to enlarge).


I think that tells the tale  pretty clearly.   Versus the gold standard surface temperature measurement (vs. Hansen's thumb-on-the-scale GISS) his forecast was 2x too high.  Versus the satellite measurements it was 3x too high.
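For anyone who wants to replicate this, the whole method reduces to a few lines.  Here is a Python equivalent of the Excel regression; the variable names are mine, and the series would be the digitized forecasts and the observed anomalies from 1986 on:

    import numpy as np

    def slope_c_per_century(years, temps):
        # least squares linear fit; the first coefficient is the slope per year
        return np.polyfit(years, temps, 1)[0] * 100.0

    # forecast = slope_c_per_century(hansen_b_years, hansen_b_temps)
    # actual = slope_c_per_century(hadcrut_years, hadcrut_temps)
    # print(f"forecast is {forecast / actual:.1f}x the actual trend")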

The least squares regression approach probably under-estimates the A scenario growth rate, but that is OK; that just makes the conclusion more robust.

By the way, I owe someone thanks for the digitized numbers behind Hansen's chart, but it has been so many years since I downloaded them that I honestly forget who they came from.

Immigrants and Crime

Virtually every study done points to the fact that immigrants, even illegal immigrants, are less prone to crime than American citizens.  That is why immigration opponents must rely on repetition of lurid single examples to try to make their case, a bit like global warming advocates pointing to individual heat waves as a substitute for having any warming show up in the recent global temperature metrics.

From the Foundation for Economic Education

With few exceptions, immigrants are less crime prone than natives or have no effect on crime rates. As described below, the research is fairly one-sided.

There are two broad types of studies that investigate immigrant criminality. The first type uses Census and American Community Survey (ACS) data from the institutionalized population and broadly concludes that immigrants are less crime prone than the native-born population. It is important to note that immigrants convicted of crimes serve their sentences before being deported with few exceptions.

However, there are some potential problems with Census-based studies that could lead to inaccurate results. That’s where the second type of study comes in. The second type is a macro level analysis to judge the impact of immigration on crime rates, generally finding that increased immigration does not increase crime and sometimes even causes crime rates to fall.

Butcher and Piehl examine the incarceration rates for men aged 18-40 in the 1980, 1990, and 2000 Censuses. In each year, immigrants are less likely to be incarcerated than natives with the gap widening each decade. By 2000, immigrants have incarceration rates that are one-fifth those of the native-born.

There is a lot more at the link.

"Man-Made" Climate Change

Man has almost certainly warmed the world by some tenths of a degree C with his CO2, though much of this warming has hit night-time lows rather than daily highs.  Anyway, while future temperature rise forecasts are often grossly exaggerated by absurdly high assumptions of positive feedback, there is at least a kernel of fact in there that CO2 is likely warming the world somewhat.

However, the popular "science" on climate change is often awful, positing, for example, that hurricanes are being increased by man right in the midst of the longest hurricane drought we have seen in the US for a hundred years.

Inevitably, the recent severe California droughts have been blamed on manmade CO2.  As a hopefully useful adjunct to this debate, I have annotated a recent chart from the San Jose Mercury News on the history of California droughts to reflect the popular global warming / climate change narrative.  You be the judge of the reasonableness:


LMAO -- My Kid Learns About the Cold

My Arizona-raised, thin-blooded son was convinced that he had no problem with cold weather when he departed for Amherst College several years ago.  That, of course, was based on exposure to cold via a couple of ski trips.  What he likely underestimated was the impact of cold that lasts for like 6 freaking months.

So it was with good-natured parental fondness for my child that I was LMAO when I read this:

Amherst, MA has coldest February in recorded history.  Or here if you hit a paywall.

The average temperature in Amherst in the past month was 11.2 degrees, the lowest average monthly temperature since records were first kept in town in 1835. It broke the previous record of 11.6 degrees set in 1934, according to Michael A. Rawlins, an assistant professor in the department of geosciences and manager of the Climate System Research Center at the University of Massachusetts.

As it turns out, I have made a climate presentation in Amherst so I actually have historic temperature charts.  It is a good example of two things:

  1. While Amherst has been warming, it was warming as much or more before 1940 (i.e. before the era of substantial CO2 emissions) as after
  2. Much of the recent warming has manifested as increases in daily minimum temperatures, rather than an increase in daily maximum temperatures.  This is as predicted by warming models, but poorly communicated and understood.  Possibly because fewer people would be bent out of shape if they knew that warming translated into warmer nights rather than higher highs in the daytime.


Skeptics: Please Relax on the Whole "Greatest Scientific Fraud of All Time" Thing

Climate skeptics are at risk of falling into the same exaggeration-trap as do alarmists.

I have written about the exaggeration of past warming by questionable manual adjustments to temperature records for almost a decade.  So I don't need to be convinced that these adjustments 1) need to be cleaned up and 2) likely exaggerate past warming.

However, this talk of the "Greatest Scientific Fraud of All Time" is just crazy.  If you are interested, I urge you to read my piece from the other day for a more balanced view.  Don't stop reading without checking out #4.

These recent articles are making it sound like alarmist scientists are simply adding adjustments to past temperatures for no reason.  But there are many perfectly valid reasons surface temperature measurements have to be manually adjusted.  It is a required part of the process.  Just as the satellite data must be adjusted as well, though for different things.

So we should not be suspicious of adjustments per se.  We should be concerned about them, though, for a number of reasons:

  • In many parts of the world, like in the US, the manual adjustments equal or exceed the measured warming trend.  That means the "signal" we are measuring comes entirely from the adjustments.  That is, to put it lightly, not ideal.
  • The adjustments are extremely poorly documented and impossible for any third party to replicate (one reason the satellite record may be more trustworthy is all the adjustment code for the satellites is open source).
  • The adjustments may have a bias.  After all, most of the people doing the adjustments expect to see a warming trend historically, and so consider lack of such a trend to be an indicator the data is wrong and in need of adjustment.  This is not a conspiracy, but a normal human failing and the reason why the ability to replicate such work is important.
  • The adjustments do seem to be very aggressive in identifying any effects that might have artificially created a cooling trend but lax in finding and correcting effects that might have artificially created a warming trend.  First and foremost, the changing urban heat island effect in growing cities seems to be under-corrected  (Again there is debate on this -- the proprietors of the model believe they have fixed this with a geographic normalizing, correcting biases from nearby thermometers.  I and others believe all they are doing is mathematically smearing the error over a larger geography).

Again, I discussed all the pros and cons here.  If pushed to the wall, I would say perhaps half of the past warming in the surface temperature record is due to undercorrection of warming biases or overcorrection of cooling biases.

Adjusting the Temperature Records

I have been getting inquiries from folks asking me what I think about stories like this one, where Paul Homewood has been looking at the manual adjustments to raw temperature data and finding that the adjustments actually reverse the trends from cooling to warming.  Here is an example of the comparisons he did:

Raw, before adjustments:


After manual adjustments:


I actually wrote about this topic a few months back, and rather than rewrite the post I will excerpt it below:

I believe that there is both wheat and chaff in this claim [that manual temperature adjustments are exaggerating past warming], and I would like to try to separate the two as best I can.  I don't have time to write a well-organized article, so here is just a list of thoughts

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal-to-noise issue here that mainstream climate scientists have always seemed insufficiently concerned about.  For example, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal.  When the entire signal one is trying to measure is equal to the manual adjustments one is making to measurements, it probably makes sense to put a LOT of scrutiny on the adjustments.  (This is a post from 7 years ago discussing these adjustments.  Note that these adjustments are smaller than the current ones in the database, as they have since been increased, though I cannot find a similar chart any more from the NOAA discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend.  These changes in adjustments have not been well-explained.  In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below.  Here is the before and after animation (pre-2000 NOAA US temperature history vs. post-2000).  History has been cooled and modern temperatures have been warmed from where they were being shown previously by the NOAA.  This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when those folks preparing the chart all believe that temperatures are going up, so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data -- for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (i.e. if you average 24 hours starting and stopping at noon).  This is called Time of Observation, or TOBS.  When I first encountered this, I was just sure it had to be BS.  For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month.  Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue (a sketch of this sort of simulation appears after this list).  I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they come up with these adjustments).   I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data.  Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA's own specification.
    • Stations move over time.  A simple example is if it is on the roof of a building and that building is demolished, it has to move somewhere else.  In an extreme example the station might move to a new altitude or a slightly different micro-climate.  There are adjustments in the database for these sorts of changes.  Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again the authors of these adjustments bring criticism on themselves for not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.)   These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable  (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development  (here is one example -- this at one time was the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson.)   Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not the climate as a whole.  The effect is undeniable -- my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project where every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact last time I looked at it the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes that the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures in satellites.  These devices may have their own issues, but they are not subject to urban heat biases or location biases and further actually measure most of the Earth's surface, rather than just individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990's.
  6. As is usual in such debates, I find words like "fabrication", "lies",  and "myth" to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.
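For the curious, here is the sort of Monte Carlo simulation mentioned in item 4, sketched in Python rather than a spreadsheet.  Every number in it (diurnal amplitude, day-to-day variance, reset hours) is invented purely to illustrate the mechanism:

    import numpy as np

    rng = np.random.default_rng(0)

    def monthly_mean(reset_hour, days=31, trials=200):
        """Mean of daily (max+min)/2 when a min/max thermometer is reset
        once per day at reset_hour, averaged over many simulated months."""
        results = []
        for _ in range(trials):
            # one base temperature per day (day-to-day weather), repeated hourly
            base = np.repeat(rng.normal(20.0, 3.0, days + 2), 24)
            hours = np.arange(base.size)
            # diurnal cycle: warmest around 3 pm, coolest around 3 am
            temps = base + 5.0 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24)
            mids = []
            for d in range(1, days + 1):
                # the 24 hours ending at the reset get credited to day d
                window = temps[d * 24 + reset_hour - 24 : d * 24 + reset_hour]
                mids.append((window.max() + window.min()) / 2.0)
            results.append(np.mean(mids))
        return np.mean(results)

    # a hot afternoon just before a 5 pm reading can set the "max" for two
    # successive days, so the afternoon reset reads warm vs. a midnight reset
    print(monthly_mean(reset_hour=17) - monthly_mean(reset_hour=0))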

To these I will add a #7:  The notion that satellite results are somehow pure and unadjusted is just plain wrong.  The satellite data set takes a lot of mathematical effort to get right, something that Roy Spencer, who does this work (and is considered in the skeptic camp), will be the first to tell you.  Satellites have to be adjusted for different things.  They have advantages over ground measurement because they cover most all the Earth, they are not subject to urban heat biases, and they bring some technological consistency to the measurement.  However, the satellites used are constantly dying off and being replaced, orbits decay and change, and thus times of observation of different parts of the globe change [to their credit, the satellite folks release all their source code for correcting these things].   I have become convinced that the satellites, net of all the issues with both technologies, provide a better estimate, but neither is perfect.

The 2014 Temperature Record No One Is Talking About

Depending on what temperature data set you look at **, or on your trust in various manual adjustments in these data sets ***, 2014 may have beaten the previous world temperature record by 0.02C.  Interestingly, the 0.02C rise over the prior record set four years ago would imply (using only these two data points which warmists seem to want to focus on) a temperature increase of 0.5C per century, a few tenths below my prediction but an order of magnitude below the alarmists' predictions for future trends.

Anyway, whether there was an absolute record or not, there was almost certainly a different temperature record set -- the highest divergence to date in the modern measured temperatures from what the computer models predicted.  The temperature increase for the past 5 years was a full 0.17C less than predicted, the largest gap yet for the models in forward-prediction mode (as opposed to when they are used to backcast history).

 

** There are four or five or more data sets, depending on how you count them.   There are 2 major satellite data sets and 2-3 ground based data sets.  The GISS ground data set generally gives the largest warming trends, while the satellite data sets give the least, but all show some warming over the last 30 or so years (though most of this warming was before 1999).

*** The data sets are all full of manual adjustments of various sorts.  All of these are necessary.  For surface stations, the measurement points move and change technology.  For the satellites, orbits and instruments shift over time.  The worrisome feature of all these adjustments is that they are large as compared to the underlying warming signal being measured, so small changes in the adjustments can lead to large changes in the apparent trend.  Skeptics often charge that the proprietors of land data sets are aggressive about including adjustments that increase the apparent trend but reluctant to add similar adjustments (eg for urban heat islands) that might reduce the trend.  As a result, most of the manual adjustments increase the trend.  There is actually little warming trend in the raw data, and it only shows up after the adjustments.  It may be total coincidence, but the database run by the most ardent warmist is the GISS and it has the highest trend.   The database run by the most skeptical is the UAH satellite database and it shows the smallest trend.  Hmm.

California Drought Update -- Not Even Close to Worst Drought Ever

There is little trend evidence anywhere that climate is getting -- pick the word -- weirder, more extreme, out of whack, whatever.  In particular, name any severe weather category you can imagine, and actual data in trend charts likely will not show any recent trend.

The reason the average person on the street will swear you are a crazy denier for pointing such a thing out to them is that the media bombards them with news of nearly every 2+ sigma weather event, calling most of these relatively normal episodes "the worst ever".

A great example is the California drought.  Here is the rolling average 5-year precipitation chart for California.  Find the worst drought "ever".


I know no one trusts anyone else's data in public debates, but you can make these charts yourself at the NOAA site, just go here:  http://www.ncdc.noaa.gov/cag/.  The one record set was that 2013 had the lowest measured CA precipitation in the last century plus, so that was indeed a record bad year, but droughts are typically made up of multiple years of below average precipitation and by that measure the recent CA drought is the fourth or fifth worst.

By the way, Paul Homewood points out something that even surprised me, and I try not to be susceptible to the mindless media bad-news stampede:  California rainfall this year was close to normal.  And, as you can see, there is pretty much no trend over the last century-plus in California rainfall:


 

As discussed previously, let's add the proviso that rainfall is not necessarily the best metric of drought.  The Palmer drought index looks at moisture in soil and takes into account other factors like temperature and evaporation, and by that metric this CA drought is closer to the worst of the century, though certainly not what one would call unprecedented.  Also, there is a worsening trend in the Palmer data.


 

Update:  By the way, the fact that two measures of drought give us two different answers on the relative severity of the drought and on the trend in droughts is typical.   It makes a mockery of the pretense to certainty on these topics in the media.  Fortunately, I am not so invested in the whole thing that I can't include data that doesn't support my thesis.

Geeky Reflections -- Simulated Annealing

When I was an undergrad, my interest was in interfacing microcomputers with mechanical devices.  Most of what we did would be labelled "robotics" today, or at least proto-robotics (e.g. ripping the ultrasonic rangefinder out of a Polaroid camera, putting it on a stepper motor, and trying to paint a radar image of the room on a computer screen).

In doing this, we were playing around with S-100 bus computers (PC's were a bit in the future at that point) and I got interested in brute force approaches to solving the traveling salesman problem.  The way this is done is to establish some random points in x,y space and then connect them with a random path and measure the length of that path.  The initial random path is obviously going to be a terrible solution.  So you have the computer randomly flip flop two segments, and then you see if the resulting total distance is reduced.  If it is, then you keep the change and try another.

This will lead to a much shorter path, but often will not lead to the optimally shortest path.  The reason is that the result can get stuck in a local minimum that is not the optimum.  Essentially, to break out of this, you have to allow the solution to get worse first before it can get better.

The approach I was playing with was called simulated annealing.  Everything I said above is the same in this approach, but sometimes you let the program accept flip-flopped segments that yield a worse (i.e. longer) rather than better path.  The allowed amount worse is governed by a "temperature" that is slowly lowered.  Initially, at high temperatures, the solution can jump into most any solution, better or worse.  But as the "temperature" is lowered, the allowed amount of jumping into worse solutions is reduced.  Essentially, the system is much, much more likely than the previous approach to settle closer to the actual optimum.  This is roughly an analog of how annealing works in metals.  The code is ridiculously simple.   I don't remember it being much more than 100 lines in Pascal.
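Thirty years on, the same program fits comfortably in a page of Python.  This is a minimal sketch in the spirit of that old Pascal code, not a reconstruction of it; the cooling schedule and constants are arbitrary choices:

    import math
    import random

    random.seed(1)
    points = [(random.random(), random.random()) for _ in range(40)]

    def tour_length(order):
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    order = list(range(len(points)))
    length = tour_length(order)
    T = 1.0                                  # the simulated "temperature"
    while T > 1e-4:
        i, j = sorted(random.sample(range(len(order)), 2))
        candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # flip a segment
        delta = tour_length(candidate) - length
        # always accept improvements; accept worse tours with a probability
        # that shrinks as the temperature drops
        if delta < 0 or random.random() < math.exp(-delta / T):
            order, length = candidate, length + delta
        T *= 0.999                           # cool slowly
    print(f"final tour length: {length:.3f}")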

Anyway, if you lived through the above without falling asleep, the payoff is this site.  After 30 years of pretty much never thinking about simulated annealing again, I found Todd Schneider's blog which has a great visual overview of solving the travelling salesman problem with simulated annealing.  If you really want to visually see it work, go to the customizable examples at the bottom and set the iterations per map draw for about 100.  Then watch.  It really does look a bit like a large excited molecule slowly cooling.  Here is an example below but check out his site.


What is Normal?

I titled my very first climate video "What is Normal," alluding to the fact that climate doomsayers argue that we have shifted aspects of the climate (temperature, hurricanes, etc.) from "normal" without us even having enough historical perspective to say what "normal" is.

A more sophisticated way to restate this same point would be to say that natural phenomena tend to show various periodicities, and without observing nature through the whole of these cycles, it is easy to mistake short-term cyclical variations for long-term trends.

A paper in the journal Water Resources Research makes just this point using over 200 years of precipitation data:

We analyze long-term fluctuations of rainfall extremes in 268 years of daily observations (Padova, Italy, 1725-2006), to our knowledge the longest existing instrumental time series of its kind. We identify multidecadal oscillations in extremes estimated by fitting the GEV distribution, with approximate periodicities of about 17-21 years, 30-38 years, 49-68 years, 85-94 years, and 145-172 years. The amplitudes of these oscillations far exceed the changes associated with the observed trend in intensity. This finding implies that, even if climatic trends are absent or negligible, rainfall and its extremes exhibit an apparent non-stationarity if analyzed over time intervals shorter than the longest periodicity in the data (about 170 years for the case analyzed here). These results suggest that, because long-term periodicities may likely be present elsewhere, in the absence of observational time series with length comparable to such periodicities (possibly exceeding one century), past observations cannot be considered to be representative of future extremes. We also find that observed fluctuations in extreme events in Padova are linked to the North Atlantic Oscillation: increases in the NAO Index are on average associated with an intensification of daily extreme rainfall events. This link with the NAO global pattern is highly suggestive of implications of general relevance: long-term fluctuations in rainfall extremes connected with large-scale oscillating atmospheric patterns are likely to be widely present, and undermine the very basic idea of using a single stationary distribution to infer future extremes from past observations.

Trying to work with data series that are too short is simply a fact of life -- everyone in climate would love a 1,000-year detailed data set, but we don't have it.  We use what we have, but it is important to understand the limitations.  There is less excuse for the media, which likes to use single data points, e.g. one storm, to "prove" long-term climate trends.
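To see how easily a short record manufactures a "trend," here is a toy sketch: a pure 170-year cycle (the longest periodicity in the Padova data) with zero underlying trend, sampled through randomly placed 50-year windows:

```python
import numpy as np
rng = np.random.default_rng(1)

years = np.arange(1000)
signal = np.sin(2 * np.pi * years / 170)   # a pure 170-year cycle, zero true trend

# apparent "trends" from randomly placed 50-year observation windows
trends = []
for _ in range(1000):
    start = rng.integers(0, len(years) - 50)
    w = slice(start, start + 50)
    trends.append(np.polyfit(years[w], signal[w], 1)[0])

print(f"true trend: 0 per year; 50-year windows report apparent trends from "
      f"{min(trends):+.4f} to {max(trends):+.4f} per year")
```

Every one of those windowed trends is a perfectly honest regression on perfectly clean data -- and almost every one of them is wrong about the long run.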

A good example of why this is relevant is the global temperature trend.  This chart is a year or so old and has not been updated in that time, but it shows the global temperature trend using the most popular surface temperature data set.  The global warming movement really got fired up around 1998, at the end of the twenty year temperature trend circled in red.

click to enlarge

 

They then took the trends from these 20 years and extrapolated them into the future:

click to enlarge

But what if that 20 years was merely the upward leg of a 40-60 year cyclic variation?  Ignoring the cycle would cause one to overestimate the long-term trend.  This is exactly what climate models do, ignoring important cyclic influences like the AMO and PDO (the Atlantic Multidecadal and Pacific Decadal Oscillations).

In fact, you can get a very good fit to actual temperatures by modeling them as the sum of three functions:  a 63-year sine wave, a long-term linear trend of 0.4C per century (e.g. recovery from the Little Ice Age), and a new trend starting in 1945 of an additional 0.35C, possibly from manmade CO2.

click to enlarge

In this case, a long-term trend still appears to exist, but it is exaggerated if one tries to measure it only during the upward part of the cycle (e.g. from 1978-1998).
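Here is a sketch of that three-function model in Python.  The sine amplitude and phase and the exact size of the post-1945 term are my illustrative guesses, not fitted values, but the punch line survives any reasonable choice: a trend fit to the upward leg alone comes out several times larger than the full-record trend.

```python
import numpy as np

def model_anomaly(year):
    """Toy version of the three-component model described above.
    Amplitude, phase, and the post-1945 term are illustrative guesses."""
    trend = 0.4 * (year - 1900) / 100.0                      # long-term recovery
    cycle = 0.2 * np.sin(2 * np.pi * (year - 1990) / 63.0)   # ~63-year cycle
    extra = np.where(year > 1945, 0.35 * (year - 1945) / 100.0, 0.0)
    return trend + cycle + extra

years = np.arange(1900, 2015)
temps = model_anomaly(years)

leg = (years >= 1978) & (years <= 1998)       # the upward leg only
print(f"1978-1998 trend:   {np.polyfit(years[leg], temps[leg], 1)[0]*100:+.2f} C/century")
print(f"full-record trend: {np.polyfit(years, temps, 1)[0]*100:+.2f} C/century")
```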

 

Listening to California Parks People Discuss Climate Change

Some random highlights:

  • I watched a 20-minute presentation in which a woman from LA parks talked repeatedly about the urban heat island as if it were a result of global warming (it is a local effect of urbanization, not of CO2)
  • I just saw that California State Parks, which is constantly short of money and has perhaps a billion dollars in unfunded maintenance needs, just spent millions of dollars to remove a road from a beachfront park based solely (they claimed) on projections that 55 inches of sea level rise would make the road a problem.  Sea level has been rising 3-4mm a year for over 150 years, and even the IPCC, using older and much higher temperature-increase forecasts, predicted only about a foot of rise
  • One presenter said that a 3-5C temperature rise over the next century represents the low end of reasonable forecasts.  In fact, most studies of late show a climate sensitivity of 1.5-2.0C (I still predict 1C), with warming over the rest of the century of about 1C, or about what we saw last century
  • I watched them brag for half an hour about spending tons of extra money on making LEED-certified buildings.  As written here any number of times, most LEED savings come through BS gaming of the rules, like putting in dedicated electric vehicle parking spots (which do not even need a charger to earn credit).  In a brief moment of honesty, the architect presenting admitted that most of the LEED score for one building came from using used rather than new furniture in the building
  • They admitted that LEED buildings were not any more efficient than most other commercial buildings being built -- the certification was mostly for the plaque, just a matter of whether you wanted to pay for it.  Which I suppose is fine for private businesses looking for PR, but why are cash-strapped public agencies doing it?

Scott Sumner Explains a Lot of Climate Alarmism, Without Discussing Climate

Scott Sumner is actually discussing discrimination, and how discrimination is often "proven" in social science research:

The economy operates in very subtle ways, and often when I read academic studies of issues like discrimination, the techniques seem incredibly naive to me. They might put in all the attributes of male and female labor productivity they can think of, and then simply assume that any unexplained residual must be due to "discrimination." And they do this in cases where there is no obvious reason to assume discrimination. It would be like a scientist assuming that magicians created a white rabbit out of thin air, at the snap of their fingers, because they can't think of any other explanation of how it got into the black hat!

Most alarming climate forecasts are based on the period from 1978 to 1998.  During this 20-year period world temperatures rose about a half degree C.  People may say they are talking about temperature increases since 1950, but most if not all of those increases occurred from 1978-1998.  Temperatures were mostly flat or down both before and since.

A key, if not the key, argument for CO2-driven catastrophic warming that is based on actual historic data (rather than on theory or models) is that temperatures rose in this 20 year period farther and faster than would be possible by any natural causes, and thus must have been driven by man-made CO2.  Essentially what scientists said was, "we have considered every possible natural cause of warming that we can think of, and these are not enough to cause this warming, so the warming must be unnatural."  I was struck just how similar this process was to what Mr. Sumner describes.  Most skeptics, by the way, agree that some of this warming may have been driven by manmade CO2 but at the same time argue that there were many potential natural effects (e.g. ocean cycles) that were not considered in this original analysis.
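Sumner's white-rabbit point is easy to demonstrate with synthetic data.  In the sketch below (all names and numbers invented for illustration), wages are generated with zero discrimination, but one productivity trait is unmeasured and happens to differ between groups -- and out pops a healthy "unexplained residual":

```python
import numpy as np
rng = np.random.default_rng(0)
n = 10_000

observed   = rng.normal(0, 1, n)        # productivity traits we can measure
unobserved = rng.normal(0, 1, n)        # traits we cannot measure
group      = rng.random(n) < 0.5        # two groups, zero discrimination
unobserved = unobserved + 0.3 * group   # the unmeasured trait differs by group

wage = 50 + 5 * observed + 5 * unobserved   # wages contain no group term at all

# "control for everything we can measure," then inspect the residual gap
coef = np.polyfit(observed, wage, 1)
resid = wage - np.polyval(coef, observed)
gap = resid[group].mean() - resid[~group].mean()
print(f"unexplained wage gap: {gap:.2f}  (generated with zero discrimination)")
```

The regression is doing its job perfectly; it is the leap from "unexplained" to "discrimination" that manufactures the rabbit.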

Reconciling Seemingly Contradictory Climate Claims

At Real Science, Steven Goddard claims this is the coolest summer on record in the US.

The NOAA reports that both May and June were the hottest on record.

It used to be that the media would reconcile such claims, and one might learn something interesting from that reconciliation; now all we have are mostly-crappy fact checks with Pinocchio counts.  Both claims have truth on their side, though the NOAA report is the more comprehensively correct one.  Still, we can learn something by putting these analyses in context and reconciling them.

The NOAA temperature data for the globe does indeed show May and June as the hottest on record.  However, one should note a couple of things:

  • The two monthly records do not change the trend over the last 10-15 years, which has basically been flat.  We are hitting records because we are sitting on a plateau that is higher than the rest of the last century (at least in the NOAA data).  It only takes small positive excursions to reach all-time highs.
  • There are a number of different temperature databases that measure temperature in different ways (e.g. satellite vs. ground stations) and then adjust those raw readings using different methodologies.  While the NOAA database is showing all-time highs, other databases, such as satellite-based ones, are not.
  • The NOAA database has been criticized for manual adjustments to past temperatures which increase the warming trend.  Without these adjustments, temperatures during parts of the 1930's (think: Dust Bowl) would be higher than today's.  This was discussed here in more depth.  As is usual when looking at such things, some of these adjustments are absolutely appropriate and some can be questioned.  However, blaming the whole of the warming signal on such adjustments is just wrong -- satellite databases, which have no similar adjustment issues, have shown warming, at least between 1979 and 1999.

The Time article linked above illustrated the story of these record months with a video partly about wildfires.  This is a great example of how temperatures can indeed be rising while media stories about knock-on effects, such as hurricanes and fires, are full of it.  2014 has actually been a low fire year so far in the US.

So the world is undeniably on the warm side of average (I won't say warmer than "normal" because what is "normal"?).  How, then, does Goddard get this as the coolest summer on record for the US?

Well, the first answer, and it is an important one to remember, is that US temperatures do not have to follow global temperatures, at least not tightly.  While the world warmed 0.5-0.7C from 1979-1999, US temperatures moved much less.  At other times, the US has warmed or cooled more than the world has.  The US is well under 5% of the world's surface area, so it is certainly possible to have isolated effects in such a limited area.  Remember that the same holds true the other way -- heat waves in one part of the world don't necessarily mean the world is warming.

But we can also learn something that is seldom discussed in the media by looking at Goddard's chart:

click to enlarge

First, I will say that I am skeptical of any chart that uses "all USHCN" stations, because the number of stations and their locations change so much over time.  At some level this is an apples-to-oranges comparison -- I would be much more comfortable with a chart that looks only at USHCN stations with, say, at least 80 years of continuous data.  In other words, this chart may be an artifact of the mess that is the USHCN database.

However, it is possible that this result would hold even with a better data set, and even against a backdrop of warming temperatures.  Why?  Because this is a metric of high temperatures only.  It counts the number of times a station reads a high temperature over 90F.  At some level this is a clever chart, because it takes advantage of a misconception most people, including most people in the media, have -- that global warming plays out in higher daytime high temperatures.

But in fact this does not appear to be the case.  Most of the warming we have seen over the last 50 years has manifested itself as higher nighttime lows and higher winter temperatures.  Both of these raise the average, but neither will change Goddard's metric of days above 90F.  So it is perfectly possible Goddard's chart is right even if the US is seeing a warming trend over the same period.  This is why we have not seen any more local all-time daily high temperature records set recently than in past decades, but we have seen a lot of new records for highest low temperature, if that term makes sense.  It also explains why the ratio of daily high records to daily low records has risen -- not necessarily because there are a lot of new high records, but because we are setting fewer low records.  We can argue about daytime temperatures, but nighttime temperatures are certainly warmer.
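A toy example makes the arithmetic concrete.  In the sketch below (all numbers invented for illustration), nearly all the warming goes into nighttime lows: the average rises, while the days-over-90F metric barely moves:

```python
import numpy as np
rng = np.random.default_rng(0)

def summers(low_shift=0.0, high_shift=0.0, n=92 * 30):   # 30 summers of days
    """Synthetic daily highs and lows; every figure here is invented."""
    highs = rng.normal(86, 5, n) + high_shift
    lows = rng.normal(62, 5, n) + low_shift
    return highs, lows

h_then, l_then = summers()
h_now, l_now = summers(low_shift=1.5, high_shift=0.1)  # warming mostly at night

for label, h, l in (("then", h_then, l_then), ("now ", h_now, l_now)):
    print(f"{label}: mean temp {((h + l) / 2).mean():.2f}F, "
          f"days over 90F: {100 * (h > 90).mean():.1f}%")
```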

This chart shows an example with low and high temperatures over time at Amherst, MA  (chosen at random because I was speaking there).  Note that recently, most warming has been at night, rather than in daily highs.

On The Steven Goddard Claim of "Fabricated" Temperature Data

Steven Goddard of the Real Science blog has a study claiming that real US temperature data is being replaced by fabricated data.  Christopher Booker has a sympathetic overview of the claims.

I believe there is both wheat and chaff in this claim, and I would like to try to separate the two as best I can.  I don't have time to write a well-organized article, so here is just a list of thoughts:

  1. At some level it is surprising that this is suddenly news.  Skeptics have criticized the adjustments in the surface temperature database for years.
  2. There is certainly a signal-to-noise issue here that mainstream climate scientists have always seemed insufficiently concerned about.  Specifically, the raw data for US temperatures is mostly flat, such that the manual adjustments to the temperature data set are about equal in magnitude to the total warming signal.  When the entire signal one is trying to measure is equal to the manual adjustments one is making to the measurements, it probably makes sense to put a LOT of scrutiny on those adjustments.  (This is a post from 7 years ago discussing these adjustments.  Note that the adjustments shown there are smaller than the current ones in the database, as they have since been increased, though I can no longer find a similar chart from the NOAA discussing the adjustments.)
  3. The NOAA HAS made adjustments to US temperature data over the last few years that have increased the apparent warming trend.  These changes in adjustments have not been well explained.  In fact, they have not really been explained at all, and have only been detected by skeptics who happened to archive old NOAA charts and created comparisons like the one below.  Here is the before-and-after animation (pre-2000 NOAA US temperature history vs. post-2000).  History has been cooled and modern temperatures have been warmed from where the NOAA showed them previously.  This does not mean the current version is wrong, but since the entire US warming signal was effectively created by these changes, it is not unreasonable to ask for a detailed reconciliation (particularly when the folks preparing the chart all believe that temperatures are going up, and so would be predisposed to treating a flat temperature chart like the earlier version as wrong and in need of correction).
    click to enlarge
  4. However, manual adjustments are not, as some skeptics seem to argue, wrong or biased in all cases.  There are real reasons for manual adjustments to data -- for example, if GPS signal data was not adjusted for relativistic effects, the position data would quickly get out of whack.  In the case of temperature data:
    • Data is adjusted for shifts in the start/end time for a day of measurement away from local midnight (i.e. if you average 24 hours starting and stopping at noon).  This is called the Time of Observation adjustment, or TOBS.  When I first encountered this, I was just sure it had to be BS.  For a month of data, you are only shifting the data set by 12 hours, or about 1/60 of the month.  Fortunately for my self-respect, before I embarrassed myself I created a spreadsheet to Monte Carlo some temperature data and play around with this issue (a sketch of a similar simulation appears just after this list).  I convinced myself the Time of Observation adjustment is valid in theory, though I have no way to validate its magnitude (one of the problems with all of these adjustments is that NOAA and other data authorities do not release the source code or raw data to show how they arrive at them).  I do think it is valid in science to question a finding, even without proof that it is wrong, when the authors of the finding refuse to share replication data.  Steven Goddard, by the way, believes time of observation adjustments are exaggerated and do not follow NOAA's own specification.
    • Stations move over time.  A simple example: if a station is on the roof of a building and that building is demolished, the station has to move somewhere else.  In an extreme case the station might move to a new altitude or a slightly different micro-climate.  There are adjustments in the database for these sorts of changes.  Skeptics have occasionally challenged these, but I have no reason to believe that the authors are not using best efforts to correct for these effects (though again, the authors of these adjustments bring criticism on themselves by not sharing replication data).
    • The technology the station uses for measurement changes (e.g. thermometers to electronic devices, one type of electronic device to another, etc.).  These measurement technologies sometimes have known biases.  Correcting for such biases is perfectly reasonable (though a frustrated skeptic could argue that the government is diligent in correcting for new cooling biases but seldom corrects for warming biases, such as in the switch from bucket to water-intake measurement of sea surface temperatures).
    • Even if the temperature station does not move, the location can degrade.  The clearest example is a measurement point that once was in the country but has been engulfed by development (here is one example -- this was at one time the USHCN measurement point with the most warming since 1900, but it was located in an open field in 1900 and ended up in an asphalt parking lot in the middle of Tucson).  Since urban heat islands can add as much as 10 degrees F to nighttime temperatures, this can create a warming signal over time that is related to a particular location, and not to the climate as a whole.  The effect itself is undeniable -- my son easily measured it in a science fair project.  The effect it has on temperature measurement is hotly debated between warmists and skeptics.  Al Gore originally argued that there was no bias because all measurement points were in parks, which led Anthony Watts to pursue the surface station project, in which every USHCN station was photographed and documented.  The net result was that most of the sites were pretty poor.  Whatever the case, there is almost no correction in the official measurement numbers for urban heat island effects, and in fact the last time I looked the adjustment went the other way, implying urban heat islands have become less of an issue since 1930.  The folks who put together the indexes argue that they have smoothing algorithms that find and remove these biases.  Skeptics argue that they just smear the bias around over multiple stations.  The debate continues.
  5. Overall, many mainstream skeptics believe that actual surface warming in the US and the world has been about half what is shown in the traditional indices, an amount that is then exaggerated by poorly crafted adjustments and uncorrected heat island effects.  But note that almost no skeptic I know believes the Earth has not actually warmed over the last 100 years.  Further, warming since about 1980 is hard to deny because we have a second, independent way to measure global temperatures: satellites.  These devices may have their own issues, but they are not subject to urban heat or location biases, and they actually measure most of the Earth's surface, rather than individual points that are sometimes scores or hundreds of miles apart.  This independent method of measurement has shown undoubted warming since 1979, though not since the late 1990's.
  6. As is usual in such debates, I find words like "fabrication", "lies",  and "myth" to be less than helpful.  People can be totally wrong, and refuse to confront their biases, without being evil or nefarious.
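As promised above, here is a sketch of the sort of Monte Carlo that convinced me the time of observation bias is real.  This is my toy reconstruction, not NOAA's method (which, again, they do not publish); the magnitudes mean nothing, only the sign of the bias:

```python
import numpy as np
rng = np.random.default_rng(0)

def monthly_mean(obs_hour, n_days=31, trials=500):
    """Mean of daily (max+min)/2 when a max/min thermometer is read and
    reset once a day at obs_hour.  Toy weather: a 10C diurnal cycle plus
    random day-to-day shifts."""
    hours = np.arange(24)
    diurnal = 10 * np.sin(2 * np.pi * (hours - 9) / 24)   # peak mid-afternoon
    out = []
    for _ in range(trials):
        offsets = rng.normal(0, 4, n_days + 2)            # day-to-day weather
        temps = np.concatenate([off + diurnal for off in offsets])
        # the reading at reset time belongs to both adjacent "days" --
        # exactly what lets a hot afternoon get double-counted
        days = [temps[obs_hour + 24 * d : obs_hour + 24 * (d + 1) + 1]
                for d in range(n_days)]
        out.append(np.mean([(d.max() + d.min()) / 2 for d in days]))
    return np.mean(out)

ref = monthly_mean(0)   # midnight observer as the reference
for h in (7, 17):
    print(f"observer resets at {h:02d}:00 -> bias {monthly_mean(h) - ref:+.2f} C")
```

Run it and the afternoon observer reads systematically warm relative to the morning observer, with no change at all in the underlying weather -- which is the whole point of the TOBS adjustment.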

Postscript:  Not exactly on topic, but one thing that is never, ever mentioned in the press yet is generally true about temperature trends -- almost all of the warming we have seen is in nighttime temperatures rather than daytime.  Here is an example from Amherst, MA (because I just presented up there).  This is one reason why, despite claims in the media, we are not hitting any more all-time daytime highs than we would expect from a normal distribution.  If you look at temperature stations for which we have 80+ years of data, fewer than 10% of the 100-year highs were set in the last 10 years.  We are, however, setting an unusual number of records for highest low temperature, if that makes sense.

click to enlarge

 

Great Moments in "Science"

You know that relative of yours who last Thanksgiving called you anti-science because you had not fully bought into global warming alarm?

Well, it appears that the reason we keep getting called "anti-science" is because climate scientists have a really funny idea of what exactly "science" is.

Apparently, a number of folks have been trying for years to get articles published in peer-reviewed journals comparing IPCC temperature models to actual measurements, and in the process highlighting the divergence of the two.  And they keep getting rejected.

Now, the publisher of Environmental Research Letters has explained why.  Apparently, in climate science it is "an error" to attempt to compare computer temperature forecasts with the temperatures that actually occurred.  In fact, he says that trying to do so "is harmful as it opens the door for oversimplified claims of 'errors' and worse from the climate sceptics media side".  Apparently, the purpose of scientific inquiry is to win media wars, and not necessarily to discover truth.

Here is something everyone in climate should remember:  the output of models merely represents a hypothesis.  When we have complicated hypotheses about complicated systems, where such hypotheses may encompass many interrelated assumptions, computer models are an important tool for playing out, computationally, what results those hypotheses would translate to in the physical world.  It is no different than if Newton had had a computer and used his gravitation equation F = GMm/R^2 to project future orbits for the Earth and other planets (which he and others did, but by hand).  But these projections would have no value until they were checked against actual observations.  That is how we knew we liked Newton's model better than Ptolemy's -- because it checked out better against actual measurements.
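To make that concrete, here is what "projecting orbits" from the hypothesis looks like with a computer -- a minimal sketch, not Newton's actual procedure.  The point is that the final printed number is checkable against the sky:

```python
import math

# Project Earth's orbit numerically from F = GMm/R^2 using a simple
# semi-implicit Euler step; constants are standard published values.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

x, y = AU, 0.0          # Earth starts 1 AU from the Sun
vx, vy = 0.0, 29_780.0  # mean orbital speed, m/s
dt = 3600.0             # one-hour steps

for _ in range(int(365.25 * 24)):        # integrate one year
    r3 = math.hypot(x, y) ** 3
    vx += -G * M_SUN * x / r3 * dt       # update velocity first,
    vy += -G * M_SUN * y / r3 * dt
    x += vx * dt                         # then position (keeps the
    y += vy * dt                         # orbit from spiraling)

print(f"after one simulated year the Earth is {math.hypot(x - AU, y) / AU:.3f} AU "
      f"from where it started")
```

The orbit very nearly closes after one year -- a prediction that stands or falls on observation, which is the whole point.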

But climate scientists are trying to create some kind of weird world where model results have a sort of independent reality -- where, in fact, the model results should be trusted over measurements when the two diverge.  That is not science.  And if it were, then I would be anti-science.

California Food Sales Tax Rules Are Madness

We have invested a fair amount of time trying to get the sales tax treatment of food items right in our California stores.  But the rules are insane.  Beyond all the crazy rules (e.g. if a customer buys a refrigerated burrito it may be non-taxable, but if he puts it in the microwave in the store to heat it up it becomes taxable for sure) is the fact that sometimes customer intent matters (e.g. will they consume it at one of the picnic tables on site, or take it back to their home or camp site?).

While searching for more resources on the topic, I found this flow chart for deciding whether CA sales tax applies to food:

click to enlarge

Here is more, from the same article:

Under California law, food eaten on the premises of an eatery is taxed while the same item taken to go is not: "Sales of food for human consumption are generally exempt from tax unless sold in a heated condition (except hot bakery items or hot beverages, such as coffee, sold for a separate price), served as meals, consumed at or on the seller's facilities, ordinarily sold for consumption on or near the seller's parking facility, or sold for consumption where there is an admission charge." Exactly which types of food do and do not fall under the scope of this provision is the frustrating devil in the details.

Eskenazi notes a few of the ridiculous results of drawing an artificial distinction between hot and cold foods. "A hot sandwich to go would be taxable," for example, "while a prepackaged, cold one would not -- but a cold sandwich becomes taxable if it has hot gravy poured onto it. Cold foods to go are generally not taxable -- but hot foods that have cooled are taxable (meaning a cold sandwich slathered in 'hot' gravy that has cooled to room temperature is taxable)."
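For fun, here is the quoted rule reduced to a toy decision procedure.  It is a deliberate oversimplification (and certainly not tax advice), with invented parameter names:

```python
# A toy reading of the rule quoted above; parameter names are made up,
# not official categories.
def ca_food_taxable(sold_hot: bool, hot_bakery_or_beverage: bool = False,
                    served_as_meal: bool = False,
                    consumed_on_premises: bool = False,
                    admission_charged: bool = False) -> bool:
    if sold_hot and not hot_bakery_or_beverage:
        return True          # "sold in a heated condition"
    return served_as_meal or consumed_on_premises or admission_charged

# The burrito from above: cold and to-go is exempt, microwaved is taxable.
print(ca_food_taxable(sold_hot=False))  # False -- refrigerated, taken to go
print(ca_food_taxable(sold_hot=True))   # True  -- heated in the store
```

Of course, the real madness is that several of those boolean inputs depend on what the customer intends to do after the sale, which no cashier can actually know.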

 

Climate Alarmists Coming Around to At Least One Skeptic Position

As early as 2009 (and many more prominent skeptics were discussing it much earlier), I reported on why ocean heat content is potentially a much better measure of greenhouse gas effects on the Earth than surface air temperatures.  Roger Pielke, in particular, has been arguing this for as long as I can remember.

The simplest explanation for why this is true is that greenhouse gasses increase the energy retained at the surface of the Earth, and that extra energy is what we would really like to measure.  But the vast, vast majority of the heat retention capacity of the Earth's surface is in the oceans, not in the air.  Air temperatures may be more immediately sensitive to changes in heat flux, but they are also sensitive to a lot of other noise that tends to mask long-term signals.  The best analogy I can think of: imagine you have two assets, a checking account and an investment portfolio.  Looking at surface air temperatures to measure long-term changes in surface heat content is a bit like trying to infer long-term changes in your net worth by looking only at your checking account, whose balance is very volatile, rather than at the changing size of your investment portfolio.
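The disparity is easy to put numbers on.  Using standard round figures for the masses and specific heats involved (back-of-envelope values, not precise ones), the oceans hold roughly a thousand times more heat per degree than the atmosphere:

```python
# Back-of-envelope comparison using standard textbook round numbers.
m_atm = 5.1e18      # kg, mass of the atmosphere
cp_atm = 1.0e3      # J/(kg K), specific heat of air (approx.)
m_ocean = 1.4e21    # kg, mass of the oceans
cp_ocean = 4.0e3    # J/(kg K), specific heat of seawater (approx.)

ratio = (m_ocean * cp_ocean) / (m_atm * cp_atm)
print(f"ocean heat capacity is ~{ratio:.0f}x the atmosphere's")  # ~1000x
```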

Apparently, the alarmists are coming around to this point:

Has global warming come to a halt? For the last decade or so the average global surface temperature has been stabilising at around 0.5°C above the long-term average. Can we all relax and assume global warming isn't going to be so bad after all?

Unfortunately not. Instead we appear to be measuring the wrong thing. Doug McNeall and Matthew Palmer, both from the Met Office Hadley Centre in Exeter, have analysed climate simulations and shown that both ocean heat content and net radiation (at the top of the atmosphere) continue to rise, while surface temperature goes in fits and starts. "In my view net radiation is the most fundamental measure of global warming since it directly represents the accumulation of excess solar energy in the Earth system," says Palmer, whose findings are published in the journal Environmental Research Letters.

First, of course, we welcome past ocean heat content deniers to the club.  But second, those betting on ocean heat content to save their bacon and keep alarmism alive should consider why skeptics latched onto the metric with such passion.  Ocean heat content may indeed be rising even as surface air temperatures stall, but it has been rising MUCH less than high-sensitivity climate models would predict.

Just When You Thought You Would Never See Any Of That Stuff From Science Fiction Novels...

Via the New Scientist

NEITHER dead nor alive, knife-wound or gunshot victims will be cooled down and placed in suspended animation later this month, as a groundbreaking emergency technique is tested out for the first time....

The technique involves replacing all of a patient's blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. "If a patient comes to us two hours after dying you can't bring them back to life. But if they're dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed," says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique.

The benefits of cooling, or induced hypothermia, have been known for decades. At normal body temperature – around 37 °C – cells need a regular oxygen supply to produce energy. When the heart stops beating, blood no longer carries oxygen to cells. Without oxygen the brain can only survive for about 5 minutes before the damage is irreversible.

However, at lower temperatures, cells need less oxygen because all chemical reactions slow down. This explains why people who fall into icy lakes can sometimes be revived more than half an hour after they have stopped breathing.

via Alex Tabarrok
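The "reactions slow down" point can be made roughly quantitative with the Q10 rule of thumb -- biological reaction rates fall by a factor of about 2-3 for every 10 C drop in temperature.  A sketch, with illustrative numbers only:

```python
# Q10 rule of thumb: metabolic rate scales by q10 per 10 C change.
# The q10 value and the 5-minute brain window are rough figures.
def rate_factor(temp_c, q10=2.5, normal_c=37.0):
    return q10 ** ((temp_c - normal_c) / 10.0)

for t in (37, 30, 20, 10):
    f = rate_factor(t)
    print(f"{t:2d} C: oxygen demand ~{f * 100:4.0f}% of normal, "
          f"~5 min brain window stretches to ~{5 / f:.0f} min")
```

At 10 C the toy numbers stretch the window toward an hour, which is at least consistent with the icy-lake revivals the article describes.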