Posts tagged ‘warming’

Climate Theory vs. Climate Data

This is a pretty amazing statement by Justin Gillis in the New York Times.

This month, the world will get a new report from a United Nations panel about the science of climate change. Scientists will soon meet in Stockholm to put the finishing touches on the document, and behind the scenes, two big fights are brewing....

In the second case, we have mainstream science that says if the amount of carbon dioxide in the atmosphere doubles, which is well on its way to happening, the long-term rise in the temperature of the earth will be at least 3.6 degrees Fahrenheit, but more likely above 5 degrees. We have outlier science that says the rise could come in well below 3 degrees.

In this case, the drafters of the report lowered the bottom end in a range of temperatures for how much the earth could warm, treating the outlier science as credible.

The interesting part is that "mainstream science" is based mainly on theory and climate models that over the last 20 years have not made accurate predictions (overestimating warming significantly).  "Outlier science" is in a lot of cases based on actual observations of temperatures along with other variables like infrared radiation returning to space.  The author, through his nomenclature, is essentially disparaging observational data that is petulantly refusing to match up to model predictions.  But of course skeptics are anti-science.

We Are 95% Confident in a Meaningless Statement

Apparently the IPCC is set to write:

Drafts seen by Reuters of the study by the U.N. panel of experts, due to be published next month, say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s.

That is up from at least 90 percent in the last report in 2007, 66 percent in 2001, and just over 50 in 1995, steadily squeezing out the arguments by a small minority of scientists that natural variations in the climate might be to blame.

I have three quick reactions to this:

  • The IPCC has always adopted words like "main cause" or "substantial cause."  They have not even had enough certainty to use the word "majority cause" -- they want to keep it looser than that.  If man causes 30% and every other cause is at 10% or less, is man the main cause?  No one knows.  So that is how we get to the absurd situation where folks are trumpeting being 95% confident in a statement that is purposely vaguely worded -- so vague that the vast majority of people who sign it would likely disagree with one another on exactly what they have agreed to.
  • The entirety of the post-1950 temperature rise occurred between 1978 and 1998 (see below a chart based on the Hadley CRUT4 database, the same one used by the IPCC).

[Chart: Hadley CRUT4 global temperature history]

Note that temperatures fell from 1945 to about 1975, and have been flat from about 1998 to 2013.  This is not some hidden fact - it was the very fact that the warming slope was so steep in the short period from 1978-1998 that contributed to the alarm.  The current 15 years with no warming was not predicted and remains unexplained (at least in the context of the assumption of high temperature sensitivities to CO2).  The IPCC is in a quandary here, because they can't just say that natural variation counteracted warming for 15 years, because this would imply a magnitude to natural variability that might have explained the 20 year rise from 1978-1998 as easily as it might explain the warming hiatus over the last 15 years (or in the 30 years preceding 1978).

  • This lead statement by the IPCC continues to be one of the great bait and switches of all time.  Most leading skeptics (excluding those of the talk show host or politician variety) accept that CO2 is a greenhouse gas and is contributing to some warming of the Earth.  This statement by the IPCC says nothing about the real issue, which is what is the future sensitivity of the Earth's temperatures to rising CO2 -- is it high, driven by large positive feedbacks, or more modest, driven by zero to negative feedbacks?  Skeptics don't disagree that man has caused some warming, but believe that future warming forecasts are overstated and that the negative effects of warming (e.g. tornadoes, fires, hurricanes) are grossly exaggerated.

It's OK not to know something -- in fact, that is an important part of scientific detachment, to admit what one does not know.  But what the hell does being 95% confident in a vague statement mean?  Choose which of these is science:

  • Masses are attracted to each other in proportion to the product of their masses and inversely proportional to the square of their distance of separation.
  • We are 95% certain that gravity is the main cause of my papers remaining on my desk.

Earth to California

From our paper this morning:

California regulators have launched an investigation into offshore hydraulic fracturing after revelations that the practice had quietly occurred off the coast for the past two decades.

The California Coastal Commission promised to look into the extent of so-called fracking in federal and state waters and any potential risks.

Hydraulic fracturing has been a standard tool for reinvigorating oil and gas wells for over 60 years.  While it gets headlines as something new, it decidedly is not.  What is new is its use in combination with horizontal drilling as a part of the initial well design, rather than as a rework tool for an aging field.

What California regulators are really saying is that they have known about and been comfortable with this process for decades**, but what has changed is not the technology but public opinion.  A small group of environmentalists have tried, without much scientific basis, to demonize this procedure not because they oppose it per se but because they are opposed to an expansion of hydrocarbon availability, which they variously blame either for CO2-driven global warming or, more generally, for the over-industrialization of the world.

So given this new body of public opinion, rather than saying that "sure, fracking has existed for decades and we have always been comfortable with it", the regulators instead act astonished and surprised -- "we are shocked, shocked that fracking is going on in this establishment" -- and run around in circles demonstrating their care and concern.  Next step is their inevitable trip to the capital to tell legislators that they desperately need more money and people to deal with their new responsibility to carefully scrutinize this decades-old process.

 

**Postscript:  If regulators are not familiar with basic oil-field processes, then one has to wonder what the hell they are doing with their time.  It's not like anyone in the oil business had any reason to hide fracking activity -- only a handful of people in the country would have known what it was or cared until about 5 years ago.

This Is How We Get In Pointless Climate Flame Wars

The other day I posted a graph from Roy Spencer comparing climate model predictions to actual measurements in the tropical mid-troposphere (the zone on Earth where climate models predict the most warming due to large assumed water vapor positive feedbacks).  The graph is a powerful indictment of the accuracy of climate models.

Spencer has an article (or perhaps a blog post) in the Financial Post with the same results, and includes a graph that does a pretty good job of simplifying the messy spaghetti graph in the original version.  Except for one problem.  Nowhere is it correctly labelled.  One would assume looking at it that it is a graph of global surface temperatures, which is what most folks are used to seeing in global warming articles. But in fact it is a graph of temperatures in the mid-troposphere, between 20 degrees North and 20 degrees South latitude.  He mentions that it is for tropical troposphere in the text of the article, but it is not labelled as such on the graph.  There is a very good reason for that narrow focus, but now the graph will end up on Google image search, and people will start crying "bullsh*t" because they will compare the numbers to global surface temperature data and it won't match.

I respect Spencer's work but he did not do a good job with this.

Climate Model Fail

Dr. Roy Spencer has compared the output of 73 climate models to actual recent temperature measurements.  He has focused on temperatures in the mid-troposphere in the tropics -- this is not the same as global surface temperatures but is of course related.  The reason for this focus is 1) we have some good space-based data sources for temperatures in this region that don't suffer the same biases and limitations as surface thermometers and 2) this is the zone that catastrophic anthropogenic global warming theory says should be seeing the most warming, due to positive feedback effects of water vapor.  The lines are the model results for temperatures, the dots are the actuals.

[Chart: 73 climate model projections vs. observed temperatures, tropical mid-troposphere]

As Spencer writes in an earlier post:

I continue to suspect that the main source of disagreement is that the models’ positive feedbacks are too strong…and possibly of even the wrong sign.

The lack of a tropical upper tropospheric hotspot in the observations is the main reason for the disconnect in the above plots, and as I have been pointing out this is probably rooted in differences in water vapor feedback. The models exhibit strongly positive water vapor feedback, which ends up causing a strong upper tropospheric warming response (the “hot spot”), while the observation’s lack of a hot spot would be consistent with little water vapor feedback.

The warming from manmade CO2 without positive feedbacks would be about 1.3C per doubling of CO2 concentrations, a fraction of the 3-10C predicted by these climate models.  If the climate, like most other long-term stable natural systems, is dominated by negative feedbacks, the sensitivity would be likely less than 1C.  Either way, the resulting predicted warming from manmade CO2 over the rest of this century would likely be less than 1 degree C.
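The feedback arithmetic in that paragraph can be sketched with the standard linear-feedback relation S = S0 / (1 - f); the specific f values below are illustrative assumptions chosen to reproduce the numbers in the text, not figures taken from any particular model:

```python
# Sketch of the feedback arithmetic above, using the standard linear
# relation S = S0 / (1 - f): S0 is the no-feedback sensitivity and f is
# the net feedback fraction. The f values are illustrative only.

def sensitivity(s0, f):
    """Equilibrium sensitivity per doubling of CO2, given the
    no-feedback sensitivity s0 and net feedback fraction f (f < 1)."""
    return s0 / (1.0 - f)

s0 = 1.3  # C per doubling without feedbacks, the figure cited above

print(sensitivity(s0, 0.6))    # strong positive feedback  -> ~3.25C
print(sensitivity(s0, 0.87))   # extreme positive feedback -> ~10C
print(sensitivity(s0, -0.3))   # modest negative feedback  -> ~1.0C
```

With any negative f the result falls below 1.3°C, which is the sense in which negative-feedback dominance implies less than a degree of further warming this century.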

More on declining estimates of climate sensitivity based on actual temperature observations rather than computer models here.

Update on Climate Temperature Sensitivity (Good News, the Numbers are Falling)

I have not had the time to write much about climate of late, but after several years of arguing over emails (an activity with which I quickly grew bored), the field is heating up again, as it were.

As I have said many times, the key missing science in the whole climate debate centers around climate sensitivity, or the expected temperature increase from a doubling of CO2 concentrations in the atmosphere (for reference, CO2 in the industrial age has increased from about 270 ppm to close to 400 ppm, or about half a doubling).

In my many speeches and this video (soon to be updated, if I can just find the time to finish it), I have argued that climate computer models have exaggerated climate sensitivity.  This Wikipedia page is a pretty good rehash of the alarmist position on climate sensitivity.  According to this standard alarmist position, here is the distribution of studies which represent the potential values for sensitivity - note that virtually none are below 2°C.

[Chart: frequency distribution of climate sensitivity, based on model simulations (NASA)]

The problem is that these are all made with computer models.  They are not based on observational data.  Yes, all these models nominally backcast history reasonably correctly (look at that chart above and think about that statement for a minute, see if you can spot the problem).  But many an investor has been bankrupted by models that correctly backcast history.  The guys putting together tranches of mortgages for securities all had models.   What has been missing is any validation of these numbers with actual, you know, observations of nature.

Way back 6 or 7 years ago I began taking these numbers and projecting them backwards.  In other words, if climate sensitivity is really, say, at 4°C, then what should that imply about historical temperature increases since the pre-industrial age?  Let's do a back of the envelope with the 4°C example.  We are at just about half of a doubling of CO2 concentrations, but since sensitivity is a logarithmic curve, this implies we should have seen about 57% of the temperature increase that we would expect from a full doubling of CO2.  Applied to the 4°C sensitivity figure, this means that if sensitivity really is 4°C, we should have seen a 2.3°C global temperature increase over the last 150 years or so.  Which we certainly have not -- instead we have seen 0.8°C from all causes, only one of which is CO2.
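That back-of-the-envelope can be written out in a few lines of Python, using the 270 ppm and 400 ppm figures quoted earlier:

```python
import math

# Back-of-the-envelope from the text: the temperature response to CO2
# is logarithmic, so at roughly half a doubling we should already have
# seen ln(C/C0)/ln(2) of the full per-doubling response.

c0, c = 270.0, 400.0  # ppm: pre-industrial vs. current, per the text
fraction = math.log(c / c0) / math.log(2)  # ~0.57 of a doubling

for sensitivity in (4.0, 2.0):  # assumed C per doubling
    expected = sensitivity * fraction
    print(f"{sensitivity}C sensitivity -> {expected:.1f}C expected to date")

# Compare with the ~0.8C actually observed, from all causes combined.
```

The 4°C case reproduces the 2.3°C figure above; even a 2°C sensitivity implies more warming to date (about 1.1°C) than has actually been observed.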

So these high sensitivity models are over-predicting history.  Even a 2°C sensitivity over-predicts the amount of warming we have seen historically.  So how do they make the numbers fit?  The models are tuned and tweaked with a number of assumptions.  Time delays are one -- the oceans act as a huge flywheel on world temperatures and tend to add large lags to getting to the ultimate sensitivity figure.  But even this was not enough for high sensitivity models to back-cast accurately.  To make their models accurately predict history, their authors have had to ignore every other source of warming (which is why they have been so vociferous in downplaying the sun and ocean cycles, at least until they needed these to explain the lack of warming over the last decade).  Further, they have added man-made cooling factors, particularly from sulfate aerosols, that offset some of the man-made warming with man-made cooling.

Which brings us back to the problem I hinted at with the chart above and its distribution of sensitivities.  Did you spot the problem?  All these models claim to accurately back-cast history, but how can a model with a 2°C sensitivity and an 11°C sensitivity both accurately model the last 100 years?  One way they do it is by using a plug variable, and many models use aerosol cooling as the plug.  Why?  Well, unlike natural cooling factors, it is anthropogenic, so they can still claim catastrophe once we clean up the aerosols.  Also, for years the values of aerosol cooling were really uncertain, so ironically the lack of good science on them allowed scientists to assume a wide range of values.  Below is from a selection of climate models, and shows that the higher the climate sensitivity in the model, the higher the negative forcing (cooling) effect assumed from aerosols.  This has to be, or the models would not back-cast.
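The plug logic can be made concrete with the same logarithmic rule: for each assumed sensitivity, compute the warming it predicts to date, then the offsetting cooling that must be assumed to reconcile it with the ~0.8°C observed. This is a sketch of the reasoning above, not any model's actual aerosol tuning:

```python
import math

# For each assumed sensitivity, the logarithmic rule predicts some
# warming to date; the gap between that and the ~0.8C observed is the
# offsetting (e.g. aerosol) cooling a back-cast must assume.
# Illustrative sketch only, not any model's actual tuning.

observed = 0.8  # C, approximate observed warming, all causes
fraction = math.log(400.0 / 270.0) / math.log(2)  # ~0.57 of a doubling

for s in (2.0, 4.0, 6.0):  # assumed sensitivities, C per doubling
    predicted = s * fraction
    offset = predicted - observed
    print(f"sensitivity {s}C -> predicted {predicted:.1f}C, "
          f"assumed cooling offset {offset:.1f}C")
```

The required offset grows with the assumed sensitivity, which is exactly the correlation the chart shows.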

The reason that these models had such high sensitivities is that they assumed the climate was dominated by net positive feedback, meaning there were processes in the climate system that would take small amounts of initial warming from CO2 and multiply them many times.  The generally accepted value for sensitivity without these feedbacks is 1.2°C or 1.3°C (via work by Michael Mann over a decade ago).  So all the rest of the warming, in fact the entire catastrophe that is predicted, comes not from CO2 but from this positive feedback that multiplies this modest 1.2°C many times.

I have argued, as have many other skeptics, that this assumption of net positive feedback is not based on good science, and in fact most long-term stable natural systems are dominated by negative feedback (note that you can certainly identify individual processes, like ice albedo, that are certainly a positive feedback, but we are talking about the net effect of all such processes combined).  Based on a skepticism about strong positive feedback, and the magnitude of past warming in relation to CO2 increases, I have always argued that the climate sensitivity is perhaps 1.2°C and maybe less, and that we should not expect more than a degree of warming from CO2 in the next century, hardly catastrophic.

One of the interesting things you might notice from the Wikipedia page is that they do not reference any sensitivity study more recent than 2007 (except for a literature review in 2008).  One reason might be that over the last 5 years there have been a series of studies that have begun to lower the expected value of the sensitivity number.   What many of these studies have in common is that they are based on actual observational data over the last 100 years, rather than computer models  (by the way, for those of you who like to fool with Wikipedia, don't bother on climate pages -- the editors of these pages will revert any change attempting to bring balance to their articles in a matter of minutes).  These studies include a wide range of natural effects, such as ocean cycles, left out of the earlier models.  And, as real numbers have been put on aerosol concentrations and their effects, much lower values have been assigned to aerosol cooling, thus reducing the amount of warming that could be coming from CO2.

Recent studies based on observational approaches are coming up with much lower numbers.  ECS, or equilibrium climate sensitivity (what we would expect in temperature increases if we waited hundreds or thousands of years for all time delays to be overcome), has been coming in between 1.6°C and 2.0°C.  Values for TCS, or transient climate sensitivity, or what we might expect to see in our lifetimes, have been coming in around 1.3°C per doubling of CO2 concentrations.

Matt Ridley has the layman's explanation:

Yesterday saw the publication of a paper in a prestigious journal, Nature Geoscience, from a high-profile international team led by Oxford scientists. The contributors include 14 lead authors of the forthcoming Intergovernmental Panel on Climate Change scientific report; two are lead authors of the crucial chapter 10: professors Myles Allen and Gabriele Hegerl.

So this study is about as authoritative as you can get. It uses the most robust method, of analysing the Earth’s heat budget over the past hundred years or so, to estimate a “transient climate response” — the amount of warming that, with rising emissions, the world is likely to experience by the time carbon dioxide levels have doubled since pre-industrial times.

The most likely estimate is 1.3C. Even if we reach doubled carbon dioxide in just 50 years, we can expect the world to be about two-thirds of a degree warmer than it is now, maybe a bit more if other greenhouse gases increase too….
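Ridley's "two-thirds of a degree" is consistent with the same logarithmic back-of-envelope used earlier in this post (the 270 and 400 ppm concentrations are assumptions carried over from that discussion):

```python
import math

# Rough check of the "two-thirds of a degree" figure: with a transient
# response of 1.3C per doubling and ~57% of a doubling already behind
# us, the remaining warming to a full doubling is the ~43% still to come.
# The 270 and 400 ppm values are carried over from the discussion above.

tcr = 1.3  # C per doubling, the study's most likely transient estimate
done = math.log(400.0 / 270.0) / math.log(2)  # fraction of doubling so far
remaining = tcr * (1.0 - done)
print(f"~{remaining:.2f}C of further warming to a full doubling")
```

That lands at roughly 0.56°C, in line with "about two-thirds of a degree ... maybe a bit more."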

Judith Curry discusses these new findings

Discussion of Otto, one of the recent studies

Nic Lewis discusses several of these results

This is still tough work, likely with a lot of room for improvement, because it is really hard to dis-aggregate multiple drivers in such a complex system.  There may, for example, be causative variables we don't even know about, which by definition were not included in the study.  However, it is nice to see that folks are out there trying to solve the problem with real observations of Nature, and not via computer auto-eroticism.

Postscript:  Alarmists have certainly not quit the field.  The current emerging hypothesis to defend high sensitivities is to say that the heat is going directly into the deep oceans.  At some level this is sensible -- the vast majority of the heat-carrying capacity (80-90%) of the Earth's surface is in the oceans, not in the atmosphere, and so they are the best place to measure warming.  Skeptics have said this for years.  But in the top 700 meters or so of the ocean, as measured by ARGO floats, ocean heating over the last 10 years (since these more advanced measuring devices were launched) has been only about 15% of what we might predict with high sensitivity models.  So when alarmists say today that the heat is going into the oceans, they say the deep oceans -- i.e., that the heat from global warming is not going into the air or the first 700 meters of ocean but directly into ocean layers beneath that.  Again, this is marginally possible by some funky dynamics, but just like the aerosol defense that has fallen apart of late, this defense of high sensitivity forecasts is completely unproven.  But the science is settled, of course.

Environmentalist vs. Environmentalist

The confrontation may be coming soon in the environmental community over wind power -- it certainly would have occurred already had the President promoting wind been Republican rather than Democrat.  I might have categorized this as "all energy production has environmental tradeoffs", but wind power is so stupid a source to be promoting that this is less of a tradeoff and more of another nail in the coffin.  As a minimum, the equal protection issues vis a vis how the law is enforced for wind companies vs. oil companies are pretty staggering.

“It happens about once a month here, on the barren foothills of one of America’s green-energy boomtowns: A soaring golden eagle slams into a wind farm’s spinning turbine and falls, mangled and lifeless, to the ground.

Killing these iconic birds is not just an irreplaceable loss for a vulnerable species. It’s also a federal crime, a charge that the Obama administration has used to prosecute oil companies when birds drown in their waste pits, and power companies when birds are electrocuted by their power lines.”

“[The Obama] administration has never fined or prosecuted a wind-energy company, even those that flout the law repeatedly. Instead, the government is shielding the industry from liability and helping keep the scope of the deaths secret.”

“Wind power, a pollution-free energy intended to ease global warming, is a cornerstone of President Barack Obama’s energy plan. His administration has championed a $1 billion-a-year tax break to the industry that has nearly doubled the amount of wind power in his first term. But like the oil industry under President George W. Bush, lobbyists and executives have used their favored status to help steer U.S. energy policy.”

“The result [of Obama energy policy] is a green industry that’s allowed to do not-so-green things. It kills protected species with impunity and conceals the environmental consequences of sprawling wind farms.”

“More than 573,000 birds are killed by the country’s wind farms each year, including 83,000 hunting birds such as hawks, falcons and eagles, according to an estimate published in March in the peer-reviewed Wildlife Society Bulletin.”

This Shouldn't Be Necessary, But Here Is Some Information on CO2 and Tornadoes

Well, I have zero desire to score political points off the tragedy in Oklahoma, but unfortunately others are more than eager to do so.  As a result, it is necessary to put a few facts on the table to refute the absurd claim that this tornado is somehow attributable to CO2.

  1. I really should not have to say this, but there is no mechanism by which CO2 has ever been accused of causing tornadoes except via the intervening step of warming.  Without warming, CO2 can't be the cause (even with warming, the evidence is weak, since tornadoes are caused more by temperature differentials than by temperature per se).  So it is worth noting that there have been no unusually warm temperatures in the area of late, and in fact the US has had one of its coolest springs in several decades.
  2. I should also not have to say this, but major tornadoes occurred in Oklahoma at much lower CO2 levels.


  3. In fact, if anything the trend in major tornadoes in the US over the last several decades is down
  4. And, this is actually a really, really low tornado year so far.  So it's hard to figure an argument that says that global warming reduced tornadoes in general but caused this one in particular.

[Chart: annual counts of strong (EF3-EF5) tornadoes]

 

Much more at this link

Update:  In 1975, tornado outbreaks blamed in Newsweek on global cooling

Matt Ridley's 10 Questions For Climate Alarmists

As I have read Mr. Ridley over the years, I have found him to have staked out a position on anthropogenic climate change very similar to mine (we are both called "lukewarmers" because we accept that man's addition of greenhouse gasses to the atmosphere warms the world incrementally but do not accept catastrophic, positive-feedback-driven warming forecasts).

I generally find room to nitpick even those whom I largely agree with, but from my perspective, this piece by Ridley is dead on.   (thanks to a reader for the link)

Mission Drift in Charitable Trusts

Much has been written about 2nd and 3rd generation trustees leading charitable trusts in completely different directions from the intentions of their original founder / donor.  These charitable trusts seem to, over time, become reflective of the goals and philosophy of a fairly closed caste of, for lack of a better word, non-profit-runners.  Their typically leftish, Eastern, urban outlook is sometimes bizarrely at odds with the trust's founding intentions and mission.

Here is one that caught my eye:  Bill McKibben is known as a global warming crusader, via his 350.org (the 350 refers to the fact that they feel the world was safe at 349 ppm CO2 but was headed for ruin at 351 ppm).  But if you hear him speak, as my son did at Amherst, he sounds more like a crusader against fossil fuels than against just global warming per se.  I am left with the distinct impression that he would be a passionate opponent of fossil fuel consumption even if there were no such thing as greenhouse gas warming.

Anyway, the thing I found interesting is that most of his anti-fossil fuel work is funded by a series of Rockefeller family trusts.  I am not privy to the original founding mission of these trusts, but my suspicion is that funding a campaign to paint producers of fossil fuels as outright evil, as McKibben often does, is a pretty bizarre use of money for the Rockefeller family.

In contrast to McKibben, I have argued that John D. Rockefeller, beyond saving the whales, did as much for human well-being as any person in the last two centuries by driving down the cost and increasing the quality, safety, and availability of fuels.   Right up there with folks like Norman Borlaug and Louis Pasteur.

Best and the Brightest May Finally Be Open To Considering Lower Climate Sensitivity Numbers

For years, readers of this site know that I have argued that:

  • CO2 is indeed a greenhouse gas, and since man is increasing its atmospheric concentration, there is likely some anthropogenic contribution to warming
  • Most forecasts, including those of the IPCC, grossly exaggerate temperature sensitivity to CO2 by assuming absurd levels of net positive feedback in the climate system
  • Past temperature changes are not consistent with high climate sensitivities

Recently, there have been a whole spate of studies based on actual observations rather than computer models that have been arriving at climate sensitivity numbers far below the IPCC number.   While the IPCC settled on 3C per doubling of CO2, it strongly implied that all the risk was to the upside, and many other prominent folks who typically get fawning attention in the media have proposed much higher numbers.

In fact, recent studies are coming in closer to 1.5C - 2C.  I actually still think these numbers will turn out to be high.  For several years now my money has been on a number from 0.8 to 1 C, sensitivity numbers that imply a small amount of negative feedback rather than positive feedback, a safer choice in my mind since most long-term stable natural systems are dominated by negative feedback.

Anyway, in an article that was as surprising as it is welcome, NY Times climate writer Andy Revkin has quite an article recently, finally acknowledging in the paper of record that maybe those skeptics who have argued for a lower sensitivity number kind of sort of have a point.

“Worse than we thought” has been one of the most durable phrases lately among those pushing for urgent action to stem the buildup of greenhouse gases linked to global warming.

But on one critically important metric — how hot the planet will get from a doubling of the pre-industrial concentration of greenhouse gases, a.k.a. “climate sensitivity” — some climate researchers with substantial publication records are shifting toward the lower end of the warming spectrum.

By the way, this is the only metric that matters.  All the other BS about "climate change" and "dirty weather" is meaningless without warming.  CO2 cannot change the climate or raise sea levels or any of that other stuff by any mechanism we understand or that has even been postulated, except via warming.  Anyway, to continue:

There’s still plenty of global warming and centuries of coastal retreats in the pipeline, so this is hardly a “benign” situation, as some have cast it.

But while plenty of other climate scientists hold firm to the idea that the full range of possible outcomes, including a disruptively dangerous warming of more than 4.5 degrees C. (8 degrees F.), remain in play, it’s getting harder to see why the high-end projections are given much weight.

This is also not a “single-study syndrome” situation, where one outlier research paper is used to cast doubt on a bigger body of work — as Skeptical Science asserted over the weekend. That post focused on the as-yet-unpublished paper finding lower sensitivity that was inadvisedly promoted recently by the Research Council of Norway.

In fact, there is an accumulating body of reviewed, published research shaving away the high end of the range of possible warming estimates from doubled carbon dioxide levels. Chief among climate scientists critical of the high-sensitivity holdouts is James Annan, an experienced climate modeler based in Japan who contributed to the 2007 science report from the Intergovernmental Panel on Climate Change. By 2006, he was already diverging from his colleagues a bit.

The whole thing is good.  Of course, for Revkin, this is no excuse to slow down all the actions supposedly demanded by global warming, such as substantially raising the price and scarcity of hydrocarbons.  Which to me simply demonstrates that people who have been against hydrocarbons have always been against them as an almost aesthetic choice, and climate change and global warming were mere excuses to push the agenda.  After all, as there certainly are tradeoffs to limiting economic growth and energy use and raising the price of energy, how can a reduction in postulated harms from fossil fuels NOT change the balance point one chooses in managing their use?

PS-  I thought this was a great post mortem on Hurricane Sandy and the whole notion that this one data point proves the global warming trend:

In this case several factors not directly related to climate change converged to generate the event. On Sandy’s way north, it ran into a vast high-pressure system over Canada, which prevented it from continuing in that direction, as hurricanes normally do, and forced it to turn west. Then, because it traveled about 300 miles over open water before making landfall, it piled up an unusually large storm surge. An infrequent jet-stream reversal helped maintain and fuel the storm. As if all that weren’t bad enough, a full moon was occurring, so the moon, the earth, and the sun were in a straight line, increasing the moon’s and sun’s gravitational effects on the tides, thus lifting the high tide even higher. Add to this that the wind and water, though not quite at hurricane levels, struck an area rarely hit by storms of this magnitude so the structures were more vulnerable and a disaster occurred.

The last one is a key for me -- you have cities on the Atlantic Ocean that seemed to build and act as if they were immune from ocean storms.  From my perspective growing up on the gulf coast, where one practically expects any structure one builds on the coast to be swept away every thirty years or so, this is a big contributing factor no one really talks about.

She goes on to say that rising sea levels may have made the storm worse, but I demonstrated that it couldn't have added more than a few percentage points to the surge.

Wow

This is one of the more amazing things I have read of late.  Environmentalist recants his opposition to GMOs.  Good, I hope Greenpeace is listening and will reconsider its absurd and destructive opposition to golden rice.

As an environmentalist, and someone who believes that everyone in this world has a right to a healthy and nutritious diet of their choosing, I could not have chosen a more counter-productive path. I now regret it completely.

So I guess you’ll be wondering – what happened between 1995 and now that made me not only change my mind but come here and admit it? Well, the answer is fairly simple: I discovered science, and in the process I hope I became a better environmentalist....

So I did some reading. And I discovered that one by one my cherished beliefs about GM turned out to be little more than green urban myths.

I’d assumed that it would increase the use of chemicals. It turned out that pest-resistant cotton and maize needed less insecticide.

I’d assumed that GM benefited only the big companies. It turned out that billions of dollars of benefits were accruing to farmers needing fewer inputs.

I’d assumed that Terminator Technology was robbing farmers of the right to save seed. It turned out that hybrids did that long ago, and that Terminator never happened.

I’d assumed that no-one wanted GM. Actually what happened was that Bt cotton was pirated into India and roundup ready soya into Brazil because farmers were so eager to use them.

I’d assumed that GM was dangerous. It turned out that it was safer and more precise than conventional breeding using mutagenesis for example; GM just moves a couple of genes, whereas conventional breeding mucks about with the entire genome in a trial and error way.

Bravo Mr Lynas.  It is hard to admit one was wrong.  It is even harder, though, for a man like Lynas to declare himself on the "wrong" side of a "progressive" issue like this.  He has now likely put himself into a category along with black Republicans who will incur special wrath and disdain from progressives.

Speaking of the need for a little science in the environmental movement, I was channel surfing past Bill Moyers' show yesterday on PBS (actually I was navigating to our local PBS station to make sure Downton Abbey was set to record later in the day) when I heard Moyers whip out a stat that even with a carbon tax, the world will warm over 6 degrees this century.  Now, I don't know if he was talking in degrees F or C, but in either case, a 6 degree number far outstrips the climate sensitivity numbers used even by the IPCC, which many of us skeptics believe has exaggerated warming estimates.  It is constantly frustrating to be treated as an enemy of science by those who display such a casual contempt for it, while at the same time fetishizing it.

Trusting Experts and Their Models

Russ Roberts over at Cafe Hayek quotes from a Cathy O’Neill review of Nate Silver's recent book:

Silver chooses to focus on individuals working in a tight competition and their motives and individual biases, which he understands and explains well. For him, modeling is a man versus wild type thing, working with your wits in a finite universe to win the chess game.

He spends very little time on the question of how people act inside larger systems, where a given modeler might be more interested in keeping their job or getting a big bonus than in making their model as accurate as possible.

In other words, Silver crafts an argument which ignores politics. This is Silver’s blind spot: in the real world politics often trump accuracy, and accurate mathematical models don’t matter as much as he hopes they would....

My conclusion: Nate Silver is a man who deeply believes in experts, even when the evidence is not good that they have aligned incentives with the public.

Distrust the experts

Call me “asinine,” but I have less faith in the experts than Nate Silver: I don’t want to trust the very people who got us into this mess, while benefitting from it, to also be in charge of cleaning it up. And, being part of the Occupy movement, I obviously think that this is the time for mass movements.

Like Ms. O'Neill, I distrust "authorities" as well, and have a real problem with debates that quickly fall into dueling appeals to authority.  She is focusing here on overt politics, but subtler pressure and signalling are important as well.  For example, since "believing" in climate alarmism in many circles is equated with a sort of positive morality (and being skeptical of such findings equated with being a bad person) there is an underlying peer pressure that is different from overt politics but just as damaging to scientific rigor.  Here is an example from the comments at Judith Curry's blog discussing research on climate sensitivity (which is the temperature response predicted if atmospheric levels of CO2 double).

While many estimates have been made, the consensus value often used is ~3°C. Like the porridge in “The Three Bears”, this value is just right – not so great as to lack credibility, and not so small as to seem benign.

Huybers (2010) showed that the treatment of clouds was the “principal source of uncertainty in models”. Indeed, his Table I shows that whereas the response of the climate system to clouds by various models varied from 0.04 to 0.37 (a wide spread), the variation of net feedback from clouds varied only from 0.49 to 0.73 (a much narrower relative range). He then examined several possible sources of compensation between climate sensitivity and radiative forcing. He concluded:

“Model conditioning need not be restricted to calibration of parameters against observations, but could also include more nebulous adjustment of parameters, for example, to fit expectations, maintain accepted conventions, or increase accord with other model results. These more nebulous adjustments are referred to as ‘tuning’.”  He suggested that one example of possible tuning is that “reported values of climate sensitivity are anchored near the 3±1.5°C range initially suggested by the ad hoc study group on carbon dioxide and climate (1979) and that these were not changed because of a lack of compelling reason to do so”.

Huybers (2010) went on to say:

“More recently reported values of climate sensitivity have not deviated substantially. The implication is that the reported values of climate sensitivity are, in a sense, tuned to maintain accepted convention.”

Translated into simple terms, the implication is that climate modelers have been heavily influenced by the early (1979) estimate that doubling of CO2 from pre-industrial levels would raise global temperatures 3±1.5°C. Modelers have chosen to compensate their widely varying estimates of climate sensitivity by adopting cloud feedback values countering the effect of climate sensitivity, thus keeping the final estimate of temperature rise due to doubling within limits preset in their minds.

There is a LOT of bad behavior out there by modelers.  I know that to be true because I used to be a modeler myself.  What laymen do not understand is that it is way too easy to tune and tweak and plug models to get a preconceived answer -- and the more complex the model, the easier this is to do in a non-transparent way.  Here is one example, related again to climate sensitivity.

When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models.  Even if all past warming were attributed to CO2  (a heroic assertion in and of itself) the temperature increases we have seen in the past imply a climate sensitivity closer to 1 rather than 3 or 5 or even 10  (I show this analysis in more depth in this video).
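This back-of-envelope check can be sketched in a few lines.  The concentration and warming figures below are round illustrative assumptions (pre-industrial CO2 near 280 ppm, roughly 395 ppm at the time of writing, 0.8C of observed warming), not any exact data series, and the calculation deliberately ignores warming lags and non-CO2 forcings, just as the "heroic assertion" in the text does:

```python
import math

C0 = 280.0              # pre-industrial CO2, ppm (illustrative assumption)
C = 395.0               # recent CO2, ppm (illustrative assumption)
observed_warming = 0.8  # degrees C over the period (illustrative assumption)

# Warming scales roughly with the logarithm of concentration, so count how
# many "doublings" of CO2 we have experienced so far.
doublings = math.log(C / C0) / math.log(2.0)   # about 0.5 doublings

# If ALL observed warming is attributed to CO2, the implied sensitivity
# per doubling is simply observed warming divided by doublings so far.
implied_sensitivity = observed_warming / doublings

print(f"doublings so far: {doublings:.2f}")
print(f"implied sensitivity: {implied_sensitivity:.1f} C per doubling")
```

On these assumptions the implied sensitivity comes out around 1.6C per doubling, far below the 3C and higher figures in the models.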

My skepticism was increased when several skeptics pointed out a problem that should have been obvious.  The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures?  If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data.  But they all do.  It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).

The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl.  To understand his findings, we need to understand a bit of background on aerosols.  Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.

What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures.  When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures.  Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.

Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures.  In my terminology, aerosol cooling was the plug variable.

By the way, this aerosol issue is central to recent work that is pointing to a much lower climate sensitivity to CO2 than has been reported in past IPCC reports.

Climate De-Bait and Switch

Dealing with facile arguments that are supposedly perfect refutations of the climate skeptics' position is a full-time job akin to cleaning the Augean Stables.  A few weeks ago Kevin Drum argued that global warming added 3 inches to Sandy's 14-foot storm surge, which he said was an argument that totally refuted skeptics and justified massive government restrictions on energy consumption (or whatever).

This week Slate (and the Desmog blog) think they have the ultimate killer chart, one they call a "slam dunk" on skeptics.  Click through to my column this week at Forbes to see if they really do.

Sandy and Global Warming

The other day I linked my Forbes column that showed that there was no upward trend in global hurricane number and strength, the number of US hurricane strikes, or the number of October hurricanes.  Given these trends, anyone who wants to claim Sandy is proof of global warming is forced to extrapolate from a single data point.

Since I wrote that, Bob Tisdale had an interesting article on Sandy.  The theoretical link between global warming and more and stronger Atlantic hurricanes has not been fully proven, but it posits that warmer waters will provide energy for more and larger storms (like Sandy).  The claim, then, is that global warming has heated up the waters through which hurricanes pass and that feed these hurricanes' strength.

Bob Tisdale took a look at the historical trends in sea surface temperatures in the area bounded by Sandy's storm track.  These are the temperature trends for the waters that fueled Sandy.  This is what he got:

If he has done the analysis right, this means there is no warming trend over the last 60+ years in the ocean waters that fed Sandy.  This means that the unusually warm seas that fed Sandy's growth were simply a random event, an outlier which appears from this chart to be unrelated to any long-term temperature trend.

Update:  I challenge you to find any article arguing that Sandy was caused by anthropogenic global warming that actually includes a long-term trend chart (other than global temperatures) in the article.  The only one I have seen is a hurricane strike chart that is cut off in the 1950s (despite data that goes back over 100 years) because this is the only cherry-picked cut-off point that delivers an upward trend.  If you find one, email me the link; I would like to see it.

Extrapolating From A Single Data Point: Climate and Sandy

I have a new article up at Forbes on how crazy it is to extrapolate conclusions about the speed and direction of climate change from a single data point.

Positing a trend from a single data point without any supporting historical information has become a common media practice in discussing climate.  As I wrote several months ago, the media did the same thing with the hot summer, arguing frequently that this recent hot dry summer proved a trend for extreme temperatures, drought, and forest fires.  In fact, none of these are the case — this summer was not unprecedented on any of these dimensions and no upward trend is detectable in long-term drought or fire data.   Despite a pretty clear history of warming over the last century, it is even hard to establish any trend in high temperature extremes  (in large part because much of the warming has been in warmer night-time lows rather than in daytime highs).  See here for the data.

As I said in that earlier article, when the media posits a trend, demand a trendline, not just a single data point.

To this end, I try to bring some actual trend data to the trend discussion.

OMG -- More Smoke!

Kudos to a reader who pointed this one out to me from the Mail online.  It is a favorite topic of mine, the use by the more-scientific-than-thou media of steam to illustrate articles on smoke and pollution.

Check out the captions - smoke is billowing out.  Of course, what they are likely referring to -- the white plumes from the 8 funnel-shaped towers -- is almost certainly pure water.  These are cooling towers, which cool water through evaporative cooling.  These towers are often associated with nuclear plants (you can see that in the comments) but are used for fossil fuel plants as well.  There does appear to be a bit of smoke in the picture, but you have to look all the way in the upper left, at the two tall thin towers, to see a hint of emissions.  Even in this case, the plume from the nearer and smaller of the two stacks appears to contain a lot of water vapor as well.  My guess is the nasty stuff, to the extent it exists, is coming from the tallest stack, and it is barely in the picture and surely not the focus of the caption.

The article itself is worth a read, arguing that figures from the UK Met office show there has not been any global warming for 16 years.  This is not an insight for most folks who follow the field, so I did not make a big deal about it, but it is interesting that a government body would admit it.

A Truly Bad Study

Imagine this study:  An academic who is a strong Democrat wants to do a study to discover if Republicans suffer from a psychological tendency to bizarre conspiracy theories.  OK, the reasonable mind would already be worried about this.  The academic says his methodology will be an online survey of the first 1000 people who reply to him from the comment sections of certain blogs.   This is obviously terrible -- a 12-year-old today understands the problems with such online surveys.  But the best part is that he advertises the survey only on left-wing sites like the Daily Kos, telling anyone from those heavily Democratic sites that if they self-identify as Republicans, they can take this survey and their survey responses will be published as typical of Republicans.  Anyone predict what he would get?

It is hard to believe that, even in this post-modern academic world, such a piece of garbage could get published.  But it did.  The only difference is that the academic was a strong believer in global warming, he was writing about skeptics, and sought out survey respondents only on strong-believer sites.   What makes this story particularly delicious is the juxtaposition of the author's self-appointed role as defender of science with his atrocious scientific methodology.   The whole story is simply amazing, and you can read about it at JoNova's site.

In one way, it is appropriate to have this published in a psychology journal, as it is such a great example of the psychological need for confirmation.  You can just see those climate alarmists breathing a little easier - "we don't have to listen to those guys, do we?"  No need for debate, no need for analysis, no need for thought.  Just immediate dismissal of their arguments because they come from, well, bad people.   Argumentum ad hominem, indeed.

 

I Was Reading Matt Ridley's Lecture at the Royal Society for the Arts....

... and it was fun to see my charts in it!  The lecture is reprinted here (pdf) or here (html).  The charts I did are around pages 6-7 of the pdf, the ones showing the projected curve of global warming for various climate sensitivities, and backing into what that should imply for current warming.  In short, even if you don't think warming in the surface temperature record is exaggerated, there still has not been anywhere near the amount of warming one would expect for the types of higher sensitivities in the IPCC and other climate models.  Warming to date, even if not exaggerated and all attributed to man-made and not natural causes, is consistent with far less catastrophic, and more incremental, future warming numbers.

These charts come right out of the IPCC formula for the relationship between CO2 concentrations and warming, a formula first proposed by Michael Mann.  I explained these charts in depth around the 10 minute mark of this video, and returned to them to make the point about past warming around the 62 minute mark.   This is a shorter video, just three minutes, that covers the same ground.  Watching it again, I am struck by how relevant it is as a critique five years later, and by how depressing it is that this critique still has not penetrated mainstream discussion of climate.  In fact, I am going to embed it below:

The older slides Ridley uses, which are cleaner (I went back and forth on the best way to portray this stuff) can be found here.

By the way, Ridley wrote an awesome piece for Wired more generally about catastrophism which is very much worth a read.

"Abnormal" Events -- Droughts and Perfect Games

Most folks, and I would include myself in this, have terrible intuitions about probabilities, and in particular about the frequency and patterns of occurrence in the tails of the normal distribution, what we might call "abnormal" events.  This strikes me as a particularly relevant topic as the severity of the current drought and high temperatures in the US is being used as absolute evidence of catastrophic global warming.

I am not going to get into the global warming bits in this post (though a longer post is coming).  Suffice it to say that if it is hard to accurately measure shifts in the mean of climate patterns directly, given all the natural variability and noise in the weather system, it is virtually impossible to infer shifts in the mean from individual occurrences of unusual events.  Events in the tails of the normal distribution are infrequent, but not impossible or even unexpected over enough samples.

What got me to thinking about this was the third perfect game pitched this year in the MLB.  Until this year, only 20 perfect games had been pitched in over 130 years of history, meaning that one is expected every 7 years or so  (we would actually expect them more frequently today given that there are more teams and more games, but even correcting for this we might have an expected value of one every 3-4 years).  Yet three perfect games happened, without any evidence or even any theoretical basis for arguing that the mean is somehow shifting.  In rigorous statistical parlance, sometimes shit happens.  Were baseball more of a political issue, I have no doubt that writers from Paul Krugman on down would be writing about how three perfect games this year is such an unlikely statistical fluke that it can't be natural, and must have been caused by [fill in behavior of which author disapproves].  If only the Republican Congress had passed the second stimulus, we wouldn't be faced with all these perfect games....

Postscript:  We like to think that perfect games are the ultimate measure of a great pitcher.  This is half right.  In fact, we should expect entirely average pitchers to get perfect games every so often.  A perfect game is when the pitcher faces 27 hitters and none of them get on base.  So let's take the average hitter facing the average pitcher.  The league average on base percentage this year is about .320 or 32%.  This means that for each average batter, there is a 68% chance for the average pitcher in any given at bat to keep the batter off the base.  All the average pitcher has to do is roll these dice correctly 27 times in a row.

The odds against that are .68^27, or about one in 33,000.  But this means that once in every 33,000 pitcher starts (there are two pitcher starts per game played in the MLB), the average pitcher should get a perfect game.  Since there are about 4,860 regular season starts per year (30 teams x 162 games), the average pitcher should get a perfect game every 7 years or so.  Through history, there have been about 364,000 starts in the MLB, so this would point to about 11 perfect games by average pitchers.  About half the actual total.
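The arithmetic above can be checked in a few lines of Python, using the figures in the text (and its simplifying assumption that every plate appearance is independent):

```python
obp = 0.320                     # league-average on-base percentage, per the text
p_out = 1.0 - obp               # chance an average pitcher retires an average batter
p_perfect = p_out ** 27         # 27 batters in a row, assumed independent
odds_against = 1.0 / p_perfect  # roughly one in 33,000 starts

starts_per_year = 30 * 162      # team-games per season = pitcher starts per season
years_per_perfect_game = odds_against / starts_per_year  # about 7 years

historical_starts = 364_000
expected_perfect_games = historical_starts * p_perfect   # about 11 games

print(f"odds against: one in {odds_against:,.0f}")
print(f"expected every {years_per_perfect_game:.1f} years")
print(f"expected in MLB history: {expected_perfect_games:.1f}")
```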

Now, there is a powerful statistical argument for demonstrating that great pitchers should be over-weighted in perfect game stats:  the probabilities are VERY sensitive to small changes in on-base percentage.  Let's assume a really good pitcher has an on-base percentage against him that is 30 points less than the league average, and a bad pitcher has one 30 points worse.   The better pitcher would then expect a perfect game every 10,000 starts, while the worse pitcher would expect a perfect game every 113,000 starts.  I can't find the stats on individual pitchers, but my guess is that the spread between the best and worst pitchers in on-base percentage against is more than 60 points, since the team batting average against stats (not individual but team averages, which should be less variable) have a 60 point spread from best to worst. [update:  a reader points to this, which says there is actually a 125-point spread from best to worst.  That is a difference in expected perfect games from one in 2,000 for Jared Weaver to one in 300,000 for Derek Lowe.  Thanks Jonathan]
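The same calculation, repeated for the hypothetical 30-point shifts in on-base percentage described above, shows how much leverage a small OBP change has over 27 consecutive outcomes:

```python
def starts_per_perfect_game(obp_against: float) -> float:
    """Expected starts between perfect games for a given OBP-against,
    assuming each of the 27 plate appearances is independent."""
    return 1.0 / (1.0 - obp_against) ** 27

league_avg = starts_per_perfect_game(0.320)  # about one in 33,000
good = starts_per_perfect_game(0.290)        # 30 points better: ~one in 10,000
bad = starts_per_perfect_game(0.350)         # 30 points worse: ~one in 113,000

print(f"league average: one in {league_avg:,.0f}")
print(f"good pitcher:   one in {good:,.0f}")
print(f"bad pitcher:    one in {bad:,.0f}")
```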

Update:  There have been 278 no-hitters in MLB history, or 12 times the number of perfect games.  The odds of getting through 27 batters based on a .320 on-base percentage are one in 33,000.  Based on a .255 batting average (which counts hits but not other ways on base, exactly parallel with the definition of a no-hitter), the odds of getting through the same batters are just one in 2,830.  The difference between these odds is a ratio of 11.7 to one, nearly perfectly explaining the ratio of no-hitters to perfect games on pure stochastics.
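And a sketch of the no-hitter comparison, again using the text's figures:

```python
# Perfect game: retire 27 straight batters (all ways on base count against you).
p_perfect = (1.0 - 0.320) ** 27    # roughly one in 33,000

# No-hitter: allow no hits in 27 at bats (walks etc. don't spoil it).
p_no_hitter = (1.0 - 0.255) ** 27  # roughly one in 2,800

ratio = p_no_hitter / p_perfect    # no-hitters ~12x more likely

print(f"perfect game: one in {1 / p_perfect:,.0f}")
print(f"no-hitter:    one in {1 / p_no_hitter:,.0f}")
print(f"ratio: {ratio:.1f} to 1")
```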

The Real Issue in Climate

I know I hammer this home constantly, but it is often worth a reminder.  The issue in the scientific debate over catastrophic man-made global warming theory is not whether CO2 is a greenhouse gas, or even the approximate magnitude of warming from CO2 directly, but the sign and magnitude of feedbacks.   Patrick Moore, Greenpeace founder, said it very well:

What most people don't realize, partly because the media never explains it, is that there is no dispute over whether CO2 is a greenhouse gas, and all else being equal would result in a warming of the climate. The fundamental dispute is about water in the atmosphere, either in the form of water vapour (a gas) or clouds (water in liquid form). It is generally accepted that a warmer climate will result in more water evaporating from the land and sea and therefore resulting in a higher level of water in the atmosphere, partly because the warmer the air is the more water it can hold. All of the models used by the IPCC assume that this increase in water vapour will result in a positive feedback in the order of 3-4 times the increase in temperature that would be caused by the increase in CO2 alone.

Many scientists do not agree with this, or do not agree that we know enough about the impact of increased water to predict the outcome. Some scientists believe increased water will have a negative feedback instead, due to increased cloud cover. It all depends on how much, and at what altitudes, latitudes and times of day that water is in the form of a gas (vapour) or a liquid (clouds). So if a certain increase in CO2 would theoretically cause a 1.0C increase in temperature, then if water caused a 3-4 times positive feedback the temperature would actually increase by 3-4C. This is why the warming predicted by the models is so large. Whereas if there was a negative feedback of 0.5 times then the temperature would only rise 0.5C.
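Moore's arithmetic is simple enough to make explicit.  The multipliers below are the ones in his quote, not outputs of any model:

```python
base = 1.0  # C of warming from a given CO2 increase, before feedbacks

# Net warming under each assumed water-vapor/cloud feedback multiplier.
scenarios = {
    "positive feedback, low end": 3.0,
    "positive feedback, high end": 4.0,
    "net negative feedback": 0.5,
}
totals = {label: base * mult for label, mult in scenarios.items()}

for label, total in totals.items():
    print(f"{label}: {total:.1f} C total warming")
```

The entire gap between catastrophic and benign forecasts lives in that one multiplier, which is exactly the author's point.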

My slightly lengthier discussions of this same issue are here and here.

Summer of the Shark, Global Warming Edition

My new column is up, comparing coverage of this summer's heat wave to "Summer of the Shark"

Before I discuss the 2012 global warming version of this process, let's take a step back to 2001 and the "Summer of the Shark."  The media hysteria began in early July, when a young boy was bitten by a shark on a beach in Florida.  Subsequent attacks received breathless media coverage, up to and including near-nightly footage from TV helicopters of swimming sharks.  Until the 9/11 attacks, sharks were the third biggest story of the year as measured by the time dedicated to it on the three major broadcast networks' news shows.

Through this coverage, Americans were left with a strong impression that something unusual was happening -- that an unprecedented number of shark attacks were occurring in that year, and the media dedicated endless coverage to speculation by various "experts" as to the cause of this sharp increase in attacks.

Except there was one problem -- there was no sharp increase in attacks.  In the year 2001, five people died in 76 shark attacks.  However, just a year earlier, 12 people had died in 85 attacks.  The data showed that 2001 actually was  a down year for shark attacks.

This summer we have been absolutely bombarded with stories about the summer heat wave in the United States.  The constant drumbeat of this coverage is being jumped on by many as evidence of catastrophic man-made global warming....

What the Summer of the Shark needed, and what this summer’s US heatwave needs, is a little context.  Specifically, if we are going to talk about supposed “trends”, then we should look at the data series in question over time.  So let’s do so.

I go on to present a number of data series on temperatures, temperature maximums, droughts, and fires.   Enjoy.

Climate and Post-Modern Science

I have written before of my belief that climate has become the first post-modern science.  This time, I will yield the floor to Garth Paltridge to make the same point:

But the real worry with climate research is that it is on the very edge of what is called postmodern science. This is a counterpart of the relativist world of postmodern art and design. It is a much more dangerous beast, whose results are valid only in the context of society’s beliefs and where the very existence of scientific truth can be denied. Postmodern science envisages a sort of political nirvana in which scientific theory and results can be consciously and legitimately manipulated to suit either the dictates of political correctness or the policies of the government of the day.

There is little doubt that some players in the climate game – not a lot, but enough to have severely damaged the reputation of climate scientists in general – have stepped across the boundary into postmodern science. The Climategate scandal of 2009, wherein thousands of emails were leaked from the Climate Research Unit of the University of East Anglia in England, showed that certain senior members of the research community were, and presumably still are, quite capable of deliberately selecting data in order to overstate the evidence for dangerous climate change. The emails showed as well that these senior members were quite happy to discuss ways and means of controlling the research journals so as to deny publication of any material that goes against the orthodox dogma. The ways and means included the sacking of recalcitrant editors.

Whatever the reason, it is indeed vastly more difficult to publish results in climate research journals if they run against the tide of politically correct opinion. Which is why most of the sceptic literature on the subject has been forced onto the web, and particularly onto web-logs devoted to the sceptic view of things. Which, in turn, is why the more fanatical of the believers in anthropogenic global warming insist that only peer-reviewed literature should be accepted as an indication of the real state of affairs. They argue that the sceptic web-logs should never be taken seriously by “real” scientists, and certainly should never be quoted. Which is a great pity. Some of the sceptics are extremely productive as far as critical analysis of climate science is concerned. Names like Judith Curry (chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology in Atlanta), Steve McIntyre (a Canadian geologist-statistician) and blogger Willis Eschenbach come to mind. These three in particular provide a balance and maturity in public discussion that puts many players in the global warming movement to shame, and as a consequence their outreach to the scientifically inclined general public is highly effective. Their output, together with that of other sceptics on the web, is fast becoming a practical and stringent substitute for peer review.

Update:  The IPCC does not seem to be on a path to building the credibility of climate science.  In their last report, the IPCC was rightly criticized for using "grey" literature as a source for their findings, against their own rules.  Grey literature encompasses about anything that is not published peer-reviewed literature, including, from the last report, sources that were essentially press releases from advocacy groups like the WWF.  They even used a travel brochure as a source.

This time, to avoid this criticism, the IPCC is ... changing their rules to allow such grey literature citations.    I am pretty sure that this was NOT passed in order to get more material from Steve McIntyre's blog.  In related news, the IPCC also changed the makeup of its scientific panel, putting geographical and gender diversity above scientific qualifications as criteria.  The quota for African climate scientists will, for example, be higher than that for North America.  See the whole story here.

Though this was all presented with pious words, my guess is that it was felt by the political leaders of the IPCC in the UN that the last report was not socialist or totalitarian enough and that more of such content was necessary.  We'll see.

Global Warming Ate My House

This has already made the rounds but I can't resist mocking an HBS professor whose classes I assiduously avoided when I was there.  Her house was hit by lightning.  Apparently, this was not the fault of poor lightning protection for her house, but was due to your SUV:

I am not a climate change scientist, but I have come to understand that I am a climate change victim. Our daughter took the lead investigating destructive lightning in Maine. She found that the NASA Goddard Institute estimates a 5-6% change in global lightning frequencies for every 1 degree Celsius global warming. The Earth has already warmed .8 degrees Celsius since 1802 and is expected to warm another 1.1-6.4 degrees by the end of the century. Maine's temperatures rose 1.9 degrees Celsius in the last century and another 2.24 degree rise is projected by 2104. I learned from our insurance company that while the typical thunderstorm produces around 100 lightning strikes, there were 217 strikes around our house that night. I was shocked to discover that when it comes to increased lightning frequency and destructiveness, a NASA study concluded that eastern areas of North America like Maine are especially vulnerable. Scientists confirm a 10% increase in the incidence of extreme weather events in our region since 1949.

This is one of those paragraphs that is so bad, I put off writing about it because I could write a book about all the errors.

  • The 5-6% lightning strike estimate comes from a single study that I have never seen replicated and, more importantly, from running a computer model.  Though it may exist, I have found no empirical evidence that lightning activity has increased on net with rising temperatures.
  • The world has warmed about 0.8C over the last century or two. Congrats.  Infinite monkeys and Shakespeare and all that.
  • We could argue the forecasts, but they are irrelevant to this discussion as we are talking about current weather which cannot be influenced by future warming.
  • Her claim that Maine's temperature rose 1.9C in the last century is simply absurd.  Apparently she got the data from some authoritative place called nextgenerationearth.com, but it's impossible to know, since in the few days since she published this article that site has taken down the page.  So we will just have to rely on a lesser source like the NOAA for Maine temperatures.  Her story is from 2009, so I used data through 2009.

Annual Averages in Maine:

Oops, not a lot of warming here, and certainly not 1.9C.  In fact, since the early 1900s there has not been a single year that was even 1.9C above the century average.  And 2009 was a below-average year.
Well, she said it was in summer.  That's when we get the majority of thunderstorms.  Maybe it is just summer warming?  The NOAA does not have a way to pull just the summer months, but I can run average temperatures for July-September of each year, which matches summer to within about 8 days.

Whoa!  What's this?  A 0.3-0.4C drop in the last 100 years.   And summer of 2009 (the last data point) was well below average. Wow, I guess cooling causes lightning.  We better do something about that cooling, and fast!  Or else buy this professor some lightning rods.
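The July-September averaging I ran is simple enough to sketch in a few lines.  The monthly values below are made-up placeholders for illustration, not actual NOAA Maine data:

```python
# Sketch of the July-September "summer" averaging described above.
# The monthly temperatures here are hypothetical placeholders,
# NOT actual NOAA data for Maine.
monthly_temps = {
    # year: {month number: mean temperature in degrees F}
    2007: {7: 68.1, 8: 66.9, 9: 59.4},
    2008: {7: 69.0, 8: 66.2, 9: 60.1},
    2009: {7: 64.8, 8: 67.3, 9: 58.2},
}

def summer_average(months):
    """Average the July, August, and September monthly means for one year."""
    return sum(months[m] for m in (7, 8, 9)) / 3.0

# One "summer" value per year, ready to plot or trend
summer_by_year = {year: summer_average(m) for year, m in monthly_temps.items()}
```

With real NOAA monthly data substituted in, a trend line through `summer_by_year` is all the analysis above amounts to.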
And you have to love evidence like this:

I learned from our insurance company that while the typical thunderstorm produces around 100 lightning strikes, there were 217 strikes around our house that night.

What is this, the climate version of the Lake Wobegon effect?  If our storms are not all below average, then that is proof of climate change?  Is this really how a Harvard professor does statistical analysis?  Can she really look at a single sample and conclude from it alone that the mean is shifting?
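The single-sample fallacy is easy to demonstrate with a quick simulation.  Assuming (purely for illustration; these parameters are not fitted to any real lightning data) that strikes per storm follow a skewed, lognormal-ish distribution with a long-run mean near 100, a storm with 217 strikes turns out to be entirely ordinary:

```python
import math
import random

random.seed(0)

# Simulate strike counts per storm from a lognormal distribution whose
# mean is ~100.  The sigma value is illustrative, not fitted to data.
sigma = 0.8
mu = math.log(100) - 0.5 * sigma**2  # chosen so the lognormal mean is 100

storms = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

mean_strikes = sum(storms) / len(storms)
share_above_217 = sum(s > 217 for s in storms) / len(storms)
# Even with a long-run mean of ~100 strikes per storm, a sizable fraction
# of individual storms exceed 217 -- observing one such storm says nothing
# about whether the mean has shifted.
```

Under these assumptions, several percent of all storms beat 217 strikes, so her insurance company's number is exactly the kind of draw you expect from an unchanged distribution.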

Finally, she goes on to say that extreme weather in her area is up 10%, citing some source called the Gulf of Maine Council on Marine Environment.  Well, of course, you can't find that fact anywhere in the source she links.  And besides, even if Maine extreme weather is up, it can't be because of warming, because Maine seems to be cooling.

This is just a classic example of the observer bias that is driving the whole "extreme weather" meme.  I will show you what is going on by analogy.  This is from the Wikipedia page on "Summer of the Shark":

The media's fixation with shark attacks began on July 6, when 8-year-old Mississippi boy Jessie Arbogast was bitten by a bull shark while standing in shallow water at Santa Rosa Island's Langdon Beach. ...

Immediately after the near-fatal attack on Arbogast, another attack severed the leg of a New Yorker vacationing in The Bahamas, while a third attack on a surfer occurred about a week later on July 15, six miles from the spot where Arbogast was bitten.[6] In the following weeks, Arbogast's spectacular rescue and survival received extensive coverage in the 24-hour news cycle, which was renewed (and then redoubled) with each subsequent report of a shark incident. The media fixation continued with a cover story in the July 30th issue of Time magazine.

In mid-August, many networks were showing footage captured by helicopters of hundreds of sharks coalescing off the southwest coast of Florida. Beach-goers were warned of the dangers of swimming,[7] despite the fact that the swarm was likely part of an annual shark migration.[8] The repeated broadcasts of the shark group have been criticized as blatant fear mongering, leading to the unwarranted belief in a so-called shark "epidemic".[8]...

In terms of absolute minutes of television coverage on the three major broadcast networks (ABC, CBS, and NBC), shark attacks were 2001's third "most important" news story prior to September 11, behind the western United States forest fires and the political scandal resulting from the Chandra Levy missing persons case.[11] However, the comparatively higher shock value of shark attacks left a lasting impression on the public. According to the International Shark Attack File, there were 76 shark attacks in 2001, lower than the 85 attacks documented in 2000; furthermore, although 5 people were killed in attacks in 2001, this was less than the 12 deaths caused by shark attacks the previous year.[12]

A trend in news coverage ≠ a trend in the underlying frequency. If the two were correlated, gas prices would only ever go up and would never come down.

An Amazing Hypothesis: Supernovas and Earth's Climate

A reader sent this abstract of a Henrik Svensmark study with a one-word caption:  Wow!  I agree.  The notion of "local" (and by local, we mean unimaginably far away) supernovae affecting the Earth's climate is certainly creative.  I haven't even read the thing, so I'm certainly not buying it yet, but it is an amazing hypothesis.

Observations of open star clusters in the solar neighbourhood are used to calculate local supernova (SN) rates for the past 510 Myr. Peaks in the SN rates match passages of the Sun through periods of locally increased cluster formation which could be caused by spiral arms of the Galaxy. A statistical analysis indicates that the Solar system has experienced many large short-term increases in the flux of Galactic cosmic rays (GCR) from nearby SNe. The hypothesis that a high GCR flux should coincide with cold conditions on the Earth is borne out by comparing the general geological record of climate over the past 510 Myr with the fluctuating local SN rates. Surprisingly, a simple combination of tectonics (long-term changes in sea level) and astrophysical activity (SN rates) largely accounts for the observed variations in marine biodiversity over the past 510 Myr. An inverse correspondence between SN rates and carbon dioxide (CO2) levels is discussed in terms of a possible drawdown of CO2 by enhanced bio-productivity in oceans that are better fertilized in cold conditions – a hypothesis that is not contradicted by data on the relative abundance of the heavy isotope of carbon, 13C.

I was initially very skeptical of Svensmark's work attempting to link cosmic rays to cloud formation, with that effect acting as an amplifier (in terms of warming and cooling effects) of changes in solar output.  I must say that over time, that work has survived replication efforts pretty well.