Posts tagged ‘media’

I Hate to Repeat Myself, But Trump Did Not Win: Clinton Lost

This article by Damon Linker totally mirrors my take on this election -- a competent Democratic candidate without Clinton's many flaws should have wiped the floor with Trump.  Biden would have won, I am absolutely convinced.  Anyway, I liked this bit from Linker:

Most of all, I don't want to hear about how unfairly Clinton was treated by the media. In comparison to whom? All the other candidates who've run for president while under criminal investigation by the FBI? (Maybe that substantial handicap should have overridden the party's presumption that she was owed the nomination because it was "her turn.") Or do you mean, instead, that she was treated badly in comparison to her opponent? Really? You mean the one whose 24/7 media coverage was overwhelmingly, relentlessly negative in tone and content? Either way, a halfway competent campaign should have been able to take advantage of the great good fortune of running against Donald J. Trump and left him bleeding in the ditch.

I am exhausted with folks talking about some fundamental political shift to a white male resurgence, or whatever.  There was no shift.  Trump got about the same number of votes as Romney and McCain.  He won no more white male votes than those guys and if anything performed better than them in traditional Democratic categories like single women and blacks.  The reason Trump won is because Clinton had 10 million fewer votes than Obama had in his first win.  Traditional Democratic supporters were unenthusiastic about Clinton and stayed home.

Um, What Part of "Off the Record" Doesn't Anyone Understand?

I don't know if this is coming from the media folks present in the room or the Trump side, but the New York Post has a pretty complete record of Trump's "off the record" meeting with the media.  Yet another reason not to trust the media -- they don't follow their own rules.

The Term "Fake News" Joins "Hate Speech" As A New Tool for Ideological Speech Suppresion

The term "hate speech" has become a useful tool for speech suppression, mostly from the Left side of the political aisle.  The reason it is such a dangerous term for free speech is that there is no useful definition of hate speech, meaning that in practice it often comes to mean, "confrontational speech that I disagree with."   I think most of us would agree that saying, "all black men should be lynched" is unambiguously hateful.  But what about saying something like "African Americans need to come to terms with the high rate of black on black violence."  Or even, "President Obama plays too much golf."   I would call both the latter statements opinions that, even if wrong, reasonably fit within the acceptable bounds of public discourse, but both have been called hate speech and racist.

The Left's new tool for speech suppression appears to be the term "fake news."  Certainly a news story that says, "America actually has 57 states" would be considered by most to be fake.  We understand (or most of us outside places like the New York Times, which still seems to get fooled) that sites like the Onion are fake.   But, as I suspected the very first time I heard the term, "fake news" also seems to be defined as "political sites with which I disagree."  Via Reason:

But Zimdars' list is awful. It includes not just fake or parody sites; it includes sites with heavily ideological slants like Breitbart, LewRockwell.com, Liberty Unyielding, and Red State. These are not "fake news" sites. They are blogs that—much like Reason—have a mix of opinion and news content designed to advance a particular point of view. Red State has linked to pieces from Reason on multiple occasions, and years ago I wrote a guest commentary for Breitbart attempting to make a conservative case to support gay marriage recognition....

Reporting on the alleged impact of fake news on the election is itself full of problems. BuzzFeed investigated how well the top "fake" election news stories performed on Facebook compared to the top "real" election news stories. The fake stories had more "engagement" on Facebook than stories from mainstream media outlets. There's basic problems with this comparison—engagement doesn't mean that people read the stories or even believed them (I know anecdotally that when a fake news story shows up in my feed, the "engagement" is often people pointing out that the story is fake).

There's also a problem when you look at the top stories from mainstream media outlets—they tend toward ideologically supported opinion pieces as well. Tim Carney over at The Washington Examiner noted that two of the top three stories are essentially opinion pieces:

Here's the top "Real News" stories: "Trump's history of corruption is mind-boggling. So why is Clinton supposedly the corrupt one?" As the headline suggests, this is a liberal opinion piece, complaining that the media doesn't report enough on Trump's scandals.

No. 2 is "Stop Pretending You Don't Know Why People Hate Hillary Clinton." This is a rambling screed claiming that people only dislike Clinton because she is a woman.

So in an environment where "fake news" is policed by third parties that rely on expert analysis, we could see ideologically driven posts from outlets censored entirely because they're lesser known or smaller, while larger news sites get a pass on spreading heavily ideological opinion pieces. So a decision by Facebook to censor "fake news" would heavily weigh in favor of the more mainstream and "powerful" traditional media outlets.

The lack of having a voice in the media is what caused smaller online ideology-based sites to crop up in the first place. Feldman noted that he's already removed some sites that he believes have been included "unfairly" in Zimdars' list. His extension also doesn't block access to any sites in any event. It just produces a pop-up warning.

Tellingly, in a quick scan of the sites, I don't see any major sites of the Left, while I see many from the Right (though Zero Hedge is on the list and writes from both the Left and the Right).   Daily Kos anyone?  There are conspiracy sites on the list but none that I see peddle conspiracies (e.g. 9/11 trutherism) of the Left.

This is yet another effort to impose ideological censorship but make it feel like it is following some sort of neutral criteria.

Update on My Letter to Princeton

Part of what I wrote to Princeton:

left-leaning kids ... today can sail through 16 years of education without ever encountering a contrary point of view. Ironically, it is kids on the Left who are being let down the most, raised intellectually as the equivalent of gazelles in a petting zoo rather than wild on the Serengeti.

Princeton gazelle student writing in the Daily Princetonian:

In the morning, I woke up to a New York Times news alert and social media feeds filled with disappointment. The United States had democratically elected a man who, among so many other despicable qualities and policies, is accused of and boasts about committing sexual assault. As a woman passionate about gender equality, women’s leadership, and ending sexual violence; as someone dedicated to the Clinton campaign and ready to make history; and, quite frankly, as a human being, I didn’t know how to process this. I still don’t. I felt for my friends and anyone who feels that this result puts their safety and their loved ones’ safety at risk, acknowledging that I am not the person this outcome will affect the most.

I didn’t leave my room Wednesday morning. I sat and sobbed and I still have the tissues all over my floor to prove it. When I absolutely had to get up for class, I put on my “Dare to say the F-word: Feminism” t-shirt and my “A woman belongs in the House and the Senate” sweatshirt to make myself feel stronger. Still crying, I left my room.

After hearing the election results, I had expected that the vandal would have torn down my angry note or left some snide comment. To my surprise, it was still there, and people had left supportive notes beside it. I have no idea whether the vandal is a Trump supporter or a misguided prankster unable to fathom the negative impact that a Trump presidency will have on so many people. But I know that the love and kindness others anonymously left gave me the support I needed Wednesday morning.

In every election since I was about 18 years old, I woke up on the day after the election to a President-elect I did not support, one who championed policies I thought to be misguided or even dangerous.   But I had the mental health to go on with my life;  and I had the knowledge, from a quality western history education (which no longer seems to be taught in high school or at Princeton), that our government was set up to be relatively robust to bad presidents; and I had the understanding, because I ate and drank and went to class and lived with many other students with whom I disagreed (rather than hiding in rubber room safe spaces created by my tribe), that supporters of other political parties were not demons, but were good and well-intentioned people with whom I disagreed.

News Selection Bias

When some sort of "bad" phenomenon is experiencing a random peak, stories about this peak flood the media.  When the same "bad" phenomenon has an extraordinarily quiet year, there are no stories in the media.  This (mostly) innocuous media habit (based on their incentives) creates the impression among average folks that the "bad" phenomenon is on the rise, even when there is no such trend.

Case in point: tornadoes.  How many stories have you seen this year about what may well be a record low year for US tornadoes?

Postscript: By the way, some may see the "inflation-adjusted" term in the heading of the chart and think that is a joke, but there is a real adjustment required. Today we have doppler radar and storm chasers and all sorts of other tornado detection tools that did not exist in, say, 1950. So tornado counts in 1950 are known to understate actual counts we would get today and thus can't be compared directly. Since we did not miss many of the larger tornadoes in 1950, we can adjust the smaller numbers based on the larger numbers. This is a well-known effect and an absolutely necessary adjustment, though Al Gore managed to completely fail to do so when he discussed tornadoes in An Inconvenient Truth. Which is why the movie got the Peace prize, not a science prize, from the crazy folks in Oslo.
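To make that adjustment concrete, here is a minimal sketch with made-up numbers -- this is not NOAA's actual procedure, just the logic described above.

```python
# Toy illustration of the adjustment described above, with made-up numbers.
# Assumption: strong (F3+) tornadoes were reliably counted in every era, so the
# modern ratio of weak-to-strong counts tells us roughly how many weak
# tornadoes the 1950-era detection network must have missed.

modern_weak, modern_strong = 1100, 45          # hypothetical modern-era annual counts
weak_per_strong = modern_weak / modern_strong

strong_1950, weak_reported_1950 = 40, 160      # hypothetical 1950 counts

estimated_weak_1950 = strong_1950 * weak_per_strong
print("1950 reported total: ", strong_1950 + weak_reported_1950)          # 200
print("1950 adjusted total: ", round(strong_1950 + estimated_weak_1950))  # ~1018
```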

Does My Generation Have More Tolerance for Spouses Who Don't Agree Politically?

Coming out of voting today, I met two different couples I know who both said the same thing to me:  "we cancelled each other out".  Meaning, I think, that the husband and wife voted differently in key elections.  I know this is also true of my wife and me.  Which leads me to wonder if there is a generational difference in toleration for spouses with different political views, or if (as is often the case) nothing is really changing on this and the examples given in the media of intolerant millennials who won't socialize with people who don't pass various political litmus tests are just that, isolated examples.

Speaking of which, I took my daughter to vote for the first time today.  She was pretty excited, and planned her outfit in advance.


She asked me why I was not wearing my "I voted" sticker.  I told her that it made me feel like a sucker.  She told me that she had clearly come to vote her first time with the wrong person, and should have found a doe-eyed idealist.

The Higher Education Monoculture

I have written before that many universities have focused on creating true diversity of skin pigments and reproductive plumbing among their students but, in the world of ideas that is their primary business, have created an intellectual monoculture.  If you don't believe it, check out this quote from a Yale dean in the Yale Daily News.

Despite ongoing campus discussions about free speech, Yale remains deeply unwelcoming to students with conservative political beliefs, according to a News survey distributed earlier this month.

Nearly 75 percent of 2,054 respondents who completed the survey — representing views across the political spectrum — said they believe Yale does not provide a welcoming environment for conservative students to share their opinions on political issues. Among the 11.86 percent of respondents who described themselves as either “conservative” or “very conservative,” the numbers are even starker: Nearly 95 percent said the Yale community does not welcome their opinions. About two-thirds of respondents who described themselves as “liberal” or “very liberal” said Yale is not welcoming to conservative students.

...

By contrast, more than 98 percent of respondents said Yale is welcoming to students with liberal beliefs. And among students who described themselves as “liberal” or “very liberal,” 85 percent said they are “comfortable” or “very comfortable” sharing their political views in campus discussions.

In an interview with the News, Yale College Dean Jonathan Holloway said the results of the survey were lamentable but unsurprising. Holloway attributed conservative students’ discomfort at sharing their views partly to the pervasiveness of social media.

“So much of your generation’s world is managed through smart phones. There’s no margin anymore for saying something stupid,” Holloway said. “People have been saying dumb things forever, but when I was your age word of mouth would take a while. Now it’s instantaneous, now context is stripped away.”

So the reason Conservatives have a problem at Yale, according to the Yale administration, is that Yale people don't tolerate folks who are stupid.  LOL.  The Dean later tried to back away from this statement, arguing that he did not mean Conservatives said stupid things, but his comments don't make any sense in any other context.

The institution is certainly hurt by this sort of narrow-mindedness.  It is more of a mixed bag for students.  While Conservatives are certainly frustrated that they are frequently not allowed to bring speakers from their side of political issues to campus, there is potentially a silver lining.  As I wrote previously in my letter to Princeton:

I suppose I should confess that this has one silver lining for my family. My son just graduated Amherst College, and as a libertarian he never had a professor who held similar views. This means that he was constantly challenged to defend his positions with faculty and students who at a minimum disagreed, and in certain cases considered him to be a pariah. In my mind, he likely got a better education than left-leaning kids who today can sail through 16 years of education without ever encountering a contrary point of view. Ironically, it is kids on the Left who are being let down the most, raised intellectually as the equivalent of gazelles in a petting zoo rather than wild on the Serengeti.

December Surprise

I have written a number of times in the past that the media is often reluctant to publish potential issues about pending legislation that they support -- but, once the legislation is passed, the articles about problems with the legislation or potential unintended consequences soon come out, when it is too late to affect the legislative process.  My guess is that these media outlets want the legislation to pass, but they want to cover their butts in the future, so they can say "see, we discussed the potential downsides -- we are even-handed."

I don't know if this practice spills over from legislation to elections, but if it does, we should see the hard-hitting articles about Hillary Clinton sometime in December.

EEK! Those Power Plants Are Spewing Water Into the Atmosphere!

Yet another media article on CO2 illustrated with steam plumes

Postscript:  This is even funnier, potentially, since given the size and design of those cooling towers, this is very likely a nuclear plant, which of course has no CO2 emissions at all.

Postscript #2:  I tried a reverse image search to try to confirm my guess this is a nuclear plant.  This is what Google returned:

[Google reverse image search results for the cooling tower photo]

That will give you some idea how often the media has used this stock image of water vapor to illustrate CO2 articles.

Guide for Politicians: How to Lie in the 21st Century

Lying is an old, old skill among politicians.  What is new in the 21st century is that with the advent of the Internet and alternative media, it is much more likely for a politician to get caught publicly in a lie.  Based on my observations over the last year of the political-media process, here is my brief guide for politicians on how to lie, or more accurately, how to manage affairs when caught lying.

First, there must be a lie, as represented by this chart:


There is some underlying truth out there (shown with the blue dot), and given the squishiness of the English language at times, there are a variety of ways that truth could reasonably be restated, shown by the blue circle around it.  On the left we will assume someone has lied or made an incorrect statement about that truth, and again there is a reasonable range of meanings around that untrue statement, shown by the red circle around it.  Note that the reasonable range of meanings for the original statement do not encompass the truth.

So what happens next?  Well, one possibility is that no one calls you on the untruth.  Congratulations, you are done!  The other possibility, though, is that some crazy dude on the Internet found a cell phone video embedded in a World of Warcraft chat room that reveals you did not tell the truth.  So what now?

The thing to remember at this point is that you have two assets.  First, you presumably have supporters.  Your supporters want to believe you.  They are looking for some explanation or statement from you that is even minimally convincing, and they are ready to trumpet that explanation like it is the Word of God to the rest of the world.

Your second asset is the media.  Your original lie was maybe a week ago.  That is the Jurassic Period for the media.  They don't have the staff to track down what is happening today, much less go back over something from a week ago.

With these two assets in mind, you are going to restate your original untrue statement, like so in orange:


The key for this to work is to make sure the range of meanings from your original statement and the range of possible meanings from your new statement overlap.  By doing so, you haven't admitted to lying or changed your position -- you have clarified.  Cognitive dissonance in your supporters will cause their brains to immediately substitute all instances of your first statement in their memories with your new restatement.

OK, but what happens when that dude in his pajamas does it again, and claims you are still lying with your new restatement?  What do you do?  Same thing as last time: another restatement.  If necessary, you will keep restating until the range of meanings of your restatement overlaps with the truth:


Yay!  You are done.  If you really want to win the news cycle, take your final restatement to Politifact and get them to rate it as mostly true.   Sure, some crazies on the other side of the aisle are going to be screaming that the ultimate truth does not at all resemble your original statement, but just claim that they are dredging up old news and that it has already been settled.  For extra points, if you are a female and/or the member of an ethnic minority, claim discrimination, saying that the opposition is driven by racism, misogyny, etc.

I think this is all clearer with an example.  So let's take the case of Philander J. Donkeyphant, who is running for reelection.  Phil decides to lie about the vehicle he was driving yesterday.  Why does he lie?  Who knows, but Phil is a successful politician and senior government official and therefore one of our betters and let's not question his tactics.   So let's see how his lie plays out:

Lie:  I drove a red car yesterday

Soon, Philander has a problem.  Some crazy lady finds a traffic camera video and proves no red car drove by that could have been Philander's.  So Phil is forced into his first restatement:

First Restatement:  I was driving a deep-red pickup truck

A bit of a stretch but we can't really call it changing his story, since many folks might refer to the family car and actually be talking about a pickup truck.  And the "deep red" comment seems downright helpful, trying to provide more detail.  But wouldn't you know it, that lady can't find any deep red pickup trucks on camera.  So Phil moves to his second restatement:

Second Restatement:  I was driving a violet truck.

Again, a bit of a stretch, but violet is not far from deep-red.  He has dropped the detail of it being a pickup truck -- now it is just a truck -- but that is still arguably consistent with his immediately previous statement.

Finally, our annoying blogger-lady finds Philander and his vehicle on a video.  It turns out:

Truth:  He was driving a purple 18-wheeler.

When shown the video, old Phil says, "Sure, that's what I said.  A violet truck.  Obviously my opposition has nothing better to do than make stupid issues like this out of nothing."  Politifact confirms that "violet truck" is a truthful way to describe a "purple 18-wheeler," so the issue is closed.
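If you like thinking about those "range of meanings" diagrams a bit more formally, here is a toy sketch.  The intervals and the "meaning axis" are obviously invented for illustration; the point is just that each restatement only has to overlap the previous one, never the truth, until the last step.

```python
# Toy model of the restatement chain: each statement gets an interval of
# plausible meanings on an invented "meaning" axis. A restatement counts as a
# mere "clarification" if its interval overlaps the previous one, and the game
# ends when an interval covers the truth.

statements = [
    ("red car",         (0.0, 2.0)),
    ("deep-red pickup", (1.5, 3.5)),
    ("violet truck",    (3.0, 5.0)),
]
truth = 4.5   # "purple 18-wheeler", on the same made-up axis

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

prev = None
for label, rng in statements:
    defensible = prev is None or overlaps(prev, rng)
    covers_truth = rng[0] <= truth <= rng[1]
    print(f"{label:16s} overlaps previous: {defensible}   covers truth: {covers_truth}")
    prev = rng
```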

Perfect Example of Blaming the Free Market for Government Interventions

Hillary Clinton, along with many politicians and most of the media, is arguing that the recent large price increase in Epipens is some sort of market failure requiring government intervention to solve.

Democratic presidential nominee Hillary Clinton jumped into the fray over rapid price increases for the EpiPen, a life-saving injection for people who are having severe allergic reactions.

Mrs. Clinton called the recent price hikes of the EpiPen “outrageous, and just the latest example of a company taking advantage of its consumers.”

In a written statement calling for Mylan to scale back EpiPen prices, Clinton added, “It’s wrong when drug companies put profits ahead of patients, raising prices without justifying the value behind them.”

Why aren't similar government interventions required to curb greed in the pricing of paint, or tacos, or toilet paper?  Because the markets are allowed to operate and competitors know that if they raise prices too high, their existing competitors will take sales from them, and new competitors may enter the market.  The reason this is not happening with Epipens is that the Federal government blocks other companies from competing with Mylan for the Epipen business with a tortuous and expensive and pointless regulatory process (perhaps given even more teeth because Mylan's CEO has a lot of political pull).  The MSNBC article fails to even mention why Mylan has no competition, and in fact essentially assumes that Epipens are a natural monopoly and should be treated as such, despite the fact that there are 3 or 4 different companies that have tried (and failed) to clear the regulatory process over the last several years with competing products.  Perhaps these other companies would have been smarter to appoint a Senator's daughter to a senior management position.

Hillary Clinton is proposing a dumb government intervention to try to fix some of the symptoms of a previous dumb government intervention.  It would be far better to work the root cause instead.

Postscript:  Credit Vox with the stupid argument of the day:  

Other countries do this for drugs and medical care – but not other products, like phones or cars – because of something fundamentally unique about medication: If consumers can’t afford the product, they could have worse odds of living. In some cases, they face quite certain odds of dying. So most governments have decided that keeping these products affordable is a good reason to introduce more government regulation.

Hmm, let me pick a slightly different example -- food.  I will substitute that into the Vox comment.   I think it would be perfectly correct to say that there is not price regulation of food in the US, and that "If consumers can’t afford [food], they could have worse odds of living. In some cases, they face quite certain odds of dying."  In fact, the best place today to face high odds of dying due to lack of food is Venezuela, where the government heavily regulates food prices in the way Vox wishes to regulate drug prices.

Being A Victim Apparently Has More Status Now Than Being A Gold Medal Winner -- Ryan Lochte Channels "Jackie"

There appears to be no rational way to explain Ryan Lochte's bizarre need to make up a story about being the victim of an armed robbery.  The media seems to be pushing the notion that he made up the story to cover up his own vandalism at a gas station, but that makes zero sense.  He had already defused the vandalism incident with a payment of cash to the station owner.  The rational response would be to just shut up about the whole thing and let it be forgotten.

But instead, he purposely made a big deal about the incident, switching around the facts until he was a victim of an armed assault by men posing as police officers, up to and including harrowing details of a cocked gun being jammed into his forehead.  The incident, likely ignored otherwise, suddenly became a BIG DEAL and subsequent investigation (including multiple video sources) showed Lochte to be a bald-faced liar.

The only way I can explain Lochte's motivation is to equate it with the lies by "Jackie" at the University of Virginia, whose claims of being gang-raped as published in the Rolling Stone turned out to be total fabrications.  Like Lochte, she dressed up the story with horrifying details, such as being thrown down and raped on a floor covered in broken glass.  The only real difference I can see, in fact, between Lochte and Jackie  is that the media still protects Jackie (via anonymity) from well-deserved humiliation for her lies while it is piling on Lochte.

I can sort of understand Jackie's motivation -- she was by all accounts a frustrated, perhaps disturbed, certainly lonely young woman who was likely looking for some way to dramatically change her life.  But Lochte?  Ryan Lochte has won multiple Olympic medals, historically in the sports world a marker of the highest possible status.  But in today's world, Lochte viewed victimhood as even higher status.

Update:  This is probably the fairest account of the whole incident.

Uncertainty Intervals and the Olympics

If I had to pick one topic or way of thinking that engineers and scientists have developed but other folks are often entirely unfamiliar with, I might pick the related ideas of error, uncertainty, and significance.  A good science or engineering education will spend a lot of time on assessing the error bars for any measurement, understanding how those errors propagate through a calculation, and determining which digits of an answer are significant and which ones are, as the British might say, just wanking.

It is quite usual to see examples of the media getting notions of error and significance wrong.  But yesterday I saw a story where someone actually dusted these tools off and explained why the Olympics don't time events to the millionths of a second, despite clocks that are supposedly that accurate:

Modern timing systems are capable of measuring down to the millionth of a second—so why doesn’t FINA, the world swimming governing body, increase its timing precision by adding thousandths-of-seconds?

As it turns out, FINA used to. In 1972, Sweden’s Gunnar Larsson beat American Tim McKee in the 400m individual medley by 0.002 seconds. That finish led the governing body to eliminate timing by a significant digit. But why?

In a 50 meter Olympic pool, at the current men’s world record 50m pace, a thousandth-of-a-second constitutes 2.39 millimeters of travel. FINA pool dimension regulations allow a tolerance of 3 centimeters in each lane, more than ten times that amount. Could you time swimmers to a thousandth-of-a-second? Sure, but you couldn’t guarantee the winning swimmer didn’t have a thousandth-of-a-second-shorter course to swim. (Attempting to construct a concrete pool to any tighter a tolerance is nearly impossible; the effective length of a pool can change depending on the ambient temperature, the water temperature, and even whether or not there are people in the pool itself.)

By this standard, even timing to the hundredth of a second is not significant.  And all this is even before talk of currents in the Olympic pool distorting times.
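The arithmetic in the quote is easy to check.  Here is a quick sketch using the approximate 50m freestyle world record of about 20.9 seconds:

```python
# Back-of-the-envelope check of the quoted numbers: at roughly world-record
# 50m pace, how far does a swimmer travel in one millisecond, and how much
# timing ambiguity does a 3 cm lane-length tolerance imply?

record_time_s = 20.91                      # approximate 50m freestyle world record
speed = 50.0 / record_time_s               # ~2.39 m/s

distance_per_ms = speed * 0.001
print(f"distance covered in 0.001 s: {distance_per_ms * 1000:.2f} mm")   # ~2.39 mm

length_tolerance_m = 0.03                  # FINA pool-length tolerance quoted above
time_uncertainty_s = length_tolerance_m / speed
print(f"timing ambiguity from pool tolerance alone: {time_uncertainty_s:.3f} s")  # ~0.013 s
```

The second number is the interesting one: the allowed variation in pool length alone corresponds to roughly a hundredth of a second, which is why even the hundredths digit is marginal.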

Wow, With This Level of Understanding of How Government Works, It's Hard To Believe We Struggle to Have Meaningful Public Discourse

I don't have any particular comment on the Supreme Court decision in Voisine v. United States, but I have to highlight the headline that was just shared with me on Facebook:

Another Big Win: SCOTUS Just Banned Domestic Abusers From Owning Firearms

Um, pretty sure that is not what happened.

First, convicted domestic abusers generally are already banned from owning firearms.

Second, I am fairly certain that SCOTUS did not ban anything (not surprising since they don't have a Constitutional power to ban anything).  There was some legal uncertainty in the definitions of certain terms in a law (passed by Congress and signed by the President) that restricted gun ownership based on certain crimes.  This dispute over the meaning of these terms bounced back and forth in the courts until the Supreme Court took the case and provided the final word on how the terms should be interpreted by the judicial system.

This decision strikes me as a pretty routine sort of legal result fixing a niche issue in the interpretation of terms of the law.  How niche?  Well apparently Voisine was convicted (multiple times) of "intentionally, knowingly, or recklessly" hurting his girlfriend.  The facts of the case made it pretty clear that he was beating on her on purpose, but he argued that due to the "or" in the wording of the crime he was convicted of, as far as the law is concerned he might have only been convicted of recklessness, which shouldn't be covered under the gun ownership ban.  Really, this silliness should never have reached the Supreme Court, and did (in my interpretation) only because Second Amendment questions were involved, questions stripped out by SCOTUS.  Freed of any Second Amendment implications, SCOTUS rightly slapped his argument down as stupid and said he was subject to the ban.  Seems sensible to me, and this sort of thing happens literally constantly in the courts -- the only oddball thing in my mind was how this incredibly arcane niche issue made it to the SCOTUS.

Instead, the article is breathless about describing this incredibly niche case as closing a "gaping loophole."  It is written as if it is some seminal event that overturns a horror just one-notch short of concentration camps  -- "This is a win for feminism, equality in the home, and in finally making movements on reigning in this country’s insane, libertarian approach to gun-owning."    And then of course the article bounces around in social media, making everyone who encounters it just a little bit dumber.

The Lifestyle Charity Fraud

For decades I have observed an abuse of charities that I am not sure has a name.  I call it the "lifestyle" charity or non-profit.  These are charities better known for their glittering fundraisers than for their actual charitable work, and are often typified by having only a tiny percentage of their total budget flowing to projects that actually help anyone except their administrators.  These charities seem to be run primarily for the financial maintenance and public image enhancement of their leaders and administrators.  Most of their funds flow to the salaries, first-class travel, and lifestyle maintenance of their principals.

I know people first hand who live quite nicely as leaders of such charities -- having gone to two different Ivy League schools, I find it almost impossible not to encounter such folks among our alumni.  They live quite well, and appear from time to time in media puff pieces that help polish their egos and reinforce their self-righteous virtue-signaling.  I have frequently attended my university's alumni events where these folks are held out as exemplars, working on a higher plane than grubby business people like myself.  They drive me crazy.  They are an insult to the millions of Americans who do volunteer work every day, and to wealthy donors who work hard to make sure their money is really making a difference.  My dad, who used his substantial business success to do meaningful things in the world virtually anonymously (like helping save a historically black college from financial oblivion), had great disdain for these people running lifestyle charities.

So I suppose the one good thing about the Clinton Foundation is it is raising some awareness about this kind of fraud.   This article portrays the RFK Human Rights charity as yet another example of this lifestyle charity fraud.

The Media's Role in Generating Polarization

A while back, I was asked to write a short essay answering the question of whether the National Parks should be privatized.  Here is my full answer.

Let me show you the first paragraph and a half of my answer, because I want to use it to make a point:

Should National Parks be privatized, in the sense that they are turned entirely over to private owners?  No.  Public lands are in public hands for a reason — the public wants the government, not, say, Ritz-Carlton, to decide the use and character and access to the land.  No one wants a McDonald’s in front of Old Faithful, a common fear I hear time and again when privatization is mentioned.

However, once the agency determines the character of and facilities on the land, should their operation (as opposed to their ownership) be privatized?  Sure.   The NPS faces hundreds of millions of dollars in capital needs and deferred maintenance.  It is crazy to use its limited budget to have Federal civil service employees cleaning bathrooms and manning the gatehouse, when private companies have proven they can do a quality job so much less expensively....

It goes on from there, but I think that is a fairly nuanced and balanced answer, particularly given that I am probably the most vocal advocate in the country for public-private partnerships in public recreation.

But that nuance is not really interesting to the media.  They like point-counterpoint polarization.  So a web site called Blue Ridge Outdoors reprints my answer, but they edit it:

YES

No one wants a McDonald’s in front of Old Faithful, a fear I hear time and again when privatization is mentioned. However, once the government determines how to manage a particular park, should its operation be privatized? Sure. The National Park Service faces hundreds of millions of dollars in capital needs and deferred maintenance. It is crazy to use that limited budget for federal employees to clean bathrooms and man the gatehouse, when private companies have proven they can do a quality job much less expensively.

So my answer, which is pretty much "no," gets edited to a "YES," and the entire first paragraph of nuance is deleted.  And we wonder why the world seems polarized?

The Middle Class Is Shrinking Because They Are Becoming Rich

I have made this point before, but Tyler Cowen has a great chart from a new study.  The explanation is here, but basically they have defined the bands based on some income break points corrected for family size and inflation over time.
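For readers curious what "corrected for family size and inflation" means mechanically, here is a rough sketch.  The square-root equivalence scale, the price index values, and the break points below are illustrative placeholders, not the figures from the study Cowen cites.

```python
# Rough sketch of income banding adjusted for household size and inflation.
# The equivalence scale, index values, and break points are placeholders, not
# the study's actual parameters.

PRICE_INDEX = {1979: 72.6, 2014: 236.7}     # placeholder index values

def real_equivalized_income(income, household_size, year, base_year=2014):
    per_person_equivalent = income / (household_size ** 0.5)   # square-root equivalence scale
    return per_person_equivalent * PRICE_INDEX[base_year] / PRICE_INDEX[year]

def band(real_income):
    if real_income < 30_000:
        return "lower"
    if real_income < 100_000:
        return "middle"
    return "upper middle or above"          # placeholder break points

print(band(real_equivalized_income(45_000, 4, 1979)))     # a 1979 family of four
print(band(real_equivalized_income(200_000, 2, 2014)))    # a 2014 couple
```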


A reader sent me a nice note with this link, saying that I had been right many years ago when I began making this point.  That's good, but I will also confess to being wrong on a related point -- I said 8 years ago that the one good thing about having a Democratic President was that the media would suddenly become much more positive about the economy.  On that, I was wrong.  The media still has a strong bias towards telling everyone that their life is getting ever worse, even when no such thing is true.

Citizens United Haters, Is This Really What You Want? John Oliver Brexit Segment Forced to Air After Vote

A lot of folks, particularly on the Left, despise the Citizens United decision that said it was unconstitutional to limit third party political speech, particularly prior to an election (even if that speech was made by nasty old corporations).  The case was specifically about whether the government could prevent the airing of a third-party produced and funded documentary about one of the candidates just before an election.  The Supreme Court said that the government could not put in place such limits (i.e., "Congress shall make no law...") but Britain has no such restrictions, so we can see exactly what we would get in such a regime.  Is this what you want?

As Britain gears up to vote in the EU referendum later this week, broadcasters are constantly working to ensure their coverage remains impartial. One such company is Sky, which has this week been forced to delay the latest instalment of John Oliver's Last Week Tonight HBO show. Why? Because it contains a 15-minute diatribe on why the UK should remain part of Europe.

Instead of airing the programme after Game of Thrones on Sky Atlantic on Monday night, like it does usually, Sky has pushed it back until 10:10pm on Thursday, just after the polls close. Social media users are up in arms about the decision, but in reality, Sky appears to be playing everything by the book.

Sky's decision allows it to adhere to Ofcom rules that come into effect during elections and referendums. "Sky have complied with the Ofcom broadcasting restrictions at times of elections and referendums that prohibit us showing this section of the programme at this moment in time. We will be able to show it once the polls close have closed on Thursday," a Sky spokesperson told Engadget.

In March, the regulator warned broadcasters that they'd need to take care when covering May's local elections and the subsequent Brexit vote. Section Five (which focuses on Due Impartiality) and Section Six (covering Elections and Referendums) of Ofcom's Code contain guidelines that are designed stop companies like Sky from influencing the public vote. Satirical content is allowed on UK TV networks during these times, but Oliver's delivery is very much political opinion based on facts, rather than straight humour.

By the way, the fact vs. satire distinction strikes me as particularly bizarre and arbitrary.

When will folks realize that such speech limitations are crafted by politicians to cravenly protect themselves from criticism?  Take that Citizens United decision.  Hillary Clinton has perhaps been most vociferous in her opposition to it, saying that if President she will appoint Supreme Court justices who will overturn it.  But note the specific Citizens United case was about whether a documentary critical of .... Hillary Clinton could be aired.  So Clinton is campaigning that when she takes power, she will change the Constitution so that she personally cannot be criticized.  And the sheeple on the Left nod and cheer as if shielding politicians from accountability is somehow "progressive."

 

Denying the Climate Catastrophe: 5a. Arguments For Attributing Past Warming to Man

This is part A of Chapter 5 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made (this article); B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

Having established that the Earth has warmed over the past century or so (though with some dispute over how much), we turn to the more interesting -- and certainly more difficult -- question of finding causes for past warming.  Specifically, for the global warming debate, we would like to know how much of the warming was due to natural variations and how much was man-made.   Obviously this is hard to do, because no one has two thermometers that show the temperature with and without man's influence.

I like to begin each chapter with the IPCC's official position, but this is a bit hard in this case because they use a lot of soft words rather than exact numbers.  They don't say 0.5 of the 0.8C is due to man, or anything so specific.   They use phrases like "much of the warming" to describe man's effect.  However, it is safe to say that most advocates of catastrophic man-made global warming theory will claim that most or all of the last century's warming is due to man, and that is how we have put it in our framework below:


By the way, the "and more" is not a typo -- there are a number of folks who will argue that the world would have actually cooled without manmade CO2 and thus manmade CO2 has contributed more than the total measured warming.  This actually turns out to be an important argument, since the totality of past warming is not enough to be consistent with high sensitivity, high feedback warming forecasts.  But we will return to this in part C of this chapter.

Past, Mostly Abandoned Arguments for Attribution to Man

There have been and still are many different approaches to the attributions problem.  In a moment, we will discuss the current preferred approach.  However, it is worth reviewing two other approaches that have mostly been abandoned but which had a lot of currency in the media for some time, in part because both were in Al Gore's film An Inconvenient Truth.

Before we get into them, I want to take a step back and briefly discuss what is called paleo-climatology, which is essentially the study of past climate before the time when we had measurement instruments and systematic record-keeping for weather.   Because we don't have direct measurements, say, of the temperature in the year 1352, scientists must look for some alternate measure, called a "proxy,"  that might be correlated with a certain climate variable and thus useful in estimating past climate metrics.   For example, one might look at the width of tree rings, and hypothesize that varying widths in different years might correlate to temperature or precipitation in those years.  Most proxies take advantage of such annual layering, as we have in tree rings.

One such methodology uses ice cores.  Ice in certain places like Antarctica and Greenland is laid down in annual layers.  By taking a core sample, characteristics of the ice can be measured at different layers and matched to approximate years.  CO2 concentrations can actually be measured in air bubbles in the ice, and atmospheric temperatures at the time the ice was laid down can be estimated from certain oxygen isotope ratios in the ice.  The result is that one can plot a chart going back hundreds of thousands of years that estimates atmospheric CO2 and temperature.  Al Gore showed this chart in his movie, in a really cool presentation where the chart wrapped around three screens:


As Gore points out, this looks to be a smoking gun for attribution of temperature changes to CO2.  From this chart, temperature and CO2 concentrations appear to be moving in lockstep.  CO2 doesn't seem to be merely one driver of temperatures; it seems to be THE driver, which is why Gore often called it the global thermostat.

But there turned out to be a problem, which is why this analysis no longer is treated as a smoking gun, at least for the attribution issue.  Over time, scientists got better at taking finer and finer cuts of the ice cores, and what they found is that when they looked on a tighter scale, the temperature was rising (in the black spikes of the chart) on average 800 years before the CO2 levels (in red) rose.

This obviously throws a monkey wrench in the causality argument.  Rising CO2 can hardly be the cause of rising temperatures if the CO2 levels are rising after temperatures.

It is now mostly thought that what this chart represents is the liberation of dissolved CO2 from oceans as temperatures rise.  Oceans have a lot of dissolved CO2, and as the oceans get hotter, they will give up some of this CO2 to the atmosphere.
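For what it's worth, the basic way a lead/lag like this shows up in data is a lagged correlation.  Here is a toy sketch with purely synthetic series, where one step stands in for roughly a century, so a lag of 8 steps plays the role of the ~800 years; real ice-core analyses are far more involved.

```python
import numpy as np

# Toy illustration of detecting a lead/lag: build a synthetic "temperature"
# series, make "CO2" a delayed copy of it plus noise, and find the lag that
# maximizes the correlation of step-to-step changes.

rng = np.random.default_rng(0)
n, true_lag = 5000, 8
temp = np.cumsum(rng.normal(size=n))            # random-walk "temperature"

co2 = np.empty(n)                               # "CO2" follows temperature by true_lag steps
co2[true_lag:] = temp[:-true_lag]
co2[:true_lag] = temp[0]
co2 += rng.normal(scale=0.5, size=n)

dtemp, dco2 = np.diff(temp), np.diff(co2)       # work with step-to-step changes

def corr_at_lag(x, y, lag):
    """Correlation of x(t) with y(t + lag)."""
    return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]

best = max(range(20), key=lambda L: corr_at_lag(dtemp, dco2, L))
print("best-fit lag (steps):", best)            # recovers ~8, i.e. CO2 lags temperature
```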

The second outdated attribution analysis we will discuss is perhaps the most famous:  The Hockey Stick.  Based on a research paper by Michael Mann when he was still a grad student, it was made famous in Al Gore's movie as well as numerous other press articles.  It became the poster child, for a few years, of the global warming movement.

So what is it?  Like the ice core chart, it is a proxy analysis attempting to reconstruct temperature history, in this case over the last 1000 years or so.  Mann originally used tree rings, though in later versions he has added other proxies, such as from organic matter laid down in sediment layers.

Before the Mann hockey stick, scientists (and the IPCC) believed the temperature history of the last 1000 years looked something like this:


Generally accepted history had a warm period from about 1100-1300 called the Medieval Warm Period which was warmer than it is today, with a cold period in the 17th and 18th centuries called the "Little Ice Age".  Temperature increases since the little ice age could in part be thought of as a recovery from this colder period.  Strong anecdotal evidence existed from European sources supporting the existence of both the Medieval Warm Period and the Little Ice Age.  For example, I have taken several history courses on the high Middle Ages and every single professor has described the warm period from 1100-1300 as creating a demographic boom which defined the era (yes, warmth was a good thing back then).  In fact, many will point to the famines in the early 14th century that resulted from the end of this warm period as having weakened the population and set the stage for the Black Death.

However, this sort of natural variation before the age where man burned substantial amounts of fossil fuels created something of a problem for catastrophic man-made global warming theory.  How does one convince the population of catastrophe if current warming is within the limits of natural variation?  Doesn't this push the default attribution of warming towards natural factors and away from man?

The answer came from Michael Mann (now Dr. Mann, though the original work was produced before he finished grad school).  It has been dubbed the hockey stick for its shape:

 


The reconstructed temperatures are shown in blue, and gone are the Medieval Warm Period and the Little Ice Age, which Mann argued were local to Europe and not global phenomena.  The story that emerged from this chart is that before industrialization, global temperatures were virtually flat, oscillating within a very narrow band of a few tenths of a degree.  However, since 1900, something entirely new seems to be happening, breaking the historical pattern.  From this chart, it looks like modern man has perhaps changed the climate.  This shape, with the long flat historical trend and the sharp uptick at the end, is why it gets the name "hockey stick."

Oceans of ink and electrons have been spilled over the last 10+ years around the hockey stick, including a myriad of published books.  In general, except for a few hard core paleoclimatologists and perhaps Dr. Mann himself, most folks have moved on from the hockey stick as a useful argument in the attribution debate.  After all, even if the chart is correct, it provides only indirect evidence of the effect of man-made CO2.

Here are a few of the critiques:

  • Note that the real visual impact of the hockey stick comes from the orange data on the far right -- the blue data alone doesn't form much of a hockey stick.  But the orange data is from an entirely different source, in fact an entirely different measurement technology -- the blue data is from tree rings, and the orange is from thermometers.  Dr. Mann bristles at the accusation that he "grafted" one data set onto the other, but by drawing the chart this way, that is exactly what he did, at least visually.  Why does this matter?  Well, we have to be very careful with inflections in data that occur exactly at the point where we change measurement technologies -- we are left with the suspicion that the change in slope is due to differences in the measurement technology, rather than in the underlying phenomenon being measured.
  • In fact, well after this chart was published, we discovered that Mann and others like Keith Briffa actually truncated the tree ring temperature reconstructions (the blue line) early.  Note that the blue data ends around 1950.  Why?  Well, it turns out that many tree ring reconstructions showed temperatures declining after 1950.  Does this mean that thermometers were wrong?  No, but it does provide good evidence that the trees are not accurately following current temperature increases, and so probably did not accurately portray temperatures in the past.
  • If one looks at the graphs of all of Mann's individual proxy series that are averaged into this chart, astonishingly few actually look like hockey sticks.  So how do they average into one?  McIntyre and McKitrick in 2005 showed that Mann used some highly unusual and unprecedented-to-all-but-himself statistical methods that could create hockey sticks out of thin air.  The duo fed random data into Mann's algorithm and got hockey sticks (a toy sketch of this effect appears just after this list).
  • At the end of the day, most of the hockey stick (again due to Mann's averaging methods) was due to samples from just a handful of bristle-cone pine trees in one spot in California, trees whose growth is likely driven by a number of non-temperature factors like precipitation levels and atmospheric CO2 fertilization.   Without these few trees, most of the hockey stick disappears.  In later years he added in non-tree-ring series, but the results still often relied on just a few series, including the Tiljander sediments where Mann essentially flipped the data upside down to get the results he wanted.  Taking out the bristlecone pines and the abused Tiljander series made the hockey stick go away again.
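Here is that toy sketch of the random-data point, in the spirit of the McIntyre and McKitrick critique but not a reproduction of their code: trendless red noise is fed through a PCA that centers each series on only its last 100 "years" (the short-centering at issue), and the first principal component reliably ends in a "blade" that is absent when the same data are centered conventionally.

```python
import numpy as np

# Toy sketch, in the spirit of (not a reproduction of) the McIntyre & McKitrick
# critique: feed trendless red noise into a PCA that centers each series only
# on its last 100 "years" (short-centering) and compare the end-of-series
# "blade" of the first principal component with the conventionally centered
# version of the same data.

n_years, n_proxies, calib = 600, 70, 100

def pc1(data):
    """Scores of the first principal component of a (years x proxies) matrix."""
    u, s, _ = np.linalg.svd(data, full_matrices=False)
    return u[:, 0] * s[0]

def blade(series):
    """How far the calibration-era mean departs from the mean of the earlier years."""
    return abs(series[-calib:].mean() - series[:-calib].mean())

short_blades, full_blades = [], []
for seed in range(20):
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n_years, n_proxies))
    proxies = np.zeros((n_years, n_proxies))
    for t in range(1, n_years):                       # AR(1) red noise, no signal at all
        proxies[t] = 0.9 * proxies[t - 1] + noise[t]

    short_blades.append(blade(pc1(proxies - proxies[-calib:].mean(axis=0))))
    full_blades.append(blade(pc1(proxies - proxies.mean(axis=0))))

print("mean PC1 blade, short-centered:", round(float(np.mean(short_blades)), 2))
print("mean PC1 blade, full-centered: ", round(float(np.mean(full_blades)), 2))
# Short-centering consistently produces a pronounced end-of-series blade from
# pure noise; conventional centering of the same data does not.
```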

There have been plenty of other efforts at proxy series that continue to show the Medieval Warm Period and Little Ice Age as we know them from the historical record.

 


As an aside, Mann's hockey stick was always problematic for supporters of catastrophic man-made global warming theory for another reason.  The hockey stick implies that the world's temperatures are, in absence of man, almost dead-flat stable.   But this is hardly consistent with the basic hypothesis, discussed earlier, that the climate is dominated by strong positive feedbacks that take small temperature variations and multiply them many times.   If Mann's hockey stick is correct, it could also be taken as evidence against high climate sensitivities that are demanded by the catastrophe theory.

 

The Current Lead Argument for Attribution of Past Warming to Man

So we are still left wondering, how do climate scientists attribute past warming to man?  Well, to begin, in doing so they tend to focus on the period after 1940, when large-scale fossil fuel combustion really began in earnest.   Temperatures have risen since 1940, but in fact nearly all of this rise occurred in the 20 year period from 1978 to 1998:

 


To be fair, and to better understand the thinking at the time, let's put ourselves in the shoes of scientists around the turn of the century and throw out what we know happened after that date.  Scientists then would have been looking at this picture:


Sitting in the year 2000, the recent warming rate might have looked dire -- nearly 2C per century...


Or possibly worse if we were on an accelerating course...


Scientists began to develop a hypothesis that this temperature rise was occurring too rapidly to be natural, that it had to be at least partially man-made.  I have always thought this a slightly odd conclusion, since the slope from this 20-year period looks almost identical to the slope centered around the 1930's, which was very unlikely to have much human influence.

 


But nevertheless, the hypothesis that the 1978-1998 temperature rise was too fast to be natural gained great currency.  But how does one prove it?

What scientists did was to build computer models to simulate the climate.  They then ran the computer models twice.  The first time they ran them with only natural factors, or at least only the natural factors they knew about or were able to model (they left a lot out, but we will get to that in time).  These models were not able to produce the 1978-1998 warming rates.  Then, they re-ran the models with manmade CO2, and particularly with a high climate sensitivity to CO2 based on the high feedback assumptions we discussed in an earlier chapter.   With these models, they were able to recreate the 1978-1998 temperature rise.   As Dr. Richard Lindzen of MIT described the process:

What was done, was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multidecadal Oscillation), claim that such models nonetheless accurately depicted natural internal climate variability, and use the fact that these models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man.

Another way to put this argument is "we can't think of anything natural that could be causing this warming, so by default it must be man-made."  With various increases in sophistication, this remains the lead argument in favor of attribution of past warming to man.
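The structure of that argument is simple enough to caricature in a few lines.  This is only a sketch of the logic, not a real climate model; the internal-variability magnitude and the observed trend figure below are assumptions for illustration.

```python
import numpy as np

# Caricature of the attribution-by-elimination logic described above (not a
# real climate model). Generate an ensemble of "natural variability only" runs,
# ask whether the observed 1978-1998 warming rate falls inside their spread,
# and if not, attribute the difference to forcing.

rng = np.random.default_rng(0)
years = np.arange(1978, 1999)
observed_trend = 0.17                 # deg C per decade -- assumed figure for illustration

natural_trends = []
for _ in range(200):
    # each "run" is just a random walk standing in for internal variability
    run = np.cumsum(rng.normal(scale=0.02, size=len(years)))
    natural_trends.append(np.polyfit(years, run, 1)[0] * 10)   # trend in deg C per decade

lo, hi = np.percentile(natural_trends, [2.5, 97.5])
print(f"natural-only trend spread: {lo:+.3f} to {hi:+.3f} C/decade")
print(f"observed trend:            {observed_trend:+.3f} C/decade")
print("outside the natural spread -> attributed to forcing" if not lo <= observed_trend <= hi
      else "inside the natural spread")
```

Of course, the whole exercise hinges on whether the "natural-only" runs capture natural variability correctly, which is exactly the point Lindzen is disputing in the quote above.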

In part B of this chapter, we will discuss what natural factors were left out of these models, and I will take my own shot at a simple attribution analysis.

The next section, Chapter 5 Part B, on natural attribution is here

Denying the Climate Catastrophe: 4a. Actual Temperature Data

This is the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data (this article);   B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In our last chapter, we ended a discussion on theoretical future warming rates by saying that no amount of computer modelling was going to help us choose between various temperature sensitivities and thus warming rates.  Only observational data was going to help us determine how the Earth actually responds to increasing CO2 in the atmosphere.  So in this chapter we turn to the next part of our framework, which is our observations of Earth's temperatures, which is among the data we might use to support or falsify the theory of catastrophic man-made global warming.


The IPCC position is that the world (since the late 19th century) has warmed about 0.8C.  This is a point on which many skeptics will disagree, though perhaps not as substantially as one might expect from the media.   Most skeptics, myself included, would agree that the world has certainly warmed over the last 100-150 years.  The disagreement tends to be in the exact amount of warming, with many skeptics contending that the amount of warming has been overstated due to problems with temperature measurement and aggregation methodology.

For now, we will leave those issues aside until part B of this section, where we will discuss some of these issues.  One reason to do so is to focus, at least at first, on the basic point of agreement that the Earth has indeed warmed somewhat.  But another reason to put these differences over magnitude aside is that we will find, a few chapters hence, that they essentially don't matter.  Even the IPCC's 0.8C estimate of past warming does not support its own estimates of temperature sensitivity to CO2.

Surface Temperature Record

The most obvious way to measure temperatures on the Earth is with thermometers near the ground.   We have been measuring the temperature at a few select locations for hundreds of years, but it really is only in the last century that we have fairly good coverage of the land surface.  And even then our coverage of places like the Antarctic, central Africa, parts of South America, and all of the oceans (which cover about 70% of the Earth) is even today still spotty.  So coming up with some sort of average temperature for the Earth is not a straight averaging exercise -- data must be infilled and estimated, making the process complicated and subject to a variety of errors.

But the problem is more difficult than just data gaps.  How does one actually average a temperature from Denver with a temperature from San Diego?  While a few folks attempt such a straight average, scientists have generally concluded that it is better to average what are known as temperature anomalies than to average the temperatures themselves.  What is an anomaly?  Essentially, for a given thermometer, researchers will establish an average for that thermometer for a particular day of the year.  The exact time period or even the accuracy of this average is not that important, as long as the same time period is used consistently.  Then, the anomaly for any given measurement is the deviation of the measured temperature from its average.   So if the average historical temperature for this day of the year is 25C and the actual measured temperature for the day is 26C, the anomaly for today at this temperature station is +1.0C.
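To make the anomaly calculation concrete, here is a minimal sketch in Python (the station readings, dates, and baseline value are all made up for illustration):

```python
import pandas as pd

# Hypothetical July 1 readings for one station, in degrees C
readings = pd.DataFrame({
    "date": pd.to_datetime(["2014-07-01", "2015-07-01", "2016-07-01"]),
    "temp_c": [25.4, 24.7, 26.0],
})

# The station's long-term average for July 1 (real baselines typically use
# a fixed multi-decade window, e.g. 1961-1990; this number is invented)
baseline_july_1_c = 25.0

# The anomaly is simply the measured temperature minus that baseline
readings["anomaly_c"] = readings["temp_c"] - baseline_july_1_c
print(readings)
# The 26.0C reading against the 25.0C baseline gives the +1.0C anomaly
# described in the text.
```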

Scientists then develop programs that spatially average these temperature anomalies for the whole Earth, while also adjusting for a myriad of factors, from time-of-day changes in measurement to technology changes in the temperature stations to actual changes in the physical location of the measurement.  This is a complicated enough task, with enough explicit choices to be made about techniques and adjustments, that there are many different temperature metrics floating around, many of which get different results from essentially the same data.  The UK Hadley Centre's HadCRUT4 global temperature metric is generally considered the gold standard, and is the one used preferentially by the IPCC.  Its metric is shown below, with the monthly temperature anomaly in dark blue and the 5-year moving average (centered on its midpoint):
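The spatial-averaging step is conceptually simple, even though the real products layer many adjustments on top of it.  Here is a minimal sketch, assuming anomalies have already been computed on a regular latitude/longitude grid (the grid and values below are random placeholders, not real data):

```python
import numpy as np

# Hypothetical 5x5-degree grid of monthly anomalies in degrees C
lats = np.arange(-87.5, 90, 5.0)          # 36 latitude bands
lons = np.arange(-177.5, 180, 5.0)        # 72 longitude bands
anomalies = np.random.normal(0.5, 0.3, size=(lats.size, lons.size))
anomalies[::7, ::11] = np.nan             # pretend some cells have no data

# Grid cells shrink toward the poles, so weight each latitude band by cos(lat)
weights = np.cos(np.radians(lats))[:, np.newaxis] * np.ones_like(anomalies)

# Average only the cells with data (real products infill the gaps instead)
valid = ~np.isnan(anomalies)
global_anomaly = np.average(anomalies[valid], weights=weights[valid])
print(f"Area-weighted global anomaly: {global_anomaly:+.2f} C")
```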

click to enlarge

Again, the zero point of the chart is arbitrary and merely depends on the period of time chosen as the base or average.  Looking at the moving average, one can see the temperature anomaly bounces around -0.3C in the late 19th century and has been around +0.5C over the last several years, which is how we get to about 0.8C warming.

Satellite Temperature Record

There are other ways to take temperature measurements, however.  Another approach is to use satellites to measure surface temperatures (or at least near-surface temperatures).   Satellites measure temperature by measuring the thermal microwave emissions of oxygen molecules in the lower troposphere (perhaps 0-3 miles above the Earth).  Satellites have the advantage of being able to look at the entire Earth without gaps, and are not subject to the siting biases of surface temperature stations (which will be discussed in part B of this chapter).

The satellite record does, however, rely on a shifting array of satellites, all of which have changing orbits for which adjustments must be made.  Of necessity, the satellite record cannot reach as far back into the past.  And the satellites are not actually measuring the temperature at the Earth's surface, but rather a temperature a mile or two up.  Whether that matters is subject to debate, but the clincher for me is that the IPCC and most climate models have always shown that anthropogenic warming should show up first and most strongly in exactly this spot -- the lower troposphere -- which makes observation of this zone a particularly good way to look for a global warming signal.

Roy Spencer and John Christy maintain what is probably the leading satellite temperature metric, called "UAH" as shorthand for the University of Alabama in Huntsville, where it is produced.  The UAH record looks like this:

click to enlarge

Note that the absolute magnitude of the anomaly isn't comparable between the surface and satellite record, as they use different base periods, but changes and growth rates in the anomalies should be comparable between the two indices.

The first thing to note is that, though they are different, both the satellite and surface temperature records show warming since 1980.  For all that some skeptics may want to criticize the authors of the surface temperature databases, and there are indeed some grounds for criticism, these issues should not distract us from the basic fact that in every temperature record we have (including other technologies like radiosonde balloons), we see recent warming.

In terms of magnitude, the two indices do not show the same amount of warming -- since 1980 the satellite temperature record shows about 30% less warming than the surface temperature record does for the same period.   So which is right?  We will discuss this in more depth in part B, but the question is not made any easier by the fact that the surface records are compiled by prominent alarmist scientists while the satellite records are maintained by prominent skeptic scientists, which causes each side to accuse the other of having its thumb on the scale, so to speak.  I personally like the satellite record because of its larger coverage area and the fact that its manual adjustments (which are required of both technologies) are for a handful of instruments rather than thousands, and are thus easier to manage and get right.  But I am also increasingly of the opinion that the differences are minor, and that neither record is consistent with catastrophic forecasts.
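When I say one record shows about 30% less warming than the other, I am simply comparing least-squares trends over the same window.  A sketch of that comparison, using made-up monthly anomaly series that stand in for the real satellite and surface data:

```python
import numpy as np

def trend_per_decade(anomalies_c):
    """Least-squares slope of a monthly anomaly series, in degrees C per decade."""
    t_years = np.arange(len(anomalies_c)) / 12.0
    slope_per_year, _ = np.polyfit(t_years, anomalies_c, 1)
    return slope_per_year * 10.0

# Fake 1980-2015 series (432 months); slopes chosen only for illustration
rng = np.random.default_rng(0)
months = np.arange(432)
surface   = 0.0017 * months + rng.normal(0, 0.10, 432)   # ~0.20 C/decade
satellite = 0.0011 * months + rng.normal(0, 0.15, 432)   # ~0.13 C/decade

print(f"surface trend:   {trend_per_decade(surface):+.2f} C/decade")
print(f"satellite trend: {trend_per_decade(satellite):+.2f} C/decade")
```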

So instead of getting ourselves involved in the dueling temperature data set food fight (we will dip our toe into this in part B), let's instead apply both these data sets to several propositions we see frequently in the media.  We will quickly see the answers we reach do not depend on the data set chosen.

Test #1:  Is Global Warming Accelerating

One meme you will hear all the time is that "global warming is accelerating."  As of this writing, the phrase returns about 550,000 results on Google.  For example:

click to enlarge

So.  Is that true?  They can't print it if it's not true, right (lol)?  Let's look first at the satellite record through the end of 2015, when this presentation was put together (there is an El Nino driven spike in the two months after this chart was made, which does not affect the conclusions that follow in the least, but I will update the chart to include it as soon as I can).

click to enlarge

If you want a name for this chart, I could call it the "bowl of cherries" because it has become a cherry-picker's delight.   Everyone in the debate can pick a starting point and an end point in this jagged data to produce almost any trend they like.  So how do we find an objective basis for choosing end points for this analysis?  Well, my background is more in economics.  Economists have the same problem in looking at trends for things like employment or productivity, because there is a business cycle that adds volatility to these numbers above and beyond any long-term trend.  One way they manage this is to measure variables from peak to peak of the economic cycle.

I have done something similar.  The equivalent cyclical peaks in the temperature world are probably the very strong El Nino events.  There was one in 1998 and there is one occurring right now in late 2015/early 2016.  So I defined my period as the 18 years from peak to peak.  Measured this way, the satellite record shows temperatures virtually dead flat for those 18 years.  This is "the pause" that you may have heard of in climate debates.   Such an extended pause is not predicted by global warming theory, particularly when the theory (as in the IPCC main case) assumes high temperature sensitivity to CO2 and low natural variation in temperatures.

So if global warming were indeed accelerating, we would expect the warming rate over the last 18 years to be higher than the rate over the previous 18 years.  But just the opposite is true:

click to enlarge

While "the pause" does not in and of itself disprove the theory of catastrophic manmade global warming, it does easily falsify the myriad statements you see that global warming is accelerating.  At least for the last 20 years, it has been decelerating.

By the way, this is not somehow an artifact of just the satellite record.  This is what the surface record looks like for the same periods:

click to enlarge

Though it shows (as we discussed earlier) higher overall warming rates, the surface temperature record also shows a deceleration rather than acceleration over the last 20 years.
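Mechanically, the comparison behind these last two charts is nothing more than two slope calculations over consecutive windows.  A self-contained sketch (the series here is a made-up stand-in, not the actual UAH or surface data):

```python
import numpy as np

def trend_per_decade(anomalies_c):
    """Least-squares slope of a monthly anomaly series, in degrees C per decade."""
    t_years = np.arange(len(anomalies_c)) / 12.0
    slope_per_year, _ = np.polyfit(t_years, anomalies_c, 1)
    return slope_per_year * 10.0

# Hypothetical 36-year monthly anomaly series covering roughly 1980-2015
rng = np.random.default_rng(1)
series = 0.0011 * np.arange(432) + rng.normal(0, 0.15, 432)

# Split at the 1998 El Nino peak and compare the two 18-year windows
first_18yr, second_18yr = series[:216], series[216:]
print(f"first 18 years:  {trend_per_decade(first_18yr):+.2f} C/decade")
print(f"second 18 years: {trend_per_decade(second_18yr):+.2f} C/decade")
# If warming were accelerating, the second slope would be the larger one.
```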

 

Test #2:  Are Temperatures Rising Faster than Expected

OK, let's consider another common meme, that the "earth is warming faster than predicted."

click to enlarge

Again, there are over 500,000 Google matches for this meme.  So how do we test it?  Well, certainly not against the last IPCC forecasts -- they are only a few years old.  The first real high-sensitivity or catastrophic forecast we have is from James Hansen, often called the father of global warming.

click to enlarge

In June of 1988, Hansen made a seminal presentation to Congress on global warming, including this very chart (sorry for the sucky 1980's graphics).  In his testimony, he presented his models for the Earth's temperature, which showed a good fit with history**.  Using his model, he then created three forecasts:  Scenario A, with high rates of CO2 emissions;  Scenario B, with more modest emissions; and Scenario C, with drastic worldwide emissions cuts (plus volcanoes, which tend to belch dust and chemicals that have a cooling effect).  Surprisingly, we can't even get agreement today about which forecast for CO2 production was closer to the mark (throwing in the volcanoes makes things hard to parse), but it is pretty clear that over the nearly 30 years since this forecast, the Earth's CO2 output has been somewhere between A and B.

click to enlarge

As it turns out, it doesn't matter whether we actually followed the CO2 emissions from A or B.  The warming forecasts for scenario A and B turn out to be remarkably similar.  In the past, I used to just overlay temperature actuals onto Hansen's chart, but it is a little hard to get the zero point right and it led to too many food fights.  So let's pull the scenario A and B forecasts off the chart and compare them a different way.

click to enlarge

The left side of the chart shows Hansen's scenarios A and B, scanned directly from his chart.  Scenario A implies a warming rate from 1986 to 2016 of 3.1C per century.  Scenario B is almost as high, at 2.8C per century.  But as you can see on the right, the actual warming rates we have seen over the same period are well below these forecasts.  The surface temperature record shows only about half, and the satellite record only about a third, of the warming Hansen predicted.   There is no justification for saying that recent warming rates have been higher than expected or forecast -- in fact, the exact opposite has been true.
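The per-century comparison is just arithmetic on linear rates.  A quick sketch, using the scenario rates quoted above and placeholder values for the observed trends (the observed numbers below are illustrative, not the precise records):

```python
# Hansen's scenario warming rates, in degrees C per century
scenario_a = 3.1
scenario_b = 2.8

# Placeholder observed trends over 1986-2016, in degrees C per decade
observed = {"surface": 0.16, "satellite": 0.11}

for name, per_decade in observed.items():
    per_century = per_decade * 10
    print(f"{name}: {per_century:.1f} C/century = "
          f"{per_century / scenario_a:.0%} of Scenario A, "
          f"{per_century / scenario_b:.0%} of Scenario B")
```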

We see the same thing when looking at past IPCC forecasts.  At each of its every-five-year assessments, the IPCC has included a forecast range for future temperatures.  In this case, though, we don't have to create a comparison with actuals because the most recent (5th) IPCC Assessment did it for us:

click to enlarge

The colored bands are their past forecasts.  The grey areas are the error bands on the forecasts.  The black dots are global temperatures (which actually are shown with their own error bars, which is good practice but seldom done except perhaps when someone is stretching to get into the forecast range).  As you can see, temperatures have been so far below forecasts that they are dropping out of the low end of even the most generous forecast bands.  If temperatures were rising faster than expected, the black dots would be above the orange and yellow bands.  We therefore have to conclude that, at least for the last 20-30 years, temperatures have not been rising faster than expected; they have been rising slower than expected.

Day vs. Night

There is one other phenomenon we can see in the temperature data that we will come back to in later chapters:  that much of the warming over the last century has been at night, rather than in the daytime.   There are two possible explanations for this.  The first is that most anthropogenic warming models predict more night time warming than they do day time warming.  The other possibility is that a portion of the warming in the 20th century temperature record is actually spurious bias from the urban heat island effect due to siting of temperature stations near cities, since urban heat island warming shows up mainly at night.  We will discuss the latter effect in part B of this chapter.

Whatever the cause, much of the warming we have seen has occurred at night, rather than during the day.  Here is a great example from the Amherst, MA temperature station (Amherst was the first location where I gave this presentation, if that seems an odd choice).

Click to enlarge

As you can see, the warming rate since 1945 is five times higher at night than during the day.  This directly affects average temperatures, since the daily average temperature for a location in the historical record is the simple average of the daily high and the daily low.  Yes, I know this is not exactly accurate, but given the technology of the past, it was the best that could be done.
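Because the historical daily "average" is just the midpoint of the high and the low, warming that happens only at night still pulls the average up.  A small sketch of that arithmetic (the temperatures are illustrative, not Amherst's actual data):

```python
def daily_average_c(t_max_c, t_min_c):
    # Historical practice: the daily "average" is the midpoint of high and low
    return (t_max_c + t_min_c) / 2.0

# Illustrative day: the daytime high is unchanged, only the overnight low warms
before = daily_average_c(t_max_c=30.0, t_min_c=15.0)   # 22.5 C
after  = daily_average_c(t_max_c=30.0, t_min_c=17.0)   # 23.5 C
print(f"daily average rises {after - before:+.1f} C with no change in the daytime high")
```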

The news media likes to cite examples of heat waves and high temperature records as a "proof" of global warming.   We will discuss this later, but this is obviously a logical fallacy -- one can't prove a trend in noisy data simply by citing isolated data points in one tail of the distribution.  But it is also fallacious for another reason -- we are not actually seeing any upwards trends in high temperature records, at least for daytime highs:

Click to enlarge

To get this chart, we obviously have to eliminate newer temperature stations from the data set -- any temperature station that is only 20 years old will have all of its all-time records in the last 20 years (you would be surprised at how many otherwise reputable scientists miss simple things like this).  Looking at just the US temperature stations for which we have a long record, we see with the black line that there is really no upward trend in the number of high temperature records (Tmax) being set.   The 1930s were brutally hot, and if not for some manual adjustments we will discuss in part B of this section, they would likely still show as the hottest recent era for the US.   It turns out, with the grey line (Tmin), that while there is likewise no upward trend, we are actually seeing more high temperature records being set with daily lows (the highest low, as it were) than with daily highs.  The media is, essentially, looking in the wrong place, but I sympathize, because a) broiling hot daytime highs are sexier and b) it is brutally hard to talk about highest low temperatures without being confusing as hell.
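The station-age trap is easy to avoid once you look for it: only count records from stations whose history spans the whole period of interest.  A minimal sketch with invented station records (the field names and years are hypothetical):

```python
# Each entry: when the station's record begins and when its all-time high was set
stations = [
    {"id": "A", "first_year": 1895, "record_high_year": 1936},
    {"id": "B", "first_year": 1901, "record_high_year": 2012},
    {"id": "C", "first_year": 1998, "record_high_year": 2012},  # too short to use
]

# A station only 20 years old will necessarily have set its record recently,
# so restrict the analysis to stations with a long, continuous history.
long_record = [s for s in stations if s["first_year"] <= 1920]

recent = sum(1 for s in long_record if s["record_high_year"] >= 2000)
print(f"{recent} of {len(long_record)} long-record stations set their all-time high since 2000")
```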

In our next chapter, or really part B of this chapter, we will discuss some of the issues that may be causing the surface temperature record to overstate warming, or at least to be inaccurate.

Chapter 4, Part B on problems with the surface temperature record continues here.

If you want to skip Part B, and get right on with the main line of the argument, you can go straight to Chapter 5, part A, which starts in on the question of how much of past warming can be attributed to man.

 

** Footnote:  The history of Wall Street is full of bankrupt people whose models exactly matched history.  I have done financial and economic modeling for decades, and it is surprisingly easy to force multi-variable models to match history.  The real test is how well the model works going forward.  Both Hansen's 1988 models and the IPCC's many models do an awesome job matching history, but quickly go off the rails in future years.  I am reminded of a simple but famous example of the perfect past correlation between certain NFL outcomes and Presidential election outcomes.   This NFL model of presidential elections perfectly matches history, but one would be utterly mad to bet future elections based on it.
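A toy illustration of the footnote's point: a sufficiently flexible model can match history almost perfectly and still be useless going forward.  This sketch fits a deliberately over-fitted polynomial to an invented temperature history and then asks it to extrapolate:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2010)
history = 0.02 * (years - 1980) + rng.normal(0, 0.1, years.size)  # noisy warming trend

# Rescale years so the high-degree fit is numerically well behaved
x = (years - 1995) / 15.0
coeffs = np.polyfit(x, history, deg=9)      # deliberately over-fitted

# The fit hugs the historical wiggles...
in_sample_error = np.max(np.abs(np.polyval(coeffs, x) - history))
print(f"max in-sample error: {in_sample_error:.2f} C")

# ...but a forecast just ten years past the data will typically land far
# outside anything seen in the history
x_2020 = (2020 - 1995) / 15.0
print(f"'forecast' for 2020: {np.polyval(coeffs, x_2020):+.1f} C anomaly")
```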

Net Neutrality: I Told You So

From the WSJ (emphasis added):

Netflix now admits that for the past five years, all through the debate on net neutrality, it was deliberately slowing its videos watched by users on AT&T and Verizon’s wireless networks. The company did so for good reason—to protect users from overage penalties. But it never told users at a time when Netflix was claiming carriers generally were deliberately slowing its service to protect their own TV businesses—a big lie, it turned out.

All this has brought considerable and well-deserved obloquy on the head of Netflix CEO Reed Hastings for his role in inviting extreme Obama utility regulation of the Internet. Others deserve blame too. Google lobbied the administration privately but was too chicken to speak up publicly against utility regulation.

But Netflix appears to have acted out of especially puerile and venal motives. Netflix at the time was trying to use political pressure to cut favorable deals to connect directly to last-mile operators like Comcast and Verizon—a penny-ante consideration worth a few million dollars at best, for which Netflix helped create a major public policy wrong-turn.

This is what I wrote about net neutrality a couple of years ago:

Net Neutrality is one of those Orwellian words that mean exactly the opposite of what they sound like.  There is a battle that goes on in the marketplace in virtually every communication medium between content creators and content deliverers.  We can certainly see this in cable TV, as media companies and the cable companies that deliver their product occasionally have battles that break out in public.   But one could argue similar things go on even in, say, shipping, where magazine publishers push for special postal rates and Amazon negotiates special bulk UPS rates.

In fact, this fight for rents across a vertical supply chain exists in virtually every industry.  Consumers will pay so much for a finished product.  Any vertical supply chain is constantly battling over how much each step in the chain gets of the final consumer price.

What "net neutrality" actually means is that certain people, including apparently the President, want to tip the balance in this negotiation towards the content creators (no surprise given Hollywood's support for Democrats).  Netflix, for example, takes a huge amount of bandwidth that costs ISP's a lot of money to provide.  But Netflix doesn't want the ISP's to be be able to charge for this extra bandwidth Netflix uses - Netflix wants to get all the benefit of taking up the lion's share of ISP bandwidth investments without having to pay for it.  Net Neutrality is corporate welfare for content creators....

I am still pretty sure the net effect of these regulations, whether they really affect net neutrality or not, will be to disarm ISP's in favor of content providers in the typical supply chain vertical wars that occur in a free market.  At the end of the day, an ISP's last resort in negotiating with a content provider is to shut them out for a time, just as the content provider can do the same in reverse to the ISP's customers.  Banning an ISP from doing so is like banning a union from striking.

 

When You Give Up On Allocating Resources via Markets and Prices, All That is Left is Interest Group Politics

One of the ugly facts about how we manage water is that by eschewing markets and prices to allocate scarce water, all that is left is command and control allocation to match supply and demand.  The uglier fact is that politicians like it that way.  A golf course that pays a higher market rate for water doesn't help a politician one bit.  A golf course that has to beg for water through a political process is a source of campaign donations for life.

In a free society without an intrusive government, it would not matter whether California almond growers were loved or hated.  If people did not like them, then they just wouldn't buy their product.  But in California, the government holds the power of life or death over businesses through a number of levers, not least of which is water.

Almonds have become [the Left's] new bête noire. The nut is blamed for exacerbating the California drought, overtaxing honeybee colonies, starving salmon of river water, and price-gouging global consumers. Almonds may be loved by consumers, but almond growers, it seems, are increasingly despised in the media. In 2014, The Atlantic published a melodramatic essay, “The Dark Side of Almond Use”—with the ominous subtitle, “People are eating almonds in unprecedented amounts. Is that okay?” If no one much cared that California agriculture was in near depression for much of the latter twentieth century—and that almonds were hardly worth growing in the 1970s—they now worry that someone is netting $5,000 to $10,000 per acre on the nut.

It is almost too much to bear for a social or environmental activist that a corporate farm of 5,000 acres could in theory clear $30 million a year—without either exploiting poor workers or poisoning the environment, but in providing cool people with a healthy, hip, natural product. The kind of people who eat almond butter and drink almond milk, after all, are the kind of people who tend to endorse liberal causes.

As for almonds worsening the drought: The truth is that the nut uses about the same amount of water per acre as other irrigated California crops such as pasture, alfalfa, tree fruit, pistachios, cotton, or rice. In fact, almonds require a smaller percentage of yearly irrigation use than their percentage of California farmland calls for. Nonetheless, the growth of almond farming represents to many a greedy use of scarce collective resource.

The Bloggess's Rules of Social Media

Her rules for social media seem about dead on, at least in actual practice.  Here are a few:

2. Be shocked and outraged at least once a day. If you can’t start a tweet or Facebook status with “HOW DARE YOU” then it’s probably not worth saying.

3. If strangers online disagree with you, devote your day to yelling at them and getting everyone you know to yell at them as well. Don’t just unfollow them. Track them down and destroy them. Put your entire life on hold to focus on all-caps fights with them. It’s pretty much the written equivalent of public scream-crying and people fucking LOVE that.

...

7. Intentionally misread satire. Get really pissed about it. Share it online and demand that everyone else share it too.  Then get more pissed when others clarify that it’s clearly sarcasm. Block those people. Block them as loudly and as hard as you can.

 

Corporations Don't Want to Report Their True Earnings. Why is The Financial Press So Eager to Help?

I totally understand why corporations may wish to push the envelope on earnings adjustments to make their stock look like a better buy.  But why is the financial media generally complicit in this?  Take any earnings announcement you read about or hear on TV -- almost every single time, it turns out that the earnings number quoted by the press, at least in the headline or the TV sound bite, is the company's non-GAAP adjusted number, not its actual GAAP number.

I might be OK with this if it were being done for good reasons, i.e., if the financial press thought the adjusted number was somehow more representative.  But I don't get that sense at all.  It feels more like the press is just lazy and accepts whatever number is in the press release without digging further.   Often in a longer story you will find the GAAP number, but buried many grafs in.

Oh, and by the way, the two numbers are diverging:

click to enlarge

A good way to think about this chart is that, if you are not careful, you are paying for the bar on the right but getting the bar on the left.  Note that without adjustments, earnings fell pretty substantially in 2015.  It is not at all clear to me why we have not seen this story.

Never, Ever Trust Media Reporting of Scientific (Or Quasi-Scientific) Studies -- The Github Sexism Study and the Response.

I recommend this article (via Tyler Cowen) on the interesting topic of whether women's open source software contributions on Github are accepted more or less frequently than those of men.   The findings of the study are roughly as follows:

They find that women get more (!) requests accepted than men for all of the top ten programming languages. They check some possible confounders – whether women make smaller changes (easier to get accepted) or whether their changes are more likely to serve an immediate project need (again, easier to get accepted) and in fact find the opposite – women’s changes are larger and less likely to serve project needs. That makes their better performance extra impressive....

Among insiders [essentially past contributors], women do the same as men when gender is hidden, but better than men when gender is revealed. In other words, if you know somebody’s a woman, you’re more likely to approve her request than you would be on the merits alone. We can’t quantify exactly how much this is, because the paper doesn’t provide numbers, just graphs. Eyeballing the graph, it looks like being a woman gives you about a 1% advantage. I don’t see any discussion of this result, even though it’s half the study, and as far as I can tell the more statistically significant half.

Among outsiders, women do the same as/better than men when gender is hidden, and the same as/worse than men when gender is revealed. I can’t be more specific than this because the study doesn’t give numbers and I’m trying to eyeball confidence intervals on graphs. The study itself say that women do worse than men when gender is revealed, so since the researchers presumably have access to their real numbers data, that might mean the confidence intervals don’t overlap. From eyeballing the graph, it looks like the difference is 1% – ie, men get their requests approved 64% of the time, and women 63% of the time. Once again, it’s hard to tell by graph-eyeballing whether these two numbers are within each other’s confidence intervals.

OK, so generally good news for women on all fronts -- they do better than men -- with one small area (63 vs 64 percent) where there might or might not be an issue.
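Whether a 63% vs. 64% gap means anything depends entirely on the sample sizes, which is exactly why eyeballing graphs without the underlying numbers is so unsatisfying.  A quick sketch of the standard two-proportion comparison (the counts below are invented purely for illustration; the study does not report them):

```python
import math

def two_proportion_z(accepted_a, total_a, accepted_b, total_b):
    """Z statistic for the difference between two acceptance rates."""
    p_a, p_b = accepted_a / total_a, accepted_b / total_b
    p_pool = (accepted_a + accepted_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Invented counts: men accepted on 64% of 50,000 requests, women on 63% of 5,000
z = two_proportion_z(32000, 50000, 3150, 5000)
print(f"z = {z:.2f}")   # with these counts |z| < 1.96, so the 1-point gap is within noise
```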

This was an interesting side bit:

Oh, one more thing. A commenter on the paper’s pre-print asked for a breakdown by approver gender, and the authors mentioned that “Our analysis (not in this paper — we’ve cut a lot out to keep it crisp) shows that women are harder on other women than they are on men. Men are harder on other men than they are on women.”

Depending on what this means – since it was cut out of the paper to “keep it crisp”, we can’t be sure – it sounds like the effect is mainly from women rejecting other women’s contributions, and men being pretty accepting of them. Given the way the media predictably spun this paper, it is hard for me to conceive of a level of crispness which justifies not providing this information.

So here is an example press report of this study and data:

Here’s Business Insider: Sexism Is Rampant Among Programmers On GitHub, Research Finds. “A new research report shows just how ridiculously tough it can be to be a woman programmer, especially in the very male-dominated world of open-source software….it also shows that women face a giant hurdle of “gender bias” when others assess their work. This research also helps explain the bigger problem: why so many women who do enter tech don’t stick around in it, and often move on to other industries within 10 years. Why bang your head against the wall for longer than a decade?” [EDIT: the title has since been changed]

This article, and many many like it, bear absolutely no relationship to the actual data in the study.  Since the article is of course all most people ever read, a meme that is just plain wrong is now created forever in social media.  Nice job, media.