I have written a number of times before that having only a few page-limited scientific journals is creating a bias towards positive results that can't be replicated.
Also, Kevin Drum links a related finding that journal retractions are on the rise (presumably from false positives that could not be replicated or were the results of bad process).
In 1890, there were technological and cost reasons why only a select few studies could be culled into page-limited journals. But that is not the case today. Why do we still tie science to this outdated publication mechanism? Online publication would allow publication of both positive and negative results. It would also allow mechanisms for attaching critiques, defenses, and replication results to the original study. Sure, this partially breaks the academic pay and incentive system, but I think most folks are ready to admit that it needs to be broken.
This is a pretty well-known non-secret among just about anyone who does academic research, but Arnold Kling provides some confirmation that there is a tremendous bias towards positive results. In short, most of these results can't be replicated.
A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."...
Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."
This is not really wildly surprising. Consider 20 causal relationships that don't exist, and an experiment to test each one. Likely 1 of the 20 will show a false positive at the 95% certainty level -- that's what 95% certainty means. Those 1-in-20 false positives get published, and the other studies get forgotten.
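To make the arithmetic concrete, here is a minimal Monte Carlo sketch of that publication filter. The sample sizes and study counts are invented for illustration:

```python
# Test many effects that are all truly zero and see what fraction clear the
# p < 0.05 bar anyway. All parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_experiments = 10_000   # hypothetical studies of effects that don't exist
n_per_group = 30         # subjects per arm in each study

false_positives = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(0.0, 1.0, n_per_group)  # same distribution: no real effect
    # Two-sample z-test approximation: |z| > 1.96 corresponds to p < 0.05
    se = np.sqrt(control.var(ddof=1) / n_per_group + treated.var(ddof=1) / n_per_group)
    z = (treated.mean() - control.mean()) / se
    if abs(z) > 1.96:
        false_positives += 1

print(f"{false_positives / n_experiments:.1%} of null studies were 'significant'")
# Prints roughly 5%. If only those studies get published, the published record
# on these nonexistent effects is 100% false positives.
```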
To some extent, this should be fixable now that we are not tied to page-limited journals. Simply requiring as a grant condition that all findings be published online, positive or negative, would be a good start.
I don't often defend Conservatives, but I will say that there is nothing much more useless to the public discourse than bullsh*t sociology studies trying to show that Conservatives are dumber or whatever (and remember, those same studies show libertarians the smartest, so ha ha).
In this general category of schadenfreude masquerading as academic work is the recent "finding" that conservatives are increasingly anti-science or have lost trust in science. But here is the actual interview question:
166. I am going to name some institutions in this country. Some people have complete confidence in the people running these institutions. Suppose these people are at one end of the scale at point number 1. Other people have no confidence at all in the people running these institutions. Suppose these people are at the other end, at point 7. Where would you place yourself on this scale for: k. Scientific community?
A loss of trust in the scientific community is way, way different than a loss of trust in science. Confusing these two is roughly like equating a loss of trust in Con Edison with not believing in electricity. Here is an example from Kevin Drum describing this study's results:
In other words, this decline in trust in science has been led by the most educated, most engaged segment of conservatism. Conservative elites have led the anti-science charge and the rank-and-file has followed.
There are a lot of very good reasons to have lost some trust in our scientific institutions, in part due to non-science that gets labeled as real science today. I don't think that makes me anti-science. This sloppy mis-labeling of conclusions in ways that don't match the data, which Drum is ironically engaging in, is one reason many very scientific-minded people like myself are turned off by much of the public discourse on science. The irony here is that while deriding skepticism of the scientific community, Drum provides a perfect case example of why this skepticism has grown.
I have read a number of stories about how Tesla batteries become bricked if they are completely discharged. What I have not seen is an explanation of the physics or chemistry of why this is true. Can anyone explain it or give me a pointer to an explanation? Certainly if this happened to, say, iPod batteries, we would have had torches and pitchforks outside of Cupertino long ago.
One of the classic mistakes in graphics is the height/volume fail. This is how it works: the length of an object is used to portray some sort of relative metric. But in the quest to make the graphic prettier, the object is turned into a 2D or, worse, 3D object. This means that when one object is drawn 2x as long as another, its area is actually 4x the other's and its volume is 8x. The eye tends to notice the area or volume, so the difference is exaggerated.
The Tebow character is, by the data, supposed to be about 1.7x the Brady character. And this may be true of the heights, but visually it looks something like 4x larger because the eye is processing something in between area and volume, distorting one's impression of the data. The problem is made worse by the fact that the characters are arrayed over a 3D plane. Is there perspective at work? Is Rodgers smaller than Peyton Manning because his figure is at the back, or because of the data? The Vick figure, by the data, should be smaller than the Rodgers figure but due to tricks of perspective, it looks larger to me.
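For the record, the distortion is simple arithmetic. A quick sketch, where the 1.7x ratio is the chart's own data and the rest is just exponents:

```python
# How a 1.7x data ratio inflates when a chart scales a 2D or 3D figure's
# height by the data value. The 1.7 figure is from the chart; the rest is math.
data_ratio = 1.7                 # Tebow's metric vs. Brady's

area_ratio = data_ratio ** 2     # what the eye sees for a flat figure
volume_ratio = data_ratio ** 3   # what the eye sees for a 3D figure

print(f"honest linear ratio: {data_ratio:.2f}x")
print(f"area ratio:          {area_ratio:.2f}x")    # ~2.89x
print(f"volume ratio:        {volume_ratio:.2f}x")  # ~4.91x
# The eye reads something between area and volume -- roughly the "4x larger"
# impression described above, for a metric that is only 1.7x.
```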
This and much more is explained in Edward Tufte's book The Visual Display of Quantitative Information. You will find this book on a surprising number of geek shelves (next to a tattered copy of Gödel, Escher, Bach), but it is virtually unknown in the general populace. Every USA Today graphics maker should be forced to read it.
The old saying goes, "where there is smoke, there's fire." I think we are all at least subconsciously susceptible to thinking this way vis-à-vis the cancer risks in the media. We hear so much about these risks that, even if the claims seem absurd, we worry if there isn't something there. After all, if the media is concerned, surely the balance of evidence must be at least close -- there is probably a small risk or increase in mortality.
Cell phones do not cause cancer. They do not even theoretically cause cancer. Why? Because they simply do not produce the type of electromagnetic radiation that is capable of causing cancer. Michael Shermer explains, using basic physics:
...known carcinogens such as x-rays, gamma rays and UV rays have energies greater than 480 kilojoules per mole (kJ/mole), which is enough to break chemical bonds... A cell phone generates radiation of less than 0.001 kJ/mole. That is 480,000 times weaker than UV rays...
If the radiation from cell phones cannot break chemical bonds, then it is not possible for cell phones to cause cancer, no matter what the World Health Organization thinks. And just to put the "possible carcinogen" terminology into perspective, the WHO also considers coffee to be a possible carcinogen. Additionally, it appears that politics and ideology may have trumped science in the WHO's controversial decision.
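Shermer's numbers are easy to check from first principles with E = N_A * h * f. A quick sketch; the 2 GHz cell phone carrier and 300 nm UV wavelength are my own assumed typical values, not numbers from his article:

```python
# Photon energy per mole of photons: E = N_A * h * f (converted to kJ/mol).
N_A = 6.022e23   # Avogadro's number, 1/mol
h = 6.626e-34    # Planck's constant, J*s
c = 3.0e8        # speed of light, m/s

def kj_per_mole(freq_hz):
    return N_A * h * freq_hz / 1000.0

cell_phone = kj_per_mole(2e9)    # assumed ~2 GHz microwave carrier
uv = kj_per_mole(c / 300e-9)     # assumed 300 nm ultraviolet light

print(f"cell phone: {cell_phone:.4f} kJ/mol")  # ~0.0008, below Shermer's 0.001
print(f"UV light:   {uv:.0f} kJ/mol")          # ~400, near the ~480 bond-breaking threshold
```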
I thought this was an interesting discussion of leap seconds. At its heart, the debate is about a tradeoff between hassle (a lot of programming goes into inserting a second into a day every year or so) and how closely we want time to match its traditional association with astronomical observations (e.g. noon is exactly noon at Greenwich). This is a debate that has occurred at least since the imposition of time zones (mainly at the behest of the railroads), which for many cities converted "sun time" to "railroad time." Until then, every town was on a different time, with noon set to local astronomical noon. Now, only a few cities actually have noon at noon. Of course, daylight saving time took this even further.
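As a small illustration of the hassle side of that tradeoff, most software simply refuses to represent a leap second at all. Python's standard datetime, for example, rejects the real UTC instant 2016-12-31 23:59:60:

```python
# Most time libraries pretend minutes always have 60 seconds, so the leap
# second that actually occurred at the end of 2016 is unrepresentable.
from datetime import datetime

try:
    datetime(2016, 12, 31, 23, 59, 60)
except ValueError as e:
    print(f"datetime refuses the leap second: {e}")  # "second must be in 0..59"

# Systems that need monotonic time instead "smear" the extra second across
# many hours -- exactly the kind of special-case code that makes leap seconds
# expensive to support.
```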
Even more interesting than the soft consensus in favor of government intervention was a strong undercurrent that those who disagreed with it were guilty of denying basic truths. One of the questions from an audience full of Senate staffers, policy wonks, and journalists was how can we even have a rational policy discussion with all these denialist Republicans who disregarded Daniel Patrick Moynihan’s famous maxim that “Everyone is entitled to his own opinion, but not his own facts”? Jared Bernstein couldn’t have been more pleased.
“I feel like we’re in a climate in which facts just aren’t welcome,” he said. “I think the facts of the case are that we know what we can do to nudge the unemployment rate down.…I think the consensus among economists is that this is a good time to implement fiscal stimulus that would help create jobs and make the unemployment rate go down. I consider that a fact.”
In science, you insist most loudly on a fact based on how much it has withstood independent peer review. In politics, it's closer to the opposite -- the more debatable a point is, the more it becomes necessary to insist (often in the face of contrary evidence) that the conclusion is backed by scientific consensus.
EU bans claim that water can prevent dehydration...
EU officials concluded that, following a three-year investigation, there was no evidence to prove the previously undisputed fact.
Producers of bottled water are now forbidden by law from making the claim and will face a two-year jail sentence if they defy the edict, which comes into force in the UK next month.
For three years a group of government employees actually got paid to come to the conclusion that drinking water does not prevent dehydration. Congrats.
If you want an explanation, my guess is that this is part of the Left's war on bottled water. For some bizarre reason, bottled water has been singled out as one of the evils of modern technology that will drive us into a carbon dioxide-induced climate disaster. So I don't think the EU would have approved any label claim for water. Since this is such an absurdly obvious claim that most consumers would just chuckle at (yes, consumers can be trusted to parse product claims), I almost wonder if some water company didn't just float this to make the point that no claim could be approved in the EU system.
Not only does this mean that we can have billions of people on Earth and not starve, but it also has freed up labor for more productive and value-enhancing activities.
As an aside, remember this chart when global warming alarmists argue that the warming trend of the last 50 years is reducing crop yields. (If the linked article seems simply bizarre given the chart above, realize the NYT is saying that crop yields are down from what they might have been. This is the same kind of faulty logic that was used by Obama to credit his stimulus with job gains when in fact the economy was losing jobs. They posit some unprovable hypothetical, and then say reality diverged from that hypothetical because of whatever factor they are trying to push, whether it be CO2 or stimulus.)
The problem with food prices is not production; it's the fact that we take such a huge percentage of our food grains and, by government diktat, convert them to automotive fuel.
The European Union is overestimating the reductions in greenhouse gas emissions achieved through reliance on biofuels as a result of a “serious accounting error,” according to a draft opinion by an influential committee of 19 scientists and academics.
The European Environment Agency Scientific Committee writes that the role of energy from crops like biofuels in curbing warming gases should be measured by how much additional carbon dioxide such crops absorb beyond what would have been absorbed anyway by existing fields, forests and grasslands.
Instead, the European Union has been “double counting” some of the savings, according to the draft opinion, which was prepared by the committee in May and viewed this week by The International Herald Tribune and The New York Times.
“The potential consequences of this bioenergy accounting error are immense since it assumes that all burning of biomass does not add carbon to the air,” the committee wrote.
Duh. This has been known to just about everyone else, as most independent studies not done by a corn-state university have found ethanol to have, at best, zero utility in reducing atmospheric CO2.
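For those who want the committee's point in miniature, here is a stylized sketch of the double counting. The numbers are invented purely for illustration:

```python
# Stylized carbon accounting for a biofuel crop (all numbers invented).
combustion_emissions = 100   # CO2 released by burning the fuel
crop_absorption      = 100   # CO2 absorbed by the energy crop while growing
baseline_absorption  = 80    # CO2 the same land would have absorbed anyway

# EU-style accounting treats biomass burning as fully offset by crop growth:
eu_counted_savings = combustion_emissions

# The committee's correction: only *additional* absorption counts as an offset.
actual_savings = crop_absorption - baseline_absorption

print(f"EU-counted savings: {eu_counted_savings}")  # 100
print(f"actual savings:     {actual_savings}")      # 20
```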
It is worth noting that the EU would likely never have made this admission had it been solely under pressure from skeptics, for whom this is just one of a long list of fairly obvious errors in climate-related science. But several years ago, environmental groups jumped on the skeptic bandwagon opposing ethanol, both for its lack of efficacy in reducing emissions and for the impact of increasing ethanol production on land use and food prices.
It's a wonder how, when over "97 percent to 98 percent" of scientific authorities accepted the Ptolemaic view of the solar system, we ever got past it. Though I could certainly understand why, in the current economy, a die-hard Keynesian might be urging an appeal to authority rather than thinking for oneself.
When, by the way, did the children of the sixties not only lose, but reverse their anti-authoritarian streak?
Postscript: I have always really hated the nose-counting approach to measuring the accuracy of a scientific hypothesis. If we want to label something as anti-science, how about the use of straw polls of scientists as a substitute for fact-based arguments?
Yes indeed, the share of people in the newly made-up profession of "climate science" -- who are allowed by the UN to control the content of the IPCC reports, and whose funding is dependent on global warming being scary -- that is all-in for catastrophic man-made global warming theory probably is very high. The share among people in traditional scientific fields like physics, geology, chemistry, oceanography and meteorology who nevertheless study climate-related topics would be very different.
I was listening to the WSJ radio podcast while getting some dinner ready, and one of their reporters said, in the context of discussing Fukushima, that some of the engineers at the plant "knew there was a risk" in the plant's older design and could conceivably face charges for not doing something about said risk.
This kind of talk really grinds my gears. In any engineering situation there is always some risk. You can have less risk, or more risk, but risk is not something you either have or do not have.
I will go one step further. This ex post facto witch hunt aimed at folks who discussed risks (a pogrom that occurs in nearly every product liability lawsuit, with fishing expeditions through company memos) is the WORST possible thing for consumers concerned about the safety of their products and environment. Engineers have to feel free to express safety concerns within organizations, no matter how hypothetical those concerns may be.
Some concerns will turn out to be unfounded. Some suggested risks will be deemed too small to economically overcome. And some will turn out to be substantial and require action. And sometimes well-intentioned people will make what are, in retrospect, the wrong trade-offs with risks. These witch hunts only tend to suppress this very valuable and necessary internal dialog within organizations. Nothing is going to turn the brains of engineers off faster than an incentive system that punishes them retroactively for well-intentioned discussions about risk.
A time lapse youtube video of locations of nuclear detonations on Earth (all but a couple, of course, being tests). There are far more than I would have guessed. Had you given me an over-under of 2000, I would have surely taken the under. And been wrong.
I can't vouch for the accuracy of this, of course. Maybe they are counting a test differently than I would.
The other day I was reading an article on the crash of Air France flight 447, discussing recovery of the black box (two years after the crash). It was suspected that the air speed measurement devices may have failed, thus impairing the automatic pilot, but it was not understood why the pilots were unable to fly the plane manually. Was something else going wrong? Did automatic systems, operating off bad data, override manual controls somehow?
The article said that the black box showed the plane went into a stall, and the pilots spent much of the fall pulling back on the yoke to regain altitude. This made zero sense to me. A stall occurs when the wing is angled too steeply into the oncoming air. The wing generates lift because the air flowing over the curved top of the wing moves faster than the air underneath, and by Bernoulli's principle that higher velocity results in lower pressure. In effect, the plane is sucked up. When the wing is angled too steeply, the air on top of the wing breaks away from the surface, and lift is lost.
It is therefore absolutely fundamental in stall situations to drop the nose. This does two things -- it brings the wing's angle of attack back out of stall territory, and it increases speed, which also increases lift.
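For the quantitatively inclined, here is a toy sketch of that logic using the standard lift relation L = 0.5 * rho * v^2 * S * C_L. The lift-curve model (a thin-airfoil C_L, an assumed 15-degree critical angle, and a crude post-stall collapse) is generic textbook stuff, not the A330's actual aerodynamics:

```python
import math

def lift_coefficient(alpha_deg, critical_deg=15.0):
    """Thin-airfoil C_L ~ 2*pi*alpha below the critical angle; past it, the
    flow separates and C_L collapses (the 0.6 factor is a crude stand-in)."""
    if alpha_deg <= critical_deg:
        return 2 * math.pi * math.radians(alpha_deg)
    return 0.6 * 2 * math.pi * math.radians(critical_deg)

def lift_newtons(alpha_deg, v_mps, rho=1.225, wing_area_m2=120.0):
    return 0.5 * rho * v_mps ** 2 * wing_area_m2 * lift_coefficient(alpha_deg)

# Nose high and stalled, airspeed decaying:
print(f"stalled   (20 deg, 120 m/s): {lift_newtons(20, 120):,.0f} N")
# Nose dropped: angle of attack back below critical AND airspeed building,
# and lift rises with the *square* of that airspeed:
print(f"recovered  (8 deg, 150 m/s): {lift_newtons(8, 150):,.0f} N")
```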
Apparently, the experts are just as befuddled as I was reading the data. Dropping the nose in a stall is on the first page of Pilot 101. This is not some arcane fact buried on page 876 of the textbook. This is so basic that I know it, and I don't have a pilot's license. But apparently the pilots of 447 were yanking up on the nose through the whole long fall to Earth.
Scientists studying Creutzfeldt-Jakob Disease in the field are still deeply divided about whether BSE can be transmitted to humans, and about the potentially terrifying consequences for the population.
"It's too late for adults, but children should not be fed beef. It is as simple as that," said Stephen Dealler, consultant medical microbiologist at Burnley General Hospital, who has studied the epidemic nature of BSE and its human form, Creutzfeldt-Jakob Disease, since 1988.
He believes that the infectious agent would incubate in children and lead to an epidemic sometime in the next decade.
"Any epidemic in humans would start about 15 years after that in cattle, and about 250,000 BSE-infected cows were eaten in 1990. There could be an epidemic of this new form in the year 2005. These 10 cases were probably infected sometime before the BSE epidemic started."
His worst case scenario, assuming a high level of infection, would be 10 million people struck down by CJD by 2010. He thought it was now "too late" to assume the most optimistic scenario of only about 100 cases.
One of the great things about the Internet is that it is going to be much easier to hold alarmists accountable for wild scare-mongering predictions that prove to be absurd. Though, I suppose, Paul Ehrlich still gets respect in some quarters despite being 0-for-every-prediction-he-has-ever-made, so maybe it's too much to hope for accountability.
The mysteries of the brain may be virtually endless, but a team of researchers from two institutes in Göttingen, Germany now claim to have an answer for at least one question that has remained a puzzle: just how fast does the brain forget information? According to the new model of brain activity that the researchers have devised, the answer to that is one bit per active neuron per second. As Fred Wolf of the Max Planck Institute for Dynamics and Self-Organization further explains, that "extraordinarily high deletion rate came as a huge surprise," and it effectively means that information is lost in the brain as quickly as it can be delivered -- something the researchers say has "fundamental consequences for our understanding of the neural code of the cerebral cortex."
I don't know why I have so much fun fact checking the "science" at green blog "the Thin Green Line," but I do. Today's exercise:
There are, right now, at least half a million pieces of junk in orbit around our cosmic Pig Pen of a planet. Space junk isn't just an aesthetic problem, either: Even tiny pieces of junk orbit at speeds above 15,000 miles per hour, so even the tiniest bit of debris can cause serious damage to anything it comes into contact with. Space junk threatens satellites, manned space missions and even the International Space Station.
While certainly space junk can be a problem in certain instances, I am constantly left helpless with laughter at the absolute urgency with which this type of blog approaches every problem. Here are a couple of things that might help you sleep better at night:
The speed space junk is traveling is largely irrelevant. It could be 15,000 mph or 50,000. The important variable is the closing speed of two objects, not their absolute speed. And (thanks to our friend Newton) we know that objects in the same stable orbits have to be moving at the same speed. Now, orbits don't all have to be parallel and can cross, yielding real relative velocities, but recognize that since over 95% of these half million objects are less than 4 inches in diameter, it's a bit like you and your friends firing guns and having the bullets meet in mid-air.
The drawing he shows makes the sky seem really cluttered. But let's just take a small portion of this space. Let's consider the volume of space between 100 and 500 miles above the Earth's surface. Using a bit of geometry, this space works out to be about 93 billion cubic miles of volume. Which means there is one object, generally less than 4 inches in diameter, per every 186,000 cubic miles, which for scale is the volume of a building 40 stories tall covering the entire continental United States.
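If you want to check the geometry, a few lines of arithmetic reproduce these numbers. The ~3,959-mile Earth radius and the half-million object count are the usual rough figures:

```python
import math

earth_radius_mi = 3959
inner = earth_radius_mi + 100   # 100 miles up
outer = earth_radius_mi + 500   # 500 miles up

# Volume of the spherical shell between the two altitudes
shell_volume = (4 / 3) * math.pi * (outer ** 3 - inner ** 3)
print(f"shell volume: {shell_volume:.2e} cubic miles")  # ~9.1e10, i.e. ~91 billion
# (The exact figure depends on the Earth radius you assume; it is in the
# ballpark of the ~93 billion cited above.)

objects = 500_000
print(f"volume per object: {shell_volume / objects:,.0f} cubic miles")  # ~182,000
```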
Certainly avoiding these objects is a navigation concern for powered spacecraft, which is why all these pieces of junk are watched in the first place. But the idea of a space superfund to clean this stuff up is so hilariously expensive (given current tech) and such a staggering waste of resources compared to other uses of those funds that one would only expect to find it on, well, an environmental blog.
I will tell you that no matter how much confidence one has in his own intellectual ability, it's hard not to experience an "am I crazy?" moment when one reaches a conclusion different from everybody else's. Case in point is my critique of the EPA's mpg numbers for electric vehicles. The EPA's methodology strikes me as complete BS, but everyone, even folks like Popular Mechanics, keeps treating the number like it is a serious representation of the fossil fuel use of vehicles like the Volt and Leaf.
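To make the complaint concrete: the EPA converts electricity to "gallons" at the raw energy content of gasoline, 33.7 kWh per gallon, measured at the wall plug. Here is a minimal sketch of what that methodology does; the 33% powerplant-to-plug efficiency below is my own rough assumption for generation and transmission losses, not an EPA figure, and the consumption number is merely Leaf-like:

```python
# EPA-style MPGe: miles driven per 33.7 kWh drawn at the plug.
KWH_PER_GALLON = 33.7   # energy content of a gallon of gasoline

def mpge(miles, kwh_at_plug):
    return miles / (kwh_at_plug / KWH_PER_GALLON)

miles, kwh = 100, 29.4   # illustrative EV consumption figure
print(f"EPA-style MPGe:       {mpge(miles, kwh):.0f}")   # ~115

# Counting the fossil fuel burned at the power plant instead (assumed 33%
# efficient generation plus transmission), the same car looks very different:
plant_to_plug_efficiency = 0.33
print(f"fuel-to-wheels 'mpg': {mpge(miles, kwh / plant_to_plug_efficiency):.0f}")  # ~38
```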
I thought this was an incredibly cool image, showing the changing path of the Mississippi River (in this case where it meets the Ohio). (via Flowing Data)
When I was a kid, I was fascinated by water flow and erosion. I remember spending a whole day on a wooded hillside watching the evolution of an ad hoc stream of water, playing around with damming it in some places, creating new channels, etc. When I went to the beach, I never built castles but attempted to build walls and channels to shape the way the tide flowed. Since I am free associating, I also remember visiting a huge model of the Mississippi, I think near Vicksburg, that I thought at the time was the coolest thing on Earth. Not even sure today if it still exists.
I second Alex's nomination -- this is one of my favorite documentaries as well. The book of the same name is very good too and covers more of the math history. I actually watched it just the other day in a home double feature with A Beautiful Mind, mainly to show my kids the scenes shot at Princeton**, but it turned out to be a great essay on math and the human mind.
** I suppose I could have thrown in Transformers 2 as a Princeton triple feature but it seemed somehow out of place in terms of tone. Also, seeing all the ASU girls walking around the Princeton campus was almost weirder than the hallucinations in A Beautiful Mind.