I have written a number of times before that having only a few page-limited scientific journals creates a bias toward positive results that can't be replicated.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
This is not really wildly surprising. Consider 20 causal relationships that don't exist, and 20 experiments, one testing each relationship. On average, 1 of the 20 will show a false positive at the 95% confidence level — that's what 95% confidence means. That 1-in-20 false positive gets published, and the other studies get forgotten.
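You can watch this happen in a quick simulation. Here's a minimal sketch in Python (standard library only, with made-up parameters): each "experiment" compares two groups drawn from the *same* distribution, so any statistically significant difference is by construction a false positive. Run enough of them and roughly 5% come out "significant" at p < 0.05.

```python
import math
import random

def null_experiment(n=1000, rng=random):
    """One experiment comparing two groups drawn from the SAME distribution.
    There is no real effect, so any 'significant' result is a false positive."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    # Estimate the variance of each group from the data, as a real study would.
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    # Standard error of the difference in means, then a two-sided z-test.
    se = math.sqrt(var_a / n + var_b / n)
    z = (mean_a - mean_b) / se
    # Two-sided p-value from the normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(1)  # fixed seed so the run is repeatable
p_values = [null_experiment(rng=rng) for _ in range(1000)]
false_positives = sum(p < 0.05 for p in p_values)
print(f"{false_positives} of 1000 null experiments were 'significant' at p < 0.05")
```

With 1000 null experiments you'd expect around 50 false positives. If journals only publish the "significant" ones, the literature fills up with effects that were never there.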
Actually, XKCD made this point better than I can. It's a big image, so I won't embed it, but check it out.
Also, Kevin Drum links to a related finding that journal retractions are on the rise (presumably driven by false positives that could not be replicated, or that were the product of bad process).
In 1890, there were technological and cost reasons why only a select few studies were culled into page-limited journals. That is not the case today, so why do we still tie science to this outdated publication mechanism? Online publication would allow both positive and negative results to be published. It would also allow mechanisms for attaching critiques, defenses, and replication results to the original study. Sure, this partially breaks the academic pay and incentive system, but I think most folks are ready to admit that system needs to be broken.