No amount of experimentation can ever prove me right; a single experiment can prove me wrong. —Albert Einstein
For some time, the scientific community has grappled with a dilemma known as the “reproducibility crisis”: the inability to successfully replicate the results of previous scientific studies. Most recently, multiple reports have shown that the crisis is especially prevalent in the field of cancer research.
While many studies appear promising at first review, they may not be nearly as reproducible as once believed. The Reproducibility Project, an initiative led by University of Virginia psychology professor Brian Nosek, has been working since 2011 to assess the reliability of dozens of studies by attempting to replicate them.
The project began after pharmaceutical companies Bayer Healthcare and Amgen separately revealed that their scientists had been able to verify only a small percentage of previous studies.
The process itself proved difficult. Elizabeth Iorns, a leader of the Reproducibility Project, and project manager Tim Errington aimed to ensure that the replications precisely followed the original methodology, but they soon discovered that tracking down the raw data and the details of each lab’s procedures was a time-consuming endeavor. As The Atlantic noted, the methods sections of many original studies “theoretically ought to provide recipes for doing the same experiments. But often, those recipes are incomplete, missing out important steps, details, or ingredients. In some cases, the recipes aren’t described at all; researchers simply cite an earlier study that used a similar technique.”
Iorns and Errington “had to track down that information from each of the original authors, a time-consuming process neither party much enjoyed,” Wired reported. “Not only did some of the researchers find the whole thing a nuisance, but sometimes the labs didn’t even know who did what on the original paper, as graduate students or post-docs who did the bulk of the work had since moved on.”
So far, the Reproducibility Project has shared its assessments of five cancer studies, with several more yet to be completed. According to the BBC, “the team was able to confirm only two of the original studies’ findings. Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.”
Errington, a microbiologist, noted that these results are “worrying because replication is supposed to be a hallmark of scientific integrity.”
“Our intent with this project is to perform these direct replications so that we can understand collectively how reproducible our research is,” Errington stated in an article from PBS. While some researchers reportedly expressed concern that the project might cast the studies in a negative light and create future funding problems, Errington noted that its purpose is to “create a discussion.” The Reproducibility Project has published further information about the initiative on its website.
The Reproducibility Project’s findings follow a survey of 1,576 researchers published in the journal Nature in 2016. More than 70% of the surveyed researchers had tried and failed to reproduce another scientist’s experiments, and more than half had failed to reproduce their own. Over half of the researchers also acknowledged a reproducibility crisis.
However, at least 20 more studies have yet to be analyzed and replicated, so it is too early to conclude that the remaining studies cannot be successfully replicated. Marcus Munafo, a biological psychology professor at the University of Bristol, said that the difficulty of reproducing results may be partly attributable to the published versions of studies, which can present a “highly curated version of what’s actually happened.” “The trouble is that gives you a rose-tinted view of the evidence,” Munafo explained, “because the results that get published tend to be the most interesting, the most exciting, novel, eye-catching, unexpected results.”