Reproducibility

Reproducibility is a key part of the scientific method: the ability of an independent third party to reproduce an experimental result by following the same procedures as the original experimenter.

Having the experiment repeated by an independent investigator reduces the impact of any single study's experimental error and increases the chances of the hypothesis being accepted by the scientific community. Reproducibility alone is not sufficient, though: a sufficiently flexible analysis can "reproduce" prior results even from random numbers.[1]
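
A minimal sketch of that failure mode (the sample and variable counts below are made up for illustration; this is not the reanalysis in the citation): run enough significance tests on pure noise and a "pattern" emerges by chance alone.

    # Illustrative only: a flexible multi-outcome analysis applied to pure
    # noise still turns up "significant" correlations.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_outcomes = 80, 50  # hypothetical sizes, not from any study

    wellbeing = rng.normal(size=n_subjects)                 # random "predictor"
    expression = rng.normal(size=(n_subjects, n_outcomes))  # random "outcomes"

    # Correlate the random predictor with every random outcome.
    p_values = [stats.pearsonr(wellbeing, expression[:, i])[1]
                for i in range(n_outcomes)]

    hits = sum(p < 0.05 for p in p_values)
    print(f"{hits} of {n_outcomes} noise outcomes 'significant' at p < 0.05")
    # About 5% of noise-only tests pass, which is enough for random numbers
    # to "reproduce" a published pattern if the analysis is flexible enough.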

One of the signs of pseudoscience is a lack of reproducibility in its experiments. Legitimate scientific experiments can also fail to reproduce,[2] but this just indicates either that the original experiment was flawed, that a better understanding of the problem is needed, or that the phenomenon is by its nature a rare event.[3]

Replication crisis

The replication crisis refers to the finding that many published scientific experiments fail to replicate. Replication attempts are part of good scientific practice, but the contemporary academic community offers powerful disincentives against them: academic journals are generally not interested in publishing replication attempts, preferring novel experiments with significant results, and publications are what drive a scientist's career.

A failure to replicate an experiment does not necessarily mean that there was anything wrong with the original experiment or that questionable research practices took place; the scientific effect found in the original experiment may be real and the failed replication may be due to chance, or due to differences in methodology. Obtaining the same or similar results using different methodology does, however, increase confidence in the results.
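
How large can the chance component be? A minimal simulation sketch (the effect size and sample size below are assumed for illustration, not taken from any cited study) shows that even a perfectly real effect fails to replicate a large fraction of the time when the replication has modest statistical power.

    # Illustrative only: a real effect, honestly replicated, still fails to
    # reach p < 0.05 whenever the replication's statistical power is limited.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect = 0.4       # assumed true effect size (Cohen's d)
    n_per_group = 50        # assumed participants per group
    n_replications = 10_000

    failures = 0
    for _ in range(n_replications):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        if p >= 0.05:       # "failed to replicate" despite a real effect
            failures += 1

    print(f"{failures / n_replications:.0%} of replications fail "
          f"(d = {true_effect}, n = {n_per_group} per group)")
    # With these numbers the test has roughly 50% power, so about half of
    # honest replication attempts miss a real effect purely by chance.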

The replication crisis is most pronounced in the field of psychology.[4] A large-scale replication project published in 2015 reported the results of attempts to replicate psychology experiments originally published in 2008. Among social psychology experiments, only 14 of 55 (25%) replicated; among cognitive psychology experiments, only 21 of 41 (50%) did.[5][6] The replication crisis has also emerged in the biomedical field: the drug company Bayer could replicate only 20% to 25% of 67 peer-reviewed studies in oncology, women's health, and cardiovascular disease performed between 2007 and 2011,[7] and another drug company, Amgen, could replicate only 6 of 53 "landmark" studies in cancer research.[8]

Notably, though, the most famous "failed replication" study supposedly had replicability issues of its own: a group of Harvard researchers, led by Daniel Gilbert, claimed that the project's failures to replicate a given study tended to coincide with methodology that diverged from the original.[9] The authors of the original 2015 study in Science responded to these criticisms by arguing that the critics' "...very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data."[10] Gilbert et al.'s conclusions have been criticized on other grounds as well, including fundamental statistical errors.[11]

Economics also faces a replicability crisis: a 2015 study found that only 49% (29 of 59) of economics papers could be replicated with the assistance of the original authors, and only 33% (22 of 67) without their assistance (or, in the authors' own words, "economics research is usually not replicable").[12]

References

  1. NJL Brown, DA MacDonald, MP Samanta, HL Friedman, and JC Coyne (2014). A critical reanalysis of the relationship between genomics and well-being. Proceedings of the National Academy of Sciences of the United States of America, 111(35):12705-12709.
  2. Is redoing scientific research the best way to find truth? During replication attempts, too many studies fail to pass muster
  3. For example, the claimed detections of magnetic monopoles.
  4. Closed Thinking: Without scientific competition and open debate, much psychology research goes nowhere (Science News, May 16, 2013)
  5. Psychology results evaporate upon further review: Surprising reports, findings with marginal statistical significance least likely to be reproduced, study concludes by Bruce Bower, Science News, August 27, 2015.
  6. Estimating the reproducibility of psychological science by B. A. Nosek et al., Science, Vol. 349, no. 6251, 28 August 2015. DOI:10.1126/science.aac4716.
  7. Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nature reviews Drug discovery, 10(9), 712. DOI:10.1038/nrd3439-c1
  8. Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483, 531–533. DOI:10.1038/483531a
  9. Study that undercut psych research got it wrong, Harvard Gazette
  10. Response to Comment on “Estimating the reproducibility of psychological science”, Anderson et al., 2016, Science
  11. The statistical conclusions in Gilbert et al (2016) are completely invalid, Daniel Lakens, March 8, 2016
  12. Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say "Usually Not" by Andrew C. Chang & Phillip Li (2015). Finance and Economics Discussion Series 2015-083. Board of Governors of the Federal Reserve System. DOI:10.17016/FEDS.2015.083.
