Scientists Fail to Reproduce Results of Psychological Studies
The hallmark of sound scientific research is replicability: the ability of subsequent researchers to reproduce a study’s results. When research results can’t be replicated, it suggests that the findings might have been a fluke, that researcher bias might have colored them, or that the study was poorly designed. A new analysis of 100 psychology studies suggests that the field may have a replicability problem. An international team of researchers was disappointed to find it could reproduce the results of only 39 of the studies.
The effort was launched in 2008 in response to growing concerns about fraud, researcher bias, and erroneous data analysis. A team of 270 researchers selected 100 studies published in three major psychology journals: the Journal of Personality and Social Psychology, Psychological Science, and the Journal of Experimental Psychology: Learning, Memory, and Cognition. Relying on highly structured protocols, the researchers set out to determine whether they could replicate each study’s results.
Only 39 of the studies were successfully repeated with the same results as the original research. An additional 24 produced “moderately similar” results, though they did not fully reproduce the original findings. Fourteen produced results that were in no way similar to the originals, and the remaining 23 showed some similarities, but not enough to meet scientific standards of replicability.
Does This Study Undermine Psychology’s Credibility?
The studies were all published in peer-reviewed journals, suggesting that not only the studies’ authors but also the scientists who reviewed the research failed to notice important flaws. This certainly hints at a problem with psychological research, but it may reflect a larger trend across all sciences, not just psychology. For instance, in 2014, Springer and the Institute of Electrical and Electronics Engineers withdrew more than 120 papers from their databases after it was revealed the papers were computer-generated gibberish.
Psychology, like all sciences, is home to an increasingly cutthroat political and academic culture. The well-known mandate to “publish or perish” may push academics to rush their research or to publish results that aren’t nearly as strong or compelling as they first appear. Exciting results may make for flashy headlines, only to be retracted later or fade from view after other researchers fail to reproduce the original findings. For armchair psychologists, therapists, and mental health advocates who are interested in or affected by psychological research, this study serves as an important reminder that published scientific studies don’t always “prove” what they claim to.
- Do normative scientific practices and incentive structures produce a biased body of research evidence? (2015, April 30). Retrieved from https://osf.io/ezcuj/wiki/home/
- First results from psychology’s largest reproducibility test. (2015, April 30). Retrieved from http://www.nature.com/news/first-results-from-psychology-s-largest-reproducibility-test-1.17433
- Van Noorden, R. (2014, February 24). Publishers withdraw more than 120 gibberish papers. Retrieved from http://www.nature.com/news/publishers-withdraw-more-than-120-gibberish-papers-1.14763
- Yong, E. (2012, May 16). Replication studies: Bad copy. Retrieved from http://www.nature.com/news/replication-studies-bad-copy-1.10634
© Copyright 2015 GoodTherapy.org. All rights reserved.