Posted on behalf of Ed Yong.
Psychologists are going through a period of intense self-reflection regarding the reliability of research in their field, fuelled by recently uncovered cases of fraud, failed attempts to replicate classic results, and calls from prominent psychologists to replicate key results in disputed fields.
The latest volley in this debate is a special issue of Perspectives on Psychological Science, consisting of 18 papers that outline the scope of the so-called “replicability crisis”, and potential ways of fixing it.
Among the contributions, Matthew Makel from Duke University and two colleagues have uncovered just how uncommon replications are in psychology, especially those run by independent groups. By scanning the top 100 psychology journals since 1900, and analysing 500 randomly selected articles more deeply, they showed that just 1% of publications are replications of earlier work. Of these, only 14% are direct replications that follow the original experimental recipes; the rest are conceptual replications that test related hypotheses using different methods and settings.
Makel and his colleagues also found that around half of these replications were done by the same scientists behind the original experiments (and many were published as part of the original papers). This matters because replications succeeded 92% of the time when the original authors were involved, but only 65% of the time when done by an independent group.
This lack of independent replications, combined with the low statistical power of many studies and the tendency to publish only positive results, is a serious problem, according to John Ioannidis from Stanford School of Medicine in California. In a commentary, Ioannidis, probably the biggest name in the meta-field of scientific credibility, writes that “the overall credibility of psychological science at the moment may be in serious trouble”.
Later papers in the issue outline potential ways of fixing these problems by encouraging more replications, including: using undergraduate projects as a route for replicating existing studies; encouraging adversarial collaborations where sceptics replicate studies alongside original investigators; providing accessible outlets for publishing replications; opening up data, methods and workflow; and pre-registering studies including all the intended methods and analyses.
Many of these reforms should be enacted in parallel, argue the issue’s editors, Hal Pashler from the University of California, San Diego and Eric-Jan Wagenmakers from the University of Amsterdam. They write that psychologists have “found ourselves in the very unwelcome position of being… the public face for the replicability problems of science in the early 21st century”. But they also see a solid opportunity “to rise to the occasion and provide leadership in finding better ways to overcome bias and error in science generally”.