Those conclusions come from a new analysis of recent papers that use a popular animal model for multiple sclerosis. Published today in PLoS Biology, the analysis concludes that voluntary reporting guidelines for animal studies, which have been endorsed by hundreds of research journals, are largely being ignored.
David Baker, a neuroimmunologist at Queen Mary University of London who led the study, says journals ought to compel animal researchers to disclose experimental details that could lead to bias, such as whether animals were randomly assigned to treatment groups. “Unless there is enforcement there is no change,” he says.
In 2010, the UK National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) laid out best practices in the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Modelled after clinical-trial guidelines, ARRIVE is a 20-item checklist that addresses study design, experimental procedures and animal care. More than 300 research journals, including those published by Nature Publishing Group (NPG) and PLoS, have since endorsed the guidelines.
To see if researchers and journals were following those suggestions, Baker and his team analysed papers published in NPG journals and PLoS journals that used a multiple sclerosis model called experimental autoimmune encephalomyelitis (EAE). The team compared papers published in the two years before ARRIVE was released with papers published in the following two years.
The ARRIVE guidelines did not have much impact on which details were included in papers, Baker’s team found. For instance, less than 21% of the NPG or PLoS journal articles reported whether animals were randomly assigned to experimental groups, both before and after the ARRIVE guidelines were issued. Similarly, before and after 2010, less than 7% of studies reported whether they conducted sample-size analyses, which indicate whether enough animals were used to reliably detect an effect. Baker’s team did notice some uptick in the reporting of the sex, age and number of animals used, particularly in Nature journals.
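The sample-size analyses in question are typically power calculations done before an experiment. As a rough illustration (not taken from Baker’s paper), the standard normal-approximation formula for a two-group comparison can be sketched as follows; the function name and default values for significance and power are illustrative choices:

```python
import math
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size for comparing two group means.

    effect_size is Cohen's d: the expected difference in means
    divided by the pooled standard deviation.
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # quantile for the desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)                # round up to whole animals

# A large expected effect (d = 1.0) at 5% significance and 80% power
# calls for roughly 16 animals per group under this approximation.
print(sample_size_per_group(1.0))  # 16
```

A study that skips this step, or does it but does not report it, leaves readers unable to judge whether a null result reflects a real absence of effect or simply too few animals.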
In a separate analysis, Baker’s team found problems with how statistics were reported in papers using EAE data. The researchers analysed 180 papers that used the model between 1 December 2011 and 21 May 2012 and calculated how many used so-called non-parametric statistical tests. Baker says non-parametric tests are most appropriate for EAE data because they make no assumptions about the relationship between data points or the overall distribution of data. (Not all researchers agree with his argument.)
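The distinction Baker draws can be sketched with a toy example (the scores below are invented, not from any study). EAE severity is usually recorded on an ordinal disability scale, so a rank-based test such as the Mann–Whitney U avoids assuming the scores are normally distributed, unlike a t-test:

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

# Hypothetical EAE clinical scores for two groups of animals.
# Scores are ordinal grades (e.g. 0-5), so the "distance" between
# grades is not meaningful and normality cannot be assumed.
control = np.array([0, 1, 1, 2, 2, 3, 3, 3])
treated = np.array([0, 0, 1, 1, 1, 2, 2, 2])

# Non-parametric (rank-based) test: compares ranks only, making no
# assumption about the shape of the underlying distribution.
u_stat, p_nonparam = mannwhitneyu(control, treated, alternative="two-sided")

# Parametric t-test: assumes roughly normal, interval-scaled data,
# an assumption Baker argues EAE scores violate.
t_stat, p_param = ttest_ind(control, treated)

print(f"Mann-Whitney p = {p_nonparam:.3f}, t-test p = {p_param:.3f}")
```

On well-behaved data the two tests often agree; the concern is that on skewed or ordinal data the parametric test can understate the p-value, which is the route to false positives that Baker describes.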
The team found that only 39% of the papers reported using non-parametric statistical tests on the data. In the case of Nature, Science and Cell, just 4% of papers indicated that they used such tests. Baker says studies that rely on inappropriate parametric tests are more likely to report false positives, indicating, for instance, that an experimental treatment for multiple sclerosis is effective when it is not.
“There’s this very vocal minority of clinicians who say ‘animal data is rubbish, it never translates into human benefit,’” Baker says. “By doing bad science which is of poor quality and experimental design, we just pander to that problem.”
Baker says journals should compel researchers to report key details from animal experiments, which would allow other scientists to assess and replicate the work. Because of such requirements, statements about ethical approval for research are more likely to appear in manuscripts now than they were in the 1970s or 1980s, Baker says.
According to an editorial published alongside Baker’s article, PLoS ONE is likely to require all researchers conducting animal experiments to fill out a checklist including the ARRIVE guidelines, and PLoS Medicine already has this requirement (though it publishes few animal studies). PLoS Biology is mulling its options, according to the editorial.
“Voluntary guidelines are very important, but journals need to take tangible steps to implement them with standards flexible enough to work for a broad range of studies,” says Nature’s Editor-in-Chief, Philip Campbell. In May 2013, NPG journals began asking researchers to fill out a checklist addressing statistical calculations and some details of animal experiments. (See “Reducing our Irreproducibility”.) “These guidelines were established in consultation with the community to address the most common problems in transparency and potential bias in research papers. We will be reviewing their implementation and impact later this year,” Campbell adds.