Mike Rossner, Managing Editor of the Journal of Cell Biology, has published an editorial in which he criticizes the report from a committee convened by Science to investigate their handling of infamous stem cell papers by Korean scientist Woo Suk Hwang.
One question that emerged after this instance of fraud was uncovered concerns the responsibility of scientific journals to screen every image in every figure of the papers they publish, in order to ensure that standards of data integrity are not violated.
The Science committee supported the idea of enforcing these standards, but recommended that special attention be paid to a vaguely defined group of high-profile papers deemed most likely to have the largest scientific impact, an idea that Rossner dismisses.
He is also dismissive of the spot checks made by our journals, which randomly select one paper from each issue for screening, referring to the Nature approach as a “Russian roulette” policy.
Rossner concludes by stating that “the progress of science depends on the reliability of the entire published record, and journal editors must do their part to ensure that reliability”, and urges editors to “participate in this dialogue with the scientific community, to help devise effective and practical standards that can be applied to the published literature”.
I think that Rossner might be worrying a bit too much about the enormous number of published papers that no one will ever read or cite, let alone try to reproduce (and which are also part of the entire published record), but he is right to say that “effective and practical standards” to monitor data integrity ought to be devised.
So, let’s talk about a couple of practical issues. Checking images in every paper will use human and financial resources, the cost of which will be passed by publishers to subscribers. Is the scientific community ready to foot this bill? And if librarians don’t want to pay more money for their journals, can small, society-managed journals afford this extra expense?
What about the law of diminishing returns? If I’m not mistaken, the number of papers that J. Cell Biol. has identified as fraudulent is very small. Of course, it can be argued that it doesn’t matter if only one paper per million is the product of misconduct; what matters is that we eradicate this problem once and for all. That may be so, but if we’re talking about practical standards, I would argue that, from the practical perspective, this is not the most effective deployment of a journal’s resources. I would much prefer to have an extra News editor than an image screener.
Don’t get me wrong, though. Scientific fraud is a very serious problem that we discussed at length in the journal last May, a lead that Nature followed this week. Our journals have no tolerance for misconduct, and we will continue fighting against it.
At the same time, one wonders whether academic and legal institutions could also do more to counter scientific fraud. In Scandinavia, for example, it is mandatory for PhD students and senior scientists to receive training in good research practice. In the UK, the law protects whistleblowers from victimization or dismissal by an employer. And in Croatia, the science ministry has taken the lead since 1996 by actively teaching topics related to responsible research conduct.
Above and beyond these considerations, I think Rossner’s conclusion is correct: there needs to be a dialogue between journals and the community to devise standards for the protection of data integrity. What do you think these standards should be?