Nature Chemistry | The Sceptical Chymist

They did a bad bad thing

[This post is based on the editorial in the May 2011 issue — the full text can be accessed here, available for free to all registered users. We welcome feedback on our editorials in the comments section below.]

When it comes to research misconduct, burying one’s head in the sand and pretending it doesn’t exist is the worst possible plan.

With human nature as it is, the only surprising thing about scientific misconduct should be that it continues to surprise us. Scientists are human, so why should we be more surprised when they behave unethically than, say, those in business or politics? Surprised or not, we should acknowledge that scientific misconduct is happening, will always happen, and probably always has happened. With an increased awareness, however, we can all be more vigilant and perhaps better equipped to prevent it happening.

To give some examples of wrongdoing, in case anyone is unaware of its existence, consider the rise and fall of Jan Hendrik Schön. Or the less headline-grabbing, but still worrying, 60 falsified structures published in Acta Crystallographica E. Or the 70 questionable articles published by Pattium Chiranjeevi of Sri Venkateswara University.

These are just three relatively well-known examples. For more, the interested reader is directed to Retraction Watch, which, contrary to its moderators’ initial concerns that they would struggle to find enough examples to cover, has averaged around six posts per week since its inception in mid-2010. Of course, many of these retracted papers are not the result of unethical behaviour, but a worrying proportion are.

Very often the reaction to the discovery of these cases is ‘Why on earth did journal X publish THAT?’. When it comes to outright fabrication or falsification, however, editors and peer reviewers must largely take the data they are presented with at face value. Beyond a healthy scepticism, there are analytical tools available that can help identify suspicious data, for example for assessing tampered images and crystallography data. It is hard to see, however, that these would have been useful against the determined fabrication that Schön engaged in.

Journals have a much greater stake in cases of plagiarism, against which Nature Chemistry and other Nature family journals can use CrossCheck. This tool checks the text of submitted articles against a large database of published papers. As the publication ethics section of our author guidelines clearly states, “[…] when large chunks of text have been cut-and-pasted, [s]uch manuscripts would not be considered for publication in a Nature journal.”
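To make the idea concrete, here is a rough sketch of the kind of text-overlap comparison such tools perform. This is not CrossCheck’s actual algorithm (which is proprietary); it is just a minimal illustration, using word n-grams (‘shingles’), of how cut-and-pasted chunks of text can be detected automatically.

```python
# Illustrative sketch only -- NOT CrossCheck's real algorithm.
# Compares a submission against one previously published text
# by counting shared n-word sequences ("shingles").

def shingles(text: str, n: int = 5) -> set:
    """Return the set of n-word sequences in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, published: str, n: int = 5) -> float:
    """Fraction of the submission's shingles also found in the
    published text; values near 1.0 suggest large copied chunks."""
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    return len(sub & shingles(published, n)) / len(sub)

published = "the reaction proceeds smoothly at room temperature in high yield"
copied = "the reaction proceeds smoothly at room temperature in high yield"
original = "a novel catalyst enables mild selective coupling of aryl halides"

print(overlap_score(copied, published))    # identical text scores 1.0
print(overlap_score(original, published))  # unrelated text scores 0.0
```

A real service compares each submission against millions of documents and flags passages for a human editor to judge; no score on its own proves plagiarism.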

Of course, journals also have an important role in many other cases beyond plagiarism and cannot reject all responsibility. Publishers should ensure that data is made as widely available as possible. The outcome of any action a journal does take — such as correcting or retracting a paper — should be transparent, freely available and disseminated in the same way as the original paper. Investigations into data fabrication or manipulation are beyond the remit of publishers, and should be conducted by the relevant institutions and funding agencies.

One of the fundamental tenets of science is that experiments should be reproducible. ‘Peer review’ is broader than the pre-publication assessment that most people are referring to when they use the phrase. The true test comes once every aspect of a discovery can be scrutinized by one’s peers — and then built on. In spite of automated data-checkers and text-comparison tools, physically and independently recreating an experiment remains the best way to validate data.

So what should be done to deter misconduct? A shared awareness of correct research ethics needs to be fostered and passed on to the next generation. This should be emphasized by formal training from departments and institutions, which must have their own policies and guidelines for ethical behaviour and dealing with misconduct. Most of all, it needs to be put into everyday practice and an example of high standards should be shown by mentors.

Ultimately, science and the scientific record are self-correcting, but only at the expense of much unnecessary work and potential anguish for those prepared to stand up and put things straight. No one should have to put their career on the line — or on hold — to investigate and report deliberately incorrect results. It is surely far better to act preventatively by insisting on higher standards at every step of research.

[Since we wrote, re-drafted, edited, laid out and typeset this article we’ve found a few more interesting links for you all. Firstly, Science Betrayed on BBC Radio 4 by TV’s Adam Rutherford (iPlayer link probably only works in the UK). Derek Lowe blogged about a recent PLoS ONE paper on misconduct. Finally, The Scholarly Kitchen blogged about ‘paying for impact’ — the Chinese funding model for directly rewarding researchers based on which journals they publish in, which we touched on in the full editorial.]


  1. Neil said:

    Some (tidied-up) comments we’ve had through Twitter so far:

    @J_ap_M: Well put, but do you think there is some room for publishers to require primary data? As in the Schön and Acta Cryst E cases this was key.

    @CBC_excimer: Acta Cryst E is now requiring the submission of X-ray reflection data, which are harder to fabricate…

  2. gyges said:

    One of the Daubert criteria for the admissibility of scientific opinion evidence in the US is that the evidence has been subject to peer review and publication (if appropriate).

    Well, that’s out of the window, then.

  3. Unstable Isotope said:

    I’ll admit that the question of scientific misconduct is one that has fascinated me for a while. My question is how self-correcting is the peer review process? We never hear about articles that are rejected by reviewers for this reason. Is it a lot? Scientific misconduct is often found only after publication, because people start trying to do the experiments and find they don’t work. Do we need more professional reviewers (instead of busy professionals) or are we resigned to the fact that a certain number of fraudulent papers are going to be published?

    True story:

    I was a grad student when a paper about using NMR magnetism to influence enantiomer selection appeared (published in Angew Chem, I think). One of my fellow group members presented the article during a journal meeting. My advisor said right away – it’s a fraud. He was right – the paper was retracted and the student who did the work lost his Ph.D. My question: why didn’t any reviewers catch it if it was obviously fraudulent?

  4. Neil said:

    @Unstable isotope

    I think that pre-publication peer-review can only catch so much – if someone is determined to blatantly make up data and use it in response to reviewers’ requests for back-up experiments, then what can reviewers/editors do? I know that, in some extremely sensitive and rare cases, Nature has gone to the effort of requesting that reviewers actually repeat the reported experiments (in e.g. cloning work), but would people sign up as reviewers for the thousands of other articles published per week? Organic Syntheses does this routinely, but is, as far as I know, the only journal to do so.

    I guess we do have to accept that some fraudulent papers will be published. As the Royal Society’s motto says, ‘Nullius in verba’ – take nobody’s word for it.

  5. Unstable Isotope said:

    I agree Neil that you can’t stop someone determined to deceive but it looks like someone like Chiranjeevi exploited a loophole by sending to multiple journals. Is there a way to fix that problem?

  6. Christopher R Lee said:

    I’m not sure that the kind of word-for-word plagiarism that was discussed covers everything to do with plagiarism. Back in 1983, my boss asked me to do some experiments in a hurry. I did them and wrote up my part of a publication (Eur J Pharmacol 1983, 90, 393). Shortly afterwards another paper was published on the subject (J Med Chem 1983, 26, 1348), and I found out that the idea behind our project had most likely been overheard during a conference. I didn’t say anything because I had a family and a mortgage, and anyway the contribution from the originators of the idea was more thorough and innovative than mine.

    In 1993, I wrote up an analytical method for a class of genotoxic impurities in pharmaceuticals. Our team was quite pleased with it (Analyst, 2003, 128, 857), though such research is rarely earth-shattering in nature. A few years later, when the subject had become topical, another team copied our idea, the only substantial change being a change of analytical reagent. While we had chosen to take a low-key approach in the discussion section, emphasising the aspects that require optimisation, they discussed at length the advantages of the innovative aspect (reaction headspace gas chromatography) as though they had thought of it.

    I’m retired now and have nothing to gain or lose. There were, however, various underlying issues involving industrial and professional lobbies, sponsored journals, and most of all questions related to the safety of medicines (and perhaps other products) that some people don’t want to discuss. I decided to react, but had to contact the publishers of the journal to obtain the right to reply (J Pharm Biomed Anal 2010, 52, 642). I consider the outcome unsatisfactory because in the circumstances there was no possibility of a peer review process.

  7. Neil said:

    @Unstable isotope re submitting to multiple journals.

    I can’t see how that could be policed without a level of cooperation between publishers that would be pretty much unworkable – and probably undesirable to publishers and authors.

    @Christopher R Lee (btw I’ve added links to the articles you reference, or PubMed where unavailable)

    Those are both sad stories. When it comes to ‘borrowing’ ideas/concepts rather than blatant copying as you say, I guess we have to rely on editors/reviewers being up to date with the literature or have good database searching skills. Of course, it’d be best if everyone played according to the rules, but not everyone does.

    As you say – and hopefully as we addressed in the blogpost and editorial – it’s really important that journals handle correspondence arising from these problems in an open and useful way. I think journals need to realise that it’s to their advantage to be open about these things and that hiding from them helps no one. I guess it might take quite a shift in attitudes to be proud of publishing retractions/corrections/etc rather than slightly ashamed – but it shows the journal is serious about the scientific record.