Richard Van Noorden, Assistant News Editor at Nature, gives us an overview of what retractions can tell us about setting the research record straight. In his post he highlights some recent high profile cases of retraction, explaining why retraction rates appear to be increasing.
Up until the turn of this century, the research record shows that scientists hardly ever published work that was totally flawed: that is to say, so wrong that it needed to be retracted. Around the millennium, about 30 papers a year were being officially withdrawn; and so the total number of retractions in the scientific literature was admirably low. In retrospect, suspiciously low. Was the literature ever really that clean?
For suddenly, in the last ten years, retractions have shot up, rising ten-fold while the scientific literature expanded only 44%. The blog Retraction Watch has monitored them over the past 18 months. Recent examples include prominent psychologist Diederik Stapel’s fraud (particularly shocking because Stapel had such a stellar reputation); the dispute over whether or not chronic fatigue syndrome is linked to a virus; and the scandal in which cancer geneticist Anil Potti’s flawed research led to patients being enrolled in clinical trials based on faulty data. Those are the ones that made headlines – but as Retraction Watch and Neil Saunders’ live feed of retraction notices on PubMed show, rarely a day goes by without a paper being withdrawn. The norm nowadays is to expect hundreds of retractions a year, and that number may well continue to rise.
The fraction of papers retracted is still utterly minuscule, of course – just 0.02% – for retractions, after all, usually imply serious experimental or ethical errors. (According to Thomson Reuters, the number of retractions and corrections together has remained roughly stable at about 0.75% of the literature over the past two decades – but because there are so many, no one has worked out whether ‘serious’ corrections are rising significantly; the signal is swamped by the myriad corrections for trivial errors. For more data on who retracts papers, and why, see my blog post ‘The reasons for retraction’.)
But as a result of the rise, the retractions system – and, more broadly, the trustworthiness of published research – has started to attract intense debate. Should we trust the science literature less, because more of it is being withdrawn? Or more, because editors and researchers are finding it easier to catch and signal mistakes?
Retractions and trust
The nagging question underlying such trust debates is whether scientific fraud and error are actually rising. From retractions data alone, this question cannot be answered – so people tend to go with their gut reactions. Many commentators opine that today’s ultra-competitive, publish-or-perish, photoshop-savvy, tenure-obsessed and blockbuster-drug-focused scientific culture is leading to a rise in both fraud and error. Others think not: John Ioannidis, who certainly knows a thing or two about erroneous research findings, told me he didn’t think there was a sudden boom in the production of fraudulent or erroneous work, a view shared by the research ethicist Nick Steneck (see my feature, ‘The trouble with retractions’). Science journalist Jonah Lehrer made the same judgement in his blog.
There’s some logic behind this position. Surveys in which scientists report their own misconduct consistently show that self-admissions of fabrication and falsification run in the low single digits, percentage-wise; claims to have spotted such behaviour in others typically exceed 10%; and admissions to a variety of other questionable research practices reach 30% or higher – see studies in the United States, the UK, Germany (PDF in German), and a much-quoted meta-analysis. Errors and fraud are therefore undoubtedly more widespread than the 0.02% retraction rate (and a handful of mega-corrections) suggests. It seems reasonable to assume that scientists have been publishing sloppy work – and in a tiny number of cases fraudulent work – for centuries, and that the number of retracted papers vastly undercounts this. Even though rates of retraction have shot up recently, this tells us little about real rates of fraud and error. (Yet when the Wall Street Journal’s Gautam Naik covered the trend, his article was headlined ‘Mistakes in Scientific Studies Surge’.)
The internet and electronic publishing have certainly helped to change retraction norms. They make it easier to spot mistakes: image manipulation or plagiarism that once might have passed unnoticed now gets picked up by whistleblowers. Just as importantly, the internet allows papers to circulate more easily among a wider community (including non-scientists). This community effect matters: in the 20th century, a particular group of scientists might have been justified in feeling that, since everyone knew a paper was faulty, there was no need to officially signal this with a retraction (unless the signal was to warn others of a scientist’s egregious fraud). That’s increasingly not true: even very old papers can easily be resurrected among communities unfamiliar with how the field has moved on. All of this means that editors need to rethink any pre-millennial attitudes towards officially retracting or correcting outdated research, and towards how those changes are signalled on journal websites.
All in all, we should be pleased that journal editors and scientists are apparently more willing to retract. A clear retraction takes guts and hard work from both parties – and there are still many problems with the system that need fixing. As Retraction Watch has pointed out, retractions are often irritatingly opaque, leaving the reader mystified about what went wrong. Journals are inconsistent in their attitudes to retraction. And it isn’t clear that retractions always work as signals to wider readers – although a recent study suggests that annual citations of an article drop 65% after retraction, compared with control articles. The launch of the CrossMark system should further improve our awareness of changes to research papers.
Setting the record straight
More widely, the rise in retractions is welcome because it is focusing discussion on how we straighten out the research record and promote best practice in the internet era: an age in which it seems more important than ever that scientists officially retract or correct erroneous work.
As Retraction Watch’s Ivan Oransky and Adam Marcus said recently, the research paper is not a sacred, never-to-be-altered object. (I hope that no scientist ever thought it was.) If scientists continue to record their results by publishing a series of research papers that form ‘the literature’, then revisions to this literature can probably only go so far; after all, scientists work by publishing new papers, not obsessively revising and re-linking what they’ve already published. But there’s no doubt that right now we can afford to encourage many more such revisions and to accept that mistakes do happen. There are many honest retractions; while embarrassing, such admissions should be treated differently from fraud.
Straightening out the research record goes far beyond best practice on retractions and corrections. To avoid errors of unconscious bias affecting what gets published, we should encourage the publication of negative results, so that we see the 19 out of 20 hypotheses that failed, not just the one success. In a similar vein, we should force clinical trials to be registered before they start – though a US effort to attempt just that appears to be off to a poor start. And we should focus on instilling an honest scientific culture; training researchers on what is and isn’t acceptable when it comes to plagiarism (particularly those for whom English is a second or third language); and welcoming oversight institutions like the US Office of Research Integrity, although this is a kind of oversight which the UK science community apparently thinks it can do without.
Such prescriptions are easy to write down; much harder to put into practice. And the more retractions, corrections and negative result publications we encourage, the more perceptions may suggest that the research record is less trustworthy, prone to U-turns and failure. In fact – as we should not fail to point out – it will be more honest and trustworthy, and reflective of how science really works, than ever before.
Science Online NYC (SoNYC) is a monthly discussion series held in New York City where invited panellists and the in-person and online audiences talk about a particular topic related to how science is carried out and communicated online. For this month’s SoNYC the topic for discussion is: Setting the research record straight. We’re looking at the trends in retractions and how they relate to real or perceived increases in research misconduct. More details about this month’s SoNYC can be found here.
To complement the event, we’re running a series of guest posts discussing what steps publications are taking to deal with fraudulent research practices and what is being done to investigate and deter such practices. More guest posts coming soon.