Last summer, Science retracted a much-publicized paper on the genetics of longevity by a team of Boston University researchers. Outside scientists had raised questions about the validity of conclusions based on the multiple types of genotyping platforms used in the study.
In January, the researchers — led by Paola Sebastiani, a biostatistician at the BU School of Public Health, and Thomas Perls, a gerontologist at the BU School of Medicine — published a “corrected version” of the paper. They write: “The major scientific findings were broadly the same, but there were substantial technical differences that merited publication of the corrected version in another journal, PLoS ONE.”
Nature Boston asked them for their thoughts on the systems researchers rely on to correct errors in the scientific record. For more on this topic, see the recent series of posts on Of Schemes and Memes around this month’s Science Online NYC (What do retractions tell us about keeping the scientific record straight?, Sound familiar? – detecting plagiarism, and Keeping science honest: now it’s everyone’s job).
Q. Based on your experience: Is the system broken, and what can researchers do to help fix it if it is?
For the vast majority of publications, we think the system works well, thanks to the many researchers who volunteer substantial time to reviewing each other’s work prior to publication. Sometimes mistakes are made, but we believe such cases remain a minority.
“It may be that the increase we perceive in detecting major mistakes and fraud is due to faster access to papers making it easier to detect errors.”
Q. Has the Internet made it harder or easier to set the scientific record straight?
On one side of the coin, the Internet has opened new opportunities for timely dissemination of peer reviewed data, results and scientific dialogue. In this way, errors, but also deliberate manipulations and duplications, are easier and faster to detect. On the other side of the coin, the proliferation of information on the Internet sometimes makes it difficult to distinguish between reliable information and comments that represent personal opinions and sometimes the inaccurate reporting of facts.
“There are also issues of undeclared conflicts of interest, or personal conflicts, and the anonymity guaranteed by the Internet unfortunately favors cyber-bullying and cyber-gossiping. This is a growing phenomenon, and academia does not seem to be spared from it.”
Q. Do you think there should be a way to distinguish between error and fraud in retractions?
The current system, in which authors can indicate what went wrong in a letter of retraction, allows for this already, although sometimes this approach is not fully exploited. Accompanying editorial notes can also help distinguish between errors and fraud. Unfortunately, a cultural bias exists where retraction is often equated with misconduct, or at least there is the attempt to shame investigators for major mistakes.
There are also different types of errors: for example, errors that hopelessly invalidate a discovery versus errors that change a discovery that remains scientifically valuable. In order to correct the literature, the latter necessitates dissemination of the correction, either as a corrected republication or a corrected publication in another journal. In situations when major errors lead to a retraction followed by publication of the corrected findings elsewhere, there must be a better way to link the corrected publication to the retracted one so that errors do not persist in the literature. The Internet and online publication could be better exploited to facilitate the process of correcting the literature and allow science to move forward.
Tom Perls MD, MPH and Paola Sebastiani PhD
Boston University Schools of Medicine and Public Health
Science Online NYC (SoNYC) is a monthly discussion series held in New York City where invited panellists and the in-person and online audiences talk about a particular topic related to how science is carried out and communicated online. For this month’s SoNYC the topic for discussion is: Setting the research record straight. We’re looking at issues such as retractions and plagiarism and how they relate to real or perceived increases in research misconduct. More details about this month’s SoNYC can be found here.
To complement the event, we’re running a series of guest posts discussing what steps publications are taking to deal with fraudulent research practices and what is being done to investigate and deter such practices. We’ve already heard from Richard Van Noorden, Assistant News Editor at Nature, who gave us an overview of what retractions can tell us about setting the research record straight, highlighting some recent high-profile cases of retraction and explaining why retraction rates appear to be increasing. We also compiled a Storify from a session at February’s AAAS meeting in Vancouver on Global Challenges to Peer Review, which touched on some of the challenges faced by journal editors. Next we heard from Dorothy Clyde (Dot), Senior Editor at Nature Protocols, explaining the role an editor plays in avoiding plagiarism and giving advice to all parties.