Whose responsibility is it to ensure integrity and honesty in the scientific record, and how have those roles been changing as technology and social media advance?
Those were just a few of the issues discussed at the SoNYC event, “Setting the Research Record Straight” held on Tuesday night at Rockefeller University. In addition to the live lecture, people attended the event via live streaming and joined in on Twitter using the hashtag #SoNYC. You can watch the video of the event here and view our Storify of tweets on Of Schemes and Memes.
The panel consisted of Liz Williams, Executive Editor at the Journal of Cell Biology, John Krueger of the Office of Research Integrity, and Ivan Oransky of Reuters Health and co-founder of the blog RetractionWatch. Much of the night’s discussion focused on image manipulation in scientific manuscripts, primarily because that is one of the easier types of misconduct to detect. In the age of electronic submissions, editors can screen and analyze submitted images for any signs of manipulation.
Liz Williams kicked off the evening with a discussion of how JCB approaches detecting image manipulation. Williams stressed that journals have a responsibility to detect as much manipulation as possible, and that part of that task consists of creating clear guidelines for authors on which types of manipulation are acceptable and which constitute deceptive or fraudulent actions. While almost 50% of authors who submit papers to JCB are asked to remake an image for one reason or another, only about 1% of articles have their acceptance revoked. This suggests that most unacceptable manipulation stems from a misunderstanding of what is and isn’t acceptable, or from inexperience with the tools used to create and modify images.
John Krueger followed with data from the Office of Research Integrity on the increasing number of retractions over the years. However, the increase in retractions doesn’t necessarily indicate that scientific standards are slipping or that actual levels of misconduct are rising. Instead, Krueger speculated that increasing transparency in science, along with advances in technology and communication, makes science more visible to the public and allows research to be scrutinized like never before. One pervasive theme of the discussion was the idea that a paper is not set in stone upon publication. Rather, it is constantly under “post-publication” review by the public and by other scientists. And when one of those papers is contested and potentially retracted, the reliability of science isn’t likely to be affected, but the public perception of science can be significantly harmed.
Krueger also followed up on Dr. Williams’ discussion on image manipulation in science. Krueger speculated that as images become more important in communicating scientific research, they not only make science more transparent but also make it easier to detect data manipulation. Another interesting point he brought up was that technology not only makes it technically easier to falsify and manipulate data, but it also removes some of the inherent checks and balances in science. Now, because data collection has in many ways reached a certain level of automation, one person could collect, process, analyze, interpret and potentially manipulate their data without receiving input from other experts on whether each step, from raw data to processed results, was appropriate. Perhaps scientists as a community need to revisit some of these checks and balances and find new ways to vet data during the analysis and interpretation stages.
Ivan Oransky closed the panel presentation by reminding us that “we are all gatekeepers” (view his slideshow here). Oransky focused on the role of blogs and other “whistleblowers” in detecting dubious research. Blogs, he stated, are becoming more aggressive in questioning the scientific literature, and journals are starting to take them more seriously. However, as Dr. Krueger asserted, solid data will quell any misgivings. Of course, that assumes scientists hold onto their primary data long after it is collected and published, which isn’t always the case.
Like Dr. Krueger, Oransky stressed that a paper remains under constant review after publication. Oransky took that idea one step further, advocating that the communications resulting from post-publication review, such as additions, disclaimers and concerns about the paper, should become part of the scientific record. Services such as CrossMark are starting to do this, but when retrieving a paper it can still be difficult to know whether it has corrections associated with it, or even whether it has been retracted. Oransky mentioned several other resources that have the potential to change the world of science publishing. For example, Nature Precedings lets scientists pre-publish manuscripts and data to receive feedback from the scientific community, and Altmetrics is attempting to redefine the traditional impact factor by considering other types of citations, in addition to citations in the peer-reviewed literature, when assessing a paper’s impact or importance.
Much of the discussion implied that retractions are the result of bad science, whether or not there was an initial intent to deceive. However, as John Krueger pointed out, retractions are a healthy part of the scientific process, and a well-written retraction notice can contribute as much to the advancement of science as the initial manuscript, if not more. And, as Liz Williams put it,
“If the goal is to preserve the integrity of the scientific literature, then retractions are a sign of progress.”