Mistakes, goofs and outright deceptions litter the scientific literature, but there is something that can be done about it. Scientists, writers and journal editors gathered at Rockefeller University in New York last evening to discuss increases in retracted research over the past several years and how best to correct the research record.
“Image manipulation is not a new phenomenon, but it is an increasingly visible one,” said Liz Williams, executive editor of the Journal of Cell Biology (JCB), a Rockefeller University Press journal that has led the way in ferreting out manipulated images before their publication. She was one of three panelists whom I helped to bring together for the latest Science Online New York City (SONYC) event, hosted by nature.com and Rockefeller University.
JCB has one full-time employee who checks the figures of every paper accepted by the journal, looking for digital traces of manipulation. Not every manipulation raises a red flag, but as many as 50% of papers require at least one figure to be redone because it did not conform to the journal’s standards. In about 10–15% of cases, Williams said, the authors are asked to send in original data for checking, and in about 1% (roughly 35 papers in the 10 years that JCB has been screening figures), the manipulation is so egregious that acceptance of the paper is revoked. In those cases, said Williams, the journal notifies the “proper authorities”, usually school administrators. Williams noted that to give readers a peek behind the showroom-quality figures in JCB’s published papers, the journal encourages researchers to deposit their raw-data images on its website for readers to peruse.
John Krueger works as a scientist–investigator at the Office of Research Integrity (ORI) in Rockville, Maryland, a government agency responsible for overseeing university investigations into research misconduct. He talked about the growth in retractions (a topic ably covered by my colleague Richard van Noorden in the run-up to the panel discussion), but concluded that it probably does not represent a rise in misbehaviour. Rather, it reflects the hundreds of thousands of extra eyes on every paper that can help to check for things such as image manipulation or plagiarism, both of which may eventually lead to a retraction (though not necessarily; some instances are innocuous). “Each paper is now reviewable in perpetuity,” he said. That doesn’t always mean that experts will be the ones making accusations of image manipulation or even fraud, however. Krueger has helped to develop a series of Photoshop plugins called “forensic droplets” that, with guidance, can help practically anyone to identify signs of image manipulation.
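The checks such tools automate are often simple in principle. One classic red flag, for example, is a region that has been copied and pasted elsewhere in the same image. The sketch below (my own minimal illustration in Python, not the ORI’s Photoshop-based droplets, whose internals are not described here) shows the basic idea: hash every small pixel block and flag identical blocks that appear in well-separated positions.

```python
from collections import defaultdict

def find_duplicate_blocks(pixels, block=4):
    """Find identical block x block pixel regions that occur in more
    than one place in an image, a crude sign of copy-paste editing.

    `pixels` is a 2D list of pixel values (e.g. greyscale intensities).
    Returns a list of position groups, each a list of (row, col) pairs.
    """
    h, w = len(pixels), len(pixels[0])
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            # Use the block's contents as a hashable key.
            key = tuple(tuple(row[x:x + block]) for row in pixels[y:y + block])
            seen[key].append((y, x))
    # Keep only blocks found in more than one place, ignoring groups of
    # overlapping neighbouring windows (e.g. uniform background).
    return [locs for locs in seen.values()
            if len(locs) > 1
            and max(abs(a - c) + abs(b - d)
                    for a, b in locs for c, d in locs) >= block]
```

Real forensic tools are far more sophisticated (they must cope with rescaling, rotation, compression artefacts and contrast tweaks), but even this naive version will catch a verbatim duplicated patch in otherwise distinct pixel data.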
Ivan Oransky, executive editor with Reuters and co-founder, with Adam Marcus, of the blog RetractionWatch, carried on the theme, discussing retractions and their negative effects on different fields of science. He described some of the positive ways in which retractions are handled (generally when journals and authors are transparent about the source of the error that led to the retraction) and contrasted them with the sins of journals that publish a sentence or two at most and offer no insight into why a paper was retracted. He credits the success of the blog (it covered about two-thirds of the retractions published last year) not only to the editors’ own desire to follow certain topics but also to tips from the community. Oransky argued that online readers will increasingly become a force for rooting out publication anomalies, whether they reflect fraud or just sloppiness, creating, in effect, a sort of post-publication review of papers. He also pointed to other approaches, such as pre-peer-review publishing efforts modelled on the physics preprint server arXiv, and Crossmark, a product still in testing that treats a paper as a constantly evolving document shaped by input and comments from the community. Anyone, he said, can be part of the solution.
Update March 21: The Schemes and Memes blog team has posted more details from the event last night, including a Storify version pulled from the Twittersphere.
Image: A slide from Ivan Oransky’s presentation featuring data from Neil Saunders. Update 21 March: Ivan posted the rest of his slides here.