Cleaning up an image in Photoshop or manipulating the data?

Hwang. Poehlman. Schon. Van Parijs. People who will go down in scientific infamy: they faked results. Academic misconduct is a hot topic these days, with universities, government agencies, journals (including Nature) and scientists asking themselves tough questions: just how common are data fabrication and falsification, and what can we do to catch and prevent them?

Those questions were front and center at a conference I attended about a week ago at Harvard called Data Fabrication and Falsification: How to avoid, detect, evaluate and report. It was hosted by Harvard Medical School, the Harvard School of Public Health and the Harvard hospitals, so there was a mix of researchers and clinicians, which provided an interesting contrast.

During the one afternoon I was able to attend, Julie Buring, an epidemiology professor with the Harvard School of Public Health, gave a talk about her involvement with large-scale, multi-center clinical trials involving hundreds or thousands of patients in the US (eg the Women’s Health Study). She discussed the resources and systems these trials have in place to verify the quality of data coming in from participating clinical centers across the country. That included data coordinators whose job it is to check that the data don’t look “funny.” Such large trials also have people make unannounced site visits, either as part of a routine or to follow up when data from a center look fishy.

But it was a different story when it came down to individual labs and researchers under enormous pressure to get good results and publish. Joan Brugge, chair of Harvard’s cell biology department, said that rather than outright making up data, she thought the more common (and least talked about) form of academic misconduct was the selective use of data: excluding failed experiments, ignoring data that don’t fit, presenting hand-picked data as representative. The effect of bias, she said, can be very strong, especially in molecular and cell biology, where the decision to include or exclude data can be a bit more subjective. It’s one thing to be taught critical thinking skills, she said, but will researchers actually use them when competitors are nipping at their heels and funding is growing tight?

She praised the Journal of Cell Biology as a leader in providing clear guidelines on what can and cannot be done to images. She said that the JCB apparently asks for original data from 10 percent of accepted manuscripts. Seems like the JCB has taken the policing role quite seriously.

The most fascinating talk for me that afternoon was one by John Krueger of the US Office of Research Integrity about detecting image manipulation. In the age of Photoshop, the number of image-falsification cases the office sees has been increasing rapidly. Sadly, this kind of data manipulation is most often done by younger scientists and students skilled in image software. He reported that blots are the most common type of image being messed with, but microarray images and other figures also rank high on the list.

Krueger outlined the techniques the ORI uses (including the clever use of Photoshop) to sniff out suspect images and prove that they were falsified: it was like CSI for science. While it’s easy to fake images, it’s just as easy to spot them, Krueger said. I think anyone seeing this presentation would think twice about mucking around with their images.
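Krueger didn’t share his exact tools, but to give a flavour of how automated checks can flag a suspect figure, here is a minimal sketch of one common forensic idea: scanning an image for pixel-identical regions, which can betray a copy-pasted band on a blot. This is an illustration only, not the ORI’s actual method; the image representation (a plain 2D list of intensities) and the function name are hypothetical.

```python
# Illustrative only: flag exact-duplicate tiles in an image, a crude
# version of the copy-paste detection used in image forensics.
# An image is assumed to be a 2D list of pixel intensities.

def find_duplicate_tiles(image, tile=4):
    """Return pairs of (row, col) positions whose tile x tile blocks
    of pixels are identical but sit at different places in the image."""
    seen = {}    # tile contents -> first position where it appeared
    pairs = []
    rows, cols = len(image), len(image[0])
    for r in range(rows - tile + 1):
        for c in range(cols - tile + 1):
            block = tuple(tuple(image[r + i][c:c + tile])
                          for i in range(tile))
            if block in seen:
                pairs.append((seen[block], (r, c)))  # duplicate found
            else:
                seen[block] = (r, c)
    return pairs


# A 4x8 image whose right half exactly duplicates its left half,
# as if a band had been copied and pasted:
img = [
    [1,  2,  3,  4,  1,  2,  3,  4],
    [5,  6,  7,  8,  5,  6,  7,  8],
    [9, 10, 11, 12,  9, 10, 11, 12],
    [13, 14, 15, 16, 13, 14, 15, 16],
]
print(find_duplicate_tiles(img, tile=4))  # [((0, 0), (0, 4))]
```

Real forensic tools are far more sophisticated (they must survive rescaling, compression and contrast tweaks), but the principle is the same: duplicated structure that should not be there leaves a statistical fingerprint.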
