News blog

US scientists “more prone” to fake research? No.

A peer-reviewed study that claimed “American scientists are significantly more prone to engage in data fabrication or falsification than scientists from other countries” caused surprise when it dropped into Nature’s inbox. Could a researcher in the United States really be more likely to publish fake research?

In fact, no. The study (G Steen, J. Med. Ethics, 2010; doi: 10.1136/jme.2010.038125) looks at retractions of research papers in the database PubMed over the last decade and finds that retractions by US authors have a high fraud-to-error ratio (a third of US retractions were due to fraud rather than some sort of mistake).

But this does not mean that an individual US scientist is more likely to engage in data fraud than a researcher from another country. Indeed, a check of PubMed publications against retractions for fraud suggests that they may be less likely to do so (though the statistical significance of this finding has not yet been tested). Bob O’Hara, blogging at ‘Deep Thoughts and Silliness’, has run the analysis.

How did the statement about being ‘more prone to engage in data falsification’ make it into the paper?

The study’s author is Grant Steen, a science writer and president of Medical Communications Consultants, a company that provides medical writing services. It turns out that he interprets the phrase ‘more prone to…’ as simply the statement that US scientists as a group produce the most frauds and retractions. He agrees that his data do not address the separate question of which country’s scientists are most likely to commit fraud.

In January this year, Steen looked at all retractions of English-language papers published between 2000 and 2010 and indexed in the PubMed database, classified by first-author affiliation. He also identified whether each of the 788 retractions he found at that time was due to error (including text plagiarism) or fraud (data plagiarism, data fabrication, or data falsification).

Here are Steen’s figures:

[Figure: retractionsSteen.JPG – Steen’s counts of retractions and frauds by country of first-author affiliation]

It’s clear that the United States produces the most retractions and the most frauds. And as Steen points out, the figures show that one in three retractions from an author affiliated in the United States was attributed to fraud rather than error. For authors affiliated in other countries, the error:fraud ratio is higher. (Though India’s ratio looks worse than the United States’, that difference is not statistically significant, Steen says.)

One point to make is that, within the subset of retractions, the relatively high US fraud share may only indicate that retractions for sloppy errors are relatively rare, so that US retractions skew slightly more towards fraud cases. But at least, Steen says, it’s clear that having a ‘strong’ research infrastructure does not protect against fraud.

However, it’s also worth pointing out that this doesn’t mean that US scientists are more prone to fraud than those from other countries. PubMed is dominated by papers from US-affiliated researchers, so their fraud rate per published paper may be quite low compared with other countries. Steen does not address this point, but I quickly checked the figures using this PubMed tracker website. Here they are for the top seven countries, including my (non-peer-reviewed) figures for their total number of publications in English:

[Figure: retractions2.JPG – retraction and fraud counts for the top seven countries, alongside their total English-language publications]

It appears that US researchers have a lower fraud and retraction rate than authors affiliated with China, India, and South Korea (update: for a statistical analysis see Bob O’Hara’s blog).
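
As a rough, back-of-the-envelope illustration of the normalisation point above, here is a minimal sketch (in Python, using made-up placeholder counts rather than Steen’s or my actual figures) of the difference between the fraud share of retractions that Steen reports and the fraud rate per published paper:

```python
# Minimal sketch of the two ratios discussed in the post.
# NOTE: the counts below are purely illustrative placeholders, NOT Steen's data
# or the PubMed tracker figures mentioned above.
counts = {
    # country: (total PubMed publications, retractions, of which frauds)
    "United States": (3_000_000, 260, 85),
    "China": (500_000, 70, 30),
    "India": (200_000, 30, 15),
}

for country, (papers, retractions, frauds) in counts.items():
    fraud_share_of_retractions = frauds / retractions    # the ratio Steen reports
    frauds_per_100k_papers = 100_000 * frauds / papers   # normalised by publication output
    print(
        f"{country:<15} fraud share of retractions: {fraud_share_of_retractions:.0%}, "
        f"frauds per 100,000 papers: {frauds_per_100k_papers:.1f}"
    )
```

With numbers of this shape, a country can account for both the most frauds in absolute terms and the highest fraud share among its retractions, while still having one of the lowest fraud rates per paper published – which is the distinction being drawn here.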

Apart from the (to my ears) misleading statement about US researchers’ proneness to fraud, Steen’s paper does contain some other intriguing findings. For instance, roughly 53% of fraudulent papers were written by a ‘repeat offender’ who had written other retracted papers. Retracted papers were also more likely to appear in journals with a high impact factor – though this might be because lower impact-factor journals tend to retract fewer papers.

“The overall rate of retraction is really very low and that’s a message that should not be lost in this,” Steen adds.

Update: I’m reminded that the pattern by which retracted papers are more likely to appear in journals with a high impact factor has been noted before (M. Cokol et al. EMBO Rep. 5, 422–423; 2007) – see Nature 447, 236-237 (17 May 2007).

Comments

  1. william law said:

    Why is the ratio of error to fraud significant? What exactly is the researcher trying to extract from the data? As far as discovering who is most fraudulent, errors are irrelevant. The relevant ratio is a simple ratio of total frauds per country divided by total papers per country.

    Excluding errors altogether might clarify the matter. A separate index for errors could be calculated excluding frauds. The one category clouds the other in my estimation.

  2. Anton Kolosov said:

    I completely agree with William here. The way data is analysed is very important. To me, the author of the research on error and fraud in publication is looking for an answer while already knowing or predicting the outcome. In this type of investigation it looks rather biased. Why not categorise the various errors into separate sets and have percentage ratios for each criterion assessed? For example, retractions due to actual fraud (data manipulation, etc.), text or methods plagiarism, and other errors (such as stats mistakes, graphical or image errors, etc.) could be analysed separately. It would then make such an investigation much more meaningful.

  3. Chad said:

    Did they dig any deeper? What kind of research? What kind of fraud?

  4. Robert Klonoski said:

    Aren’t papers presented in “high-impact” publications more closely scrutinized? My guess is that when you restrict to just this subgroup, every “country” of scientists would fall into the same bucket of statistical significance. Because I can’t see exactly how “first author affiliation” would drive bad science, I’d rather see retractions broken down by funding source.

  5. Alpha Meme said:

    “That normalization for dominance would obviously give the opposite result indeed needs to be stated (for those to whom such is not obvious). Then one needs to get into that the paper should have been thrown out for other reasons. However, the Nature blog instead sticks to the same bad methods just modifying as far as is needed to get the opposite result and no further, proving that it is all about hidden agendas on both sides, not about good science.”

    from: http://www.science20.com/science_20/if_data_properly_framed_us_scientists_are_more_likely_engage_fraud#comment-53139

  6. Richard Van Noorden said:

    Thanks for your comments. Alphameme – it wasn’t my intention in this blog to analyse more deeply what data on retractions can or can’t tell us. I simply wanted to note the sentence in the paper that had surprised me and point out why I didn’t think it was correct. I doubt whether retractions data can really tell us much at all about levels of research fakery – given that, as the blog you cite notes, no doubt some research that should be retracted never actually is.

  7. Ryan B said:

    The offending study should be retracted due to fraud. Mmmm… irony.

Comments are closed.