News blog

Open access articles not cited more, finds study

Posted on behalf of Zoë Corbyn.

Make an article open access and it is more likely to get cited – at least that is one powerful argument in open access advocates’ arsenal to get researchers to make their work publicly available.

But new research published in the Federation of American Societies for Experimental Biology (FASEB) Journal suggests this may not be the case. The research – which its author claims uses a more rigorous methodology than many previous studies – shows that while providing open access to scientific journal articles certainly leads to more downloads, it simply doesn’t translate into citations.

To test whether open access articles received more citations than articles requiring a subscription, Cornell University communication researcher Philip Davis convinced the publishers of 36 journals spanning a range of subjects to randomly make roughly one in five of the 3,245 articles they published between January 2007 and February 2008 open access.

Davis then compared the citation counts of the 712 open access articles with those of the 2,533 controls, finding that while the open access articles were downloaded more frequently in the first year, they were cited no more frequently – nor any earlier – over the three-year period.

“The widely accepted ‘open access citation advantage’ appears to be spurious,” said Davis, who is also the executive editor of the controversial Scholarly Kitchen blog, which is not afraid of taking the open access movement to task. “There are many benefits to the free access of scientific information, but a citation advantage doesn’t appear to be one of them.”

The results stand in contrast to those of open access advocate Stevan Harnad, from Southampton University. He led a study, published in PLoS ONE last year, that used a different method (comparing self-archived with non-self-archived subscription journal articles) and found that open access articles received “significantly more” citations.

Harnad criticised the current study as “the sound of one hand clapping” with “no basis” for drawing the conclusions it did. Davis’s sample is likely “too small” to show the citation advantage, he says, and the study does not look properly at the key question of the extent to which the citation advantage is real versus simply an artefact of researchers selectively archiving their better (and therefore more citable) papers.

Davis’s study notes that articles that were also self-archived did receive 11% more citations on average within the three years, but with only 65 such articles (2% of the sample) the effect was not statistically significant.

“We do leave open the possibility that there is a real citation effect as a result of self-archiving but that we simply do not have the statistical power to detect it,” says Davis, though he also stresses that it would be “difficult, if not impossible” to tease out whether any effect was the result of enhanced access or just better (more citable) papers being self-archived.
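Davis’s point about statistical power can be illustrated with a back-of-the-envelope Monte Carlo sketch. Only the group sizes (65 self-archived articles versus 2,533 controls) and the 11% advantage come from the study; the mean citation count, its spread, and the normal approximation below are all made-up illustrative assumptions, not figures from the paper.

```python
import math
import random

random.seed(0)

def estimated_power(n_treat, n_ctrl, ctrl_mean, advantage, sd, trials=1000):
    """Monte Carlo estimate of the probability that a two-sided z-test
    (alpha = 0.05) detects a given relative citation advantage."""
    detections = 0
    treat_mean = ctrl_mean * (1 + advantage)
    for _ in range(trials):
        treat = [random.gauss(treat_mean, sd) for _ in range(n_treat)]
        ctrl = [random.gauss(ctrl_mean, sd) for _ in range(n_ctrl)]
        diff = sum(treat) / n_treat - sum(ctrl) / n_ctrl
        se = math.sqrt(sd**2 / n_treat + sd**2 / n_ctrl)
        if abs(diff) / se > 1.96:
            detections += 1
    return detections / trials

# 65 self-archived articles vs. 2,533 controls, 11% citation advantage;
# a mean of 10 citations with standard deviation 10 is purely hypothetical.
power = estimated_power(n_treat=65, n_ctrl=2533, ctrl_mean=10.0,
                        advantage=0.11, sd=10.0)
print(f"estimated power: {power:.2f}")
```

Under these assumed numbers the simulated power comes out well below 50%, consistent with Davis’s caveat that a real 11% effect could easily go undetected in a subsample of only 65 articles.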

Image: photo by theseanster93 via Flickr under Creative Commons


  1. Karthik said:

    This is very sad. More and more articles should be made publicly available to hasten and improve scientific growth. The world should not depend on highly expensive journals and publishers to obtain the papers they want.

    I hope the trend of open-access journals discussed here will change over time.

  2. Stevan Harnad said:

    ON METHODOLOGY AND ADVOCACY: Davis’s Randomization Study of the OA Advantage

    Suppose many studies report that cancer incidence is correlated with smoking and you want to demonstrate in a methodologically sounder way that this correlation is not causal but just an artifact of the fact that the people who self-select to smoke are more prone to cancer. So you test a small sample of people randomly assigned to smoke or not, and you find no difference in their cancer rates. How can you know your sample was big enough to detect the reported correlation at all unless you test whether it’s big enough to show that cancer incidence is significantly higher for self-selected smoking than for randomized smoking?

    Many studies have reported a statistically significant increase in citations for articles whose authors make them OA by self-archiving them. To show that this citation advantage is not causal but just a self-selection artifact (because authors selectively self-archive their better, more citable papers), you first have to replicate the advantage for the self-archived OA articles in your sample, and then show that the advantage is absent for the articles made OA at random. But Davis showed only that the citation advantage was absent altogether in his sample. The likely reason is that the sample was much too small (36 journals, 712 articles randomly OA, 65 self-archived OA, 2,533 non-OA).

    In a recent study (Gargouri et al 2010) we controlled for self-selection with mandated (obligatory) OA rather than random OA. The far larger sample (1984 journals, 3055 articles mandatorily OA, 3664 self-archived OA, 20,982 non-OA) revealed a statistically significant citation advantage of about the same size for both self-selected and mandated OA.

    If and when Davis’s requisite self-selected self-archiving control is ever tested, the outcome will either be (1) the usual significant OA citation advantage in the self-archiving control condition that most other published studies have reported — in which case the absence of the citation advantage in Davis’s randomized condition would indeed be evidence that the citation advantage had been a self-selection artifact that was then successfully eliminated by the randomization — or (more likely, I should think) (2) there will be no significant citation advantage in the self-archiving control condition either, in which case the Davis study will prove to have been just a non-replication of the usual significant OA citation advantage (perhaps because of Davis’s small sample size, the fields, or the fact that most of the non-OA articles become OA on the journal’s website after a year).

    Until that requisite self-selected self-archiving control is done, this is just the sound of one hand clapping.

    Readers can be trusted to draw their own conclusions as to whether Davis’s study, tirelessly touted as the only methodologically sound one to date, is that — or an exercise in advocacy. 

    Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Brody, T., Carr, L. and Harnad, S. (2010) “Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research”, PLoS ONE 5(10)

  3. Phil Davis said:

    While I thank Zoe Corbyn and Nature for drawing attention to my research, I take issue with how this piece was written, what it emphasized and what it omitted.

    First, the piece is not balanced. Zoe focuses on the weaknesses of the study and not its strengths, relying on the hyperbole of a vociferous rival to make unsubstantiated comments and promote his own work. [I see that Stevan Harnad has already commented on the piece.]

    Second, it characterizes my affiliation (as a part-time, unpaid blogger) as “controversial”. The purpose of this claim appears to be little more than an attempt to undermine my credibility as a researcher.

    Last, Corbyn presents this story as a controversy between two longtime rivals. While I understand that drama is good for readership, I agreed to the interview for Nature as I regard the source as credible and its aims scientific.

    In sum, the reporting does little but fan the embers of a 10-year-long debate. I truly expected more from Nature.

  4. Christopher Gutteridge said:

    I think it’s really important that people are studying this question. Randomly selecting some papers to be open and testing if that increases citations seems an excellent approach.

    The fact that the results in this study were not at all what I expected means that I will be more cautious about arguing for OA from the perspective of a citation advantage.

    It does show that OA papers are read (well, downloaded) significantly more often. That alone is important.

    What may be interesting is whether there is a different result in different fields. Looking at the selection of journals, they are clustered around the biological and medical sciences. In my experience the culture surrounding medical publishing is very different to that around the arXiv subjects, such as physics and computer science, where people are more comfortable with pre-prints, which are anathema to medical publishing. With good reason too: the risks are very different. The public rarely make life-and-death decisions based on theoretical physics!

    Sadly, I can see this getting politicised to the point where it’s a monthly story, like “red wine” in the tabloids. Open Access is good for you, no it’s not, yes it is. Sigh. Let’s not do that, eh?

    I would love to see a wider study over more subjects. I suspect we would see different citation profiles in engineering journals. Whatever the result, it would be useful to know! If repeated studies really show that there isn’t a citation advantage for OA (in any scientific fields) then we should stop using it as an argument for OA. I’m not worried; it’s not like there aren’t heaps of other good arguments!

    What I’d really like is some data which showed a citation advantage for papers published in HTML but otherwise identical to papers published in PDF. But only if the theory was accurate!

  5. Phil Davis said:

    Christopher Gutteridge wrote: “Sadly, I can see this getting politicised to the point where it’s a monthly story, like ‘red wine’ in the tabloids. Open Access is good for you, no it’s not, yes it is. Sigh. Let’s not do that, eh?”

    I agree completely, and am disturbed by how Zoe Corbyn framed this piece as a political controversy between two dichotomous views.

  6. Martin Kulldorff said:

    It is important to realize that this is a study that evaluates whether an open access article in an otherwise subscription-based journal gets more citations and downloads than a standard non-open access article. It is not a study that evaluates whether an article gets more citations or downloads in an open access journal than it would have in a subscription-based journal, due to the fact that it is open access. The answers to these two questions could potentially be very different. For example, if I see a reference to an article, I am more likely to look it up if it is in a journal that I know I can access. Unfortunately, it is very difficult to know that an open access article in a subscription-based journal is open access until one goes to the page to try and download it.

    This is not a criticism of the study design. As a biostatistician, I think it is great that the author used randomization, and for the linear regression analyses, the sample size is large. The study is very interesting for a scientist contemplating paying extra for open access in a subscription-based journal, using, for example, Springer’s Open Choice. Rather, this is a criticism of the interpretation of the results. It is wrong to use this study to make claims about citations to articles in open access journals, which is the vast majority of open access articles. It could very well be that an article will get more citations if it is published in an open access journal, and this study does not refute that.

    While it is probably impossible to randomize a sufficient number of whole journals to open access, it could still be possible to do a thorough randomized study of the second, more interesting question concerning open access articles in open access journals. One could recruit a number of scientists, identify a specific article that they are about to submit, match it with a similar article from another scientist, and then randomize one of the articles to be submitted to an open access journal and the other to a subscription-based journal.

  7. Stevan Harnad said:

    HOW USERS ACCESS CONTENT: Reply to M Kulldorff

    On the web today, it is one click to see whether any article — whatever journal it is published in — is or is not freely accessible online.

    Hence it is extremely unlikely that it would be publishing in an OA journal — rather than just making the article OA — that increased downloads or citations. Moreover, although there are a few OA journals that are among the top journals, most OA journals are not among the top journals.

    The methodology of comparing OA vs. non-OA articles within the same journal is the right one. Comparing OA journals with non-OA journals (no matter how hard one tries to match them for content and quality) is comparing apples and oranges.

    And the idea of conducting a randomized study, submitting equivalent articles to OA and non-OA journals, is extremely unrealistic.

    Besides, there’s no need for it. As noted above, mandatory OA is as good a control for a putative author self-selection artifact (i.e., authors self-selectively making better, hence more citable articles OA) as randomization; and the result, with far larger and broader samples than the randomization study, is a significant OA citation advantage for both self-selected and random OA.

    There are now plenty of studies and reviews of the OA citation advantage:

    — S. Hitchcock (2011) “The effect of open access and downloads (‘hits’) on citation impact: a bibliography of studies”

    — A. Swan (2010) “The Open Access citation advantage: Studies and results to date”

    — B. Wagner (2010) “Open Access Citation Advantage: An Annotated Bibliography”

    What would be useful and opportune at this point would be a meta-analysis.

  8. Stevan Harnad said:


    The sentence above (in the paragraph beginning “Besides,…”) should have read:

    “…a significant OA citation advantage for both self-selected and mandated OA”

    and not:

    “a significant OA citation advantage for both self-selected and random OA”

  9. Philip Montenigro said:

    On the brighter, more realistic side of things: I consider this an important and great debate indeed. The checks and balances of “science” seem, for some, at stake as peer review gets a twist in open access. Such questions will undoubtedly intensify as the next generation of post-docs and graduate students wrestle with the pressure to have publications on their resume. ANY publication. The more the merrier!? I am not so sure. My concern: if you have a great idea but haven’t the time to delicately parse through it, you could publish on a rapid, open platform with the intention of developing the content later, based on pertinent feedback (or negative feedback, for that matter). In this line of thinking, why not submit all work as soon as possible, and only improve it when directed to by your wide and public audience? Some sarcasm is implied, but in the competitive climate so many are thinking more about their resume than about the research. But let’s face it: as we change, so will the game.

    In the not-so-distant future the trend towards open access publication will catch up with itself. Consider that maybe the open articles were not of the same caliber, and that this is why they were not cited. Perhaps the editors chose poorly. Personally, I see growing evidence that the quality is equivalent, if not outstanding by comparison. But as the trend develops, the next “PI-touting” generation will be familiar with open access, and much more liable to cite and to confront “poor work”. There will be no hiding. Could the concept of “authorship” perhaps evolve into one that is ultimately fluid and productive? It sure would be something: carry out a bench procedure or clinical trial only after checking in with our faithful bloggers. But then again, what about credentials? As the primary researcher, there should (one hopes) be no one closer to, or more vested in, your work than you.

    Ultimately I am fully behind this trend, and I think these arguments are going to resolve in a golden age when open access means keen collaboration between experts, none too rushed for criticism. Open to the young student “responder,” such as myself, but holding him to high standards, starting with those seen in Nature’s community guidelines. Until then, I too feel the pressure of being involved in ultra-competitive research, hoping my mentors give me my due. I think one injustice of the current system is that there simply are not enough avenues for overly buzzing minds to put forth their work. Medical students and graduate students should be more involved in publication and authorship. Simple. Lastly, if you don’t believe the trend is already on its way, pull up the New York Times articles on a truly historic race between Harvard and Columbia to publish the molecular mechanism by which tau moves through membranes in Alzheimer’s disease. Columbia published just a few days ahead of Harvard, in PLoS Medicine no less. I foresee a Nobel Prize, the winner a PLoS publication no less.

Comments are closed.