Nature Chemistry | The Sceptical Chymist

Inside our impact factor

Impact factors mean different things to different people. Some think that they are the worst thing to happen to science. On the other hand, I’ve been told stories of researchers receiving bonus payments in proportion to the impact factors of the journals they publish in – I suppose they don’t feel the same way as the first group (although I imagine this practice reinforces the strength of the opposing view). Irrespective of how you feel about impact factors, they are a measure of something – whether that measure is of any use is a different debate for a different day. What I want to do here is look at how our content is reflected in our latest impact factor – and answer a question that, as editors, we’ve heard quite a few times:

Do review articles inflate impact factors?

Anecdotally, I would say that they do – well, in chemistry publishing at least. If you look at the top three ranked chemistry journals (by impact factor), they are all review journals: #1 Chem. Rev., #2 Chem. Soc. Rev. and #3 Acc. Chem. Res. – they all have impact factors above 20. We are often asked the review-article question specifically in relation to Nature Chemistry, but I’ve also heard it proposed on more than one occasion that review articles are the reason why Angewandte Chemie has a higher impact factor than JACS. I’ll leave someone else to run the numbers on those journals, but this is the story for Nature Chemistry.

So, our 2011 impact factor is 20.524.

According to Thomson Reuters’ Journal Citation Reports (JCR), this figure comes from 4618 citations in 2011 to content that we published in 2009 and 2010, divided by the 225 pieces of content that count as ‘citable items’; i.e., 4618/225 = 20.524. Looking at Web of Science (WoS), it seems we actually published a total of 467 items in 2009/2010, but it is only the research and review articles that count as ‘citable items’. In 2009/2010, we published 29 Review articles (some of them are called Perspectives, but they all count as review articles) and 196 research Articles. The other 242 non-citable items that we published in 2009/2010 include things like News & Views articles, Editorials, Commentaries, Research Highlights, Thesis articles and Features. These 242 articles are not included in the denominator for the 2011 impact factor calculation, but the citations that they received in 2011 do get added to the numerator.
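For those who like to see the arithmetic laid out, here’s a minimal Python sketch of that calculation (the numbers are the ones quoted above from JCR and WoS; the variable names are just for illustration):

```python
# Impact factor = citations in year Y to content published in years Y-1 and Y-2,
# divided by the number of 'citable items' published in Y-1 and Y-2.
citations_2011 = 4618   # citations in 2011 to 2009/2010 content (JCR)
articles = 196          # research Articles published in 2009/2010
reviews = 29            # Reviews/Perspectives published in 2009/2010
other = 242             # non-citable items (News & Views, Editorials, and so on)

citable_items = articles + reviews   # 'Other' content is excluded from the denominator
impact_factor = citations_2011 / citable_items
print(f"{impact_factor:.3f}")        # 20.524
```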

So, to summarize so far, in 2009/2010 we published the following:

196 Articles; 29 Reviews; 242 Other

Now, let’s divide up those 4618 citations in 2011 between the content that actually received them. According to WoS, the number of citations to each category of content above is as follows:

Articles 3172 (69.9%); Reviews 1027 (22.6%); Other 339 (7.5%)

The eagle-eyed amongst you might notice that 3172 + 1027 + 339 = 4538 (not 4618). So it appears that JCR found 80 citations stuffed down the back of a sofa that weren’t included in WoS. Let’s assume that these missing citations are divided between the content types in the same proportions as the other 4538, which gives estimated total citations for each content type as follows:

Articles 3228 (69.9%); Reviews 1045 (22.6%); Other 345 (7.5%)
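If you want to reproduce that proportional split, here’s a quick sketch (assuming, as above, that the 80 extra JCR citations are spread across the categories in the same ratios as the 4538 citations WoS does account for; the variable names are mine):

```python
# Scale the per-category WoS counts up to the JCR total of 4618,
# preserving the proportions observed in the 4538 WoS citations.
wos_counts = {"Articles": 3172, "Reviews": 1027, "Other": 339}
jcr_total = 4618
wos_total = sum(wos_counts.values())   # 4538

scaled = {k: round(v * jcr_total / wos_total) for k, v in wos_counts.items()}
print(scaled)                # {'Articles': 3228, 'Reviews': 1045, 'Other': 345}
print(sum(scaled.values()))  # 4618
```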

So, our actual impact factor is given by:

(3228 + 1045 + 345) / 225 = 20.524

Let’s cut out the Reviews. This takes away 1045 citations and 29 citable items, so the result is:

(3228 + 345) / 196 = 18.230

So, without the review articles, our impact factor drops from 20.5 to 18.2.

You can remove the citations to ‘Other’ content as well; the result then becomes:

3228 / 196 = 16.469
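Putting the three calculations side by side (same numbers as above, just recomputed in one place):

```python
articles_c, reviews_c, other_c = 3228, 1045, 345   # estimated citations per category
n_articles, n_reviews = 196, 29                    # citable items per category

with_reviews  = (articles_c + reviews_c + other_c) / (n_articles + n_reviews)
no_reviews    = (articles_c + other_c) / n_articles
articles_only = articles_c / n_articles

print(f"{with_reviews:.3f}")   # 20.524 - the official figure
print(f"{no_reviews:.3f}")     # 18.230 - no review articles
print(f"{articles_only:.3f}")  # 16.469 - research Articles only
```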

So, if you want our ‘pure’ impact factor based solely on the research Articles we published in 2009/2010, it’s roughly 16.5. Don’t go comparing this figure to other journals until you’ve done the same sort of calculations to remove citations to non-research content though. To be fair, however, I imagine the vast majority of JACS’ impact factor comes from the research papers (they don’t publish much else, apart from the odd book review or (the sometimes very odd) Perspective…).

As an aside, of the review-type articles we published in 2009/2010, the highest cited in 2011 received 132 citations and the least cited in 2011 received 5 citations. For comparison, of the research Articles we published in 2009/2010, the highest cited in 2011 received 118 citations and the least cited in 2011 received just 1 citation.

What does all of this mean? In the grand scheme of things, not that much. But for Nature Chemistry – and the content we published in 2009/2010 – I can say that review articles did inflate our impact factor relative to what it would be if we hadn’t published any review articles. Whether this is true for all journals that publish both review and research articles, I can’t say. You’d need to run the numbers. My suspicion would be that review articles would have a positive impact in most cases.

Right, those of you who love impact factors, you can go on loving them. Those of you who hate them, you can carry on doing so, although I suspect that you won’t have made it this far down this post.

Stuart

Stuart Cantrill (Chief Editor, Nature Chemistry)
