News blog

It’s impact factor time!

Once a year, information company Thomson Reuters publishes updates to a measure of popularity that every science journal displays in lights: its ‘impact factor’. This event, which happened again yesterday, always produces a slightly embarrassed buzz among science journal editors. They appreciate the absurdity that a journal’s impact (a fuzzy, multi-dimensional concept) should be reported publicly by a single number; and the bias that some journals must score more highly because the communities they serve tend to cite each other more often (citations being the key measure on which the impact factor is based). And you won’t find any thinking person who doesn’t condemn how the impact factor has been abused for wider evils – such as judging the quality of scientists or research articles by the impact factor of the journals where they are published. (This, as one bibliometrician put it in a Nature special issue on metrics, is a ‘mortal sin’.)
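
For readers who haven’t met the calculation: a journal’s impact factor for a given year is the number of citations received that year by items the journal published in the previous two years, divided by the number of ‘citable items’ (research articles and reviews) it published in those two years. Here is a minimal sketch of that arithmetic – the function name and the example numbers are illustrative only, not any journal’s real data:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    'citable items' (articles and reviews) published in those years."""
    return citations / citable_items

# Illustrative numbers only, not real journal data: a journal whose
# 2009-2010 papers drew 3,600 citations in 2011, from 100 citable
# items, would have a 2011 impact factor of 36.0.
print(impact_factor(3600, 100))  # 36.0
```

Note that both the numerator and the denominator are contested: which citations get counted, and which items count as ‘citable’, are editorial judgments rather than fixed quantities.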

Abuses of the impact factor have been roundly criticized. For criticisms from Nature, see ‘Not-so-deep impact’, a still-relevant 2005 editorial that explains how research assessment “rests too heavily on the inflated status of the impact factor”. Or, for an up-to-the-minute reflection after yesterday’s updated figures, see this blog post from Nature Materials editor Joerg Heber. Thomson Reuters also explain how the impact factor should be used. Many researchers are now looking at other ways to judge scientists and research papers.

But the impact factor abides, and journal editors across the world have been checking to see how their journals have moved in the impact rankings, or where new journals are placed. (You can follow their buzz on Twitter, and for one newbie’s take, Nature Chemistry have blogged at The Sceptical Chymist about the details of their first impact factor.) To that end, here’s a ladder of this year’s top 20 and their movements from last year. Reviews journals (which don’t publish original research) are left unnamed; CA-A Cancer Journal for Clinicians is off the top of the charts with an impact factor of 87.9. Data from Thomson Reuters Journal Citation Reports.

Comments

Roger said:

The impact factor is a scientific measure; the problem is that we misuse it in every way possible. When you want one more reason to reject a grant, you can say the applicant publishes in low-impact journals. When you do not want to give a promotion, you can say the candidate publishes a lot, but not in high-impact journals. And because ‘high’ and ‘low’ are relative, the same impact factor can be interpreted favorably when you want a positive assessment. To increase citations, you can self-cite as much as you can when you publish. These are partly ethical issues, and institutional ethics training should emphasize them. Even if we find new metrics, there is no guarantee that they will not also be misused. I think journals should also take a stand: beyond posting their impact factor on their websites, they will not publicize it further.
