News blog

Scientists join journal editors to fight impact-factor abuse

If enough eminent people stand together to condemn a controversial practice, will that make it stop?

That’s what more than 150 scientists and 75 science organizations are hoping for today, with a joint statement called the San Francisco Declaration on Research Assessment (DORA). It deplores the way some metrics — especially the notorious Journal Impact Factor (JIF) — are misused as quick-and-dirty assessments of scientists’ performance and the quality of their research papers.

“There is a pressing need to improve the ways in which the output of scientific research is evaluated,” DORA says.

Scientists routinely rant that funding agencies and institutions judge them by the impact factor of the journals they publish in — rather than by the work they actually do. The metric was introduced in 1963 to help libraries judge which journals to buy (it measures the average number of citations that a journal’s articles from the previous two years receive in a given year). But it bears little relation to the citations any one article is likely to receive, because a small fraction of a journal’s articles attract most of its citations. Focus on the JIF has changed scientists’ incentives, rewarding them for getting into high-impact publications rather than for doing good science.
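The calculation described above, and the skew that undermines it, can be sketched with invented numbers (the citation counts below are purely illustrative, not real data for any journal):

```python
# Illustrative sketch: a journal impact factor for year Y is, roughly,
# the citations received in Y to the journal's articles from the previous
# two years, divided by the number of those articles. With a heavily
# skewed citation distribution -- the typical case -- the mean says
# little about any one article.
from statistics import median

# Hypothetical citations received this year by each of a journal's
# 20 articles from the previous two years.
citations = [210, 95, 40, 12, 8, 5, 4, 3, 2, 1, 1,
             0, 0, 0, 0, 0, 0, 0, 0, 0]

impact_factor = sum(citations) / len(citations)
typical_article = median(citations)

print(f"Impact factor (mean): {impact_factor:.2f}")  # 19.05
print(f"Median article:       {typical_article}")    # 1.0
```

Two heavily cited papers give the journal a mean of about 19, yet the median article here is cited once — which is the article-level reality that a journal-level metric hides.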

“We, the scientific community, are to blame — we created this mess, this perception that if you don’t publish in Cell, Nature or Science, you won’t get a job,” says Stefano Bertuzzi, executive director of the American Society for Cell Biology (ASCB), who coordinated DORA after talks at the ASCB’s annual meeting last year. “The time is right for the scientific community to take control of this issue,” he says. Science and eLife also ran editorials on the subject today.

It has all been said before, of course. Research assessment “rests too heavily on the inflated status of the impact factor”, a Nature editorial noted in 2005; or as structural biologist Stephen Curry of Imperial College London put it in a recent blog post: “I am sick of impact factors and so is science”.

Even Thomson Reuters, the company that produces the impact factor, has advised that the metric does not measure the quality of an individual article in a journal, but rather correlates with the journal’s reputation in its field. (In response to DORA, Thomson Reuters notes that the problem is the abuse of the JIF, not the metric itself.)

But Bertuzzi says: “The goal is to show that the community is tired of this. Hopefully this will be a cultural change.” It’s notable that those signing DORA are almost all from US or European institutions, even though the ASCB has a website where anyone can sign the declaration.

(Nature Publishing Group, which publishes this blog, has not signed DORA: Nature’s editor-in-chief, Philip Campbell, said that the group’s journals had published many editorials critical of excesses in the use of JIFs, “but the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to”.)

DORA makes 18 recommendations to funders, institutions, researchers, publishers and suppliers of metrics. Broadly, these involve phasing out journal-level metrics in favour of article-level ones, being transparent and straightforward about the metrics used in assessments, and judging by scientific content rather than publication metrics where possible.

The report does include a few contentious ideas: one, for example, suggests that organizations that supply metrics should “provide the data under a licence that allows unrestricted reuse, and provide computational access to the data”.

Thomson Reuters sells its Journal Citation Reports (JCR) as a paid subscription and doesn’t allow unrestricted reuse of the data, although the company notes in response that many individual researchers use the data, with the firm’s permission, to analyse JCR metrics. “It would be optimal to have a system which the scientific community can use,” says Bertuzzi cautiously when asked about this.

And Bertuzzi acknowledges that journals have different levels of prestige, meaning an element of stereotypical judgement based on where you publish would arise even if the JIF were not misused. But scientists should be able to consider which journal suits the community they want to reach, rather than thinking “let’s start from the top [impact-factor] journal and work our way down,” he says. “The best of all possible outcomes would be a cultural change where papers are evaluated for their own scientific merit.”


  1. R. Valentin Florian said:

    Indeed, scientific publications should be assessed on the basis of their content rather than on the basis of the journal in which they are published. However, this means that each member of a committee that makes decisions about funding, hiring, tenure or promotion should read thoroughly all relevant publications of the scientists being evaluated, and be able to assess their content objectively. This is simply not feasible for a variety of practical reasons, including the limited time that committee members have, and the fact that the people available to serve on a particular committee might not be among the most relevant experts in the core field of the publications under evaluation. In countries with relatively small or developing research communities, there may simply be no unbiased experts in that core field, while routinely involving foreign reviewers might not be feasible. This is why the use of the impact factor has thrived: it allows committee members to delegate part of their evaluation to the assessment performed by the two or three reviewers who initially accepted the publication. The problem is that the impact factor is a very weak and indirect estimate of the true relevance of a particular paper.

    Committee members should instead delegate their evaluation to all of the true experts in the core field of the assessed publication, who might have read that publication anyway during their routine research activities. Each scientist reads thoroughly, on average, about 88 scientific articles per year, and the evaluative information that scientists could provide about these articles is currently lost. Aggregating, in an online database, reviews or ratings of the publications that scientists read anyway could provide important information that could revolutionize the evaluation processes supporting funding or hiring decisions.
    For this to work, scientists should publicly share ratings and reviews of the papers they read anyway.

    Spending five minutes to rate a paper just after reading it would save a couple of hours for each committee member later tasked with evaluating that paper, for each of the several committees that assess it.

    You may already start sharing ratings and reviews of the papers that you read on Epistemio, a website that I have founded.

    You may read more about this in R. V. Florian (2012), Aggregating post-publication peer reviews and ratings, Frontiers in Computational Neuroscience, 6(31). You may rate or review this paper as well.

  2. Mike Taylor said:

    “Nature Publishing Group, which publishes this blog, has not signed DORA […] DORA makes 18 recommendations […] Broadly, these involve phasing out journal-level metrics in favour of article-level ones, being transparent and straightforward about metric assessments and judging by scientific content rather than publication metrics where possible.”

    Dear Nature Publishing Group,

    If you are in favour of science, then you are in favour of these recommendations. Please don’t be dissuaded from adding your influential voice by pickiness about the details. In the end, publishers are going to be seen as either signed up to DORA or not: be on the right side of that divide.

  3. Donald Forsdyke said:

    Certainly a step in the right direction. But it comes from those heavily involved in contemporary research. As Jevons (1973) noted, “asking researchers about research evaluation is like asking a bird about aerodynamics.” Sadly, the San Francisco Declaration’s proposers do not recognize that they first have to do their homework, not just offer off-the-top-of-the-head prescriptions. And if they are too busy, then they have to recognize the discipline that, directly or indirectly, should be informing them: the History of Science.

    For example, take the establishment of the US National Institutes of Health in the 1940s. Look at the buildings at Bethesda. The biggest and busiest at the outset should have been a large Institute for the History of Science, with a mandate to study how past research discoveries were made, thus bringing some degree of rationality to the funding of future discoveries. Around this large building would have been lesser buildings dedicated to Infectious Disease, Cancer, Heart Disease, etc. Over the years, those institutes would have grown in size, and the Institute for the History of Science might even have shrunk.

    Instead, the cart was put before the horse. Even at this late hour there is still no Institute for the History of Science. For more, please see my Peer Review Webpages.

  4. marc dubert said:

    science does not progress when one promotes people who specialize in generating repetitious least-publishable units and/or in knowing best how to game the bureaucracy and how to please failed-ph.d. kingmakers at major journals, the mafiosos at NIH/NSF study sections, and their peers by citing them or giving them openings for further facade publications.

    science progresses when important scientific breakthroughs are made.

    therefore whatever contributes to increasing the probability of such breakthroughs is an important contribution to science (this includes onerous teaching beyond the textbook).

    i propose to evaluate scientists and their output according to how many established theories (or “scientific” fads) they have refuted, how many seminal hypotheses and crucial new questions they have proposed, how many breakthrough new methods they have developed, etc.

    5, 10, and 15 years after the ph.d., the scientist would write his/her own explicitly argued and heavily footnoted evaluation describing his/her vision and merits as a breakthrough thinker and scientist, by commenting explicitly on his/her established-theory refutations, novel hypotheses and questions proposed, breakthrough new methods developed, etc., and by contrasting everything with how things were before his/her work.

    the factuality of the listed results and presented context and the relevance that the evaluated person attributes to the topics and results mentioned in the self-evaluation would then be critiqued by

    a) a group of experts recommended and justified by the evaluated person and

    b) a group of international experts chosen by a panel of national experts themselves chosen by the country’s professional organization.

    (this could be refined of course; the most important thing is to avoid both invidious and crony reviewing).

    the two reviews would then be exchanged between groups and contradictions would be eliminated.

    those who would come up empty-handed would start performing more and more work for others who have delivered breakthrough work in the past (and the technical training of such “support specialists” would be augmented, and everybody would get paid the same to avoid careerists).

    these “specialists” would also carry out work for starting postdocs.

    say one would start working 25%, 50%, 75%, 100% for others, after coming up empty-handed after 5, 10, 15, and 20 years….

    of course, after delivering an important breakthrough (“hard work” and a “steady output” do not qualify as such), one would regain all of the “lost” ground (quotation marks because it must be a nightmare to have to feign that one is a creative scientist when one is not, especially if one is paid the same either way).

Comments are closed.