Peer-to-Peer

A metric for measuring peer-review performance

A guest post from Willy Aspinall

Department of Earth Sciences, Bristol University, Bristol BS8 1RJ UK.

The Nature Editorial (‘Experts still needed’, Nature 457, 7-8 (2009); free to access online) and Harnad’s related Correspondence item on research performance metrics in the 2008 Research Assessment Exercise (Nature 457, 785; 2009) prompt me to suggest that an additional, complementary metric is needed, one that would measure the accomplishments of research scientists who act as peer-reviewers for journals.

Good reviewing is very time-consuming and, in some ways, just as challenging as authoring an original research paper; time spent doing this well is time removed from one’s own research work. Indeed, the thoughts and comments of a good referee can often represent a fundamental contribution to the science as well as the quality of a published paper, and this input should be recognized, and measured (the American Geophysical Union regularly celebrates ‘excellence in reviewing’ with citations by its journal editors). It is probably fair to say also that tangible good performance in refereeing usually begets ever more requests to review even more manuscripts, with further incursions on the diligent and proficient scientist’s time.

Perhaps a metric for this essential scientific activity of peer-reviewing might be constructed by summing the number of papers refereed by the individual scientist per year, each review being multiplied by the Impact Factor of the journal concerned. As refereeing is usually a solo activity, a metric for this skill, and for the related professional commitment, would be less prey to the shortcomings of performance measurement associated with metrics that attempt to gauge multi-author citations, for instance. Combining a ‘refereeing metric’ with other citation-related metrics to obtain a more comprehensive performance score for an individual scientist should not be an insuperable problem – and this measure can be pooled, as indicated in the Nature Editorial, with expert evaluation.
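
By way of illustration only, here is a minimal sketch (in Python) of how such a yearly refereeing score might be computed under the proposal above, assuming a hypothetical list of reviews each tagged with the Impact Factor of the journal concerned; the journal names and figures are placeholders, not real data.

    # Hypothetical sketch of the proposed refereeing metric: the score for a
    # reviewer in a given year is the sum, over all manuscripts refereed, of
    # the Impact Factor of the journal concerned. Journal names and IF values
    # below are illustrative placeholders only.
    reviews_2009 = [
        {"journal": "Journal A", "impact_factor": 3.1},
        {"journal": "Journal B", "impact_factor": 1.4},
        {"journal": "Journal A", "impact_factor": 3.1},
    ]

    refereeing_score = sum(r["impact_factor"] for r in reviews_2009)
    print(f"Refereeing metric for the year: {refereeing_score:.1f}")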

Willy Aspinall

Comments

  1. bernd kochanowski said:

    “As refereeing is usually a solo activity,[…]”

    When I was working as a young doctor at a microbiological institute, the full professor usually had the reviews written by us young ones. I’d assume he was/is not the only one.

  2. Bjoern Brembs said:

    Obviously, the reputation structure in science is hopelessly out of date. I agree that adding the huge amount of reviewing every scientist performs over their career to their reputation is long overdue. I commend the author for pushing this idea.

    However, factoring Thomson Reuters’ Impact Factor into any such measure might prove counterproductive.

    1. The impact factor is a discredited measure: it is negotiable, irreproducible and statistically inadequate.

    2. Creating an incentive to do good reviews only for high-IF journals will skew the overall reviewing process.

    On a personal note, I must also object to my reviews for journals which decide not to play the IF game somehow counting less in this metric: I always try to deliver the best review I can, irrespective of the journal.

    Therefore, I wholeheartedly support measuring peer review and adding it to our reputation structure. However, skewing the metric by factoring some JournalRank into it is not a good idea. It is an even worse idea if a discredited measure such as the IF is used.

  3. Massimo Pinto said:

    I agree with Bjoern. There should not be any encouragement to review a manuscript for a high impact factor journal over allegedly “minor” journals.

    I would hope to see manuscript refereeing acknowledged in a non-metric, non-measurable way, possibly via letters of reference written by journal editors?

    Driving refereeing into the sea storm of metrics is a dangerous route. What would come next, metrics for those who referee project grants?

  4. Willy Aspinall said:

    I’m happy this post created some discussion. Bernd may be right that I am naive: in my own fields of interest, I have never asked a colleague to help on a review, nor have I ever been asked by one. If the practice he describes is as prevalent as he implies, it surprises me! However, I do subscribe to his reservations about anonymous reviewing, and do not withhold my own identity when this option is offered. Like Bjoern and Massimo, I too try to do the best job on any paper, for whatever journal, because it should represent a (small) responsible contribution to science. At the moment, however, the contributions of papers and authors continue to be scored by suspect metrics, yet the poor peer-reviewer does not get even this flawed measure of reputational approval. Massimo’s other point, about perhaps utilising some non-metric acknowledgment, say by journal editor reference, is an interesting one. As I noted earlier, the AGU acknowledges “Excellence in Reviewing” and doubtless this enhances the reputations of those so applauded, but it carries the risk that all that journal’s other reviewers are somehow branded less than “excellent”.

  5. G P S Raghava said:

    There is an interesting article, “Is citation a good criteria”, recently published in “Nature India”. It discusses in detail “citation”, the criterion commonly used for evaluating the performance of scientists, departments and nations, and it also covers the h-index, the g-index and the impact factor of journals. For details see the following link.

    Article link: https://www.nature.com/nindia/2009/090515/full/nindia.2009.133.html

Comments are closed.