A guest post from Willy Aspinall
Department of Earth Sciences, Bristol University, Bristol BS8 1RJ UK.
The Nature Editorial (‘Experts still needed’, Nature 457, 7-8; 2009; free to access online) and Harnad’s related Correspondence on research performance metrics in the 2008 Research Assessment Exercise (Nature 457, 785; 2009) prompt me to suggest that an additional, complementary metric is needed: one that measures the accomplishments of research scientists who act as peer reviewers for journals.
Good reviewing is very time-consuming and, in some ways, just as challenging as authoring an original research paper; time spent doing it well is time taken away from one’s own research. Indeed, the thoughts and comments of a good referee can represent a fundamental contribution to the science, as well as to the quality of the published paper, and this input should be recognized and measured (the American Geophysical Union regularly celebrates ‘excellence in reviewing’ with citations by its journal editors). It is probably fair to say, too, that demonstrably good refereeing usually begets ever more requests to review manuscripts, making further incursions on the diligent and proficient scientist’s time.
Perhaps a metric for this essential scientific activity could be constructed by summing the number of papers an individual scientist referees per year, with each review weighted by the Impact Factor of the journal concerned. Because refereeing is usually a solo activity, a metric for this skill, and for the professional commitment it reflects, would be less prone to the shortcomings of performance measures that attempt to gauge multi-author citations, for instance. Combining such a ‘refereeing metric’ with other citation-related metrics to obtain a more comprehensive performance score for an individual scientist should not be an insuperable problem, and the combined measure could then be pooled, as the Nature Editorial indicates, with expert evaluation.
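The proposed metric is simple enough to sketch in code. The computation below follows the description above (one Impact Factor weight per completed review, summed over a year); the journal names and Impact Factor values are purely illustrative, not real figures.

```python
def refereeing_metric(reviews, impact_factors):
    """Sum the journal Impact Factor over all reviews completed in a year.

    reviews: list of journal names, one entry per manuscript refereed.
    impact_factors: mapping from journal name to its Impact Factor.
    """
    return sum(impact_factors[journal] for journal in reviews)


# Hypothetical example: four reviews in one year across three journals,
# with made-up Impact Factor values.
impact_factors = {"Journal A": 31.4, "Journal B": 6.2, "Journal C": 2.8}
reviews = ["Journal A", "Journal B", "Journal B", "Journal C"]

print(round(refereeing_metric(reviews, impact_factors), 1))  # 46.6
```

Note that, as sketched, two reviews for the same high-impact journal count twice; a refinement might cap per-journal contributions or normalize by field, but the basic sum is what the proposal describes.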