Let’s stop playing with numbers, suggests Cheng-Cai Zhang of Aix-Marseille Université and Laboratoire de Chimie Bactérienne-CNRS in the latest EMBO Reports (10, 664; 2009).
He writes: “It is becoming increasingly fashionable to play with numbers, or letters representing numbers (for example, h and w), to measure the performance of a scientist or a scientific journal. Developing algorithms to calculate such numbers is becoming a science in itself, with each author claiming that his or her metrics measure better than others… let us go back to the basic question: is it possible to measure the contribution, even relative, of a scientist in any particular field without bias by relying on metrics? The answer is no… the solution is the review process, conducted by peers in both the funding and publishing systems, which already has the most essential role in assessing scientific quality and thus advancing science. Who among us has ever relied solely, or even mainly, on indexes or citations to help us make a decision when reviewing a project or a manuscript submitted to a journal? I would argue that none of us rely on this type of data at all. It is therefore time to stop these futile efforts in searching for a magic number—which does not exist, by the way—and instead to rely on and trust the judgement of our peers to measure the scientific achievements of a scientist or the relevance of a journal.”