Citation-based quality metrics were discussed on Nautilus earlier this year. One of those, the h (for highly cited) index, was covered recently in a News story, and is the subject of two Correspondence letters in the current issue of Nature.
Michael C. Wendl of Washington University School of Medicine writes (Nature 449, 403; 2007):
The h-index (the number n of a researcher’s papers that have received at least n citations) may paint a more objective picture of productivity than some metrics, as your News story ‘Achievement index climbs the ranks’ (Nature 448, 737; 2007) points out. But for all such metrics, context is critical.
Many citations are used simply to flesh out a paper’s introduction, having no real significance to the work. Citations are also sometimes made in a negative context, or to fraudulent or retracted publications. Other confounding factors include the practice of ‘gratuitous authorship’ and the so-called ‘Matthew effect’, whereby well-established researchers and projects are cited disproportionately more often than those that are less widely known. Finally, bibliometrics do not compensate for the well-known citation bias that favours review articles.
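For readers who prefer concreteness, the definition quoted above — the largest n such that n of a researcher's papers have at least n citations — is easy to compute from a list of per-paper citation counts. A minimal sketch (the function name and sample numbers are illustrative, not drawn from either letter):

```python
def h_index(citations):
    """Return the largest n such that n papers have at least n citations each."""
    # Sort citation counts in descending order; walk down the ranked list and
    # record the last rank at which the count still meets or exceeds the rank.
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: the fourth-ranked paper has only 3 citations
```

Note that the single heavily cited paper in the second list does nothing to raise the index — one of the properties that makes the h-index robust to outliers, but also insensitive to a researcher's most influential work.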
Clint D. Kelly and Michael D. Jennions of the Australian National University write (Nature 449, 403; 2007):
The h-index seems to be breaking away from the bibliometric pack in the race to become a favoured measure of scientific performance (‘Achievement index climbs the ranks’ Nature 448, 737; 2007). However, if the h-index is to become an assessment tool commonly used by university administrators and government bureaucrats, those using it should be aware of its pitfalls.
As noted in your News story, tallying how many papers a researcher publishes (their productivity) gives undue merit to those who publish many inconsequential papers. But at least for ecologists and evolutionary biologists, the h-index is highly correlated with productivity.
This is worrisome, because the h-index is easily misconstrued as an equitable measure of research quality. We offer two examples.
First, female ecologists and evolutionary biologists publish fewer papers than their male counterparts, and they have significantly lower h-indices. Should administrators therefore conclude that men are better researchers? No. The gender difference vanishes if we control for productivity. It seems unlikely that this phenomenon is restricted to ecology and evolution.
Second, the h-index increases with career age, and normalizing by taking the ratio of the two is itself problematic. Reliably comparing the performance of younger researchers with older ones is therefore difficult.
Your views are welcome.