Nautilus

Measures for measures

Citation analysis can loom large in a scientist’s career. On pages 1003-1004 of the 21-28 December 2006 issue of Nature, Sune Lehmann, Andrew Jackson and Benny Lautrup compare commonly used measures of author quality. The mean number of citations per paper emerges as a better indicator than the more complex Hirsch index; a third method, the number of papers published per year, measures industry rather than ability. Careful citation analyses are useful, but Lehmann et al. caution that institutions often place too much faith in decisions reached by algorithm, use poor methodology or rely on inferior data sets.

Read the full text of the Commentary here (subscription or site licence required).

We welcome comments on this Commentary and on citation-based quality measures in general.

Comments

  1. Boris Egloff said:

    Consider the following example: Researcher A and Researcher B both apply for a job ten years after their Ph.D. Researcher A has published 100 papers, 50 of which were cited 40 times each and the other 50 cited 20 times each. Researcher B has published 50 papers, each of which was cited 40 times. The performance of both researchers can be quantified as an index of scientific quality using three simple measures. (a) Number of papers per year: Researcher A = 10, Researcher B = 5. (b) h-index: Researcher A = 40, Researcher B = 40. (c) Mean citation count per paper: Researcher A = 30, Researcher B = 40.
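    The three measures in this hypothetical example are straightforward to reproduce. A minimal sketch in Python (the citation records and the ten-year window are taken directly from the example above; the function and variable names are ours):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

YEARS = 10  # both researchers are ten years past their Ph.D.

# Citation records as in the example: A has 50 papers cited 40 times and
# 50 cited 20 times; B has 50 papers, each cited 40 times.
records = {"A": [40] * 50 + [20] * 50, "B": [40] * 50}

for name, cites in records.items():
    print(name,
          len(cites) / YEARS,        # papers per year: A = 10.0, B = 5.0
          h_index(cites),            # h-index:         A = 40,   B = 40
          sum(cites) / len(cites))   # mean citations:  A = 30.0, B = 40.0
```

    As the comments note, the two records are indistinguishable by h-index, and the two remaining measures rank the researchers in opposite orders.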

    Which researcher would you hire? According to the analyses presented by Lehmann, Jackson and Lautrup (Nature 444, 1003-1004; 2006), Researcher B should be hired because the mean citation count is the best measure of scientific quality, easily outperforming the other two measures. We suspect that not everyone would readily agree with their conclusion – we, in any case, do not (neither with respect to our example nor in general).

    How, then, do they arrive at this surprising conclusion? We believe that there is at least a trace of circular reasoning inherent in their analyses. The crucial statistic in their model is the conditional probability, P(i|α), that a paper by an author with an estimated scientific quality of α will have i citations. According to their statistical model, a measure m of scientific excellence is the more accurate the more strongly the conditional probabilities P(i|α) depend on the scientific quality α estimated by m. The analyses revealed that P(i|α) is strongly dependent on α when α is estimated by the mean citation rate, somewhat dependent on α when α is estimated by the h-index, and independent of α when α is quantified as the number of papers per year. These results do not, however, support the conclusion that the mean citation rate is the best measure; rather, they simply reflect the fact that the number of citations i of any given paper is naturally more strongly associated with the author’s mean citation rate than with the other two indices.

    The search for an optimal measure of scientific excellence is certainly an important one. Nonetheless, the criteria used to judge the quality of different measures should be uncontaminated by the measures themselves. Explicitly, the number of citations may not be used to judge the accuracy of measures based on citation records. Instead, we should estimate the validity of different measures by comparing them with external criteria for scientific excellence (for some suggestions, see Haggbloom et al., Review of General Psychology, 6, 139-152, 2002).

    Incidentally, adopting the mean citation count per paper as the index of scientific quality would lead to a decrease in the number of papers published, because authors would worry about reducing their index by publishing less spectacular, and consequently less cited, studies. At least this side effect may be considered desirable.

    Boris Egloff, Mitja D. Back, Stefan C. Schmukle

    Department of Psychology, University of Leipzig, Seeburgstr. 14-20, 04103 Leipzig, Germany

  2. Frederick Sachs said:

    I analyzed the role of funding in scientific productivity, using the step in the driving force provided by the doubling of the NIH budget. Measured by the number of publications, roughly nothing happened, which suggests the system is saturated with money. I would welcome help in analyzing “quality”.

    Like apple pie and motherhood, we are all in favour of health-related research. In the US, this is funded mostly by the National Institutes of Health (NIH). The Clinton administration doubled the NIH budget over five years starting in 1999, providing a unique test of social spending policies. Did information appear at twice the former rate? No.

    How can one measure scientific output? The simplest index is the number of publications per year. Many factors can influence the numbers, including varied journal coverage by different databases. However, we have controls: the US numbers can be compared with those of other countries where the budget did not double, and the number of biological publications in the US can be compared with that in other research fields, such as physics and chemistry, where the budget did not double. The total number of publications in PubMed, NIH’s publication database, showed a maximum annual increase of 7% between 1999 and 2005 (data available on request), far from doubling. The ISI databases, searched with various keywords, yielded a maximum increase of 13%, but chemistry increased by 7%. Keywords such as “DNA” and “neuron” showed much smaller increases in productivity. Some of the increase in numbers is due simply to increased coverage of more journals by the databases.
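    For scale: if publication output had tracked the budget, a doubling over five years would require roughly fifteen per cent compound growth per year, so the figures quoted above fall well short. A back-of-the-envelope check (our arithmetic, not from the comment itself):

```python
# Annual growth rate implied by a doubling over five years (compound growth).
required = 2 ** (1 / 5) - 1
print(f"required: {required:.1%} per year")   # required: 14.9% per year

# The largest observed annual increase quoted above was 13% (ISI);
# PubMed showed at most 7%.
observed = 0.13
print(f"shortfall: {required - observed:.1%}")
```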

    Comparing other countries with the US for biologically relevant articles, the ISI data reveal that doubling the budget had no significant influence on the number of biological publications (data available on request). China showed a massive increase in scientific productivity in both areas in all years.

    This is not to suggest that NIH is funding bad science; it isn’t. The US produces more scientific publications than any other country, but NIH research is clearly saturated with money. More than additional money, we need a change in NIH spending policies.

    The US is losing American scientists, because there is little incentive to enter science. At NIH, the average age at which a researcher obtains his or her first research grant is 43. Each grant application has about a 90% chance of rejection, and this disincentive drives many smart students out of science. Furthermore, university tenure is often decided by NIH funding: if you have grants you get tenure; without grants you don’t. Yet it is the young investigators who are the most productive; Nobel prizes are awarded to old guys who did good work as young guys.

    Currently, NIH grant review panels have to compare applications from beginning investigators with those of Nobel prizewinners and decide in whom to invest; naturally, they fund the senior applicants with the best track record. However, as the data above show, these funded labs cannot usefully absorb any more money. NIH doesn’t need more money, but a change in how it is distributed.

    Apart from speaking about the value of basic research, it is difficult for me, an NIH-funded scientist, to argue before Congress that the NIH should get more money. There are innumerable ways for the NIH to cut back on inefficient investments in the university welfare system, and it needs to do so if the US is to stay ahead in health research.

  3. Brahama D. Sharma, Ph.D., C.Chem., FRSC(life) said:

    A recent Nature News story reported that Indian Prime Minister Manmohan Singh has publicly criticised Indian science at the 94th Indian Science Congress on 3 Jan 2007, on the basis of briefings by the distinguished chemist C.N.R. Rao.

    The Nature report points out that Professor Rao’s figure for India’s share of the world’s science publications, only 2.7% compared with China’s 6%, formed part of the Prime Minister’s argument. Professor Rao has been propagating this numbers game for a long time, together with highlighting the flight of academic researchers to industry and abroad.

    But this numbers game is not a scientific criterion for judging the state of science in India or anywhere else. The fact remains that, from the point of view of those of us engaged in research in the United States, it is the quality of the publications emanating from India that is in need of much improvement, not their number.

    In this regard, Professor Rao prefers to cite the (erroneous) molecular structure of S4N4 reported from Raman spectroscopy over the (correct) one determined by single-crystal X-ray diffraction, because the former study had the name Raman, and hence an Indian connection, associated with it. This is not the way to promote better science funding in India.

    Even in the United States, the theme of demanding better science funding to keep ahead of Europe is heard every year from distinguished scientists. Fortunately, one never hears the percentage of the world’s publications, as opposed to their quality, invoked as a basis for increased funding.

    Brahama D. Sharma

    Pismo Beach, California.

    Reference:

    K. S. Jayaraman, “Indian science is in decline, says prime minister”, Nature 445, 134–135 (11 January 2007); doi:10.1038/445134b; published online 10 January 2007. Manmohan Singh calls for India’s scientists to raise their game in return for increased funding.

  4. Brahama D. Sharma, Ph.D., C.Chem., FRSC(life) said:

    The correspondence from Thomas F. Döring makes a valid point about citations criticising a paper being counted towards that paper’s citation tally.

    More disconcerting, however, are deliberate attempts not to cite, or to mis-cite, a particular author, for example in textbooks.

    Unfortunately, when a respected authority cites an article, that article tends to be cited again and again by people who have not read the article being cited but only the article citing it, as Dr Döring points out, and this is how myths are perpetuated.

  5. Talis Bachmann said:

    Virtually all established scientometric indices that purport to show the impact and merit of scientists on the basis of citation and publication statistics pool single-authored publications with those of collective authorship. Hardly anyone would deny the collective nature and end-purpose of science, and the standard impact indices (Hirsch, 2005, PNAS; those based on ISI Impact Factors), as applied to the evaluation of individual scientists, inherently reflect this generally accepted attitude. However, individual scientific creativity may be somewhat obscured when there is no scientometric measure that quantifies a strictly individual contribution to the world of ideas. There are a thousand ways to become a co-author.

    One obvious possibility would be to calculate two types of Hirsch index (h-index): the first the standard measure, the second counting only single-authored papers and their citations. A related, simple possibility would be to take the number of citations a scientist has received for his or her single-authored articles and divide it by the total number of citations he or she has received. In some rare cases the index would be 0.0 or, at the other extreme, 1.0; most will fall somewhere in between. This type of index might simply be termed the “Individual Impact Index”.
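    Both proposals are simple enough to sketch in a few lines of Python. The paper record below is invented purely for illustration; each paper is represented as a (number of authors, citations) pair, a format of our choosing:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Hypothetical record: (number of authors, citations) for each paper.
papers = [(1, 25), (3, 30), (1, 15), (4, 20), (2, 10)]

all_cites    = [c for _, c in papers]
single_cites = [c for n, c in papers if n == 1]

print(h_index(all_cites))     # standard h-index: 5
print(h_index(single_cites))  # h-index over single-authored papers only: 2

# "Individual Impact Index": single-authored citations / total citations.
print(sum(single_cites) / sum(all_cites))  # 40 / 100 = 0.4
```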

    To test whether this approach has any merit, I completed a small citation analysis of the 10 leading Estonian psychologists. First, they were ranked according to their approximate h-indices. Second, h-indices based only on their single-authored articles were computed and a second ranking obtained. (Standard h-indices varied between 21 and 3; h-indices for single-authored papers varied between 9 and 1.) The rank-order correlation between the two rankings was rs = 0.287 (p = 0.42, two-tailed, not significant).
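    The rank-order statistic used here is Spearman’s rho. A minimal sketch of its computation, assuming for simplicity that there are no tied scores (the two score lists below are invented for illustration, not the Estonian data):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the no-ties formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented (standard, single-authored) h-index pairs for five researchers:
standard = [21, 17, 12, 8, 3]
single   = [4, 9, 1, 6, 2]
print(round(spearman_rho(standard, single), 3))  # 0.3
```

    With real data containing ties, a tie-corrected implementation (such as `scipy.stats.spearmanr`) would be preferable to this simplified formula.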

    Although individualistic styles in science are usually not lauded, a simple measure of strictly individual impact could provide no-nonsense additional scientometric information.

    Talis Bachmann

    University of Tartu, Estonia

  6. Thomas Döring said:

    As far as I understand, only the shortcomings of visibility measures (i.e. various citation indices) have been discussed on this blog so far. Indeed, the list of drawbacks of citation-based measures can easily be extended (see, for example, Adam, D. 2002. Nature 415: 726-729; Kurmis, A. P. 2003. J. Bone Joint Surg. 85: 2449-2454). Why, however, do we concentrate so much on visibility at all? Since in many cases visibility is a rather poor surrogate for research quality, why don’t we simply ask scientists to give their opinion on the quality of a paper directly? For example, the webpage http://www.CiteUlike.org allows publicly viewable rating of a paper and posting of comments on it, although there the rating is based on only one parameter, reading priority. This platform could, however, relatively easily be developed into a powerful quality-assessment tool, e.g. by adding a few more rating questions.
