Many different criteria can be taken into account when judging the scientific success of individual researchers, but are some more meaningful than others? Nature Chemistry, in its July Editorial (1, 251; 2009), is the latest to address this perennial question. (See, for example, this Nature Network forum on citation use and abuse.)
Nature Chemistry points out that the “basic currency of scientific communication is the journal article, and so it seems sensible to use this as a starting point for evaluating success in a given area. At first glance, this is a particularly attractive approach because we can boil down an individual’s publication record to cold hard numbers. For example, we can count how many papers someone has to their name and we can also count the number of times a specific article has been cited — or indeed how much an individual’s complete body of work has been cited. Moreover, the rise of the internet has made finding these numbers a fairly trivial task. But can we make meaningful comparisons?”
The Editorial identifies the fallacy of using journal ‘impact factors’ for this purpose, as well as flaws in the alternative metrics that have been devised. Other suggested ‘success measures’ include the amount of funding a scientist can attract, recognition by peers (for example, prizes and awards), and education.
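The Editorial does not spell out which alternative metrics it has in mind, but probably the best known is Hirsch’s h-index: the largest number h such that a researcher has at least h papers each cited at least h times. As a minimal sketch of the arithmetic involved (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have at least h citations each."""
    # Sort citation counts from highest to lowest, then walk down
    # the list while the i-th paper still has at least i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation records for two researchers:
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))  # 3 (one highly cited paper cannot raise h alone)
```

The second example hints at one of the flaws critics point to: a single blockbuster paper barely moves the h-index, while steady mid-level citation does, so the metric rewards some publication patterns over others.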