I’m posting here a comment made by Thomas Doering to the earlier Nautilus entry “Measures for Measures”:
“As far as I understand, only the shortfalls of visibility measures (i.e. various citation indices) have been discussed on this blog so far. And indeed, the list of drawbacks of citation-based measures can easily be extended (see for example Adam, D. 2002. Nature 415: 726-729; Kurmis, A. P. 2003. J. Bone Joint Surg. 85: 2449-2454). Why, however, do we concentrate so much on discussing visibility at all? Since in many cases visibility is a rather poor surrogate for assessing research quality, why don’t we just ask scientists to give their opinion on the quality of a paper directly? For example, the webpage www.CiteUlike.org allows publicly viewable rating of a paper and posting comments on it – although there, rating is based on only one parameter, reading priority. This platform could, however, relatively easily be developed into a powerful quality assessment tool, e.g. by adding a few more rating questions.”
There are other websites in addition to CiteULike for this type of measure, for example Nature Publishing Group’s free resource Connotea, for organizing, tagging, sharing and ranking articles. I’ve made some author-related Connotea tags for readers of this blog (see left-hand vertical navigation bar), but the primary usefulness of Connotea is for sharing and ranking articles published in the scientific literature. As well as ranking based on “number of users who add the article to their library”, as mentioned by Dr Doering, Connotea has a note function that allows the user to add customised comments on a selected article. I agree it would be fascinating to encourage widespread use of these resources among scientists, to give a more qualitative view of the literature than systems that use a “numbers-only” approach.