A metric for measuring peer-review performance

A guest post from Willy Aspinall

Department of Earth Sciences, Bristol University, Bristol BS8 1RJ UK.

The Nature Editorial (‘Experts still needed’, Nature 457, 7-8 (2009); free to access online) and Harnad’s related Correspondence on research-performance metrics in the 2008 Research Assessment Exercise (Nature 457, 785; 2009) prompt me to suggest an additional, complementary metric: one that measures the accomplishments of research scientists who act as peer-reviewers for journals.

Good reviewing is very time-consuming and, in some ways, just as challenging as authoring an original research paper; time spent doing it well is time removed from one’s own research. Indeed, the thoughts and comments of a good referee can often represent a fundamental contribution to the science as well as to the quality of a published paper, and this input should be recognized and measured (the American Geophysical Union regularly celebrates ‘excellence in reviewing’ with citations by its journal editors). It is probably also fair to say that demonstrably good performance in refereeing usually begets yet more requests to review manuscripts, with further incursions on the diligent and proficient scientist’s time.

Perhaps a metric for this essential scientific activity of peer-reviewing might be constructed by counting the papers refereed by an individual scientist per year, weighting each review by the Impact Factor of the journal concerned, and summing. As refereeing is usually a solo activity, a metric for this skill, and for the related professional commitment, would be less prey to the shortcomings of performance measurement associated with metrics that attempt to gauge multi-author citations, for instance. Combining a ‘refereeing metric’ with other citation-related metrics to obtain a more comprehensive performance score for an individual scientist should not be an insuperable problem – and this measure can be pooled, as indicated in the Nature Editorial, with expert evaluation.
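The proposed calculation is simple enough to sketch in a few lines of code. The following Python fragment is purely illustrative of the idea described above – the journal names, Impact Factors and reviewing record are all invented, and no particular implementation is endorsed here:

```python
# Toy sketch of the proposed refereeing metric (all journal names and
# Impact Factors below are invented for illustration).

def refereeing_metric(reviews):
    """Sum, over one year's reviews, the Impact Factor of the journal
    reviewed for: each refereed paper contributes its journal's IF once."""
    return sum(impact_factor for _journal, impact_factor in reviews)

# One scientist's hypothetical reviewing record for a year:
reviews = [
    ("Journal A", 31.4),
    ("Journal B", 3.1),
    ("Journal B", 3.1),  # two papers for the same journal count twice
    ("Journal C", 1.7),
]

print(round(refereeing_metric(reviews), 1))  # 39.3
```

Under this weighting, a single review for a high-impact journal can outweigh several reviews for lower-impact ones, which is the intended effect of tying the score to the Impact Factor of the journal concerned.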

Willy Aspinall

Nature Neuroscience experience with peer-review consortium

In 2008, the journal Nature Neuroscience joined a newly created community consortium aimed at making peer review more efficient by allowing reviews to be transferred between consortium journals. In its current (April) issue, the editors look back at their experience with the Neuroscience Peer Review Consortium over the past year (Nature Neuroscience 12, 363; 2009).

Journals in the Neuroscience Peer Review Consortium (NPRC) offer authors whose papers are no longer under consideration at one journal the opportunity to transfer the reviews of their manuscripts when submitting the paper to another consortium journal. After a year, Nature Neuroscience‘s experience is similar to that of other journals in the consortium, with only a handful of papers being transferred from Nature Neuroscience to another consortium journal.

Similar to the Nature journals’ transfer system, the NPRC system is voluntary for authors and referees. Editors at one journal only know that a paper was reviewed elsewhere if the author chooses to inform them. At Nature Neuroscience, the editors ask referees for permission to release their identities whenever authors ask for their papers to be transferred to another consortium journal. If a reviewer declines to participate, the reviews (comments to authors only) are transferred anonymously.

All the transfers from Nature Neuroscience to date have been to the Journal of Neuroscience, and represent less than 1% of manuscripts that are eventually rejected after review. However, for the papers that were eventually published in the Journal of Neuroscience, the authors reported that publication had been expedited. Even in the cases where new referees were solicited, authors felt that transferring the reviews from Nature Neuroscience had saved them time and effort.

No papers have been transferred to Nature Neuroscience from other consortium journals.

The Nature Neuroscience editors ask why so few authors are using the NPRC option. They conclude: “Authors may simply not be aware of NPRC or may not know what journals participate in it. Transfer rates may pick up as more authors learn of the consortium. At Nature Neuroscience, we have noticed an increase in the number of referees that state in comments to the editors whether they wish for their identities to be released to other consortium journals or not, suggesting a growing awareness of the NPRC.

It could also be that there are not that many papers that lend themselves well to this process. Many of our authors who have had papers rejected may prefer to take their chances with new referees at another journal, rather than making substantial revisions in response to the concerns raised by our referees. Certainly, our authors appear to be more conservative when deciding to transfer their reviews, preferentially choosing to utilize the NPRC transfer option when the reviewers reject the paper on conceptual grounds and not for technical reasons.

Another factor that influences the success of the transfer is whether the referees allow the release of their identities to receiving consortium journals. Previous reviews are clearly less useful to the receiving journal if the editors do not know who the reviewers were.”

Nature Neuroscience concludes that it is premature to gauge whether the system truly could save referees, authors and editors substantial time and effort. The editors encourage authors, referees and readers to share their views, either by email or by commenting here.

Online patient communities

Disease-orientated consumer online communities radically change the way in which individuals monitor their health, but they could also create new ways of testing treatments and speed patient recruitment into clinical trials. So starts the editorial in the September issue of Nature Biotechnology (26, 953; 2008). From the editorial:

“Several online communities for patients now offer a wealth of anecdotal and factual information about health, and tools for networking with like-minded individuals. The web sites are public, collaborative and simple to use. They are also starting to offer patients content that goes beyond what is available through existing gatekeeper-controlled healthcare infrastructures. Some even offer to host personal medical data, empowering patients to understand and manage their individual care in a manner that is powerful and disruptive to current medical practice. If these ‘user-generated healthcare’ data can be harnessed with data from conventional biomedical and clinical research, the benefits could extend beyond patients to payors, providers and the drug industry itself.”

User-generated communities discussed in the editorial include Daily Strength, Inspire and PatientsLikeMe, the last of which has more than 7,000 registered users. PatientsLikeMe is different from other services in that patients record data about themselves and share it in an open environment. The editorial continues: “Using standardized metrics provided on the site, patients can log their symptoms, severity and progression, and drug regimens and dosages, together with the efficacy and side effects. All the data is then neatly displayed in bar graphs and progress curves. Patients can thus rapidly identify others with similar ailments in similar stages of disease. They can use the wisdom of the crowd to learn which treatments work and which don’t. This is particularly useful for patients with rare conditions (and their physicians) who might not otherwise encounter comparable sufferers.” The editorial goes on to discuss some of the benefits and disadvantages of this patient-driven form of “peer review”.

Collaborative writing and editing at Citizendium

Cross-posted at Nautilus:

Biology Week, an online “open house” for biologists, biology students and other interested people, begins today (22 September) on Citizendium, a ‘next-generation’ wiki encyclopedia started by Wikipedia co-founder Larry Sanger. (See this Peer to Peer post for a brief comparison of online encyclopaedias.)

From the Citizendium announcement: “During this week, biologists and anyone interested in the topic are invited to test the Citizendium system. Editors and authors from the project’s Biology Workgroup will be on hand to meet and greet new people on the wiki.” “I strongly believe that the Citizendium system will be appealing to many scientists and scholars,” said Sanger. “Many of them just need to give it a try. Biology Week is an excuse for biologists to try out the system together.” Gareth Leng, a professor of Experimental Physiology at the University of Edinburgh and a Citizendium author and editor, described the project: “Our role will not be to tell readers what opinions they should hold, but to give them the means to decide, rationally, for themselves. The role of experts is critical – not to impose opinions, but to support accuracy in reporting and citing information.”

The Citizendium, or “citizens’ compendium”, uses the same software as Wikipedia and is a public-expert hybrid project to produce a general reference resource. The community encourages general public participation, but reserves a low-key, guiding role for experts. It also requires real names and asks contributors to sign a “social contract.” As a result, the project is said to be vandalism-free and, despite its youth (its public launch was just 18 months ago), has steadily added more than 8,000 articles.

Further information:

Citizendium website and press release about this project.

Biology Week homepage.

Sample article: Life, said to demonstrate the success of the collaborative-editing system.

(Thank you to Shirley Wu for alerting me to this project.)

WikiGenes, an evolving scientific tool

“WikiGenes (https://www.wikigenes.org/) is the first wiki system to combine the collaborative and largely altruistic possibilities of wikis with explicit authorship. In view of the extraordinary success of Wikipedia (https://en.wikipedia.org/wiki/Wikipedia), there remains no doubt about the potential of collaborative publishing, yet its adoption in science has been limited.” So writes Robert Hofmann of MIT in a Perspective article in the September issue of Nature Genetics (40, 1047-1051; 2008) about this “dynamic collaborative knowledge base for the life sciences that provides authors with due credit and that can evolve via continual revision and traditional peer review into a rigorous scientific tool.” From the article:

In WikiGenes, authorship tracking technology enables users to directly identify the source of every word. This was not possible in first generation wikis, although authorship is essential to acknowledge contributors and to appraise the reliability of information. On the basis of clear authorship attribution, users can rate each other, and a self-regulating reputation system can be implemented. This is useful to address quality maintenance and the problem of editing conflicts, which used to depend on slow and theoretically refutable top-down decisions. To facilitate contribution and unambiguous use of scientific language, WikiGenes enables editing of articles in their final layout and citation of scientific terminology and references through integrated database and ontology lookups. All contributions to WikiGenes will be open access.

The full article can be read and edited here, at the WikiGenes website.

Making best use of interrelated information

On the topic of the ‘data deluge’, Sarah Kemmitt notes at Nature Network that the UK Government has opted for an increasingly used technique (see, for example, Elsevier’s Grand Challenge) to scope ideas for a strategy for how to make best use of interrelated information.

Sarah refers to the British Cabinet Office’s Power of Information Taskforce project ‘Show Us a Better Way’, which is asking for suggestions to develop better ways to publish the vast swathes of non-personal information that the government collects and creates, using the incentive of a competition (here is a BBC article about the initiative). From the Show Us a Better Way website:

Ever been frustrated that you can’t find out something that ought to be easy to find? Ever been baffled by league tables or ‘performance indicators’? Do you think that better use of public information could improve health, education, justice or society at large?

The UK Government wants to hear your ideas for new products that could improve the way public information is communicated. The Power of Information Taskforce is running a competition on the Government’s behalf, and we have a £20,000 prize fund to develop the best ideas to the next level. You can see the type of thing we are looking for here.

To show they are serious, the Government is making available gigabytes of new or previously invisible public information especially for people to use in this competition.

By yesterday (8 July), a week after the competition was announced, 150 ideas had been submitted. Sarah finds it interesting that both business and government are realizing that the ‘power of the crowd’, together with the offer of a prize, may be a cost-effective way of harnessing innovative ideas around modern challenges. Your views are welcome at her Nature Network post.

Sarah is part of the British Library TalkScience team, and is a co-founder of the Nature Network group Scientific Researchers and Web 2.0: Social Not-working? All are welcome to join the group and contribute to the conversation, in advance of a meeting in September for a focused discussion of the topic.

Trustworthiness of online encyclopaedias

In its July Editorial Wouldn’t you like to know?, Nature Physics (4, 505; 2008) asks how much of the mass of information available online in encyclopaedic form can be trusted. The Editorial discusses various sources: Wikipedia, of course; Citizendium (with its associated Eduzendium); Scholarpedia; and, briefly, Encyclopaedia Britannica, which has just begun experimenting with user-generated input (although this is not noted in the Editorial).

Scholarpedia is the most recent of these resources, and says of itself that it “feels and looks like Wikipedia – the free encyclopedia that anyone can edit. Indeed, both are powered by the same program – MediaWiki. Both allow visitors to review and modify articles simply by clicking on the edit this article link.” Scholarpedia is said to differ from Wikipedia in that each article is written by an expert (invited or elected by the public); anonymously peer reviewed to ensure accurate and reliable information; and has a curator – typically its author – who is responsible for its content and who has to approve any proposed modifications. The website claims that, by this method, “while the initial authorship and review processes are similar to a print journal so that Scholarpedia articles could be cited, they are not frozen and outdated, but dynamic, subject to an ongoing process of improvement moderated by their curators. This allows Scholarpedia to be up-to-date, yet maintain the highest quality of content.”

The Nature Physics verdict? “Expert authorship and curatorship of free online information are indeed welcome. If scientists embrace Scholarpedia, then perhaps the opportunity to make sure that their own favourite area is well represented in its pages — as well as the possibility of citations — will prove sufficient incentive to the hard-pressed experts. The potential is huge, and so is the challenge.”

Nature Precedings and open review, one year on

Today, 18 June, is the first anniversary of Nature Precedings, where researchers can post their unpublished manuscripts, presentations, posters, white papers, technical papers, supplementary findings and other scientific documents, which can all be “peer-reviewed” online by anyone in the scientific community. (The website was available before June 2007 in ‘beta’ form.) Santosh Patnaik, a user who periodically tracks Nature Precedings at the Nature Network Nature Precedings forum, estimates that the 500th document will be uploaded some time in the next two weeks.

Because the code for Nature Precedings is freely available, Dr Patnaik has mined some data to chart the growth of the website. His results are presented here, in graphical form. The number of posters and presentations, common when the site first launched, is now barely increasing, whereas the number of manuscript uploads has grown at a steady rate over the past year. The most popular discipline, perhaps unsurprisingly, is bioinformatics, although most other disciplines are also becoming more popular, particularly neuroscience, ‘evolution and ecology’, and chemistry. (For those interested in statistics, Dr Patnaik has also estimated the productivity of Nature Precedings authors.)

One aspect of this type of open peer-review is that discussion is not limited to the English language, even though the uploaded documents themselves are in English. The vast majority of comments are, however, in English: here is an example of constructive review in the neuroscience field, Nature Precedings style. There are many other examples: the most active discussions are here, but one can also filter by subject area.

Update: Hilary Spencer and the Nature Precedings team provide a one-year perspective at Nature Network.

Scientific discourse 2.0: Second Life

Stephen T Huang, Maged N Kamel Boulos and Robert P Dellavalle write an article in the June issue of EMBO Reports (9, 496-499; 2008) with the title: Scientific discourse 2.0. Will your next poster session be in Second Life?

From the article:

Certainly, peer-reviewed literature and scientific meetings in the physical world will remain the main modes of distributing scientific information and informal communication. Yet, communication through virtual-world technology might become a useful supplement to the traditional discourse. The particular strengths of this technology include: its potential to share, review and comment on information, both with the public and one’s peers; options that allow users to create and develop unique objects, and presentations to educate and inform others and to display data; and, last but not least, the time and cost of bringing people together within and across disciplines can be reduced.

As with any new technology, there are issues that could have an impact on the usefulness of online communication and its acceptance within the scientific community. Scientists who rely on peer-reviewed data for their work might find Web 2.0’s lack of proofreading unacceptable to document research findings. However, we should explore the existing and potential applications of virtual communication for unique ways to discuss ideas, answer questions, educate and debate. Our ability to understand what we can accomplish in online worlds depends on our collective experience with the technology. The more scientists and clinicians who work with and comprehend the applications of virtual worlds for their respective research fields, the sooner we will realize how this technology can be best applied. The next step is to invite ourselves into these online realms, experiment with what they have to offer, and see where our exploration and creativity takes us.

(Stephen T. Huang is at Northwestern University’s Feinberg School of Medicine in Chicago, Illinois, USA; Maged N. Kamel Boulos is at the University of Plymouth, UK; Robert P. Dellavalle is at the Denver Veteran’s Affairs Medical Center and the University of Colorado Denver, Denver, Colorado, USA.)

Why there is not much online discussion of neuroscience research

Noah Gray, an editor at Nature Neuroscience, asks at Action Potential blog why neuroscientists are passing on the seemingly golden opportunity to communicate with one another online, for example on published articles at a journal website, or in an online journal club. Many have expressed opinions about reasons for this reticence: Noah links to some articles in his well-argued post, and you can read other thoughts (and find links to some of the same articles) at Nature Network (for example at Gobbledygook and at Flags and Lollipops).

Here, I’d like to highlight a response by “Michael” in the comment thread to Noah’s Action Potential post, as I believe this summary covers many, if not all, of the reasons why scientists in a discipline tend not to post comments on, and discuss, scientific research papers online. “Michael” writes:

I think there are a number of reasons as to why “Web 2.0” has not played much of a role in discussing neuroscience research. In no particular order:

1) Inefficiency: If I want to know why somebody used buffer x for their biochemistry experiments, or why they didn’t do control experiment xyz I email the authors, or use the phone if I know them. Why post it in the comment section of their paper, and wait for 5 weeks until they bother to check? And why does everybody else need to know about it?

2) Lack of dialogue: Commenting forums are poor venues for true dialogue. If you analyse the comment sections for the more popular entries, either here or elsewhere (for example the New York Times) there rarely is a true back-and-forth of ideas. All too often it’s 50 comments trying to be as witty as possible, with few people attempting to follow and respond to what others have written. There are rare cases where a small number of like-minded people post thoughtful comments at just the right rate to allow for a meaningful discussion. Without some type of moderation, it will remain the exception. The dynamics of a true group discussion, be it at a Journal Club or at your poster, are often taken for granted, but can’t be easily replicated online.

3) Who is commenting: It’s fine to have a democracy of opinions: in science, I don’t care so much about it. The people I want to hear/read often choose not to comment, whereas others who have nothing to say keep posting.

4) The Fear Factor: This one is obvious, and is why scientists have lab meetings or face-to-face Journal Clubs. It’s also why attempts to discuss papers online haven’t quite lived up to expectation. Ideally, we want to be honest in our opinion of a paper, but we are also human and don’t want to suffer the consequences of bruising the ego of a potential reviewer or search committee member. Staying anonymous is not the solution, since that makes it difficult for everybody else to properly evaluate the comment. After all, it does matter who is doing the criticizing.

5) Speed: Even the liveliest online discussion of a paper will drag on over hours or days. If a paper grabs my attention I will discuss at a lab meeting or Journal Club and over the course of one hour we will have thoroughly dissected it. Our attention span only lasts so long. Again, the online dynamics of who logs in at what time don’t allow for a true discourse that leads to some sort of resolution.