In peer review, not all peers are equal: so writes John Timmer at Ars Technica, reporting on a limited study of the quality of peer review at the journal Annals of Emergency Medicine. Although the study covered a single journal, and hence a single discipline, the dataset comprised 306 reviewers who had completed more than 3,000 reviews.
“Satisfactory reviews were compared to unsatisfactory ones, and excellent reviews were separately compared to average. For both of these cases, correlations were tested with a number of factors thought to improve reviewing skills (participation in grant reviews, years of experience, prior coursework in critical appraisal, etc.). These factors were tested both individually and in a multivariable analysis, which should eliminate any confounding factors.
In news that may be disturbing for journal editors everywhere, very few factors leapt out as having a consistent and significant correlation with the quality of a review, although some factors did have strong correlations in individual tests. The only positive factors linked to quality of reviews were age (younger reviewers were better) and working at an academic hospital. Ironically, service on an Institutional Review Board, which evaluates and approves experiments on humans, consistently correlated with lower-quality peer reviews. Even these factors, however, were only slightly better than random at predicting review quality.”
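The individual-factor tests described in the quoted passage amount to correlating a binary quality label with each candidate predictor, which for a continuous factor is a point-biserial correlation (mathematically, a Pearson correlation with one binary variable). Here is a minimal sketch of that idea; the data and variable names are invented for illustration and are not from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical reviewer records: quality is 1 (satisfactory) or 0
# (unsatisfactory), paired with an invented factor (reviewer age).
quality = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
age     = [34, 58, 40, 36, 61, 29, 55, 50, 38, 63]

# Point-biserial correlation reduces to Pearson with a 0/1 variable.
# A negative r here would mirror the study's finding that younger
# reviewers tended to produce better reviews.
r = pearson(quality, age)
print(round(r, 3))
```

A multivariable analysis of the kind the study also ran would regress the quality label on all factors at once (e.g. logistic regression), so that each factor's coefficient is estimated with the others held fixed; that is what lets it control for confounding between correlated predictors.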
Timmer concludes: “There are some clear limitations to this study, given that it applies to a single journal with content that’s exclusively medical. But the researchers who performed it note that it may be difficult to extend these studies to other fields, as many journals don’t even have a mechanism for evaluating or tracking the quality of reviewers. Given the importance of peer review to the entire scientific enterprise, their strongest conclusion is that more needs to be done to track and evaluate the process in order to ensure that the body of published information is as reliable as possible.”