How Nature selects papers for publication

This is a shortened version of an editorial in Nature (463, 850; 2010; free to read online).

One myth that never seems to die is that Nature’s editors seek to inflate the journal’s impact factor by sifting through submitted papers (some 16,000 last year) in search of those that promise a high citation rate. We don’t. Not only is it difficult to predict a paper’s citation performance, but citations are an unreliable measure of importance. Take two papers in synthetic organic chemistry, both published in June 2006. One, ‘Control of four stereocentres in a triple cascade organocatalytic reaction’ (D. Enders et al. Nature 441, 861–863; 2006), had acquired 182 citations by late 2009 and was the fourth most cited chemistry paper we published that year. Another, ‘Synthesis and structural analysis of 2-quinuclidonium tetrafluoroborate’ (K. Tani and B. M. Stoltz Nature 441, 731–734; 2006), had acquired 13 citations over the same period. Yet the latter paper was highlighted as an outstanding achievement in Chemical & Engineering News, the magazine of the American Chemical Society.

Indeed, the papers we publish with citations in the tens greatly outnumber those with citations in the hundreds, although it is the latter that dominate our impact factor. We are proud of our full spectrum.

Another long-standing myth is that we allow one negative referee to determine the rejection of a paper. On the contrary, there were several occasions last year when all the referees were underwhelmed by a paper, yet we published it on the basis of our own estimation of its worth. That internal assessment has always been central to our role; Nature has never had an editorial board. Our editors spend several weeks a year in scientific meetings and labs, and are constantly reading the literature. Papers selected for review are seen by two or more referees. The number of referees is greater for multidisciplinary papers. We act on any technical concerns and we value the referees’ opinions about a paper’s potential significance or lack thereof. But we make the final call on the basis of criteria such as the paper’s depth of mechanistic insight, or its value as a data resource or in enabling applications of an innovative technique.

At the same time, we operate on the strict principle that our decisions are not influenced by the identity or location of any author. Almost all our papers have multiple authors, often from several countries. And we commonly reject papers whose authors happen to include distinguished or ‘hot’ scientists.

Yet another myth is that we rely on a small number of privileged referees in any given discipline. In fact, we used nearly 5,400 referees last year, and are constantly recruiting more — especially younger researchers with hands-on expertise in newer techniques. We use referees from around the scientifically developed world, whether or not they have published papers with us, and avoid those with a track record of slow response. And in highly competitive areas, we will usually follow authors’ requests and our own judgement in avoiding referees with known conflicts of interest.

Myths about journals will continue to proliferate. We can only attempt to ensure that the processes characterized above remain as robust and objective as possible, in our perpetual quest to deliver to our readers the best science that we can muster.

Nature Neuroscience on gaps in ethical oversight of research

Although institutional review boards are important ethical gatekeepers of human patient research, there is little data to evaluate their effectiveness. More coordination and a more transparent decision-making process are critical if review boards are to make appropriate and consistent decisions – so says the Editorial in this month’s (February) issue of Nature Neuroscience (13, 141; 2010). From the Editorial:

“An ethical overview is meant to be more than just another bureaucratic hurdle in doing research; it is a guarantee that all research is held to certain minimum standards and, particularly for human patient research, it is an assurance that the participants’ welfare is being looked after and that the risk to them is minimized. However, there is very little oversight of how well this overview meets its stated aims, especially for human research. Moreover, what little data exists points to some worrying inconsistencies; a study that submitted a mock functional magnetic resonance imaging human neuroimaging protocol to 43 Canadian research ethics boards found that the protocol was unconditionally approved by 3 boards, approved conditionally by 10 and rejected by 30. Given the increasingly knotty ethical challenges that neuroscience advances present, it is critical that we try to improve this situation by encouraging review boards to make their decision-making process more open and by encouraging greater cross-talk between different ethical review boards…

What is urgently needed is some real data on how the current process is working. Providing a searchable database of current protocols of the sort already provided for clinical studies would be a good first step by providing guidance to local review boards about decisions made on comparable cases, while still retaining the flexibility required to make case-by-case decisions. It would also highlight decisions that differ from the norm. Along with greater cross-talk between local ethical review boards, such publicly available information would also help reassure the public that ethical review is indeed doing what it sets out to do, by ensuring the welfare of subjects while advancing our knowledge of how the human brain works.”

Nature Neuroscience journal website.

Cite well, says Nature Chemical Biology

Scientists need to devote more attention to the citation lists of scientific papers—the connectivity and usefulness of the scientific literature depend upon it. The February Editorial in Nature Chemical Biology (6, 79; 2010) explores how “citations of published work link together the concepts, technologies and advances that define scientific disciplines. Though information technology and databases have helped us to better manage the expanding scientific literature, the quality of our citation maps still hinges on the quality of the bibliographic information contained in each published paper. Because article citations are increasingly used as metrics of researcher productivity, the citation record also affects individual scientists and their institutions. As a result, all participants in the scientific publication process need to ensure that the citation network of the scientific literature is as complete and accurate as possible.”
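
The ‘citation network’ the Editorial invokes can be pictured as a directed graph in which an edge from paper A to paper B means that A cites B. Here is a minimal Python sketch (the paper IDs and counts are invented for illustration) of how such a graph yields the ‘times cited’ figure on which citation metrics are built:

```python
from collections import defaultdict

# A citation network as a directed graph: an edge A -> B means
# "paper A cites paper B". The paper IDs here are invented.
cites = {
    "paperA": ["paperB", "paperC"],
    "paperB": ["paperC"],
    "paperC": [],
}

def times_cited(network):
    """In-degree of each paper: how often it is cited within the network."""
    counts = defaultdict(int)
    for source, targets in network.items():
        counts[source] += 0  # ensure uncited papers appear with a count of 0
        for target in targets:
            counts[target] += 1
    return dict(counts)

print(times_cited(cites))  # {'paperA': 0, 'paperB': 1, 'paperC': 2}
```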

The Editorial goes on to discuss the factors that stand in the way of good citation practices, and explains how the journal ensures that the reference lists in the papers it publishes are accurate and balanced. Although editors can help, authors are ultimately responsible for the work they cite in their papers, and for ensuring its appropriateness, transparency and accuracy. Indeed, “the responsibility for maintaining and enhancing the citation network of a discipline resides with all participants: authors, referees, editors and database managers. Thoughtful attention during the writing and review processes remains the first and best approach for ensuring citation quality and the appropriate assignment of credit in published papers. Yet new publishing and database tools that lead us to an interactive multidimensional scientific literature will become essential.

As publishers move toward integrating functionality such as real-time commenting on published papers and creating ‘living manuscripts’ that preserve the snapshot of a research area through the lens of a published paper, while permitting forward and backward linking, the scientific literature is poised to become a richer environment that will support future scientific progress.”

Nature Chemical Biology journal website.

Nature Chemical Biology guide to authors.

The Nature journals’ publication policies.

EMBO reports working for the clampdown

Howy Jacobs lampoons the pervasive spread of time management and organization in academic research in his Editorial in the December issue of EMBO reports (10, 1281; 2009). “Following its successful trial in the university system during the past three years, time-management reporting is now to be extended to all citizens of the European Union (EU). The university trial concerned the apportionment of working time between different activities and was required by EU legislation to implement the so-called ‘full-cost model’ to all externally funded research in the academic sector. In essence, this enabled university finance departments to guarantee that external funding was correctly used for specified projects, and not for more general tasks such as thinking, deleting spam e-mails or the online submission and verification of time-management data. The new system, as outlined in the EU’s Non-Working Time Directive, will track citizens’ use of their time outside the workplace. Its aim is to ensure that public funds are properly used for the purposes intended. For example, if x% of the state budget is being spent on dental services, and this represents a fraction y of total national spending on dentistry, including the production, marketing and sale of toothpaste, then citizens should be spending (x/y)% of their non-working time visiting the dentist, brushing and flossing their teeth or managing their dentures.” Read on at EMBO reports to discover where all this leads.

EMBO reports asks “Is the end in cite?”

In a Correspondence to EMBO reports (10, 1186; 2009), Mark Patterson asks how we can avoid Howy Jacobs’s “light-hearted nightmare scenario” of the future of citation-based metrics. Patterson, director of publishing at the Public Library of Science (PLoS), presents his own organization’s article-level metrics as a better alternative to the journal-level metrics that are currently in most common use as research output measures. He writes: “Article-level data are not without their problems, and so it is important to interpret the data carefully. But, we believe that providing the data in the first place will inspire new ideas about how to assess research. Rather than limiting attention to the journal impact factor, it will be possible to ask sophisticated questions about the impact and influence of published research, and to obtain meaningful answers. For example, for a piece of research that is aimed at practitioners, we might want to know the extent to which it has actually changed practice—citation metrics probably would not be of much help in that case. And it should be possible to find work that only emerges with the passage of time as crucial for the development of a particular field.” Noting that the PLoS journals no longer promote impact factors at their website, Patterson concludes: “As alternatives begin to emerge, the primacy of the impact factor will be challenged. But this will only happen if other stakeholders also take a stand.”

EMBO reports’ vision of impact futures?

Everyone loves to hate citation metrics, but EMBO reports (10, 1067; 2009) perhaps goes further than most in Howy Jacobs’s October editorial vision of where it all may lead, which starts:

Unalaska, 2045. The announcement by the government of the Pacific Union that it will start to tax academic scientists according to their Impact Factor (IF) points has unleashed a storm of controversy. As the field that has traditionally, and for more than half a century, led the citation ratings, molecular biologists consider themselves to be at the forefront of this battle against such a blatant attack on academic freedom.

In the latter half of the twentieth century, a trend began to emerge, initially in the former USA, where scientists were expected to raise a substantial proportion—eventually the entirety—of their salary from competitive research grants. In return, academic institutions freed their professors from the formal responsibility to teach, while recouping enormous financial benefits in the form of what were then called ‘overheads’. In the first decades of the present century, scientists and their personal financial advisors began to realize that this system made them, in effect, self-employed managers of small businesses.

The article continues at the EMBO reports journal website.

Visualization tools for estimating the quality of scientific output

In a Correspondence to EMBO reports (10, 800–803; 2009), Beatrix Groneberg-Kloft, David Quarcoo and Cristian Scutaru of the Free University Berlin and Humboldt University, Berlin, describe a combination of scientometric tools and new visualizing techniques, such as density equalizing mapping, to show that research in the European Union has developed well so far this decade. Despite static levels of research spending as a percentage of gross domestic product (GDP), the authors write that the success of European science should not only be measured in terms of ‘work force’ and spending, but also in terms of its actual output—that is, publications.

The authors report that the total number of publications from European research groups in all journals listed in PubMed increased by 49.37% between 2000 and 2006. The growth in publications ranges from 24.09% in Finland, through 37.02% in the UK and 44.09% in Germany, to immense increases of 162.12% in Portugal and 402.70% in Lithuania, correlating significantly with each country’s GDP.

As well as this quantitative marker, the authors used citation indices—total citation numbers or average citation per publication—from the Web of Science (WoS) database as qualitative or semi-qualitative parameters. They analysed the data for the total number of articles from a specific country; the total number of citations for a specific country; the average number of citations per published item for countries with at least 30 published articles; and bilateral research cooperations.
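
As a rough illustration of what computing such indicators involves, here is a minimal Python sketch. It assumes article records of the form (country, year, citations); the data, function names and threshold handling are illustrative stand-ins, not the authors’ actual PubMed and Web of Science pipeline:

```python
from collections import defaultdict

# Each record is one published article: (country, year, citations).
# Toy data for illustration; the study drew on PubMed and Web of Science.
records = [
    ("Portugal", 2000, 5),
    ("Portugal", 2006, 9), ("Portugal", 2006, 3),
    ("Germany", 2000, 12), ("Germany", 2000, 7),
    ("Germany", 2006, 30), ("Germany", 2006, 11), ("Germany", 2006, 2),
]

def percent_growth(records, country, start, end):
    """Percentage growth in a country's article count between two years."""
    n_start = sum(1 for c, y, _ in records if c == country and y == start)
    n_end = sum(1 for c, y, _ in records if c == country and y == end)
    if n_start == 0:
        raise ValueError("no articles in the baseline year")
    return 100.0 * (n_end - n_start) / n_start

def citation_indicators(records, min_articles=30):
    """Total citations per country, plus average citations per published
    item for countries meeting the minimum-article threshold."""
    n_articles, n_citations = defaultdict(int), defaultdict(int)
    for country, _, citations in records:
        n_articles[country] += 1
        n_citations[country] += citations
    return {
        country: {
            "articles": n_articles[country],
            "total_citations": n_citations[country],
            "avg_per_item": (n_citations[country] / n_articles[country]
                             if n_articles[country] >= min_articles else None),
        }
        for country in n_articles
    }

print(percent_growth(records, "Portugal", 2000, 2006))  # 100.0
print(citation_indicators(records, min_articles=2))
```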

After presenting a number of maps, charts and tables, the authors conclude that scientometric tools combined with visualizing techniques can track and analyse scientific progress, displaying the results in an easily accessible manner.

Only one way to measure scientific achievement

Let’s stop playing with numbers, suggests Cheng-Cai Zhang of Aix-Marseille Université and Laboratoire de Chimie Bactérienne-CNRS in the latest EMBO reports (10, 664; 2009).

He writes: “It is becoming increasingly fashionable to play with numbers, or letters representing numbers (for example, h and w), to measure the performance of a scientist or a scientific journal. Developing algorithms to calculate such numbers is becoming a science in itself, with each author claiming that his or her metrics measure better than others… let us go back to the basic question: is it possible to measure the contribution, even relative, of a scientist in any particular field without bias by relying on metrics? The answer is no… the solution is the review process, conducted by peers in both the funding and publishing systems, which already has the most essential role in assessing scientific quality and thus advancing science. Who among us has ever relied solely, or even mainly on indexes or citations to help us make a decision when reviewing a project or a manuscript submitted to a journal? I would argue that none of us rely on this type of data at all. It is therefore time to stop these futile efforts in searching for a magic number—which does not exist, by the way—and instead to rely on and trust the judgement of our peers to measure the scientific achievements of a scientist or the relevance of a journal.”
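
For context, the ‘h’ mentioned in the quotation refers to Hirsch’s h-index, the largest h such that a scientist has h papers with at least h citations each (the ‘w’ index is a variant built on the same idea). A minimal sketch of the calculation:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4: four papers
# have at least 4 citations, but there are not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```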

Nature Chemistry on judging scientific success

There are many different criteria that can be taken into account when judging the scientific success of individual researchers, but are some more meaningful than others? Nature Chemistry in its July Editorial (1, 251; 2009) is the latest to address this perennial question. (See, for example, this Nature Network forum on citation use and abuse.)

Nature Chemistry points out that the “basic currency of scientific communication is the journal article, and so it seems sensible to use this as a starting point for evaluating success in a given area. At first glance, this is a particularly attractive approach because we can boil down an individual’s publication record to cold hard numbers. For example, we can count how many papers someone has to their name and we can also count the number of times a specific article has been cited — or indeed how much an individual’s complete body of work has been cited. Moreover, the rise of the internet has made finding these numbers a fairly trivial task. But can we make meaningful comparisons?”

The Editorial identifies the fallacy of using journal ‘impact factors’ for this purpose, as well as flaws in the alternative metrics that have been devised. Other suggested ‘success measures’ include the amount of funding a scientist can attract, recognition by peers (for example, prizes and awards), and education.

Nature Chemistry: journal homepage.

About Nature Chemistry.

Nature Chemistry guide to authors.

Previous Nautilus posts on quality measures.

Previous Nautilus posts on citation analysis.

Nature journals’ impact factors for 2008

Thomson Reuters have just announced the 2008 Impact Factors. Nature is the top journal in the multidisciplinary science category by all of Thomson Reuters’ new metrics: 5-year Impact Factor, Eigenfactor and Article Influence score. It also tops all 6,598 journals listed in the Journal Citation Reports (Thomson Reuters, 2009) by Eigenfactor score. Here are the 2008 Impact Factors for the Nature journals that publish primary research (a sketch of how the standard two-year figure is calculated follows the list):

Nature 31.434

Nature Biotechnology 22.297

Nature Cell Biology 17.774

Nature Chemical Biology 14.612

Nature Chemistry N/A

Nature Genetics 30.259

Nature Geoscience N/A

Nature Immunology 25.113

Nature Materials 23.132

Nature Medicine 27.553

Nature Methods 13.651

Nature Nanotechnology 20.571

Nature Neuroscience 14.164

Nature Photonics 24.982

Nature Physics 16.821

Nature Structural & Molecular Biology 10.987
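
For readers unfamiliar with the headline metric: a journal’s two-year impact factor for year Y counts the citations received in Y by items the journal published in Y-1 and Y-2, and divides by the number of citable items published in those two years. A minimal sketch, with hypothetical inputs (Thomson Reuters’ actual rules for what counts as a ‘citable item’ are more involved):

```python
def two_year_impact_factor(citations_to, items_published, year):
    """Two-year impact factor for `year`: citations received in `year`
    to items published in the two preceding years, divided by the
    number of citable items published in those two years.

    citations_to[y]    -- citations received in `year` to items from year y
    items_published[y] -- citable items the journal published in year y
    """
    window = (year - 1, year - 2)
    citations = sum(citations_to[y] for y in window)
    items = sum(items_published[y] for y in window)
    return citations / items

# Hypothetical inputs only, for illustration:
print(two_year_impact_factor({2007: 30000, 2006: 33000},
                             {2007: 1000, 2006: 1005},
                             2008))  # ~31.42
```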