What we publish

The editors of a scientific journal have an editorial prerogative to publish articles that fall under the editorial scope of the journal as they see it. But defining this scope in a way that is clear to those outside the editorial team can be difficult and any definition can become dated as science and the journal evolve. Here we discuss the scope of Nature Methods.

As stated in our Guide to Authors, Nature Methods publishes “novel methods and significant improvements to tried-and-tested basic research techniques in the life sciences.” We broadly define “research techniques” as methodological procedures, biological or synthetic reagents, computational algorithms, software tools, instrumentation and other technologies.

The phrase “basic research” in the sentence above is key and, as explained in April’s Editorial, methods intended for later-stage applications in the clinic, drug discovery or industrial processes are generally considered outside our scope. These applications are often classified as biotechnology and thus are probably more appropriate for Nature Biotechnology or, if clinically oriented, a Technical Report in Nature Medicine.

But as April’s Editorial acknowledges, method and tool developments can be relevant for both basic research and more ‘downstream’ applications. This requires us to continuously walk an editorial tightrope between them. As circumstances change and fields develop, we may need to adjust how we apply our editorial scope.

As also stated in our Guide to Authors, Nature Methods is targeted at “academic and industry researchers actively involved in laboratory practice.” The phrase “laboratory practice” is intended to indicate that the journal generally publishes only methods for work that occurs in a research laboratory environment. On occasion, we may consider compelling work that doesn’t fall under the typical definition of laboratory research, particularly in areas like ecology where the basic research environment extends beyond the confines of a brick-and-mortar lab. An example was our publication of The Metatron: an experimental system to study dispersal and metaecosystems for terrestrial organisms.

We are constantly reassessing our editorial scope and can work with authors to adapt the presentation of work that might otherwise be considered out of scope if we feel it is sufficiently compelling, relevant to our readership and can be appropriately presented as important for basic research. We are happy to respond to presubmission inquiries, submitted via our manuscript submission system, asking about the appropriateness of a proposed manuscript for our scope. But if a manuscript is already written, please submit the full manuscript as a regular submission and don’t worry about formatting it to fit Nature Methods’ article style. This will allow us to make a more informed decision, and formatting can be dealt with later in the event we proceed toward publication. If a manuscript is clearly out of scope, we will endeavor to provide a fast decision.

Finally, if there is an area of basic biological research that you feel is underrepresented in Nature Methods but should be of substantial interest to our readership, please let us know. For example, we published virtually no computational methods for the first several years of the journal, but they now represent a substantial fraction of our articles. As we strive to serve our readers, we want to avoid falling into a pattern of publishing research limited to a few areas. Our success in doing this depends heavily on receiving submissions from a broad range of research areas, and we encourage the wider basic research community to consider Nature Methods even if we haven’t yet published much, or anything, in a particular area.

Here there be software

Software plays an important role in scientific research, and published studies increasingly rely on custom software code developed by authors. This calls for better transparency in research articles and improved access to the software and code itself.

This month in Nature Methods and on methagora we revisit issues regarding software reporting and availability first raised exactly seven years ago in our March 2007 Editorial “Social software”. Our March 2014 Editorial updates and expands on these editorial policies, and a blog post provides details of our guidelines for custom algorithms and software reported in Nature Methods research papers. We encourage researchers to read these, particularly those considering submitting a research manuscript that uses or reports custom software. We also hope that publicizing our editorial policies might aid other journals in thinking about how to handle algorithms and software associated with the research they publish.

Of course, these efforts are only one small part of what needs to be done to improve access to and use of scientific research software. As can be seen by our somewhat complex guidelines, it is difficult to establish simple rules that are sensible and fair for all cases and all communities. Community participation will be essential for refining and improving how software is handled.

Nature Methods currently relies on the use of Supplementary Software zip files for authors to supply the software and code underlying research articles. This isn’t pretty but it fulfills our basic needs. For example, 50% of the research articles in our March issue contain Supplementary Software files. But better methods are needed to archive and document code and assign provenance.

An important initiative in this regard is the “Code as a research object” project, a collaboration between Mozilla Science Lab, GitHub and figshare that seeks to “better integrate code and scientific software into the scholarly workflow.” The aim is to create citable endpoints for the exact code used in particular studies. [Full disclosure: figshare is a product of Digital Science which, like Nature Methods, is part of Macmillan Publishers.]

The project is still in its early stages and follows on the similar but broader Research Object community project. Similarly, GigaScience and F1000Research are experimenting with archiving code and pipelines with DOIs.

We applaud these efforts and encourage the broader research community to participate in them. The current discussion about what is needed for code reuse (announced on the ScienceLab blog) and going on in a thread on GitHub would greatly benefit from more input by researchers who don’t consider themselves code jockeys.

There are many sophisticated and powerful things that could be done in an ideal world to facilitate code exposure and reuse, but the situation at the great majority of journals is so underdeveloped and the needs so acute that even small flexible steps forward will have a positive impact. Most important is for facilities to be put in place that allow and encourage the entire community to move forward, not just a small portion of it.

Guidelines for algorithms and software in Nature Methods

A large proportion of original research published in Nature Methods relies to varying degrees on custom algorithms and software developed by the authors. Here we provide guidance on our relevant material sharing and reporting policies.

Nature Methods first outlined our material sharing and reporting standards for algorithms and software in a March 2007 Editorial. Now, after seven years of experience applying those policies, we have updated and expanded them in our March 2014 Editorial. On this page we provide more detailed guidelines for authors submitting manuscripts containing unpublished algorithms and software they created. We are posting this information here because we’d like these guidelines to evolve and we want input from our communities on how they think this should happen. Please comment below and let us know your thoughts. We will update this document as our policies change.

Manuscripts published in Nature Methods include methods and tools in which algorithms and software represent an increasingly important methodological component. However, the degree to which they are central to the reported methodology can vary considerably. The algorithm or tool may be the entire motivation for publishing the work, or it may be ancillary to it. Additionally, the methodology may be a novel algorithm of value in and of itself, but a coded implementation is still necessary for the authors to show that it works as expected. Finally, the software tool may implement existing algorithms in a user-friendly form to deliver high-value functionality of substantial general interest. Because of this wide variety, it is inappropriate to enforce one-size-fits-all standards for algorithms and software reported in Nature Methods. The guidelines below represent our current editorial position on software reporting and release.

Client-side Software
This is software that is installed and used on a personal computer and is not intended to be accessed remotely as a web service. It can be entirely stand-alone on a commonly available operating system (Windows, Mac OS X or *nix) or can require the user to have a popular software platform installed (e.g., MATLAB or LabVIEW). In all cases, but particularly when MATLAB or LabVIEW is required, all platform versions and software dependencies must be detailed in the supplied documentation.

At Submission

  • If the custom algorithm/software is central to the method and has not been reported previously in a published research paper, it must be supplied by the authors in a usable form, including one or more of the following:
    1. Source code
    2. Complete pseudocode
    3. Full mathematical description of the algorithm
    4. Compiled standalone software

    We strongly urge that full source code be provided. A compiled executable alone is not sufficient, though one may be required in addition if the tool is intended to be of wide general use. Final acceptable forms of release of the algorithm, software and code will be determined by the editor after consultation with referees. This decision will be influenced by the editorial motivation for publishing the work (e.g., high novelty, satisfying a wide general need).

  • If the software is ancillary to the methodology being reported or is a routine implementation of obvious processes, such as microscope control software or analyses that are otherwise adequately described, the software need not be supplied to reviewers at submission but final release requirements may change in the course of the review process.
  • Supplied source code or software must be accompanied by documentation sufficient for a typical user to compile, install and use the software. Depending on the nature of the software tool, how central it is to the manuscript and our editorial motivation for considering the work, the minimum documentation may be a simple readme file or a full manual in PDF format.
  • If appropriate, sample data known to work with the software should be provided along with the expected output. Referees are encouraged to try the tool on their own data.
  • The software and associated files may be supplied for reviewers as either:
    1. A single Supplementary Software zip file up to 200 MB in size
    2. Four DVDs to be mailed to the reviewers.
  • Any restrictions on the availability of software or code used to implement novel algorithms must be specified at the time of submission. Editors will decide whether any restrictions are acceptable in consultation with the reviewers. If some restrictions are deemed acceptable, they must be clearly explained in the methods section of the manuscript. Authors must supply all information needed for the reviewers to properly evaluate the software or code. If the motivation of the submitted manuscript is to provide a useful tool, rather than report a new algorithmic development, there should be no substantial restrictions on software or code availability.
  • We encourage authors to provide a license with the software or code.
  • A narrative description of key algorithmic components should be provided in the main text. Extensive equations, pseudocode or snippets of source code should be confined to the Online Methods or a Supplementary Note.
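One concrete way to meet the sample-data bullet above is to bundle a small smoke test with the Supplementary Software so that referees can verify their installation before trying their own data. Everything here (the `analyze` routine, the sample values and the expected output) is hypothetical, a sketch of the pattern rather than any required format:

```python
"""Hypothetical smoke test an author might ship alongside Supplementary Software.

Runs the tool on the bundled sample data and compares the result to the
expected output, confirming the installation works before referees
apply the tool to their own data.
"""

def analyze(values):
    # Stand-in for the tool's actual analysis routine: here, a simple mean.
    return sum(values) / len(values)

def smoke_test():
    sample_data = [0.5, 1.5, 2.5, 3.5]  # would normally be read from a bundled data file
    expected_output = 2.0               # would normally be read from a bundled results file
    result = analyze(sample_data)
    assert abs(result - expected_output) < 1e-9, "installation check failed"
    return result

if __name__ == "__main__":
    smoke_test()
    print("smoke test passed")
```

A referee can then run the script once to confirm the software behaves as documented before moving on to their own data.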

At Acceptance

  • If the software is central to the methodology and non-obvious, the source code should be provided in a Supplementary Software zip file as described above so that readers can easily access the exact code used to obtain the results in the paper. There are some possible exceptions:
    1. If the author’s institution requires a user to accept a license agreement or if the author has other reasonable grounds for not providing the source code as Supplementary Software, it may be acceptable for the author to host source code on an institutional server and require that users fill out an online form and agree to a license before downloading the software. In this instance the software must have version numbering and a link to the version used in the work must be provided in the manuscript.
    2. In some situations it may be permissible for authors to supply only compiled software as Supplementary Software but to provide the source code to academic users upon email request. Details of availability must be clearly stated in the manuscript.
    3. It is not acceptable to make software and code available by email request only.

  • If the software or code isn’t the main tool/method being reported in the manuscript, the authors may provide a note in the readme file of the Supplementary Software cautioning users that the code is unsupported and not intended for general use. In this case it is permissible for the software or code to be made available only by email request, but the authors must state this availability in the manuscript.
  • Regardless of how the software is made available, the code supplied with the manuscript must be identical to that used to obtain the results in the paper. An exception can be made for changes that don’t alter the processing of input data. The authors may, however, provide a link to access new versions of the software.
  • We strongly encourage authors to include a license with all published software and code.
  • We encourage authors to provide macros for recording the software version and parameter settings during analyses or to integrate this functionality into the software itself.

Web Tools/Resources
These represent a special class of software that often can’t be expected to follow the guidelines outlined above. This is particularly true if the web tool or resource is supplied as a service and has few, if any, novel computational aspects. The only end-user requirement for web tools is that they be freely accessible with any modern web browser.

Nature.com provides a proxy server for reviewers to access web tools and resources anonymously.

At Submission

  • The authors must supply a working link and any necessary login information.
  • Any unpublished algorithms central to the operation of the tool should be supplied in forms 1, 3 or 4 detailed above (source code, full mathematical description or compiled standalone software).

At Acceptance

  • The authors should supply written confirmation that they will keep the website and tool operating and freely accessible for the foreseeable future.

The dos and don’ts of communicating with editors and reviewers

Some thoughts and advice from the editors at Nature Methods on communicating with us and our reviewers, particularly on matters of disagreement.

In the over nine years that we at Nature Methods have been interacting with authors and reviewers, we have experienced a great variety of communication strategies. Some work well; others don’t. In our October Editorial we discuss how neglecting to word criticism productively can undermine the value of the criticism and short-circuit this critical aspect of scientific discourse.

In the three posts that follow we provide practical advice for communicating with editors and reviewers during three critical steps of the publication process. These are: the cover letter, the rebuttal letter and the appeal letter. We hope you find these guides useful and encourage readers to comment on the points made and suggest dos and don’ts of their own.

How to write a cover letter
How to write a rebuttal letter
How to write an appeal letter

Update: It has been suggested that we write a dos and don’ts for reviewers. We agree this could be just as useful for improving the peer review process, possibly more so, and hope to be able to provide this soon.

How to write a cover letter

Part one of our 3-part series on the dos and don’ts of communicating with editors and reviewers.

A good cover letter is a crucial part of the manuscript submission package to Nature Methods. It is not simply an archaic form of communication that is becoming obsolete in a digital world; rather, it should be viewed as an opportunity to convey many important pieces of information about a paper to the editors.

Manuscripts submitted to Nature Methods must first pass an editorial evaluation stage, but as professional editors, we are not experts in every scientific field that the journal covers. Providing context for the paper in a cover letter not only can help the editors reach a quicker decision but also can sometimes tip the balance in favor of sending a borderline paper out for peer review.

Here are some practical tips for potential authors.

The DOs:

  • Do give a brief, largely non-technical summary of the method. Explain how it will have an impact and why the method and its applications will be interesting to a broad biological audience. This can include forward-looking information about potential future applications that authors may be reluctant to share with reviewers or readers of the manuscript. Such a summary is especially crucial for highly technical papers, where the chance that the advance may not be fully appreciated by the editors is often higher.
  • Do put the work in context. Briefly explain the novelty and the specific advances over previous work, but be realistic about what the method can and cannot achieve. Many authors are hesitant to compare their work to previous methods for fear that it will appear to reviewers that they are putting down the contributions of other researchers. But editors may not be aware of the nuances of various approaches to a methodological problem and are more likely to reject a paper without peer review when the advance over previous work is not clear. Authors should not hesitate to discuss freely in the cover letter why they believe the method is an advance (ideally backed up with strong performance characteristics in the manuscript!).
  • Do suggest referees. If the editors decide to send the paper for peer review, providing a list of potential referees, their email addresses and a very short description of their expertise can help the editor assign referees more rapidly. Of course, whether the editor decides to use any of the suggested referees is up to him or her. This is also the place to list researchers who you believe should be excluded from reviewing the paper. (Please note that the names of excluded reviewers should also be entered in the relevant field of the online submission form.) The editors will honor your exclusion list as long as you don’t exclude more than five people; if you exclude everyone relevant in a scientific field such that the review process will not be productive or fair, the editor may ask you to shorten the list.
  • Do tell us about any related work from your group under consideration or in press elsewhere. Explain how it relates, and include copies of the related manuscripts with your submission.
  • Do mention any unusual circumstances. For example, known competition with another group’s paper, co-submission to Nature Methods planned with another group, or co-submission of a related results paper to another NPG journal, etc.
  • Do mention if you have previously discussed the work with an editor. As editors, we meet a lot of researchers at conferences and lab visits and many papers are pitched to us. A brief mention of when and where such a conversation occurred can help jog the memory of why we invited the authors to submit it in the first place.

The DON’Ts:

  • Don’t simply reiterate that you have submitted a paper to us and/or copy and paste the title and abstract of the paper. The cover letter should be viewed as an opportunity to present useful meta-information about the paper, and not tossed off simply as a submission requirement.
  • Don’t go on for pages about what the paper is about and summarize all of your results. The editor will always read the paper itself, so long cover letters are usually redundant. A one-page cover letter is sufficient in almost all cases.
  • Don’t use highly technical jargon and acronyms. Explaining the advance in a general manner can go a long way in helping the editors reach a quicker decision; cover letters that are largely unreadable are of no help to the editors.
  • Don’t overhype or over-interpret. While a description of why the method will advance the field is definitely appreciated, obvious overstatements about the impact or reach of the work do not help and can even reflect poorly on the authors’ judgment of the needs of a field.
  • Don’t assume that going on about your scientific reputation or endorsements from others in the field will sway us. This is not pertinent to our editorial decision. Our decisions are based on whether we think the paper will be a good editorial fit for the journal, not on the laurels of the authors or because someone important in the field suggested that they submit the work to Nature Methods.

And finally, a minor editorial pet peeve:

  • Don’t address your cover letter to “Dear Sir.” This is antiquated language, not to mention often incorrect, given that two-thirds of Nature Methods’ editors are women. Stick to the gender-neutral “Dear Editor” in cases where you are not addressing a specific editor.

Don’t miss parts 2 and 3 of this series of posts covering rebuttal letters and appeal letters. We encourage questions, comments and feedback below. The editors will do their best to answer any questions you have.

How to write a rebuttal letter

A well written rebuttal letter is critical in any resubmission. 

Once the initial reaction to receiving feedback from editors and reviewers about one’s work, be that joy, anger or frustration, has subsided, it’s time for our authors to make one of two decisions: continue to pursue a Nature Methods paper or take their work to another journal.

A realistic look at how the reviewers’ requests can be met will go a long way toward determining whether a revision is likely to succeed and avoiding a futile resubmission.

In cases where the editorial decision was negative and the referees were critical and asked for a lot of additional information, authors who want to resubmit should, before embarking on any revision, first send an appeal (see the post on “How to write an appeal letter” for more details) and rebuttal letter to the editor to discuss whether a proposed list of additional information is likely to address the referees’ concerns.

Authors who receive a positive editorial decision and who are confident that they can address the reviewers’ points nevertheless have to submit a rebuttal letter with their revision.

The rebuttal letter is an author’s chance to directly reply to the reviewers, announce plans to improve the work, clear up misunderstandings or defend aspects of the work. How it is written can make a big difference in whether or not an appeal is granted and how the reviewers judge the revision.

The DOs:

  • Do acknowledge that the reviewers spent a substantial amount of time looking over the paper – rebuttal letters that thank the referees for their time and comments set a positive tone and ensure that the exchange takes place on a productive footing.
  • Do acknowledge that a misunderstanding may be due to poor presentation on your part, not lack of expertise on the reviewers’, and phrase your reply accordingly, taking the opportunity to clarify.
  • Do copy the full text of each reviewer’s comments into your rebuttal and reply to every concern immediately after each point, concisely stating how you plan to address it (experimentally or editorially) or pointing to data that already addresses it but that the reviewer appears to have missed.
  • If you cannot address a point at all, explain why not.
  • Do number the comments or at least break them into paragraphs, and use different fonts or text colors to distinguish the reviewer comments and your reply, rather than write a single reply to an entire review in summary form.
  • Do include relevant citations with full references or DOIs so they can be easily looked up, rather than just citing by First Author et al.
  • Do include pertinent new data as embedded figures, tables or attachments, and indicate where in the manuscript you added the information; give page numbers, figure panels, Supplementary material, etc., so editors and reviewers don’t have to go on a search for the new data. If any of this information will not be included in the revised paper, explain why not.
  • Do be succinct and to the point, and avoid epic discourses. In cases where more than one referee has raised the same concern, it’s best to write, for example, “see response to point 2 from Reviewer #1”.
  • Do remember that each reviewer sees all comments and your replies so be equally respectful to all.

The DON’Ts:

  • Don’t vent or accuse the reviewers of bias or incompetence. We have read countless times that “referee 2 is lacking expertise and completely misses the point,” and one wonders what the goal of such blanket statements is. They serve no productive purpose and instead potentially bias all referees, even the positive ones, against the work.
  • Don’t plead that for personal or monetary reasons critically important experiments can’t be performed. While we hear the plight of underfunded labs we don’t make exceptions for these reasons.
  • Don’t ignore specific requests by referees without comment and selectively only answer a few queries.
  • Don’t rephrase a referee’s point to give it a slightly different meaning that you can more easily address.

Don’t miss parts 1 and 3 of this series of posts covering cover letters and appeal letters. We encourage questions, comments and feedback below. The editors will do their best to answer any questions you have.

How to write an appeal letter

Although usually unsuccessful, a strong appeal letter can be an important tool for authors.

Rejection is never easy. You’ve put long weeks, months, maybe even years of work into a project that you think is perfect to publish in Nature Methods, so your feelings of disappointment, anger, frustration or self-doubt are completely understandable. Your first instinct might be to hit “reply” and send an angry email to the editor. But your best bet is to take some time to cool off, then move on and submit the paper elsewhere. If you are convinced, however, that a serious error has been made or that you can fully address the specific criticisms raised by the editors or by referees, then you may send a constructive appeal letter to the editor.

Nature Methods has different types of rejections, with or without peer review. There are outright rejections (which represent the vast majority), and then there are rejections where the editor indicates that a manuscript could be reconsidered if the authors can address specific shortcomings. The editorial decision process is of course a subjective and imperfect one. Appeals, however, are usually unsuccessful. Successful appeals are those in which the authors make a strong case for reconsideration, typically by proposing new data that will strengthen the work or by demonstrating how it is a strong advance over existing methods.

Be aware that appeals are necessarily given a lower priority than manuscripts still under consideration. Decisions on appeals can therefore take a considerable amount of time, and the majority of appeals are turned down. Decisions are usually only reversed if the editors can be convinced that the decision to reject was a serious mistake, if the authors can add a substantial amount of data to address certain shortcomings, or if a negative referee is found to have made serious errors or shown specific evidence of bias.

An appeal letter is not the same as a rebuttal letter to referees (see the related post, “How to write a rebuttal letter”). An appeal letter is only read by the editors, so sensitive information not meant to be seen by the referees can be included.

Here are some things that do and don’t work when writing an appeal letter.

The DOs:

  • Do consider whether you have a case for appeal strong enough to be worth investing time in the process. By editorial policy, appeals must take second place to new submissions. This means that it can take as long as several weeks for the editors to discuss an appeal, possibly get input from referees, and reach a decision. Unless your case is very strong, you will save precious time by accepting the editorial decision and submitting the manuscript elsewhere.
  • Do clearly explain the reasons why you disagree with the decision to reject. In some successful cases, authors provide new information, not apparent from the original submission, explaining how the method will have a strong impact on a broad audience. Ideally such information would have been included in the cover letter with the original submission (see the post, “How to write a cover letter”), which can help avoid the need for a lengthy appeals process for a manuscript that is otherwise a good candidate for peer review.
  • Do explain how you plan to rectify any major shortcomings pointed out by the editor or by the referees. If you are willing to add data to the paper to address the shortcomings, explain what these data are and what they show. If you have figures or tables prepared, include them with your appeal letter. (However, don’t rewrite your manuscript yet; since most appeals are turned down, this is usually just a waste of your time.) If you have a valid reason for not including such data, explain why not.
  • Do include a separate point-by-point rebuttal letter to referees to assist the editors in reaching a decision (see the post on “How to write a rebuttal letter”). If the editors feel that a rebuttal letter is required to help them reach a decision, and one is not included, they will request one from you.
  • Do provide evidence for any accusations of referee bias. Describe in specific terms why you believe a referee is biased or has made technical errors in their review. In our experience, it is extremely rare that ALL of the referees of a paper would be biased or misjudge its impact. Don’t try to guess who the referees were (you will most often be wrong). In cases where one set of negative referee comments is far out of line with others that are generally positive, we often will consult with the positive referees to determine whether the dissenting referees’ concerns are serious and how they should be addressed.

The DON’Ts:

  • Don’t do anything in the heat of the moment; take some time to cool down and consider whether you would be better off resubmitting elsewhere.
  • Don’t simply reaffirm the importance of the work, write “we think you are making a mistake” or urge us to send a manuscript out for peer review without providing any justification. Appeal letters lacking a good justification will not convince us to change our minds.
  • Don’t try to bribe us with promises of high citations. While of course we hope for high citations for each research paper we publish, citation potential is far from being the most important editorial consideration (and it cannot be accurately predicted, anyway). Papers must first meet our standards of methodological novelty and potential community interest and impact.
  • Don’t assume that the paper must be of interest to us because we have previously published a similar paper. Editorial standards are constantly evolving, and the methodological novelty may be compromised by our previous publication. Additionally, we strive to publish a variety of novel methods across fields, so we must consider what is currently in our pipeline.
  • Don’t bash previous work. As editors, we want to know how a new method addresses certain shortcomings or significantly expands the applicability of a previously published method, but this discussion should be fair and balanced. Don’t simply say “the previous method doesn’t work”; explain why it doesn’t, and ideally provide experimental evidence. Providing a detailed comparison to previous methods in your paper in the first place can help avoid a lengthy appeals process for a manuscript that is otherwise a good candidate for peer review at Nature Methods.
  • Don’t expect us to be swayed by your scientific reputation. While it is informative to give some background of your expertise in a field, we make decisions based on the fit of a paper with our journal in terms of scope, novelty and potential impact, not simply because the work comes from a good lab. The fact that you have coauthored papers in high-impact journals will not lead us to reverse our decision.
  • Don’t rely on “celebrity endorsements”. It is good to hear that a leader in the field has read and likes your paper, or that 50 people came to view your poster at a conference. But if we feel that the paper is editorially not a good fit for Nature Methods, this is unlikely to make us change our minds about rejection.
  • Don’t insult the intelligence or competence of the editors or referees. We know that rejections are upsetting and can often seem unfair. But personal attacks and bullying could compromise your success in an otherwise promising appeal situation.
  • Don’t appeal every decision. Remember the old saying, “you’ve got to pick your battles.”

Don’t miss parts 1 and 2 of this series of posts, on cover letters and rebuttal letters. We encourage questions, comments and feedback below. The editors will do their best to answer any questions you have.

A retraction resulting from cell line contamination

After nine years in print, Nature Methods today published its first retraction: one that could have been prevented by cell line authentication. What does this mean for journal-mandated cell line testing?

Two-photon fluorescence image of a live primary gliomasphere from the retracted manuscript.

In a Nature Methods paper published in 2010, Ivan Radovanovic and colleagues described a method to isolate cancer-initiating cells from human gliomas without the need for molecular markers. Based on morphology and a green autofluorescence signal, the authors reported that they could use FACS to sort cancer-initiating cells from gliomasphere cultures (which had been derived from primary tumors). They also detected autofluorescence in cells from fresh glioma specimens, but at a much lower level.

Cells from the autofluorescent fraction could self-renew clonogenically in vitro and were tumorigenic when transplanted into mouse brains, the authors reported, and in both cases outperformed non-autofluorescent cells from the rest of the culture or tissue. The origin of this autofluorescent signal was not understood at the time; the authors speculated that it might be related to the unique metabolism of the cancer-initiating cells.

It turns out that most of the primary gliomasphere lines (7 out of 10) were contaminated with GFP-expressing HEK cells, leading to retraction of the paper. Using short-tandem-repeat (STR) profiling of two of the lines, the authors determined that the contamination occurred over the course of culture in the lab: samples from early passages matched the original tissue from which the lines were derived, but later passages did not.
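The passage-to-passage comparison described above is typically quantified as a percent match between STR allele profiles, with a match of roughly 80% or more conventionally taken to indicate a common origin. As an illustration only (the allele calls below are made up, not data from the retracted paper), here is a minimal Python sketch of one widely used scoring scheme, the Tanabe algorithm:

```python
# Hypothetical STR profiles: locus name -> set of observed alleles.
# These allele calls are invented for illustration.
early_passage = {
    "TH01": {6, 9.3}, "D5S818": {11, 12}, "TPOX": {8, 11},
    "vWA": {16, 18}, "D13S317": {9, 11}, "CSF1PO": {10, 12},
}
late_passage = {
    "TH01": {7, 9.3}, "D5S818": {8, 9}, "TPOX": {8, 11},
    "vWA": {16, 19}, "D13S317": {12, 14}, "CSF1PO": {11, 12},
}

def tanabe_match(a, b):
    """Percent match between two STR profiles (Tanabe algorithm):
    2 x shared alleles / (alleles in a + alleles in b), over common loci."""
    shared = total = 0
    for locus in a.keys() & b.keys():
        shared += 2 * len(a[locus] & b[locus])
        total += len(a[locus]) + len(b[locus])
    return 100.0 * shared / total

score = tanabe_match(early_passage, late_passage)
# Profiles matching at >= 80% are conventionally treated as the same line.
print(f"{score:.1f}% match -> {'same line' if score >= 80 else 'mismatch'}")
# prints: 41.7% match -> mismatch
```

A score this far below the 80% threshold would flag the later passage as no longer matching the original line, which is exactly the pattern the authors reported.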

It is hardly surprising that the first retraction in Nature Methods is due to cell line contamination, a well-recognized problem. A 2009 Editorial in Nature pointed to the disturbing results of cell testing by repositories, which indicated that 18–36% of cultures were misidentified. It called on repositories to authenticate all of their lines and on major funders to provide testing support to grantees. With such support in place, funders could require cell line validation for investigators to retain funding, and Nature could require that all immortalized lines used in a paper be verified before publication. Unfortunately, it is now 2013 and we are still far from this goal.

But progress is being made. Community-based efforts are alerting researchers to this problem and providing resources to help them avoid being misled by erroneous results caused by cell line contamination. A 2012 Correspondence in Nature by John R. Masters on behalf of the International Cell Line Authentication Committee (ICLAC) pointed to the following resources available to researchers:

Please go to the ICLAC website for the most recent version of each of these documents.

Meanwhile in early 2013, at the publication end of the process, the Nature journals published coordinated editorials announcing a reproducibility initiative and stating that “…authors will need to […] provide precise characterization of key reagents that may be subject to biological variability, such as cell lines and antibodies.” In practice, the Nature journals are currently requiring all authors to state whether or not testing was done but are only requiring testing in cases where it makes particular sense.

Advocates of mandatory testing have cogent arguments for a uniform policy. First, it avoids sending a confusing message; second, without testing, researchers cannot be certain that cell misidentification or mycoplasma contamination is not affecting their results; and finally, continued publication of inaccurate species and tissue designations for misidentified cell lines propagates misinformation.

In the work described in the retracted 2010 manuscript from Radovanovic and colleagues, mandatory testing would certainly have been beneficial. For much of the work published in Nature Methods, however, testing would have no impact on the reported results. For example, in 2011 and 2012 we published at least 17 manuscripts reporting new fluorescence microscopy methods that used imaging data from cell lines to assess the performance of the techniques in measuring fundamental cell properties, such as the appearance and width of actin or microtubule filaments, membrane vesicles or other universal cellular structures. Neither cell line identity nor mycoplasma contamination would affect the efficacy or conclusions of these measurements. The same situation exists for the validation and testing of many methods in other research disciplines, such as proteomics, genomics and biophysics.

Even if these labs should be performing cell line validation and mycoplasma testing as a matter of proper cell culture practice, making such testing a requirement for publication of these studies is unjustified.

But clearly, even our most recent efforts at improving compliance with good testing practice will not be sufficient to eliminate cell line contamination as a problem in work published in Nature journals. A possible solution would be to require testing by default while permitting authors to argue why, in their case, testing is clearly unnecessary. Editors (possibly with reviewer input) would be the final arbiters and would need to ensure that, although the lines must be named and sourced, no species or tissue identifiers appear in the manuscript in the absence of proper validation.

Technology development labs or others that only use cell lines for purposes distinct from biological investigation could continue to avoid testing. But any lab that might potentially use their cell lines to obtain biological results would know that they should institute a proper testing regimen or risk their work not being publishable in a Nature journal.

At this point this is only an idea based on our experience at Nature Methods. We encourage the community to comment and let us know what they think.

Reporting standards to enhance article reproducibility

Beginning May 1st, Nature Methods will require authors of manuscripts being sent back for peer review to fill out a checklist disclosing technical and statistical information about their submission.

The May Editorial briefly describes why we are using this checklist and provides some details of what is included. Authors can find the checklist that Nature Methods will be using at https://www.nature.com/nmeth/pdf/sm_checklist.pdf and there is a link to it on the journal homepage. Our checklist is identical to that of most of the other Nature journals except for an added item asking authors to “Identify all custom software or scripts that were required to implement the methodology being described and where in the procedures each was used.” Based on feedback we have received, missing software or scripts are the problem most often mentioned by people commenting on challenges in reproducing a method we have published. This reporting requirement is an important step toward addressing this deficiency.

We expect that the addition of these reporting requirements will elicit some grumbling from authors. But based on the experience of Nature Neuroscience, which has been requiring authors to fill out a methods checklist before even the first round of review, we expect authors will come to appreciate the role it serves.

The checklist is only one part of the efforts the Nature journals are making to improve reproducibility. The other journals are also removing formal limits on the length of the methods section. But since Nature Methods has long had no limit on the length of its Methods section, the checklist is the most prominent change for us and our authors.

The May issue also contains other articles relevant to reproducibility. The Correspondence section has a discussion about analyzing the reproducibility of animal experiments. And the May Technology Feature discusses reproducibility in quantitative PCR, a methodology that has suffered from serious problems in this regard due to poor experimental technique and reporting.

For those not yet tired of reproducibility, Nature also has a Special Focus on Challenges in irreproducible research.

As has been said in the editorials on the subject, this is only a first step toward improving the reproducibility of our published research and we welcome feedback from the community on our efforts.

Nature journals provide a CC license for community experiments

Nature Methods has long advocated the value of community experiments (or competitions/challenges) for assessing and comparing the performance of algorithms and software tools. In 2008 we discussed the value of these competitions and advocated that they also be used to assess the performance of less widely used algorithms, such as those for single-particle tracking. Such an experiment assessing single-particle tracking was run in 2012, although the results are still awaiting publication.

Publication of such work has often been confined to more specialized journals, but in 2012 Nature Methods began publishing manuscripts emanating from these competitions, starting with one assessing the performance of gene regulatory network inference methods based on the results of one of the DREAM5 challenges.

In recognition of the profound value such challenges provide to the wider scientific community, the Nature journals will now publish manuscripts describing the results of these challenges under a Creative Commons Attribution-NonCommercial-ShareAlike Unported license. This is the same license we use for publishing first genome papers, standards papers and white papers. The first example is an Analysis article published in Nature Methods yesterday describing the results of the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment.

Publication of such community experiments will necessarily be highly selective, and likely increasingly so as challenges become more prevalent, as illustrated by the explosion in the number of Grand Challenges in Medical Image Analysis. But these community experiments provide invaluable information on the performance of methods that are otherwise difficult to compare objectively. We hope that the potential for publication in a Nature journal, and the open access provided by a Creative Commons license, will encourage broader participation in these efforts and greater visibility of the results.

Update: February 12
We just published another manuscript describing a community experiment. This Analysis article presents the results of the first FlowCAP challenge that assessed the performance of flow cytometry automated analysis methods.