What we have learnt from testing container platforms for peer review and publication of code

A year ago now, we launched a trial to test the use of cloud-container platforms for peer review and publication of code at several Nature journals. The trial phase of this initiative is now officially over, and we would like to share the experience and outcomes, and provide an overview of what comes next.

What problem are we trying to solve?

Our guiding principle is that when new code is central to the main claims made in a paper, the code must meet the same quality and reproducibility standards as the paper itself. This means the code needs to be properly documented, evaluated by experts to confirm it is functional (i.e. peer reviewed), and permanently identified and accessible at the time of publication to ensure the reproducibility of the results. (The same principles apply to other research objects, such as data and protocols, but those were not the focus of this particular trial.)

Nature Methods adopted the practice of ‘peer reviewing code’ for software papers in 2007 (editorial). Under this practice, editors require authors to submit the source code, a test dataset and installation instructions, and ask reviewers to install and test-run the code during peer review. This form of peer review is highly time-consuming for authors, editors and reviewers, but it is also necessary: it is not uncommon for reviewers to point to basic flaws in the instructions or files that would render the code completely unusable.

Over a year ago, we partnered with Code Ocean, a Docker-based platform that allows authors to deposit code and data and enables users to run the code in the cloud, either with the original parameters to reproduce the results or with new input values. Together we developed a set of workflows and basic platform functionality that enables authors to upload the code and data associated with their submission, and reviewers to access the platform anonymously during peer review (see figure, reproduced from reference 1).
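Code Ocean’s internal capsule format is not specified here, but the general container-based pattern that makes this possible can be sketched as a minimal, hypothetical environment definition. The base image, file layout and entry-point script below are illustrative assumptions, not Code Ocean’s actual format:

```dockerfile
# Hypothetical environment definition for a reproducible code "capsule"
# (illustrative sketch only; not Code Ocean's actual layout).
FROM python:3.10-slim

# Pin dependencies so the code runs identically for authors,
# reviewers and readers.
COPY requirements.txt /capsule/requirements.txt
RUN pip install --no-cache-dir -r /capsule/requirements.txt

# Deposit the code and test data alongside the environment.
COPY code/ /capsule/code/
COPY data/ /capsule/data/

# A single entry point regenerates the published results; users can
# override the --input argument to rerun the code on new data.
WORKDIR /capsule
ENTRYPOINT ["python", "code/run_analysis.py", "--input", "data/test_dataset.csv"]
```

The point of the design is that a reviewer only needs to build and run the container, rather than recreating the authors’ computational environment by hand.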

The trial was meant to evaluate if such a platform would provide:

  • A service to authors, by assisting them in depositing the code and data and compiling them on an open, executable platform.
  • A service to reviewers, by making code peer review easier (as easy as clicking a button); reviewers can evaluate the code in the cloud using computing time that we provide as a publisher, not their own.
  • A service to readers, by providing the code associated with the paper in a way that is properly identified, documented and supplied on a publicly accessible platform that allows running, reusing and repurposing the code.

The trial was optional for authors at the three participating journals (Nature Methods, Nature Biotechnology and Nature Machine Intelligence), and we tracked feedback from authors and reviewers, author opt-in rates and user-engagement metrics.

What have we learnt? Results!

Over 95 papers have now been through the trial, and more than 20 published papers provide open, verified, properly documented and cited code using the technology.

Despite the additional upfront work for authors who sign up to the trial, we’ve seen strong uptake, with 54% of authors across all journals opting in to participate. Importantly, our reviewers actively engage with the platform. Capsules have received an average of 34 views via the private links provided to the reviewers. Approximately half of the reviewers signed up and duplicated the capsules, a prerequisite for running the code, and each reviewer who signed up ran the code 1.3 times on average. Crucially, peer review of code in this manner has surfaced problems with some manuscripts that would otherwise have rendered the code, and hence the results, irreproducible.

Upon publication, we provide the links to access the code and data in a ‘Code Availability’ statement of the paper, which is provided openly to all readers regardless of access status.

We are looking at ways to improve the workflows and experience by providing better information, an easier workflow for editors and authors and better ways of surfacing code that is shared openly and peer reviewed through the use of badges.

We have been very pleased to see that the high standards we are applying to ensure open science and reproducibility of code in our papers have been noticed: we’ve received very positive feedback about the initiative from authors, reviewers and the wider scientific community. You can read more about the initiative and the results in the editorials below and in the Science Editor piece that we recently published.

Science Editor: Three approaches to support reproducible research

Nature Biotechnology: ‘Changing coding culture’

Nature Machine Intelligence: ‘Sharing high expectations’

Nature Methods: ‘Easing the burden of code review’

What’s next?

Given the positive effects we’ve seen so far, we will continue the current practice at these journals. We also want to learn how the workflow scales and to test it in more scientific disciplines, so we have added Nature, Nature Protocols and BMC Bioinformatics to the trial.

A huge thanks to our authors, editors and reviewers who have engaged with us on this journey; we couldn’t have done it without you! We hope that this initiative, alongside others that promote data and protocol sharing, will help our articles live up to the promise of more open and reproducible science.

References

1. Pastrana, E., Kousta, S. & Swaminathan, S. Three Approaches to Support Reproducible Research. Science Editor

AUTHORS: This guest blog comes from Erika Pastrana, Editorial Director for the Nature Research Journals and Sowmya Swaminathan, Head of Editorial Policy and Research Integrity at Nature Research.

Are you aware of gender bias in peer review?

This guest blog comes from Elizabeth Moylan, Senior Editor, Peer Review Strategy and Innovation BMC at Springer Nature, and Elisa de Ranieri, Head of Editorial Process and Data Analytics, Nature journals at Springer Nature.

Last month, Dina Balabanova (Associate Professor in Health Systems Policy at the London School of Hygiene and Tropical Medicine (LSHTM) and Section Editor for BMC Health Services Research) and Jamie Lundine (Research Fellow at LSHTM) hosted a workshop at LSHTM to discuss gender equality in peer review. The specific aim was to discuss ways to support women’s equal participation in the peer review process as authors, peer reviewers and editors in health journals. The workshop was attended by a diverse group of people with a range of backgrounds and experience, including PhD students, researchers, editors, publishers and funders.

Dina set the scene by referring to one of the main messages from the Fourth Global Symposium on Health Systems Research that we can “learn lessons both from poor and rich nations to address the inequities that exist in all communities”. This was with respect to building strong health systems which protect the poor and promote equity. The message struck a chord for Dina, not only in terms of her own experience in health systems and policy research, but also in terms of parallel issues with respect to gender in peer review.

Gender bias in journals

Gender bias is a potential issue for journals across a range of different fields, not just health journals. A recent study of gender bias by Markus Helmer and colleagues on the Frontiers family of journals (which disclose the names of the handling editor and reviewers on published articles) found that while for some journals the proportion of women as authors could be as high as 48%, on average only 38% of peer reviewers are women and only 28% of editors are women. Of course, the final proportion of women who were named as peer reviewers may not have been representative of the proportion of women initially invited to peer review, but it seems that women are underrepresented in the peer review process, especially at editor level.

Jamie explained some fascinating insights she had obtained by interviewing a range of editors for their thoughts on gender. Many of the editors she spoke with appeared unaware of any gender bias and felt that their editorial boards were gender balanced (when in fact they were not!). So how widespread is the problem? And how can we fix it?

From an individual journal’s perspective, sadly we are mostly in the dark, as the majority of journals do not collect data on sex, let alone gender. But given the evidence so far that change is needed, how can journals help promote women’s equal participation in peer review – as authors, reviewers and editors?

An intense sharing of ideas followed, facilitated by group discussion and consensus-building, to see if we could agree which activities would have the most impact and which would be the most feasible. A first step could be for journals to actually collect gender statistics for authors, peer reviewers and editors. It’s also heartening to see that something as simple as suggesting to authors that they can help the journal improve the diversity of its reviewer pool by including women (as well as young scientists and members of other under-represented groups) among their “suggested reviewers” can have a positive effect.

We shall be taking back these, and other, ideas to Springer Nature for further discussion with colleagues. Thank you Dina and Jamie, and the facilitators Eleanor Hutchinson and Keti Glonti for a truly thought-provoking day. We look forward to seeing what we can collectively do to make a difference.

Increasing transparency in peer review

As part of Springer Nature’s plans to celebrate the theme of Peer Review Week 2017, “Transparency in peer review”, we organised an event for researchers to discuss what transparency in peer review means to them and ways this might be achieved. The event on September 15th was kindly hosted by University College London, with over 70 researchers attending, including students, post-docs and professors.

We kicked off with two talks from editors of the Nature Research journals on the publication processes at the Nature titles. Luke Fleet, Senior Editor at Nature Physics, set the scene by introducing the Nature Research portfolio and provided tips for how to select the right journal for submission – importantly, think about your audience – and how to prepare a submission. He described the editorial and peer review process at the Nature titles and the role of the Editor. Alicia Newton, Senior Editor at Nature Geoscience (pictured above), emphasised the key role of peer review, noting both its limitations and its benefits and focusing on ethical considerations. She also shared tips for how to peer review a manuscript for those starting out in peer review and flagged a new free course on peer review delivered by Nature Masterclasses.

Luke Fleet

The talks were followed by a panel discussion on transparency in peer review, with editors from Springer Nature, Alexia-Ileana Zaromytidou (Chief Editor at Nature Cell Biology), Andrew Cosgrove (Senior Editor at Genome Biology) and Elizabeth Moylan (Senior Editor for Research Integrity at BioMed Central), and academics Carolina Herrera (Senior Post-doctoral Fellow, Imperial College London) and Mete Atature (Professor of Physics, University of Cambridge). Together they represented the views of all “actors” in the publication process – publishers, editors, reviewers, authors and readers. The panel was moderated by Elisa De Ranieri (Head of Editorial Process and Data Analytics, Nature journals).

When exploring what transparency means in the context of peer review, it was clear from the start that for some, transparency is not necessarily the answer to all of peer review’s potential problems, because it cannot fully address implicit (or explicit) bias. This was a point Carolina, as an early-career researcher, felt strongly about: she thinks that the innovative work of more junior researchers might be subject to different evaluation than that of established investigators. Maybe double-blind peer review has a role to play here?

In contrast, for Mete, transparency in authorship is essential. He felt it was not possible to evaluate a piece of work in isolation from its context: who the authors are, what work they have done in the past, what equipment and materials they have to hand. Mete pushed the discussion beyond transparency to remind everyone that peer review is based on the willingness of the community to make it work as a constructive process that improves the literature, and thus ultimately it does not matter what precise model of peer review is adopted as long as the community is behind it. Andrew, too, reiterated that any initiative taken by journals needs to be the result of an interaction with the community of authors, reviewers and readers. Perhaps a “one-size-fits-all” approach is not going to meet the needs of different research communities.

Alexia pointed out that transparency means different things to different people, including releasing the reviewer reports alongside the paper either with or without reviewer identities. She mentioned that Nature and Nature Communications are experimenting in this direction. For Alexia, transparency also means opening up the editorial process, for example by providing detailed explanations in editorial decisions to authors, disclosing information about the expertise of reviewers, and providing feedback on decisions to reviewers as well, which is what Nature Cell Biology is doing.

Andrew explained that Genome Biology had started a trial on transparent peer review to coincide with Peer Review Week, in which the journal shares the reviewer reports and the authors’ responses alongside publication of the article. It is entirely voluntary for reviewers to reveal their identities, but the report content will be shown. For Andrew, transparency broadly means all actors in peer review becoming more open about what they are doing and when, as this might help to tackle issues such as fraud. Transparent peer review is already a feature of some journals’ processes at Springer Nature, including Nature Communications, which has offered this option for submissions since January 2016.

Elizabeth argued that the open peer review initiative in which reports are signed and accompany publication (as practised on 70 BMC journals) is the most transparent form of peer review. This makes editors and reviewers more accountable, and leads to more constructive reports. However, as Mete and Carolina pointed out, the additional responsibility, and the extra time reviewing would then demand, is not something that all reviewers will be comfortable taking on board. Elizabeth agreed that rates of acceptance differ between research fields, and acknowledged, from working with COPE (the Committee on Publication Ethics), that transparency in peer review is not linked to a particular model of peer review as such, but rather to the trust and willingness of those engaged in the process to act transparently: journals have clear policies, and individuals declare their conflicts of interest and respect the confidentiality of the process. COPE has a new flowchart on what to consider when asked to peer review, and revised ethical guidelines for peer reviewers.

Of course, there are also other ways in which we can increase transparency in publishing: by being open about the research process as a whole and promoting reproducibility. Innovations that are increasing transparency in this respect, as Andrew explained, include Registered Reports. This is an article format in which the rationale for a study and the proposed methodology – the “study protocol” – are pre-registered with the journal and submitted for peer review before the research takes place (and data are collected). If the reviewers are satisfied that the research question is well framed and the methodology is appropriate, then the Registered Report is accepted in principle, irrespective of the outcomes of the study. This helps reduce the publication bias towards publishing only interesting or positive outcomes.

In conclusion, it seems that the trend for increased transparency is set to stay, and the panel was confident that we are going to see more and more innovation in the near future.

We wish to thank Hide Kurebayashi and Andrew Fisher from University College London for their help in organising this event.

The three-year PhD program: good for students? Or too good to be true?

Calls to modernize the PhD to meet the demands of the job market are being answered by the introduction of a more streamlined three-year PhD program. But such changes are not necessarily in the best interests of students, say Alice Risely and Adam Cardilini

PhD students are the backbone of the research industry, often responsible for compiling precious datasets for their lab and learning the cutting-edge techniques required for analysis. But completing a PhD is hard, and getting harder as scientific standards creep steadily upwards. It takes current students over a year longer to publish their first scientific paper than it took students 30 years ago, because of the increasing data requirements of top journals. Across Europe and Australia, this is one reason why students are taking an average of four to six years (or longer) to complete their PhDs, despite candidature contracts usually being a maximum of four years and government scholarships lasting at most three and a half years.

Delays in completion reflect badly on universities, and can threaten future funding. They can also threaten the job prospects of graduates, who are increasingly expected to have excellent time and project management skills for careers outside academia. In an attempt to combat lagging completion times and increase employability of graduates, universities are redesigning the PhD by rolling out three-year PhD programs. These shorter programs are intended to provide increased structural support to students, whilst also promoting broader and more applied skills required by non-academic employers. The catch is that these PhDs must be completed within three years, unless the student faces project delays that were unequivocally beyond their control. But is the three-year PhD program really in the best interests of all, or even most, students?

It will be harder to get PhD extensions under the new model.


Ten top science career tips for 2017

Top tens are very much a theme of the last issue of Nature for 2016. They include images of the year, 10 people who made a mark in science this year, and a review of the year in science. Naturejobs also gets into the “listicle” spirit by trawling through a year of articles to bring you our ten top career tips (and a few more thrown in for good measure) for the coming year.

1. Want to learn how to design an experiment or analyse data? Training is there if you look.

Scientific irreproducibility — the inability to repeat others’ experiments and reach the same conclusion — is a growing concern.

Much blame is placed on weak experimental and analytical practices that cause researchers to inadvertently favour exciting hypotheses.

Monya Baker reports.

In a separate post for Naturejobs, Monya runs through some of the statistical tools she discovered as part of her research.

What makes a great peer reviewer? Tips from Nature Research editors

By Leonie Mueck (Senior Editor, Nature), Alicia Newton (Senior Editor, Nature Geoscience) and Sebastien Thuault (Senior Editor, Nature Neuroscience)

Peer review is at the heart of high-quality academic publishing, and every editor at Nature Research is grateful for the service of the reviewers who carefully scrutinise every research paper we publish. While SpotOn – taking place this Saturday 5 November in London – is exploring what peer review might look like in 2030, we recognise that many peer reviewers need advice and training now, too.

Here, we share a few tips for reviewers, novice and seasoned alike, that we collated with the help of other Nature Research editors.

What makes a helpful review?

It’s difficult to generalise what makes reviews particularly helpful, since each manuscript has different strengths and weaknesses. But the best reviews have two things in common:

  1. They substantiate every statement, be it about a weakness in the methodology or about the significance of the result for the scientific community, with detailed arguments.
  2. They maintain a constructive tone, even if there is a lot to criticise about the paper in question.

One core task of the reviewer is to make sure that the manuscript is technically sound and the claims are sufficiently supported by the presented data. This includes checking that the methodology is appropriate and that the reporting is transparent, so that other groups have enough information at hand to – in principle – repeat the experiments or analysis.

The editor will often recruit reviewers who are experts in a specific method employed in a paper. Hence it’s absolutely fine – often even intended – for a reviewer to thoroughly assess the technical correctness of only one aspect of the study. Reviewers can explicitly state the limits of their technical expertise in their report.

Reviewers’ second core task is to provide specific context on the significance of the results and interpretations reported in the paper. These comments should be well substantiated and justified so that the editor can assess whether the manuscript meets the editorial criteria of their journal.

Credit: Getty Images/iStockphoto

Questions to consider are:

  • How is the finding relevant to your community or your field more broadly?
  • Is this going to be one of the most influential papers in the field this year?
  • Does the paper represent a substantial technological achievement or an important community resource?
  • Are there broader implications for public policy, such as climate regulations or public health?

Saying no

When they ask you to review a paper, editors don’t expect you to say yes every time. If you’re busy and don’t think that you’ll do a good job in the allotted time, tell the editor. If possible, suggest a well-qualified colleague who could help. There may also be cases where it would be a conflict of interest for you to take on a review. Reasons to decline reviewing a paper include: you’re collaborating with one of the authors, you have a paper in direct competition with the work, or you have financial interests that could be viewed as influencing your report. If you are unsure whether something constitutes a conflict of interest, talk to the editor first.

How can I become a peer reviewer?

With an explosion in the number of papers published each year, there are plenty of opportunities to become a peer reviewer.

You can contact the editors of a journal that matches your expertise and let them know you are willing to help, or find them at a conference.

As a trainee, you could ask your adviser if you can review a paper under their supervision. And more senior scientists are likewise encouraged to ask their students and postdocs to contribute to a review, after checking with the editor.

Finally, maintaining a clear online record of who you are and what you do is key. Ensure that your homepage is up to date and includes keywords that describe what you do, ideally in English as well as your institution’s primary language. Services like Google Scholar and ORCID also help you maintain an easily discoverable record of what you’ve published.

Improving training


Is there training you wish you had received early in your career? Interested in ways to make reviewing easier? You can join the conversation at SpotOn 2016. You can still register here.

Resources that might help you include BioMed Central’s guidance for reviewers assessing papers in medical disciplines and Nature Research’s more general masterclass on peer reviewing.

You can leave comments below, or reach us on Twitter on @LeonieMueck or @g_ruber (Alicia Newton).

Peer review: Nature’s variations

By Heike Langenberg, Chief Editor, Nature Geoscience

The week from 19 to 25 September 2016 marks the second round of Peer Review Week with the theme of “Recognition for Review”. The topic is obviously close to our hearts at Nature Research: after all, peer review is much of what we do.

We greatly appreciate the role our reviewers play in the publication process, and we try to help them to convert the time and effort they spend on scrutinising and often improving our authors’ papers into professional recognition.

For their personal use—for example in job applications or career negotiations—reviewers have been able to download a certified record of all their reviewing activities across the Nature-branded titles for several years now. We hope that this information is helpful in our reviewers’ career development.

We also offer a free online subscription to a Nature-branded journal for one year to those who have either reviewed three or more manuscripts across all Nature-branded titles in a year, or to referees who have been nominated by our editors for the outstanding quality of their reviews.


But we are always looking into making peer review better and offering more choice. In response to popular demand in reader surveys, we have introduced the option of double-blind peer review. First offered at Nature Geoscience and Nature Climate Change in June 2013, the option has been available at all the Nature-branded journals since February 2015: authors can choose to be anonymous to referees, just as referees are usually anonymous to authors. Providing this option is now part of our routine workflow.

Uptake has been essentially stable since this option was extended to all Nature-branded journals. The proportion of authors who choose double-blind review for their own manuscript is substantially lower than the proportion who support the idea in reader surveys, and ignorance of the option does not seem to be the reason. In the social sciences, double-blind peer review is more widespread, and we note that acceptance is higher in journals with a significant social science component, such as Nature Climate Change, Nature Energy and Nature Human Behaviour, than in most other journals.

The next step in this particular direction could be triple-blind peer review: in this version, editors, too, do not know the authors’ identities when they record their initial impression of a paper. This offering is but a gleam in our eye at this stage, largely for administrative reasons, but we are thinking about it. Unconscious biases may lurk in everyone’s brain, even in the most conscientious editors’, and they shouldn’t make a difference. We would like to make sure that papers submitted to the Nature-branded journals are judged by their scientific content only. Nothing else.

Celebrating Peer Review Week at Springer Nature

By Steven Inchcoombe, Chief Publishing Officer, Springer Nature

Peer review is at the heart of the research process. Academics generously dedicate hours of their week to examine each other’s work, offer much-valued constructive criticism and improve the published science (or maths, or social science, etc.). Reviews take time, and because peer review is mostly anonymous, it is difficult for reviewers’ colleagues, publishers, institutions or funders to recognise it properly.

Of course, peer review has its faults. Regardless, it is the best system we have right now for maintaining high standards and accuracy. In an age when information is everywhere, plagiarism is sadly too common and a stamp of quality is highly valued, peer review is still celebrated as a kitemark for rigor.


To celebrate peer review, a group of organizations including Springer Nature is working collaboratively to plan a week of activities and events. Today marks the first day of Peer Review Week 2016. This year’s theme is Recognition for Review, exploring all aspects of how those participating in review activity – in publishing, grant review, conference submissions, promotion and tenure, and more – could be better recognized for their contribution.

At Springer Nature we’re constantly looking to improve our peer review systems, and to find new and better ways of recognizing peer reviewers for their hard work. Our existing methods of recognition might take the form of monetary reward in the case of monographs, or incentives such as free subscriptions or discounts on Article Processing Charges. The methods of recognition researchers most commonly ask for are those which simply acknowledge the name of the reviewer, as the process is so often anonymous.

In a recent survey completed by 3,886 of our reviewers, only 26% agreed that they would like to be paid; many expressed concerns that a monetary reward would introduce bias into the process. Meanwhile, 67% believed they should receive non-monetary compensation, and, rather inspirationally, 68% agreed that the knowledge of the contribution they have made to the body of scientific research is enough compensation for their time as a reviewer, confirming what we’ve always thought: academics are a generous lot.

We believe their work shouldn’t go unnoticed. Many of our journals publish lists celebrating our most frequent reviewers. At BioMed Central, 70 of our journals offer open peer review, encouraging transparency. Open peer review is also a valuable educational resource for training future peer reviewers. In the last year, BioMed Central have published over 40,000 open peer review reports, allowing 24,000 peer reviewers to be recognized for their contribution to research.

Nature Editor-in-Chief Philip Campbell writes a letter of thanks to anyone who has peer-reviewed three or more papers for the combined Nature Research portfolio. Since 2015, all of the Nature-branded research journals have offered authors the option to choose double-blind peer review. In 2016 we have additionally piloted the following initiatives: optional publication of peer-reviewer reports in Nature Communications; optional publication of peer-reviewer identities in Nature; and optional transfer of peer-reviewer reports and identities from Nature Communications to other selected Springer Nature academic journals.

Another way we’re experimenting is through partnering with Publons, a network of over 75,000 experts showing their commitment to speeding up science through superior peer review. Publons is a free service for academics that lets you effortlessly track, verify and showcase your peer review activity across the world’s journals. This month we’ve started a Publons pilot across 13 of our journals. We’re also proud sponsors of their Sentinels of Science Awards which celebrate frequent peer reviewers.

We have two more pilots launching this week, experimenting with two very different types of peer review: one, from BioMed Central, explores removing potential bias from the system; the other, from Springer, experiments with a new type of recognition. Watch this space for more information!

And finally, to all our reviewers around the world: in case we haven’t said it recently, thank you from the team at Springer Nature.

How not to respond to reviewers: Eight simple tips

Responding to reviewer reports is a key part of publishing academic work in peer reviewed journals. But if you’ve received mixed reviews of a paper or are publishing for the first time, where do you start?

This piece was republished from Sophie Lewis’ blog.

My first attempt at publishing a paper was a breeze. A collaborator was asked to contribute to a special issue and offered me the opportunity to lead the paper. I was a PhD student at the time, and spent two months visiting her lab overseas and writing. By the end of my visit, I’d carved out a draft that I left behind for comments. After a bunch of emails and several rounds of revisions over the next month, we were ready to submit.

Flickr/AJ Cann, CC BY-SA
