Panel: reproducibility issues in 2D heterostructure research

On 21 July 2020, editors from various Nature journals who handle 2D materials research gathered in an online discussion with Joshua Lui (UC Riverside), Cecilia Mattevi (Imperial College London) and Matthew Yankowitz (University of Washington) to discuss the reproducibility issues currently faced by the 2D materials research community.

Since the isolation of graphene in 2004, and subsequently of other 2D materials such as the transition metal dichalcogenides, research in 2D materials has flourished. Yet as new materials are discovered and characterized, it is increasingly clear to the community that one of the key growing pains of this field is reproducibility.

2D materials are notoriously sensitive to growth conditions, processing techniques and even sustained measurement over multiple thermal cycles. A huge range of factors can affect the physical properties of a 2D heterostructure, including twist angle, strain, defects, external crystal alignment and pressure, all of which are nearly impossible to replicate between samples.

The event kicked off with a presentation by Joshua Lui, who spoke about his work on optical studies of twisted bilayer systems. He showed multiple data sets indicating moiré trions and commented that although the results were qualitatively similar, the detailed features varied even between different parts of the same sample.

Then came a talk by Cecilia Mattevi on growing 2D crystals. In the growth process, the specific reactor architecture is important, and equipment is often “home-made” by a research group. She suggested that using standardised, commercially available equipment would reduce variability. Another important consideration is the choice of raw ingredients (precursors), and a deep understanding of how these affect the samples remains an open question.

This was followed by Matthew Yankowitz, who spoke about electronic measurements on 2D heterostructures. He emphasised that although some results may be difficult to reproduce between samples, the signatures of some phenomena (such as the quantum anomalous Hall effect) are so clear that observing them even in a single device constitutes a useful result. He even commented that in some cases he could accurately guess which group the measurements were from just by looking at a plot of the data!

After the talks, a lively conversation moderated by Giulia Pacchioni (Nature Reviews Materials) began. 

Unsurprisingly, a hot topic of discussion was twisted bilayers. It emerged that not only is it difficult to know how many samples out of a batch are suitably twisted to display moiré characteristics, but even measuring the precise twist angle remains a big challenge. Samples are also inhomogeneous, and different parts of the same sample will often show variable results. It is also difficult to compare results from different groups, as there is no standard technique for characterising the interface between two layers.

So what can the community do?

Lui calls for increased dialogue between experimentalists, theorists and journal editors. Theorists often treat quantities like the twist angle as trivial inputs, when in practice they are almost impossible to determine precisely. Not only can this make collaboration difficult, but it can also lead to misunderstandings during the peer review process.

Mattevi proposes that information about the exact precursors used in a growth process be included in the methods section of a paper. This would enable easier comparison between different samples and encourage more standardised characterisation methods.

Yankowitz advocates for providing detailed supplementary information with each paper, including measurements between *all* sets of contacts in a given bilayer device in order to characterise the twist angle homogeneity – not just those with the interesting results.  

Reproducibility is at the heart of science, but in this budding field, insisting on strictly reproducible results across a statistically significant number of devices or samples would prevent many results from being published and shared. Researchers and editors alike agree that this would greatly slow the growth of the field.

Instead, as editors, we commit to being more mindful of the issue of reproducibility and to encouraging greater transparency when results are presented. It’s okay if only one device worked, but please do tell the reader (and also mention how many you tried!). If the single working device subsequently broke after a few days of measurement, that too is precious information for the next researcher reading the paper and trying to replicate something similar.

We hope that by increasing transparency in the presentation of results and encouraging honest dialogue between research groups, we can support researchers as they work on these tricky, yet fascinating, new materials. 

 

Journals test the Materials Design Analysis Reporting (MDAR) checklist

Reproduced from summary presentation, available at https://osf.io/znq64 

This guest blog comes from Sowmya Swaminathan, Head of Editorial Policy and Research Integrity at Nature Research.

We are pleased to share results from a pilot with 13 journals that tested the Materials Design Analysis Reporting (MDAR) checklist, a minimum standards reporting checklist for the life sciences.

The MDAR framework was designed to provide a harmonizing principle for the reporting requirements currently in use at various journals. It is meant to be flexible enough to adapt to various journal policies, and provides two levels of reporting stringency: a minimum recommendation and a best-practice recommendation. The checklist was designed as an optional instrument to help adoption of this reporting framework. A statement of task can be found here.

We are very grateful to the 13 participating journals and platforms – BMC Microbiology, Ecology & Evolution, eLife, EMBO journals, Epigenetics, F1000R, Molecular Cancer Therapeutics, Microbiology Open, PeerJ, PLOS Biology, PNAS, Science, Scientific Reports – for testing the MDAR checklist.

The pilot had two main goals: first, to understand whether the checklist was accessible and useful for authors and editors in complying with journal policy; and second, to understand whether the elements within the checklist are clearly conveyed so as to help fulfil policy expectations. In total, 211 authors across participating journals tested the checklist and provided their feedback. Participating journal teams screened 289 manuscripts using the checklist, and 89 of these manuscripts were subjected to dual assessment by independent reviewers, which allowed us to determine inter-assessor agreement, and thus the clarity of specific items on the checklist.

We are encouraged to find that about 80% of authors and editors found the checklist useful to different degrees and that the majority of participating editors reported only a small increase in manuscript processing time as a result of using the checklist. While participating authors and editors did not identify major gaps in the requirements covered in the checklist, the feedback from authors and editors and the inter-assessor agreement results have given us a better understanding of areas in the checklist and elaboration document where the language is unclear and needs to be improved.

We are making the draft MDAR Framework, MDAR Checklist and MDAR Elaboration document and the pilot datasets available here. This work was also recently presented at the NASEM workshop on Enhancing Scientific Reproducibility through Transparent Reporting (slides available here).

We are currently gathering feedback on the MDAR framework, checklist and elaboration document from a broad group of about 40-50 experts on transparency and reproducibility. Based on the feedback from the pilot and the expert input, we anticipate revising all three MDAR outputs by the end of 2019.

We are sharing this update on the work of the Minimum Standards Working Group through coordinated posts on member platforms. If you would like more information about our work and progress, please contact Veronique Kiermer and Sowmya Swaminathan.

On behalf of the “minimal standards” working group:
Karen Chambers (Wiley)
Andy Collings (eLife)
Chris Graf (Wiley)
Veronique Kiermer (Public Library of Science; vkiermer@plos.org)
David Mellor (Center for Open Science)
Malcolm Macleod (University of Edinburgh)
Sowmya Swaminathan (Nature Research/Springer Nature; s.swaminathan@us.nature.com)
Deborah Sweet (Cell Press/Elsevier)
Valda Vinson (Science/AAAS)

TechBlog: Git: The reproducibility tool scientists love to hate

{credit}PLOS Comput Biol, 12, e1004668 (2016){/credit}

Early in his graduate career, John Blischak found himself creating figures for his advisor’s grant application.

Blischak was using the programming language R to generate the figures, and as he iterated and optimized his code, he ran into a familiar problem: Determined not to lose his work, he gave each new version a different filename — analysis_1, analysis_2, and so on, for instance — but failed to document how they had evolved.

“I had no idea what had changed between them,” says Blischak, who now is a postdoctoral scholar at the University of Chicago. “If the professor were to come back and say, ‘which version did you use to create this figure?’ I would have had no idea.”

Later, while attending a workshop on basic research computing skills, he discovered a better approach: Git.
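Git solves exactly the problem Blischak describes: instead of multiplying filenames (analysis_1, analysis_2, …), you keep one file and record each revision as a commit with a message explaining what changed. A minimal sketch of that workflow is below; the filenames and commit messages are illustrative, not taken from Blischak's actual project.

```shell
#!/bin/sh
# Minimal sketch: track revisions of one analysis script with Git
# instead of saving analysis_1.R, analysis_2.R, and so on.
set -e
dir=$(mktemp -d)
cd "$dir"

git init -q
git config user.email "researcher@example.com"   # local config for this demo repo
git config user.name  "Example Researcher"

# First draft of the figure code (illustrative R one-liner)
echo 'plot(rnorm(100))' > analysis.R
git add analysis.R
git commit -q -m "First draft of grant figure"

# Revise the same file; the old version stays recoverable in history
echo 'plot(rnorm(100), col = "blue")' > analysis.R
git commit -q -am "Use blue points in figure"

# "Which version made this figure?" is now answered by the history:
git log --oneline
```

Each `git log` entry pairs a revision with a human-readable message, so the question "which version did you use to create this figure?" can be answered months later with `git log` and `git show`, rather than by guessing among numbered filenames.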

Continue reading

Walking the walk: how the scientific community is embracing open data

Open data is the new normal, says Anastasia Greenberg.


The 2017 Better Science through Better Data event in London, UK, hosted by Springer Nature and Wellcome, was a full-day showcase of emerging open data practices, tools, strategies and policies. Among the potential benefits of open data are replicability, reproducibility and reusability. While open data is a relatively recent movement, some evidence suggests that it does indeed increase reproducibility.

Continue reading

Five things you can do today to make tomorrow’s research open

Early career researchers have an essential role to play in the move towards open research, says #SciData17 writing competition winner Sarah Lemprière.


Continue reading

Turning scientific scrutiny on science itself

A proactive approach could help researchers contribute to solving many of the problems they encounter in academia

Naturejobs journalism competition winner Jiska van der Reest


Continue reading

To improve reproducibility, listen to graduate students and postdocs

The National Institutes of Health (NIH) should implement a national exit interview portal to collect feedback from mentees on their experiences.

Funding agencies should not penalize poor performers; instead they should reward good mentorship, says Ahmed Alkhateeb

Continue reading

Ask not what you can do for open data; ask what open data can do for you

Mathias Astell, marketing manager for Scientific Data and Scientific Reports, outlines the benefits of open research data and provides some tips and tools researchers can use to make their data more open.

It has been shown that research articles receive more citations when their underlying data are openly linked to them. With this in mind, it’s time to consider not just the ideological reasons for making research data open, but the selfish benefits of openly sharing data that all researchers can (and should) be taking advantage of.


This infographic can be downloaded under a CC-BY licence here

And as an increasing number of funders mandate data sharing, and publishers start implementing more consistent data policies at their journals, it is worth seriously considering how and why you should make the research data you generate more openly available. Continue reading

TechBlog: My digital toolbox: Lorena Barba

Lorena Barba; © Eleanor Kaufman 2013.

Lorena Barba, a mechanical and aerospace engineer at George Washington University in Washington, DC, has long championed research reproducibility. In January, she traveled to Chile to run a weeklong course on reproducible research computing; the month before, she was awarded a 2016 Leamer-Rosenthal Prize, which celebrates those “working to forward the values of openness and transparency in research.” Here, she talks about flying snakes, “repro-packs,” and copyright.

Continue reading

#scidata16: Boost research and avoid embarrassing retractions by working openly and reproducibly

Experiments fail to be reproduced, research data from others is hard to come by, and steps between data and figure are described as ‘here, a miracle happens’.

Speakers at the Publishing Better Science through Better Data (#scidata16) conference addressed these issues and more.

Publishing Better Science through Better Data journalism competition winner Réka Nagy.

Most research happens behind closed doors, and the results can only be gleaned once they’ve been published. The raw data that lead to results, however, are rarely made public, and the steps taken to get from data to figures in a publication are not always clear, which has contributed to the reproducibility crisis currently facing research. It’s clear that something needs to be done to address this, and the ever-inventive collective mind of science is finding solutions.


The steps taken to get from data to figures in a publication are not always clear {credit}SlvrKy/Wikimedia Commons CC-BY-SA-4.0 {/credit}

Continue reading