On 21 July 2020, editors from various Nature journals who handle 2D materials research gathered in an online discussion with Joshua Lui (UC Riverside), Cecilia Mattevi (Imperial College London) and Matthew Yankowitz (University of Washington) to discuss the reproducibility issues currently faced by the 2D materials research community.
Since the isolation of graphene in 2004, followed by other 2D materials such as the transition metal dichalcogenides, research in 2D materials has been flourishing. Yet as new materials are discovered and characterized, it is increasingly clear to the community that one of the key growing pains of this field is the issue of reproducibility.
2D materials are notoriously sensitive to growth conditions, processing techniques and even sustained measurement over multiple thermal cycles. A huge range of factors can affect the physical properties of a 2D heterostructure, including twist angle, strain, defects, external crystal alignment and pressure, all of which are nearly impossible to replicate between samples.
The event kicked off with a presentation by Joshua Lui, who spoke about his work on optical studies of twisted bilayer systems. He showed multiple data sets indicating moiré trions and commented that, although the results were qualitatively similar, the detailed features varied even between different parts of the same sample.
Then came a talk by Cecilia Mattevi on growing 2D crystals. In the growth process, the specific reactor architecture is important, and often equipment is “home-made” by a research group. She suggested that using standardised, commercially available equipment would reduce variability. Another important consideration is the type of raw ingredients (precursors) used, and a deep understanding of how these affect the samples remains an open question.
This was followed by Matthew Yankowitz, who spoke about electronic measurements on 2D heterostructures. He emphasised that although some results may be difficult to reproduce between samples, the signatures of some phenomena (such as the quantum anomalous Hall effect) are so clear that seeing them even in a single device constitutes a useful result. He even commented that in some cases he could accurately guess which group the measurements were from just by looking at a plot of the data!
After the talks, a lively conversation moderated by Giulia Pacchioni (Nature Reviews Materials) began.
Unsurprisingly, a hot topic of discussion was twisted bilayers. It emerged that not only is it difficult to know how many samples out of a batch are suitably twisted to display moiré characteristics, but even measuring the precise twist angle remains a big challenge. Samples are also inhomogeneous, and different parts of the same sample will often show variable results. It is also difficult to compare results from different groups, as there is no standard technique for characterising the interface between two layers.
So what can the community do?
Lui calls for increased dialogue between experimentalists, theorists and journal editors. Theorists often treat quantities such as the twist angle as trivial inputs, when in practice they are almost impossible to determine precisely. Not only can this make collaboration difficult, but it can also lead to misunderstandings during the peer review process.
Mattevi proposes that information about the exact precursors used in a growth process be included in the methods section of a paper. This would enable easier comparison between different samples and encourage more standardised characterisation methods.
Yankowitz advocates for providing detailed supplementary information with each paper, including measurements between *all* sets of contacts in a given bilayer device in order to characterise the twist-angle homogeneity – not just those showing the interesting results.
Reproducibility is at the heart of science, but in this budding field, insisting on strictly reproducible results over a statistically significant number of devices or samples would prevent many results from being published and shared. Researchers and editors both agree that this would greatly slow down the growth of the field.
Instead, as editors, we commit to being more mindful of the issue of reproducibility and to encouraging greater transparency when results are presented. It’s okay if only one device worked, but please do tell the reader (and also mention how many you tried!). If the single working device subsequently broke after a few days of measurement, that too is valuable information for the next researcher reading the paper and trying to replicate something similar.
We hope that by increasing transparency in the presentation of results and encouraging honest dialogue between research groups, we can support researchers as they work on these tricky, yet fascinating, new materials.