A retraction resulting from cell line contamination

After nine years in print, Nature Methods today published its first retraction, one that could have been prevented by cell line authentication. What does this mean for journal-mandated cell line testing?

Two-photon fluorescence image of a live primary gliomasphere from the retracted manuscript.

In a Nature Methods paper published in 2010, Ivan Radovanovic and colleagues described a method to isolate cancer-initiating cells in human glioma without the need for molecular markers. Based on morphology and on a green autofluorescence signal, the authors reported that they could use FACS to sort cancer-initiating cells from gliomasphere cultures (which had been derived from primary tumors). They also detected autofluorescence in cells from fresh glioma specimens, but at a much lower level.

Cells from the autofluorescent fraction could self-renew clonogenically in vitro and were tumorigenic when transplanted into mouse brains, the authors reported, and in both cases they performed better than non-autofluorescent cells from the rest of the culture or tissue. The origin of this autofluorescent signal was not understood at the time; the authors speculated that it might be related to the unique metabolism of the cancer-initiating cells.

It turns out that most of the primary gliomasphere lines (7 out of 10) were contaminated with HEK cells expressing GFP, leading to retraction of the paper. Using short-tandem-repeat (STR) profiling of two of the lines, the authors determined that the contamination occurred over the course of culture in the lab: samples taken from early passages matched the original tissue from which the lines were derived, but later passages no longer did.
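
For readers unfamiliar with how such comparisons are scored, here is a minimal, purely illustrative sketch of the idea behind STR-based authentication: profiles are compared locus by locus and the fraction of shared alleles is scored against a threshold (a value of about 80% is commonly cited for declaring a match). The loci, allele calls and verdict logic below are hypothetical illustrations, not data from the retracted paper, and several scoring variants exist.

```python
# Minimal sketch: compare two STR profiles by the fraction of shared alleles.
# The loci and allele calls are illustrative, not real data; several scoring
# variants exist, and the ~80% cutoff is the commonly cited match threshold.

def percent_match(query: dict, reference: dict) -> float:
    """Tanabe-style score: 2 * shared alleles / (alleles in query + alleles in reference) * 100."""
    shared = total = 0
    for locus in set(query) | set(reference):
        q = set(query.get(locus, []))
        r = set(reference.get(locus, []))
        shared += len(q & r)
        total += len(q) + len(r)
    return 200.0 * shared / total if total else 0.0

# Hypothetical profiles: early- and late-passage cultures versus the original tumor tissue.
original_tissue = {"D5S818": [11, 12], "TH01": [6, 9.3], "TPOX": [8, 11], "vWA": [16, 18]}
early_passage   = {"D5S818": [11, 12], "TH01": [6, 9.3], "TPOX": [8, 11], "vWA": [16, 18]}
late_passage    = {"D5S818": [16, 17], "TH01": [7, 9.3], "TPOX": [8, 8],  "vWA": [16, 19]}

for label, profile in [("early passage", early_passage), ("late passage", late_passage)]:
    score = percent_match(profile, original_tissue)
    verdict = "match" if score >= 80 else "MISMATCH - possible contamination or misidentification"
    print(f"{label}: {score:.0f}% shared alleles -> {verdict}")
```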

It is hardly surprising that the first retraction in Nature Methods is due to cell line contamination, a well-acknowledged problem. A 2009 Editorial in Nature pointed to the disturbing results of cell line testing by repositories, which indicated that 18-36% of cultures were misidentified. It called on repositories to authenticate all of their lines and on major funders to provide testing support to grantees. With that support in place, funders could require cell line validation as a condition of continued funding, and Nature would require that all immortalized lines used in a paper be verified before publication. Unfortunately, it is now 2013 and we are still far from this goal.

But progress is being made. Community-based efforts are alerting researchers to this problem and providing resources to help them avoid being misled by erroneous results caused by cell line contamination. A 2012 Correspondence in Nature by John R. Masters, on behalf of the International Cell Line Authentication Committee (ICLAC), pointed to a number of resources available to researchers.

Please go to the ICLAC website for the most recent version of each of these documents.

Meanwhile, in early 2013, at the publication end of the process, the Nature journals published coordinated editorials announcing a reproducibility initiative and stating that “…authors will need to […] provide precise characterization of key reagents that may be subject to biological variability, such as cell lines and antibodies.” In practice, the Nature journals currently require all authors to state whether or not testing was done, but they mandate testing only in cases where cell identity is central to the reported results.

Advocates of mandatory testing have cogent arguments for a uniform policy. First, it avoids sending a confusing message; second, without testing, researchers cannot be certain that misidentified cells or mycoplasma contamination are not affecting their results; and finally, publishing inaccurate species and tissue designations for misidentified cell lines continues to propagate misinformation.

In the work described in the retracted 2010 manuscript from Radovanovic and colleagues, mandatory testing would certainly have been beneficial. However, for much, and probably most, of the work published in Nature Methods, testing would have had no impact on the reported results. For example, in 2011 and 2012 we published at least 17 manuscripts reporting new fluorescence microscopy methods that used imaging data from cell lines to assess the performance of the techniques in measuring fundamental cell properties, such as the appearance and width of actin or microtubule filaments, membrane vesicles or other universal cellular structures. Cell line identity, and even mycoplasma contamination, would not affect the efficacy or conclusions of these measurements. The same situation exists for the validation and testing of many methods in other research disciplines, such as proteomics, genomics and biophysics.

Even though these labs should be performing cell line validation and mycoplasma testing as a matter of course, as part of proper cell culture practice, requiring such testing as a condition of publication for all of these studies is unjustified.

But clearly even our most recent efforts at improving compliance with good testing practice will not be sufficient to eliminate cell contamination as a problem in work published in Nature journals. A possible solution would be to require testing by default while permitting authors to argue why, in their case, testing is clearly unnecessary. Editors (possibly with reviewer input) would be the final arbiters and would need to ensure that, although the lines must still be named and sourced, no species or tissue identifiers are included in the manuscript in the absence of proper validation.

Technology development labs, or others that use cell lines only for purposes distinct from biological investigation, could continue to forgo testing. But any lab that might use its cell lines to obtain biological results would know that it should institute a proper testing regimen or risk its work not being publishable in a Nature journal.

At this point, this is only an idea based on our experience at Nature Methods. We encourage readers to comment and let us know what they think.

Promoting shared hardware design

Now is the time to move open-source hardware development into basic research labs.

Having convinced airport security to let him carry a suspicious-looking briefcase packed with hardware on board, Pete Pitrone, an imaging specialist in the group of Pavel Tomancak, headed for South Africa. His aim was to introduce young students to the parts, many manufactured at his own institute, in the hope that they would assemble them into a sophisticated working microscope. It was a symbolic step meant to demonstrate the potential for building new tools in laboratories and beyond.

The OpenSPIM microscope-in-a-briefcase.
Credit: Vineeth Surendranath

Manufacturing has gained an appealing image of late. In his State of the Union address in February, US President Barack Obama announced the creation of three manufacturing hubs modeled after an institute in Youngstown, Ohio. His comments referenced the ability to innovate quickly with additive manufacturing, which relies on digital design and 3D printing: relatively recent developments that have changed the way physical objects and devices are made and have helped to open up the design process.

Taking advantage of these developments at the grassroots level is an enthusiastic crop of do-it-yourself ‘builders’ or ‘hackers’ who are promoting a culture of shared design and open innovation, and who have spawned a movement toward open-source hardware. By analogy with open-source software, open-source hardware licenses prevent the patenting of hardware designs or physical objects and require comprehensive, freely accessible designs and instructions so that anyone can build the same device. One working definition and a helpful list of considerations are provided by the Open Source Hardware Association.

The advent of cheap 3D printers such as the RepRap and MakerBot has made it easy, cheap and relatively fast to turn digital designs into objects. 3D printing involves the layered deposition of a heated polymer through a precisely positioned moving extruder. Open-source electronics platforms such as the Arduino and Raspberry Pi, which let software control hardware, are also making it easier to build sophisticated devices.
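
As a flavor of what “software controlling hardware” looks like in practice, here is a minimal sketch of a Python script driving an Arduino-class board over a USB serial link. It assumes the pyserial package is installed and that the board runs firmware accepting simple text commands; the port name and the command strings are hypothetical placeholders, not part of any particular open-source design.

```python
# Minimal sketch of software controlling lab hardware over a serial link.
# Assumes pyserial (pip install pyserial) and a microcontroller running
# firmware that understands simple newline-terminated text commands.
import time
import serial

PORT = "/dev/ttyACM0"  # typical Linux device name for an Arduino; adjust for your system

with serial.Serial(PORT, baudrate=115200, timeout=2) as board:
    time.sleep(2)                      # give the board a moment to reset after the port opens
    board.write(b"LASER ON\n")         # hypothetical command: open the illumination shutter
    board.write(b"STAGE MOVE 100\n")   # hypothetical command: step the sample stage 100 units
    reply = board.readline().decode().strip()
    print("board replied:", reply)
    board.write(b"LASER OFF\n")
```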

In our July editorial, we argue that basic research shares the values of openness and reproducibility embodied by open-source hardware. Beyond making research tools easier and cheaper to build and replicate, developing devices in an open-source environment can actually speed innovation by encouraging community feedback early in development. This can make the work that goes into extensive documentation and robustness testing worthwhile for the individual research group.

Open-source differs from traditional design in its focus:

  • open-source tools are designed specifically so that others can build and modify them
  • open-source tools must include extensive documentation, including parts lists, any related software code (also published as open-source) and design files
  • the focus on reproducibility encourages simple, streamlined design
  • modularity and integration are ultimate goals

Many in the design field have said that this focus actually improves designs and promotes the broadest uptake.

Applied fields such as photovoltaics and hardware infrastructure for cloud computing are investing in open-source approaches, but there are currently very few examples of open-source hardware in basic research. OpenPCR publishes the designs for a thermal cycler that can be purchased as an inexpensive kit and assembled by hand. Some labs simply use 3D printing to generate teaching models and basic equipment such as test tube racks (e.g., the DeRisi lab).


Pitrone teaching South African students how to assemble an OpenSPIM scope.
Credit: P. Tomancak

The July issue of Nature Methods includes two leading examples of academic efforts in this direction, OpenSPIM and OpenSpinMicroscopy, both of which include detailed designs for light-sheet microscopes that can take 3D movies of living things. Parts for the OpenSPIM scope were what had been hiding in the briefcase en route to an EMBO course organized by Musa Mhlanga along with Freddy Frischknecht and Jost Enninga in Pretoria. A highlight, according to Pavel Tomancak, was watching talented high school students assemble and successfully operate the scope in under two hours. The availability of design details and the focus on making the scope easy to build are a model of accessibility that has ramifications for teaching and outreach, encouraging many others to play around with the hardware.

Hardware innovation is a critical part of the technological advances that drive science. To carry out experiments, many research laboratories need tools that are simply not available, cost too much or would take too long to develop commercially. An open-source approach can lower the barriers to adopting, disseminating and ultimately improving tools for research.

 

Serial dilution woes

A recent report adds further evidence that serial dilution and tip-based dispensing could be a source of irreproducibility, particularly in pharmacological assays.

A few days after I wrote the methagora entry below about our efforts to improve the reproducibility of published research, somebody pointed out a paper published last week in PLOS ONE that compared results from automated serial dilution and plastic tip-based dispensing on a robotic sample processor with results obtained using an acoustic liquid dispenser. The latter technique uses sound for noncontact liquid dispensing and is implemented in instruments such as those sold by Labcyte Inc., the employer of one of the authors on the manuscript. The dose-response data comparing the two liquid handling methods, however, were previously published in AstraZeneca patents on pyrimidine derivatives for inhibiting Eph receptors. The AstraZeneca results showed that, for the 14 reported compounds, activities obtained via acoustic dispensing were 1.5 to 276.5 times higher than those obtained via serial dilution and tip-based dispensing.
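
As a rough illustration of how such fold-differences in activity can be quantified, the sketch below fits a standard four-parameter logistic (Hill) model to dose-response data from two dispensing methods and compares the fitted IC50 values. The data are synthetic, generated only to show the calculation; they are not taken from the AstraZeneca patents or the PLOS ONE paper, and the script assumes numpy and scipy are available.

```python
# Illustrative sketch: quantify a fold-shift in apparent potency by fitting a
# four-parameter logistic curve to each dispensing method's dose-response data
# and taking the ratio of the fitted IC50 values. All data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic model; signal falls from 'top' to 'bottom' as conc rises."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.logspace(-3, 2, 10)  # concentrations in uM (synthetic series)

# Synthetic "% of control signal" readouts: the acoustic series behaves as if the
# compound were ~20-fold more potent than the tip-based series suggests.
rng = np.random.default_rng(0)
acoustic = four_pl(conc, 0, 100, 0.05, 1.0) + rng.normal(0, 2, conc.size)
tip_based = four_pl(conc, 0, 100, 1.0, 1.0) + rng.normal(0, 2, conc.size)

fitted_ic50 = {}
for label, y in [("acoustic", acoustic), ("tip-based", tip_based)]:
    params, _ = curve_fit(four_pl, conc, y, p0=[0, 100, 0.1, 1.0],
                          bounds=([-10, 50, 1e-6, 0.1], [10, 150, 100, 5]))
    fitted_ic50[label] = params[2]
    print(f"{label:>10}: IC50 = {params[2]:.3f} uM")

print(f"fold shift in apparent potency: {fitted_ic50['tip-based'] / fitted_ic50['acoustic']:.1f}x")
```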

What the PLOS ONE authors added to this story, besides promoting the research results to the press, was the computation of pharmacophores based solely on the two sets of activity data. The pharmacophore computed from the acoustic data was structurally similar to pharmacophores computed from X-ray crystallography data (for example, all contained hydrophobic binding features) and was able to predict the activity of subsequent compounds. In contrast, the pharmacophore computed from the serial dilution and tip-based dispensing data was very different, contained no hydrophobic features, and was non-predictive.

What should one make of this? Well, it seems plausible that compounds with hydrophobic domains could be affected by serial dilution and dispensing through plastic tips via adsorptive or other effects. As one person commenting on the PLOS ONE paper notes, such effects have been well documented, and proper analytical technique calls for experiments to detect them.
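
A back-of-the-envelope calculation shows how even a modest per-transfer loss compounds over a serial dilution series. The per-step loss fractions and the eight-step series below are purely illustrative assumptions, not values taken from the PLOS ONE paper or the AstraZeneca patents.

```python
# Back-of-the-envelope sketch: if a fixed fraction of a hydrophobic compound sticks
# to the plastic tip at every transfer in a serial dilution, the loss compounds, so
# the most dilute wells deviate most from their nominal concentrations and the fitted
# dose-response curve shifts toward apparently lower potency. Loss fractions are assumed.

n_steps = 8  # number of serial transfers in the dilution series (assumed)

for loss_per_step in (0.05, 0.40):  # assumed fraction lost to the tip at each transfer
    # After k transfers the well holds (1 - loss)^k of what the nominal scheme assumes.
    fold_error_at_bottom = 1.0 / (1.0 - loss_per_step) ** n_steps
    print(f"{loss_per_step:.0%} loss per transfer x {n_steps} transfers -> "
          f"lowest well is {fold_error_at_bottom:.1f}-fold below its nominal concentration")
```

At 5% loss per transfer the lowest well ends up only about 1.5-fold more dilute than intended, but at 40% loss it is roughly 60-fold off, which is why compound-dependent adsorption can distort a dose-response curve so dramatically.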

This all reminds me of the marketing for HP’s high-performance dispenser, which also forgoes serial dilution and instead uses inkjet printing technology to dispense undiluted reagents without contact. HP promotes the increased reliability of this technique for generating dose-response curves, but it does not highlight the kind of effect documented by the authors of the PLOS ONE paper.

If these results are indicative of the differences between these two types of liquid dispensing, it seems that drug companies must be aware of them and are adapting their assays and protocols as necessary. But even if that is the case, there appears to be little evidence that academic researchers are worried about this issue.

In theory, one can certainly see the appeal of contactless dispensing, but more hard data are needed to draw firm conclusions. This will require extensive side-by-side testing of different sample dispensing methods with many different compounds.

At a minimum, researchers need to be cognizant of this potential problem and should report how reagents were dispensed when presenting results from these kinds of pharmacological assays. Better yet, they should repeat key experiments on different days and with different equipment.

Update: I just found out that Derek Lowe has a nice post about this paper over at In the Pipeline.

Reporting standards to enhance article reproducibility

Beginning May 1, Nature Methods will require authors of manuscripts being sent back for peer review to fill out a checklist disclosing technical and statistical information about their submission.

The May Editorial briefly describes why we are using this checklist and provides some details of what is included. Authors can find the checklist that Nature Methods will be using at https://www.nature.com/nmeth/pdf/sm_checklist.pdf, and there is a link to it on the journal homepage. Our checklist is identical to that of most of the other Nature journals except for an added item asking authors to “Identify all custom software or scripts that were required to implement the methodology being described and where in the procedures each was used.” Based on feedback we have received, missing software or scripts are the items most often mentioned by people commenting on challenges in reproducing a method we have published. This reporting requirement is an important step toward addressing this deficiency.

We expect that the addition of these reporting requirements will elicit some grumbling from authors. But based on the experience of Nature Neuroscience, which has been requiring authors to fill out a methods checklist before even the first round of review, we expect that authors will come to appreciate the role it serves.

The checklist is only one part of the efforts the Nature journals are making to improve reproducibility. The other journals are also removing formal limits on the length of the methods section. But since Nature Methods has long had no limit on the length of its Methods section, the checklist is the most prominent change for us and our authors.

The May issue also contains other articles relevant to reproducibility. The Correspondence section has a discussion about analyzing the reproducibility of animal experiments. And the May Technology Feature discusses reproducibility in quantitative PCR, a methodology that has suffered from serious problems in this regard due to poor experimental technique and reporting.

For those not tired of reproducibility at this point, Nature also has a Special Focus on Challenges in irreproducible research.

As has been said in the editorials on the subject, this is only a first step toward improving the reproducibility of our published research and we welcome feedback from the community on our efforts.