Some resources and tools related to noncoding RNAs

In ‘Meet some code-breakers of noncoding RNAs,’ the technology feature in the February issue of Nature Methods, we speak with a few scientists about the path ahead in methods for characterizing noncoding RNAs.

With their input, we compiled a list of some resources and tools in this field.

We are glad to include additional resources. Please comment on this page. You can also tweet us: @naturemethods or @metricausa

Some resources and tools related to noncoding RNAs:

Resource / Description / Publication

DASHR. Database of small human noncoding RNAs. Leung, Y.Y. et al. DASHR: database of small human noncoding RNAs. Nucleic Acids Res. 44, D216–D222 (2016).

FANTOM CAT. The Functional Annotation of the Mammalian Genome (FANTOM) project is an international consortium. This resource is an atlas of human long noncoding RNAs with accurate 5′ ends; it annotates noncoding transcripts, for example to find functional lncRNAs that affect global expression after knockout or knockdown. Hon, C.-C. et al. An atlas of human long non-coding RNAs with accurate 5′ ends. Nature 543, 199–204 (2017). Okazaki, Y. et al. Analysis of the mouse transcriptome based on functional annotation of 60,770 full-length cDNAs. Nature 420, 563–573 (2002).

GENCODE. Resource on human and mouse noncoding RNAs, drawing on data generated by the Encyclopedia of DNA Elements (ENCODE) consortium; it includes information about the noncoding RNA species and their annotations. Harrow, J. et al. GENCODE: the reference human genome annotation for the ENCODE Project. Genome Res. doi:10.1101/gr.135350.111 (2012).
LNCipedia. Database of annotated human long noncoding RNA transcript sequences and structures. Volders, P.-J. et al. LNCipedia: a database for annotated human lncRNA transcript sequences and structures. Nucleic Acids Res. 41, D246–D251 (2013).
lncRNAdb. Database of functional long noncoding RNAs manually curated from the scientific literature. Amaral, P.P. et al. lncRNAdb: a reference database for long noncoding RNAs. Nucleic Acids Res. 39, D146–D151 (2011).
lncRNAWiki. A wiki to encourage community-based curation of human long noncoding RNAs. Ma, L. et al. LncRNAWiki: harnessing community knowledge in collaborative curation of human long non-coding RNAs. Nucleic Acids Res. 43, D187–D192 (2015).

 

lncRNAtor. A portal for long noncoding RNAs with information such as expression profiles and coding potential; data sources include TCGA, GEO, ENCODE and modENCODE. Park, C. et al. lncRNAtor: a comprehensive resource for functional investigation of long non-coding RNAs. Bioinformatics 30, 2480–2485 (2014).
MINTbase. Database of tRNA fragments from 11,000 people and 32 cancer types. Pliatsika, V. et al. Nucleic Acids Res. 46, D152–D159 (2018).
miRBase. Database of published miRNA sequences and annotations. Griffiths-Jones, S. et al. Nucleic Acids Res. 36, D154–D158 (2008).
mirDIP. A resource with human data for finding microRNAs that target a gene, or genes targeted by a microRNA. Tokar, T. et al. mirDIP 4.1: integrative database of human microRNA target predictions. Nucleic Acids Res. 46, D360–D370 (2018).
MirGeneDB. A database of validated and annotated human microRNA genes. Fromm, B. et al. MirGeneDB 2.0: the curated microRNA Gene Database. Preprint at bioRxiv, https://doi.org/10.1101/258749.
NONCODE. A noncoding RNA database with information from 17 species, with an emphasis on long noncoding RNAs. The information is mined from the scientific literature and from data resources such as lncRNAdb and LNCipedia. It includes links to literature about tools such as ncFANs for functional annotation of lncRNAs. Liu, C. et al. NONCODE: an integrated knowledge database of non-coding RNAs. Nucleic Acids Res. 33, D112–D115 (2005).
Regulome resources and data. Resources and data from the Center for Personal Dynamic Regulomes, including the ATAC-seq protocol and transcriptional landscape data from 13 cell types from healthy people and 3 cell types from people with leukemia. Corces, M.R. et al. Lineage-specific and single-cell chromatin accessibility charts human hematopoiesis and leukemia evolution. Nat. Genet. 48, 1193–1203 (2016).
RNAcentral. Resource hosted at the European Bioinformatics Institute that draws on a number of other database resources, such as: LncBase, which includes, for example, a database of experimentally supported miRNA:gene interactions as well as analysis tools and pipelines, such as for miRNA pathway analysis; snOPY, a snoRNA orthologous gene database with information about snoRNAs, snoRNA gene loci and target RNAs; and TarBase, a collection of manually curated, experimentally validated miRNA–gene interactions.

 

Tools

miRDeep and miRDeep2. Tools for miRNA identification from RNA-seq data. An, J. et al. miRDeep*: an integrated application tool for miRNA identification from RNA sequencing data. Nucleic Acids Res. 41, 727–737 (2013). Friedländer, M.R. et al. miRDeep2 accurately identifies known and hundreds of novel microRNA genes in seven animal clades. Nucleic Acids Res. 40, 37–52 (2012).

miRNA prediction tool. Predicts microRNA binding sites. Miranda, K.C. et al. A pattern-based method for the identification of microRNA binding sites and their corresponding heteroduplexes. Cell 126, 1203–1217 (2006).

Oasis. Small noncoding RNA detection and expression analysis tool. Capece, V. et al. Oasis: online analysis of small RNA deep sequencing data. Bioinformatics 31, 2205–2207 (2015).
Datasets

Analysis of 13 cell types. Expression of primate- and tissue-specific microRNAs; human miRNAs, their targets, and visualization of the loci on the human genome browser. Londin, E. et al. Analysis of 13 cell types reveals evidence for the expression of numerous novel primate- and tissue-specific microRNAs. Proc. Natl. Acad. Sci. USA 112, E1106–E1115 (2015).

Sources: H. Chang, Stanford University School of Medicine; R. Johnson, University of Bern; E. Marshall, BC Cancer Agency; M. Turner, Babraham Institute; U. Ohler, Max Delbrück Center for Molecular Medicine; I. Rigoutsos, Philadelphia University + Thomas Jefferson University; Nature Research.

 

 

A celebration of cryo-EM

Here at Nature Methods, we were quite excited yesterday to wake up to the news that the Nobel Prize in Chemistry had been awarded to Jacques Dubochet, Joachim Frank, and Richard Henderson for their seminal developments in cryo-electron microscopy (better known as cryo-EM), which now enables high-resolution biomolecule structure determination. This is a technique we have been watching closely since 2013, when the first papers (including one of our own) demonstrating near-atomic-resolution structure determination with cryo-EM were published.

Though much of the excitement about cryo-EM is quite recent, the Nobel Prize is a good reminder to us all that the essential foundations of this technology were laid decades ago. We celebrated such developments, both old and new, in our 2015 Method of the Year issue featuring cryo-EM.

To commemorate this well-deserved Nobel Prize, Nature Research presents an editorially curated collection of papers published in our pages – including methods and protocols, biological results generated using cryo-EM technology, and reviews, news and comment. Check it out!

XFEL projects, tools, data portals

Earlier this year, the EuXFEL’s first laser beam reached the ‘hutch’.{credit}Jessica Mancuso{/credit}

As of September 1, the European X-ray free-electron laser (EuXFEL) is ready for the research community’s experiments; the user page is here.

In the September issue of Nature Methods we present some of the experimental ideas researchers are exploring at that facility.

Other XFELs are operational or in the works: the FERMI facility; the Linac Coherent Light Source (LCLS) at Stanford; the Pohang Accelerator Laboratory (PAL) X-ray Free-Electron Laser; the SPring-8 Angstrom Compact Free Electron Laser (SACLA); and the Swiss Free-Electron Laser (SwissFEL).

One day, there might even be an XFEL that fits on a tabletop (see below). The day is already here when scientists need to analyze mountains of XFEL data, and the EuXFEL will likely make those mountains grow. There are tools for that, and likely more tools to come (see further below for a list of some tools).

Tabletop XFEL

To complement the large XFEL facilities, a number of research groups are developing benchtop XFELs1,2. Such projects involve miniaturizing all aspects of the technology, including the accelerator. Some groups are exploring ways to pass a laser through plasma to produce bright, high-energy, short-pulsed beams. Separately, some researchers use a terahertz generator, which can provide sufficiently high pulse energies, says Franz Kärtner, a physicist at the University of Hamburg who also holds an appointment at MIT. His team, along with Petra Fromme of Arizona State University, is developing such a tabletop XFEL instrument.

The scientists would like to use the instrument for coherent diffractive imaging and spectroscopy experiments on photosystem II, a protein complex involved in photosynthesis. Their compact XFEL approach, which will use a terahertz generator, lasers and nonlinear optics, is calculated to achieve photon energies between 10 and 12 keV, hard X-rays that can be harnessed for imaging at atomic resolution, says Kärtner.

Although this compact XFEL will generate fewer photons per shot than a large-scale FEL (10^6 to 10^9 photons per shot, as opposed to 10^12 photons or more), the machine will be able to produce very short pulses, on the order of 0.5 femtoseconds. That is 10–100 times shorter than current FEL pulses. And if that comes to be, says Kärtner, the peak power of the instrument may be almost on par with that of a large XFEL.

A terahertz accelerator module for a table-top XFEL in the making

A terahertz accelerator module for a table-top XFEL in the making
{credit}DESY/Heiner Müller-Elsner{/credit}

In the instrument’s terahertz-driven accelerator, there will be acceleration gradients between 500 MV/m and 1 GV/m. It is this high frequency that helps to compress electron bunches over short distances and that will let the developers use compact electron guns. In this fashion, they will be able to shoot a coherent electron beam directly from a gun and emit an X-ray beam much like a FEL, says Kärtner.

Tabletop XFELs fill an important experimental gap between Röntgen’s X-ray tube and the large-scale FELs. “There is nothing in between,” says Kärtner. That’s akin to a situation in optical science in which researchers need to choose between a light bulb and a large-scale optical laser such as the one at the National Ignition Facility at Lawrence Livermore National Laboratory. If the developers of the compact XFEL succeed at packing enough photons into each shot, the instrument will have potential applications in many fields, he says, including enhanced characterization of materials or higher-resolution medical and structural biology imaging.

  1.  Kneip, S. et al. Nat. Phys. 6, 980–983 (2010).
  2.  Kärtner, F.X. et al. Nucl. Instrum. Methods Phys. Res. A 829, 24–29 (2016).

Data mountains

XFEL-based experiments produce mountains of data. At EuXFEL, two two-dimensional pixel detectors will each deliver 10–40 gigabytes of data every second of an experiment.
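The scale of that torrent is easy to underestimate. Here is a rough back-of-envelope calculation based only on the figures quoted above; the 12-hour session length is an illustrative assumption, not a EuXFEL specification.

```python
# Back-of-envelope estimate of the data volume at EuXFEL, using the
# figures quoted above: two 2D pixel detectors, each delivering
# 10-40 gigabytes per second. The 12-hour session length is an
# illustrative assumption, not a EuXFEL specification.

def data_volume_tb(detectors=2, gb_per_s=40, hours=12):
    """Total data in terabytes for one experimental session."""
    seconds = hours * 3600
    return detectors * gb_per_s * seconds / 1000  # 1 TB = 1,000 GB

# At the upper rate (40 GB/s per detector):
print(f"{data_volume_tb():.0f} TB")  # 3456 TB in a 12-hour session
```

Even at the lower 10 GB/s rate, a single session would approach a petabyte, which is why the storage and reduction strategies described below matter.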

Experimental data will be housed in the facility’s online systems and then moved to offline disk-based systems, also at the facility, where researchers can access and analyze it, says Filipe Maia, a biophysicist at Uppsala University.

The data torrent makes for “a daunting problem,” says Maia, “and currently there’s clearly a lack of user friendly tools.” This issue is a general trend, not unique to XFEL-based research, but researchers are getting better at handling large datasets as they familiarize themselves with the increasingly available tools. After publication, he hopes, XFEL data will be transmitted to an online repository to share it with the community. One such resource is the Coherent X-ray Imaging Data Bank (CXIDB), which he built.


Here are some tools for analyzing and managing XFEL data:

Resource Description
CASS-CFEL-ASG Suite of tools for real-time monitoring of XFEL experiments, data analysis and visualization, raw data correction, and crystal hit finding.
cctbx.xfel Suite of tools for processing measurements made during SFX experiments at an XFEL; built on the Computational Crystallography Toolbox.
CrystFEL Software suite for processing SFX data.
Cheetah Data analysis and high-throughput data reduction tools for SFX data.
Condor Simulation of flash X-ray imaging to help solve structures without needing crystallization.
Dragonfly Software/algorithm for single-particle imaging with XFELs.
Hummingbird Real-time monitoring of XFEL experiments.
Hawk Package for analyzing and phasing diffraction patterns from single-particle experiments.
IOTA Spot-finding software for XFEL-based diffraction images; part of the cctbx.xfel suite.
OnDA Real-time monitoring and data analysis of XFEL experiments.
psana A data analysis framework at LCLS.
SACLA analysis framework Real-time data processing pipeline at SACLA for serial femtosecond crystallography; it uses modified versions of Cheetah and CrystFEL.
WavePropaGator Software framework for simulating XFEL experiments.
XATOM Software for calculating and simulating X-ray–atom interactions; part of the Xraypac software package.
XMDYN Simulation tool for modeling the dynamics of matter exposed to high-intensity X-rays; part of the Xraypac software package.
Resources and Portals
Coherent X-ray Imaging Data Bank (CXIDB) A database for coherent X-ray imaging experiments.
LCLS data analysis Data analysis resources at LCLS.
Protein Data Bank (PDB) Data repository for protein structures.
SIMEX A project that aims to develop an experimental simulation platform for use at XFELs.

Sources: Henry Chapman, DESY; Janos Hajdu, Filipe Maia, Uppsala University; Sébastien Boutet, LCLS

LCLS: Linac Coherent Light Source, SLAC National Accelerator Laboratory (formerly named Stanford Linear Accelerator Center)
SACLA: SPring-8 Angstrom Compact Free Electron Laser
SFX: Serial Femtosecond Crystallography
XFEL: X-ray free-electron laser

Computable sugars: some computational resources in glycoscience

Glycoscience is sweet science

Glycoscience is sweet science{credit}PhotoDisc/ Getty Images{/credit}

As glycoscience advances, labs will increasingly want to ask questions about glycosylation sites on a protein or the structure of a sugar, says Raja Mazumder, a bioinformatician at George Washington University. They might ask, for example: are there glycosyltransferases that are expressed in the liver but not in the heart? Which ones are overexpressed by a factor of three in more than two cancers? Such questions require infrastructure building, he says, because right now there is no mechanism to allow such queries. But he and others are building such capabilities. Mazumder, along with William York at the University of Georgia, is starting to build a glycoscience informatics portal.

Mazumder wants to leverage existing ontologies in the developer community in order to build systems that can be queried on a large scale. For example, Mazumder is working with Cathy Wu at Georgetown University, who is developing the Protein Ontology. Such ontologies are collected, for example, by the non-profit OBO Foundry. To allow flexible querying, the computational resources will draw on different ontologies, including ones that relate to glycans, genes, proteins, tissues, diseases and more.

Ontologies are part of the team’s effort to build application programming interfaces (APIs) that expose the data in a given database to incoming queries. Given how complex sugars are, the informatics framework has to be well organized for both human and machine-based querying, says Mazumder.

When using the resource, a researcher will receive results that also document the search process itself such as the version of the queried database. “You need to be able to tell where you got that information from,” says Mazumder. Tracking data provenance matters especially in an age when databases continuously integrate information emerging in the literature.

For the Food and Drug Administration, Mazumder is developing computational standards for high-throughput sequencing, which he wants to also apply to glycoscience. His ‘biocompute object’ captures the computational workflow a lab used to generate results: the software used, the databases queried and their versions, and identifiers of data inputs and outputs. These biocompute objects are intended to help regulatory scientists interpret submitted work. They can also help scientists see whether, for example, the version of the software they used worked as it should, says Mazumder.
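To make the idea concrete, here is a minimal sketch of the kind of provenance record a biocompute object captures, per the description above. All tool names, database names and field labels are hypothetical illustrations, not the actual FDA or biocompute schema.

```python
# Illustrative sketch of the provenance a biocompute object records, as
# described above: the software used, the databases queried and their
# versions, and identifiers of data inputs and outputs. All names and
# field labels here are hypothetical, not the actual schema.
import json

biocompute_object = {
    "pipeline": [
        {"software": "example-aligner", "version": "2.1.0"},         # hypothetical tool
        {"software": "example-variant-caller", "version": "0.9.4"},  # hypothetical tool
    ],
    "databases": [
        {"name": "example-glycan-db", "version": "2017-09-01"},      # hypothetical database
    ],
    "inputs": ["sample_001.fastq"],   # identifiers of data inputs
    "outputs": ["sample_001.vcf"],    # identifiers of data outputs
}

# Serializing the record keeps the provenance alongside the results.
print(json.dumps(biocompute_object, indent=2))
```

A reviewer reading such a record can rerun the same software versions against the same database versions, which is the reproducibility Mazumder describes.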

Too often, labs use computational tools without benchmarking them, says Mazumder. “It would be unthinkable for a wet-lab scientist to not have a positive and negative control,” he says. In informatics, developers benchmark their software, but users often do not have these habits. “They don’t even know: if I don’t find anything, is it because my software did not run well or not?”

As labs move to big data analysis in genomics and also, eventually, in glycoscience, this aspect is ever more important, says Mazumder. In his view, biocompute objects will help glycobiology researchers communicate with one another about their results, such as where on a protein they found a sugar with a given structure. More generally, it will help glycoscientists to have a better way to connect the available sugar resources as they pursue their questions of interest.


Here are some resources that glycoscientists can tap into:                             

 Category Resource Description
General resources and funding information
Transforming Glycoscience: A Roadmap for the Future Report by the National Research Council of the National Academies of Science
NIH Common Fund program in glycoscience  Funding opportunities from the NIH Common Fund program in glycoscience
A Roadmap for Glycoscience in Europe By the BBSRC, EGSF and the European Science Foundation; a glycoscience roadmap for Europe
GlycoNet Resources related to glycoscience research in Canada, based at the University of Alberta where the Alberta Glycomics Centre is located
National Center for Functional Glycomics A glycomics-related Biomedical Technology Resource Center based at Beth Israel Deaconess Medical Center, Harvard Medical School, with resources on, for example, microarrays and microarray services, protocols, training and databases
Databases and  portals 
CAZy Carbohydrate-Active Enzymes, a database of enzyme families that degrade, modify or create glycosidic bonds
Consortium for Functional Glycomics Resources and glycoscience data. Part of the National Center for Functional Glycomics.
ExPASy Software tools and databases to simulate, predict and visualize glycans, glycoproteins and glycan-binding proteins
Glycan Library  A list of lipid-linked sequence-defined glycan probes
Glyco3D A portal for structural glycoscience
GlycoBase 3.2 A database of N- and O-linked glycan structures with HPLC, UPLC, exoglycosidase sequencing and mass spectrometry data
GlycoPattern Portal for glycan array experimental results from the Consortium for Functional Glycomics
Glycosciences.de Collection of databases and tools in glycoscience
GlyToucan Repository for glycan structures based in Japan
MatrixDB A database of experimental data of interactions by proteoglycans, polysaccharides and extracellular matrix proteins
Repository of Glyco-enzyme expression constructs University of Georgia Complex Carbohydrate Research Center repository for glyco-enzyme constructs
SugarBind A database of carbohydrate sequences to which bacteria, toxins and viruses adhere
UniCarbKB A resource curated by scientists in five countries. It includes GlycoSuiteDB, a database of glycan structures; EUROCarbDB, an experimental and structural database; and UniCarb-DB, a mass spectrometry database of glycan structures
Software tools
CASPER Web-based tool to calculate NMR chemical shifts of oligo- and polysaccharides
Glycan Builder An online tool at ExPASy for predicting possible oligosaccharide structures on proteins
GlycoMiner/GlycoPattern Software tools to automatically identify mass spectra of N-glycopeptides
GlyMAP An online resource for mapping glyco-active enzymes
NetOGlyc Software tool for predicting O-glycosylation sites on proteins
SweetUnityMol Molecular visualization software

Sources: NIH, R. Mazumder, George Washington University; New England Biolabs, Thermo Fisher Scientific, Nature Research

Building OpenSPIM systems

Tuning reagents, software, or equipment is all in a day’s work in the lab. Building instruments from scratch, however, is a task more typical for physicists who might 3D print or machine the parts they need and then assemble them into the instrument they want. They might construct an instrument for a specific experiment or develop a design that helps hundreds of labs. That model could go on to be modified and hacked in a variety of ways.

In light-sheet microscopy, a sample is illuminated with a thin sheet of light and fluorescence is detected by a separate lens placed orthogonally to the excitation light. {credit}Vineeth Surendranath{/credit}

Build your own OpenSPIM system{credit}Michael Weber, Peter Pitrone, Pavel Tomancak{/credit}
In microscopy, biologists as well as physicists and computer scientists are building the hardware and software they want and sharing the blueprints with others.

Building an OpenSPIM model is not quite this fast, but this shows the parts needed for those who want to give it a try.

Here are some user experiences from the OpenSPIM community. You can read more in the December issue of Nature Methods.

Perspectives from users, builders and one-day-maybe OpenSPIM builders

From left to right: Tiago Pinheiro, Johanna Gassler, Radoslav Aleksandrov, Florian Vollrath et al.

 

An OpenSPIM community has evolved to address the needs of researchers setting out to build their own systems.

Johanna Gassler, Tiago Pinheiro, Florian Vollrath and Radoslav Aleksandrov worked together during the European Molecular Biology Organization (EMBO) course on light-sheet microscopy in August. Separately, Johannes Girstmair at University College London built an OpenSPIM microscope.

Johanna Gassler
PhD student in the lab of
Kikue Tachibana-Konwalski
Institute of Molecular Biotechnology
of the Austrian Academy of Sciences
Vienna, Austria {credit}Philippe Laissue{/credit}

Her thoughts on using light-sheet microscopy…

Gassler works with mouse oocytes and early embryos in a lab that looks at many facets of how an oocyte transforms into a zygote after fertilization. She does live-cell imaging with confocal microscopy, and phototoxicity is a constant concern. Light-sheet microscopy lets her take a closer look, especially in terms of temporal resolution, at the dynamic processes inside an egg or an early embryo without the phototoxicity worries she would have with other forms of microscopy.

In her view, light-sheet microscopy is one of the most exciting technologies of the last decade, “and it is really great to be a scientist in a time where these systems are still in their developing stage and to see how fast progress is made.”

“When imaging samples with confocal microscopy one does not tend to think so much about the specific characteristics of your sample compared to your neighbor’s. You just use the same microscope to image both; of course imaging settings change, but the hardware doesn’t. When taking the route of building your own microscope, like with OpenSPIM, one is way more flexible in what pieces of hardware you would like to add to improve the imaging of your sample specifically. This flexibility is a huge advantage of OpenSPIM, but also a disadvantage at the same time.”

“If an OpenSPIM is built for a special application and the group that used it moved or for some reason or other doesn’t use it anymore, then it is really hard to just use it for something very different. So in the worst case scenario the microscope would not be used anymore. Of course one could just use the parts of the old one to build a new one for a different application, but then you also need a person willing to do that. The movement of OpenSPIM is just starting to arrive in the minds of biologists, so attempting to build your own is still somewhat rare. That said, the light-sheet microscopy and OpenSPIM community make it really easy to start into this adventure.”

About those data mountains…

The data output of a light-sheet microscope is several orders of magnitude higher than in conventional microscopy, says Gassler, making it necessary to invest in data storage and to explore ways to immediately reduce the data size. That can be done by omitting unnecessary data right after imaging or even during imaging sessions. And, she says, “to get the most out of light-sheet microscopy as a biologist, it is very valuable to team up with physicists and computer scientists.”


Tiago Pinheiro
PhD student in neuroscience and regenerative medicine
in the lab of Andras Simon in the
department of cell and molecular biology
Karolinska Institute
Stockholm, Sweden
{credit}Benny Coyac{/credit}

What he likes about light-sheet microscopy…

In his work with fixed, cleared salamander brains to study dopamine neuron regeneration, Tiago Pinheiro likes the speed with which images can be captured with light-sheet microscopy. Here is a video of glial protein fibers in the brain of a developing salamander, made from images he generated on the ZEISS Z.1 microscope and then stitched and processed. The brain had been cleared with Advanced CUBIC.

What is also beneficial about light-sheet microscopy, says Pinheiro, is being able to rotate the sample into just the right orientation. With confocal microscopy and 3D-mounted samples, that is a big hurdle: it takes many hands and much time to image a brain slice by slice and then trace the paths of neurite fibers from slice to slice.

The advantage of OpenSPIM, in Pinheiro’s view, is that he can do experiments instead of waiting for the rather overbooked commercial light-sheet microscopes. If you know OpenSPIM works for your specific application, he says, a scientist could build several of these microscopes on a budget and speed up image acquisition for an experiment.

OpenSPIM suitcase

{credit}Vineeth Surendranath{/credit}

About being able to pack a microscope in a suitcase…

“It is great for education purposes,” says Pinheiro, who would love to have an OpenSPIM at Karolinska to show his colleagues, to make people aware of the potential light-sheet microscopy has, and to let them see how it works. “As confocal microscopy made its way to every biology lab I am convinced light sheet microscopy will as well. A microscope in a suitcase is helping that happen.”

About the bigger scheme….

Research centers in biology and medicine have a growing need for staff with knowledge of physics and computing, says Pinheiro. A biologist can build an OpenSPIM after attending the EMBO course, as he has, but he or she will still need expertise at a home institution to troubleshoot any issues with assembly or software. More generally, he says, not everyone will be able to take the course. But at the same time there is an urgent need to more quickly and extensively merge the fields of biology, computer science and physics. “I believe not doing that means falling behind in answering essential scientific questions in a better way,” he says.

Florian Vollrath
Physicist, programmer,
research associate in
the imaging facility at the
Max Planck Institute for Brain Research
Frankfurt, Germany

What he likes most about light-sheet microscopy…

Florian Vollrath helps scientists at the Max Planck Institute for Brain Research with their experiments and their data analysis. Vollrath and colleagues are ramping up to build a light-sheet microscope for the imaging facility. The model will have a different camera, stage and objectives than the basic OpenSPIM setup.

The institute mainly works with fixed samples, where phototoxicity isn’t a problem but bleaching can be. What matters most to him about light-sheet microscopy is its measurement speed compared to confocal microscopes, he says. “Our dream is to image as fast as possible complete brains and being able to analyze their neuronal structure afterwards, without the need of slicing them in many pieces and imaging them one by one,” he says. Light-sheet microscopes have a trade-off: their resolution is not as good as what can be achieved with confocal microscopes. “Our main question is now if it is still good enough.”

Being part of a community…

With an OpenSPIM community in place, those with less or even no experience can get on their way to working with light-sheet microscopy, says Vollrath. The open-source software works, but it is not as advanced as the software in commercial systems, he says. It takes programming experience to adjust it if one wants to use components other than those on the OpenSPIM website.

Johannes Girstmair
PhD student
in the lab of Maximilian Telford
in the department of genetics, evolution
and environment
University College London{credit}Armin Märk{/credit}


About tapping into curiosity…

For biologists who are curious to get a start with OpenSPIM, Johannes Girstmair recommends taking one’s own samples to one of the around 70 OpenSPIM set-ups in labs around the world and finding someone who will “let you play around a little bit.”

About angles and speed …

“Speed does not always matter,” says Girstmair; it all depends on the question one is pursuing, he says. Speed matters with live imaging. For example, he has looked at cellular behavior and cytoskeleton dynamics, and tracked the nuclei of developing embryos to create an early cell lineage; with a slow imaging system he can miss important information. He mainly uses one angle for time-lapse movies, but time matters especially if someone is doing time-lapse live imaging with multiple angles: “you don’t want to wait 2-3 min for each angle to be acquired simply because it would mean that with 5 angles you would need to wait almost 15 minutes per time-point,” he says. “A lot of development can happen in between.” And once the images from a later angle are acquired, they risk no longer fitting well with the first angle. “That’s not good and might give you funny results once you fuse angles that are shifted in time quite a lot.”
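The arithmetic behind his point can be sketched in a few lines, using the 5-angle, roughly 3-minutes-per-angle figures from the quote above.

```python
# The arithmetic behind the multi-angle concern: per-angle acquisition
# time multiplied by the number of angles sets the minimum interval
# between time points, and later angles lag the first within each
# time point. Numbers follow the 5-angle, ~3-min example in the quote.

def timepoint_minutes(angles=5, minutes_per_angle=3):
    """Total acquisition time for one multi-angle time point."""
    return angles * minutes_per_angle

def last_angle_lag(angles=5, minutes_per_angle=3):
    """How far the final angle trails the first within a time point."""
    return (angles - 1) * minutes_per_angle

print(timepoint_minutes())  # 15 minutes per time point
print(last_angle_lag())     # the last angle lags the first by 12 minutes
```

That 12-minute lag between the first and last angle is why angles acquired on a slow system can be shifted in developmental time when they are fused.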

He has also found that a faster, more smoothly running system can be better for living embryos: a slow system may delay the laser shutter (although he has not measured this), which means the embryo might be exposed to the light sheet for longer, increasing the chance of phototoxicity. If you can make a system faster and it does not cost much to do so, why not do it, he says.

The configuration of the OpenSPIM model that Johannes Girstmair built.{credit}J. Girstmair{/credit}

About alignment…

Aligning the light sheet with the focal plane of the detection objective is tricky because the acquisition chamber has to be watertight. That limits moving the detection objective forward and backward, which could otherwise be done to align the light sheet well, says Girstmair. “We can cheat a bit by using the large corner mirrors to align the light-sheet to the focal plane,” he says. Information about how he assembled and aligned his system, along with videos of continuous imaging experiments, is in his BMC Developmental Biology paper.

Also, he says, there are ways to nudge the detection objective a bit forward and backward within what the O-rings, which make the detection objective watertight, can tolerate. “People have a little wheel for this purpose, which doesn’t seem to be super hard to install if somebody insists on this,” he says.

About some questions that tempt him…

Girstmair studies evo-devo questions using, for example, the polyclad flatworm Maritigrella crozieri. These lophotrochozoans, or Spiralians as they are sometimes called, are interesting because so many phyla, including the flatworms, show a very similar developmental pattern early on, called spiral cleavage. This likely ancestral cleavage program allows scientists to compare the development of different phyla even though they branched apart millions of years ago. Most flatworms have neither a stereotypic spiral cleavage nor a free-swimming larval stage of the kind found in other lophotrochozoan phyla. M. crozieri has both the very stereotypic spiral cleavage pattern and a free-swimming planktotrophic larval stage, says Girstmair, making these embryos a good starting point for comparative studies.

The polyclad flatworm Maritigrella crozieri imaged with different techniques{credit}J. Girstmair{/credit}

About needing a ‘Pavel’…

Pavel Tomancak, one of the co-founders of OpenSPIM, is a co-author on Girstmair’s paper about building OpenSPIM and using it to study Maritigrella. Tomancak’s presence might make his project look a little less like a do-it-yourself one. “Of course not everybody can have a ‘Pavel’ close by,” as he did, says Girstmair. But for starters they can travel to a lab with an OpenSPIM set-up and work there with their own samples.

“As for the assembly in London, I really put everything together myself and, more importantly, hardware-configured the microscope myself,” says Girstmair. Several people offered plenty of advice, which is why, he says, they also deserve to be on the paper. They include Tomancak; former Tomancak lab member Peter Pitrone, now a light-sheet microscopy consultant; and Mette Handberg-Thorsager, a developmental biologist also in Dresden, with whom Girstmair tested microinjection techniques.

For the OpenSPIM setup, Girstmair and his colleagues used some parts that differ from the basic setup, such as a multi-laser system, controller boxes and other components, which also meant there was “a lot more to learn and sometimes even get frustrated about,” he says.

This OpenSPIM image comes from a fixed embryo imaged with multiple views. The nuclei of each cell are stained with the nucleic acid stain SytoxGreen. The first angle will also be the orientation of the 3D-reconstructed embryo when all the different views are combined into a single image file using software called Fiji.

“I think the images made with the OpenSPIM are not particularly better than the confocal images,” says Girstmair. The confocal images are crisper and have better resolution. But they can’t contain all the information contained in an image captured from multiple angles. Imaging at multiple angles is very difficult with a conventional confocal microscope because of the different ways specimens are mounted, he says.

Fixed specimens imaged with OpenSPIM are usually embedded in agarose and therefore keep their natural shape. “With the confocal I would try to squeeze a stained Mueller’s larva as much as possible in order to get the most out of the staining from a single view and thereby I also lose the specimen’s natural shape,” he says. When it comes to capturing the development of Maritigrella embryos, OpenSPIM is much better: it is faster and the embryos are exposed to much less light. Another advantage: the freely available software tools for 3D reconstructions.

OpenSPIM is a movement propelled by the crowd, among them these people:

OpenSPIM developers and students

{credit}Vineeth Surendranath{/credit}

Peter Pitrone (top) is first author on the paper presenting OpenSPIM; Pavel Tomancak (third from the top), a researcher at the Max Planck Institute (MPI) of Molecular Cell Biology and Genetics in Dresden, co-developed OpenSPIM. The others in this photo are PhD students who took a course on OpenSPIM and who put together the OpenSPIM web site.


DIY Biolabs – and why they matter

When proponents of Do-it-yourself Biology explain their motivation for getting involved in the movement, they often resort to colorful imagery. Take, for example, Patrick D’haeseleer, who helps organize the Counter Culture Labs in the San Francisco Bay Area. He asks, “When the first village tamed fire, the neighboring village was freaking out. Should only the village elders be allowed to make fire, or should we teach everybody?” “Any new technology has risk, but it behooves us to have all citizens know how these technologies work and what the risks are,” he continues. “The technology needs to be democratized because it will dominate the 21st century.”

In our September editorial we encouraged people to look up DIY Biolabs in their backyard and consider getting involved.

A recent editorial in Nature also addresses the topic of citizen scientists, focusing on sample collection and data analysis. The authors raise the question of how conflicts of interest should be addressed and recommend full transparency about the motives and ambitions of citizen scientists.

We agree that it is important to be upfront about one’s involvement in scientific endeavors, but motives conflicting with those of established scientists need not preclude participation in the scientific process. As people learn more about methods and their potential, they may change their position on certain issues or, if not, they will have a better-grounded basis for what they believe. Either outcome is a success.

Reflections on impact

In this month’s editorial, we reflect on the journal impact factor and its relationship to impact, especially for publishing methods papers.

Here are a few additional links that readers may find interesting:

The recent HEFCE (Higher Education Funding Council for England) review of the use of metrics, including the journal impact factor, for research assessment can be found here.

The ASCB’s San Francisco Declaration on Research Assessment (DORA), from a few years ago, can be found here.

An editorial from Nature Materials analyzing the ability of the journal impact factor to predict median citations over 5 years can be found here.

Points of Significance is now free access

The Points of Significance column on statistics is now free access as part of a larger resource on statistics on nature.com.

Stats Collection

{credit}Erin Dewalt{/credit}

When Nature Methods launched the Points of Significance column over a year ago we were hopeful that those biologists with a limited background in statistics, or who just needed a refresher, would find it accessible and useful for helping them improve the statistical rigor of their research. We have since received comments from researchers and educators in fields ranging from biology to meteorology who say they read the column regularly and use it in their courses. Hearing that the column has had a wider impact than we anticipated has been very encouraging and we hope the column continues for quite some time.

In the meantime, I’m very happy to say that the entire column is now freely available as part of the Statistics for biologists collection, a new free resource on nature.com that collects selected published articles on statistics from across the Nature family of journals. A blog post on Of Schemes and Memes by Veronique Kiermer (Director, Authors and Reviewers services) introduces this resource and discusses its role in NPG’s ongoing efforts to improve reproducibility in science.

Although the primary focus of the articles is on biological applications, researchers from many disciplines will find useful information in this collection. The Points of Significance articles have a dedicated page listing these articles in chronological order with a brief summary of each.

The Statistics for biologists collection will not be a static resource, but will be continuously updated with new content. In particular, each new Points of Significance article will be added as it is published. We have also tried to highlight content of a similar nature that other publishers have made freely available.

I’d like to thank Martin Krzywinski and Naomi Altman for their continued hard work on these columns, and our guest author Paul Blainey. Without their dedication and time the Points of Significance column would never have been this successful. Now it will be easier for even more people to benefit from these wonderful articles.