Reflections on impact

In this month’s editorial, we reflect on the journal impact factor and its relationship to impact, especially for publishing methods papers.

Here are a few additional links that readers may find interesting:

The recent HEFCE (Higher Education Funding Council for England) review of the use of metrics, including the journal impact factor, for research assessment can be found here.

The ASCB’s San Francisco Declaration on Research Assessment (DORA), from a few years ago, can be found here.

An editorial from Nature Materials analyzing the ability of the journal impact factor to predict median citations over 5 years can be found here.

Ten years of Methods

Our tenth anniversary is an occasion to celebrate methods development!

In our Anniversary Issue, we highlight ten areas of methods development, among many candidates, that have had a lot of impact on biological research over the last decade. We also take the opportunity to look back at the papers we have published in some of these areas. We hope to add similar descriptions for all our ‘top-ten methods’ in coming months.

You can look back at the last ten years of Nature Methods in the following areas here:

Microbial sequencing

Super-resolution microscopy

Optogenetics in neuroscience

Light-sheet imaging

Mass spectrometry-based proteomics

High-throughput sequencing data analysis

Anniversary Issue Cover

Over the summer we asked for contributions from our readers for the cover of our tenth anniversary issue. We asked for images of the number “10” made using biological research tools and techniques. We were delighted to have many excellent submissions and to be able to use them all on the cover. Here is a bit more detail about these images.

Ke image

{credit}Yonggang Ke{/credit}

Yonggang Ke at Georgia Institute of Technology and Emory University sent us an image of DNA nanostructures. Ke and colleagues used DNA origami to generate two self-assembled 3D nanostructures, imaged them with transmission electron microscopy, and then assembled the images to form the number 10. The height of the final image is 120 nm.

Hogberg cover

{credit}Alan Shaw and Björn Högberg{/credit}

Alan Shaw and Björn Högberg at Karolinska Institutet also applied nanotechnology to the challenge. Building on their recently published Nanocalipers technique (Shaw et al., 2014), they displayed a ferritin protein as the “0” (instead of ephrin as in their published paper) and used DNA origami to generate a nanostructure in the form of a “1”.

DSCN0678-adjusted

{credit}Sandra Duffy{/credit}

Sandra Duffy at Griffith University based her image on indicators of cell viability. Cytotoxic compounds were added to mammalian cells in a 384-well microtiter plate, either in the shape of a 10 in one half of the plate, or to all wells outside the shape of a 10 in the other half of the plate. After incubation, a cell viability marker (resazurin) was added to the wells. Viable cells convert the blue reagent to red, and the image was taken with a simple point-and-shoot camera.

Nano-lantern-2

{credit}Akira Takai, Yasushi Okada, Masahiro Nakano and Takeharu Nagai{/credit}

Nano-lantern-1

{credit}Akira Takai, Yasushi Okada, Masahiro Nakano and Takeharu Nagai{/credit}

Nano-lantern-3

{credit}Akira Takai, Yasushi Okada, Masahiro Nakano and Takeharu Nagai{/credit}

Akira Takai, Yasushi Okada, Masahiro Nakano and Takeharu Nagai, at Osaka University, used multicolour luminescent reporters to write the number 10, either by expressing them in bacterial cells streaked on an agar plate, or by aliquoting them in purified form in a 96-well plate.

10.2-Merge-v2-cropped2

{credit}Lauren Polstein and Charles Gersbach{/credit}

Lauren Polstein and Charles Gersbach at Duke University used light-sensitive transcriptional activators to photoactivate a GFP reporter in mammalian cells in the shape of the number 10.

Navneet Dogra and T. Kyle Vanderlick at Yale University examined bacteria stained with fluorescein (green) interacting with small unilamellar vesicles labeled in red. They used a laser to photobleach all fluorescence except that in the desired shape of the number 10. Image to come.

Finally, Kristina Woodruff and Sebastian Maerkl at EPFL used a standard microarrayer to spot live mammalian cells onto a 675-well array in the shape of a 10 (Woodruff et al., 2013). Image to come.

We are very grateful to all contributors – thank you for helping us design a cover that salutes the creativity and ingenuity of methods developers!

Microbial sequencing at Nature Methods

Over the years, Nature Methods has published many methods to generate and analyze complex sequence data for microbial studies. We cover highlights from our papers below.

Carl Woese set the stage for a molecular taxonomy of microbial life in 1977 by demonstrating that 16S ribosomal RNA could form the basis of prokaryotic classification. Amplifying markers such as 16S from microbial mixtures really took off with the advent of high-throughput sequencing, which provided a way to rapidly profile communities sampled directly from the environment. Shotgun sequencing approaches are increasingly used for taxonomic profiling as well, enabling gene and genomic sequences to be reconstructed for the functional characterization of communities.

Amplicon-based community profiling
The 454 pyrosequencing platform originally dominated efforts to study the 16S locus because of its long sequence reads. In 2008, Rob Knight and colleagues described the use of error-correcting barcodes for pyrosequencing hundreds of samples together. Then in 2013, Jeffrey Dangl and colleagues took barcoding to a new level by tagging every template molecule during library preparation on the Illumina platform, removing much of the PCR bias and error introduced during amplification.
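
The idea behind barcoded multiplexing can be illustrated with a toy demultiplexer. The six-base barcodes below are made up for illustration (they are not the published codes); because they differ from one another at many positions, a read whose barcode is within Hamming distance 1 of a known barcode can still be assigned unambiguously:

```python
# Toy barcode demultiplexing with single-error correction. BARCODES maps
# hypothetical sample barcodes (pairwise Hamming distance 6) to sample names.

BARCODES = {"AACCGG": "sample1", "TTGGCC": "sample2", "CCAATT": "sample3"}

def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(observed):
    """Assign an observed barcode to a sample if it is within one error."""
    hits = [s for bc, s in BARCODES.items() if hamming(observed, bc) <= 1]
    return hits[0] if len(hits) == 1 else None  # ambiguous or unknown: drop

print(demultiplex("AACCGG"))  # sample1 (exact match)
print(demultiplex("AACCGT"))  # sample1 (one sequencing error corrected)
print(demultiplex("GGGGGG"))  # None (unrecognized)
```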

On the computational side, Christopher Quince and colleagues presented PyroNoise in 2009 for ‘denoising’ or removing errors from pyrosequencing flowgrams. Jens Reeder and Rob Knight followed a year later with Denoiser, a fast heuristic alternative. Gene Tyson and colleagues moved away from flowgrams with their Acacia software, which corrects sequence files directly and can also work on Ion Torrent data due to its similar error profile containing homopolymeric repeats.

Once cleaned up, marker sequences need to be grouped into ‘operational taxonomic units’ (OTUs) that roughly correspond to genera, species or strains. Among the many algorithms that do this, Robert Edgar introduced UPARSE (pronounced YOU-parse) in 2013 for accurate OTU clustering in the face of erroneous or chimeric sequence reads.
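
To make the clustering step concrete, here is a toy greedy clusterer that assigns each read to the first representative it matches at a 97% identity threshold, starting a new OTU otherwise. This sketches the general OTU-picking idea only, not the UPARSE algorithm itself, which additionally ranks reads by abundance and filters chimeras:

```python
# Toy greedy OTU clustering at a fixed identity threshold.

def identity(a, b):
    """Fraction of matching positions (naive, alignment-free)."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def cluster_otus(reads, threshold=0.97):
    centroids = []    # one representative sequence per OTU
    assignments = []  # OTU index assigned to each read
    for read in reads:
        for i, c in enumerate(centroids):
            if identity(read, c) >= threshold:
                assignments.append(i)
                break
        else:  # no centroid matched: this read seeds a new OTU
            centroids.append(read)
            assignments.append(len(centroids) - 1)
    return centroids, assignments

reads = ["A" * 100, "A" * 99 + "C", "G" * 100]  # second read is 99% identical to first
centroids, assignments = cluster_otus(reads)
print(len(centroids), assignments)  # 2 [0, 0, 1]
```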

To stitch the computational analysis steps together, ‘quantitative insights into microbial ecology’, or QIIME (pronounced chime), from Rob Knight and colleagues offers a user-friendly modular pipeline for amplicon sequence analysis.

Metagenomic community profiling
In shotgun metagenomic approaches, all fragments of genomic DNA in a sample are sequenced and classified. Isidore Rigoutsos and colleagues introduced PhyloPythia in 2007 to assign fragments to higher taxonomic groups or ‘bins’ by matching tetranucleotide frequencies against signatures from known taxa. Its faster, open-source successor PhyloPythiaS, from Alice McHardy and colleagues, came out in 2011.
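
The composition signature underlying such binners is simple to sketch: a fragment is summarized as its vector of tetranucleotide frequencies, which can then be compared against profiles learned from known taxa (the classification step is omitted here):

```python
from collections import Counter
from itertools import product

# A DNA fragment's tetranucleotide composition: the normalized counts of
# all 256 possible 4-mers. Minimal sketch of the signature only.

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # all 256 tetramers

def tetra_freq(seq):
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = sum(counts[k] for k in KMERS) or 1  # ignore k-mers with N etc.
    return [counts[k] / total for k in KMERS]

v = tetra_freq("ACGTACGTACGT")
print(len(v), round(sum(v), 6))  # 256 1.0
```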

Arthur Brady and Steven Salzberg used sequence composition alone, or combined it with sequence alignment, in Phymm and PhymmBL in 2009; an expanded PhymmBL with additional functionality and parallelization came out in 2011.

In 2012, Curtis Huttenhower and colleagues described MetaPhlAn, which limits analysis to clade-specific marker genes to speed up the classification of sequence reads. Peer Bork and colleagues also extracted a limited marker set from metagenomic data in their metagenomic OTUs (mOTU) approach in 2013, but used 40 universally conserved prokaryotic genes. Both methods work best in systems like the human gut that have a large number of sequenced reference genomes.

Genomes from mixtures
Earlier this year, Christopher Quince, Anders Andersson and colleagues published an unsupervised binning method called CONCOCT to help reconstruct genomes from mixtures. It uses sequence composition and differential coverage across samples to assign pre-assembled contiguous sequences (contigs) to species or strain bins.

Single-cell sequencing is another way to obtain microbial genomes. Paul Blainey and Stephen Quake discuss challenges and opportunities for single-cell sequencing in a Commentary in our Method of the Year issue in 2014. When cultures are available, long-read single-molecule sequencing technology can provide very high quality genome sequences; the HGAP software from Jonas Korlach and colleagues makes this possible using a single Pacific Biosciences sequencing library.

With genomic sequences in hand, there remains the question of how to fit them within an appropriate taxonomy. Peer Bork and colleagues tackled the problem in 2013 with their species identification (SpecI) tool, which bases classification on the same 40 markers as mOTU.

Functional analysis and ecology
An array of tools has been designed to wrest ecological and biological insights from metagenomic sequence data, such as the GENE PRediction IMprovement Pipeline (GenePRIMP) for annotating prokaryotic genomes, by Amrita Pati and colleagues in 2010, and the metagenomeSeq method to test for differential microbial abundance across environments or conditions, by Mihai Pop and colleagues in 2013 (also see a comment by Bork and colleagues and the authors’ reply).

In 2010, Rob Knight and colleagues compared 51 methods for their ability to identify biologically relevant distribution patterns using real and simulated 16S pyrosequencing data from samples that were clustered or assayed along environmental gradients. In 2012, Jack Gilbert and colleagues developed microbial assemblage prediction (MAP), an artificial neural network approach to model microbial community structure across the Western English Channel that combines time course metagenomic data from a single site with bioclimatic data gathered over the entire channel.

Quality control and bias
Generating accurate and robust microbial sequence data requires rigorous benchmarking and controls, and experimental methods are constantly improving. Nikos Kyrpides and colleagues studied the use of simulated data to evaluate metagenomic analysis methods in 2007. In 2010, Philip Hugenholtz and colleagues evaluated two methods to deplete rRNA from metatranscriptomes.

J Gregory Caporaso and colleagues further demonstrated the effect of Illumina read quality on taxonomic assignment and diversity assessment in 2013, and Scott Kelley and colleagues developed SourceTracker software to identify contaminants in microbial sequencing studies.

We look forward to many more contributions in the field of microbial sequencing.

References:
Alice Carolyn McHardy et al.
Accurate phylogenetic classification of variable-length DNA fragments
Nature Methods 4, 63-72 (2007) doi:10.1038/nmeth976

Konstantinos Mavromatis et al.
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
Nature Methods 4, 495-500 (2007) doi:10.1038/nmeth1043

Micah Hamady, Jeffrey J Walker, J Kirk Harris, Nicholas J Gold & Rob Knight
Error-correcting barcoded primers for pyrosequencing hundreds of samples in multiplex
Nature Methods 5, 235-237 (2008) doi:10.1038/nmeth.1184

Christopher Quince et al.
Accurate determination of microbial diversity from 454 pyrosequencing data
Nature Methods 6, 639-641 (2009) doi:10.1038/nmeth.1361

Arthur Brady & Steven L Salzberg
Phymm and PhymmBL: metagenomic phylogenetic classification with interpolated Markov models
Nature Methods 6, 673-676 (2009) doi:10.1038/nmeth.1358

J Gregory Caporaso et al.
QIIME allows analysis of high-throughput community sequencing data
Nature Methods 7, 335-336 (2010) doi:10.1038/nmeth.f.303

Jens Reeder & Rob Knight
Rapidly denoising pyrosequencing amplicon reads by exploiting rank-abundance distributions
Nature Methods 7, 668-669 (2010) doi:10.1038/nmeth0910-668b

He et al.
Validation of two ribosomal RNA removal methods for microbial metatranscriptomics
Nature Methods 7, 807-812 (2010) doi:10.1038/nmeth.1507

Amrita Pati et al.
GenePRIMP: a gene prediction improvement pipeline for prokaryotic genomes
Nature Methods 7, 455-457 (2010) doi:10.1038/nmeth.1457

Justin Kuczynski,  Zongzhi Liu,  Catherine Lozupone,  Daniel McDonald,  Noah Fierer &  Rob Knight
Microbial community resemblance methods differ in their ability to detect biologically relevant patterns
Nature Methods 7, 813-819 (2010) doi:10.1038/nmeth.1499

Patil et al.
Taxonomic metagenome sequence assignment with structured output models
Nature Methods 8, 191-192 (2011) doi:10.1038/nmeth0311-191

Arthur Brady & Steven L Salzberg
PhymmBL expanded: confidence scores, custom databases, parallelization and more
Nature Methods 8, 367-367 (2011) doi:10.1038/nmeth0511-367

Dan Knights et al.
Bayesian community-wide culture-independent microbial source tracking
Nature Methods 8, 761-763 (2011) doi:10.1038/nmeth.1650

Lauren Bragg, Glenn Stone, Michael Imelfort, Philip Hugenholtz &  Gene W Tyson
Fast, accurate error-correction of amplicon pyrosequences using Acacia
Nature Methods 9, 425-426 (2012) doi:10.1038/nmeth.1990

Nicola Segata et al.
Metagenomic microbial community profiling using unique clade-specific marker genes
Nature Methods 9, 811-814 (2012) doi:10.1038/nmeth.2066

Peter E Larsen,  Dawn Field &  Jack A Gilbert
Predicting bacterial community assemblages using an artificial neural network approach
Nature Methods 9, 621-625 (2012) doi:10.1038/nmeth.1975

Robert C Edgar
UPARSE: highly accurate OTU sequences from microbial amplicon reads
Nature Methods 10, 996-998 (2013) doi:10.1038/nmeth.2604

Derek S Lundberg,  Scott Yourstone,  Piotr Mieczkowski,  Corbin D Jones &  Jeffery L Dangl
Practical innovations for high-throughput amplicon sequencing
Nature Methods 10, 999-1002 (2013) doi:10.1038/nmeth.2634

Shinichi Sunagawa et al.
Metagenomic species profiling using universal phylogenetic marker genes
Nature Methods 10, 1196-1199 (2013) doi:10.1038/nmeth.2693

Daniel R Mende,  Shinichi Sunagawa,  Georg Zeller &  Peer Bork
Accurate and universal delineation of prokaryotic species
Nature Methods 10, 881-884 (2013) doi:10.1038/nmeth.2575

Chen-Shan Chin et al.
Nonhybrid, finished microbial genome assemblies from long-read SMRT sequencing data
Nature Methods 10, 563-569 (2013) doi:10.1038/nmeth.2474

Nicholas A Bokulich et al.
Quality-filtering vastly improves diversity estimates from Illumina amplicon sequencing
Nature Methods 10, 57-59 (2013) doi:10.1038/nmeth.2276

Joseph N Paulson,  O Colin Stine,  Héctor Corrada Bravo &  Mihai Pop
Differential abundance analysis for microbial marker-gene surveys
Nature Methods 10, 1200-1202 (2013) doi:10.1038/nmeth.2658

Paul C Blainey &  Stephen R Quake
Dissecting genomic diversity, one cell at a time
Nature Methods 11, 19-21 (2014) doi:10.1038/nmeth.2783

Johannes Alneberg et al.
Binning metagenomic contigs by coverage and composition
Nature Methods (2014) doi:10.1038/nmeth.3103

Super-resolution microscopy at Nature Methods

On this 10th anniversary of the first issue of Nature Methods, it is appropriate to look back at the relationship between the journal and super-resolution microscopy, one of the technologies we have chosen among the top ten methods developments of the decade since Nature Methods published its first issue.

Super-resolution microscopy first appeared in Nature Methods with the online publication of two papers on August 9, 2006. One demonstrated the first super-resolution microscopy image using a genetically encoded fluorescent probe (Willig et al., 2006) and the other was the first publication describing stochastic optical reconstruction microscopy (STORM; Rust et al., 2006), the class of methods now often referred to as single-molecule localization microscopy (SMLM). Initially, the papers were mostly overshadowed by the media storm accompanying the publication one day later of photoactivated localization microscopy (PALM) by Betzig et al. in Science, but the STORM and PALM papers together were instrumental in driving wider development of the nascent super-resolution microscopy field, which had previously been confined to a small number of highly specialized groups.

A visualization of SRM papers published in Nature Methods over the years.{credit}D. Evanko{/credit}

Nature Methods has now published 64 articles on super-resolution microscopy: 49 full original research articles, 11 Correspondences and 4 Review and Commentary articles. The accompanying illustration conveys the wide range of topics covered and their historical progression. Super-resolution microscopy was also our choice of Method of the Year in 2008.

The first three years of super-resolution microscopy (SRM) publications in Nature Methods were dominated by advances in localization-based SRM and early attempts at live-cell SRM. Betzig and colleagues defined important considerations for performing live-cell PALM (Shroff et al., 2008), and PALM was adapted as a massively parallel single-particle tracking technique called sptPALM (Manley et al., 2008). Another early paper demonstrated the use of dual-plane imaging for 3D SRM several microns deep into a sample (Juette et al., 2008), but to this day SRM is still dominated by 2D imaging.

It was clear that the probes used for localization-based SRM were critical to the performance of these techniques. In early 2009 we published the new fluorescent proteins PA-mCherry (Subach et al., 2009) and mEos2 (McKinney et al., 2009), from the Verkhusha and Looger labs respectively. The performance characteristics of mEos2, together with the Looger lab’s very open reagent-sharing habits, helped make this protein dominant in much fluorescent protein-based SRM.

In late 2009 we began to address the prior lack of attention to the analysis methods used in localization-based SRM by publishing two papers (Mortensen et al., 2009 and Smith et al., 2009) focused on maximum-likelihood algorithms for precisely estimating the centers of fluorophore image spots, a fundamental underpinning of the whole class of localization-based SRM methods. At this time we also started publishing Correspondences describing user-friendly software for performing the early localization analysis steps: first LivePALM (Hedde et al., 2009) and QuickPALM (Henriques et al., 2010), and in later years DAOSTORM (Holden et al., 2011) and RapidSTORM (Wolter et al., 2012). During this period researchers also scoured other fields for algorithms new to image analysis and imported the powerful compressed sensing analysis method (Zhu et al., 2012), developed novel localization methods such as radial symmetry (Parthasarathy, 2012), and characterized and corrected the noise attributes of sCMOS cameras so that they could challenge EMCCDs as the camera of choice for localization-based SRM (Huang et al., 2013). A particularly interesting development was the use of Bayesian analysis for image generation that did not require explicit fluorophore localization and could work with the intrinsic blinking and bleaching of high-density GFP-labeled live samples (Cox et al., 2011).
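
The core estimation task can be illustrated with a toy simulation: draw a spot from a Gaussian point-spread function with Poisson shot noise, then estimate its center. A simple center-of-mass estimator is shown here for brevity; the published analyses concern full maximum-likelihood fits, which approach the theoretical precision limit. All numbers below (PSF width, photon count, window size) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spot(x0, y0, sigma=1.2, photons=2000, size=11):
    """A fluorophore image: Gaussian PSF sampled with Poisson shot noise."""
    y, x = np.mgrid[0:size, 0:size]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()
    return rng.poisson(photons * psf)

def center_of_mass(img):
    """Estimate the spot center as the intensity-weighted mean position."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

img = simulate_spot(5.3, 4.8)
xc, yc = center_of_mass(img)
print(round(xc, 1), round(yc, 1))  # close to the true center (5.3, 4.8)
```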

We soon determined that the later analysis steps required for interpretation of the underlying biology were most ripe for, and in need of, further development. Improper analysis could easily lead to artifacts, particularly when trying to use localization-based SRM to examine protein clustering (Annibale et al., 2011). Notable early work in this area was the use of pair correlation analysis to examine protein organization in the plasma membrane (Sengupta et al., 2011). An ongoing issue in analyzing localization-based SRM images has been determining the resolution of the resulting image, a far less straightforward task than one might expect. Adoption and development of Fourier ring correlation from electron microscopy provided a compelling solution to this challenge (Nieuwenhuizen et al., 2013), but more work remains to be done before researchers can be confident of reliably measuring the resolution of their images.
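
The idea behind Fourier ring correlation is easy to sketch: two independent images of the same structure are correlated ring by ring in frequency space, and the resolution is read off where the curve drops below a chosen threshold. The minimal sketch below (simulated smooth signal plus noise; the threshold step is omitted) is not the full SRM procedure, which builds the two images from split localization sets and treats spurious correlations carefully:

```python
import numpy as np

def frc(img1, img2):
    """Fourier ring correlation between two square images of equal size."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    size = img1.shape[0]
    y, x = np.mgrid[0:size, 0:size] - size // 2
    r = np.hypot(x, y).astype(int)  # integer frequency-ring index per pixel
    curve = []
    for ring in range(size // 2):
        mask = r == ring
        num = np.real((f1[mask] * np.conj(f2[mask])).sum())
        den = np.sqrt((np.abs(f1[mask]) ** 2).sum() * (np.abs(f2[mask]) ** 2).sum())
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)

rng = np.random.default_rng(1)
n = 64
y, x = np.mgrid[0:n, 0:n]
signal = 100 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))
a = signal + rng.normal(size=(n, n))  # two independent noisy images
b = signal + rng.normal(size=(n, n))
curve = frc(a, b)
# low frequencies correlate strongly; the highest ring is noise-dominated
print(curve[1] > 0.9, abs(curve[-1]) < 0.5)
```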

Although manuscripts with a focus on analysis methods made up the majority of articles published in Nature Methods over the past 10 years, there were also continuous developments in imaging technology. STED microscopy was improved through the use of continuous-wave lasers (Willig et al., 2007) and time gating (Vicidomini et al., 2011). A STED configuration that created a spherical scanning spot was used to image the 3D structure of a single mitochondrion (Schmidt et al., 2008). There was also further development of the optical methods used for localization-based SRM. Temporal focusing of two-photon irradiation allowed confined photoactivation in whole cells, thus limiting photobleaching outside the imaging area (York et al., 2011). Confined photoactivation and imaging was also accomplished using dual orthogonal objectives to combine light-sheet microscopy with localization-based SRM (Cella Zanacchi et al., 2011). Finally, a dual-objective scheme with objectives facing one another, combined with astigmatism, improved the resolution of 3D localization-based SRM (Xu et al., 2012).

In recent years, alternative SRM methods made an appearance. The scanning-based method reversible saturable optical fluorescence transitions (RESOLFT) was massively parallelized and used for imaging whole living cells (Chmyrov et al., 2013). An intriguing recent report combined elements of STED and localization-based SRM in a new imaging modality that discriminates nanoareas of fluorescently labeled rigid proteins using polarization (Hafi et al., 2014).

Improvements in imaging technology are of little use if you can’t label your targets of interest. Labeling methods have therefore been an important component of the SRM papers published in Nature Methods. Trimethoprim labeling (Wombacher et al., 2010) and SNAP-tag labeling (Klein et al., 2011) both allowed direct labeling of proteins in live cells, and this was combined with bright, fast-switching probes to allow fast 3D localization-based SRM in whole living cells at ~25-nm resolution (Jones et al., 2011). Other investigators improved labeling not with chemical tags but with smaller affinity probes such as nanobodies (Ries et al., 2012) or aptamers (Opazo et al., 2012). A particularly intriguing class of labeling methods relies on DNA oligos: barcoding (Lubeck et al., 2012) and sequential labeling (Jungmann et al., 2014; Lubeck et al., 2014) allowed highly multiplexed labeling of target proteins and nucleic acids.

With so many developments and choices for researchers, it is increasingly important for them to have quality data on the relative performance of different techniques and tools. To this end, Nature Methods has been publishing increasing numbers of Analysis articles reporting such performance comparisons, and SRM has been no exception. The performance of a wide selection of chemical fluorophores for localization-based SRM was characterized in a real-life imaging situation (Dempsey et al., 2011), and a recent report characterized the photoactivation efficiency of fluorescent proteins (Durisic et al., 2014).

We hope you enjoyed this brief summary of SRM in Nature Methods. Although we have tried to include much of what Nature Methods has published in this field, the summary is by no means comprehensive. Most significantly, it does not include many of the methods used to double the resolution of fluorescence microscopy. If there is sufficient interest, we will consider extending our summary to include both these and more recent developments as they occur.

Optogenetics in neuroscience at Nature Methods

The optogenetic manipulation of cellular properties has not only revolutionized neuroscience but can also be applied to the manipulation of signaling pathways, transcription or other processes in non-neuronal cells. Here, we highlight some of the papers we have published on the neuroscience side of optogenetics.

Optogenetic tools

2014 has been an exciting year for us with the publication of new optogenetic tools. Klapoetke and Boyden developed Chrimson and Chronos, two channelrhodopsins that they discovered in a screen of algal transcriptomes. Chrimson is more red-shifted than previously known channelrhodopsins while Chronos has faster kinetics. Hochbaum and Cohen described another algal channelrhodopsin called CheRiff, which is highly sensitive to blue light stimulation, making it compatible with red-shifted voltage sensors.

Previously, we published papers describing modifications to optogenetic tools. For example, Prakash and Deisseroth tailored opsins with custom properties. To ensure stoichiometric expression of optogenetic activators and/or inhibitors, Kleinlogel and Bamberg simply and elegantly fused the two proteins into a single chain. Depending on the two partners, this marriage can lead to synergistic or bidirectional effects. Finally, Mattis and Deisseroth undertook a comprehensive characterization of available tools.

Optogenetic applications

Since the initial description of channelrhodopsin-2 (ChR2) as an efficient tool to evoke neural activity in a light-dependent manner, we have seen a flurry of papers applying ChR2 to a variety of questions in neuroscience. For instance, Zhang and Oertner combined this tool with two-photon calcium imaging in rat slices to study synaptic plasticity. Liewald and Gottschalk applied the same methodology to analyze synaptic function in freely moving C. elegans.

ChR2 can also be used to map the function of brain regions, as Ayling and Murphy demonstrated by evoking activity in limb muscles via light stimulation in the motor cortex of ChR2 transgenic mice. Similarly, Guo and Ramanathan mapped neural circuitry in C. elegans by combining ChR2-mediated neural activation with imaging of a genetically encoded calcium sensor in downstream neurons. To facilitate circuit mapping in mice, Zhao and Feng generated mouse lines that express ChR2 in GABAergic, cholinergic, serotonergic or parvalbumin-expressing neurons.

While ChR2 is a very popular tool in optogenetics, other family members can do the job as well. C1V1T is a fusion of two different opsins and is particularly useful when applying two-photon excitation, as shown by Packer and Yuste. ReaChR is activated by red light and is thus especially useful in vivo; Inagaki and Anderson studied courtship behavior in Drosophila with this tool.

Method of the Year

We celebrated the impact of optogenetics by recognizing the technology as our Method of the Year 2010. We marked the occasion with the publication of special Commentaries on the subject. Deisseroth discussed the past, present and future of optogenetics. Hegemann and Möglich deliberated on the exploration of new optogenetic tools. And Peron and Svoboda illuminated us on the precise delivery of optogenetic stimulation. In addition, our News Feature recounted the stories behind the “Light tools”.

If we have sparked your interest, the papers mentioned above are listed below.

We are excited to hear from you about upcoming developments in optogenetics.

Nathan C Klapoetke, Yasunobu Murata, Sung Soo Kim, Stefan R Pulver, Amanda Birdsey-Benson, Yong Ku Cho, Tania K Morimoto, Amy S Chuong, Eric J Carpenter, Zhijian Tian, Jun Wang, Yinlong Xie, Zhixiang Yan, Yong Zhang, Brian Y Chow, Barbara Surek, Michael Melkonian, Vivek Jayaraman, Martha Constantine-Paton, Gane Ka-Shu Wong & Edward S Boyden
Independent optical excitation of distinct neural populations
Nature Methods 11, 338–346 (2014) doi:10.1038/nmeth.2836

Daniel R Hochbaum, Yongxin Zhao, Samouil L Farhi, Nathan Klapoetke, Christopher A Werley, Vikrant Kapoor, Peng Zou, Joel M Kralj, Dougal Maclaurin, Niklas Smedemark-Margulies, Jessica L Saulnier, Gabriella L Boulting, Christoph Straub, Yong Ku Cho, Michael Melkonian, Gane Ka-Shu Wong, D Jed Harrison, Venkatesh N Murthy, Bernardo L Sabatini, Edward S Boyden, Robert E Campbell & Adam E Cohen
All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins
Nature Methods 11, 825–833 (2014) doi:10.1038/nmeth.3000

Rohit Prakash, Ofer Yizhar, Benjamin Grewe, Charu Ramakrishnan, Nancy Wang, Inbal Goshen, Adam M Packer, Darcy S Peterka, Rafael Yuste, Mark J Schnitzer & Karl Deisseroth
Two-photon optogenetic toolbox for fast inhibition, excitation and bistable modulation
Nature Methods 9, 1171–1179 (2012) doi:10.1038/nmeth.2215

Sonja Kleinlogel, Ulrich Terpitz, Barbara Legrum, Deniz Gökbuget, Edward S Boyden, Christian Bamann, Phillip G Wood & Ernst Bamberg
A gene-fusion strategy for stoichiometric and co-localized expression of light-gated membrane proteins
Nature Methods 8, 1083–1088 (2011) doi:10.1038/nmeth.1766

Joanna Mattis, Kay M Tye, Emily A Ferenczi, Charu Ramakrishnan, Daniel J O’Shea, Rohit Prakash, Lisa A Gunaydin, Minsuk Hyun, Lief E Fenno, Viviana Gradinaru, Ofer Yizhar & Karl Deisseroth
Principles for applying optogenetic tools derived from direct comparative analysis of microbial opsins
Nature Methods 9, 159–172 (2012) doi:10.1038/nmeth.1808

Yan-Ping Zhang & Thomas G Oertner
Optical induction of synaptic plasticity using a light-sensitive channel
Nature Methods 4, 139–141 (2007) doi:10.1038/nmeth988

Jana F Liewald, Martin Brauner, Greg J Stephens, Magali Bouhours, Christian Schultheis, Mei Zhen & Alexander Gottschalk
Optogenetic analysis of synaptic function
Nature Methods 5, 895–902 (2008) doi:10.1038/nmeth.1252

Oliver G S Ayling, Thomas C Harrison, Jamie D Boyd, Alexander Goroshkov & Timothy H Murphy
Automated light-based mapping of motor cortex by photoactivation of channelrhodopsin-2 transgenic mice
Nature Methods 6, 219–224 (2009) doi:10.1038/nmeth.1303

Zengcai V Guo, Anne C Hart & Sharad Ramanathan
Optical interrogation of neural circuits in Caenorhabditis elegans
Nature Methods 6, 891–896 (2009) doi:10.1038/nmeth.1397

Shengli Zhao, Jonathan T Ting, Hisham E Atallah, Li Qiu, Jie Tan, Bernd Gloss, George J Augustine, Karl Deisseroth, Minmin Luo, Ann M Graybiel & Guoping Feng
Cell type–specific channelrhodopsin-2 transgenic mice for optogenetic dissection of neural circuitry function
Nature Methods 8, 745-752 (2011) doi:10.1038/nmeth.1668

Adam M Packer, Darcy S Peterka, Jan J Hirtz, Rohit Prakash, Karl Deisseroth & Rafael Yuste
Two-photon optogenetics of dendritic spines and neural circuits
Nature Methods 9, 1202–1205 (2012) doi:10.1038/nmeth.2249

Hidehiko K Inagaki, Yonil Jung, Eric D Hoopfer, Allan M Wong, Neeli Mishra, John Y Lin, Roger Y Tsien & David J Anderson
Optogenetic control of Drosophila using a red-shifted channelrhodopsin reveals experience-dependent influences on courtship
Nature Methods 11, 325–332 (2014) doi:10.1038/nmeth.2765

Karl Deisseroth
Optogenetics
Nature Methods 8, 26–29 (2011) doi:10.1038/nmeth.f.324

Peter Hegemann & Andreas Möglich
Channelrhodopsin engineering and exploration of new optogenetic tools
Nature Methods 8, 39–42 (2011) doi:10.1038/nmeth.f.327

Simon Peron & Karel Svoboda
From cudgel to scalpel: toward precise neural control with optogenetics
Nature Methods 8, 30–34 (2011) doi:10.1038/nmeth.f.325

Monya Baker
Light tools
Nature Methods 8, 19–22 (2011) doi:10.1038/nmeth.f.322

Light sheet imaging in Nature Methods

It was only a few months before Nature Methods was launched in October 2004 that Jan Huisken and Ernst Stelzer had published a paper in Science in which they used light sheet microscopy – what they called selective plane illumination microscopy or SPIM – to image fluorescence within transgenic embryos. Simplistically put, this century-old technique achieves optical sectioning by illuminating a sample through its width with a thin sheet of light. In the last decade, Nature Methods has published a steady stream of papers reporting developments in light-sheet imaging. Here are the highlights.

Our very first light-sheet paper was also from the Stelzer group, reporting the use of deconvolution to improve the resolution of the technique (Verveer et al., 2007). This was rapidly followed by a paper from Hans-Ulrich Dodt, in which samples such as entire insects or brain tissue were rendered transparent with clearing agents to produce spectacular light-sheet ‘ultramicrographs’ (Dodt et al., 2007). The push to higher resolution continued, with a paper from Albert Diaspro reporting 3D super-resolution imaging within thick samples using light sheets (Cella Zanacchi et al., 2011). The Stelzer group, meanwhile, improved the performance of the technique in larger, more strongly scattering samples by combining it with structured illumination (Keller et al., 2010).

Thai Truong, Willy Supatto and Scott Fraser added two-photon excitation to light-sheet imaging, doubling the imaging depth and increasing the imaging speed by an order of magnitude, compared with each approach alone, for samples such as developing embryos (Truong et al., 2011); Supatto recently extended this to imaging in multiple colours (Mahou et al., 2014). Then in 2012, the groups of Philipp Keller and Lars Hufnagel independently reported microscopes that take multiple views of a biological sample simultaneously, allowing rapid imaging of entire developing fly embryos at subcellular resolution (Tomer et al., 2012; Krzic et al., 2012).

Though light-sheet imaging is perhaps at its most powerful for thick samples such as embryos or tissue sections, it has also delivered substantial performance improvements in cellular imaging. In 2011, Eric Betzig’s group used scanned Bessel beams to create thinner light sheets and thus much-improved axial resolution, achieving isotropic 3D resolution and rapid imaging within living cells (Planchon et al., 2011). Tom Vettenburg, Kishan Dholakia and colleagues later showed that generating the light sheet with an Airy beam, rather than a Gaussian or Bessel beam, yields an even larger field of view without sacrificing contrast or resolution (Vettenburg et al., 2014). Variations on the light-sheet theme have also been developed by the labs of Makio Tokunaga and Sunney Xie for single-molecule imaging within cells (Tokunaga et al., 2008; Gebhardt et al., 2013).

In recent years, the excitement around this technology has been palpable, with several papers reporting impressive applications of light-sheet microscopy: it has been used to functionally image the entire fish brain (Ahrens et al, 2013) and the brain of ‘fictively behaving’ fish (Vladimirov et al, 2014), as well as to image the beating fish heart (Mickoleit et al, 2014).

Perhaps not surprisingly, the emphasis in methods development has also been shifting a little. On the one hand, platforms such as OpenSPIM and OpenSpinMicroscopy are making this valuable technique more widely available (Pitrone et al., 2013; Gualda et al., 2013). At the same time, analytical tools are being developed to handle the vast amounts of data that a light-sheet experiment generates. The group of Pavel Tomancak reported Bayesian-based deconvolution methods to analyse the large data sets that result from multiview imaging (Preibisch et al., 2014). Philipp Keller and colleagues described computational methods to segment and track nuclei in data sets from light-sheet or other imaging, enabling fast lineaging of developing embryos (Amat et al., 2014). Misha Ahrens and colleagues reported Thunder, a suite of analytical tools built on a distributed-computing platform, which they used to map brain activity in ‘fictively behaving’ zebrafish (Freeman et al., 2014).

It’s fair to say that this venerable method has been thoroughly revived over the past decade. Light-sheet imaging is poised to yield tremendous biological insight. We hope to keep you updated on future developments in Methagora.

Mass spectrometry-based proteomics at Nature Methods

A look back at highlights in proteomics technology developments published in Nature Methods.

The last decade has seen amazing advances in mass spectrometry-based proteomics technology as well as ever-expanding use of the technology for varied biological applications. Here we take a look back at some proteomics technology development highlights published in Nature Methods over the last 10 years. (A second entry covering biological applications of mass spectrometry-based proteomics is planned for the near future; stay tuned.)

Sample preparation

The first step in a successful proteomics experiment is sample preparation. In 2009, Matthias Mann’s lab published a filter-aided sample preparation (FASP) method that is now widely used by the proteomics community. In 2014, the same lab published an optimized approach that performs all sample-processing steps in a single enclosed tube.

Proteins are digested into peptides for ‘shotgun’ proteomics analysis. While trypsin is the most widely used protease, it comes with known limitations. Albert Heck and colleagues and Neil Kelleher and colleagues described useful alternatives to trypsin.

Proteomics researchers are always striving for higher sensitivity. John Yates’s lab’s DigDeAPr method and Bernhard Kuster’s lab’s use of DMSO to enhance electrospray response allow researchers to do deeper proteomic analysis.

Quantitative methods

Proteomics researchers want to quantify, as well as identify, peptides and proteins. Stable isotope labeling, either through metabolic incorporation or chemical labeling during sample preparation, enables researchers to quantitatively compare multiple samples. Spiking labeled concatenated signature peptides into samples enables absolute quantification, as shown by Robert Beynon and colleagues.
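The arithmetic behind such spike-in quantification is simple and can be sketched in a few lines; the function name and numbers below are illustrative, not taken from any particular paper. A known amount of a stable-isotope-labeled (‘heavy’) standard peptide is spiked into the sample, and the endogenous (‘light’) amount follows from the observed light-to-heavy signal ratio, under the assumption that both forms behave identically in the mass spectrometer.

```python
def absolute_amount(light_intensity, heavy_intensity, spiked_fmol):
    """Estimate the amount of endogenous ('light') peptide, in the same
    units as spiked_fmol, from the observed light/heavy intensity ratio.

    Assumes the heavy standard co-elutes with, ionizes like, and is
    detected identically to the endogenous peptide.
    """
    return (light_intensity / heavy_intensity) * spiked_fmol

# Illustrative numbers: if the endogenous signal is twice that of a
# 50 fmol heavy standard, the sample contains roughly 100 fmol.
print(absolute_amount(2e6, 1e6, 50))  # 100.0
```

In practice each protein of interest gets its own signature peptide(s) and standard, so the same ratio calculation is simply repeated per peptide.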

The SILAC metabolic method has proved to be extremely popular, and we have published applications of SILAC for quantifying proteins and phosphorylation sites in human tissues, and in nematodes (Larance et al. and Fredens et al.).

A limitation of SILAC is that it cannot be used to compare more than three samples at a time. Joshua Coon and colleagues provided a clever way around this with their NeuCode SILAC approach, which in theory could enable up to 39-plex experiments.

Chemical labeling approaches (such as iTRAQ and TMT) currently offer higher multiplexing capability than SILAC, but can suffer from problems of quantitative accuracy. Coon’s lab and Steven Gygi’s lab each provided methods to obtain accurate quantitative data in multiplexed experiments.

Shotgun data analysis

In a typical ‘shotgun’ proteomics (discovery-based) experiment, MS/MS fragmentation spectra are generated for all peptides that can be detected by the mass spectrometer. The proteins are identified by matching these experimental spectra to theoretical or actual MS/MS peptide spectra found in databases. Well-performing tools to do this and methods to control for false discoveries are therefore crucial.

To generate good proteomics data, the mass spectrometer itself must be performing at its best. The HCD method from Stevan Horning, Matthias Mann and colleagues and a decision-tree algorithm from the Coon lab enable researchers to obtain improved MS/MS data for protein identification.

We have published tools for peptide identification (Percolator, SpectraST and MS-Cluster) and for quantitative data analysis (Census). Lennart Martens’ group showed that combining various data-processing workflows leads to greater proteome coverage. Proteogenomics-type approaches, which use custom databases generated from genomic data, are becoming popular because they allow identification of novel peptides not found in standard protein databases (see Evans et al. and Branca et al.).

Researchers must be careful to not overinterpret their proteomics data. Gygi’s lab wrote a useful Perspective on the target-decoy approach for determining false discovery rate, a metric that has become broadly adopted by the field.
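The principle of target-decoy estimation is simple enough to sketch in code. The sketch below is illustrative rather than drawn from the Perspective: it assumes peptide-spectrum matches have been scored against a combined database of real (‘target’) and reversed or shuffled (‘decoy’) sequences, and estimates the FDR at a score threshold as the ratio of decoy to target hits above that threshold, since any decoy hit must be a false match.

```python
def fdr_at_threshold(psms, threshold):
    """psms: list of (score, is_decoy) tuples, one per peptide-spectrum match.
    Returns the estimated FDR among target matches scoring >= threshold."""
    targets = sum(1 for score, is_decoy in psms
                  if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms
                 if score >= threshold and is_decoy)
    if targets == 0:
        return 0.0
    # Each decoy hit is assumed to stand in for one false target hit.
    return decoys / targets

def threshold_for_fdr(psms, max_fdr=0.01):
    """Find the lowest score threshold whose estimated FDR is <= max_fdr."""
    best = None
    for t in sorted({score for score, _ in psms}, reverse=True):
        if fdr_at_threshold(psms, t) <= max_fdr:
            best = t  # keep lowering the threshold while the FDR holds
        else:
            break
    return best
```

Real implementations refine this in various ways (e.g. separate or concatenated decoy searches, q-value calculation), but the decoy-to-target ratio above is the core idea.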

In order to keep tools sharp and highlight areas for development, it is important to systematically put them to the test. In 2005, Gygi’s lab performed a comparison of three platforms. In 2009, a large group of researchers tested their ability to identify proteins in a small test sample. This analysis highlighted common problems that occur especially during data analysis in proteomics investigations.

Targeted proteomics

Targeted proteomics, which we chose as our Method of the Year in 2012, offers a fundamentally different way of analyzing data compared to discovery-based proteomics. Targeted approaches, most commonly selected reaction monitoring (SRM), utilize mass spectrometry assays to identify and quantify peptides selected to represent proteins of interest, akin to Western blotting, but in a multiplexed fashion.

These SRM assays can be laborious to generate, however. Methods for high-throughput SRM assay generation are therefore important (see Picotti et al., Stergachis et al. and Kennedy et al.). In 2008, Ruedi Aebersold’s group set up a database of assays for the yeast proteome, called SRMAtlas, which has since grown to include assays for M. tuberculosis and humans. Amanda Paulovich and colleagues just this year presented the CPTAC Assay Portal, a new repository of analytically validated targeted proteomics assays.

Statistical validation is just as important in targeted proteomics as it is in discovery-based proteomics. Aebersold’s lab developed the mProphet tool and also provided a useful guide to SRM in their 2012 Review.

Biological applications of targeted proteomics are growing. Bart Deplancke and colleagues showed that transcription factors could be followed during cellular differentiation using SRM. Olga Vitek’s group showed that targeted proteins could be quantified using sparse reference labeling. In this current 10th Anniversary issue, Claus Jørgensen’s group reports a quantitative method for monitoring human kinases, and Paola Picotti’s lab describes a panel of assays to quantify ‘sentinel’ proteins reporting on 188 different yeast processes.

Data-independent analysis

Our very first issue in October 2004 featured an interesting paper from Yates and colleagues describing a data-independent mass spectrometry scanning approach for acquiring MS/MS spectra. In contrast to the common data-dependent approach, where the most prominent peptide ions are selected for MS/MS, the data-independent approach can enable more reproducible results as it overcomes issues of peptide ion sampling stochasticity. It took nearly a decade for this clever idea to really catch on, but within the last year or so, we have published practical data-independent analysis implementations from Michael MacCoss’s and Stefan Tenzer’s labs.

Anne-Claude Gingras and Stephen Tate and colleagues, along with Aebersold and colleagues, showed how a quantitative targeted data-independent analysis method called SWATH provides advantages for analyzing protein interactomes by affinity purification-mass spectrometry.

We look forward to many more strong advances in mass spectrometry-based proteomics in the decade to come!

Synthetic Biology at Nature Methods

Since its launch, Nature Methods has published many papers that have influenced the synthetic biology community. As a supplement to our May Focus on Synthetic Biology, we take a nostalgic trip through the highlights of our papers across different aspects of the field.

Cloning
In 2007, Stephen Elledge and Mamie Li developed SLIC (sequence- and ligation-independent cloning), a strategy that uses homologous recombination to assemble many DNA fragments in vitro in a single reaction. Later the same year, Mitsuhiro Itaya and colleagues also used homologous recombination in their bottom-up assembly to unite larger DNA pieces into genomes of ~140 kb.

In 2009, Daniel Gibson and colleagues presented their one-pot enzymatic reaction that successfully assembled genomes hundreds of kilobases in size; it has since been dubbed ‘Gibson Assembly’. The method found fame on YouTube when the 2010 Cambridge iGEM team created a music video showing how Gibson Assembly saves frustrated scientists:

Gene and genome synthesis
In 2007, to improve error-free DNA synthesis, Duhee Bang and George Church developed circular assembly amplification, which eliminates error-containing oligonucleotides from the assembly. A few years later, Jay Shendure and colleagues introduced their dial-out PCR to retrieve desired DNA molecules from a library for gene assembly.

For an in-depth review of DNA synthesis, error correction and gene assembly, see Sriram Kosuri and George Church’s review in our Focus issue.

In 2010, on the heels of their breakthrough with Mycoplasma mycoides JCVI-syn1.0, the first chemically synthesized bacterial genome (Gibson et al., Science 329, 2010), Gibson et al. published the chemical synthesis of the mouse mitochondrial genome in our pages. They adapted Gibson Assembly to begin at the oligonucleotide level, rapidly making larger fragments that were then combined into the desired genome, exclusively in vitro. Once synthesized, a bacterial genome may need to be further modified, but doing so in an organism other than E. coli proved challenging. In 2013, Bogumil Karas et al. showed that whole genomes, as large as 1.8 megabases, can be directly transferred from bacteria to yeast, where genetic manipulation is routine.

In our current Focus issue, Gibson reviews the state of the art in genome assembly techniques, compares strategies and discusses what the future may hold.

Genome modification
To quickly generate large libraries of promoters in targeted regions of a bacterial chromosome, George Church and colleagues presented coselection MAGE (multiplex automated genome engineering) in 2012. The increasingly popular CRISPR system can also rapidly edit genomes with few off-target effects when Cas9 is used as a nickase, as William Skarnes and colleagues showed earlier this year.

Gene activation can be tuned using CRISPR-Cas9-based transcription factors, as Charles Gersbach and colleagues demonstrated in 2013.

Circuit design
To ease the construction of complex circuits, Adam Arkin and colleagues adapted known translational regulators to control transcriptional elongation in 2012. A bit later the same year, Jim Collins and colleagues showed that an iterative plug-and-play method can make use of a large repository of genetic components when designing circuits. This year, Jeff Tabor and colleagues showed how gene circuit dynamics can be controlled with light. On April 28, Douglas Densmore and his team introduced Raven, software that calculates assembly plans for complex circuits.

Parts characterization
To succeed in any of the above applications, one needs reliable and well-characterized parts. Last year, Drew Endy, Adam Arkin and colleagues presented a method to quantify the performance of genetic elements, and in a companion paper they introduced a library of standardized transcription and translation initiation elements available through the BIOFAB.

Towards the end of 2013, Christopher Voigt and colleagues expanded the designer’s toolbox with over 500 well-characterized transcriptional terminators. Robert Landick discussed how these ‘better stop signs’, as he termed them, provide insight into the mechanism of termination.

UPDATE: There is now a joint special on Synthetic Biology at nature.com/synbio with articles from Nature, Nature Reviews Microbiology and Nature Methods.

Enjoy reading. The papers mentioned above are listed below in chronological order.
