Super-resolution microscopy at Nature Methods

On this 10th anniversary of the first issue of Nature Methods, it is appropriate to look back at the relationship between the journal and super-resolution microscopy, one of the technologies we have chosen among the top ten methods developments of the journal's first decade.

Super-resolution microscopy first appeared in Nature Methods with the online publication of two papers on August 9, 2006. One demonstrated the first super-resolution microscopy image using a genetically encoded fluorescent probe (Willig et al., 2006), and the other was the first publication describing stochastic optical reconstruction microscopy (STORM; Rust et al., 2006), the class of methods now often referred to as single-molecule localization microscopy (SMLM). Initially, the papers were mostly overshadowed by the media storm accompanying the publication one day later of photoactivated localization microscopy (PALM) by Betzig et al. in Science, but the STORM and PALM papers together were instrumental in driving wider development of the nascent super-resolution microscopy field, which had previously been confined to a small number of highly specialized groups.

A visualization of SRM papers published in Nature Methods over the years. Credit: D. Evanko

Nature Methods has now published 64 articles on super-resolution microscopy: 49 full original research articles, 11 Correspondences and 4 Review and Commentary articles. The accompanying illustration conveys the wide range of topics covered and their historical progression. Super-resolution microscopy was also our choice for Method of the Year in 2008.

The first three years of super-resolution microscopy (SRM) publications in Nature Methods were dominated by advances in localization-based SRM and early attempts at live-cell SRM. Betzig and colleagues defined important considerations for performing live-cell PALM (Shroff et al., 2008), and PALM was adapted as a massively parallel single-particle tracking technique called sptPALM (Manley et al., 2008). Another early paper demonstrated the use of dual-plane imaging for 3D SRM several microns deep into a sample (Juette et al., 2008), but to this day SRM is still dominated by 2D imaging.

It was clear that the probes used for localization-based SRM were critical to the performance of these techniques. In early 2009 we published the new fluorescent proteins PAmCherry (Subach et al., 2009) and mEos2 (McKinney et al., 2009), from the Verkhusha and Looger labs, respectively. The performance characteristics of mEos2 and the Looger lab's very open reagent-sharing habits helped this protein come to dominate much fluorescent protein-based SRM.

In late 2009 we began to address the prior lack of sufficient attention to the analysis methods used in localization-based SRM by publishing two papers (Mortensen et al., 2009, and Smith et al., 2009) focused on maximum likelihood algorithms for precisely estimating the centers of fluorophore image spots, a fundamental underpinning of the whole class of localization-based SRM methods. At this time we also started publishing Correspondences describing user-friendly software for performing the early localization analysis steps: first LivePALM (Hedde et al., 2009) and QuickPALM (Henriques et al., 2010), and in later years DAOSTORM (Holden et al., 2011) and RapidSTORM (Wolter et al., 2012). During this period researchers also scoured other fields for algorithms new to image analysis and imported the powerful compressed sensing analysis method (Zhu et al., 2012), developed novel localization methods such as radial symmetry (Parthasarathy, 2012), and characterized and corrected the noise attributes of sCMOS cameras so that they could challenge EMCCDs as the camera of choice for localization-based SRM (Huang et al., 2013). A particularly interesting development was the use of Bayesian analysis for image generation that did not require explicit fluorophore localization and could work with the intrinsic blinking and bleaching of high-density GFP-labeled live samples (Cox et al., 2011).
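
The core idea behind these fitting approaches is compact enough to sketch. The following minimal Python illustration is not the published implementation of Mortensen et al. or Smith et al.; it simply fits a symmetric 2D Gaussian (a common approximation of the point spread function) to a small camera region by maximizing the Poisson log-likelihood, the appropriate noise model for photon counting. All function names and parameter defaults here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_spot(params, yy, xx):
    """Expected photon count per pixel: symmetric 2D Gaussian plus background."""
    x0, y0, amplitude, sigma, background = params
    return background + amplitude * np.exp(
        -((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def localize_mle(roi, pixel_size_nm=100.0):
    """Estimate a fluorophore's center from a small camera ROI by maximizing
    the Poisson log-likelihood of the Gaussian spot model. Returns (x, y) in nm."""
    yy, xx = np.indices(roi.shape)
    total = roi.sum()
    # Centroid and peak height give crude but serviceable starting guesses.
    p0 = [(roi * xx).sum() / total, (roi * yy).sum() / total,
          roi.max() - roi.min(), 1.3, max(float(roi.min()), 1e-3)]

    def neg_log_likelihood(params):
        mu = np.clip(gaussian_spot(params, yy, xx), 1e-9, None)  # keep log finite
        return np.sum(mu - roi * np.log(mu))  # Poisson NLL up to a constant

    fit = minimize(neg_log_likelihood, p0, method="Nelder-Mead")
    return fit.x[0] * pixel_size_nm, fit.x[1] * pixel_size_nm
```

Published implementations go further, treating camera gain, readout noise and background estimation carefully and reporting the localization precision achieved alongside each position estimate.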

We soon determined that the later analysis steps required for interpretation of the underlying biology were most ripe for, and in need of, further development. Improper analysis can easily lead to artifacts, particularly when localization-based SRM is used to examine protein clustering (Annibale et al., 2011). Notable early work in this area was the use of pair correlation analysis to examine protein organization in the plasma membrane (Sengupta et al., 2011). An ongoing issue in analyzing localization-based SRM images has been determining the resolution of the resulting image, a far less straightforward task than one might expect. Adoption and development of Fourier ring correlation (FRC) from electron microscopy provided a compelling solution to this challenge (Nieuwenhuizen et al., 2013), but more work remains to be done before researchers can be confident of reliably measuring the resolution of their images.
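
The idea behind FRC is simple enough to sketch. The minimal Python illustration below is an assumption-laden sketch rather than the implementation of Nieuwenhuizen et al.: it takes two images reconstructed from independent halves of the localization data and correlates them ring by ring in Fourier space; the spatial frequency at which the curve drops below a fixed threshold (1/7 is a commonly used choice) is taken as the resolution.

```python
import numpy as np

def fourier_ring_correlation(img1, img2):
    """FRC between two same-sized square images reconstructed from independent
    halves of a localization data set. Returns spatial frequencies
    (cycles/pixel) and the FRC curve."""
    assert img1.shape == img2.shape and img1.shape[0] == img1.shape[1]
    n = img1.shape[0]
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))

    # Integer radial frequency index of each Fourier pixel, from the center.
    yy, xx = np.indices((n, n)) - n // 2
    radius = np.sqrt(xx ** 2 + yy ** 2).astype(int)

    n_rings = n // 2
    numerator = np.zeros(n_rings)
    power1 = np.zeros(n_rings)
    power2 = np.zeros(n_rings)
    for r in range(n_rings):
        ring = radius == r
        numerator[r] = np.real(np.sum(f1[ring] * np.conj(f2[ring])))
        power1[r] = np.sum(np.abs(f1[ring]) ** 2)
        power2[r] = np.sum(np.abs(f2[ring]) ** 2)

    frc = numerator / np.maximum(np.sqrt(power1 * power2), 1e-12)
    freqs = np.arange(n_rings) / n  # cycles per pixel
    return freqs, frc
```

The image resolution in physical units is then the inverse of the threshold-crossing frequency, converted through the pixel size.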

Although manuscripts focused on analysis methods made up the majority of the SRM articles published in Nature Methods over the past 10 years, there were also continuous developments in imaging technology. STED microscopy was improved through the use of continuous-wave lasers (Willig et al., 2007) and time gating (Vicidomini et al., 2011). A STED configuration that created a spherical scanning spot was used to image the 3D structure of a single mitochondrion (Schmidt et al., 2008). There was also further development of the optical methods used for localization-based SRM. Temporal focusing of two-photon irradiation allowed confined photoactivation in whole cells, thus limiting photobleaching outside the imaging area (York et al., 2011). Confined photoactivation and imaging were also accomplished using dual orthogonal objectives to combine light-sheet microscopy with localization-based SRM (Cella Zanacchi et al., 2011). Finally, a dual-objective scheme with objectives facing one another, combined with astigmatism, improved the resolution of 3D localization-based SRM (Xu et al., 2012).

In recent years, alternative SRM methods have made an appearance. The scanning-based method reversible saturable optical fluorescence transitions (RESOLFT) was massively parallelized and used for imaging whole living cells (Chmyrov et al., 2013). An intriguing recent report combined elements of STED and localization-based SRM in a new imaging modality that discriminates nanoareas of fluorescently labeled rigid proteins using polarization (Hafi et al., 2014).

Improvements in imaging technology are of little use if you cannot label your targets of interest. Labeling methods have therefore been an important component of the SRM papers published in Nature Methods. Trimethoprim labeling (Wombacher et al., 2010) and SNAP-tag labeling (Klein et al., 2011) both allowed direct labeling of proteins in live cells, and this was combined with bright, fast-switching probes to allow fast 3D localization-based SRM in whole living cells at ~25 nm resolution (Jones et al., 2011). Other investigators improved labeling not with chemical tags but with smaller affinity probes such as nanobodies (Ries et al., 2012) or aptamers (Opazo et al., 2012). A particularly intriguing class of labeling methods relies on DNA oligos: barcoding (Lubeck et al., 2012) and sequential labeling (Jungmann et al., 2014; Lubeck et al., 2014) allowed highly multiplexed labeling of target proteins and nucleic acids.

With so many developments to choose from, it is increasingly important for researchers to have quality data on the relative performance of different techniques and tools. To this end, Nature Methods has been publishing increasing numbers of Analysis articles reporting such performance comparisons, and SRM has been no exception. The performance of a wide selection of chemical fluorophores for localization-based SRM was characterized in a realistic imaging situation (Dempsey et al., 2011), and a recent report characterized the photoactivation efficiency of fluorescent proteins (Durisic et al., 2014).

We hope you enjoyed this brief summary of SRM in Nature Methods. Although we have tried to include much of what Nature Methods has published in this field, the summary is by no means comprehensive. Most significantly, it does not include many of the methods used to double the resolution of fluorescence microscopy. If there is sufficient interest, we will consider extending our summary to include both these and more recent developments as they occur.

Light sheet imaging in Nature Methods

It was only a few months before Nature Methods was launched in October 2004 that Jan Huisken and Ernst Stelzer published a paper in Science in which they used light-sheet microscopy (what they called selective plane illumination microscopy, or SPIM) to image fluorescence within transgenic embryos. Simply put, this century-old technique achieves optical sectioning by illuminating a sample through its width with a thin sheet of light. In the last decade, Nature Methods has published a steady stream of papers reporting developments in light-sheet imaging. Here are the highlights.

Our very first light-sheet paper was also from the Stelzer group, reporting the use of deconvolution to improve the resolution of the technique (Verveer et al., 2007). This was rapidly followed by a paper from Hans-Ulrich Dodt, in which samples such as entire insects or brain tissue were rendered transparent with clearing agents to produce spectacular light-sheet 'ultramicrographs' (Dodt et al., 2007). The push to higher resolution continued with a paper from Albert Diaspro reporting 3D super-resolution imaging within thick samples using light sheets (Cella Zanacchi et al., 2011). The Stelzer group, meanwhile, improved performance of the technique in larger samples that scatter more light by combining it with structured illumination (Keller et al., 2010).

Thai Truong, Willy Supatto and Scott Fraser added two-photon excitation to light-sheet imaging, thereby doubling the imaging depth and increasing by an order of magnitude the speed at which samples such as developing embryos could be imaged, compared with each approach alone (Truong et al., 2011); Supatto recently extended this to imaging in multiple colours (Mahou et al., 2014). And then in 2012, the groups of Philipp Keller and Lars Hufnagel independently reported microscopes that could take multiple views of a biological sample simultaneously, allowing rapid imaging of entire developing fly embryos at subcellular resolution (Tomer et al., 2012; Krzic et al., 2012).

Though light-sheet imaging is perhaps at its most powerful in the imaging of thick samples like embryos or tissue sections, it has been used for substantial performance improvements in cellular imaging as well. In 2011, Eric Betzig's group used scanned Bessel beams to create thinner light sheets and thus much improved axial resolution, achieving isotropic 3D resolution and rapid imaging within living cells (Planchon et al., 2011). Note also that, as Tom Vettenburg, Kishan Dholakia and colleagues showed, generating the light sheet with an Airy beam, rather than a Gaussian or Bessel beam, yields an even larger field of view without sacrificing contrast and resolution (Vettenburg et al., 2014). Variations on the light-sheet theme have also been developed by the labs of Makio Tokunaga and Sunney Xie for single-molecule imaging within cells (Tokunaga et al., 2008; Gebhardt et al., 2013).

In recent years, the excitement around this technology has been palpable, with several papers reporting impressive applications of light-sheet microscopy: it has been used to functionally image the entire fish brain (Ahrens et al., 2013) and the brain of 'fictively behaving' fish (Vladimirov et al., 2014), as well as to image the beating fish heart (Mickoleit et al., 2014).

Perhaps not surprisingly, the emphasis in methods development has also been shifting a little. On the one hand, platforms are being developed to make this valuable technique more widely available, for instance via the OpenSPIM or OpenSpinMicroscopy platforms (Pitrone et al., 2013; Gualda et al., 2013). At the same time, analytical tools are necessarily being developed to handle the vast amounts of data that a light-sheet experiment generates. The group of Pavel Tomancak reported Bayesian deconvolution methods to analyse the large data sets that result from multiview imaging (Preibisch et al., 2014). Philipp Keller and colleagues described computational methods to segment and track nuclei in data sets from light-sheet or other imaging, for fast lineaging of developing embryos (Amat et al., 2014). Misha Ahrens and colleagues reported Thunder, a suite of analytical tools built on a platform for distributed computing, enabling the mapping of brain activity in 'fictively behaving' zebrafish (Freeman et al., 2014).

It’s fair to say that this venerable method has been thoroughly revived over the past decade. Light-sheet imaging is poised to yield tremendous biological insight. We hope to keep you updated on future developments in Methagora.

Is phototoxicity compromising experimental results?

Light-induced damage to biological samples during fluorescence imaging is known to occur but receives too little attention from researchers.

The December Technology Feature in Nature Methods asks whether super-resolution microscopy is right for you. A point that came up repeatedly from the researchers we interviewed is the danger of phototoxicity and photodamage caused by the high irradiation intensities these methods require. This has long been a concern with these methods, and many of the papers describing them mention it.

But as discussed in the December Editorial, even fluorescence microscopy with low irradiation intensities can cause dangerous levels of phototoxicity that permanently damage the sample. Microscopists are aware of these concerns, but there has been little effort to implement processes intended to reduce the likelihood of phototoxicity compromising research results. Dave Piston, Director of the Biophotonics Institute at Vanderbilt University School of Medicine, laments that although phototoxicity is a big deal, he has gotten zero traction with NIH reviewers when trying to establish some rules for it.

There are some good resources available to researchers that highlight the dangers of phototoxicity and provide advice on how to limit it. Methods in Cell Biology Vol. 114 has an excellent chapter by Magidson and Khodjakov, "Circumventing photodamage in live-cell microscopy", that should be mandatory reading for all researchers using fluorescence microscopy for biological research. Also, Nikon's MicroscopyU has a literature list with several dozen references and recommended reading on phototoxicity. It could use some updating but is still useful.

Despite the amount of microscopy literature that discusses phototoxicity, discussion of the phenomenon in research articles published in Nature journals is conspicuously absent. This is highlighted by a simple full-text search we performed on the HTML versions of articles published in Nature, Nature Cell Biology, Nature Immunology, Nature Methods and Nature Neuroscience, limited to original research articles.

The table below lists the number of occurrences of each of the listed words in the period from January 1, 2005 to November 3, 2013 in each of the indicated journals. The percentages represent the number fraction of articles containing ‘phototoxicity’ relative to the numbers of articles containing each of the microscopy- or fluorescence-related terms. Note that this is NOT a measure of co-occurrence, only a measure of how common the term ‘phototoxicity’ is relative to the other terms.

Journal                 phototoxicity   fluorescence      fluorescent       microscopy        microscope
                              #           #       %         #       %         #       %         #       %
Nature                        8         2120    0.4%      1925    0.4%      1995    0.4%      1918    0.4%
Nature Cell Biology           8          815    1.0%       728    1.1%       866    0.9%       822    1.0%
Nature Immunology             6          552    1.1%       574    1.0%       408    1.5%       326    1.8%
Nature Methods               27          565    4.8%       494    5.5%       441    6.1%       407    6.6%
Nature Neuroscience          18          639    2.8%       727    2.5%       587    3.1%       736    2.4%
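
To make the computation concrete: the percentages are plain ratios of article counts, and the short Python sketch below (with numbers hard-coded from the Nature Methods row of the table above) reproduces them.

```python
# Article counts from the Nature Methods row of the first table above.
n_phototox = 27
term_counts = {"fluorescence": 565, "fluorescent": 494,
               "microscopy": 441, "microscope": 407}

for term, n_articles in term_counts.items():
    # Ratio of article counts, NOT co-occurrence within the same articles.
    print(f"'phototoxicity' vs '{term}': {100 * n_phototox / n_articles:.1f}%")
```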


The same analysis was repeated with the term ‘photodamage’ to determine if there was a substantial difference in the usage of these two similar terms.

Journal                 photodamage     fluorescence      fluorescent       microscopy        microscope
                              #           #       %         #       %         #       %         #       %
Nature                       18         2120    0.8%      1925    0.9%      1995    0.9%      1918    0.9%
Nature Cell Biology           6          815    0.7%       728    0.8%       866    0.7%       822    0.7%
Nature Immunology             2          552    0.4%       574    0.3%       408    0.5%       326    0.6%
Nature Methods               29          565    5.1%       494    5.9%       441    6.6%       407    7.1%
Nature Neuroscience          12          639    1.9%       727    1.7%       587    2.0%       736    1.6%


These results carry the potentially large caveat that the analysis did not include the text of the supplementary information, but the rarity with which phototoxicity or photodamage is discussed (0.3% to 7.1% relative to microscopy terms) suggests that researchers do not sufficiently appreciate the importance of paying attention to artifacts that result from light irradiation. Fortunately, there are exceptions to this state of affairs.

An excellent example of testing for phototoxicity and the subtle effects it can induce can be found in a manuscript from Jeff Magee’s lab at Janelia Farm Research Campus published last year in Nature. Quoting from the manuscript, “Particular care was taken to limit photodamage during imaging and uncaging. This included the use of a passive 8× pulse splitter in the uncaging path in most experiments to reduce photodamage drastically [Ji, N. et al. Nat. Methods (2008)]. Basal fluorescence of both channels was continuously monitored as an immediate indicator of damage to cellular structures. Subtle signs of damage included decreases in or loss of phasic Ca2+ signals in spine heads in response to either uncaging or current injection, small but persistent depolarization following uncaging, and changes in the kinetics of voltage responses to uncaging or current injection. Experiments were terminated if neurons exhibited any of these phenomena.”

It is easy to see how these changes in Ca2+ responses could have been interpreted as real biological effects of the uncaged glutamate rather than of the uncaging light itself.

It is unrealistic to expect that any mandates or oversight would be able to prevent or detect such consequences of phototoxicity in research studies. It is essential that investigators themselves be vigilant and implement appropriate controls to detect these effects. Na Ji, also at Janelia Farm Research Campus, says: “It is not enough to only look for instant and dramatic signs of phototoxicity. Sometimes the effects may be more subtle and even unperceivable during the imaging period, but may become obvious when the same sample is imaged the next day. Care has to be taken in data collection and interpretation, especially when the biological process under investigation itself is a subtle one.”

Finally, the application is just as important as the imaging method being used. For example, light-sheet microscopy is excellent at reducing irradiation levels in volumetric imaging. But some applications of super-resolution microscopy, even on living samples, might be less susceptible to phototoxicity artifacts than sensitive long-term live imaging by light-sheet microscopy. No microscope earns its user a free pass on the dangers of photodamage and phototoxicity. Everyone needs to be vigilant.

Update: A reader helpfully pointed out that the danger of phototoxicity and photodamage also applies to optogenetics, where light (often in the blue region of the spectrum) is used to control protein activity.

Promoting shared hardware design

Now is the time to move open-source hardware development into basic research labs.

Having convinced airport security to let him haul a suspicious-looking briefcase packed with hardware on board, Pete Pitrone, an imaging specialist in the group of Pavel Tomancak, headed for South Africa. His aim was to introduce young students to the parts, many manufactured at his own institute, in the hope that they would assemble them into a sophisticated working microscope. It was a symbolic step to demonstrate the potential for building new tools in laboratories and beyond.

The OpenSPIM microscope-in-a-briefcase.
Credit: Vineeth Surendranath

Manufacturing has gained an appealing image of late. In his State of the Union Address in February, US President Barack Obama announced the creation of three manufacturing hubs modeled after an institute in Youngstown, Ohio. His comments referenced the ability to innovate quickly with additive manufacturing, which relies on digital design and 3D printing: relatively recent developments that have changed the way physical objects and devices are made and helped to open up the design process.

Taking advantage of these developments at the grassroots level is an enthusiastic crop of do-it-yourself ‘builders’ or ‘hackers’ who are promoting a culture of shared design and open innovation, and have spawned a movement towards open-source hardware. Analogous to open-source software, open-source hardware licenses prevent the patenting of hardware designs or physical objects, and require comprehensive and freely accessible design and instructions to allow anyone to build the same device. One working definition and a helpful list of considerations is provided by the Open Source Hardware Association.

The advent of cheap 3D printers such as the RepRap and MakerBot has made it easy, cheap and relatively fast to turn digital designs into objects. 3D printing involves the layered deposition of a heated polymer through a precisely positioned moving extruder. Open-source electronics from the likes of Arduino and Raspberry Pi, which let software control hardware, are also making it easier to manufacture sophisticated devices.

In our July editorial, we argue that basic research shares the values of openness and reproducibility embodied by open-source hardware. Beyond making research tools easier and cheaper to build and replicate, developing devices in an open-source environment can actually speed innovation by encouraging community feedback early in development. This can make the work that goes into extensive documentation and robustness testing worthwhile for the individual research group.

Open-source differs from traditional design in its focus:

  • open-source tools are specifically designed for others to build and modify them
  • open-source tools must include extensive documentation, including parts lists, any related software code (also published as open-source) and design files
  • the focus on reproducibility encourages simple, streamlined design
  • modularity and integration are ultimate goals

Many in the design field have said that this focus actually improves designs and promotes the broadest uptake.

Applied fields like photovoltaics and hardware infrastructure for cloud computing are investing in open-source approaches, but there are currently very few examples of open-source hardware in basic research. OpenPCR publishes the designs for a thermal cycler to conduct PCR, which can be purchased as an inexpensive kit and assembled by hand. Some labs simply use 3D printing to generate teaching models and basic equipment like test tube racks (e.g., the DeRisi lab).


Pitrone teaching South African students how to assemble an OpenSPIM scope.
Credit: P. Tomancak

The July issue of Nature Methods includes two leading examples of academic efforts in this direction, OpenSPIM and OpenSpinMicroscopy, which include detailed designs for light-sheet microscopes that can take 3D movies of living things. Parts for the OpenSPIM scope were hiding in the briefcase en route to an EMBO course organized by Musa Mhlanga along with Freddy Frischknecht and Jost Enninga in Pretoria. A highlight, according to Pavel Tomancak, was watching talented high school students assemble and successfully operate the scope in under two hours. The availability of design details and the focus on making the scope buildable are a model of accessibility with ramifications for teaching and outreach, encouraging many others to play around with the hardware.

Hardware innovation is a critical part of the technological advances that drive science. To carry out experiments, many research laboratories need tools that are simply not available, cost too much or will take too long to develop commercially. An open-source approach can lower the barriers to adopting, disseminating and ultimately improving tools for research.


Whole brain cellular-level activity mapping in a second

It is now possible to map the activity of nearly all the neurons in a vertebrate brain at cellular resolution in just over a second. What does this mean for neuroscience research and projects like the Brain Activity Map proposal?

In an Article that just went live in Nature Methods, Misha Ahrens and Philipp Keller from HHMI's Janelia Farm Research Campus used high-speed light-sheet microscopy to image the activity of 80% of the neurons in the brain of a fish larva at a rate of one whole brain every 1.3 seconds. This represents, to our knowledge, the first technology that achieves whole-brain imaging of a vertebrate at cellular resolution with speeds that approximate neural activity patterns and behavior.

Brain activity imaging of a whole zebrafish brain at single-cell resolution. Click on the image to view the video [20 MB].

Interestingly, the paper comes out at a time when much is being discussed and written about mapping brain activity at the cellular level. This is one of the main proposals of the Brain Activity Map (BAM), a project that is being discussed at the White House and could be NIH's next 'big science' project, spanning 10-15 years. [Just for clarity, the authors of this work are not formally associated with the BAM proposal.]

The details of BAM’s exact goals and a clear roadmap and timeline to achieve them have yet to be presented, but from what its proponents have described in a recent Science paper the main aspiration of the project is to improve our understanding of how whole neuronal circuits work at the cellular level. The project seeks to monitor the activity of whole circuits as well as manipulate them to study their functional role. To reach these goals, first and foremost one must have technology capable of measuring the activity of individual neurons throughout the entire brain in a way that can discriminate individual circuits. The most obvious way to do this is by imaging the activity as it is occurring.

With improvements in the speed and resolution of existing microscopy setups and in the probes for monitoring activity, exhaustive imaging of neuronal function across a small transparent organism was bound to be possible—as this study has now shown.

The study has also made interesting discoveries. The authors saw correlated activity patterns, measured at the cellular level, that spanned large areas of the brain, pointing to the existence of broadly distributed functional circuits. The next steps will be to determine the causal role these circuits play in behavior, something that will require improvements in methods for 3D optogenetics. Obtaining a detailed anatomical map of these circuits will also be key to understanding the brain's organization at its deepest level.

These are some of the types of experiments described in the BAM proposal, and they are clearly within reach in the next 10 years, whether through a centralized initiative or through normal lab competition and peer review. While it is expected that in mice, too, functional circuits will span large brain areas, performing these types of experiments in mice will require more methodological imagination. It will not be possible to place a living mouse brain within the microscope system used by Ahrens and Keller to image the zebrafish brain. The mouse brain is significantly bigger, is largely impenetrable to visible light and is surrounded by a skull. Realistically, we may not see methods that enable whole-brain activity mapping in mammals at the cellular level for quite a while.

But there is much worth learning about brain function in smaller organisms such as the zebrafish and Drosophila, and microscopy systems such as this one will be capable of providing important fundamental insights into brain function that are relevant to our understanding of the human brain.

Whether it will be through BAM or not, the neuroscience community has important challenges to tackle ahead. At Nature Methods, we have been actively involved in supporting technology development in the neurosciences from the very beginning and we look forward with enthusiasm to doing so during this exciting period in neuroscience research.

Update: We just published an Editorial on this topic in our May issue.

Nature journals provide a CC license for community experiments

Nature Methods has long been an advocate of the value of community experiments (or competitions/challenges) to assess and compare the performance of algorithms and software tools. In 2008 we discussed the value of these competitions and advocated that they also be used to assess the performance of less widely used algorithms such as those used for single particle tracking. Such an experiment for assessing single particle tracking was run in 2012, although the results are still awaiting publication.

Publication of such work has often been confined to more specialized journals, but in 2012 Nature Methods started publishing manuscripts emanating from these competitions, beginning with one assessing the performance of gene regulatory network inference methods based on results of one of the DREAM5 challenges.

In recognition of the profound value such challenges provide to the wider scientific community, the Nature journals will now publish manuscripts describing the results of these challenges under a Creative Commons Attribution-NonCommercial-ShareAlike Unported license. This is the same license we use for publishing first genome papers, standards papers and white papers. The first example is an Analysis article published in Nature Methods yesterday describing the results of the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment.

Publication of such community experiments will necessarily be highly selective, and likely increasingly so as such challenges become more prevalent, as illustrated by the explosion in the number of Grand Challenges in Medical Image Analysis. But these community experiments provide invaluable information on the performance of methods that are otherwise difficult to compare objectively. We hope that the potential for publication in a Nature journal and the open access provided by a Creative Commons license will encourage broader participation in these efforts and greater visibility of the results.

Update: February 12
We just published another manuscript describing a community experiment. This Analysis article presents the results of the first FlowCAP challenge that assessed the performance of flow cytometry automated analysis methods.

Our reporting standards for fluorescent proteins – Feedback wanted

Several years ago, based on informal input from various members of the community, Nature Methods established some internal minimum reporting standards for manuscripts describing new or improved fluorescent proteins. These were never formally reported but were often communicated to authors of submitted manuscripts when the characterization data provided didn’t meet these standards.

Recently we were fortunate enough to meet with a substantial number of fluorescent protein developers to informally revise and endorse these standards. Our revised reporting requirements for fluorescent proteins are listed below.

Minimum reporting requirements for fluorescent proteins

  1. Full absorption and excitation (250 nm to 750 nm) and emission (350 nm to 950 nm) curves under single-photon excitation, and at least some data under two-photon excitation
  2. Values for quantum yield, extinction coefficient, brightness and pKa (a worked brightness calculation is sketched after this list)
  3. Gel filtration data showing that the protein is monomeric, or an acknowledgement that it is not
  4. Data on fluorophore maturation time, including the final maturation percentage; a detailed protocol must be provided
  5. Image data on several representative protein fusions showing that the protein does not disrupt the function of its fusion partners. This should include tubulin, which is almost universally used for this purpose
  6. In vitro photostability data compared with other representative proteins. At a minimum this should include decay curves under widefield and confocal illumination to test two different irradiation intensity regimes; ideally, graphs of the decay time constant versus power should be provided
  7. Cytotoxicity measured in mammalian cells by flow cytometry and compared with EGFP and at least one established fluorescent protein in the spectral range of the reported protein
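
As a worked example for requirement 2 above: molecular brightness is conventionally the product of the extinction coefficient and the quantum yield, often quoted relative to EGFP. The sketch below is purely illustrative; the EGFP numbers are commonly cited literature values, and 'newFP' is a hypothetical protein, not any published reagent.

```python
# Illustrative only: EGFP values are commonly cited literature numbers;
# 'newFP' is a made-up protein used to show the comparison.
FLUOROPHORES = {
    "EGFP":  {"ec": 56_000, "qy": 0.60},  # extinction coeff. (M^-1 cm^-1), quantum yield
    "newFP": {"ec": 80_000, "qy": 0.45},  # hypothetical example values
}

def brightness(name):
    """Conventional molecular brightness: EC x QY, here in mM^-1 cm^-1."""
    fp = FLUOROPHORES[name]
    return fp["ec"] * fp["qy"] / 1000

for name in FLUOROPHORES:
    rel = brightness(name) / brightness("EGFP")
    print(f"{name}: {brightness(name):.1f} mM^-1 cm^-1 ({rel:.2f}x EGFP)")
```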

We also used this opportunity to set some standards for photoswitchable fluorescent proteins. These proteins display quite complicated behaviors, and the desired characteristics can vary depending on the application; for example, different characteristics are desired for (f)PALM/STORM than for RESOLFT or SSIM super-resolution imaging. These new reporting standards are listed below and supplement the ones above, which also apply to photoswitchable fluorescent proteins.

Additional minimum reporting requirements for photoswitchable FPs

  1. Graphs of 20 or more switching cycles at different powers to show decay, with full details on power and methods
  2. Absorption spectra before and after photoconversion
  3. Optimal switching parameters at the best-performing power and at least one other power
  4. A measurement of how complete the switching is

We encourage developers and users of fluorescent proteins to comment on these minimal reporting standards. But more than that… we’d like your help in moving forward from here.

Are additional standards needed due to new developments?

Do we need standards similar to these in other areas?

We have found that enforcing common standards on highly related tools can greatly improve the efficiency and objectivity of the peer review process and help avoid holding similar developments to different standards. Of course, this requires flexibility in enforcement, and we will always allow editors some discretion in applying these requirements when there is a legitimate reason for doing so.

Fluorescent Proteins and Sensors Webinar – Questions & Answers

Our very first webinar is now live. The topic is “Fluorescent proteins and sensors: A practical discussion” and you can register to view it at www.nature.com/webcasts/fluorescent_proteins. Update: Registration link inactivated. Please go here to listen to the archived discussion in .mp3 format.

Nature Methods was joined by Robert Campbell, David Piston and Thomas Knopfel, who have been developing and using fluorescent proteins and sensors for years. We had a nice discussion that provided good practical information for users of these tools. If you haven't watched it, I encourage you to do so. If you watch the webcast within the first month it is live, you have the opportunity to submit questions for our participants. Please use the form on the webcast viewing page to submit questions. There will be a delay in providing answers here on our blog while we consult with the participants.

Our participants: Robert Campbell, David Piston and Thomas Knopfel

Here we will be posting the questions we receive and answers from our participants. Readers may also comment directly on the blog below, but we cannot guarantee that questions asked there will be answered. We do encourage anyone in the community to chime in with a response to any question posed, even if they do not agree with our participants.

High-content screening: Our tech feature and the GE image competition

In single-cell experiments, each well in a 384-well plate can spout a fountain of information. Chris Bakal at the Institute of Cancer Research, part of the University of London, practices “high content in high throughput” as he extracts hundreds of different features from single cells in his lab. In this month's technology feature on single-cell analysis, Bakal explains where his work leads and what he looks for in an imaging system.

In the past, drug discovery drove high-content analysis, but that trend is shifting. High-content screening instruments are now increasingly finding homes in academic labs.

For example, the IN Cell Analyzer from GE Healthcare Life Sciences is also used at the University of Texas MD Anderson Cancer Center's Department of Experimental Therapeutics. Geoffrey Grandjean, who was interviewed in this month's technology feature, helped to set up a core facility service there performing high-throughput, high-content siRNA screening. Seeing the images on a regular basis motivated him to start graduate school in experimental therapeutics.

Grandjean co-authored a paper in Cancer Research that used siRNA to look at genetic factors regulating microtubule stability when ovarian cancer cells are treated with the anti-mitotic chemotherapy drug paclitaxel. The study examined resistance to such agents and documented several gene clusters influencing the response to the drug in “strikingly different ways.” The team also found that modulating microtubule stability in cancer cells is a way to enhance paclitaxel cytotoxicity.

Credit: G. Grandjean/University of Texas MD Anderson Cancer Center

One of Grandjean's images won last year's IN Cell Analyzer Image Competition. In the winning image, a cancer drug has made the cellular scaffold so rigid that the cell cannot divide, resulting in a huge cell that dwarfs those around it.

Voting for GE Healthcare’s 2012 cell imaging competition has opened in two categories: microscopy and high-content analysis. You can cast your vote by December 19th. Winners will have their images displayed in New York City’s Times Square.

Q&A with the Nikon Small World Winners

For the last few years Nature Methods has published the winning image of the Nikon Small World Photomicrography Competition on our cover. This year I was lucky enough to serve on the competition judging panel alongside three other judges chosen by the competition organizers. The competition was fierce but the image below was chosen as the 2012 winner.

Nikon Small World 2012 winning image. Credit: Jennifer Peters & Michael Taylor

The winning photomicrographers responsible for the image are microscopy specialist Jennifer Peters, a staff member of the Light Microscopy Core Facility at St. Jude Children’s Research Hospital in Memphis, Tennessee, and chemical biologist Michael Taylor. They created this image of the blood-brain barrier in a live transgenic zebrafish. The blood-brain barrier controls which substances come into contact with the vertebrate brain.

To view the other contest images and learn more about the competition please visit https://www.nikonsmallworld.com

Nature Methods talked briefly to the researchers about the science behind the beauty.

So what am I looking at?

MT: What you’re looking at is the blood-brain barrier of a live zebrafish larva at 6 days post-fertilization. The image of the brain vasculature is probably about 400 microns across.

JP: I always tell Mike that he has the most photogenic fish. It’s a 3D stack of images taken with a confocal microscope and collapsed in 2D. It changes from pink to red to yellow to green to blue as you go deeper into the brain. So the rainbow is pretty, but it’s also providing you with spatial information.

What do you hope to learn from these kinds of images?

MT: What we want to do with this is answer age-old questions in the field of brain-barrier biology. We want to answer “When does the blood-brain barrier develop, and what signaling pathways are involved in blood-brain barrier formation and maintenance?”

MT: We also want to be able to use it in drug screens. We’re interested in identifying compounds that modify the blood-brain barrier. The blood-brain barrier often keeps chemotherapeutics from getting into the brain, but it also breaks down in many neurodegenerative diseases. So if we can find small molecules that influence these processes, maybe we can come up with ways to treat diseases.

What did you have to do to get the image?

MT: We first made a transgenic blood-brain barrier reporter line in zebrafish to image this structure in a live animal. This transgenic line drives expression of the red fluorescent protein mCherry specifically in brain endothelial cells that make up the blood-brain barrier.

JP: Making the image look the way it does took some trial and error and some finessing. Imaging these live fish can be a challenge, since we have to keep the fish immobile and alive. We can also image the fish for up to 30 hours to examine development in real time.

These fish really are an ideal sample for microscopy and for the types of screens that Mike was talking about. You can fill up 96-well plates with fish embryos and visually screen them.

How did you get interested in science?

JP: [I’ve] always liked science ever since I was a kid. My father and I used to do chemistry experiments in the kitchen. I like instrumentation and like to take things apart and put them back together. That combined with the visual impact brought me to these kinds of studies.

MT: I had never considered science as a career. I was a business major and took chemistry to fulfill a requirement. At UC Davis, I became a biochemistry major and started working in a research lab to earn some spare money, and became interested in pursuing scientific research for my career. My boss encouraged me to go to graduate school, which is when I began working with zebrafish.