TechBlog: PacBios are hackable, too

{credit}Pacific Biosciences Inc.{/credit}

Sometimes, a DNA sequencer is more than it seems. In this month’s Technology Feature, I talk to the researchers who have figured out ways to squeeze new life from an outdated DNA sequencer, the Illumina GAIIx. That’s a popular choice for sequencer-hackers, but not the only one. Stanford structural biologist Joseph Puglisi uses a PacBio RSII from Pacific Biosciences to plumb the biochemistry of protein translation.

The RSII was designed as a single-molecule DNA sequencer, in which powerful cameras capture the flashes of light that result when a DNA polymerase molecule tethered to the base of a microscopic well inserts a fluorescently labeled base into newly synthesized DNA. But according to Jonas Korlach, the company’s chief scientific officer, that’s just one of its applications. “Yes, it’s a sequencer, but at the same time it’s also the world’s most powerful single-molecule microscope.”

All that’s required to make that microscope record something other than DNA synthesis, fundamentally, is for researchers to replace the tethered DNA polymerase with another enzyme, and to add the appropriate fluorescent reagents. To alter the running conditions, researchers also need PacBio to ‘open’ its system software to afford them greater control — for instance, to adjust experimental temperature, imaging conditions, and fluid addition. According to Korlach, just four instruments worldwide have been tweaked in this way. (As with the Illumina hardware discussed in the Technology Feature, such hacks only work on PacBio’s older RSII; the newer Sequel is not hackable, Korlach says.)

The company offers these researchers what support it can, but because they are pursuing home-brew applications, Korlach says, researchers who run into technical issues must solve them in-house. “They are mostly on their own.”

Researchers have used these modified systems to address the biophysics of cell-cell interaction, transcription, splicing, and, in Puglisi’s case, translation. Puglisi’s is a structural biology lab, and structural methods tend to provide static pictures. But biology is dynamic. So, his team typically pairs the methods up. “We always like to couple structural investigations with some way to animate the structure and bring it to life,” Puglisi says. Since 2014, the lab has published some 25 papers using the RSII to study the ribosome.

In one recent study, for instance, Puglisi’s team studied the impact of modifying one particular carbon atom in the backbone of RNA. That modification, they found, causes the ribosome to pause, possibly in order to allow ancillary biological processes, such as protein folding or protein processing, to occur.

“The biology of the system really still needs to be worked out, but the dynamic behavior and structural signatures that we saw were so striking that … there has to be some neat biology here,” Puglisi says.

Korlach, who worked with Puglisi on some of his earliest efforts on the RSII, says the team, together with Puglisi’s then-postdoc Sotaro Uemura (now at the University of Tokyo), worked out these methods on nights and weekends, when the laboratory was otherwise unoccupied. And he recalls the excitement of getting the system to work for the first time.

“It was pretty thrilling when we saw the first traces of real-time dynamics of ribosome translation,” he says. “That was the first time any human had ever seen a ribosome make a protein in real time on a single-molecule level, with codon resolution. Those are the types of milestones that as a method developer you live for.”

 

Jeffrey M. Perkel is Technology Editor, Nature

 

TechBlog: Git: The reproducibility tool scientists love to hate

{credit}PLOS Comput Biol, 12, e1004668 (2016){/credit}

Early in his graduate career, John Blischak found himself creating figures for his advisor’s grant application.

Blischak was using the programming language R to generate the figures, and as he iterated and optimized his code, he ran into a familiar problem: determined not to lose his work, he gave each new version a different filename — analysis_1, analysis_2, and so on — but failed to document how they had evolved.

“I had no idea what had changed between them,” says Blischak, who now is a postdoctoral scholar at the University of Chicago. “If the professor were to come back and say, ‘which version did you use to create this figure?’ I would have had no idea.”

Later, while attending a workshop on basic research computing skills, he discovered a better approach: Git.
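The workflow Git enables might look something like the following minimal sketch (file names and commit messages are illustrative, not Blischak’s actual project): one script file, with every revision recorded as a commit rather than as a new copy.

```shell
# One script, many versions -- each recorded by git instead of by renaming.
mkdir figure-work && cd figure-work
git init -q
git config user.name "Demo"                 # identity needed for commits
git config user.email "demo@example.com"

echo 'plot(1:10)' > analysis.R              # first version of the R script
git add analysis.R
git commit -q -m "First draft of grant figure"

echo 'plot(1:10, type="l")' > analysis.R    # revise the same file in place
git commit -q -am "Switch figure to a line plot"

git log --oneline                           # which version existed, and when
git diff HEAD~1 -- analysis.R               # exactly what changed between them
```

Instead of guessing what differs between analysis_1 and analysis_2, `git log` answers “which version made this figure?” and `git diff` answers “what changed?”.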


Resubmitting your study to a new journal could become easier

Rejected manuscripts are a fact of life in science, but a new initiative might take some of the sting out of the process.


{credit}Image credit: Getty Images/Mateusz Zagorski{/credit}

By Chris Woolston

The National Information Standards Organization (NISO), a Baltimore, Maryland-based non-profit that promotes standardization in publishing, has embraced a plan to make it easier for journals to share rejected manuscripts and manuscript reviews without forcing authors to go through another arduous submission process.

How product management could be a route out of academia for PhDs and postdocs

This job makes great use of a scientific skill set and is criminally underrated, says Issa Moody.

Let’s face it. Job prospects for PhD candidates and postdoctoral scientists are dismal. In 2012, a study on the biomedical research workforce, conducted by the National Institutes of Health and pictorialized by the American Society for Cell Biology, showed that a significant number of biology PhDs in the US have resorted to non-science jobs. Those who stay in science face financial penalties: one 2017 Nature Biotechnology study demonstrated that postdocs, on average, forfeit 20% of their earning potential within the first 15 years of completing their PhD program.

TechBlog: Tell-tale LIPSTIC reveals cell-cell interactions


{credit}Pasqual, G. et al. Nature 553, 496–500 (2018).{/credit}

By Esther Landhuis

The mammalian immune system is a sprawling network of cells, each with unique properties and functions. As discussed in my latest Technology Feature, immunologists have developed a range of technologies to characterize those populations, from mass cytometry to single-cell DNA sequencing.

But the immune system is, in fact, a system, and its members don’t act alone. Immune activity depends upon cell-to-cell interaction, as when a dendritic cell cozies up to a T cell and activates it, or when a T cell run-in prompts a B cell to make antibodies. “When those cells meet physically, that’s when you start an immune response,” explains Gabriel Victora, an immunologist at Rockefeller University in New York.

Victora and coworkers came up with a clever way to track these sorts of kiss-and-run incidents using a method they’ve aptly named LIPSTIC (Labelling Immune Partnerships by SorTagging Intercellular Contacts). The system is designed such that an interaction between protein receptors on two cells — in a dish or in a mouse — triggers an enzymatic reaction that tags one of the cells with a tell-tale reporter molecule. That tag — a five amino-acid peptide capped with biotin — is like lipstick on a paramour’s collar, Victora says: “You know there has been an interaction if you put ‘lipstick’ on one cell and it shows up on the other.”

In a study published in January in Nature, the Rockefeller team used the LIPSTIC approach to study interactions between dendritic cells and CD4+ helper-T cells in transgenic mice – interactions that are critical for jumpstarting CD8+ killer-T cells in response to immunization. “Within one lymph node we could detect the dozen or so dendritic cells that were starting an immune response,” Victora says.

Immunologist Scott Mueller of the University of Melbourne in Australia is also using LIPSTIC mice to determine how dendritic cells signal to CD4+ T cells — but his group is examining immune responses to viral infection. By visualising these cellular interactions in real time with intravital microscopy, “we hope LIPSTIC will help us identify the types of interactions between cells that we cannot ‘see’ by other methods,” Mueller says.

At this point LIPSTIC mice are set up to analyze cell-cell interactions mediated by the pairing of CD40 and CD40L surface proteins, which are found on antigen-presenting cells and activated T cells, respectively. Victora’s group plans to create additional LIPSTIC strains to analyze other receptor-ligand pairs of interest to immunologists. So far they have distributed the CD40-CD40L mice or reagents to about a dozen labs.

Esther Landhuis is a freelance science journalist in the San Francisco Bay area.

 

Lattice light-sheet microscopy gets an AO upgrade

AO-LLSM microscope photo

{credit}Betzig Lab, Janelia Research Campus{/credit}

In late 2014, just a month after Eric Betzig learned he had won that year’s Nobel Prize in Chemistry for superresolution microscopy, he and his colleagues described a technique that has taken the microscopy world by storm.


From academia to Silicon Valley — and back


Although faculty members transition from industry to academia (and vice-versa), it’s rare to go back and forth. How does each setting help a researcher grow, and what skills are critical in both environments? Sam King offers his insight.

Five years ago, I left my tenured position in computer science at the University of Illinois at Urbana-Champaign to push myself intellectually and professionally in industry. During these years, I started a company (Adrenaline Mobility), sold my company to Twitter, worked as a software engineer, managed a two-person team, managed a 25-person organization, battled overseas fraudsters and fake accounts, and led a nine-month project (an eternity in industry) that ended up being the largest growth initiative in the history of Twitter.

Machine learning gets a journal for interactive figures


Distill wants to be a sandbox for what a scientific paper can be

By Anna Nowogrodzki

Sometimes it’s hard to understand someone else’s research through a static scientific paper. Across countless universities and companies, at whiteboards and cafeteria tables, you’ll find scientists in animated conversations explaining their research to one another, asking questions, playing around with each other’s data: in short, interacting. Across the internet in recent years, people have extended these explanations to include interactive graphics and code.

Now a web-only machine-learning journal called Distill aims to provide a formal home for these interactive graphical explanations, which until now have lived mostly on blogs and other online fora.


Put your email inbox on a low-spam diet

Mark Clemons has published more than 250 papers over the past two-plus decades, nearly all of them involving breast cancer. So imagine his surprise when Clemons, a medical oncologist at the University of Ottawa, Canada, received a flattering email inviting him to submit his work to, of all places, a journal focusing on yoga research.


TechBlog: Software quality tests yield best practices


{credit}Alexandros Stamatakis/GitHub{/credit}

Life science research increasingly runs on software. A good fraction, perhaps even most of it, is made by academics, for academics: rough around the edges, maybe, but effective — not to mention free. But is it of high quality?

Alexandros Stamatakis decided to find out.

Stamatakis is a computer scientist and bioinformatician at HITS, the Heidelberg Institute for Theoretical Studies in Germany, and a professor of computer science at the Karlsruhe Institute of Technology. His team has been developing and refining software tools for evolutionary biology for more than 15 years, he says, including one called RAxML (from which the code snippet shown above was pulled). Yet for all that time, his code still wasn’t perfect.

“The more I developed it the more bugs I had to fix and the more I started worrying about software quality,” he says.

Not software ‘accuracy’, mind you — when it comes to phylogenetics, it’s difficult to know whether software is providing the correct answer. “You don’t know the ground-truth,” Stamatakis says. Rather, he was curious whether popular tools meet computer-science standards for quality.

To find out, Stamatakis and his team downloaded the code for 16 popular phylogenetic tools (plus, as a control, one from the field of astronomy), which collectively have been cited more than 90,000 times. They then ran those programs — 15 of which were written in C/C++ and one in Java — through a series of tests.

For instance, they looked at how well software can scale from a desktop computer to a large cluster, something that increasingly is necessary as life science datasets balloon in size. They measured the amount of duplicated code in the software to get a rough indication of maintainability. And they counted the number of so-called ‘assertions’ — logical statements in the code that assert, for instance, that a value falls within a certain range, and that cause the software to terminate should they fail — to obtain a measure of code ‘correctness’.

“There have been empirical studies by computer scientists working in the field of software engineering, where they showed that there is a correlation between incorrect code, or code defects, and the number of assertions used — or let’s better say, an anti-correlation,” Stamatakis says.

So, how did the toolset do? Not too well.

As documented in an article published 29 January in Molecular Biology and Evolution, none of the 16 programs in the round-up, including Stamatakis’ own RAxML, aced all the tests. (With 57,233 lines of code, RAxML exhibited both compiler warnings and memory leaks.) But, he stresses, that is neither to denigrate the programmers who wrote those tools — who, after all, were simply trying (and generally succeeding) to solve a particular problem — nor to suggest they do not work properly.

Rather, he says, potential users must exercise caution in using these tools. “They shouldn’t blithely trust software. And they shouldn’t view it as black boxes,” but instead (as he puts it in his article) as “potential Pandora’s boxes”.

Users should also strive to understand what their code is doing, Stamatakis advises. And if unexpected results arise, they should repeat the analysis with a separate tool that performs the same task, to ensure they aren’t chasing digital phantoms.

Stamatakis concludes his article with a series of ‘best practices’ for software developers. These include running tests for memory allocation errors and leaks, using assertions, checking for code compilation warnings using multiple compilers, and minimizing code complexity and duplication — practices that are common in professional software development but less so in the life sciences.

The tools Stamatakis’ team used to run its tests are freely available, so readers can try them themselves to see how trustworthy their chosen software is.

Journal editors, he says, should consider requiring such tests of any peer-reviewed work, either performed by the authors themselves prior to submission, or by the peer-reviewers. In fact, during our conversation, Stamatakis suggested he might make the toolbox available as a Python script or Docker container, to make it easier for others to adopt. If and when he does, we’ll let you know. In the meantime, caveat emptor!

 

Jeffrey Perkel is Technology Editor, Nature

 
