TECHNOLOGY FEATURE

Technology to watch in 2018

Thought leaders reveal the technologies and topics likely to transform life-science research in the year ahead.
Illustration of an internet-connected city

The Internet of Things has transformed many aspects of our lives and is now, along with other breakout technologies, poised to transform life-science research. Credit: chombosan/Alamy

Recoding the genome

George Church Geneticist, Harvard Medical School, Boston, Massachusetts.

For all the excitement surrounding the gene-editing tool CRISPR, it is not that efficient or precise. It’s hard to make many changes at once. My lab has set the record so far — making 62 modifications to the genome of a single cell — but we have compelling applications that need a greater number of simultaneous changes. Now, however, we have the technologies required to make this feasible.

‘Codon recoding’ is a completely generic way to make any organism resistant to most or all viruses, and it requires tens of thousands of precise changes per cell. Each codon, a section of DNA three bases long, such as TTG, corresponds to a specific amino acid, such as leucine, or to a translational signal (start, stop and so on). Given that there are six codons for leucine, we can switch any one for another, taking advantage of the redundancy built into the genetic code. Once those swaps are done, we delete the gene for the leucine transfer RNA (tRNA) that matches up with the swapped-out codons, so the cell can no longer recognize that sequence.
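As a toy illustration of the swap step, the short Python sketch below walks an in-frame coding sequence and replaces two leucine codons with a synonymous one, leaving the encoded protein unchanged. The codon choices and the function are hypothetical, not a published recoding protocol; real recoding must also respect regulatory elements, overlapping reading frames and expression levels.

```python
# Minimal sketch of the codon-swap step (illustrative only).

# Hypothetical choice: replace two leucine codons with a synonymous one,
# so the matching tRNA gene can later be deleted.
SWAPS = {"TTA": "CTG", "TTG": "CTG"}  # synonymous leucine codons

def recode(cds: str) -> str:
    """Walk an in-frame coding sequence codon by codon and apply the swaps."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    return "".join(SWAPS.get(c, c) for c in codons)

print(recode("ATGTTACGTTTGTAA"))  # -> ATGCTGCGTCTGTAA (protein unchanged)
```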

Now, when a virus infects a cell that has all of these codons recoded, it cannot translate its proteins from its messenger RNA because of the missing tRNA, and the virus will die. Viruses are not that robust; it doesn’t take much to throw them out of whack.

To make multiple, precise changes at once, we use the multiplexed automated genome engineering (MAGE) technique. Short segments of genetic material containing the precise base-pair changes you want to make are introduced into cells that are prevented from making DNA-mismatch repairs. After a few rounds of cellular replication, the changes are fully incorporated into the bacterial genome.

Theoretically, this can be done in every organism for which viruses are a problem — microorganisms used in the dairy industry and agriculturally important plants and animals. In addition, researchers could make virus-resistant pigs whose organs can be used for transplants, and virus-resistant human cells to use for producing pharmaceuticals and vaccines.

What is really gee whiz here is that you have the potential to make an organism resistant to all viruses — even viruses that have never been studied. But there are many other things that recoding can accomplish. Pamela Silver at Harvard Medical School and Daniel Gibson at Synthetic Genomics in La Jolla, California, have collaborated to develop another recoding technology to improve vaccine strains of Salmonella typhimurium.

Researchers could also recode an organism to incorporate non-standard amino acids in proteins to enable chemistries that don’t exist in current organisms: amino acids that fluoresce, resemble nucleic acids or form unusual bonds. Whole new dimensions of biochemistry emerge when you are not limited to the universal and ordinary 20 amino acids. Jason Chin’s lab at the MRC Laboratory of Molecular Biology in Cambridge, UK, is using this approach to make precise alterations at the molecular level of well-known proteins in fruit flies.

Last, but not least, recoding provides a potent strategy for biocontainment. If virus-resistant organisms were to escape, even if they weren’t ‘bad’ for the environment, they would take over natural niches and ‘win’. Using one of these non-standard amino acids, you can engineer an organism that can grow only if it is given that particular nutrient. The result is an ‘escape-proof’ strategy for experimental organisms used in the laboratory.

Graphic showing how transcriptome mapping works. Source: Xiaowei Zhuang

Mapping the transcriptome

Xiaowei Zhuang Director, Center for Advanced Imaging, Harvard University, Cambridge, Massachusetts.

The recently launched Human Cell Atlas (HCA) is a global initiative with a grand goal: to identify all cell types in the human body and map their spatial organization. A project of this scale will need many complementary technologies.

Single-cell RNA sequencing is a powerful way to identify different cell types and an important tool for creating the HCA, but it requires dissociating a tissue into individual cells and then isolating their RNA. What is lost is the spatial context of cells in the tissue — how they are organized and how they interact.

We’d like a technology that can provide this spatial context by imaging the transcription profiles of cells in intact tissue. My lab is developing MERFISH, or multiplexed error-robust fluorescence in situ hybridization, an image-based, single-cell transcriptomics approach.

MERFISH uses error-robust barcodes to identify each different type of RNA in the cell, and combinatorial labelling and sequential imaging to detect these barcodes in a massively multiplexed manner (see ‘Transcriptome mapping’).
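The error-robust idea can be sketched in a few lines of Python. The codebook below is hypothetical (actual MERFISH codes are longer and constructed differently), but it shows the principle: codewords kept at least four bits apart mean that a single misread imaging round can be corrected rather than producing a wrong gene call.

```python
# Hedged sketch of error-robust barcode decoding for imaging-based
# transcriptomics (the codebook and bit length are made up for illustration).

# Each RNA species gets a binary word; words are kept at Hamming distance >= 4
# so a single misread bit is still assignable to the right gene.
CODEBOOK = {
    "GeneA": (1, 1, 0, 0, 1, 1, 0, 0),
    "GeneB": (0, 0, 1, 1, 0, 0, 1, 1),
    "GeneC": (1, 0, 1, 0, 1, 0, 1, 0),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(measured):
    """Return the gene whose codeword is within one bit of the measurement."""
    best = min(CODEBOOK, key=lambda g: hamming(CODEBOOK[g], measured))
    return best if hamming(CODEBOOK[best], measured) <= 1 else None

# One bit flipped in the fifth imaging round is still assigned correctly:
print(decode((1, 1, 0, 0, 0, 1, 0, 0)))  # -> GeneA
```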

We’ve already demonstrated the ability to image 1,000 different messenger RNAs (mRNAs) in single cells. With further development, MERFISH has the potential to detect the whole transcriptome in cells in intact tissues.

This spatially resolved RNA-profiling data will give us a physical picture for the HCA — we can image individual cells, categorize them by their gene-expression profiles and map their spatial organization. It can be combined with data on the morphology and function of cells obtained by other imaging technologies to further enrich that picture.

At the moment, our picture of the cell atlas is largely incomplete. If you don’t have a global picture, you just don’t know what you are missing — let alone how to design effective therapeutics to intervene in disease.

*****

Boosting cancer vaccines

Elaine Mardis Co-executive director, Institute for Genomic Medicine, Nationwide Children’s Hospital, Columbus, Ohio.

In the field of cancer immunogenomics, researchers want to know which of the mutated proteins encoded by the cancer genome are capable of eliciting an immune response in a given individual. Such proteins, called neo-antigens, could be used to develop personalized cancer vaccines or to guide other treatments.

One exciting technology that could be used to study these neo-antigens is CyTOF, a so-called mass-cytometry method for identifying cells that express specific proteins.

In typical flow cytometry, researchers mix cells with antibodies labelled with fluorescent molecules to tag proteins of interest. The cells are then analysed, one by one, to measure the relative abundance of those tagged proteins. CyTOF replaces the limited handful of fluorescent tags with metallic labels that are detected in a mass spectrometer — 100 or more different labels at once, compared with perhaps a dozen for flow cytometry.

This technology could transform the field of cancer immunogenomics, by enabling researchers to work out which neo-antigens produced by an individual’s cancerous cells are the most abundant and most likely to elicit a strong reaction from the immune system. Researchers could then use that information to create personalized anti-cancer ‘vaccines’. These, used in combination with new cancer drugs that release the brakes on the immune system, could put people with cancer in a position to fight off their own disease.

But for any given neo-antigen predicted from the genome, it’s guesswork as to whether it will elicit a significant immune response. CyTOF gives us insight into that question by letting us quantify the binding strengths of multiple predicted peptides to the person’s T cells.

And it’s not just for cancer genomics. CyTOF can be used to track the abundance and make-up of any suite of proteins produced by cells, as long as you can find antibodies to bind your proteins of interest. It’s allowing us to ask questions at the protein level in a much more multidimensional and precise way than before.

*****

Linking genotype and phenotype

Ruedi Aebersold Systems biologist, Institute of Molecular Systems Biology, ETH Zurich, Switzerland.

Clearly, we’re living in a very interesting time — there is an enormous amount of high-quality genomic information on genetic variability. At the same time, we can collect masses of health-related data on the human population, ranging from the number of steps taken in a day to blood pressure and clinical imaging. The trick is to relate the two to each other. Especially in medicine, if we want to translate a genetic variation into a treatment, then we need mechanistic insights into the processes that are disrupted by disease.

The key to this link is the analysis of protein complexes, which are the functional units of cells. How do we go from big data — such as the genome of an ovarian tumour — to working out which protein complexes are perturbed and how?

One path blends computation and quantitative proteomics, in which several thousand proteins are consistently and accurately quantified across cohorts of tumour and control samples. Such data sets can now be generated using mass-spectrometry techniques such as SWATH-MS (sequential window acquisition of all theoretical mass spectra). Complexed proteins would be expected to have a high degree of co-variation — that is, to increase or decrease in abundance synchronously. But if the complex is perturbed and loses subunits owing to mutation or structural changes, the subunit co-variance would be different. This is one way to pinpoint protein complexes that are perturbed in cancer.
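As a rough illustration of this co-variation logic (toy numbers, not real SWATH-MS output), the sketch below compares the average pairwise correlation of four hypothetical subunits across 20 samples: in the ‘control’ cohort the subunits share an abundance signal, whereas in the ‘tumour’ cohort that coupling is lost.

```python
# Illustrative sketch of subunit co-variation analysis on toy data.
# Subunits of an intact complex rise and fall together across samples, so
# their pairwise correlation is high; a perturbed complex co-varies weakly.
import numpy as np

def mean_pairwise_corr(abundance):
    """abundance: subunits x samples matrix of quantified protein levels."""
    r = np.corrcoef(abundance)
    upper = r[np.triu_indices_from(r, k=1)]
    return upper.mean()

rng = np.random.default_rng(0)
signal = rng.normal(size=20)                              # shared abundance signal
control = np.vstack([signal + 0.1 * rng.normal(size=20) for _ in range(4)])
tumour = rng.normal(size=(4, 20))                         # subunits decoupled

print(round(mean_pairwise_corr(control), 2))  # ~0.99: subunits co-vary
print(round(mean_pairwise_corr(tumour), 2))   # near 0: co-variation lost
```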

Such altered complexes can then be studied at the structural level using cryo-electron microscopy single-particle analysis or cryo-electron tomography (CET), both of which can image molecules at a resolution of around 5–10 ångströms. This is high enough to visualize how mutations change the composition, topology and structure, and by inference the function, of the affected complex.

CET also has the power to reveal how structures vary with other modifications, such as the addition of a phosphate group to the assembled complex. A really significant advance for 2018 is the refinement of focused-ion-beam milling. This technique takes a mammalian cell or tissue section that is otherwise too thick for CET and mills out a thin window of the cell, such that the structure of a particular protein complex can be observed in the context of the cell.

Together, these technologies will increase our understanding of how a protein complex is perturbed at the molecular level in disease. And they will illuminate how one would engineer a drug to destroy it, inactivate it or restore its normal activity.

*****

Extending genome sequence analysis

Rebecca Calisi Rodríguez Reproductive biologist, University of California, Davis.

When I was entering graduate school, I was fascinated by the discovery in 2000 of a completely new hormone, gonadotropin inhibitory hormone (GnIH), that inhibits the reproductive axis when animals are stressed. Studies of GnIH are completely changing our understanding of how the brain regulates reproduction. I thought, “Hell, what else don’t we know about? When will the next discovery happen that completely changes the way we understand reproduction?”

Today, thanks to high-throughput DNA sequencing of genomes and transcriptomes, the rate of discovery is rising sharply. It took around US$3 billion to sequence the human genome 15 years ago. It costs a few thousand dollars today, and the price is still falling. This is important because it allows us to investigate animals not usually studied in laboratories, in the ecosystems and habitats in which they evolved, which has the potential to yield more physiologically relevant data.

As a reproductive biologist, I am particularly excited that this brings us closer to understanding the great symphony — maybe cacophony — of mechanisms driving sexual behaviour and reproduction.

We recently used RNA sequencing to get a more in-depth view of how the reproductive axis in common pigeons responds to stress. Chronic stress can disrupt reproduction, and we want to know all the ways it can do this.

We are looking at the activity of every gene actively transcribed in the reproductive axis — the hypothalamus in the brain, the pituitary gland and the gonads — in response to stress. This enormous data set has generated hundreds of hypotheses about how stress acts on what could be newly discovered reproductive mechanisms. These could lead us towards targets for genetic intervention or therapy for the millions of men and women who report fertility problems.
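In spirit, such a screen boils down to comparing per-gene expression between stressed and control animals and ranking the differences. The toy Python sketch below is only a caricature of that comparison (made-up numbers and a plain t-test, not the authors’ pipeline); real RNA-seq analyses use dedicated tools such as DESeq2 or edgeR.

```python
# Caricature of a stressed-versus-control expression comparison (toy data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
genes = [f"gene_{i}" for i in range(5)]
control = rng.normal(loc=100, scale=10, size=(5, 6))       # 5 genes x 6 birds
stressed = control * rng.choice([1.0, 2.0], size=(5, 1))   # some genes shift

for g, c, s in zip(genes, control, stressed):
    fold = s.mean() / c.mean()
    p = stats.ttest_ind(s, c).pvalue
    print(f"{g}: fold change {fold:.2f}, p = {p:.3g}")
```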

But we can also benefit from taking a step back and examining whole animals in the real world. For example, feral pigeons could serve as powerful models for assessing the effects that exposure to hazardous substances in the environment, known as the ‘exposome’, has on the reproductive axis. Our findings show that free-ranging pigeons face exposure threats similar to those of humans living in the same neighbourhood. We can use pigeons as canaries were once used in coal mines: as bioindicators of hazardous substances in the environment. Sequencing techniques then allow us to determine how those exposures affect the well-conserved reproductive system.

We can take our shiny new technologies and marry them with ‘old-school’ scientific tools to expand discovery in ways we never could before. We could look at pigeons in their environment in real time, characterize changes in their genomes and proteomes and see the effects on reproduction. We are the natural historians, at the level of the gene, of our time.

Person testing Apple watch

Devices such as this Apple Watch are inspiring the development of an Internet of Scientific Things. Credit: David Paul Morris/Bloomberg/Getty

Making an Internet of Scientific Things

Vivienne Ming Theoretical neuroscientist and executive chair, Socos Labs, Berkeley, California.

The Internet of Things — all of those Internet-enabled devices that are becoming so common in homes today, such as Alexa, Google Home, Nest thermostats and smartphones — comprises the sensors and actuators of a massive swarm intelligence. We think of a single Alexa device, the Internet-connected smart assistant developed by Amazon, as a lone personal assistant, but it is more accurate to recognize it as part of a massively distributed multisensor array extending into millions of homes and feeding an enormous experimentation system that is the true Alexa. Rather than millions of individual robots, it is a single artificial intelligence (AI) that is constantly learning about the world, with the actions of one family influencing its exploration, and exploitation, of another.

Those distributed intelligences are transforming our lives, but they could also be transformative in the sciences. I would like, and believe we are ready, for researchers to begin collaborating on a distributed Internet of Scientific Things (IoST) — an open system for connecting distributed sensors and actuators to a powerful machine-learning platform driving global-scale experiments. Even simple versions of this system have enormous power. Google found that its smartphones could pick up on early symptoms of Parkinson’s disease from changes in gait detected by the phone’s accelerometer and gyroscope. Using an expanded set of smartphone sensors, my team was able to predict the onset of manic episodes in people with bipolar disorder. But right now, that sort of experimental power is inaccessible to most scientists.
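As a hint of what such sensing looks like in practice, here is a hedged Python sketch of one gait feature a phone might compute from its accelerometer. The signal, sampling rate and thresholds are invented, and this is not Google’s method or ours; deployed systems use far richer models.

```python
# Toy gait-feature extraction from a synthetic phone accelerometer trace.
import numpy as np
from scipy.signal import find_peaks

FS = 50  # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / FS)
# Synthetic walking trace: ~1.8 steps per second plus sensor noise.
accel_mag = 1.0 + 0.4 * np.sin(2 * np.pi * 1.8 * t) + 0.05 * np.random.randn(t.size)

peaks, _ = find_peaks(accel_mag, distance=FS // 3)   # roughly one peak per step
step_intervals = np.diff(peaks) / FS                 # seconds between steps
cadence = 60 / step_intervals.mean()                 # steps per minute
variability = step_intervals.std() / step_intervals.mean()

print(f"cadence ~{cadence:.0f} steps/min, stride-time variability {variability:.2%}")
```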

Imagine if researchers could access data from smartphones, smartwatches and appliances running IoST apps, along with data from the more conventional sensors used in experiments around the world. Add to that AI systems mining the published research and data already out there in your field. Just as current commercial AI identifies hidden business connections for salespeople, the AI of an IoST would augment the work of scientists hunting for data relevant to their fields. What if my neuroimaging software were plugged directly into an IoST platform and made data accessible in real time, not just to my lab but to everyone in my field and beyond? Or imagine logging into the platform and discovering the work of five new researchers you should meet. Imagine that.

Admittedly, there are scary elements of these massively distributed systems. Will certain organizations have restrictive control over the data? Will findings from these new platforms go through traditional scientific publishers, through companies such as Alibaba or Amazon, or through open-access platforms like GitHub and arXiv? Serious issues of access and research ethics must be addressed, but transformation is looming.

Already, individual labs and researchers are leveraging these possibilities. But the scientific community must take the lead. If we, as scientists, build these systems ourselves, we can make publishing more egalitarian, data collection more shareable, and science more transparent. Otherwise, someone else will build them for us. But the amazing tradition that is science should not end up in the hands of just a few people.

Nature 553, 531-534 (2018)

doi: 10.1038/d41586-018-01021-5