Light sheet microscopy time-lapse of a zebrafish embryo during gastrulation. Montage of individual time points in a maximum intensity projection with color coding for depth. Credit: Gopi Shah and Jan Huisken, Morgridge Institute for Research

While buzzing about in search of food, a fruit fly encounters a deadly wasp. Fortunately, its brain reacts to the threat by initiating a cascade of responses across a network of neurons that help it to flee. Philipp Keller’s group at the Howard Hughes Medical Institute’s Janelia Research Campus has developed a variety of sophisticated strategies for deconvolving the circuitry underlying this and other complex functions of the Drosophila nervous system, using a combination of optogenetic manipulation and cutting-edge light-sheet microscopy to simulate various stimuli in living tissue and analyze the response. But perhaps the most remarkable aspect of this project is the extent to which the instruments themselves are running the show. “The microscope can basically do these experiments completely on its own,” says Keller.

This work is a particularly advanced example of an emerging field of computer-assisted imaging known as ‘smart microscopy’. In these configurations, the microscope is not merely a conduit for the collection of image data. Instead, incoming data are analyzed by algorithms that guide the instrument on how to proceed — for example, deciding which events to image and how specifically to image them, or compensating for optical or physical perturbations that might undermine further data collection.

For Keller, this means using machine learning algorithms that can determine precisely when and where an experimental manipulation should be applied to the Drosophila nervous system, and then home in on subsequent events that are relevant to the fly’s response. Numerous other groups are pursuing similar efforts, where the microscope is essentially educated to identify and selectively focus on biological events of interest to the researcher. “If somebody can build a self-driving car, we can work on a self-driving microscope,” says Suliana Manley, a researcher at the Swiss Federal Institute of Technology in Lausanne who is developing instruments for super-resolution analysis of mitochondrial dynamics.

Dora Mahecic, a graduate student in Suliana Manley’s group, configures the lab’s high-throughput multi-focal iSIM instrument. Credit: Hillary Sanctuary, EPFL

But this is also part of a broader movement toward the use of computational techniques to make the most of imaging experiments. For example, machine learning techniques are being used to design better microscopy experiments, overcome limitations in imaging quality, or even boost the performance of an instrument beyond the limits of its optics — turning flat images into 3D volumes or conferring super-resolution quality upon diffraction-limited data. “It’s not about a revolution in optics or computational research or the way we look at biological systems by itself,” says Ricardo Henriques, of the Instituto Gulbenkian de Ciencia in Portugal. “The revolution comes from having machine learning bridge all this together to get more out of our data.”

Image consultants

Every microscope, no matter how sophisticated, has limitations and trade-offs — particularly when one is imaging living specimens. A technique that delivers remarkable sub-diffraction-limit spatial resolution may also inflict too much damage on cells to be practical for extended time periods. Conversely, imaging approaches that are gentler in terms of exposure to laser light and less prone to cause photobleaching may yield poorer temporal resolution or suffer from a poor signal-to-noise ratio. Such limitations mean that researchers routinely need to make compromises in designing live-cell imaging experiments.

As a solution, many researchers are turning to machine learning strategies, which employ algorithms that can essentially be ‘educated’ in how to analyze, interpret and respond to particular types of data. For example, by training such algorithms with ‘ideal’ images of a particular set of samples, as well as images of similar samples taken with the intended experimental setup, one can potentially restore experimental images to something much closer to that ideal in terms of clarity and resolution. Researchers led by Martin Weigert, Loic Royer and Florian Jug at the Center for Systems Biology Dresden in Germany demonstrated such an approach, termed ‘content-aware image restoration’ (CARE), in a 2018 study1. Henriques, who collaborated on the study, notes that CARE proved much more effective than conventional denoising algorithms. “That immediately opens the door to doing imaging with lower illumination at a level that was not possible before,” he says. Thus, one can do longer-term or higher temporal resolution fluorescence microscopy experiments while reducing the risk of damaging the sample. Henriques’s group is now using CARE to perform extended-duration imaging of HIV infection in live cells, which was previously difficult due to the tendency of the sparsely labeled viral particles to rapidly photobleach.
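
To make the training idea concrete, the sketch below shows the bare bones of paired image restoration in Python with PyTorch: a small convolutional network learns to map low-signal acquisitions onto matched high-quality images of similar samples. This is an illustrative toy, not the published CARE implementation (the authors distribute their own package, CSBDeep); the network size, random placeholder data and training settings are all invented for demonstration.

```python
# Minimal sketch of the paired-training idea behind content-aware restoration:
# a small convolutional network learns to map low-SNR acquisitions onto
# matched high-SNR "ground truth" images of similar samples. This is NOT the
# published CARE code; network size, loss and data here are placeholders.
import torch
import torch.nn as nn

class TinyRestorationNet(nn.Module):
    """A deliberately small encoder-decoder; real models are deeper U-Nets."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Placeholder training pairs: in practice these are registered patches of the
# same field imaged at low and high illumination (or short/long exposure).
low_snr  = torch.rand(16, 1, 64, 64)   # noisy, gently acquired images
high_snr = torch.rand(16, 1, 64, 64)   # matched high-quality targets

model = TinyRestorationNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                  # real training runs far longer
    optim.zero_grad()
    loss = loss_fn(model(low_snr), high_snr)
    loss.backward()
    optim.step()

# After training, the model can be applied to new low-light acquisitions.
restored = model(torch.rand(1, 1, 64, 64))
```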

Algorithmic processing can also greatly extend the information that can be extracted from microscopic images. Aydogan Ozcan’s team at the University of California Los Angeles has been using a computational tool known as a generative adversarial network (GAN) to perform cross-modality transformations, in which image datasets from two different microscopy formats are used as training data. If successful, the trained algorithm can extrapolate how an image collected from one instrument would look if analyzed in a more sophisticated but costly or laborious experiment.

Artistic depiction of an image transformation neural network, where the input images are enhanced to be equivalent to images acquired by, for example, a super-resolution instrument. Credit: Aydogan Ozcan, UCLA

For example, Ozcan’s team showed that they could extract three-dimensional features from a single planar microscopy image by training the GAN to accurately interpret out-of-focus visual data2. Once these contour maps have been established, users of Ozcan’s Deep-Z algorithm can then refocus their view within a single wide-field fluorescence image as if they were zooming through a stack of images collected over a protracted 3D confocal microscopy experiment. “We’ve imaged neurons firing across the entire body of C. elegans by imaging one plane,” he says. “And you can create a volumetric movie from a single two-dimensional movie acquired within a certain axial range.” Importantly, this approach can also be used to correct focus errors, such as those that arise when a sample drifts during imaging or from surface irregularities.

Using a similar GAN-based approach, Ozcan and colleagues demonstrated the ability to up-convert images obtained with diffraction-limited instruments to super-resolution3. By teaching the algorithm how samples imaged by total internal reflection fluorescence (TIRF) look when analyzed with structured illumination microscopy (SIM), it becomes possible to computationally extract images with SIM resolution from TIRF data. The resulting images can achieve the best of both worlds. “It actually oftentimes beats SIM reconstruction because it eliminates some of the artifacts associated with SIM,” says Ozcan, “and drastically simplifies the imaging setup and speeds up the measurements.” This approach also proved effective at deriving super-resolution stimulated emission depletion (STED)-quality images from confocal microscopy, while retaining the greater depth of field of the latter modality.
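
The underlying recipe is a conditional GAN of the kind popularized for image-to-image translation: a generator proposes the higher-resolution image, a discriminator judges it against real examples, and a pixel-wise penalty keeps the output tethered to the measured data. The toy training loop below illustrates that structure only; the architectures, losses and placeholder data are not those of the published cross-modality networks.

```python
# Toy pix2pix-style loop for cross-modality training: a generator maps images
# from one modality toward matched images from a higher-resolution modality,
# while a discriminator tries to tell generated outputs from real targets.
# Everything here (layers, sizes, data) is an illustrative placeholder.
import torch
import torch.nn as nn

G = nn.Sequential(  # generator: modality A -> modality B
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = nn.Sequential(  # discriminator: does this image look like real modality B?
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

# Placeholder registered pairs: the same field imaged in both modalities.
modality_a = torch.rand(8, 1, 64, 64)   # e.g., diffraction-limited input
modality_b = torch.rand(8, 1, 64, 64)   # e.g., super-resolution target

for step in range(5):                    # real training runs for many epochs
    fake_b = G(modality_a)

    # Discriminator: real targets -> 1, generated images -> 0.
    d_loss = bce(D(modality_b), torch.ones(8, 1)) + \
             bce(D(fake_b.detach()), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the target.
    g_loss = bce(D(fake_b), torch.ones(8, 1)) + 10.0 * l1(fake_b, modality_b)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```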

Single-molecule localization-based techniques such as stochastic optical reconstruction microscopy (STORM) differ from most other fluorescence-based imaging methods in the amount of reconstruction required. Here, images are acquired by sequentially switching random subpopulations of individual photo-activatable fluorophores on and off and then computationally assembling the resulting sets of pictures into a final image with molecular-scale detail. “Your raw data looks nothing like an image — you’re dealing with these raw frames that are just single molecules,” says Manley.
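
The reconstruction step itself is conceptually simple, as the minimal sketch below illustrates: pool the fitted molecule coordinates from all frames and accumulate them on a grid much finer than the camera pixels. The synthetic coordinates and rendering settings here are placeholders; real pipelines add drift correction, precision filtering and more sophisticated rendering.

```python
# Localization-microscopy reconstruction in miniature: each raw frame yields a
# short list of fitted molecule positions, and the final image is simply all
# of those positions accumulated on a fine grid. Coordinates are synthetic.
import numpy as np

# Synthetic localization table: (x, y) positions in nanometres, pooled over
# what would be thousands of frames in a real experiment.
rng = np.random.default_rng(0)
localizations = rng.normal(loc=5000.0, scale=300.0, size=(50_000, 2))

pixel_nm = 10.0            # rendering grid much finer than the camera pixel
field_nm = 10_000.0
bins = int(field_nm / pixel_nm)

# Accumulate localizations into a 2D histogram: the super-resolved image.
image, _, _ = np.histogram2d(
    localizations[:, 0], localizations[:, 1],
    bins=bins, range=[[0, field_nm], [0, field_nm]],
)
print(image.shape, image.sum())   # ~ (1000, 1000) grid with the accumulated events
```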

Overlapping signals from fluorophores can confound the analysis of STORM images, particularly for 3D imaging, and this is typically resolved by imaging fewer fluorophores per round — a tactic that precludes high-speed imaging. Yoav Shechtman’s team at the Technion in Haifa, Israel, employed a deep-learning-based approach to overcome this difficult scenario4. They used their algorithm to engineer the point-spread function (PSF) — the representation of a fluorophore’s signal as generated by a microscope’s detection system — to enable better discrimination of individual signals in 3D. This would have been challenging using standard mathematical approaches, according to Shechtman. But by training their algorithm with a vast series of simulated STORM images, his team could derive engineered PSFs that greatly improved the speed and efficiency of 3D imaging. “You can take frames with higher density,” says Shechtman, “which translates into fewer frames to achieve the same number of localization events.” Using this ‘DeepSTORM3D’ approach, they could track the dynamic volumetric movement of individual chromosomal telomeres within mouse cells at ten frames per second. Shechtman’s team is now using this approach to characterize the cellular entry of nanoparticles for clinical imaging and therapeutic applications.

On-the-job training

On the other side of the equation, algorithms can also be leveraged to actively reconfigure a microscope’s settings or the design of an experiment to deliver the optimal results for a given sample or set of research priorities. “You have the ability to modify the PSF, the illumination, the detector and the properties of the sample itself — there’s all these different things you could attempt to optimize in a smart microscope,” says Duke University researcher Roarke Horstmeyer.

Introducing real-time control for even a single parameter can lead to considerable improvements. For example, Laura Waller of the University of California Berkeley and her graduate student Henry Pinkard developed a multiphoton microscopy system in which a machine learning algorithm actively adjusts the level of illumination throughout the imaging process. “We’re trying to find the optimal amount to increase the intensity at every point as you scan through this 3D sample,” says Waller. In an initial demonstration with mouse lymph node samples, their team was able to image individual T cells within a far larger volume than was possible before. The adaptive illumination scheme enabled them to overcome tissue scattering from deep inside the sample while also limiting photodamage.
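
The sketch below conveys the closed-loop idea in its simplest form: measure the signal from each plane, compare it to a target level, and nudge the laser power up or down within a photodamage ceiling. The attenuation model and proportional rule are invented for illustration; the published system learns its adjustments from data rather than following a fixed feedback gain.

```python
# Toy closed-loop illumination controller: as the scan moves deeper into
# scattering tissue, measured signal drops, and the controller raises laser
# power just enough to hold a target signal level, under a photodamage cap.
# The attenuation model, gains and units are invented for illustration.
import numpy as np

target_signal = 100.0      # desired mean detector counts per plane
max_power = 5.0            # hard cap to limit photodamage (arbitrary units)
power = 1.0
gain = 0.01                # proportional feedback gain

def acquire_plane(depth_um: float, power: float) -> float:
    """Stand-in for the microscope: signal falls with depth, rises with power."""
    attenuation = np.exp(-depth_um / 150.0)
    return 120.0 * power * attenuation

for depth in np.arange(0, 400, 20):                 # scan through the volume
    signal = acquire_plane(depth, power)
    error = target_signal - signal
    power = float(np.clip(power + gain * error, 0.1, max_power))
    print(f"depth {depth:3.0f} um  power {power:4.2f}  signal {signal:6.1f}")
```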

For Jan Huisken of the Morgridge Institute for Research in Wisconsin, the priorities of smart microscopy are a little different. Huisken works extensively with multi-view light-sheet microscopy, a relatively gentle imaging modality that can capture detailed 3D information about large specimens — including whole embryos — from myriad angles while inflicting minimal photodamage. This can leave users buried in potentially irrelevant imaging data. “Only a small fraction of that is information that scientists would actually want to have in the end,” says Huisken. “That led to us asking, ‘wouldn’t it be wonderful if the microscope could think ahead and only acquire the useful data?’” As an initial step in that direction, his group published a ‘smart rotation’ technique that uses software to analyze initial imaging data from an experiment to guide subsequent imaging steps5. “The microscope can automatically find the perfect angle that gives you the most information, or decide if you need to acquire multiple views,” says Huisken.
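
A minimal version of that logic might look like the sketch below: take a fast, low-light preview at each candidate rotation, score it with an image-content metric, and commit full acquisitions only to the most informative angles. The gradient-energy score and preview stub are generic placeholders and are not drawn from the published smart-rotation method.

```python
# Sketch of the 'choose the most informative view' idea: score a quick preview
# at each candidate rotation and acquire full stacks only at the best angles.
# The preview stub and the scoring metric are generic placeholders.
import numpy as np

def preview_at_angle(angle_deg: int) -> np.ndarray:
    """Stand-in for a fast low-light snapshot of the sample at one rotation."""
    rng = np.random.default_rng(angle_deg)
    return rng.random((128, 128))

def information_score(image: np.ndarray) -> float:
    """Gradient energy: a crude proxy for how much structure a view contains."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx**2 + gy**2))

candidate_angles = range(0, 360, 15)
scores = {a: information_score(preview_at_angle(a)) for a in candidate_angles}

# Keep the few angles that together cover the sample best (here: top three).
best_views = sorted(scores, key=scores.get, reverse=True)[:3]
print("acquire full stacks at angles:", best_views)
```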

These are all important steps toward the goal of achieving more extensive, multi-parameter control of the imaging experiment. Flavie Lavoie-Cardinal of the CERVO Brain Research Centre in Quebec City described one such platform in a 2019 publication, in which she used machine learning to optimize STED imaging experiments across a range of parameters, such as laser power and scanning speed6. After teaching the software what a ‘good’ image looks like for a particular set of experiments, she found that it quickly became accomplished at adjusting the microscope on its own. “The algorithm got much, much better at finding the optimal parameters for one workflow than a non-experienced user,” says Lavoie-Cardinal. “I was faster, because I have ten years of experience, but it would find the same parameters that I do.” This has proven valuable for her group’s work on understanding the dynamic molecular-scale cytostructural changes that take place within neurons during various activation states.
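
Stripped to its essentials, such parameter optimization is a propose, acquire, score loop, as in the toy sketch below: sample a candidate setting of laser power and dwell time, score the resulting image, and keep the best trade-off. Here the scoring function and the random search are invented stand-ins; in the published work the quality score comes from learning on expert-rated STED images, and the optimization is driven by machine learning rather than random sampling.

```python
# Toy parameter-space optimization for an imaging experiment: propose a
# setting (laser power, pixel dwell time), acquire, score the result, and keep
# the best trade-off between image quality and light dose. The scoring
# function is an invented stand-in for a learned quality model.
import random

def acquire_and_score(power: float, dwell_us: float) -> float:
    """Placeholder: quality rises with light dose, but photobleaching
    penalizes high power and long dwell times."""
    dose = power * dwell_us
    quality = dose / (1.0 + dose)          # diminishing returns
    bleaching_penalty = 0.02 * dose
    return quality - bleaching_penalty

random.seed(0)
best = None
for trial in range(50):
    power = random.uniform(0.5, 10.0)       # arbitrary units
    dwell = random.uniform(1.0, 50.0)       # microseconds
    score = acquire_and_score(power, dwell)
    if best is None or score > best[0]:
        best = (score, power, dwell)

print(f"best score {best[0]:.3f} at power {best[1]:.1f}, dwell {best[2]:.1f} us")
```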

STED super-resolution image of a neuron, with overlaid color indicating quality scores used to inform the machine learning–based optimization process. Credit: Flavie Lavoie-Cardinal, CERVO Brain Research Centre

Manley’s group is likewise looking to introduce greater computational control to SIM, a super-resolution modality that offers the advantage of being faster than STED or STORM. Her team recently developed a multi-field, automated iteration of instant SIM (iSIM)7, a platform first developed by Hari Shroff and Andrew York at the US National Institutes of Health. This version of iSIM can capture data at rates of up to 100 frames per second, but such temporal resolution is only essential for a subset of cellular events. The processes of mitochondrial fission and fusion are a major focus of her lab, and she notes that these events play out over a range of different timescales. “The ultimate step in the fission process takes place on this very fast timescale,” she says, whereas other aspects of mitochondrial dynamics unfold more slowly. A well-trained machine learning algorithm could thus help the microscope recognize those events and adapt accordingly by speeding up or slowing down data collection. “If you could use a real-time adaptive controller to capture just those events you’re interested in at a hundred frames per second, that would be amazing,” says Manley.
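
In outline, such an event-driven controller could be as simple as the sketch below: score every incoming frame with a detector and switch between a slow default rate and a fast burst whenever the score crosses a threshold. The thresholded intensity detector, rates and fake data are placeholders; in a real system the detector would be a trained classifier running on the live image stream.

```python
# Sketch of event-driven acquisition: run slowly by default, switch to a fast
# burst only while a detector flags an event of interest. The detector here is
# a trivial intensity threshold standing in for a trained classifier.
import numpy as np

SLOW_FPS, FAST_FPS = 1, 100
rng = np.random.default_rng(1)

def event_probability(frame: np.ndarray) -> float:
    """Placeholder detector: in practice, a neural network scores each frame."""
    return float(frame.max())               # pretend bright spots mark events

frame_rate = SLOW_FPS
for t in range(20):
    # Fake image stream with an 'event' in the middle of the series.
    frame = rng.random((64, 64)) * (1.5 if 8 <= t <= 12 else 0.8)
    p = event_probability(frame)
    frame_rate = FAST_FPS if p > 1.0 else SLOW_FPS
    print(f"t={t:02d}  event score {p:4.2f}  -> acquire at {frame_rate} fps")
```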

The sophisticated machine-controlled manipulation and imaging experiments now being performed by the Keller lab represent the culmination of years of effort toward an automated light-sheet system. In 2016, his group published the foundational version of this platform, known as AutoPilot8. AutoPilot relies on software control over the light sheets and detectors to enable long-term analysis of live samples while maintaining stable focus and resolution and carefully regulating the amount of light delivered to the sample. “Throughout the period of embryogenesis, you have different optical properties in different parts of the sample for all sorts of different reasons,” says Keller. “If you don’t have a microscope that can adapt and figure out what’s going on there, then you can’t get high-quality image data throughout the process.” In subsequent work, his group has modified the AutoPilot system so that it can achieve long-term imaging of even larger samples, including monitoring the development of a mouse embryo from the gastrula stage to the early phases of organogenesis9.

Software for hard problems

Keller’s approach is notable in that it does not employ machine learning, but rather uses a generalizable algorithm based on mathematical principles. “We looked at over 30 different metrics and had ‘ground truth’ image data that we annotated and asked what’s the best that a human could do if they had to solve this problem,” he says. The resulting strategy, coupled with a few clever tricks to facilitate the quality control process — such as a patterned light-sheet that allows the instrument to easily recognize whether it is in proper focus — proved sufficient to guide AutoPilot on its mission.
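
The sketch below gives a generic flavor of this metric-driven approach: score images acquired at a series of candidate focus offsets with a sharpness measure and move the optics to the best one. The Tenengrad-style score and the simulated defocus are illustrative only and are not presented here as the specific metrics evaluated in the AutoPilot work.

```python
# Generic example of metric-driven focus correction: score each candidate
# focus setting by gradient sharpness and move the detection optics to the
# best one. The sharpness score and simulated defocus are illustrative only.
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def sharpness(image: np.ndarray) -> float:
    """Tenengrad-style focus score: mean squared gradient magnitude."""
    gx, gy = sobel(image, axis=0), sobel(image, axis=1)
    return float(np.mean(gx**2 + gy**2))

# Simulate a reference scene blurred by increasing defocus.
rng = np.random.default_rng(2)
scene = rng.random((256, 256))

def image_at_offset(offset_um: float) -> np.ndarray:
    return gaussian_filter(scene, sigma=abs(offset_um) + 0.01)  # defocus ~ blur

offsets = np.linspace(-3, 3, 13)          # candidate focus offsets (um)
scores = [sharpness(image_at_offset(o)) for o in offsets]
best_offset = offsets[int(np.argmax(scores))]
print(f"move detection focus by {best_offset:+.1f} um")
```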

But most computer-guided microscopy efforts rely on machine learning algorithms that must be carefully trained before being unleashed in actual experiments. These algorithms are only as good as their education allows and must be fed vast amounts of data that reflect the kinds of tasks they will ultimately be tackling. “Every time I would generate an image, I would rate it,” says Lavoie-Cardinal. “We generated a large databank of rated super-res images with many different structures, different proteins, different imaging contexts — and with that, we could train our network.” Modalities like STORM, which do not immediately generate human-recognizable images, offer more flexibility in terms of training. For his work with DeepSTORM3D, Shechtman was able to generate numerous simulated STORM images that reflect a wide range of experimental scenarios, and these proved sufficient for guiding the PSF engineering process.

Inadequate training can greatly restrict the usefulness of a machine learning algorithm. “If you train it so that it only sees microtubules, it’s going to recover microtubules no matter what you give it,” says Shechtman. But some algorithms can achieve a surprisingly intuitive capacity after training. For example, during the development of Deep-Z, Ozcan’s group relied entirely on axial defocusing — moving the specimen in and out of focus on the z axis — for the training process. But the resulting algorithm punched above its weight in grappling with samples that were tilted, bent and warped. “It kind of generalized to handle surface-to-surface transformations, even though it was only trained with plane-to-plane transformations,” he says. Nevertheless, users need to remain fully aware of the biases that can arise even from a well-trained neural network, and should establish robust quality control measures and monitor their data carefully to avoid being misled by appealing but inaccurate results.

A well-chosen algorithmic strategy is also important. Many groups in the computer-assisted microscopy space are using deep learning, a subset of machine learning–based approaches that rely on neural networks to analyze and interpret data. The GAN strategy used by Ozcan actually employs two dueling neural networks, where the output of the first is critiqued and fact-checked by the second. “It’s like if you have fake Picasso: an expert at Picasso will recognize that and say ‘it looks good but it’s fake’,” he says. This helps constrain the first neural network from getting too ‘creative’. It can also be helpful to give neural networks foundational knowledge of real-world principles in areas such as optics, and Waller’s team relies heavily on what is known as physics-based learning to complement deep learning. “It’s about trying to get the best of both worlds,” she says.

Almost by definition, the machine learning process is something of a black box: a computationally led process to solve problems too difficult for humans to tackle. This can be disconcerting to some biologists, who may be skeptical about ceding control over their experiment to an algorithm. “Some people feel threatened that they will lose their jobs or their capacity to have critical thinking about their experiments, but I think that’s the incorrect way to think about it; I think critical thinking will always need to be here,” says Henriques. But on the flip side, technology enthusiasts and early adopters should be cautious about seeing this as a universal solution to every imaging problem that arises. “A lot can still be done with conventional computational techniques, and this is an advantage,” says Huisken, whose own smart microscopy efforts still largely rely on such methods. “Because although machine learning is powerful, we probably still know too little about what it’s actually doing.”

Automatic for the people

Despite the skeptics, the general concept of smart microscopy is steadily drawing interest from life scientists with hard imaging problems to tackle. “We collaborate a lot with biologists, and they’re just incredibly excited about it,” says Manley.

Much of the foundational work in this space is taking place in labs with a heavy focus on mathematics, engineering and computer science, and it can be a challenge to translate this into a user-friendly framework for dedicated wet-lab denizens. But smart microscopy’s pioneers are taking care to disseminate their code in an accessible open-source format and working on polished interfaces that make computer-assisted experiments more intuitive. For example, Henriques’s team has been building a software suite called ZeroCostDL4Mic, which offers an entry-level framework for developing smart imaging tools10. “It sends data to the cloud and uses free services to train neural networks that would then do predictions on how to improve data collected on a microscope, or even how to control a microscope,” says Henriques, noting that these tools can also be installed and run locally rather than on the cloud.

The upgrade from conventional to smart microscopy need not be expensive. Although training a machine learning algorithm can be computationally intensive, most experiments can be done with existing equipment. “We actually developed more or less everything using a normal laptop and the acquisition computer of our microscopes,” says Lavoie-Cardinal. And even for the most sophisticated automated configurations, like the AutoPilot-based systems developed in the Keller lab, the cost of upgrades is minor relative to the pricey instruments to which they are being attached. For the platform described in his 2016 paper, Keller estimates spending about $20,000 to modify a $300,000 light-sheet microscope.

Huisken and colleagues are now looking into ways to alleviate that latter cost through their Flamingo project, in which portable but powerful light-sheet microscopes are essentially loaned out to labs around the world. “This system is by no means a smart microscope — but it is perfectly suited to be turned into one,” says Huisken. These instruments can be run under automated control by an external computer, or by a remote expert operator in an entirely different location. On the experimental side, the user only needs to know how to prepare and load their sample, along with what sort of analysis they’re looking to perform. “The intelligence that analyzes this data could be anywhere in the world — in the cloud, on a Google farm, or whatever — and this entity just needs to provide an update to the small text file that gives instructions to the microscope,” says Huisken.

Todd Bakken puts the finishing touches on an early prototype of the Flamingo light sheet microscope in Jan Huisken’s lab. Credit: Jan Huisken, Morgridge Institute for Research

But this technology can also be layered onto even simpler formats. Ozcan’s team has developed accessible microscopes that can be coupled to a cell phone camera and is upgrading their performance with the same GAN-based approach that turns confocal-acquired data into STED-quality super-resolution images. “The raw data coming from the mobile microscope are trained against a benchtop microscope, so that some of the color aberrations and resolution loss are mitigated, and it looks like it’s coming from a benchtop state-of-the-art microscope,” he says. His group has already applied this approach as a tool for screening for sickle-cell disease from patient blood samples in point-of-care settings.

These demonstrations suggest that the democratization of smart microscopy is gaining momentum. “I think ten years from now we’ll all be using self-driving microscopes,” says Henriques. And in parallel, the field’s pioneers are continuing to push the capabilities of what the technology can accomplish. Even as he continues his smart microscope–facilitated exploration of the Drosophila nervous system, Keller is now working toward another ambitious long-term imaging project: assembling comprehensive digital maps of developing embryos. “We want to know where every single cell is at all times: what’s dividing and where it’s moving, and keeping track of its identity,” he says. The challenges are steep; Keller notes that conventional imaging techniques are still not up to the task, let alone a computationally guided system. But his group and their collaborators in Jan Funke’s lab at Janelia are hard at work on a next-generation deep-learning framework, along with a sophisticated control system that can rapidly coordinate imaging in response to its instructions. “We hope within the next few years to basically have microscopes that can really follow the development of an animal in real time,” he says.