Deep-Z uses deep learning to go from a two-dimensional snapshot to three-dimensional fluorescence images. The method improves imaging speed while reducing light dose, and was shown to be useful for accurate structural and functional imaging of neurons in Caenorhabditis elegans.
Deep learning in microscopy
The December 2019 issue of Nature Methods features a focus on Deep Learning in Microscopy. In this web collection, related content featured in the Nature journals is highlighted to celebrate these technological advances.
Recent research papers can be found under Research; Reviews, Perspectives, and news features are under Comments and Reviews. These publications are selected by the editors of the Nature journals, and the collection will be updated regularly throughout the year.
Content-aware image restoration (CARE) uses deep learning to improve microscopy images. CARE bypasses the trade-offs between imaging speed, resolution, and maximal light exposure that limit fluorescence imaging, thereby enabling new discoveries.
Convolutional neural networks enable prediction of fluorescently labeled structures from three-dimensional time-lapse transmitted-light images. Applications include multiplexed long time-lapse imaging and prediction of fluorescence in electron micrographs.
CDeep3M provides a user-friendly tool for deep-learning-based image segmentation via a cloud-based deep convolutional neural network. Demonstrations include challenging light, X-ray, and electron microscopy segmentation tasks.
A user-friendly ImageJ plugin enables the application and training of U-Nets for deep-learning-based image segmentation, detection and classification tasks with minimal labeling requirements.
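For context, the U-Net architecture named above pairs a downsampling encoder path with an upsampling decoder path joined by skip connections. The shape-level data flow can be sketched in a few lines of NumPy (illustrative only, with no learned weights; all function names here are made up for the sketch):

```python
import numpy as np

def downsample(x):
    """2x2 max pooling, as in a U-Net encoder step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling, as in a U-Net decoder step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_dataflow(image):
    """One-level sketch: encode, decode, then concatenate the
    skip connection along a new channel axis."""
    skip = image                 # feature map saved for the skip connection
    encoded = downsample(image)  # spatial resolution halved
    decoded = upsample(encoded)  # resolution restored
    # Skip connection: stack encoder features with decoder features
    return np.stack([skip, decoded], axis=0)

x = np.arange(16, dtype=float).reshape(4, 4)
print(unet_dataflow(x).shape)  # (2, 4, 4): two "channels" at input resolution
```

The skip connections are what let U-Nets produce segmentations at full input resolution while still aggregating context from the coarse encoder levels.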
This analysis describes the results of three Cell Tracking Challenge editions, which examined the performance of cell segmentation and tracking algorithms, and provides practical feedback for users and developers.
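Tracking pipelines of the kind evaluated in the challenge typically segment each frame and then link detections across time. A minimal sketch of greedy nearest-centroid linking between two frames (a generic illustration, not any challenge entry):

```python
import numpy as np

def link_frames(centroids_t, centroids_t1, max_dist=10.0):
    """Greedily link each centroid in frame t to the nearest
    unclaimed centroid in frame t+1, within max_dist pixels."""
    links, claimed = {}, set()
    for i, c in enumerate(centroids_t):
        dists = np.linalg.norm(centroids_t1 - c, axis=1)
        for j in np.argsort(dists):
            if dists[j] > max_dist:
                break              # no candidate close enough: track ends
            if int(j) not in claimed:
                links[i] = int(j)
                claimed.add(int(j))
                break
    return links

frame_t  = np.array([[5.0, 5.0], [20.0, 20.0]])
frame_t1 = np.array([[21.0, 19.0], [6.0, 4.0]])
print(link_frames(frame_t, frame_t1))  # {0: 1, 1: 0}
```

Real entries replace this greedy step with global assignment and explicit handling of divisions, appearances, and disappearances, which is where the challenge metrics differentiate methods.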
Annotated image data are required for image analysis, to test analytical methods, and to train learning algorithms. This paper describes and characterizes a tool that allows researchers to crowdsource image-annotation tasks.
A deep learning approach enables fast and robust prediction of hematopoietic stem cell lineage choice in time-lapse imaging three generations before conventional marker onset.
The 2018 Data Science Bowl challenged competitors to develop an accurate tool for segmenting stained nuclei from diverse light microscopy images. The winners deployed innovative deep-learning strategies to realize configuration-free segmentation.
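Segmentation challenges of this kind are typically scored with intersection-over-union (IoU) based metrics; the core quantity can be computed on binary masks as follows (a minimal sketch, not the competition's full scoring code):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0]])
print(iou(pred, truth))  # 2 / 3
```

Per-nucleus scoring then matches predicted and ground-truth objects at a series of IoU thresholds, so both missed nuclei and spurious detections are penalized.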
The 2018 Human Protein Atlas Image Classification competition sought to improve automated classification of protein subcellular localizations from fluorescence images. The winning strategies involved innovative deep learning approaches for multi-label classification.
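Multi-label classification, the task the competition targeted, differs from ordinary classification in that one image can carry several localization labels at once; the standard formulation is a per-class sigmoid with independent thresholds. A generic sketch (the label names and threshold are illustrative, not the winning entries' settings):

```python
import numpy as np

LABELS = ["nucleus", "cytosol", "mitochondria"]  # illustrative label set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_labels(logits, threshold=0.5):
    """Each class is decided independently, so an image may
    receive zero, one, or several labels."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [name for name, p in zip(LABELS, probs) if p >= threshold]

print(predict_labels([2.0, -1.0, 0.5]))  # ['nucleus', 'mitochondria']
```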
Deep learning enables cross-modality super-resolution imaging, including confocal-to-STED and TIRF-to-TIRF-SIM image transformation. Imaging of a larger field of view and greater depth of field is possible with higher resolution and signal-to-noise ratio at lower light doses.
Accelerating PALM/STORM microscopy with deep learning allows super-resolution imaging of >1,000 cells in a few hours.
Deep learning is combined with massive-scale citizen science to improve large-scale image classification
Pattern recognition in imaging data by >300,000 players of a global, online, commercial computer game is combined with deep learning to improve the accuracy of annotation of subcellular protein localization.
Unsupervised data to content transformation with histogram-matching cycle-consistent generative adversarial networks
Labelling training data to train machine learning models is very time intense. A new method shows that content transformation can be effectively learned from generated data, avoiding the need for any manual labelling in segmentation and classification tasks.
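Histogram matching, the component named in the title above, transforms an image so that its intensity distribution follows that of a reference image. The classic operation can be sketched in NumPy, independently of the paper's GAN machinery:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their empirical CDF matches the
    reference image's CDF (classic histogram matching)."""
    src = source.ravel()
    ref = np.sort(reference.ravel())
    # Rank of each source pixel -> quantile -> reference value at that quantile
    ranks = np.argsort(np.argsort(src))
    quantiles = ranks / max(len(src) - 1, 1)
    idx = np.round(quantiles * (len(ref) - 1)).astype(int)
    return ref[idx].reshape(source.shape)

src = np.array([[0.0, 0.2], [0.4, 1.0]])
ref = np.array([[10.0, 20.0], [30.0, 40.0]])
print(match_histogram(src, ref))  # [[10. 20.] [30. 40.]]
```

In the paper's setting, constraining generated images this way helps keep the cycle-consistent translation faithful to the target modality's intensity statistics.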
Intelligent feature engineering and ontological mapping of brain tumour histomorphologies by deep learning
Neural networks are a promising digital pathology tool but are often criticized for their limited explainability. Faust and colleagues demonstrate how machine-learned features correlate with human-understandable histological patterns and groupings, permitting increased transparency of deep learning tools in medicine.
Volume electron microscopy data of brain tissue can tell us much about neural circuits, but increasingly large data sets demand automation of analysis. Here, the authors introduce cellular morphology neural networks and successfully automate a range of morphological analysis tasks.
Automated analysis of RNA localisation in smFISH data has been elusive. Here, the authors simulate and use a large dataset of images to design and validate a framework for highly accurate classification of sub-cellular RNA localisation patterns from smFISH experiments.
Deconvolution of subcellular protrusion heterogeneity and the underlying actin regulator dynamics from live cell imaging
Cell protrusion dynamics are heterogeneous at the subcellular level, but current analyses operate at the cellular or ensemble level. Here the authors develop a computational framework to quantify subcellular protrusion phenotypes and reveal the underlying actin regulator dynamics at the leading edge.
Live-cell phenotypic-biomarker microfluidic assay for the risk stratification of cancer patients via machine learning
An assay that uses machine-learning algorithms on phenotypic-biomarker data from live primary cells predicts post-surgical adverse pathology in prostate-cancer and breast-cancer tissue samples from patients.
An active texture-based digital atlas enables automated mapping of structures and markers across brains
An active atlas for automatic alignment of brains to a reference atlas is presented. The method uses the fine-scale pattern of tissue. The atlas is refined by each new brain and can inform on the structural variability between different brains.
The idtracker.ai software tracks freely moving animals in large groups of up to 100 individuals. The tool is versatile and has been applied to groups of fruit flies, zebrafish, medaka, ants and mice.
LEAP is a deep-learning-based approach for the analysis of animal pose. LEAP’s graphical user interface facilitates training of the deep network. The authors illustrate the method by analyzing Drosophila and mouse behavior.
Flood-filling networks are a deep-learning-based pipeline for reconstruction of neurons from electron microscopy datasets. The approach results in exceptionally low error rates, thereby reducing the need for extensive human proofreading.
webKnossos is a browser-based tracing and annotation tool for 3D electron microscopy data sets that is optimized for seamless data viewing. The tool’s flight-mode view facilitates fast neurite tracing because of its egocentric viewpoint.
SyConn is a computational framework that infers the synaptic wiring of neurons in volume electron microscopy data sets with machine learning. It has been applied to zebra finch, mouse and zebrafish neuronal tissue samples.
Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.
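Pose-estimation tools of this kind predict, for each user-defined body part, a confidence map over the image; the tracked coordinate is then read off as the map's peak. A generic sketch of that final readout step (not DeepLabCut's actual code):

```python
import numpy as np

def keypoint_from_heatmap(heatmap):
    """Return (row, col, confidence) at the peak of a single
    body part's confidence map."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(r), int(c), float(heatmap[r, c])

# Toy 3x3 confidence map with its peak at row 1, column 2
hm = np.array([[0.1, 0.2, 0.1],
               [0.0, 0.3, 0.9],
               [0.1, 0.2, 0.1]])
print(keypoint_from_heatmap(hm))  # (1, 2, 0.9)
```

The returned confidence is what lets downstream analyses discard frames in which a body part is occluded or poorly detected.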
Comments and Reviews
Machine learning approaches that include deep learning are moving beyond image classification to change the way images are made.
Two approaches apply deep learning to improve single-molecule localization microscopy.
A deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation. Richards et al. argue that this inspires fruitful approaches to systems neuroscience.
A Review of applications of deep learning in image analysis that offers practical guidance for biologists.
This Perspective highlights recent applications of deep learning in fluorescence microscopy image reconstruction and discusses future directions and limitations of these approaches.
ilastik is a user-friendly interactive tool for machine-learning-based image segmentation, object classification, counting and tracking.