The December 2019 issue of Nature Methods features a focus on Deep Learning in Microscopy. In this web collection, related content featured in the Nature journals is highlighted to celebrate these technological advances.
Recent research papers can be found under Research; Reviews, Perspectives, and news features can be found under Comments and Reviews. These publications were selected by the editors of Nature journals, and the collection will be updated regularly throughout the year.
Deep-Z uses deep learning to reconstruct three-dimensional fluorescence images from a single two-dimensional snapshot. The method improves imaging speed while reducing the light dose, and was shown to enable accurate structural and functional imaging of neurons in Caenorhabditis elegans.
Content-aware image restoration (CARE) uses deep learning to improve microscopy images. CARE bypasses the trade-offs among imaging speed, resolution, and maximal light exposure that otherwise limit fluorescence imaging, opening new avenues for discovery.
Convolutional neural networks enable prediction of fluorescently labeled structures from three-dimensional time-lapse transmitted-light images. Applications include multiplexed long time-lapse imaging and prediction of fluorescence in electron micrographs.
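The building block of such label-free prediction is the convolution: a network learns filters that map transmitted-light intensities to fluorescence intensities, pixel neighbourhood by pixel neighbourhood. A minimal sketch of a single 2-D convolution (valid padding) in plain NumPy follows; the image and filter values are illustrative and not taken from any trained model in the papers above:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is a weighted sum over a local window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative 5x5 "transmitted-light" patch and a 3x3 horizontal-gradient filter
img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0., 0., 0.],
                   [-1., 0., 1.],
                   [0., 0., 0.]])
pred = conv2d(img, kernel)
print(pred.shape)  # (3, 3)
```

A real network stacks many such learned filters with nonlinearities in between; this sketch only shows the elementary operation.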
CDeep3M provides a user-friendly tool for deep-learning-based image segmentation via a cloud-based deep convolutional neural network. Demonstrations include challenging light, X-ray, and electron microscopy segmentation tasks.
This analysis describes the results of three Cell Tracking Challenge editions for examining the performance of cell segmentation and tracking algorithms and provides practical feedback for users and developers.
Annotated image data are required for image analysis, to test analytical methods, and to train learning algorithms. This paper describes and characterizes a tool that allows researchers to crowdsource image-annotation tasks.
The 2018 Data Science Bowl challenged competitors to develop an accurate tool for segmenting stained nuclei from diverse light microscopy images. The winners deployed innovative deep-learning strategies to realize configuration-free segmentation.
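For context, the classical baseline that such deep-learning entries improve upon reduces nucleus segmentation to two steps: intensity thresholding followed by connected-component labelling. A minimal pure-NumPy sketch of that baseline (4-connectivity; the toy image and threshold are illustrative, not from the challenge data):

```python
import numpy as np

def segment_nuclei(image, threshold):
    """Threshold an image, then label 4-connected foreground components."""
    mask = image > threshold
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a labelled component
        current += 1
        stack = [start]
        while stack:  # iterative flood fill from the seed pixel
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, current

# Toy image with two bright "nuclei" on a dark background
img = np.zeros((6, 6))
img[1:3, 1:3] = 1.0
img[4:6, 4:6] = 1.0
labels, n = segment_nuclei(img, 0.5)
print(n)  # 2
```

A fixed global threshold fails on images with varying stains and illumination, which is exactly the gap the challenge winners closed with learned, configuration-free segmentation.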
The 2018 Human Protein Atlas Image Classification competition sought to improve automated classification of protein subcellular localizations from fluorescence images. The winning strategies involved innovative deep learning approaches for multi-label classification.
Deep learning enables cross-modality super-resolution imaging, including confocal-to-STED and TIRF-to-TIRF-SIM image transformation. It allows imaging of a larger field of view with greater depth of field, higher resolution, and better signal-to-noise ratio at lower light doses.
Pattern recognition in imaging data by >300,000 players of a global, online, commercial computer game is combined with deep learning to improve the accuracy of annotation of subcellular protein localization.
Labelling training data for machine learning models is very time-intensive. A new method shows that content transformation can be learned effectively from generated data, avoiding the need for any manual labelling in segmentation and classification tasks.
Neural networks are a promising digital pathology tool but are often criticized for their limited explainability. Faust and colleagues demonstrate how machine-learned features correlate with human-understandable histological patterns and groupings, increasing the transparency of deep learning tools in medicine.
Volume electron microscopy data of brain tissue can tell us much about neural circuits, but increasingly large data sets demand automation of analysis. Here, the authors introduce cellular morphology neural networks and successfully automate a range of morphological analysis tasks.
Automated analysis of RNA localisation in smFISH data has been elusive. Here, the authors simulate and use a large dataset of images to design and validate a framework for highly accurate classification of sub-cellular RNA localisation patterns from smFISH experiments.
Cell protrusion dynamics are heterogeneous at the subcellular level, but current analyses operate at the cellular or ensemble level. Here the authors develop a computational framework to quantify subcellular protrusion phenotypes and reveal the underlying actin regulator dynamics at the leading edge.
An assay that applies machine-learning algorithms to phenotypic-biomarker data from live primary cells predicts post-surgical adverse pathology in prostate cancer and breast cancer tissue samples from patients.
An active atlas for automatic alignment of brains to a reference atlas is presented. The method uses fine-scale tissue patterns; the atlas is refined by each newly registered brain and can report on the structural variability between different brains.
LEAP is a deep-learning-based approach for the analysis of animal pose. LEAP’s graphical user interface facilitates training of the deep network. The authors illustrate the method by analyzing Drosophila and mouse behavior.
Flood-filling networks are a deep-learning-based pipeline for reconstruction of neurons from electron microscopy datasets. The approach results in exceptionally low error rates, thereby reducing the need for extensive human proofreading.
webKnossos is a browser-based tracing and annotation tool for 3D electron microscopy data sets that is optimized for seamless data viewing. The tool’s flight-mode view facilitates fast neurite tracing because of its egocentric viewpoint.
SyConn is a computational framework that infers the synaptic wiring of neurons in volume electron microscopy data sets with machine learning. It has been applied to zebra finch, mouse and zebrafish neuronal tissue samples.
Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.
A deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation. Richards et al. argue that this inspires fruitful approaches to systems neuroscience.