
  • Brief Communication
  • Published:

Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy

Abstract

Understanding cells as integrated systems is central to modern biology. Although fluorescence microscopy can resolve subcellular structure in living cells, it is expensive, is slow, and can damage cells. We present a label-free method for predicting three-dimensional fluorescence directly from transmitted-light images and demonstrate that it can be used to generate multi-structure, integrated images. The method can also predict immunofluorescence (IF) from electron micrograph (EM) inputs, extending the potential applications.


Fig. 1: Label-free imaging tool pipeline and application using 3D transmitted light-to-fluorescence models.
Fig. 2: Label-free imaging tool facilitates 2D automated registration across imaging modalities.


Data availability

Data used to train the 3D models are available at https://downloads.allencell.org/publication-data/label-free-prediction/index.html.

References

  1. Skylaki, S., Hilsenbeck, O. & Schroeder, T. Challenges in long-term imaging and quantification of single-cell dynamics. Nat. Biotechnol. 34, 1137–1144 (2016).

  2. Chen, B.-C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).

  3. Selinummi, J. et al. Bright field microscopy as an alternative to whole cell fluorescence in automated analysis of macrophage images. PLoS One 4, e7497 (2009).

  4. Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015 (eds Navab, N. et al.) 234–241 (Springer, Cham, 2015).

  5. Collman, F. et al. Mapping synapses by conjugate light-electron array tomography. J. Neurosci. 35, 5792–5807 (2015).

  6. Russell, M. R. G. et al. 3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy. J. Cell Sci. 130, 278–291 (2017).

  7. Kopek, B. G., Shtengel, G., Grimm, J. B., Clayton, D. A. & Hess, H. F. Correlative photoactivated localization and scanning electron microscopy. PLoS One 8, e77209 (2013).

  8. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Preprint at https://www.biorxiv.org/content/early/2018/07/03/236463 (2018).

  9. Graham, F. L., Smiley, J., Russell, W. C. & Nairn, R. Characteristics of a human cell line transformed by DNA from human adenovirus type 5. J. Gen. Virol. 36, 59–74 (1977).

  10. Rasheed, S., Nelson-Rees, W. A., Toth, E. M., Arnstein, P. & Gardner, M. B. Characterization of a newly derived human sarcoma cell line (HT-1080). Cancer 33, 1027–1033 (1974).

  11. Goshima, G. et al. Genes required for mitotic spindle assembly in Drosophila S2 cells. Science 316, 417–421 (2007).

  12. Gurcan, M. N. et al. Histopathological image analysis: a review. IEEE Rev. Biomed. Eng. 2, 147–171 (2009).

  13. Christiansen, E. M. et al. In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792–803 (2018).

  14. Roberts, B. et al. Systematic gene tagging using CRISPR/Cas9 in human stem cells to illuminate cell organization. Mol. Biol. Cell 28, 2854–2874 (2017).

  15. Cardona, A. et al. TrakEM2 software for neural circuit reconstruction. PLoS One 7, e38011 (2012).

  16. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2017).

  17. Zhou, S. K., Greenspan, H. & Shen, D. Deep Learning for Medical Image Analysis (Academic Press, Cambridge, MA, 2017).

  18. Litjens, G. et al. A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017).

  19. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).

  20. Farnebäck, G. Two-frame motion estimation based on polynomial expansion. In Image Analysis: SCIA 2003 (eds Bigun, J. & Gustavsson, T.) 363–370 (Springer, Berlin, Heidelberg, 2003).


Acknowledgements

We thank the entire Allen Institute for Cell Science team, who generated and characterized the gene-edited hiPS cell lines, developed image-based assays, and recorded the high replicate datasets suitable for modeling and without whom this work would not have been possible. We especially thank the Allen Institute for Cell Science Gene Editing, Assay Development, Microscopy, and Pipeline teams for providing cell lines and images of different transmitted-light imaging modalities, and particularly K. Gerbin, A. Nelson, and H. Malik for performing the cardiomyocyte differentiation and culture, and W. Leung, J. Tang, M. Hendershott, and N. Gaudreault for gathering the additional time series, CAAX-labeled, cardiomyocyte, HEK293, and HT-1080 data. We thank the Allen Institute for Cell Science Animated Cell team and T. Do specifically for providing expertise in figure preparation. We thank D. Fernandes for developing an early proof-of-concept 2D version of the model. We thank members of the Allen Institute for Brain Science Synapse Biology department for preparing samples and providing images that were the basis for training the conjugate array tomography data. These contributions were absolutely critical for model development. HEK293 cells were provided via the Viral Technology Laboratory at the Allen Institute for Brain Science. Cardiomyocyte and hiPSC data in this publication were derived from cells in the Allen Cell Collection, a collection of fluorescently labeled hiPSCs derived from the parental WTC line provided by B.R. Conklin, at Gladstone Institutes. We thank Google Accelerated Science for telling us about studies of 2D deep learning in neurons before we began this project. This work was supported by grants from NIH/NINDS (R01NS092474) (S.S., F.C.) and NIH/NIMH (R01MH104227) (F.C.). We thank P.G. Allen, founder of the Allen Institute for Cell Science, for vision, encouragement, and support.

Author information

Contributions

G.R.J. conceived the project. C.O. implemented the model for 2D and 3D images. M.M. provided guidance and support. C.O., S.S., F.C., and G.R.J. designed computational experiments. C.O., S.S., M.M., F.C., and G.R.J. wrote the paper.

Corresponding author

Correspondence to Gregory R. Johnson.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1 Diagram of CNN architecture underpinning presented tool.

There are no batch normalization or ReLU layers on the last layer of the network, and the number of output channels per layer is shown above the box of each layer. Figure adapted from Lecture Notes in Computer Science 234–241 (2015).

Supplementary Figure 2 Representative labeled structure models and model predictions for 3D transmitted-light microscopy.

Results from predicting DNA fluorescence images with the DNA and DNA+ models are shown in the top row (see Methods). From left, a single z-slice of a 3D transmitted-light input image, a ground-truth ("target", observed) fluorescence image, an image predicted by the DNA model under standard training, and an image predicted by an extended version of the DNA model (DNA+). Subsequent rows are divided into two columns, each with paired images of ground truth and predicted structure localization. In each column, leftmost images show a single z-slice of a ground-truth ("target", observed) fluorescence image for the labeled structure, while images on the right show predicted structure localization given standard model training (see Methods). Presentation order, left to right and top to bottom, is determined by performance (see Methods, Fig. 1c) for nucleoli, nuclear envelope, microtubules, actin filaments, mitochondria, cell membrane, endoplasmic reticulum, nuclear envelope (DIC), actomyosin bundles, tight junctions, Golgi apparatus, and desmosomes models. All models were trained on and used bright-field images as inputs (not shown), except where noted (nuclear envelope, DIC). Z-slices were selected to highlight the structure of interest associated with each model. Image-slice pairs were identically contrast stretched, such that black and white values corresponded to the 0.1 and 99.9th percentiles of the target image intensity, respectively. All images shown are independent from model training data. Each target-prediction example was randomly selected from a larger pool of test images. Pool sizes were 18 for the cell membrane model, 10 for the DIC nuclear envelope model, and 20 for all other models. Scale bar is 20 µm.
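The legend describes an identical contrast stretch applied to each target/prediction pair, mapping black and white to the 0.1 and 99.9th percentiles of the target image. A minimal NumPy sketch of that normalization; the function name and toy arrays are illustrative, not the authors' code:

```python
import numpy as np

def contrast_stretch(img, reference, lo_pct=0.1, hi_pct=99.9):
    """Linearly rescale `img` so that black and white correspond to the
    lo/hi percentiles of the reference (target) image, clipping to [0, 1]."""
    lo, hi = np.percentile(reference, [lo_pct, hi_pct])
    return np.clip((img.astype(float) - lo) / (hi - lo), 0.0, 1.0)

# Identical stretch applied to a toy target/prediction pair, as in the figure:
rng = np.random.default_rng(0)
target = rng.normal(100.0, 20.0, size=(8, 64, 64))   # toy 3D fluorescence stack
predicted = target + rng.normal(0.0, 5.0, size=target.shape)
target_stretched = contrast_stretch(target, target)
predicted_stretched = contrast_stretch(predicted, target)  # same black/white points
```

Using the target's percentiles for both images keeps the pair directly comparable, at the cost of possibly clipping prediction intensities outside the target's range.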

Supplementary Figure 3 Performance of 3D fluorescence image prediction from transmitted-light microscopy (bright-field) image inputs, assessed as a function of the number of training images for DNA and nucleoli models.

The image-wise Pearson correlation coefficient (r) was measured between 20 ground-truth (“target”) and predicted fluorescence images. Each target/predicted image pair in the test set is a point in the resultant r distribution; the 25th, 50th, and 75th percentile image pairs are spanned by the box for each indicated structure, with whiskers indicating the last data points within 1.5× the interquartile range of the lower and upper quartiles. Outliers for each distribution are shown as circles.
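The image-wise Pearson r treats each target/predicted pair as two flattened vectors of voxel intensities. A hedged NumPy sketch of that metric (illustrative, not the authors' implementation):

```python
import numpy as np

def imagewise_pearson_r(target, predicted):
    """Pearson correlation between all voxels of a target/predicted image pair."""
    t = target.ravel().astype(float)
    p = predicted.ravel().astype(float)
    t = t - t.mean()
    p = p - p.mean()
    return float(t @ p / np.sqrt((t @ t) * (p @ p)))

rng = np.random.default_rng(1)
target = rng.normal(size=(4, 32, 32))
perfect = 2.0 * target + 3.0                      # affine rescaling of the target
noisy = target + rng.normal(size=target.shape)    # degraded "prediction"
```

Because Pearson r is invariant to affine intensity rescaling, a prediction that differs from the target only in gain and offset still scores r = 1, which suits a metric comparing fluorescence intensities across imaging sessions.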

Supplementary Figure 4 Performance of 3D DNA prediction from transmitted-light (bright-field) images using a fully 3D versus 2D model.

a, Top and bottom rows each show an example of, from left to right, an input bright-field image, a ground-truth (“target”) fluorescence image, a fluorescence image predicted using a 3D model ("DNA+", see Methods), and a fluorescence image predicted using a 2D model ("combined 2D", see Methods). Two-dimensional model output is composed of predictions on individual input bright-field z-slices combined into a 3D volume. The center z-, y-, and x-slices are shown for each 3D image (denoted by the cyan, green, and red lines, respectively, in the top-leftmost image). Scale bar is 20 µm. The 3D model produces outputs with more detail and accuracy. b, Distributions of the image-wise Pearson correlation coefficient (r) between target and predicted test images from the 3D and 2D models across 20 test images. Each target/predicted image pair in the test set is a point in the resultant r distribution; the 25th, 50th, and 75th percentile image pairs are spanned by the box for each indicated structure, with whiskers indicating the last data points within 1.5× the interquartile range of the lower and upper quartiles. Outliers for each distribution are shown as circles. The number of images (n) in each distribution is 20.
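The "combined 2D" baseline applies a 2D model to each bright-field z-slice independently and stacks the outputs into a volume. A sketch of that combination step, with a dummy stand-in for the trained 2D model (`predict_2d` is hypothetical, not the paper's network):

```python
import numpy as np

def predict_2d(z_slice):
    """Stand-in for a trained 2D slice-to-slice model (here a trivial scaling)."""
    return 0.5 * z_slice

def combined_2d_prediction(brightfield_stack):
    """Apply the 2D model to each z-slice and stack results into a 3D volume."""
    return np.stack([predict_2d(zs) for zs in brightfield_stack], axis=0)

stack = np.ones((19, 65, 97))   # toy (z, y, x) bright-field stack
volume = combined_2d_prediction(stack)
```

Because each slice is predicted in isolation, this baseline cannot use context from adjacent z-planes, which is one plausible reason the fully 3D model produces more detailed and accurate output.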

Supplementary Figure 5 Performance of 3D fluorescence image prediction from transmitted-light microscopy (bright-field) image inputs of different cell types.

a, Representative DNA predictions on 3D bright-field input images of, from the top, HEK-293 cells, cardiomyocytes, and HT-1080 cells using a DNA model trained on hiPSC images (model predicting DNA with an extended training procedure, “DNA+”, see Methods) and a DNA model trained on HEK-293 images ("m-HEK-293"). From left to right, columns are the bright-field input image, the target fluorescence image, the predicted fluorescence image using the DNA+ model, and the predicted fluorescence image using the m-HEK-293 model. The center z-, y-, and x-slices are shown for each 3D image (denoted by the cyan, green, and red lines, respectively, in the leftmost image). Scale bar is 20 µm. b, Distributions of the image-wise Pearson correlation coefficient (r) between target and predicted test images of different cell types using DNA+ and m-HEK-293 models. Each target/predicted image pair in the test set is a point in the resultant r distribution; the 25th, 50th, and 75th percentile image pairs are spanned by the box for each indicated structure, with whiskers indicating the last data points within 1.5× the interquartile range of the lower and upper quartiles. Outliers for each distribution are shown as circles. Box colors for each input image's cell type indicate which model was applied, DNA+ or m-HEK-293. The numbers of images (n) in the hiPSC, HEK-293, cardiomyocyte, and HT-1080 distributions are 20, 18, 8, and 10, respectively. For more details see Methods.

Supplementary Figure 6 Comparison of ground-truth (“target”) and predicted pixel values in images from Supplementary Fig. 2.

For each structure indicated, 1% of pixels randomly selected from the target image are plotted against the corresponding pixels in the predicted image. The number of pixels (n) compared was 36,126 for the microtubules prediction, 96,501 for the cell membrane prediction, 96,501 for the DIC nuclear envelope prediction, and 48,168 for all other predictions. Pixel intensities are normalized in the target and predicted images (see Methods).
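Sampling matched pixel locations from both images, as described above, can be sketched as follows (`sample_pixel_pairs` is an illustrative helper, not from the paper):

```python
import numpy as np

def sample_pixel_pairs(target, predicted, frac=0.01, seed=0):
    """Sample the same random fraction of voxel locations from a target and
    a predicted image so their intensities can be compared in a scatter plot."""
    rng = np.random.default_rng(seed)
    n = int(round(frac * target.size))
    idx = rng.choice(target.size, size=n, replace=False)  # no duplicate pixels
    return target.ravel()[idx], predicted.ravel()[idx]

target = np.arange(100_000, dtype=float).reshape(10, 100, 100)  # toy 3D image
predicted = 2.0 * target                                        # toy "prediction"
t_px, p_px = sample_pixel_pairs(target, predicted)
```

Indexing both flattened images with the same `idx` guarantees each plotted point pairs a target intensity with the prediction at the identical voxel.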

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–6, Supplementary Table 1, and Supplementary Note 1

Reporting Summary

Supplementary Video 1

Three-dimensional rendering of transmitted-light microscopy prediction results. The movie illustrates the relationship between 3D time-lapse transmitted-light (bright-field) input and multiple prediction images. First, individual z-plane images from 3D transmitted light are shown in succession. Next, individual predictions are shown overlaid in color in the following order: DNA (blue), nucleoli (cyan), nuclear envelope (yellow), cell membrane (magenta), and mitochondria (green). Next, a composite rendering of all channels is shown, followed by a time lapse of a single plane from the dataset (also shown in part in Fig. 1e). Finally, a volumetric 3D rendering is shown and played through the individual time points four times, alternating between showing mitochondria and membrane, together with the nuclear structures (DNA, nuclear membrane, and nucleolus). The boxed outline depicts the extent of the field of view of this volume, which encompasses 97 × 65 × 19 µm³.


About this article


Cite this article

Ounkomol, C., Seshamani, S., Maleckar, M.M. et al. Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat Methods 15, 917–920 (2018). https://doi.org/10.1038/s41592-018-0111-2

