Semantic segmentation of microscopic neuroanatomical data by combining topological priors with encoder–decoder deep networks

A preprint version of the article is available at bioRxiv.

Abstract

Understanding of neuronal circuitry at cellular resolution within the brain has relied on neuron tracing methods that involve careful observation and interpretation by experienced neuroscientists. With recent developments in imaging and digitization, this approach is no longer feasible for large-scale (terabyte- to petabyte-range) images. Machine-learning-based techniques, using deep networks, provide an efficient alternative. However, these methods rely on very large volumes of annotated images for training and have error rates that are too high for scientific data analysis, and thus require a substantial volume of human-in-the-loop proofreading. Here we introduce a hybrid architecture that combines prior structure, in the form of topological data analysis methods based on discrete Morse theory, with best-in-class deep-net architectures for neuronal connectivity analysis. We show significant performance gains using our hybrid architecture on the detection of topological structure (for example, connectivity of neuronal processes and local intensity maxima on axons corresponding to synaptic swellings), with precision and recall close to 90% compared with human observers. We have adapted our architecture into a high-performance pipeline capable of semantic segmentation of light-microscopic whole-brain image data into a hierarchy of neuronal compartments. We expect that the hybrid architecture incorporating discrete Morse techniques into deep nets will generalize to other data domains.
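To make the idea of combining a topological prior with an encoder–decoder network concrete, the following is a minimal illustrative sketch, not the authors' released pipeline: it smooths the network's per-pixel probability map, extracts a one-pixel-wide ridge skeleton as a simplified stand-in for the discrete Morse 1-stable manifold, and keeps only the skeleton branches that the network supports. The function name, the threshold values and the use of skimage's skeletonize in place of a true discrete Morse computation are all assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import label
from skimage.morphology import skeletonize


def fuse_topology_with_net(prob_map, sigma=1.5, ridge_frac=0.3, keep_thresh=0.5):
    """Keep only skeleton branches of the deep-net output that the
    network itself supports with high mean probability."""
    # Smooth the encoder-decoder probability map so that it behaves like
    # a density function whose ridges trace neuronal processes.
    density = gaussian_filter(prob_map.astype(float), sigma=sigma)

    # Extract a one-pixel-wide ridge skeleton (a simplified stand-in for
    # the 1-stable manifold used in discrete Morse theory).
    skeleton = skeletonize(density > ridge_frac)

    # Score each connected branch by the mean network probability along
    # it and discard weakly supported branches.
    branches = label(skeleton)
    mask = np.zeros_like(skeleton, dtype=bool)
    for branch_id in range(1, branches.max() + 1):
        branch = branches == branch_id
        if prob_map[branch].mean() >= keep_thresh:
            mask |= branch
    return mask
```

A call such as fuse_topology_with_net(net_output) would then return a thin, topologically connected neurite mask suitable for proofreading or conversion into a graph of processes; in the authors' actual pipeline the topological step is performed by a genuine discrete Morse computation rather than by skeletonization.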


Fig. 1: Three prevalent modalities of light-microscopic data acquisition where we have applied our algorithm.
Fig. 2: Existing CNN architectures suitable for neurite segmentation.
Fig. 3: Architecture of the MaskRCNN network and semantic segmentation workflow.
Fig. 4: Illustrations of results from the process detection pipeline for single-colour STPT images, three-colour fluorescent WSIs from the MBA and the bright-field WSIs.
Fig. 5: Semantic segmentation results.


Data availability

The STPT data were collected as part of the BRAIN Initiative Cell Census Network and are shared online. The raw images of the STPT dataset are available from ftp://download.brainimagelibrary.org:8811/biccn/huang/connectivity/anterograde/180830_JH_WG_Fezf2LSLflp_CFA_female_processed/. The sample MBA data can be viewed at http://brainarchitecture.org/viewer4/mouse/map/3611. The trained models and the annotated data are available at http://brainarchitecture.org/semantic-segmentation-in-brain-images.

Code availability

The process detection and semantic segmentation code for reproducing the results in this paper, along with limited data, links and documentation, is available on GitHub at https://github.com/samik1986/ML_Semantic_Segmenation_NMI.


Acknowledgements

We gratefully acknowledge IHC bright-field image data from P. Strick at U Pitt, and thank J. Nagashima and M. Hanada, also at U Pitt, for annotating these images. We acknowledge the effort from annotators at the Center for Computational Brain Research at IIT Madras for the bulk of the data annotation and proofreading for this project. This work was supported by the NIH (EB022899, MH114824, MH114821, NS107466, AT010414), the Crick-Clay Professorship (Cold Spring Harbor Laboratory), the Mathers Charitable Foundation and the H. N. Mahabala Chair (IIT Madras). Work at Ohio State University was additionally supported in part by the NSF under grants CCF-1740761, RI-1815697 and DMS-1547357.

Author information

Contributions

The idea of using topological priors in the pipeline was conceptualized by Y.W. and P.P.M. Algorithmic design and development was performed by S.B. and L.M. Proofreading assistance and neuroanatomical expertise for neuroanatomical ground truth data was provided by J.J. and K.M. Data preparation, including quality control and acquisition, was performed by B.-X.H., J.J. and K.M. under the supervision of J.H. and P.P.M. The ALBU baseline was tested by D.W. Evaluation of the algorithm was conducted by S.B., D.W., L.M. and X.L. Data preparation, including design of an online proofreading interface and hosting, was done by M.-K.L., M.S. and K.R. S.B., L.M., J.J. and P.P.M. prepared and edited the paper.

Corresponding author

Correspondence to Partha P. Mitra.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1: Working of the discrete Morse algorithm.

The discrete Morse algorithm is given an input image (a). A Gaussian filter is applied to the image (b), defining a density function over the pixels. The algorithm then extracts the ridges of this function across the domain (c); these ridges form the 1-stable manifold. Finally, each path in the 1-stable manifold is assigned a grayscale value based on the intensity along the path, and a grayscale mask is output (d).
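For readers who want to experiment with the workflow sketched in panels (a)–(d), the following is a hedged Python approximation. It substitutes morphological skeletonization for a true discrete Morse 1-stable manifold computation, so it reproduces the spirit of the figure (smoothing, ridge extraction, per-path grayscale assignment) rather than the exact algorithm; the function name and parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import label
from skimage.morphology import skeletonize


def morse_like_mask(image, sigma=2.0, ridge_frac=0.2):
    """Approximate the (a)->(d) workflow of Extended Data Fig. 1."""
    # (a) -> (b): smooth the input image so that pixel intensities
    # define a density function over the domain.
    density = gaussian_filter(image.astype(float), sigma=sigma)

    # (b) -> (c): extract one-pixel-wide ridge curves of the density,
    # approximating the 1-stable manifold, via skeletonization.
    ridges = skeletonize(density > ridge_frac * density.max())

    # (c) -> (d): give each connected ridge path a grayscale value equal
    # to the mean density along the path, producing a grayscale mask.
    paths = label(ridges)
    mask = np.zeros_like(density)
    for path_id in range(1, paths.max() + 1):
        path = paths == path_id
        mask[path] = density[path].mean()
    return mask
```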

Supplementary information

Supplementary Information

Supplementary Figs. 1–5, evaluations, discussion and Tables 1–7.


About this article

Cite this article

Banerjee, S., Magee, L., Wang, D. et al. Semantic segmentation of microscopic neuroanatomical data by combining topological priors with encoder–decoder deep networks. Nat Mach Intell 2, 585–594 (2020). https://doi.org/10.1038/s42256-020-0227-9
