Developing a brain atlas through deep learning

A preprint version of the article is available at arXiv.

Abstract

Neuroscientists have devoted substantial effort to the creation of standard brain reference atlases for high-throughput registration of anatomical regions of interest. However, the variability in brain size and form across individuals poses a considerable challenge for such reference atlases. To overcome these limitations, we introduce a fully automated deep neural network-based method (named SeBRe) for segmenting brain regions of interest with minimal human supervision. We demonstrate the validity of our method on mouse brain images from different developmental time points, across a range of neuronal markers and imaging modalities. We further assess the performance of our method on magnetic resonance images of human brains. Our registration method can accelerate brain-wide exploration of region-specific changes in brain development and, by easily segmenting brain regions of interest for high-throughput brain-wide analysis, offer an alternative to existing complex brain registration techniques.
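
SeBRe segments brain regions of interest directly from section images (Figs. 1 and 2 outline the pipeline). As a hedged illustration only, and not the authors' implementation, the sketch below shows how a generic Mask R-CNN-style instance-segmentation model from torchvision could be adapted to predict per-region masks; the number of region classes, the input file name and the score threshold are assumptions introduced for this example.

    # Illustrative sketch only: a generic Mask R-CNN-style model adapted to
    # predict brain-region masks. This is NOT the SeBRe code; class count,
    # file name and threshold below are placeholders.
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    NUM_REGION_CLASSES = 10  # hypothetical: background + 9 brain regions

    # Start from a COCO-pretrained Mask R-CNN and replace its heads so the
    # network predicts brain-region classes instead of COCO categories.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_REGION_CLASSES)
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_REGION_CLASSES)
    model.eval()

    # Run inference on one (hypothetical) brain-section image.
    image = to_tensor(Image.open("brain_section.png").convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]

    # Keep confidently detected regions; masks are per-pixel probabilities.
    keep = prediction["scores"] > 0.5
    masks = prediction["masks"][keep] > 0.5   # boolean region masks
    labels = prediction["labels"][keep]       # hypothetical region class indices
    print(f"Detected {int(keep.sum())} candidate regions")

In practice, a model of this kind would first be fine-tuned on annotated section images (for example, the annotated datasets listed under Data availability below) before its region predictions are meaningful.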

Fig. 1: Architecture of the SeBRe deep learning pipeline.
Fig. 2: SeBRe multistage image processing pipeline.
Fig. 3: The performance of SeBRe in segmenting brain regions.
Fig. 4: The rotationally invariant performance of SeBRe on the extended mouse dataset.
Fig. 5: The generalized performance of SeBRe on unseen FISH brain sections.
Fig. 6: The performance of SeBRe in segmenting subregions of the hippocampus.
Fig. 7: Performance of SeBRe on human brain sections.
Fig. 8: A comparison of SeBRe with commonly used brain registration methods.
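
Figures 3–8 report SeBRe's segmentation performance and a comparison with commonly used registration methods. Region-overlap results of this kind are typically summarized with a metric such as the Dice coefficient; the short sketch below shows that metric in generic form (the choice of metric here is an assumption of this example, not a statement of the paper's exact evaluation protocol).

    # Generic sketch of the Sørensen–Dice overlap score often used to grade
    # segmentation quality; not necessarily the paper's exact protocol.
    import numpy as np

    def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
        """Dice = 2|A ∩ B| / (|A| + |B|) for two boolean masks."""
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        denom = pred.sum() + true.sum()
        if denom == 0:          # both masks empty: define the score as perfect
            return 1.0
        return 2.0 * np.logical_and(pred, true).sum() / denom

    # Toy example with two hypothetical 4x4 region masks.
    a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
    b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
    print(f"Dice overlap: {dice_score(a, b):.2f}")  # 0.50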

Data availability

The data that support the findings of this study are available from the corresponding author on reasonable request. The publicly available datasets that are used in this study are available at brain-map.org/api/index.html and https://www.nitrc.org/frs/shownotes.php?release_id=2316. The annotated datasets that are used in this study are available at https://github.com/itsasimiqbal/SeBRe and https://bitbucket.org/theolab/.
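
For readers who want to retrieve the public data directly, the sketch below shows one way to pull downsampled section images through the Allen Brain Atlas API linked above. The endpoint forms follow the publicly documented API as generally published, and the dataset ID is a placeholder; neither is specific to this paper.

    # Minimal sketch (not from the paper): fetching section images from the
    # public Allen Brain Atlas API referenced above. The dataset ID below is
    # a placeholder; substitute the experiment of interest.
    import requests

    API_ROOT = "http://api.brain-map.org/api/v2"

    # 1. List section images belonging to a (hypothetical) SectionDataSet.
    dataset_id = 100048576  # placeholder experiment ID
    query = (
        f"{API_ROOT}/data/SectionImage/query.json"
        f"?criteria=[data_set_id$eq{dataset_id}]&num_rows=5"
    )
    images = requests.get(query, timeout=60).json()["msg"]

    # 2. Download each image, downsampled to keep the files small.
    for record in images:
        image_id = record["id"]
        url = f"{API_ROOT}/image_download/{image_id}?downsample=4"
        with open(f"section_{image_id}.jpg", "wb") as fh:
            fh.write(requests.get(url, timeout=120).content)
        print(f"saved section_{image_id}.jpg")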

Code availability

We provide the code for the SeBRe toolbox at https://github.com/itsasimiqbal/SeBRe and https://bitbucket.org/theolab/.

Acknowledgements

This work was supported by a grant from the European Research Council (ERC, 679175, T.K.).

Author information

Contributions

A.I., R.K. and T.K. conceptualized the study and wrote the paper. A.I. and R.K. developed the SeBRe method and performed the quantitative comparison with other registration and segmentation methods.

Corresponding author

Correspondence to Theofanis Karayannis.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–9.

About this article

Cite this article

Iqbal, A., Khan, R. & Karayannis, T. Developing a brain atlas through deep learning. Nat Mach Intell 1, 277–287 (2019). https://doi.org/10.1038/s42256-019-0058-8
