Developing a brain atlas through deep learning

Abstract

Neuroscientists have devoted substantial effort to the creation of standard brain reference atlases for high-throughput registration of anatomical regions of interest. However, the variability in brain size and form across individuals poses a considerable challenge for such reference atlases. To overcome these limitations, we introduce a fully automated deep neural network-based method (named SeBRe) for segmenting brain regions of interest with minimal human supervision. We demonstrate the validity of our method on brain images from different developmental time points of mice, across a range of neuronal markers and imaging modalities. We further assess the performance of our method on images of magnetic resonance-scanned human brains. Our registration method can accelerate brain-wide exploration of region-specific changes in brain development and, by easily segmenting brain regions of interest for high-throughput brain-wide analysis, offer an alternative to existing complex brain registration techniques.

A preprint version of the article is available at arXiv.


Fig. 1: Architecture of the SeBRe deep learning pipeline.
Fig. 2: SeBRe multistage image processing pipeline.
Fig. 3: The performance of SeBRe in segmenting brain regions.
Fig. 4: The rotationally invariant performance of SeBRe on the extended mouse dataset.
Fig. 5: The generalized performance of SeBRe on unseen FISH brain sections.
Fig. 6: The performance of SeBRe in segmenting subregions of the hippocampus.
Fig. 7: Performance of SeBRe on human brain sections.
Fig. 8: A comparison of SeBRe with commonly used brain registration methods.

Data availability

The data that support the findings of this study are available from the corresponding author on reasonable request. The publicly available datasets that are used in this study are available at brain-map.org/api/index.html and https://www.nitrc.org/frs/shownotes.php?release_id=2316. The annotated datasets that are used in this study are available at https://github.com/itsasimiqbal/SeBRe and https://bitbucket.org/theolab/.

Code availability

We provide the code for the SeBRe toolbox at https://github.com/itsasimiqbal/SeBRe and https://bitbucket.org/theolab/.


Acknowledgements

This work was supported by a grant from the European Research Council (ERC, 679175, T.K.).

Author information

A.I., R.K. and T.K. conceptualized the study and wrote the paper. A.I. and R.K. developed the SeBRe method and performed the quantitative comparison with other registration and segmentation methods.

Correspondence to Theofanis Karayannis.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–9.
