Neuroscientists have devoted substantial effort to creating standard brain reference atlases for high-throughput registration of anatomical regions of interest. However, variability in brain size and shape across individuals poses a considerable challenge for such reference atlases. To overcome these limitations, we introduce a fully automated deep neural network-based method (named SeBRe) for segmenting brain regions of interest with minimal human supervision. We demonstrate the validity of our method on brain images from mice at different developmental time points, across a range of neuronal markers and imaging modalities, and further assess its performance on magnetic resonance images of human brains. By easily segmenting brain regions of interest for high-throughput brain-wide analysis, our method can accelerate the exploration of region-specific changes in brain development and offers an alternative to existing, more complex brain registration techniques.
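As an illustrative aside (this is not the authors' code), the quality of region segmentations such as those produced here is commonly scored by mask overlap against a reference annotation, for example with the Dice coefficient. A minimal sketch with NumPy, using a hypothetical pair of binary masks:

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * intersection / total)

# Toy example: predicted vs. reference region on a small grid
pred = np.zeros((8, 8), dtype=bool)
truth = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True   # predicted region, 16 pixels
truth[3:7, 3:7] = True  # reference region, 16 pixels; 9 pixels overlap
print(dice_coefficient(pred, truth))  # 2*9 / (16+16) = 0.5625
```

A Dice score of 1 indicates identical masks and 0 indicates no overlap; averaging the score over all annotated regions gives a single summary of segmentation accuracy.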
The data that support the findings of this study are available from the corresponding author on reasonable request. The publicly available datasets that are used in this study are available at brain-map.org/api/index.html and https://www.nitrc.org/frs/shownotes.php?release_id=2316. The annotated datasets that are used in this study are available at https://github.com/itsasimiqbal/SeBRe and https://bitbucket.org/theolab/.
This work was supported by a grant from the European Research Council (ERC, 679175, T.K.).
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Figs. 1–9.