Recent whole-brain mapping projects are collecting large-scale three-dimensional images using modalities such as serial two-photon tomography, fluorescence micro-optical sectioning tomography, light-sheet fluorescence microscopy, volumetric imaging with synchronous on-the-fly scan and readout, or magnetic resonance imaging. Registration of these multi-dimensional whole-brain images onto a standard atlas is essential for characterizing neuron types and constructing brain wiring diagrams. However, cross-modal image registration is challenging due to intrinsic variations of brain anatomy and artifacts resulting from different sample preparation methods and imaging modalities. We introduce a cross-modal registration method, mBrainAligner, which uses coherent landmark mapping and deep neural networks to align whole mouse brain images to the standard Allen Common Coordinate Framework atlas. We built a brain atlas for the fluorescence micro-optical sectioning tomography modality to facilitate single-cell mapping, and used our method to generate a whole-brain map of three-dimensional single-neuron morphology and neuron cell types.
The full-resolution fMOST image datasets (https://download.brainimagelibrary.org/biccn/zeng/luo/fMOST/) of all mouse brains used in this study, as well as the original and CCFv3-registered single-neuron reconstructions (https://doi.org/10.35077/g.25), are available at BICCN’s Brain Image Library (BIL) at the Pittsburgh Supercomputing Center (www.brainimagelibrary.org). The single-neuron reconstructions, the CCFv3-registered versions of these reconstructions and a 3D navigation movie gallery of these data are also available at the SEU-ALLEN Joint Center, Institute for Brain and Intelligence (https://braintell.org/projects/fullmorpho/). The Allen CCFv3 is available at http://atlas.brain-map.org/, and the MRI mouse brain used in this study can be obtained from https://www.nitrc.org/projects/incfwhsmouse. We also provide downsampled fMOST, LSFM, VISoR and MRI whole mouse brain data and partially imaged LSFM brain data, along with code and testing scripts, on mBrainAligner’s GitHub page (https://github.com/Vaa3D/vaa3d_tools/tree/master/hackathon/mBrainAligner). The full-resolution LSFM and VISoR brain images are available on request. Source data are provided with this paper.
The source code of all mBrainAligner modules, including stripe-artifact removal, automatic global registration, automatic local registration and semiautomatic refinement, along with the binary executable files and sample data, can be found on mBrainAligner’s GitHub page (https://github.com/Vaa3D/vaa3d_tools/tree/master/hackathon/mBrainAligner). The automatic global registration, automatic local registration and semiautomatic registration modules were written in C++11 with Qt (v.5.6), and the stripe-artifact removal module was implemented in MATLAB 2016.
Gong, H. et al. High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level. Nat. Commun. 7, 1–12 (2016).
Economo, M. N. et al. A platform for brain-wide imaging and reconstruction of individual neurons. eLife 5, e10566 (2016).
Osten, P. & Margrie, T. W. Mapping brain circuitry with a light microscope. Nat. Methods 10, 515–523 (2013).
Ragan, T. et al. Serial two-photon tomography for automated ex vivo mouse brain imaging. Nat. Methods 9, 255–258 (2012).
Wang, H. et al. Scalable volumetric imaging for ultrahigh-speed brain mapping at synaptic resolution. Natl Sci. Rev. 6, 982–992 (2019).
Xu, F. et al. High-throughput mapping of a whole rhesus monkey brain at micrometer resolution. Nat. Biotechnol. https://doi.org/10.1038/s41587-021-00986-5 (2021).
Niedworok, C. J. et al. Charting monosynaptic connectivity maps by two-color light-sheet fluorescence microscopy. Cell Rep. 2, 1375–1386 (2012).
Dodt, H.-U. et al. Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain. Nat. Methods 4, 331–336 (2007).
Chung, K., Wallace, J., Kim, S. Y., Kalyanasundaram, S. & Deisseroth, K. Structural and molecular interrogation of intact biological systems. Nature 497, 332–337 (2013).
Tomer, R., Ye, L., Hsueh, B. & Deisseroth, K. Advanced CLARITY for rapid and high-resolution imaging of intact tissues. Nat. Protoc. 9, 1682–1697 (2014).
Murray, E. et al. Simple, scalable proteomic imaging for high-dimensional profiling of intact systems. Cell 163, 1500–1514 (2015).
Renier, N. et al. iDISCO: a simple, rapid method to immunolabel large tissue samples for volume imaging. Cell 159, 896–910 (2014).
Pan, C. et al. Shrinkage-mediated imaging of entire organs and organisms using uDISCO. Nat. Methods 13, 859–867 (2016).
Susaki, E. A. et al. Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging. Nat. Protoc. 10, 1709–1727 (2015).
Richardson, D. S. & Lichtman, J. W. Clarifying tissue clearing. Cell 162, 246–257 (2015).
Rotolo, T., Smallwood, P. M., Williams, J. & Nathans, J. Genetically-directed, cell type-specific sparse labeling for the analysis of neuronal morphology. PLoS One 3, e4099 (2008).
Aransay, A., Rodríguez-López, C., García-Amado, M., Clascá, F. & Prensa, L. Long-range projection neurons of the mouse ventral tegmental area: a single-cell axon tracing analysis. Front. Neuroanat. 9, 59 (2015).
Ghosh, S. et al. Sensory maps in the olfactory cortex defined by long-range viral tracing of single neurons. Nature 472, 217–220 (2011).
Lin, R. et al. Cell-type-specific and projection-specific brain-wide reconstruction of single neurons. Nat. Methods 15, 1033–1036 (2018).
Wang, Y. et al. TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain. Nat. Commun. 10, 3474 (2019).
Winnubst, J. et al. Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain. Cell 179, 268–281.e13 (2019).
Zhou, H. et al. GTree: an open-source tool for dense reconstruction of brain-wide neuronal populations. Neuroinformatics 19, 305–317 (2020).
Peng, H. et al. BigNeuron: large-scale 3D neuron reconstruction from optical microscopy images. Neuron 87, 252–256 (2015).
Ecker, J. R. et al. The BRAIN initiative cell census consortium: Lessons learned toward generating a comprehensive brain cell atlas. Neuron 96, 542–557 (2017).
Oh, S. W. et al. A mesoscale connectome of the mouse brain. Nature 508, 207–214 (2014).
Zingg, B., Hintiryan, H., Gou, L., Song, M. Y. & Dong, H.-W. Neural networks of the mouse neocortex. Cell 156, 1096–1111 (2014).
Hintiryan, H. et al. The mouse cortico-striatal projectome. Nat. Neurosci. 19, 1100–1114 (2016).
Bienkowski, M. S. et al. Integration of gene expression and brain-wide connectivity reveals the multiscale organization of mouse hippocampal networks. Nat. Neurosci. 21, 1628–1643 (2018).
Dong, H. W. The Allen Reference Atlas: A Digital Color Brain Atlas of the C57Bl/6J Male Mouse (John Wiley & Sons Inc., 2008).
Wang, Q. et al. The Allen mouse brain common coordinate framework: a 3D reference atlas. Cell 181, 936–953.e20 (2020).
Perens, J. et al. An optimized mouse brain atlas for automated mapping and quantification of neuronal activity using iDISCO+ and light sheet fluorescence microscopy. Neuroinformatics 19, 433–446 (2020).
Peng, H. et al. Morphological diversity of single neurons in molecularly defined cell types. Nature 598, 174–181 (2021).
Long, F., Peng, H., Liu, X., Kim, S. K. & Myers, E. A 3D digital atlas of C. elegans and its application to single-cell analyses. Nat. Methods 6, 667–672 (2009).
Peng, H. et al. BrainAligner: 3D registration atlases of Drosophila brains. Nat. Methods 8, 493–500 (2011).
Randlett, O. et al. Whole-brain activity mapping onto a zebrafish brain atlas. Nat. Methods 12, 1039–1046 (2015).
Griffiths, V. A. et al. Real-time 3D movement correction for two-photon imaging in behaving animals. Nat. Methods 17, 741–748 (2020).
Avants, B. B., Tustison, N. & Song, G. Advanced normalization tools (ANTS). Insight J. 2, 1–35 (2009).
Klein, S., Staring, M., Murphy, K., Viergever, M. A. & Pluim, J. P. W. elastix: a toolbox for intensity-based medical image registration. IEEE Trans. Med. Imaging 29, 196–205 (2010).
Modat, M. et al. Fast free-form deformation using graphics processing units. Comput. Methods Programs Biomed. 98, 278–284 (2010).
Niedworok, C. J. et al. aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data. Nat. Commun. 7, 1–9 (2016).
Renier, N. et al. Mapping of brain activity by automated volume analysis of immediate early genes. Cell 165, 1789–1802 (2016).
Goubran, M. et al. Multimodal image registration and connectivity analysis for integration of connectomic data from microscopy to MRI. Nat. Commun. 10, 1–17 (2019).
Fan, J., Cao, X., Yap, P.-T. & Shen, D. BIRNet: brain image registration using dual-supervised fully convolutional networks. Med. Image Anal. 54, 193–206 (2019).
Cao, X., Yang, J., Zhang, J., Nie, D., Kim, M., Wang, Q., & Shen, D. Deformable image registration based on similarity-steered CNN regression. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (eds. Descoteaux M., Maier-Hein L., Franz A., Jannin P., Collins D., Duchesne S.) 300–308 (Springer, Cham, 2017).
Zeng, H. & Sanes, J. R. Neuronal cell-type classification: challenges, opportunities and the path forward. Nat. Rev. Neurosci. 18, 530–546 (2017).
Harris, J. A. et al. Hierarchical organization of cortical and thalamic connectivity. Nature 575, 195–202 (2019).
Muñoz-Castaneda, R. et al. Cellular anatomy of the mouse primary motor cortex. Nature 598, 159–166 (2021).
Jones, E. G. The Thalamus (Springer Science & Business Media, 2012).
Tward, D. J. et al. Solving the where problem in neuroanatomy: a generative framework with learned mappings to register multimodal, incomplete data into a reference brain. Preprint at bioRxiv https://doi.org/10.1101/2020.03.22.002618 (2020).
Ramon y Cajal, S. Histologie du système nerveux de l’homme et des vertébrés. Maloine, Paris 2, 153–173 (1911).
Peng, H., Ruan, Z., Long, F., Simpson, J. H. & Myers, E. W. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat. Biotechnol. 28, 348–353 (2010).
Chi, J. et al. Three-dimensional adipose tissue imaging reveals regional variation in beige fat biogenesis and PRDM16-dependent sympathetic neurite density. Cell Metab. 27, 226–236.e3 (2018).
Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).
Chui, H. & Rangarajan, A. A new point matching algorithm for non-rigid registration. Comput. Vis. Image Underst. 89, 114–141 (2003).
Harris, C. G. & Stephens, M. A combined corner and edge detector. in Proc. Alvey Vision Conference Vol. 15, 10–5244 (Citeseer, 1988).
Wahba, G. Spline Models for Observational Data (SIAM, 1990).
Qu, L. & Peng, H. LittleQuickWarp: an ultrafast image warping tool. Methods 73, 38–42 (2015).
Dalal, N. & Triggs, B. Histograms of oriented gradients for human detection. In Proc. Computer Society Conference on Computer Vision and Pattern Recognition Vol. 1 886–893 (IEEE, 2005).
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In Proc. International Conference on Medical Image Computing and Computer-assisted Intervention (eds. Ourselin S., Joskowicz L., Sabuncu M., Unal G., Wells W) 424–432 (Springer, 2016).
Fürth, D. et al. An interactive framework for whole-brain maps at cellular resolution. Nat. Neurosci. 21, 139–149 (2018).
Mortensen, E. N. & Barrett, W. A. Intelligent scissors for image composition. In Proc. 22nd Annual Conference on Computer Graphics and Interactive Techniques 191–198 (ACM, 1995).
This work was initialized and funded by Southeast University to support Open Science collaboration through the SEU-ALLEN Joint Center, Institute of Brain and Intelligence. We thank Y. Wang and Q. Wang (Allen Institute for Brain Science) for assistance with manual annotation of some somas, Y. Yu (Allen Institute for Brain Science) for downsampling several fMOST datasets, S. Jiang (Southeast University) for assistance in data storage and transmission, D.H. Lu (Tencent) for building mBrainAligner’s web portal interface, M. Dierssen (Center for Genomic Regulation) and L. Manubens-Gil (Southeast University) for providing partial brain LSFM data, W. Luo, H. Wang, J. Yang, M. Wang, Y. Tang and R. Zhang (Anhui University) for help with semiautomatic registration, the team at SEU-ALLEN Joint Center for neuron reconstruction, and the supercomputing platform of Anhui University for computing support. This work was also partially funded by the National Science Foundation of China (NSFC) grant 61871411 and the University Synergy Innovation Program of Anhui Province GXXT-2019-008 and GXXT-2021-001 to L.Q., the NSFC grant 32071367 to Y.W., the NSFC-Guangdong Joint Fund U20A6005 to G.B. and the Fundamental Research Funds for the Central Universities 2242021k30046.
The authors declare no competing interests.
Peer review information Nature Methods thanks the anonymous reviewers for their contribution to the peer review of this work. Nina Vogt was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
a, Interface of the semi-automatic registration module. b, Three levels of subregion granularity (coarse, medium and fine) are provided for each main brain region. c–h, Comparison of registration accuracy on 31 fMOST brains when semi-automatic registration is applied after automatic registration. Box plot: center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range; points, outliers.
a, Architecture of the segmentation network. b, 3D visualization of segmentation results for six brain regions. c,d, Comparison of registration accuracy of mBrainAligner on 31 fMOST brains with deep-learning features toggled on and off. Box plot: center line, median; box limits, upper and lower quartiles; whiskers, 1.5× interquartile range; points, outliers.
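For readers working from the provided source data, the box-plot convention stated in these legends (median, quartile box limits, whiskers at 1.5× the interquartile range, outlier points beyond the whiskers) is the standard Tukey rule and can be reproduced with a short sketch; the helper name below is ours, not part of mBrainAligner:

```python
import numpy as np

def tukey_box_stats(x):
    """Box-plot statistics as defined in the figure legends:
    median, quartile box limits, 1.5x IQR fences and outliers."""
    x = np.asarray(x, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    # Whiskers extend to the most extreme data points inside the fences.
    whisk_lo = x[x >= lo_fence].min()
    whisk_hi = x[x <= hi_fence].max()
    outliers = x[(x < lo_fence) | (x > hi_fence)]
    return med, (q1, q3), (whisk_lo, whisk_hi), outliers
```

This matches the default behavior of most plotting libraries (for example matplotlib's `boxplot` with `whis=1.5`), so recomputed statistics should agree with the published panels.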
Extended Data Fig. 3 Comparison of different methods in registering images of different modalities to CCFv3.
Four green dashed boxes show the brain registration results for four modalities (LSFM, VISoR, MRI and fMOST) generated by different methods (MIRACL, ClearMap, aMAP and mBrainAligner). The first and second rows of each box show the coronal and sagittal views of the registered brains, respectively, with zoomed-in views of two subregions. Brain region boundaries (orange) defined in CCFv3 are overlaid on the registered brains of each modality. In each box, arrows of the same color point to the same spatial coordinates.
Extended Data Fig. 4 Comparison of soma localization between manual annotation and computational prediction results generated by different methods.
The left side of the diagram shows manually annotated results, and the right side shows the corresponding computational predictions.
a–f, 63 CCFv3-mapped neurons from six brains (IDs 17302, 17545, 18454, 18455, 18457 and 18464) with somas residing in VPM or VPL. The first row of a–f shows CCFv3-aligned brains generated by four different methods (mBrainAligner, MIRACL, ClearMap and aMAP). The cross-lines overlaid on the aligned brains point to the same coordinate on the inner boundary of the cortex. The left-most image in the second row of a–f shows the corresponding slice of the CCFv3 average template, and the right three images show the maximum intensity projection view of CCFv3 with mapped neurons overlaid: red for mBrainAligner, green for MIRACL, yellow for ClearMap and purple for aMAP.
Coronal sections of the fMOST-domain atlas of the mouse brain, with the average template shown on the left and the annotation template on the right.
Extended Data Fig. 7 Topography of thalamic neuron projections by integrating CCF-registered brains.
a, Locations of axonal arbors of cortical projection neurons grouped by thalamic nuclei. Dots represent the centers of axonal arbors in the flattened 2D cortical map. Locations of the corresponding somas are color-coded in three separate columns: (left to right) the anterior–posterior (soma-x), dorsal–ventral (soma-y) and lateral–medial (soma-z) axes. b, Examples of single-neuron projections (VPL, LGd, MG). Colors represent individual neurons.
a, One coronal slice of an fMOST mouse brain image. b, Fourier spectrum of the image shown in a. c, The constructed Gaussian notch filter. d, Log-transformed image of a. e, Spectrum of the log-transformed image after filtering with the filter shown in c. f, Image after stripe removal.
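The destriping pipeline this caption walks through (log transform, Fourier-domain Gaussian notch filter, inverse transform) can be sketched in NumPy as follows. This is an illustrative reimplementation under our own parameter names, not mBrainAligner's actual MATLAB module; it assumes stripes running horizontally, whose energy concentrates on the vertical frequency axis:

```python
import numpy as np

def remove_stripes(img, sigma=3.0, keep=4):
    """Suppress horizontal stripe artifacts with a Gaussian notch
    filter in the Fourier domain (sketch; `sigma` is the notch width,
    `keep` protects low frequencies around DC)."""
    # Log transform: multiplicative stripe shading becomes additive.
    logf = np.log1p(img.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(logf))
    h, w = img.shape
    u = np.arange(w) - w // 2          # horizontal frequency index
    v = np.arange(h) - h // 2          # vertical frequency index
    U, V = np.meshgrid(u, v)
    # Stripes constant along x put their energy near u = 0:
    # attenuate that column with a Gaussian notch ...
    notch = 1.0 - np.exp(-(U ** 2) / (2.0 * sigma ** 2))
    # ... but spare the DC/low-frequency band carrying real structure.
    notch[np.abs(V) < keep] = 1.0
    out = np.expm1(np.real(np.fft.ifft2(np.fft.ifftshift(F * notch))))
    return np.clip(out, 0, None)
```

Because the filter is symmetric in (U, V), Hermitian symmetry of the spectrum is preserved and the inverse transform stays real up to floating-point error.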
a, Partially imaged brain region of the hippocampus (LSFM modality). b, Global alignment of the partial image to CCFv3. The left column shows the maximum intensity projection of the globally aligned partial brain (top) and its blended view with the CCFv3 average template (bottom), with their three orthogonal views shown on the right. c, Result after semi-automatic refinement.
Statistical source data for Fig. 2b.
Statistical source data for Fig. 3c,i.
Statistical source data for Fig. 4d.
Statistical source data for Extended Data Fig. 1c,d.
Statistical source data for Extended Data Fig. 2c,d.
Cite this article
Qu, L., Li, Y., Xie, P. et al. Cross-modal coherent registration of whole mouse brains. Nat Methods 19, 111–118 (2022). https://doi.org/10.1038/s41592-021-01334-w