Abstract
We present MiniVess, the first annotated dataset of rodent cerebrovasculature acquired using two-photon fluorescence microscopy. MiniVess consists of 70 3D image volumes with segmented ground truths. Segmentations were created using traditional image processing operations, a U-Net, and manual proofreading. Code for the image preprocessing steps and the U-Net is provided. Supervised machine learning methods have been widely used for automated processing of biomedical images. While much effort has been placed on the development of new network architectures and loss functions, there is growing recognition of the need for publicly available annotated, or segmented, datasets. Annotated datasets are necessary during model training and validation. In particular, datasets collected from different labs are necessary for testing the generalizability of models. We hope this dataset will be helpful in testing the reliability of machine learning tools for analyzing biomedical images.
Background & Summary
Blood vessel segmentation is often a necessary prerequisite for extracting meaningful analyses from biomedical imaging data. By creating a segmentation, or a mask, that separates vascular from non-vascular pixels, structural information about the vascular system can be acquired, such as diameter, branch order, and blood vessel type. Identification of blood vessels as arterioles, venules, or capillaries can be used to analyze vascular dynamics, such as blood flow and vascular supply. Blood vessel segmentation has clear clinical value. For example, in ischemic stroke studies, vascular segmentation enables detection and quantification of vascular occlusions, which can be helpful in determining therapeutic options1,2. Structural characteristics can also be used as predictors or markers to assist in the diagnosis of diseases, such as Alzheimer’s disease3,4, traumatic brain injury5, brain tumours6, atherosclerosis7, and retinal pathology8,9.
Apart from vascular analyses, blood vessel segmentation is also a necessary preprocessing step for the analysis of cells and pathological entities (Fig. 1). In addition to the endothelial and mural cells that make up the blood vessel proper, various other cell types interact with vascular walls, including astrocyte endfeet processes, perivascular macrophages, and peripheral leukocytes. Such cells and their interactions with vasculature can be identified and analyzed based on distance metrics to vascular walls, a task which is simplified with accurate vascular segmentation masks. Vascular-cellular interactions have been of particular interest in studies focused on diseases. For example, recruitment of peripheral leukocytes to cerebrovasculature has been observed following traumatic brain injury10, middle cerebral artery occlusion11, and in Alzheimer’s disease12. Similar distance metrics can be used to analyze pathological entities, such as perivascular Aβ plaques13 and atherosclerotic plaques14. Thus, segmentation of blood vessels is a necessary preprocessing step that facilitates further vascular and cellular analyses.
In the neurosciences, two-photon fluorescence microscopy (2PFM) is currently the technique of choice for intravital microscopy. While the resolution of 2PFM can be on par with confocal microscopy, the risk of phototoxicity and photobleaching of tissues and fluorophores is substantially reduced because the excitation volume is limited to the focal volume of the microscope15. The use of longer wavelengths also results in less scattering by the neural tissue, allowing imaging at greater depths within the brain. 2PFM has been extensively used to investigate various phenomena, including neural activity using voltage-sensitive dyes16 and calcium indicators17, microglial activity using transgenic animal models18, and vascular dynamics19.
Most methods of automated image processing of 2PFM images rely on proprietary software, such as Imaris (Bitplane, United Kingdom) and Volocity (Quorum Technologies, Canada). Each analysis type, such as vascular segmentation and cell tracking, is generally sold as a separate module. Comprehensive analyses of datasets are therefore functionally limited by the modules available and can become prohibitively expensive. Furthermore, while automated modules produce impressive results for images with a high signal-to-noise ratio (SNR), biomedical images, particularly intravital 2PFM images, are inherently noisy. In practice, substantial manual modifications are required. An open-source alternative is FIJI (Fiji Is Just ImageJ)20. However, FIJI plugins often lack extensive documentation, resulting in a ‘black-box’ nature that may deter and limit use.
Deep learning models, such as convolutional neural networks (CNNs) and recent transformer-based architectures21,22, have been extensively used for automated segmentation tasks in biomedical imaging. For example, the U-Net, a fully convolutional network, achieves impressive performance in segmenting densely packed neurons in electron microscopy images23. Clinically, CNNs have achieved state-of-the-art performance in segmenting brain vasculature in magnetic resonance angiography24 and retinal vasculature in optical coherence tomography angiography25 datasets, which have been used to assist in the identification of pathological features.
A common challenge in the application of deep learning models to the biomedical imaging field is the generalizability of models. Models are often exclusively trained on datasets that were collected from a single site. Such models often perform poorly when evaluated on datasets collected at different sites due to a so-called ‘domain shift’ (see e.g. Ouyang et al. 202126 for an example in medical image segmentation), caused by differences in tissue preparation, scanner or microscope set-up, and/or inter-user variability in defining labels27,28. The problem is compounded by poor reporting of the number of evaluation sites and samples used29. One way to improve the reliability and transparency of ML models is to use diverse samples during training, and independent data cohorts for testing30. However, the availability of such annotated, publicly available biomedical imaging datasets is limited due to ethical and privacy concerns, particularly in clinical studies. Another strategy is to use synthetic datasets or publicly available non-biomedical datasets (e.g. ImageNet) as part of the training process, and then evaluate the trained model on the real dataset, a process known as ‘transfer learning’31,32. For example, using transfer learning, a CNN that was pre-trained on a synthetic dataset of blood vessels achieved impressive segmentation of real mouse brain vasculature33. However, real, annotated, field-specific datasets remain necessary for evaluating the generalizability of models in the biomedical imaging field. In addition, there has been a recent shift in focus from adjusting model parameters to achieve better performance metrics (‘model-centric’), to improving the quality of datasets to improve performance metrics (‘data-centric’), highlighting the importance of high-quality, publicly available datasets.
Public microscopy datasets have been curated by various research groups world-wide. For example, the Human Protein Atlas shows the distribution and expression of proteins and genes across major organ systems34,35,36, the Broad Bioimage Benchmark Collection contains annotated cell datasets37, and the Allen Brain Cell Types Atlas offers electrophysiological, morphological, and transcriptomic data measured from human and mouse brain. However, vascular datasets have not been as extensively documented. The availability of an annotated 2PFM vascular dataset would assist in diversifying the samples used for training a segmentation model, or in evaluating the performance of segmentation models that were trained on other datasets.
We hereby present MiniVess38, an expert-annotated dataset of 70 3D 2PFM image volumes of rodent cerebrovasculature. The dataset can be used for training segmentation networks39,40, fine-tuning pre-trained networks31,32,41, and as an external validation set for assessing a model’s generalizability42. The 3D volumes in this dataset have been curated to only contain clean XYZ imaging in order to ensure correct and consistent annotations, or segmentations, which has been observed to be integral to the evaluation of machine learning models43. Code for image preprocessing and the U-Net workflow are also provided in the MiniVess project Github page. The U-Net code was written using MONAI, a PyTorch-based framework that was built to encourage best practices for AI development in healthcare research. We hope that the availability of the image volumes and code will assist in evaluating the reliability of models built for the analysis of biomedical images.
Methods
Animal preparation
This dataset consists of 2PFM images of the cortical vasculature in adult male and female mice from the C57BL/6 and CD1 strains (20–30 g), and EGFP Wistar rats (Wistar-TgN(CAG-GFP)184ys) (310–630 g)44. All animal procedures were approved and conducted in compliance with the Animal Care Committee guidelines at Sunnybrook Research Institute, Canada.
To allow optical access to the brain, an acute cranial window was created over the parietal bone (Fig. 1). Detailed protocols on cranial window procedures have been published elsewhere45. Briefly, animals were anesthetized using 1.5–2% isoflurane in a mix of medical air and oxygen. Following fur and scalp removal, a 3–4 mm circle (mice) or rectangle (rats) of bone was removed from the parietal bone using a dental drill, and replaced with a glass cover slip. Due to the thickness of the skull in rats, 1% agarose was deposited onto the brain to prevent air bubbles beneath the cover slip. Animal physiology was monitored using a pulse oximeter, and temperature was maintained using a heating pad with a rectal thermistor. To visualize vasculature, Texas Red 70 kDa dextran (dissolved in PBS, 5 mg/kg; Invitrogen, Canada) was injected through a tail vein catheter. Animals were sacrificed under deep anesthesia using cervical dislocation (mice) or euthanol injection (rats) following the end of imaging.
Imaging
Imaging was conducted using a FV1000MPE multiphoton laser scanning microscope (Olympus Corp., Japan) with an InSight DS tunable laser (Spectra-Physics, USA), or a Ti:Sa laser (MaiTai, Spectra-Physics, Germany). A 25× water-immersion objective lens (XLPN25XWMP2, NA 1.05, WD 2 mm, Olympus Corp., Japan) was used to collect 512 × 512 images with a lateral resolution of 0.621–0.994 μm/pixel, an imaging speed of 2–8 μs/pixel, and a step-size of 1–10 μm, for a maximum depth of 700 μm. Excitation wavelengths of 810 or 900 nm were used. Fluorescent emissions were collected with photo-multiplier tubes preceded by a 575–645 nm bandpass filter. Images were saved in Olympus’s 12-bit .oib or .oir file formats. Acquisition settings were set to utilize the full 12-bit dynamic range (intensity values of 0–4095). Image details are listed in Supplementary Table 1.
File conversions
Image volumes were converted to the NIfTI (.nii) file format to make segmentation model prototyping faster, as this format is commonly used in neuroimaging and ML frameworks, such as the Medical Open Network for Artificial Intelligence (MONAI, https://monai.io/). In MONAI, users can create dataloaders that are customized for their data formats by using Python libraries such as tifffile (available in PyPI), python-bioformats (available in PyPI), and pyometiff (https://github.com/filippocastelli/pyometiff). In the future, we plan to develop a dataloader to allow direct use of microscopy formats, skipping the NIfTI conversion. Here, we provide the code to convert Olympus files (.oib and .oir) to NIfTI (.nii) format, with metadata encoded in the NIfTI1 header format. NIfTI files were further compressed as .gz archive files (.nii.gz). The original Olympus files are 12-bit; the exported NIfTI files are saved as 16-bit images, as a 12-bit data type is not available. The code also provides options to export each channel separately in multichannel image volumes, to separate time series into single volumes, and to remove top and bottom slices. Further details can be found in the GitHub repository https://github.com/ctpn/minivess.
Ground-truth annotation
Pre-processing
To create segmented image volumes, images were first preprocessed in Python. Single-channel image volumes were individually processed using histogram equalization (to adjust image contrast; scikit-image Contrast Limited Adaptive Histogram Equalization), median filtering (for smoothing; window size = 3, 5), morphological operators (binary closing, to fill holes), and thresholding into binary images. If present, image slices with poor SNR were removed from the top of a stack. Such slices were present if pial vessels were broken during surgery, causing dye to leak on the surface of the brain. Fine-tuning of the binary images was achieved using the Paint, Erase, Smoothing, Islands, and Logical operators effects in the Segment Editor module in 3D Slicer46. 3D Slicer is a free and open-source platform used for 3D image visualization, segmentation, and registration, among other tasks. For manual corrections, emphasis was placed on minimizing manual drawing to reduce human error, and on smoothing edges. For example, jagged borders (arrows) observed in the first round of segmentation are smoothed by the final segmentation. A general workflow of the pipeline to achieve ground-truth annotations is shown in Fig. 3. Example image pre-processing code is available in the MiniVess GitHub repository https://github.com/ctpn/minivess.
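The sequence of operations above (contrast adjustment, smoothing, thresholding, hole filling) can be sketched as follows. To stay self-contained this substitutes plain per-slice histogram equalization for scikit-image's CLAHE and uses a crude global threshold; the function name, window size, and threshold rule are illustrative, not the exact values used for MiniVess.

```python
import numpy as np
from scipy import ndimage

def rough_vessel_mask(vol, median_size=3):
    """Rough 3D vessel mask: per-slice histogram equalization, median
    smoothing, global thresholding, then binary closing to fill holes."""
    vol = vol.astype(np.float32)
    eq = np.empty_like(vol)
    for z in range(vol.shape[0]):
        # Plain equalization: map each intensity to its cumulative frequency
        sl = vol[z]
        hist, edges = np.histogram(sl.ravel(), bins=256)
        cdf = hist.cumsum().astype(np.float32)
        cdf /= cdf[-1]
        eq[z] = np.interp(sl.ravel(), edges[:-1], cdf).reshape(sl.shape)
    smoothed = ndimage.median_filter(eq, size=median_size)
    # Crude global threshold (illustrative; MiniVess tuned thresholds per volume)
    mask = smoothed > (smoothed.mean() + smoothed.std())
    # Binary closing fills small holes inside vessel cross-sections
    return ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
```

In the actual pipeline these automated steps were only a starting point; the resulting masks were then refined in 3D Slicer as described above.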
Machine learning
To improve segmentations, a 2D U-Net23 was trained using the raw images and the preprocessed, segmented images. The U-Net consisted of five levels with 16, 32, 64, 128, and 256 filters, a stride of 2, batch normalization, Adam optimization (1e-4 learning rate), and the Dice loss function. The data were split into 80% for training, 10% for validation, and 10% for testing. Outputs from the U-Net were refined through manual corrections in 3D Slicer, using the same ‘Effects’ listed above. Manual corrections were kept to a minimum to ensure consistency in labels within each volume, and mainly consisted of removing false positives (e.g. noise) and conserving smooth boundaries. Final segmented volumes are the result of five rounds of 2D U-Net training and manual corrections in 3D Slicer (Fig. 5). Supervised learning was implemented using the PyTorch-based MONAI framework47. MONAI offers open-source, standardized model architectures, dataloaders, and various preprocessing functions that are designed for biomedical imaging. We chose to use MONAI so that our code can be repurposed to meet other users’ needs. Of note, since the goal was to create a dataset, not a segmentation model, the model used was deliberately simple.
Data Records
The data are stored in the EBRAINS repository in compressed NIfTI format (*.nii.gz)38. Each raw image stack has an annotated equivalent, designated by a ‘y’ in the file name. Details for each image can be found in the metadata, encoded in the NIfTI1 header format. Each image stack represents a different field-of-view in the cerebrovasculature. Information specific to each image stack can be found in Supplementary Table 1. Maximum projection images of all image volumes are shown in Fig. 4.
Technical Validation
Image volumes were collected and curated by CP (7 years of experience). Ground-truth annotations were created using classic image processing tools (see Methods), manual annotations by CP, and a 2D U-Net23. Accuracy of the final annotations was qualitatively confirmed by CP, and then independently confirmed by MFR and HS (Fig. 2) using 3D Slicer. Final segmentations are the result of five rounds of manual annotations or corrections and U-Net outputs. A comparison between rounds of segmentations can be found in Fig. 5. Quantitatively, the final round of segmentations showed better agreement with, and less variation from, the fourth round of segmentations than the first round (Table 1).
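Round-to-round agreement of this kind is commonly quantified with an overlap measure such as the Dice similarity coefficient (the overlap score behind the Dice loss used in training). A minimal implementation, shown for illustration since the text does not specify the exact metric used:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Computed between consecutive rounds of segmentation, a rising Dice score indicates that the annotations are converging.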
Usage Notes
The MiniVess dataset38 contains image volumes of cerebrovasculature from wild-type mouse, transgenic mouse, and transgenic rat brains. Although small, the dataset covers background strains and species that are commonly used in wet labs.
The dataset can be downloaded as NIfTI (.nii.gz) files, which can then be easily loaded into machine learning models or manipulated using FIJI (Fiji Is Just ImageJ), Python, MATLAB, etc. We provide a tutorial on how to use the MiniVess dataset with a U-Net built in the MONAI framework (https://github.com/project-monai/monai). The MONAI framework also provides several tutorials using NIfTI images, which can be further explored using the MiniVess dataset.
Limitations of the dataset
A limitation of the MiniVess dataset is its lack of diversity. All images were collected using the same microscope and lens, by the same operator. Only the species (mouse or rat), sex, and genotype (wild type or non-transgenic TgCRND848) vary across the dataset (see EBRAINS metadata38). As such, image quality of the raw image stacks will likely differ from that of aged or diseased animal models with greater tissue autofluorescence, such as transgenic TgCRND8 mice that exhibit human β-amyloid 40 and β-amyloid 42 (non-transgenic animals do not exhibit pathology48).
By making the raw and annotated data available, we hope that the MiniVess dataset can be used as a validation dataset by those evaluating their supervised, semi-supervised, or unsupervised segmentation models, and assist the field to use more data-centric ways to design and evaluate their segmentation models.
Code availability
We provide the Python code to separate multichannel and time series 2PFM image volumes into single volumes, which are easier to manipulate. Multichannel XY, XYZ, XYT, and XYZT images are supported. For multichannel images, the user will be asked to select the channel of interest to export. For images with multi-T volumes (XYT and XYZT), the user has the option of exporting each T-stack separately, or as a single file. We also provide sample code for the image pre-processing tools described above. All code can be accessed at the MiniVess Github repository https://github.com/ctpn/minivess.
References
Meijs, M. et al. Robust segmentation of the full cerebral vasculature in 4D CT of suspected stroke patients. Sci. Rep. 7, 15622, https://doi.org/10.1038/s41598-017-15617-w (2017).
Deshpande, A. et al. Automatic segmentation, feature extraction and comparison of healthy and stroke cerebral vasculature. NeuroImage Clin. 30, 102573, https://doi.org/10.1016/j.nicl.2021.102573 (2021).
Bennett, R. E. et al. Tau induces blood vessel abnormalities and angiogenesis-related gene expression in P301L transgenic mice and human Alzheimer’s disease. Proceedings of the National Academy of Sciences 115, E1289–E1298, https://doi.org/10.1073/pnas.1710329115 (2018).
Shi, H. et al. Retinal vasculopathy in Alzheimer’s disease. Frontiers in Neuroscience 15, 1211, https://doi.org/10.3389/fnins.2021.731614 (2021).
Park, E., Bell, J. D., Siddiq, I. P. & Baker, A. J. An analysis of regional microvascular loss and recovery following two grades of fluid percussion trauma: A role for hypoxia-inducible factors in traumatic brain injury. Journal of Cerebral Blood Flow & Metabolism 29, 575–584, https://doi.org/10.1038/jcbfm.2008.151 (2009).
Jain, R. K. et al. Angiogenesis in brain tumours. Nature Reviews Neuroscience 8, 610–622, https://doi.org/10.1038/nrn2175 (2007).
Kim, B. J. et al. Vascular tortuosity may be related to intracranial artery atherosclerosis. International Journal of Stroke 10, 1081–1086, https://doi.org/10.1111/ijs.12525 (2015).
Laíns, I. et al. Retinal applications of swept source optical coherence tomography and optical coherence tomography angiography. Progress in Retinal and Eye Research 84, 100951, https://doi.org/10.1016/j.preteyeres.2021.100951 (2021).
DeBuc, D. C., Rege, A. & Smiddy, W. E. Use of XyCAM RI for noninvasive visualization and analysis of retinal blood flow dynamics during clinical investigations. Expert Review of Medical Devices 18, 225–237, https://doi.org/10.1080/17434440.2021.1892486 (2021).
Schwarzmaier, S. M. et al. In vivo temporal and spatial profile of leukocyte adhesion and migration after experimental traumatic brain injury in mice. Journal of Neuroinflammation 10, 808, https://doi.org/10.1186/1742-2094-10-32 (2013).
Desilles, J.-P. et al. Downstream microvascular thrombosis in cortical venules is an early response to proximal cerebral arterial occlusion. Journal of the American Heart Association 7, e007804, https://doi.org/10.1161/JAHA.117.007804 (2018).
Farkas, E. & Luiten, P. G. M. Cerebral microvascular pathology in aging and Alzheimer’s disease. Progress in Neurobiology 64, 575–611, https://doi.org/10.1016/S0301-0082(00)00068-X (2001).
Koronyo, Y. et al. Retinal amyloid pathology and proof-of-concept imaging trial in Alzheimer’s disease. JCI Insight 2, https://doi.org/10.1172/jci.insight.93621 (2017).
Becher, T. et al. Three-dimensional imaging provides detailed atherosclerotic plaque morphology and reveals angiogenesis after carotid artery ligation. Circulation Research 126, 619–632, https://doi.org/10.1161/CIRCRESAHA.119.315804 (2020).
Denk, W., Strickler, J. H. & Webb, W. W. Two-photon laser scanning fluorescence microscopy. Science 248, 73–76, https://doi.org/10.1126/science.2321027 (1990).
Kuhn, B., Denk, W. & Bruno, R. M. In vivo two-photon voltage-sensitive dye imaging reveals top-down control of cortical layers 1 and 2 during wakefulness. Proceedings of the National Academy of Sciences 105, 7588–7593, https://doi.org/10.1073/pnas.0802462105 (2008).
Stosiek, C., Garaschuk, O., Holthoff, K. & Konnerth, A. In vivo two-photon calcium imaging of neuronal networks. Proceedings of the National Academy of Sciences 100, 7319–7324, https://doi.org/10.1073/pnas.1232232100 (2003).
Szalay, G. et al. Microglia protect against brain injury and their selective elimination dysregulates neuronal network activity after stroke. Nature Communications 7, 11499, https://doi.org/10.1038/ncomms11499 (2016).
Cruz Hernández, J. C. et al. Neutrophil adhesion in brain capillaries reduces cortical blood flow and impairs memory function in Alzheimer’s disease mouse models. Nature neuroscience 22, 413–420, https://doi.org/10.1038/s41593-018-0329-4 (2019).
Schindelin, J. et al. Fiji: An open-source platform for biological-image analysis. Nature Methods 9, 676–682, https://doi.org/10.1038/nmeth.2019 (2012).
Hatamizadeh, A., Yang, D., Roth, H. & Xu, D. UNETR: Transformers for 3D medical image segmentation. arXiv:2103.10504 [cs, eess] (2021).
Chen, J. et al. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv:2102.04306 [cs] (2021).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv:1505.04597 [cs] (2015).
Hilbert, A. et al. BRAVE-NET: Fully automated arterial brain vessel segmentation in patients with cerebrovascular disease. Frontiers in Artificial Intelligence 3, https://doi.org/10.3389/frai.2020.552258 (2020).
Hormel, T. T. et al. Artificial intelligence in OCT angiography. Progress in Retinal and Eye Research 85, 100965, https://doi.org/10.1016/j.preteyeres.2021.100965 (2021).
Ouyang, C. et al. Causality-inspired single-source domain generalization for medical image segmentation. arXiv:2111.12525 [cs] (2021).
Aubreville, M. et al. A completely annotated whole slide image dataset of canine breast cancer to aid human breast cancer research. Scientific Data 7, 417, https://doi.org/10.1038/s41597-020-00756-z (2020).
Bertram, C. A. et al. Are pathologist-defined labels reproducible? Comparison of the TUPAC16 mitotic figure dataset with an alternative set of labels. In Interpretable and Annotation-Efficient Learning for Medical Image Computing, 204–213, https://doi.org/10.1007/978-3-030-61166-8_22 (Springer International Publishing, 2020).
Wu, E. et al. How medical AI devices are evaluated: Limitations and recommendations from an analysis of FDA approvals. Nature Medicine 27, 582–584, https://doi.org/10.1038/s41591-021-01312-x (2021).
Balagurunathan, Y., Mitchell, R. & El Naqa, I. Requirements and reliability of AI in the medical context. Physica Medica 83, 72–78, https://doi.org/10.1016/j.ejmp.2021.02.024 (2021).
Zoph, B. et al. Rethinking pre-training and self-training. arXiv:2006.06882 [cs, stat] (2020).
Azizi, S. et al. Big self-supervised models advance medical image classification. arXiv:2101.05224 [cs, eess] (2021).
Todorov, M. I. et al. Machine learning analysis of whole mouse brain vasculature. Nature Methods 17, 442–449, https://doi.org/10.1038/s41592-020-0792-1 (2020).
Uhlén, M. et al. Tissue-based map of the human proteome. Science 347, https://doi.org/10.1126/science.1260419 (2015).
Thul, P. J. et al. A subcellular map of the human proteome. Science 356, https://doi.org/10.1126/science.aal3321 (2017).
Uhlén, M. et al. A pathology atlas of the human cancer transcriptome. Science 357, https://doi.org/10.1126/science.aan2507 (2017).
Ljosa, V., Sokolnicki, K. L. & Carpenter, A. E. Annotated high-throughput microscopy image sets for validation. Nature Methods 9, 637–637, https://doi.org/10.1038/nmeth.2083 (2012).
Poon, C., Teikari, P., Rachmadi, M. F., Skibbe, H. & Hynynen, K. MiniVess: A dataset of rodent cerebrovasculature from in vivo multiphoton fluorescence microscopy imaging (v1). EBRAINS https://doi.org/10.25493/HPBE-YHK (2022).
Teikari, P., Santos, M., Poon, C. & Hynynen, K. Deep learning convolutional networks for multiphoton microscopy vasculature segmentation. arXiv:1606.02382 [cs] (2016).
Haft-Javaherian, M. et al. Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models. PLoS One 14, e0213539, https://doi.org/10.1371/journal.pone.0213539 (2019).
Reed, C. J. et al. Self-supervised pretraining improves self-supervised pretraining. arXiv:2103.12718 [cs] (2021).
Sanner, A., Gonzalez, C. & Mukhopadhyay, A. How reliable are out-of-distribution generalization methods for medical image segmentation? arXiv:2109.01668 [cs, eess] (2021).
Northcutt, C. G., Athalye, A. & Mueller, J. Pervasive label errors in test sets destabilize machine learning benchmarks. arXiv:2103.14749 [cs, stat] (2021).
Hakamata, Y. et al. Green fluorescent protein-transgenic rat: A tool for organ transplantation research. Biochemical and Biophysical Research Communications 286, 779–785, https://doi.org/10.1006/bbrc.2001.5452 (2001).
Holtmaat, A. et al. Long-term, high-resolution imaging in the mouse neocortex through a chronic cranial window. Nat. Protoc. 4, 1128–1144, https://doi.org/10.1038/nprot.2009.89 (2009).
Fedorov, A. et al. 3D Slicer as an image computing platform for the quantitative imaging network. Magn. Reson. Imaging 30, 1323–1341, https://doi.org/10.1016/j.mri.2012.05.001 (2012).
MONAI Consortium. MONAI: Medical open network for AI. Zenodo https://doi.org/10.5281/zenodo.4323058 (2020).
Chishti, M. et al. Early-onset amyloid deposition and cognitive deficits in transgenic mice expressing a double mutant form of amyloid precursor protein 695. Journal of Biological Chemistry 276, 21562–21570, https://doi.org/10.1074/jbc.m100710200 (2001).
Acknowledgements
This work was supported by funding from the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (RO1-EB003268, awarded to K.H.), the Canadian Institutes of Health Research (FDN 154272, awarded to K.H.), and the Temerty Chair in Focused Ultrasound Research at Sunnybrook Health Sciences Centre. This work was also supported by the program for Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) from the Japan Agency for Medical Research and Development AMED (JP15dm0207001).
Author information
Authors and Affiliations
Contributions
C.P. and P.T. contributed equally to this work. P.T. conceived the experiment. C.P. and P.T. wrote code for data conversion and the U-Net. C.P. conducted the experiments and analyzed the results. C.P. and P.T. wrote the manuscript. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Poon, C., Teikari, P., Rachmadi, M.F. et al. A dataset of rodent cerebrovasculature from in vivo multiphoton fluorescence microscopy imaging. Sci Data 10, 141 (2023). https://doi.org/10.1038/s41597-023-02048-8