Abstract
Mixed reality navigation (MRN) technology is emerging as an increasingly significant topic in neurosurgery. MRN enables neurosurgeons to “see through” the head with an interactive, hybrid visualization environment that merges virtual- and physical-world elements. By offering immersive, intuitive, and reliable guidance for preoperative planning and intraoperative intervention for intracranial lesions, MRN shows potential as an economically efficient and user-friendly alternative to standard neuronavigation systems. However, the clinical research and development of MRN systems present challenges: recruiting a sufficient number of patients within a limited timeframe is difficult, and acquiring low-cost, commercially available, medically relevant head phantoms is equally challenging. To surmount these obstacles and accelerate the development of novel MRN systems, this study presents a dataset designed for MRN system development and testing in neurosurgery. It includes CT and MRI data from 19 patients with intracranial lesions, together with derived 3D models of anatomical structures and validation references. The models are provided in Wavefront object (OBJ) and Stereolithography (STL) formats, supporting the creation and assessment of neurosurgical MRN applications.
Subject terms
Mixed reality, Navigation, Neurosurgery, Intracranial lesion, Computed tomography, Magnetic resonance imaging
Background & Summary
Intracranial lesions, pathological alterations within various brain regions, can exert pressure on critical neural structures1, potentially leading to neurological deficits or life-threatening conditions2,3,4. Timely diagnosis and neurosurgical intervention are therefore essential to preserve neurological function, improve quality of life, and avert these risks2,5,6.
Substantial advancements have been made over the years in the methods by which neurosurgeons approach and treat intracranial lesions. For instance, commercial neuronavigation systems have transformed neurosurgical interventions by precisely tracking the patient’s body and surgical instruments7,8,9. While these systems significantly enhance surgical precision, traditional techniques, such as pointer-based navigation, present ergonomic challenges10,11,12,13,14. Neurosurgeons frequently have to switch instruments, leading to disruptions, and must toggle their focus between the surgical site, navigation tools, and monitors. This continuous shifting of attention, and the mental effort required to integrate images into the surgical context, can significantly increase cognitive load and mental strain, potentially affecting performance and learning in both surgical and educational settings10,11,12,13,14.
Augmented reality (AR), in particular microscope-based AR navigation, has emerged as a significant breakthrough. It employs the microscope’s optical focus as a virtual guide, superimposing digitally outlined structures directly onto the surgical field8,9,15,16. This reduces the need to shift attention and has proven clinically beneficial, enhancing comfort and the understanding of anatomical structures. However, standard combined navigation and AR systems are expensive and require extensive procedural setup, prompting interest in more accessible alternatives. Among these, head-mounted device (HMD)-based AR, especially using optical see-through variants, stands out for its immersive and cost-effective nature11,12,13,17,18.
Another technological advancement, mixed reality (MR), blends the physical and virtual worlds and offers an interactive environment distinct from AR10,12,13,14,18,19,20. MR technology digitizes real-world data, allowing more than the mere overlay of virtual elements. Introduced to the market with Microsoft’s HoloLens, its advanced localization capabilities permit the stable integration of three-dimensional (3D) elements into reality. The growth in mixed reality navigation (MRN) research highlights its potential as a cost-effective and user-friendly alternative approach to traditional neuronavigation systems10,11,12,14,18,21,22,23,24,25,26.
Essential to MRN’s functionality is the precise alignment of preoperative imaging data with the patient’s physical anatomy. This is achieved through various registration techniques, starting with procedures similar to those from conventional navigation systems, such as landmark-based14,18,27,28,29 and surface-based approaches30,31, and extending to manual alignment12,13,20,32,33,34 and registration based on a laser crosshair simulator (LCS)21,35. A straightforward, reliable, and minimally user-dependent registration method can boost the neurosurgeon’s confidence in using MRN21,35. On the other hand, MRN systems combining accurate anatomical and multimodal imaging data, such as blood flow information and white matter tracts, offer a holistic visualization, minimizing the risk of surgical complications and neurological impairment18,36. In summary, the virtual-physical alignment and the integration of diverse imaging modalities stand out as active fields in MRN research.
While testing MRN systems in clinical settings can directly validate their potential benefits for neurosurgical interventions, numerous challenges exist. Recruiting a sufficient number of patients to verify the clinical feasibility of a new technology often takes a long time37. Obtaining comprehensive data and informed consent from these patients within constrained timeframes poses additional challenges37. Furthermore, securing ethical approval for non-commercial medical device trials adds complexity and delays MRN development due to the rigorous documentation needed for safety and efficacy validation. Some researchers turn to commercially available patient head or skull phantoms, but these are costly. Everyday plastic phantoms serve as a cheaper alternative, but their medical relevance is limited37.
Accordingly, the contribution of this study is twofold. Firstly, a novel dataset tailored for MRN system development and testing in the neurosurgical domain is introduced. This dataset includes computed tomography (CT) or multimodal magnetic resonance imaging (MRI) data from 19 patients with intracranial lesions. From these data, Wavefront Object (OBJ) files of anatomical structure holograms and Stereolithography (STL) files of the patients’ heads were generated and optimized for cost-effective 3D printing. These models are invaluable for testing MRN registration algorithms and refining system functionalities before clinical testing. Secondly, a technical validation ensuring the dataset’s validity and reliability is provided. This rigorous validation enables researchers to easily replicate and apply the findings to optimize their MRN systems, emphasizing the study’s significance and potential impact on the neurosurgical community.
Methods
This section outlines the construction process of the dataset, beginning with case enrollment and data selection (see Fig. 1). It proceeds through a sequence of image processing steps, including anonymization, de-identification, image fusion, segmentation, 3D reconstruction, and optimization, to generate 3D models that support holographic visualization and 3D printing tailored for testing MRN systems.
Subject cohort
The study collected preoperative cranial MRI and CT data from 44 consecutive patients diagnosed with intracranial lesions, including neurological neoplasms and hypertensive cerebral hemorrhages, gathered over four years (2018–2021) at two facilities: the First Medical Center in Beijing and the Hainan Hospital in Sanya, both affiliated with the Chinese PLA General Hospital. The study was approved by the Institutional Review Board (IRB) of the Chinese PLA General Hospital (approval number: S2023–142–01), and informed consent for the use and publication of potentially identifiable imaging data for research was obtained from each patient or their legal relatives. In accordance with ethical guidelines, data with uniquely identifiable characteristics were excluded to ensure adequate de-identification and prevent privacy breaches.
In all cases, more than five adhesive skin markers were attached to the scalp before imaging to establish known physical-world landmarks within the images. As previously published by the study group14,18,21,35, these markers served as reference points for patient registration and for comparing the MRN system with standard navigation systems. The surgeries, conducted under the guidance of standard navigation systems without significant complications, not only adhered to clinical routine standards requiring high-quality preoperative imaging but also served to validate the preoperative imaging and marker configuration. This dual role laid a solid foundation for evaluating the MRN system and highlighted the data’s relevance and accuracy for assessing this innovative system, even though the surgeries did not directly utilize the MRN system.
Image acquisition
MRI data were acquired using a 1.5 T MRI scanner (Espree, Siemens, Erlangen, Germany), while CT data were collected with a 128-slice CT scanner (SOMATOM, Siemens, Forchheim, Germany). The MRI scanning parameters were: T1-weighted imaging (T1WI) and T1-weighted contrast-enhanced (T1-CE) imaging using a magnetization prepared rapid acquisition gradient echo (MPRAGE) sequence with the administration of gadolinium (repetition time (TR) 1650 msec, echo time (TE) 3.02 msec, matrix size 192 × 256, field of view (FoV) 187.5 × 250 mm2, 176 slices, slice thickness 1.00 mm); a T2-weighted sequence (T2WI; TR 5500 msec, TE 93 msec, matrix size 240 × 320, FoV 172.5 × 230 mm2, 30 slices, slice thickness 3.90 mm); and diffusion tensor imaging (DTI) data using a single-shot spin echo diffusion-weighted echo planar imaging (EPI) sequence (TR 9200 msec, TE 86 msec, matrix size 128 × 128, FoV 250 × 250 mm2, 40 slices, slice thickness 3.51 mm, no interslice gap, 20 diffusion-encoding gradient directions, high b-value 1000 s/mm2). The CT scanning parameters were: tube voltage 120 kVp, current 50 mA, window width 120, window level 40, matrix size 512 × 512, FoV 251 × 251 mm2, and slice thickness 0.625 mm, resulting in a voxel size of 0.500 × 0.500 × 0.625 mm3.
Data selection
To maintain the dataset’s integrity and homogeneity, the inclusion criteria for imaging data were stringent, necessitating high-quality, high-resolution imaging with visible intracranial lesion boundaries in at least one image sequence. Imaging data exhibiting significant artifacts or spatial distortion was excluded. Importantly, images lacking complete cranial or skin contours were also discarded, as they were unsuitable for generating comprehensive life-sized head phantoms for 3D printing. Additionally, given the critical role of patient facial features in the registration process for both standard navigation and MRN systems, no algorithms that could potentially modify the original imaging facial features were used. To protect patient privacy, de-identification procedures were applied at the case enrollment stage, involving a thorough examination of patient images to eliminate cases with identifiable facial anomalies or scars. Visual inspections by three independent neurosurgeons (Z.Q., X.C., and J.Z.) confirmed that all selected cases were non-identifiable by facial characteristics. Ultimately, data from 19 patients were chosen based on these criteria, with the remainder excluded. Among the selected patients (female / male: 7 / 12, mean age: 54.4 ± 18.5 years), 15 were subjected to MRI, and four to CT scans. The demographic information can be found in Table 1.
Data anonymization
Data preprocessing was performed using the freely available open-source software platform 3D Slicer (Version 5.1.0, https://www.slicer.org/)38. Upon importing the data of the selected patients, the imaging sequences were converted from the Digital Imaging and Communications in Medicine (DICOM) file format to the Nearly Raw Raster Data (NRRD) file format, which is free of patient metadata and therefore fully anonymized. The transition to the NRRD file format ensured complete anonymization and simplified data handling. Additionally, NRRD maintains the integrity of the original imaging data without compression or loss, allowing reconversion back to a metadata-free DICOM format when necessary and ensuring broad compatibility and adherence to privacy protection standards.
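The conversion in this study was carried out through the 3D Slicer interface; for readers who prefer a scripted route, the following minimal sketch performs an equivalent DICOM-to-NRRD conversion with SimpleITK (an assumption of this example, not the tool used here). The input directory and output file name are hypothetical.

```python
# Minimal sketch: convert a DICOM series to NRRD, which stores only geometry
# and voxel data and therefore drops the DICOM patient metadata.
# Assumes SimpleITK is installed (pip install SimpleITK); paths are hypothetical.
import SimpleITK as sitk

dicom_dir = "case_01/dicom"          # hypothetical folder containing one DICOM series
reader = sitk.ImageSeriesReader()
series_files = reader.GetGDCMSeriesFileNames(dicom_dir)
reader.SetFileNames(series_files)
image = reader.Execute()             # 3D volume with spacing/origin/direction preserved

# Writing to NRRD keeps the raw voxel data and geometry but no patient tags.
sitk.WriteImage(image, "case_01_t1ce.nrrd", useCompression=False)
```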
Image fusion
Neuroimaging data encompassing multimodal sequences acquired at different times, with different modalities, or on different scanners required co-registration to combine their information, aiding precise surgical planning and functional preservation. This essential processing step aligned images from diverse modalities, such as T1-CE, T2WI, and DTI, or images of the same modality obtained at different intervals. If imaging comprised only a single modality, for example in cases of cerebral hemorrhage undergoing a baseline CT scan alone, co-registration was not required. The highest-resolution scan was used as the reference image (RI) to ensure accurate alignment (see Fig. 2A). Co-registration not only allowed the fusion of images for simultaneous observation and analysis (see Fig. 2B) but also harmonized their coordinate systems (i.e., aligning origins, orientations, and scales), making image-defined content such as segmentations, models, and trajectories visible, interactive, and modifiable across different images and ensuring a unified and precise integration of all data within a consistent coordinate system (see Fig. 2C). Each case’s neuroimaging information can be found in Table 2.
The “General Registration (Elastix)” extension on the 3D Slicer platform facilitated this process39. The calculated registration matrix was then saved within the 3D Slicer scene files, enabling the transformation of segmented structures or 3D reconstructed models from multiple modal sequences into the unified coordinate system, thus enhancing the precision and applicability of subsequent analyses and surgical planning.
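The registration itself was computed with the Elastix extension; as a hedged illustration of the same idea outside 3D Slicer, the sketch below performs a rigid, mutual-information-based co-registration with SimpleITK's built-in registration framework and resamples the moving image into the reference image's coordinate system. File names and optimizer settings are illustrative, not those used for the dataset.

```python
# Minimal sketch of rigid multimodal co-registration (an alternative to the
# Elastix extension used in 3D Slicer); paths and parameters are illustrative.
import SimpleITK as sitk

fixed = sitk.ReadImage("case_01_t1ce.nrrd", sitk.sitkFloat32)   # reference image (RI)
moving = sitk.ReadImage("case_01_t2.nrrd", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(fixed, moving,
                                            sitk.Euler3DTransform(),
                                            sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
# Resample the moving image into the reference image's coordinate system.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "case_01_t2_in_t1ce_space.nrrd")
```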
Image post-processing
Image post-processing referred to generating model files from volumetric data suitable for 3D printing or holographic visualization. This could be coarsely divided into two main steps: image segmentation and 3D reconstruction (see Fig. 2C and D).
Various segmentations related to the surgical treatment of intracranial lesions were developed, yielding holographic models visualizable through MRN. In neuro-oncological minimally invasive surgical planning, attention was given to the lesion’s location and three-dimensional structure, and to the segmentation of the lesion, adjacent arteries and veins, and functionally relevant structures such as major white matter fiber tracts; these structures were deemed significant by surgeons for surgical planning and execution. For the surgical intervention of intracerebral hemorrhage, the three-dimensional structure of the hemorrhage and the models used for surgical guidance (e.g., puncture pathways to the hemorrhage, endoscopic routes, and craniotomies compatible with port surgery) were delineated. Segmentation was performed using the “Segment Editor40,” “UKF Tractography41,42,” “Markups,” and “Curve Maker” extension modules in 3D Slicer, with the capability for both manual and automatic segmentation. Specifically, structures such as the lesions, vessels, hemorrhages, and ventricles were outlined using automatic segmentation where possible, supplemented by manual adjustments for refinement.
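Segmentation in this study was performed interactively in the Segment Editor; for orientation, the sketch below shows the commonly documented 3D Slicer scripting pattern for applying a simple Threshold effect from the Python console. The node name, segment name, and intensity range are illustrative only.

```python
# Minimal sketch (3D Slicer Python console): create a segmentation and apply the
# Threshold effect of the Segment Editor. Names and threshold values are illustrative.
masterVolumeNode = slicer.util.getNode("case_01_t1ce")  # previously loaded volume

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(masterVolumeNode)
segmentationNode.GetSegmentation().AddEmptySegment("lesion")

# Set up a Segment Editor widget for scripted use of its effects.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(masterVolumeNode)  # setSourceVolumeNode in newer Slicer releases

segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "100")   # illustrative intensity range
effect.setParameter("MaximumThreshold", "300")
effect.self().onApply()
```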
Three-dimensional reconstruction involved layering and aligning segmented two-dimensional images to form a seamless three-dimensional surface, which was essential for holographic visualization or converting the segmented data into a voxel-based format suitable for 3D printing applications (see Fig. 2D). Employing the “Segmentation” and “Model Maker” extensions in the 3D Slicer software, clusters of segmented voxels were converted into detailed 3D models.
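Following the same scripting approach, segments can also be exported to model nodes and written to OBJ files from the Python console; this is a hedged illustration of the export step, with a hypothetical output directory, rather than the exact procedure used to generate the dataset.

```python
# Minimal sketch (3D Slicer Python console): export segments to model nodes and
# save each as an OBJ file; the output folder is hypothetical.
import os

segmentationNode = slicer.util.getNode("Segmentation")  # segmentation created earlier
shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
exportFolderItemId = shNode.CreateFolderItem(shNode.GetSceneItemID(), "Exported models")
slicer.modules.segmentations.logic().ExportAllSegmentsToModels(segmentationNode,
                                                               exportFolderItemId)

outputDir = "case_01/obj"  # hypothetical output directory
os.makedirs(outputDir, exist_ok=True)
for modelNode in slicer.util.getNodesByClass("vtkMRMLModelNode"):
    slicer.util.saveNode(modelNode, os.path.join(outputDir, modelNode.GetName() + ".obj"))
```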
Validation reference objects
Validation reference objects were created to assess the accuracy of MRN systems in aligning virtual images with physical reality. This was achieved by establishing reference objects in virtual and physical space to compare their positional correspondence. Two principal reference relationships were provided in the dataset: (1) Landmark-based comparison, where markers affixed to the patient’s scalp during imaging are identified and segmented, allowing their positions to be visualized in both the MRN system’s virtual images and the physical model (see Fig. 3A–C); (2) laser positioning line comparison, where laser lines projected onto the patient’s skin by the scanner’s frame represented three orthogonal reference planes in the images, corresponding to specific planes in the computer-generated images where the principal axis coordinate value was zero (see Fig. 3D–F). For the implementation, virtual validation objects for import into the MRN system and their corresponding 3D-printed physical models were created. Markers were segmented and modeled, and their centroids were extracted using the “Segment Editor” and “Segment Statistics” extension modules (see Fig. 3A)40. Laser positioning lines were modeled using the “Markups” and “Curve Maker” extension modules (see Fig. 3E), with a “Scalp quadrants” virtual model designed via the “Easy Clip” extension module to enhance the visual representation of laser lines in virtual space (see Fig. 3D).
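For reference, marker centroids can also be extracted programmatically with the Segment Statistics module, following the pattern documented for 3D Slicer scripting; the segmentation node name below is illustrative.

```python
# Minimal sketch (3D Slicer Python console): compute marker centroids with the
# Segment Statistics module; the segmentation node name is illustrative.
import SegmentStatistics

segmentationNode = slicer.util.getNode("Markers")  # segmentation containing marker segments
segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.getParameterNode().SetParameter(
    "LabelmapSegmentStatisticsPlugin.centroid_ras.enabled", "True")
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()

for segmentId in stats["SegmentIDs"]:
    centroid_ras = stats[segmentId, "LabelmapSegmentStatisticsPlugin.centroid_ras"]
    name = segmentationNode.GetSegmentation().GetSegment(segmentId).GetName()
    print(f"{name}: centroid (RAS) = {centroid_ras}")
```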
3D-printed phantom generation
The STL files used for 3D printing, derived from segmented skin surfaces within reference CT/MRI data, underwent a 3D reconstruction process. This involved a standardized method using the “Segment Editor” extension’s tools (e.g., threshold, paintbrush, scissors, islands, hollowing, and smoothing) to extract a 3D skin surface with a designated thickness of 1 mm21,35,40. However, directly using these raw STL files for 3D printing posed several challenges, including surface roughness from noise, discontinuities such as gaps or holes, potentially hazardous sharp spikes or edges from anatomical structures, and an uneven bottom or inclined phantom stance that could complicate the printing process, increase material usage, and extend printing time.
To address these challenges and enhance the continuity of the process from segmentation to printing, the STL file generation was refined for optimal efficiency and quality. The initial step involved applying Gaussian smoothing with a minimal voxel size of approximately 1 × 1 × 1 mm3 during segmentation, significantly reducing surface noise while maintaining anatomical accuracy. Subsequently, a rectangular cropping/filling technique was employed using the “scissors” tool to create a flat bottom surface aligned with the axial standard plane, ensuring a stable base for printing. Critical attention was given to smoothing sharp edges to ensure model quality. This comprehensive approach addressed the initial challenges and produced cost-effective, high-quality, and researcher-friendly 3D skin surfaces.
To accommodate various research and testing objectives for MRN systems, two variants of head phantoms were designed by integrating the 3D skin surface with validation reference objects. These variants include one with the 3D skin surface and markers (see Fig. 3B) and another with the 3D skin surface, markers, as well as positioning lines (see Fig. 3E). The integration process was facilitated through the “Merge Models” extension module. Notably, no transformations were applied throughout the generation and optimization of the 3D skin surface, preventing any misalignment with the validation reference objects.
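The mesh optimization described above was carried out within 3D Slicer; as an optional, hedged sanity check before sending a file to the printer, an exported STL can be inspected for watertightness with the open-source trimesh library (not part of the original workflow). The file names are hypothetical.

```python
# Minimal sketch: check a generated STL for printability issues (holes,
# non-watertight surfaces) with the trimesh library; the path is hypothetical.
import trimesh

mesh = trimesh.load("case_01_head_with_lines.stl")
print("watertight:", mesh.is_watertight)   # closed surface suitable for printing
print("bounding box extents (mm):", mesh.extents)

if not mesh.is_watertight:
    # Basic, automatic clean-up; complex defects still require manual repair.
    trimesh.repair.fill_holes(mesh)
    trimesh.repair.fix_normals(mesh)
    mesh.export("case_01_head_with_lines_repaired.stl")
```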
Data Records
All imaging datasets and generated metadata are publicly available at https://doi.org/10.6084/m9.figshare.24550732.v6, stored in the figshare repository43. This collection features 47 raw CT/MRI datasets in NRRD and anonymized DICOM archive formats from 19 patients, 240 holograms in 133 OBJ files, 19 pairs of STL files (with or without positioning lines) for 3D printing, and 19 scene files in medical reality bundle (MRB) format tailored for processing and generating the aforementioned files within 3D Slicer. Additionally, each case’s marker centroid coordinates are encapsulated within the respective MRB file for precise accuracy assessment and analysis. The data are methodically organized into hierarchical directories based on patient ID and file type, exemplified by “case_01” (see Fig. 4), and the patient IDs in the directory or file names can be cross-referenced with Tables 1 & 2 of the main manuscript. Documented pathological data include post-operative histopathological results and anatomical location, with lesion volumes automatically calculated via the “Segment Statistics” extension and lesion depths determined through the “Model to Model Distance” extension in 3D Slicer (see Table 1). Surgical data encapsulate patient surgical positioning and segmented anatomical structures pertinent to surgical intervention or navigation system co-registration. Voxel and resolution parameters are recorded in the datasheet for each case’s RI (see Table 2). The 3D-printed phantoms’ sizes, material consumption, and anticipated printing durations are reported, enabling researchers to select an appropriate 3D printer and estimate time and financial expenditures (see Table 4).
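As a starting point for working with the records, the following minimal sketch loads one case's NRRD volume and one of its mesh files in Python using the pynrrd and trimesh libraries; the file names are illustrative of the directory layout described above, not verbatim paths from the repository.

```python
# Minimal sketch: load one case's imaging volume (NRRD) and a derived mesh.
# Requires: pip install pynrrd trimesh; file names below are illustrative only.
import nrrd
import trimesh

volume, header = nrrd.read("case_01/nrrd/case_01_t1ce.nrrd")    # hypothetical path
print("volume shape:", volume.shape)
print("voxel spacing / directions:", header["space directions"])

lesion = trimesh.load("case_01/obj/lesion.obj", force="mesh")    # hypothetical path
print("lesion mesh vertices:", len(lesion.vertices))
```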
Technical Validation
The dataset creation process encompassed four stages: 3D medical imaging, image processing, 3D printing, and the creation of validation objects. Quality control measures were implemented at each stage to ensure rigor and reliability.
De-identification, anonymization, and integrity of imaging data collection
The CT/MRI scanners used for data collection are certified commercial products routinely employed in clinical settings, operated and maintained by qualified physicians or technicians who also perform regular quality control checks. During the data selection phase, subjects with highly recognizable facial features were excluded, and the non-identifiability of facial characteristics in the retained subjects was confirmed through visual inspection (by Z.Q., X.C., and J.Z.). Subsequently, patient metadata was removed during the data conversion step (from DICOM to NRRD format) to achieve anonymization. Furthermore, each case was visually inspected (by Z.Q.) to ensure that the original imaging data were neither compressed nor corrupted, maintaining the integrity of the dataset.
Validity and usability of holograms
Image processing was conducted using the open-source platform 3D Slicer to guarantee a replicable model generation process. User-dependent operations, such as segmentation, annotation, and white matter fiber tract reconstruction, were performed by a neurosurgeon (Z.Q., an attending physician with 6 years of experience) with extensive software and neurosurgical expertise. The time required for segmentation operations is detailed in Table 3. Generating the data package for each case of neurological neoplasm took approximately 60 minutes, while for each case of hypertensive cerebral hemorrhage it took around 40 minutes. The final surgical plans were reached through consensus after discussions within the treatment team, including two independent senior neurosurgeons (X.C. and J.Z., chief physicians with more than 20 years of experience each). In prior MRN studies, the MRN system based on the Microsoft HoloLens 2 (Microsoft, Redmond, WA, USA) demonstrated fundamental consistency with co-registered commercial navigation systems, validating the clinical effectiveness of the segmentation process. Specifically, the successful visualization of all 240 holograms in the previous study21 substantiates the usability of the 133 OBJ files within the MRN system.
Validity and usability of 3D-printable head phantoms
To ensure accurate 3D printing of the phantom heads, a commercial 3D printer, A5S (Shenzhen Aurora Technology Co., Ltd, China), was used to create 1:1 scale models for all 19 cases (parameters: nozzle temperature: 210 °C, platform temperature: 50 °C, material: polylactic acid (PLA), resolution: 0.3 mm, fill level: 10%). All 19 models with positioning lines were successfully printed21, with an average duration of 22.4 ± 3.1 hours and an average cost significantly lower than commercial head phantoms, demonstrating the process’s efficiency and cost-effectiveness (See Table 4). While models without positioning lines were not individually validated through 3D printing, their simpler design compared to those with positioning lines suggests they could also be successfully printed.
Usability of validation reference objects
Positioning lines and markers were generated using a semi-automated method, whereas the extraction of marker centroids and the calculation of their coordinates were automated, ensuring high reproducibility. In prior research by the study group, positioning lines served as a visual reference for MRN system alignment assessment, and markers were used for quantitative evaluations. Specifically, they acted as known points in space (i.e., the ground truth), providing references for measured points in experiments and aiding the calculation of metrics critical for assessing MRN system accuracy, such as the fiducial localization error (FLE), fiducial registration error (FRE), and target registration error (TRE). The centroid, virtual point, and physical point coordinates were collected for all markers in the study21, accumulating a total of 124 coordinate pairs. Across all measurements, the FLE was 1.9 ± 1.0 mm, the TRE was 3.0 ± 1.1 mm, and the FRE was 2.1 ± 0.6 mm. Given these outcomes, it is reasonable to assert that the dataset quantitatively reflects the accuracy of the MRN system: the measurements, albeit user-dependent, are consistently reliable, and the geometric congruence between the virtual and physical models is high, so it does not significantly influence the accuracy evaluation of the MRN system or analogous systems.
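For readers implementing their own accuracy assessment, the sketch below illustrates how FRE and TRE can be computed from corresponding virtual and physical point sets using a least-squares rigid (Kabsch/SVD) registration; the coordinates are synthetic placeholders, not values from the study.

```python
# Minimal sketch: point-based rigid registration (Kabsch/SVD) and the resulting
# fiducial registration error (FRE) and target registration error (TRE).
# The coordinate arrays below are synthetic placeholders, not dataset values.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Virtual (planned) and physical (measured) fiducial positions in mm.
virtual_fids = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 70], [40, 40, 10]], float)
physical_fids = virtual_fids + np.random.normal(scale=1.5, size=virtual_fids.shape)

R, t = rigid_register(virtual_fids, physical_fids)
fre = np.linalg.norm((virtual_fids @ R.T + t) - physical_fids, axis=1)
print("FRE per fiducial (mm):", np.round(fre, 2), "mean:", round(fre.mean(), 2))

# TRE: error at a target point not used for registration (e.g., the lesion centre).
virtual_target = np.array([20.0, 30.0, 40.0])
physical_target = virtual_target + np.random.normal(scale=1.5, size=3)
tre = np.linalg.norm((R @ virtual_target + t) - physical_target)
print("TRE at target (mm):", round(tre, 2))
```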
Dataset scalability
The dataset exhibits good scalability. In the context of MRN, scalability refers to the potential of the dataset to be extrapolated and applied in environments, with devices, or with algorithms different from the original research scenario, effectively facilitating other researchers in developing and testing their MRN systems; a dataset that applies only to specific research scenarios has limited scalability. Hence, during dataset creation, this study opted for representative samples and configurations to ensure broad applicability. Firstly, the cases encompassed in this dataset span diverse lesion localizations, surgical positions, and neurosurgical intervention plans, ensuring clinical balance and mitigating case selection biases during new system testing. Secondly, this dataset is conducive to validating other MRN or AR systems, e.g., AR systems mounted on smartphones or tablets. As long as researchers integrate quantitative measurement modules (e.g., virtual probes, rulers, or protractors) within their systems, they can conduct quantitative assessments on the known marker points according to their requirements. Lastly, the dataset is compatible with various MRN registration methods. For example, the known markers on the 3D-printed phantom facilitate research and evaluation of landmark-based registration, while phantoms with and without positioning lines support LCS registration and surface-based registration. This dataset thus offers broad generalizability across cases, devices, and algorithms, manifesting technical and economic efficiencies.
Usage Notes
Any individual or institution may freely download, share, copy, or republish the data in any medium or format for reasonable research purposes. The dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/). Additionally, our data permits researchers to adapt, adjust, modify, or transform according to their research objectives. We aim to offer minimally user-dependent models in the public dataset, allowing researchers to test and optimize their MRN systems.
Medical image processing
NRRD is a widely used file storage format for medical imaging, supported by various free and open-source medical imaging software packages such as 3D Slicer (https://www.slicer.org/), ITK-SNAP (https://www.itksnap.org/), MeVisLab (https://www.mevislab.de/), Studierfenster (www.studierfenster.at), and DicomWorks (https://www.dicomworks.com/). It is also supported by programming languages and platforms such as MATLAB (https://www.mathworks.com/), Python (https://www.python.org/), and VTK (https://www.vtk.org/). Commercial image processing software can further process or analyze the areas or structures of interest.
In this study, the processing of medical images was conducted entirely within the 3D Slicer platform. 3D Slicer is a powerful open-source software platform for medical image processing and computer-assisted surgery. With its robust integrative and modular design38, users can select desired extension modules for expansion or integrate renowned external tools and libraries (e.g., VTK, Insight Toolkit (ITK) (http://www.itk.org), Python libraries). Furthermore, 3D Slicer boasts an active developer and user community, providing abundant educational and training resources, significantly enhancing the possibility and flexibility for clinicians and researchers to obtain free support. We encourage clinicians and researchers to customize their medical image processing methodologies, data, and models using the 3D Slicer platform as per their requirements. To facilitate this, well-organized MRB files are provided for each case in the dataset. MRB, a binary format, encapsulates all data within a 3D Slicer scene and is directly supported by the 3D Slicer software. Moreover, it can be transformed into a .zip file by simply changing the extension, allowing users direct access to the internal data.
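Because an MRB file is internally a zip archive, its contents can also be listed or unpacked directly from Python without opening 3D Slicer; the file name in this short sketch is hypothetical.

```python
# Minimal sketch: an MRB scene file is a zip archive, so its contents can be
# inspected with Python's standard library; the file name is hypothetical.
import zipfile

with zipfile.ZipFile("case_01.mrb") as mrb:
    for name in mrb.namelist():
        print(name)                  # e.g. the .mrml scene file, NRRD volumes, models
    mrb.extractall("case_01_scene")  # unpack for direct access to the internal data
```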
Holographic visualization & 3D printing
OBJ and STL are widely accepted standard file formats in the 3D graphics industry, popular among 3D modeling and computer graphics communities due to their simplicity, flexibility, and extensive support. In the dataset, each OBJ file is accompanied by a corresponding material library (MTL) file within the same folder. MTL is a ubiquitous file format that applies color and material information to OBJ files, allowing researchers to open and use the OBJ files more quickly and conveniently. Numerous platforms and libraries support OBJ and STL, including but not limited to open-source platforms such as 3D Slicer, CloudCompare (https://www.cloudcompare.org/main.html), Blender (https://www.blender.org/), and Three.js (https://threejs.org/); commercial platforms such as AutoCAD (https://www.autodesk.com/), Maya (https://www.autodesk.com/products/maya/overview), 3ds Max (https://www.autodesk.com/products/3ds-max/overview), and Cinema 4D (https://www.maxon.net/en/cinema-4d); and those in between, such as Unity (https://unity.com), SketchUp (https://www.sketchup.com), and Unreal Engine (https://www.unrealengine.com/). Users can choose their preferred platform for further editing or rendering based on their needs. In the context of MRN system development and testing, OBJ and STL files are natively supported by mainstream MR HMDs, allowing direct importation and visualization without further operations. Additionally, most commercial 3D printer software platforms support the STL format, making the STL files provided in this dataset directly usable for printing.
Code availability
The creation of the dataset was entirely based on the open-source software platform, 3D Slicer, without the use of custom code.
References
Hawryluk, G. W. et al. Intracranial pressure: current perspectives on physiology and monitoring. Intensive care medicine 48, 1471–1481, https://doi.org/10.1007/s00134-022-06786-y (2022).
Buckner, J. C. et al. Central nervous system tumors. Mayo Clinic Proceedings 82, 1271–1286, https://doi.org/10.4065/82.10.1271 (2007).
Wrensch, M., Minn, Y., Chew, T., Bondy, M. & Berger, M. S. Epidemiology of primary brain tumors: Current concepts and review of the literature. Neuro-Oncology 4, 278–299, https://doi.org/10.1093/neuonc/4.4.278 (2002).
DeAngelis, L. M. Brain tumors. New England Journal of Medicine 344, 114–123, https://doi.org/10.1056/NEJM200101113440207 (2001).
Aziz, P. A. et al. Supratotal resection: An emerging concept of glioblastoma multiforme surgery–systematic review and meta-analysis. World Neurosurgery 179, e46–e55, https://doi.org/10.1016/j.wneu.2023.07.020 (2023).
Lara-Velazquez, M. et al. Advances in brain tumor surgery for glioblastoma in adults. Brain Sciences 7, https://doi.org/10.3390/brainsci7120166 (2017).
Watanabe, Y. et al. Evaluation of errors influencing accuracy in image-guided neurosurgery. Radiological physics and technology 2, 120–125, https://doi.org/10.1007/s12194-009-0053-6 (2009).
Bopp, M. H. A. et al. Augmented reality to compensate for navigation inaccuracies. Sensors 22, https://doi.org/10.3390/s22249591 (2022).
Carl, B. et al. Reliable navigation registration in cranial and spine surgery based on intraoperative computed tomography. Neurosurgical Focus FOC 47, E11, https://doi.org/10.3171/2019.8.FOCUS19621 (2019).
Incekara, F., Smits, M., Dirven, C. & Vincent, A. Clinical feasibility of a wearable mixed-reality device in neurosurgery. World neurosurgery 118, e422–e427, https://doi.org/10.1016/j.wneu.2018.06.208 (2018).
van Doormaal, T. P., van Doormaal, J. A. & Mensink, T. Clinical accuracy of holographic navigation using point-based registration on augmented-reality glasses. Operative Neurosurgery 17, 588, https://doi.org/10.1093/ons/opz094 (2019).
Li, Y. et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. Journal of Neurosurgery JNS 131, 1599–1606, https://doi.org/10.3171/2018.4.JNS18124 (2019).
Li, Y., Zhang, W. & Wang, N. Wearable mixed-reality holographic guidance for catheter-based basal ganglia hemorrhage treatment. Interdisciplinary Neurosurgery 34, 101821, https://doi.org/10.1016/j.inat.2023.101821 (2023).
Qi, Z. et al. Holographic mixed-reality neuronavigation with a head-mounted device: technical feasibility and clinical application. Neurosurgical Focus 51, E22, https://doi.org/10.3171/2021.5.FOCUS21175 (2021).
Léger, É., Drouin, S., Collins, D. L., Popa, T. & Kersten-Oertel, M. Quantifying attention shifts in augmented reality image-guided neurosurgery. Healthcare technology letters 4, 188–192, https://doi.org/10.1049/htl.2017.0062 (2017).
Bopp, M. H. et al. Use of neuronavigation and augmented reality in transsphenoidal pituitary adenoma surgery. Journal of Clinical Medicine 11, 5590, https://doi.org/10.3390/jcm11195590 (2022).
Abe, Y. et al. A novel 3d guidance system using augmented reality for percutaneous vertebroplasty. Journal of Neurosurgery: Spine 19, 492–501, https://doi.org/10.3171/2013.7.SPINE12917 (2013).
Qi, Z. et al. [implement of mixed reality navigation based on multimodal imaging in the resection of intracranial eloquent lesions]. Zhonghua wai ke za zhi [Chinese Journal of Surgery] 60, 1100–1107, https://doi.org/10.3760/cma.j.cn112139-20220531-00248 (2022).
Hayasaka, T. et al. Comparison of accuracy between augmented reality/mixed reality techniques and conventional techniques for epidural anesthesia using a practice phantom model kit. BMC anesthesiology 23, 171, https://doi.org/10.1186/s12871-023-02133-w (2023).
McJunkin, J. L. et al. Development of a mixed reality platform for lateral skull base anatomy. Otology & Neurotology: Official Publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology 39, e1137, https://doi.org/10.1097/MAO.0000000000001995 (2018).
Qi, Z. et al. The feasibility and accuracy of holographic navigation with laser crosshair simulator registration on a mixed-reality display. Sensors 24, https://doi.org/10.3390/s24030896 (2024).
Gharios, M. et al. The use of hybrid operating rooms in neurosurgery, advantages, disadvantages, and future perspectives: a systematic review. Acta Neurochirurgica 165, 2343–2358, https://doi.org/10.1007/s00701-023-05756-7 (2023).
Fick, T., van Doormaal, J. A., Hoving, E. W., Willems, P. W. & van Doormaal, T. P. Current accuracy of augmented reality neuronavigation systems: systematic review and meta-analysis. World neurosurgery 146, 179–188, https://doi.org/10.1016/j.wneu.2020.11.029 (2021).
Fick, T. et al. Comparing the influence of mixed reality, a 3d viewer, and mri on the spatial understanding of brain tumours. Frontiers in Virtual Reality 4, 1214520, https://doi.org/10.3389/frvir.2023.1214520 (2023).
Colombo, E., Bektas, D., Regli, L. & Van Doormaal, T. Case report: Impact of mixed reality on anatomical understanding and surgical planning in a complex fourth ventricular tumor extending to the lamina quadrigemina. Frontiers in Surgery 10, https://doi.org/10.3389/fsurg.2023.1227473 (2023).
Jean, W. C., Piper, K., Felbaum, D. R. & Saez-Alegre, M. The inaugural “century” of mixed reality in cranial surgery: Virtual reality rehearsal/augmented reality guidance and its learning curve in the first 100-case, single-surgeon series. Operative Neurosurgery 26, 28–37, https://doi.org/10.1227/ons.0000000000000908 (2024).
Zhou, Z. et al. Validation of a surgical navigation system for hypertensive intracerebral hemorrhage based on mixed reality using an automatic registration method. Virtual Reality 1–13, https://doi.org/10.1007/s10055-023-00790-3 (2023).
Akulauskas, M., Butkus, K., Rutkūnas, V., Blažauskas, T. & Jegelevičius, D. Implementation of augmented reality in dental surgery using hololens 2: An in vitro study and accuracy assessment. Applied Sciences 13, 8315, https://doi.org/10.3390/app13148315 (2023).
Schneider, M. et al. Augmented reality–assisted ventriculostomy. Neurosurgical focus 50, E16, https://doi.org/10.3171/2020.10.FOCUS20779 (2021).
Liebmann, F. et al. Pedicle screw navigation using surface digitization on the microsoft hololens. International journal of computer assisted radiology and surgery 14, 1157–1165, https://doi.org/10.1007/s11548-019-01973-7 (2019).
Pepe, A. et al. A marker-less registration approach for mixed reality–aided maxillofacial surgery: a pilot evaluation. Journal of digital imaging 32, 1008–1018, https://doi.org/10.1007/s10278-019-00272-6 (2019).
Gibby, J. T., Swenson, S. A., Cvetko, S., Rao, R. & Javan, R. Head-mounted display augmented reality to guide pedicle screw placement utilizing computed tomography. International journal of computer assisted radiology and surgery 14, 525–535, https://doi.org/10.1007/s11548-018-1814-7 (2019).
Li, C. et al. Augmented reality-guided positioning system for radiotherapy patients. Journal of Applied Clinical Medical Physics 23, e13516, https://doi.org/10.1002/acm2.13516 (2022).
Marrone, S. et al. Improving mixed-reality neuronavigation with blue-green light: A comparative multimodal laboratory study. Neurosurgical Focus 56, E7, https://doi.org/10.3171/2023.10.FOCUS23598 (2024).
Qi, Z. et al. A novel registration method for a mixed reality navigation system based on a laser crosshair simulator: A technical note. Bioengineering 10, https://doi.org/10.3390/bioengineering10111290 (2023).
Chiacchiaretta, P. et al. A dedicated tool for presurgical mapping of brain tumors and mixed-reality navigation during neurosurgery. Journal of Digital Imaging 35, 704–713, https://doi.org/10.1007/s10278-022-00609-8 (2022).
Gsaxner, C., Wallner, J., Chen, X., Zemann, W. & Egger, J. Facial model collection for medical augmented reality in oncologic cranio-maxillofacial surgery. Scientific data 6, 310, https://doi.org/10.1038/s41597-019-0327-8 (2019).
Fedorov, A. et al. 3d slicer as an image computing platform for the quantitative imaging network. Magnetic resonance imaging 30, 1323–1341, https://doi.org/10.1016/j.mri.2012.05.001 (2012).
Klein, S., Staring, M., Murphy, K., Viergever, M. A. & Pluim, J. P. W. elastix: A toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging 29, 196–205, https://doi.org/10.1109/TMI.2009.2035616 (2010).
Pinter, C., Lasso, A. & Fichtinger, G. Polymorph segmentation representation for medical image computing. Computer methods and programs in biomedicine 171, 19–26, https://doi.org/10.1016/j.cmpb.2019.02.011 (2019).
Zhang, F. et al. Slicerdmri: diffusion mri and tractography research software for brain cancer surgery planning and visualization. JCO clinical cancer informatics 4, 299–309, https://doi.org/10.1200/CCI.19.00141 (2020).
Norton, I. et al. SlicerDMRI: Open Source Diffusion MRI Software for Brain Cancer Research. Cancer Research 77, e101–e103, https://doi.org/10.1158/0008-5472.CAN-17-0332 (2017).
Qi, Z. et al. Head model collection for mixed reality navigation in neurosurgical intervention for intracranial lesions. figshare https://doi.org/10.6084/m9.figshare.24550732.v6 (2024).
Acknowledgements
Open Access funding provided by the Open Access Publishing Fund of Philipps-Universität Marburg. We would like to sincerely thank Hui Zhang for her invaluable assistance during this research.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Contributions
Data collection, Z.Q., H.J., X.C., X.X., Q.W., Z.G., R.X., S.Z., M.L., J.W., X.D., and J.Z.; data screening and selection, Z.Q., X.C., and J.Z.; data processing, Z.Q., X.C., X.X., Q.W., Z.G., R.X., S.Z., X.D., and J.Z.; methodology, Z.Q., M.H.A.B., N.C., X.C., X.X, Q.W., and J.Z.; validation, Z.Q., M.H.A.B., and J.Z.; formal analysis, Z.Q., and J.Z.; investigation, Z.Q.; resources, X.X.; data curation, Z.Q., and J.Z.; writing—original draft preparation, Z.Q.; writing—review and editing, M.H.A.B.; visualization, Z.Q.; supervision, M.H.A.B, C.N., X.C., and J.Z.; project administration, M.H.A.B., C.N., X.C., and J.Z.; funding acquisition, Z.Q., X.C., and J.Z. All authors have read and agreed to the published version of the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no conflict of interest. The funders had no role in the design of the study, the collection, analysis, or interpretation of data, the writing of the manuscript, or the decision to publish the results.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Qi, Z., Jin, H., Xu, X. et al. Head model dataset for mixed reality navigation in neurosurgical interventions for intracranial lesions. Sci Data 11, 538 (2024). https://doi.org/10.1038/s41597-024-03385-y