We present a 3D deep learning framework that generates a complete cranial model from a defective one. The Boolean subtraction between these two models yields the geometry of the implant required for surgical reconstruction. Little or no post-processing is needed to eliminate noise in the implant model generated by the proposed approach. The framework can be used to meet the repair needs of cranial imperfections caused by trauma, congenital defects, plastic surgery, or tumor resection. Traditional implant design methods for skull reconstruction rely on the mirroring operation. However, these approaches are severely limited when the defect crosses the plane of symmetry or the patient's skull is asymmetrical. The proposed deep learning framework is based on an enhanced three-dimensional autoencoder. Each training sample for the framework is a pair consisting of a cranial model converted from CT images and a corresponding model with simulated defects. Our approach learns the spatial distribution of the upper part of normal cranial bones and uses flawed cranial data to predict its complete geometry. Empirical research on simulated defects and actual clinical applications shows that our framework can meet most of the requirements of cranioplasty.
Cranioplasty1,2 is a surgical procedure in which cranial implants, or prostheses, are used to repair skull defects caused by trauma, congenital defects, plastic surgery, or tumor resection. The cranial implants must have an appropriate convex shape and fit accurately to the boundary of the defect. Their design usually involves time-consuming human–computer interaction using specific software and requires expertise in the medical field. For instance, Chen et al.3 utilized the geometry information of the mirrored model as the base to generate the implant model.
Because cranial defects may cross the plane of symmetry, and human cranial bones are usually asymmetrical, the mirroring operation is often impractical for generating the implant geometry. There is therefore a great need for the automatic design of cranial implants.
In recent years, there has been substantial progress in image inpainting technology based on deep learning4. Image inpainting is the process of completing or repairing missing areas in a two-dimensional image. For example, Yan et al.5 introduced a shift-connection layer to the U-Net architecture6 for image completion, which is fast and produces promising fine details. Liao et al.7 proposed a deep convolutional neural network scheme that explicitly separates content and style and generates finely detailed, perceptually realistic inpainting results for structural and natural images. In addition, Pathak et al.8 combined the autoencoder network model9,10,11 with the Generative Adversarial Network12 (GAN) to repair images and found that, in addition to reconstruction loss, an adversarial loss is beneficial in producing clear results. The schemes of Iizuka et al.13, Wang et al.14 and Jiang et al.15 are all based on the combination of the autoencoder model9,10,11 and GAN12, in which a global context identifier and a local context identifier are used.
Compared with 2D images, 3D geometric models require more computing power to process16,17. In the inpainting of 3D models, the neural network architecture of Han et al.18 is divided into two parts, where the "Global Structure Inference" part is responsible for the restoration of 32 × 32 × 32 low-resolution data, and the "Local Structure Refinement" part is responsible for refinement. Wang et al.19 also used a GAN12 to train an encoder–decoder network9,10,11 to repair defects in 3D images with a resolution of 32 × 32 × 32 voxels. Dai et al.20 used a 3D encoder-predictor network to repair defects in 3D images with a volumetric resolution of 32 × 32 × 32; the results are then replaced by higher-resolution data through direct search. In addition, Wang et al.21 proposed a scheme that contains a local GAN12 and a global GAN12 to repair 3D mesh models at a resolution of 80 × 80 × 80 voxels. The performance demonstrations of these contributions, however, are all based on simple geometric shapes such as airplanes, desks, and chairs.
Recently, Morais et al.22 proposed a deep learning approach, called the Volumetric Convolutional Denoising Autoencoder, to perform 3D shape completion on defective skull models. This approach was evaluated only on a full-skull reconstruction task, and no verification of the generated implant geometry was provided. The deep learning approach of Li et al.23 is carried out in two steps using two neural networks. First, a network is trained to reconstruct a low-resolution version of the skull to locate the defective area. Second, another network is trained to make detailed implant predictions.
In addition, Shi and Chen24 proposed a convolutional neural network of the autoencoder9,10,11 structure with an auxiliary path to predict the 3D implant from inpainted 2D slices along different axes. Matzkin et al.25 used a 3D version of the standard U-Net architecture6 to compare two approaches: direct estimation of the implant, and a reconstruct-and-subtract strategy, in which the complete skull is first reconstructed and the defective model is then subtracted from it to generate the implant. Before training, all images were registered to an atlas space constructed by averaging several healthy head CT images. They concluded that the latter approach tends to generate noise in the implant models. In subsequent work, Matzkin et al.26 concatenated an approximate shape prior, also constructed by averaging several healthy head CT images, with the input model to provide supplementary context information to the network. This modification is reported to improve the robustness of the model for out-of-distribution cases.
Nevertheless, these skull repair techniques are limited in achievable resolution, and the defects considered are all regular shapes produced by spherical or cubic masks. These shortcomings reduce their applicability in clinical practice.
The purpose of this research is to develop practical 3D inpainting techniques to automatically generate the geometry of the cranial implant, thereby eliminating subjectivity.
As shown in Fig. 1, the proposed cranioplasty procedure begins by reconstructing a defective 3D skull model from a CT image dataset. A complete cranial model is then automatically created by the proposed deep learning system. To reduce the computational burden, this study lowers the resolution of the 3D model and only generates the upper part of the cranium at a volumetric resolution of 112 × 112 × 40.
After that, an implant model is obtained by subtracting the defective model from the completed model. Subsequently, a template is made using 3D printing technology. The molding process is then applied to create the implant required for the repair surgery, which is made of bone cement in our surgical implementation.
For clinical practice, we resample and smooth the completed implant model to a volumetric resolution of 448 × 448 × 40. Subtracting the original defective model from this model again removes residual voxels and yields an implant model sufficiently smooth for 3D printing.
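The resample-smooth-subtract step can be sketched as follows. The zoom factor, Gaussian smoothing, and 0.5 re-binarization threshold are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def refine_implant(completed, defective, factor=(4, 4, 1), sigma=1.0):
    """Upsample the completed implant model (e.g. 112x112x40 -> 448x448x40),
    smooth it, and subtract the defective skull again to strip residual voxels.
    The zoom factor, sigma, and 0.5 threshold are assumptions for illustration."""
    hi = zoom(completed.astype(float), factor, order=1)        # trilinear resampling
    hi = gaussian_filter(hi, sigma) > 0.5                      # smooth, then re-binarize
    defect_hi = zoom(defective.astype(float), factor, order=0) > 0.5
    return hi & ~defect_hi                                     # remove overlapping voxels
```

Any voxel still shared with the defective skull after upsampling is removed by the final Boolean subtraction, which mirrors the "subtract the model again" step described above.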
In the casting and molding process used to create the implant, silicone rubber was used to make the mold and capture geometric details. We have used bone cement to make hand-crafted skull patches for more than 16 years and found the material satisfactory. Other biocompatible materials1 can also be molded to match the shape of the defect in the same way.
The main contributions of this manuscript can be summarized as:
We propose an effective deep-learning-based 3D inpainting solution to meet the requirements of cranioplasty.
Little or no post-processing is required to eliminate noise in the implant geometry model generated by the proposed approach.
The proposed system is computationally efficient and only requires a desktop PC equipped with a GPU accelerated graphics card to perform calculations.
This section uses simulation cases to investigate the quantitative performance of the proposed framework.
Figure 2 demonstrates four automatic cranial implant design cases. The upper parts of the defective skulls are displayed in the top view and isometric view in the first and third rows, respectively. The second and fourth rows present the complete skulls generated by the proposed system. The ideal (ground-truth) implants and the created implants are shown in the fifth and sixth rows, respectively.
In the numerical evaluation, we created regular and irregular holes in intact 3D cranium models. The difference between cylindrical and ellipsoidal defects lies in the boundary of the defects: the boundary of the former is parallel to the axial direction, while that of the latter is curved, as shown in the flawed skulls of Fig. 2. Note that all defects pass through the central plane in these cases and therefore the implants cannot be created based on the traditional symmetry assumption.
The implants in the sixth row are obtained by subtracting the original flawed skull models from the generated complete models. If a generated implant is denoted as P* and its corresponding ideal one is expressed as P, the volumetric error rate, denoted as r, is defined as

r = ‖P* − P‖₁ / ‖P‖₁,

where ‖·‖₁ denotes the 1-norm. The last row of Fig. 2 quantitatively summarizes the repair performance of the proposed scheme. The proposed deep learning system achieves a volumetric error rate of less than 8.2% in this case study.
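On binary voxel grids, the volumetric error rate reduces to a few lines of numpy; this minimal sketch assumes the 1-norm of the voxel-wise difference is normalized by the volume of the ideal implant:

```python
import numpy as np

def volumetric_error_rate(generated, ideal):
    """r = ||P* - P||_1 / ||P||_1 for binary voxel grids, i.e. the number of
    mismatched voxels relative to the volume of the ideal implant."""
    p_star = np.asarray(generated, dtype=float)
    p = np.asarray(ideal, dtype=float)
    return np.abs(p_star - p).sum() / np.abs(p).sum()

# Toy example: an ideal implant of 8 voxels with one voxel missed by the network.
ideal = np.ones((2, 2, 2))
generated = ideal.copy()
generated[0, 0, 0] = 0
# volumetric_error_rate(generated, ideal) -> 1/8 = 0.125
```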
In addition, to understand the limitations of the repair ability of the proposed scheme, we created defects of various sizes and positions on the skull model for numerical study. According to the numerical investigation, detailed in the Supplementary Material, the system can produce satisfactory implants for defects up to 35% in volume.
The proposed deep learning system has been used in implant generation for clinical applications. This section describes one of these successful implementations.
A 12-year-old boy with a congenital craniofacial defect sought surgical treatment. Computed tomography showed that the longest crack along his sagittal suture measured 124 mm. As shown in Fig. 3, the proposed deep learning system generated an adequate 3D geometry of the implant required to repair the defect.
It is worth mentioning that although the system was trained on simplified cylindrical and elliptical defects, the geometry of the generated implant is satisfactory for actual defects with irregular boundaries.
Skull implant design usually requires time-consuming human–computer interaction and expertise in the medical field. The motivation for this work is the need to automate this process and improve the quality of medical care. In this study, we proposed a 3D deep learning network to automatically complete defective skull models.
Several state-of-the-art deep learning models have achieved great success in the field of computer vision. However, these 2D results cannot be directly extended to 3D problems. For example, the stable training of GANs is more challenging for 3D imaging tasks involving more spatial features.
The performance of the proposed neural network was investigated in both simulated and clinical cases to verify its applicability. According to the numerical study, the proposed deep learning system achieves a volumetric error rate of less than 8.2%. Furthermore, the system can produce satisfactory implants for defects up to 35% in volume. Surgical implementation also showed that the geometry of the resulting implant was satisfactory for actual defects with irregular boundaries.
The capability of the proposed network is made possible by its concise and effective architecture and training methods. The network integrates twelve 3D convolutional layers into a skip-connected autoencoder structure, which includes four dilated convolutional layers. We introduced neither dropout nor batch normalization into the network.
Effective and well-organized training data is also essential for efficient training on such a high-resolution 3D problem. The network inputs are defective 3D models, and the target outputs are the corresponding intact models. Training is efficient because it is based on supervised learning rather than indirect information, such as the feedback signal provided by the discriminator in GAN12 schemes. The proposed network only requires a desktop PC with a GPU-accelerated graphics card, which gives the system vast potential in many clinical applications.
There are, however, several limitations to the proposed approach. First, the 7154 skull models used to train the deep learning system were created from 73 skull models through data augmentation. Although the case studies strongly support this approach, further clinical trials are needed to evaluate its feasibility for diverse patients. Second, due to limited computing resources, the repairable area is restricted to the upper part of the cranium at a volumetric resolution of 112 × 112 × 40. For clinical practice, several post-processing procedures, including resampling and smoothing, are required to produce a smooth implant model for 3D printing.
Given these limitations, this study is a preliminary work that we believe can motivate future research. Future work can focus on increasing the number of skull models, combined with appropriate data augmentation and network architecture arrangements, to improve the training quality of the system and increase the volumetric resolution to 448 × 448 × 160. Further studies on skull defects of different sizes and positions, such as the cheekbone and temporal bone regions, could also reduce the limitations on the system's repair capabilities.
The dataset used in this study is DICOM (Digital Imaging and Communications in Medicine) metadata collected in the Department of Neurosurgery, Chang Gung Memorial Hospital, Taoyuan, Taiwan. The study was authorized by the Institutional Review Board (IRB No. 201900991B0; Clinical Trial/Research Consent No. 201801697B0C601), and all protected health information was removed from the DICOM metadata.
Each computed tomography (CT) image has a resolution of 512 × 512 pixels, but the interval between images can be 0.3 mm, 0.435 mm, 0.5 mm, 0.8 mm, 1.0 mm, 1.25 mm, or 3.0 mm. CT data contain bones and other tissues, and scanning conditions differ between patients. We set the intensity threshold to the interval [1200, 1817] according to the Hounsfield unit27 to preserve the bone tissue in the data.
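The bone-extraction step amounts to a simple intensity window. A numpy sketch, using the window limits stated above (the function name and toy values are illustrative):

```python
import numpy as np

def bone_mask(ct_volume, lo=1200, hi=1817):
    """Keep voxels whose intensity falls inside [lo, hi] (the bone window
    used in this study); soft tissue and air are zeroed out."""
    v = np.asarray(ct_volume)
    return ((v >= lo) & (v <= hi)).astype(np.uint8)

slice_ = np.array([[0, 1300], [1500, 2500]])
# bone_mask(slice_) -> [[0, 1], [1, 0]]
```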
Also, the number of parameters in the network is proportional to the complexity of the inpainting task; therefore, enough examples, at least thousands of data sets, are needed to train the network. Unfortunately, after sifting through 327 sets of collected data, only 73 sets were usable, because many were incomplete or contained bone screws. We therefore rotate, tilt, and vertically translate the 3D medical images, resulting in 73 × 7 × 7 × 2 = 7154 sets of augmented data28. The rotation and tilting are performed at 2-degree intervals, each with 7 alternatives, and the vertical translation at 2-voxel intervals with 2 alternatives.
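The augmentation scheme can be sketched with scipy; the choice of rotation axes and nearest-neighbor interpolation are our assumptions, not a detail stated in the paper:

```python
import numpy as np
from scipy.ndimage import rotate

def augment(volume, rot_deg, tilt_deg, shift_vox):
    """Rotate about the vertical axis, tilt about a horizontal axis, and
    translate vertically (axis choices are illustrative assumptions).
    order=0 keeps the volume binary."""
    out = rotate(volume, rot_deg, axes=(0, 1), reshape=False, order=0)
    out = rotate(out, tilt_deg, axes=(1, 2), reshape=False, order=0)
    return np.roll(out, shift_vox, axis=2)

# 7 rotation angles x 7 tilt angles x 2 vertical shifts per skull:
# 73 skulls * 7 * 7 * 2 = 7154 augmented samples
```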
Down-sampling is usually required for computational efficiency. The original resolution of all collected DICOM metadata on the XY plane is 512 × 512 pixels. After weighing modeling quality against computational requirements, we determined that a plane resolution of at least 112 × 112 should be maintained. To further alleviate the computational burden, we cropped only the upper part of the skull models, resulting in normalized datasets with a volumetric resolution of 112 × 112 × 40.
The proposed 3D deep learning network
Although 2D image completion technology has made significant progress recently, 3D shape processing involves higher dimensions and remains very challenging. Considering that human skulls have a similar topology, we train the system through supervised learning.
In each training pair, the input to the network is a flawed 3D cranial model with a volumetric resolution of 112 × 112 × 40, and the target output is the corresponding intact model.
The system is basically a high-dimensional autoencoder9,10,11 augmented with skip-connections. Because of its shape, this architecture is also called U-Net6 or V-Net29. The autoencoder architecture contains two parts, the encoder and the decoder: the encoder compresses the information into a lower-dimensional representation, denoted the latent space, and the decoder restores it. The autoencoder has been a mature backbone of many generative tasks5,6,7,8,9,10,11,12.
The encoder part of the proposed scheme contains three 3D convolution layers, each equipped with the Rectified Linear Unit (ReLU)30 activation and followed by a max-pooling layer. This part progressively reduces the data size to the bottleneck, also known as the latent space.
Between the encoder and decoder parts, we use four 3D dilated convolutional layers31,32 instead of fully connected layers. Dilated convolution introduces spacing, specified by a dilation rate, between the input values sampled by the kernel of the convolutional layer. For example, in 3D dilated convolution, a 3 × 3 × 3 kernel with a dilation rate of 2 has the same field of view as a 5 × 5 × 5 kernel while using only 27 parameters. Each dilated layer is also equipped with the ReLU activation function.
The use of dilated convolution provides a wide field of view while avoiding multiple convolutions or larger kernels. In other words, the dilation mechanism supports expansion of the receptive field without increasing the number of kernel parameters. These 3D dilated convolutional layers are important for collecting more structural information surrounding the missing parts to generate the patch geometry.
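The receptive-field arithmetic behind this is easy to verify: a kernel of size k with dilation rate d spans (k − 1)·d + 1 voxels per axis while keeping k³ weights.

```python
def dilated_extent(kernel_size, dilation_rate):
    """Spatial extent covered by a dilated kernel along one axis."""
    return (kernel_size - 1) * dilation_rate + 1

# A 3x3x3 kernel with dilation rate 2 covers the same 5x5x5 region as a
# dense 5x5x5 kernel, yet keeps only 3**3 = 27 weights instead of 5**3 = 125.
assert dilated_extent(3, 2) == 5
```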
The decoder part contains four 3D convolution layers, each equipped with the ReLU activation function except the last, interleaved with up-sampling layers that expand the output to higher resolutions. The last layer is equipped with a sigmoid function to normalize the output to the range [0, 1].
There are 8 skip-connections33 in the network between the corresponding encoder and decoder layers, and between the neighboring mid-layers. The skip-connections help to enhance the prediction ability of the decoding process and prevent the gradient vanishing in the deep neural network. This structure is similar to the scheme described by Devalla et al.34, which is a dilated-residual U-Net for 2D medical image segmentation.
In summary, the deep learning system consists of twelve 3D convolutional layers, including four 3D dilated convolutional layers, plus three max-pooling layers, three up-sampling layers, and eight skip-connections. Table 1 and Fig. 4 give an overview of the architecture, which has a total of 8269 trainable parameters. This concise neural network model was achieved by reducing the number of kernels in the convolutional layers to the performance limit.
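Assuming stride-2 pooling with matching up-sampling (our assumption; the exact layer settings are given in Table 1), the spatial resolution through the network can be traced as follows:

```python
def encoder_shapes(res=(112, 112, 40), pools=3):
    """Trace the spatial resolution through the three max-pooling stages
    (stride 2 is assumed for illustration; see Table 1 for exact settings)."""
    shapes = [res]
    for _ in range(pools):
        res = tuple(s // 2 for s in res)
        shapes.append(res)
    return shapes

# (112,112,40) -> (56,56,20) -> (28,28,10) -> (14,14,5) at the bottleneck,
# mirrored back to full resolution by the three up-sampling layers.
```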
We can visualize the internal data corresponding to a specific input model to explore the computational behavior of the deep learning system. In 3D convolutions, kernels can move in three directions, so the feature maps obtained are also 3D. Figure 5 shows the 3D feature maps generated before and after the four dilated convolutional layers. Note that there are four dilated convolutional layers in the system, each equipped with four kernels. Figure 5 shows that, as the data is processed through the layers, the defective region shrinks.
The input required to create the 3D feature maps of Fig. 5 are described in the Supplementary Material. Diagrams of kernels and more feature maps are also presented in it for further investigation.
The data size of each 112 × 112 × 40 skull model is 2 MB, and the 7150 training sets amount to 14.35 GB. To provide defective skull models for training, we randomly apply six types of 3D masks with equal probability: symmetrical ellipsoid, ellipsoid, mixed ellipsoid, cylinder, elliptical cylinder, and mixed elliptical cylinder, as shown in Fig. 6.
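A simulated defect of the kinds listed above can be carved with a simple parametric mask. The sketch below builds an ellipsoidal mask; the center and radii are illustrative values, not the paper's sampling distribution:

```python
import numpy as np

def ellipsoid_mask(shape, center, radii):
    """Binary ellipsoidal mask: True where sum(((x_i - c_i)/r_i)^2) <= 1."""
    idx = np.indices(shape, dtype=float)
    d = sum(((idx[i] - center[i]) / radii[i]) ** 2 for i in range(len(shape)))
    return d <= 1.0

skull_res = (112, 112, 40)
defect = ellipsoid_mask(skull_res, center=(56, 56, 35), radii=(20, 15, 10))
# defective_skull = intact_skull & ~defect carves the simulated hole
```

Cylindrical masks follow the same pattern with the vertical axis dropped from the distance term, so the defect boundary stays parallel to the axial direction.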
For training, a batch size of 10 models was applied; we used Adadelta35 as the optimizer and binary cross-entropy as the cost function. Adadelta35 is an extension of Adagrad36 that dynamically adjusts the learning rate over time without parameter tuning. The main difference between the two optimizers is that Adagrad accumulates all previous squared gradients, while Adadelta accumulates only a fixed window of them. The same settings were applied to train and evaluate our model. The augmented data is randomly divided into a training set and a validation set with a validation split of 0.1; in other words, the validation set is 10% of the available data.
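The distinction between the two optimizers is visible in Adadelta's update rule, which replaces Adagrad's unbounded sum of squared gradients with exponentially decaying averages. A minimal numpy sketch following Zeiler35 (rho and eps are the paper's default hyperparameters, assumed here):

```python
import numpy as np

def adadelta_step(w, grad, eg2, edx2, rho=0.95, eps=1e-6):
    """One Adadelta update. eg2 and edx2 are decaying averages of squared
    gradients and squared updates; no global learning rate is required."""
    eg2 = rho * eg2 + (1 - rho) * grad ** 2
    dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * grad
    edx2 = rho * edx2 + (1 - rho) * dx ** 2
    return w + dx, eg2, edx2

# Minimizing f(w) = w^2: the iterate drifts toward 0 without a tuned step size.
w, eg2, edx2 = 1.0, 0.0, 0.0
for _ in range(200):
    w, eg2, edx2 = adadelta_step(w, 2.0 * w, eg2, edx2)
```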
A training session of 1200 epochs took 58.4 h. Once trained, a completion task takes only 8.6 s. Details of the computational settings and the training history of the proposed deep learning model are provided in the Supplementary Material.
Database The DICOM dataset used in this study was collected in the Department of Neurosurgery, Chang Gung Memorial Hospital, Taoyuan, Taiwan, from 2012 to 2021. The study was authorized by the Institutional Review Board (IRB No. 201900991B0; Clinical Trial/Research Consent No. 201801697B0C601), and all protected health information was removed from the DICOM metadata.
Software All the images shown in this article were created using MathWorks' MATLAB® 2020b, and the graphic in Fig. 4 was created using Microsoft Office 365.
Alkhaibary, A. et al. Cranioplasty: A comprehensive review of the history, materials, surgical aspects, and complications. World Neurosurg. 139, 445–452 (2020).
Sanan, A. & Haines, S. J. Repairing holes in the head: A history of cranioplasty. Neurosurgery 40, 588–603 (1997).
Chen, X., Xu, L., Li, X. & Egger, J. Computer-aided implant design for the restoration of cranial defects. Sci. Rep. 7, 4199 (2017).
Elharrouss, O., Almaadeed, N., Al-Maadeed, S. & Akbari, Y. Image inpainting: A review. Neural Process. Lett. 51, 2007–2028 (2020).
Yan, Z., Li, X., Li, M., Zuo W. & Shan, S. Shift-net: Image inpainting via deep feature rearrangement. In The European Conference on Computer Vision (ECCV). arXiv preprint arXiv:1801.09392 (2018).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, Vol. 9351, 234–241 (Springer, 2015).
Liao, L., Hu, R., Xiao, J. & Wang, Z. Artist-net: Decorating the inferred content with unified style for image inpainting. IEEE Access 7, 36921–36933 (2019).
Pathak, D. et al. Context encoders: Feature learning by inpainting. arXiv preprint arXiv:1604.07379 (2016).
Hinton, G. E. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006).
Vincent, P., Larochelle, H., Bengio, Y. & Manzagol, P. A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning (ICML '08), 1096–1103 (2008).
Baldi, P. Autoencoders, unsupervised learning, and deep architectures. In ICML Workshop on Unsupervised and Transfer Learning, Vol. 27, 37–49 (2012).
Goodfellow, I. J. et al. Generative adversarial networks. In International Conference on Neural Information Processing Systems (NIPS 2014), 2672–2680 (2014).
Iizuka, S., Edgar, S.-S. & Ishikawa, H. Globally and locally consistent image completion. ACM Trans. Graph. 36, 107. https://doi.org/10.1145/3072959.3073659 (2017).
Wang, Q., Fan, H., Zhu, L. & Tang, Y. Deeply supervised face completion with multi-context generative adversarial network. IEEE Signal Process. Lett. 26, 400–404 (2019).
Jiang, Y., Xu, J., Yang, B., Xu, J. & Zhu, J. Image inpainting based on generative adversarial networks. IEEE Access 8, 22884–22892 (2020).
Masouleh, M. K. & Sadeghian, S. Deep learning-based method for reconstructing three-dimensional building cadastre models from aerial images. J. Appl. Remote Sens. 13, 1. https://doi.org/10.1117/1.JRS.13.024508 (2019).
Ravi, N. et al. Accelerating 3D deep learning with PyTorch3D. arXiv preprint arXiv:2007.08501v1 [cs.CV] (2020).
Han, X. et al. High-resolution shape completion using deep neural networks for global structure and local geometry inference. In Proceedings of IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/ICCV.2017.19 (2017).
Wang, W. et al. Shape inpainting using 3D generative adversarial network and recurrent convolutional networks. In Proceedings of IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/ICCV.2017.252 (2017).
Dai, A. et al. Shape completion using 3D-encoder–predictor CNNs and shape synthesis. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR.2017.693 (2017).
Wang, X., Xu, D. & Gu, F. 3D model inpainting based on 3D deep convolutional generative adversarial network. IEEE Access. 8, 170355–170363 (2020).
Morais, A., Egger, J. & Alves, V. Automated computer-aided design of cranial implants using a deep volumetric convolutional denoising autoencoder. In World Conference on Information Systems and Technologies, 151–160 (2019).
Li, J. et al. A baseline approach for AutoImplant: The MICCAI 2020 Cranial Implant Design Challenge. arXiv preprint arXiv:2006.12449 (2020).
Shi, H. & Chen, X. Cranial implant design through multiaxial slice inpainting using deep learning. In AutoImplant 2020, LNCS, Vol. 12439, 28–36 (Springer, 2020).
Matzkin, F. et al. Self-supervised skull reconstruction in brain CT Images with decompressive craniectomy. In Medical Image Computing and Computer-Assisted Intervention, LNCS, Vol. 12262, 390–399 (Springer, 2020).
Matzkin, F., Newcombe, V., Glocker, B. & Ferrante, E. Cranial implant design via virtual craniectomy with shape priors. arXiv preprint arXiv:2009.13704 [eess.IV] (2020).
Seeram, E. Computed Tomography: Physical Principles, Clinical Applications, and Quality Control (Elsevier Health Sciences, 2015).
Shorten, C. & Khoshgoftaar, T. M. A survey on image data augmentation for deep learning. J. Big Data 6, 60 (2019).
Milletari, F., Navab, N. & Ahmadi, S.-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. arXiv preprint arXiv:1606.04797v1 [cs.CV] (2016).
Agarap, A. F. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375v2 [cs.NE] (2018).
Yu, F. & Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122v3 [cs.CV] (2016).
Gupta, A. & Rush, A. M. Dilated convolutions for modeling long-distance genomic dependencies. arXiv preprint arXiv:1710.01278 (2017).
Wu, D., Wang, Y., Xia, S.-T., Bailey, J. & Ma, X. Skip connections matter: On the transferability of adversarial examples generated with ResNets. arXiv preprint arXiv:2002.05990v1 [cs.LG] (2020).
Devalla, S. K. et al. DRUNET: A dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images. Biomed. Opt. Express 9, 3244–3265 (2018).
Zeiler, M. D. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701v1 [cs.LG] (2012).
Duchi, J., Hazan, E. & Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12, 2121–2159 (2011).
This work was supported by Grants from the Ministry of Science and Technology, Taiwan, under Grant Numbers MOST 108-2221-E-182-061, MOST 109-2221-E-182-025 and MOST 110-2221-E-182-034; and Chang Gung Memorial Hospital, Taiwan, under Grant Numbers CORPD2J0041, CORPD2J0042, CORPD2H0011 and CORPD2H0012.
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Wu, CT., Yang, YH. & Chang, YZ. Three-dimensional deep learning to automatically generate cranial implant geometry. Sci Rep 12, 2683 (2022). https://doi.org/10.1038/s41598-022-06606-9