Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning

Abstract

Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
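The mapping the abstract describes — a single 2D projection radiograph in, a 3D volume out — can be sketched as a 2D encoder whose feature channels are reinterpreted as depth slices and then upsampled by a 3D decoder. This is a minimal illustrative sketch only, not the published PatRecon architecture: the layer counts, channel widths, and the 2D-to-3D transform here are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class SingleViewRecon(nn.Module):
    """Hypothetical sketch of single-view 2D-to-3D reconstruction."""

    def __init__(self):
        super().__init__()
        # 2D encoder: compress the single projection view into feature maps
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # 3D decoder: transposed convolutions upsample the reshaped
        # features to a volumetric output
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(8, 4, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(4, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        f = self.encoder(x)               # (B, 128, H/8, W/8)
        b, c, h, w = f.shape
        f3d = f.view(b, 16, 8, h, w)      # reinterpret channels as 16 x 8 depth stack
        return self.decoder(f3d)          # (B, 1, 64, H, W)

model = SingleViewRecon()
volume = model(torch.randn(1, 1, 128, 128))  # one single-channel projection view
print(volume.shape)                          # torch.Size([1, 1, 64, 128, 128])
```

Trained against ground-truth CT volumes of the same patient (the paper reports a voxel-wise reconstruction loss), such a network learns a patient-specific prior that a single view alone cannot supply.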


Fig. 1: 3D image reconstruction with ultra-sparse projection-view data.
Fig. 2: Architecture of the deep-learning network.
Fig. 3: Training-loss and validation-loss curves for the abdominal CT and lung CT cases.
Fig. 4: Examples from the abdominal CT and lung CT cases.
Fig. 5: Examples from the head-and-neck CT case.
Fig. 6: Analysis of feature maps.

Data availability

The authors declare that the main data supporting the results in this study are available within the paper and its Supplementary Information. The raw datasets from Stanford Hospital are protected for reasons of patient privacy, but can be made available upon request, subject to approval through an Institutional Review Board procedure at Stanford.

Code availability

The source code of the deep-learning algorithm is available for research use at https://github.com/liyues/PatRecon.


Acknowledgements

This research was partially supported by the National Institutes of Health (R01CA176553 and R01EB016777). The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

Author information

L.X. proposed the original notion of single-view reconstruction for tomographic imaging and supervised the research. L.S. designed and implemented the algorithm. W.Z. designed the experiments and implemented the data-generation process. L.S. and W.Z. carried out the experimental work. L.X., L.S. and W.Z. wrote the manuscript. All authors reviewed the manuscript.

Correspondence to Lei Xing.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Methods, Figures and Tables.

Reporting Summary


About this article


Cite this article

Shen, L., Zhao, W. & Xing, L. Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nat Biomed Eng 3, 880–888 (2019). https://doi.org/10.1038/s41551-019-0466-4
