Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning

Abstract

Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
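The abstract describes the core mapping: a network takes a single 2D projection radiograph of a patient and produces the corresponding 3D CT volume. The authors' released implementation is linked in the Code availability section below; what follows is only a minimal PyTorch sketch of such a 2D-to-3D encoder-decoder, in which a 2D convolutional encoder compresses the projection, the learned feature channels are re-interpreted as a depth axis, and a 3D transposed-convolution decoder expands them into a volume. All layer counts, sizes and names are illustrative assumptions, not the published architecture.

# A minimal sketch of a single-view 2D-to-3D reconstruction network.
# Not the authors' released implementation (see https://github.com/liyues/PatRecon);
# all layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class SingleViewRecon(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D encoder: a 128x128 projection radiograph -> 256 feature maps of size 8x8
        self.encoder2d = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
        )
        # 3D decoder: reshaped 64x4x8x8 features -> a 1x64x128x128 volume
        self.decoder3d = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.BatchNorm3d(8), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                              # x: (batch, 1, 128, 128) projection
        feat2d = self.encoder2d(x)                     # (batch, 256, 8, 8)
        feat3d = feat2d.view(x.size(0), 64, 4, 8, 8)   # reinterpret channels as a depth axis
        return self.decoder3d(feat3d)                  # (batch, 1, 64, 128, 128) volume

if __name__ == "__main__":
    model = SingleViewRecon()
    projection = torch.randn(1, 1, 128, 128)           # dummy single-view radiograph
    volume = model(projection)
    print(volume.shape)                                # torch.Size([1, 1, 64, 128, 128])

In the published work the network is trained on patient-specific projection-CT pairs; the sketch above omits the training loop and loss terms.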

Fig. 1: 3D image reconstruction with ultra-sparse projection-view data.
Fig. 2: Architecture of the deep-learning network.
Fig. 3: Training-loss and validation-loss curves for the abdominal CT and lung CT cases.
Fig. 4: Examples from the abdominal CT and lung CT cases.
Fig. 5: Examples from the head-and-neck CT case.
Fig. 6: Analysis of feature maps.

Data availability

The authors declare that the main data supporting the results in this study are available within the paper and its Supplementary Information. The raw datasets from Stanford Hospital are protected for reasons of patient privacy, but can be made available upon request, provided that approval is obtained through an Institutional Review Board procedure at Stanford.

Code availability

The source code of the deep-learning algorithm is available for research use at https://github.com/liyues/PatRecon.


Acknowledgements

This research was partially supported by the National Institutes of Health (R01CA176553 and R01EB016777). The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

Author information

Authors and Affiliations

Authors

Contributions

L.X. proposed the original notion of single-view reconstruction for tomographic imaging and supervised the research. L.S. designed and implemented the algorithm. W.Z. designed the experiments and implemented the data-generation process. L.S. and W.Z. carried out the experimental work. L.X., L.S. and W.Z. wrote the manuscript. All authors reviewed the manuscript.

Corresponding author

Correspondence to Lei Xing.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Methods, Figures and Tables.

Reporting Summary

About this article

Cite this article

Shen, L., Zhao, W. & Xing, L. Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nat Biomed Eng 3, 880–888 (2019). https://doi.org/10.1038/s41551-019-0466-4
