Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation

Abstract

Deep learning has led to tremendous progress in the field of medical artificial intelligence. However, training deep-learning models usually requires large amounts of annotated data. Annotating large-scale datasets is prone to human biases and is often very laborious, especially for dense prediction tasks such as image segmentation. Inspired by semi-supervised algorithms that use both labelled and unlabelled data for training, we propose a dual-view framework based on adversarial learning for segmenting volumetric images. In doing so, we use critic networks to allow each view to learn from high-confidence predictions of the other view by measuring a notion of uncertainty. Furthermore, to jointly learn the dual views and the critics, we formulate the learning problem as a min–max problem. We analyse and contrast our proposed method against state-of-the-art baselines, both qualitatively and quantitatively, on four public datasets with multiple modalities (for example, computed tomography and magnetic resonance imaging) and demonstrate that the proposed semi-supervised method substantially outperforms the competing baselines while achieving competitive performance compared to fully supervised counterparts. Our empirical results suggest that an uncertainty-guided co-training framework can make two neural networks robust to data artefacts and able to generate plausible segmentation masks that can be helpful for semi-automated segmentation processes.
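
As a concrete illustration of the training recipe described above, the following is a minimal PyTorch-style sketch of one uncertainty-guided co-training update. It is a sketch under stated assumptions, not the paper's exact formulation: the segmentation networks `seg1`/`seg2`, the critics `critic1`/`critic2` (assumed to emit per-voxel plausibility scores in [0, 1] via a sigmoid), the confidence threshold `tau` and the particular loss terms are all illustrative; the authors' implementation is linked under Code availability.

```python
import torch
import torch.nn.functional as F

def co_training_step(seg1, seg2, critic1, critic2, opt_seg, opt_critic,
                     x_lab, y_lab, x_unlab, tau=0.7):
    """One illustrative min-max update over a labelled batch (x_lab, y_lab)
    and an unlabelled batch x_unlab."""
    # --- min step: update the two segmentation views ---
    sup = (F.cross_entropy(seg1(x_lab), y_lab)
           + F.cross_entropy(seg2(x_lab), y_lab))

    p1 = seg1(x_unlab).softmax(dim=1)
    p2 = seg2(x_unlab).softmax(dim=1)
    with torch.no_grad():  # critics are frozen while the views learn
        conf1 = critic1(torch.cat([x_unlab, p1], dim=1))  # plausibility of view 1
        conf2 = critic2(torch.cat([x_unlab, p2], dim=1))
    mask1 = (conf1 > tau).float()  # high-confidence regions of each view
    mask2 = (conf2 > tau).float()
    # Each view learns only where the *other* view is confident.
    unsup = ((mask2 * F.mse_loss(p1, p2.detach(), reduction='none')).mean()
             + (mask1 * F.mse_loss(p2, p1.detach(), reduction='none')).mean())
    opt_seg.zero_grad()
    (sup + unsup).backward()
    opt_seg.step()

    # --- max step: critics learn to score expert masks above predictions ---
    with torch.no_grad():
        q1 = seg1(x_lab).softmax(dim=1)
        q2 = seg2(x_lab).softmax(dim=1)
    y_onehot = F.one_hot(y_lab, q1.shape[1]).movedim(-1, 1).float()
    d_loss = 0.0
    for critic, q in ((critic1, q1), (critic2, q2)):
        real = critic(torch.cat([x_lab, y_onehot], dim=1))
        fake = critic(torch.cat([x_lab, q], dim=1))
        d_loss = d_loss + (F.binary_cross_entropy(real, torch.ones_like(real))
                           + F.binary_cross_entropy(fake, torch.zeros_like(fake)))
    opt_critic.zero_grad()
    d_loss.backward()
    opt_critic.step()
```

In the min step the views move toward each other's critic-approved predictions; in the max step the critics are updated GAN-style (ref. 36) to tell expert masks from model predictions, which is what gives them a usable notion of uncertainty.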

Fig. 1: Overview of the uncertainty-guided dual-view learning network architecture.
Fig. 2: Experimental results analysis with semi-supervised and fully supervised baselines on the NIH Pancreas CT dataset.
Fig. 3: More experimental results on the Co-BioNet framework.
Fig. 4: In-depth analysis of model accuracy.
Fig. 5: Ablation studies performed on the Pancreas CT dataset.

Data availability

To evaluate our pipeline we used publicly available datasets: the National Institutes of Health (NIH) Pancreas CT Dataset28,29, the 2018 Left Atrial Segmentation Challenge Dataset30, the Medical Segmentation Decathlon (MSD) BraTS dataset and the BraTS Challenge 2022 multi-institutional routine clinically acquired multi-parametric MRI (mpMRI) dataset. The NIH Pancreas CT dataset is available at https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT refs. 29,52,54, the 2018 Left Atrial (LA) Segmentation Challenge dataset is available at http://atriaseg2018.cardiacatlas.org/ ref. 55, the MSD BraTS dataset is available at http://medicaldecathlon.com/ ref. 31, and the BraTS Challenge 2022 mpMRI dataset is available at http://braintumorsegmentation.org/ refs. 37,38,39,40,41. Preprocessing scripts for the NIH Pancreas CT dataset are available at https://github.com/ycwu1997/MC-Net ref. 23. The preprocessed LA dataset is available at https://github.com/yulequan/UA-MT ref. 26.
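
For orientation, a minimal sketch of the kind of CT preprocessing such pipelines typically perform is given below. The Hounsfield-unit window and z-score normalisation are common choices for abdominal CT, not necessarily the exact settings of the linked MC-Net scripts, and the use of nibabel and NIfTI files is an assumption.

```python
import numpy as np
import nibabel as nib  # assumption: volumes stored as NIfTI files

def preprocess_ct(path, hu_window=(-125, 275)):
    """Illustrative CT preprocessing: window the intensity range in Hounsfield
    units, then z-score normalise. The window values are illustrative."""
    vol = nib.load(path).get_fdata().astype(np.float32)
    vol = np.clip(vol, *hu_window)                 # suppress irrelevant extremes
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)  # zero mean, unit variance
    return vol
```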

Code availability

The code was implemented in Python using the deep learning framework PyTorch56. The code is publicly available at https://github.com/himashi92/Co-BioNet ref. 57. All the pre-trained model weights and supplementary results can be found in Figshare58. The source code is provided under an MIT licence.
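
For readers who download the pre-trained weights, restoring them follows PyTorch's standard checkpoint mechanism. The snippet below is a generic sketch: the file name is a placeholder and the stand-in module is not the Co-BioNet architecture (the real model classes are in the repository, ref. 57).

```python
import torch
import torch.nn as nn

# Stand-in for a V-Net-style segmentation network; replace with the model
# class from the Co-BioNet repository and the weight file from Figshare.
net = nn.Conv3d(in_channels=1, out_channels=2, kernel_size=3, padding=1)

torch.save(net.state_dict(), "checkpoint.pth")  # placeholder file name
net.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
net.eval()  # switch to inference mode before generating segmentations
```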

References

  1. Doi, K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Comput. Med. Imag. Graph. 31, 198–211 (2007).

  2. Shen, D., Wu, G. & Suk, H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 19, 221–248 (2017).

  3. Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J. & Maier-Hein, K. H. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203–211 (2021).

  4. Rajpurkar, P., Chen, E., Banerjee, O. & Topol, E. J. AI in health and medicine. Nat. Med. 28, 31–38 (2022).

  5. Shad, R., Cunningham, J. P., Ashley, E. A., Langlotz, C. P. & Hiesinger, W. Designing clinically translatable artificial intelligence systems for high-dimensional medical imaging. Nat. Mach. Intell. 3, 929–935 (2021).

  6. Schoppe, O. et al. Deep learning-enabled multi-organ segmentation in whole-body mouse scans. Nat. Commun. 11, 5626 (2020).

  7. Holmberg, O. G. et al. Self-supervised retinal thickness prediction enables deep learning from unlabelled data to boost classification of diabetic retinopathy. Nat. Mach. Intell. 2, 719–726 (2020).

  8. Xu, C., Tao, D. & Xu, C. A survey on multi-view learning. Preprint at https://arxiv.org/abs/1304.5634 (2013).

  9. Dasgupta, S., Littman, M. L. & McAllester, D. PAC generalization bounds for co-training. In Proc. 14th International Conference on Neural Information Processing Systems: Natural and Synthetic NIPS'01 375–382 (ACM, 2002).

  10. Blum, A. & Mitchell, T. Combining labeled and unlabeled data with co-training. In Proc. 11th Annual Conference on Computational Learning Theory 92–100 (ACM, 1998).

  11. Sindhwani, V., Niyogi, P. & Belkin, M. A co-regularization approach to semi-supervised learning with multiple views. In Proc. ICML Workshop on Learning with Multiple Views Vol. 2005, 74–79 (Citeseer, 2005).

  12. Sindhwani, V. & Rosenberg, D. S. An RKHS for multi-view learning and manifold co-regularization. In Proc. 25th International Conference on Machine Learning 976–983 (ACM, 2008).

  13. Nigam, K. & Ghani, R. Analyzing the effectiveness and applicability of co-training. In Proc. Ninth International Conference on Information and Knowledge Management 86–93 (ACM, 2000).

  14. Muslea, I., Minton, S. & Knoblock, C. A. Active learning with multiple views. J. Artif. Intell. Res. 27, 203–233 (2006).

  15. Kiritchenko, S. & Matwin, S. Email classification with co-training. In Proc. 2001 Conference of the Centre for Advanced Studies on Collaborative Research 8 (Citeseer, 2001).

  16. Wan, X. Co-training for cross-lingual sentiment classification. In Proc. Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP 235–243 (ACM, 2009).

  17. Xia, Y. et al. Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation. Med. Image Anal. 65, 101766 (2020).

  18. Qiao, S., Shen, W., Zhang, Z., Wang, B. & Yuille, A. Deep co-training for semi-supervised image recognition. In Proc. European Conference on Computer Vision (ECCV) 135–152 (Springer, 2018).

  19. Peng, J., Estrada, G., Pedersoli, M. & Desrosiers, C. Deep co-training for semi-supervised image segmentation. Pattern Recognit. 107, 107269 (2020).

  20. Peiris, H., Chen, Z., Egan, G. & Harandi, M. Duo-SegNet: adversarial dual-views for semi-supervised medical image segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 428–438 (Springer, 2021).

  21. Miyato, T., Maeda, S.-i., Koyama, M. & Ishii, S. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 41, 1979–1993 (2018).

  22. Wu, Y., Xu, M., Ge, Z., Cai, J. & Zhang, L. Semi-supervised left atrium segmentation with mutual consistency training. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 297–306 (Springer, 2021).

  23. Wu, Y. et al. Mutual consistency learning for semi-supervised medical image segmentation. Med. Image Anal. 81, 102530 (2022).

  24. Zheng, X. et al. Uncertainty-aware deep co-training for semi-supervised medical image segmentation. Comput. Biol. Med. 149, 106051 (2022).

  25. Luo, X., Chen, J., Song, T. & Wang, G. Semi-supervised medical image segmentation through dual-task consistency. In Proc. AAAI Conference on Artificial Intelligence Vol. 35, 8801–8809 (AAAI, 2021).

  26. Yu, L., Wang, S., Li, X., Fu, C.-W. & Heng, P.-A. Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (eds. Shen, D. et al.) 605–613 (Springer, 2019).

  27. Li, S., Zhang, C. & He, X. Shape-aware semi-supervised 3D semantic segmentation for medical images. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (eds. Martel, A. L. et al.) 552–561 (Springer, 2020).

  28. Roth, H. R. et al. DeepOrgan: multi-level deep convolutional networks for automated pancreas segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 556–564 (Springer, 2015).

  29. Roth, H. R. et al. Data from pancreas-CT. The Cancer Imaging Archive https://doi.org/10.7937/K9/TCIA.2016.tNB1kqBU (2016).

  30. Xiong, Z. et al. A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging. Med. Image Anal. 67, 101832 (2021).

  31. Antonelli, M. et al. The medical segmentation decathlon. Nat. Commun. 13, 4128 (2022).

  32. Milletari, F., Navab, N. & Ahmadi, S.-A. V-Net: fully convolutional neural networks for volumetric medical image segmentation. In Proc. 2016 Fourth International Conference on 3D Vision (3DV) 565–571 (IEEE, 2016).

  33. Kornblith, S., Norouzi, M., Lee, H. & Hinton, G. Similarity of neural network representations revisited. In Proc. International Conference on Machine Learning 3519–3529 (PMLR, 2019).

  34. Wang, W. & Zhou, Z.-H. Co-training with insufficient views. In Proc. Asian Conference on Machine Learning 467–482 (PMLR, 2013).

  35. Shaw, R., Sudre, C., Ourselin, S. & Cardoso, M. J. MRI k-space motion artifact augmentation: model robustness and task-specific uncertainty. In Proc. International Conference on Medical Imaging with Deep Learning Vol. 102, 427–436 (PMLR, 2018).

  36. Goodfellow, I. et al. Generative adversarial networks. Commun. ACM 63, 139–144 (2020).

  37. Baid, U. et al. The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. Preprint at https://arxiv.org/abs/2107.02314 (2021).

  38. Menze, B. H. et al. The multimodal brain tumor image segmentation benchmark (BraTS). IEEE Trans. Med. Imag. 34, 1993–2024 (2014).

  39. Bakas, S. et al. Advancing the Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 4, 170117 (2017).

  40. Bakas, S. et al. Segmentation labels and radiomic features for the pre-operative scans of the TCGA-GBM collection (BraTS-TCGA-GBM). The Cancer Imaging Archive https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q (2017).

  41. Bakas, S. et al. Segmentation labels and radiomic features for the pre-operative scans of the TCGA-LGG collection (BraTS-TCGA-LGG). The Cancer Imaging Archive https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF (2017).

  42. Begoli, E., Bhattacharya, T. & Kusnezov, D. The need for uncertainty quantification in machine-assisted medical decision making. Nat. Mach. Intell. 1, 20–23 (2019).

  43. Bilodeau, A. et al. Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations. Nat. Mach. Intell. 4, 455–466 (2022).

  44. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 234–241 (Springer, 2015).

  45. Isola, P., Zhu, J.-Y., Zhou, T. & Efros, A. A. Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 1125–1134 (IEEE, 2017).

  46. Yu, L., Wang, S., Li, X., Fu, C.-W. & Heng, P.-A. Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 605–613 (Springer, 2019).

  47. Li, S., Zhang, C. & He, X. Shape-aware semi-supervised 3D semantic segmentation for medical images. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 552–561 (Springer, 2020).

  48. Milletari, F., Navab, N. & Ahmadi, S.-A. V-Net: fully convolutional neural networks for volumetric medical image segmentation. In Proc. 2016 Fourth International Conference on 3D Vision (3DV) 565–571 (IEEE, 2016).

  49. Lee, D., Moon, W.-J. & Ye, J. C. Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks. Nat. Mach. Intell. 2, 34–42 (2020).

  50. Laine, S. & Aila, T. Temporal ensembling for semi-supervised learning. In Proc. International Conference on Learning Representations (ICLR, 2017).

  51. Peiris, H., Hayat, M., Chen, Z., Egan, G. & Harandi, M. A robust volumetric transformer for accurate 3D tumor segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 162–172 (Springer, 2022).

  52. Roth, H., Lu, L., Farag, A., Sohn, A. & Summers, R. Spatial aggregation of holistically-nested networks for automated pancreas segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention 451–459 (Springer, 2016).

  53. Taha, A. A. & Hanbury, A. Metrics for evaluating 3D medical image segmentation: analysis, selection and tool. BMC Med. Imag. 15, 29 (2015).

  54. Clark, K. et al. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J. Digit. Imag. 26, 1045–1057 (2013).

  55. Xiong, Z. et al. A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging. Med. Image Anal. 67, 101832 (2021).

  56. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Proc. 33rd International Conference on Neural Information Processing Systems (NIPS ’19) 8026–8037 (ACM, 2019).

  57. Peiris, H. himashi92/co-bionet: stable release. Zenodo https://doi.org/10.5281/zenodo.7935535 (2023).

  58. Peiris, H. Project contributions. Figshare https://figshare.com/articles/journal_contribution/Project_Contributions/22140194 (2023).

  59. Hatamizadeh, A. et al. UNETR: transformers for 3D medical image segmentation. In Proc. IEEE/CVF Winter Conference on Applications of Computer Vision 574–584 (IEEE, 2022).

Acknowledgements

This work was supported by funding from The Australian Research Council Discovery Program (DP210101863 and DP230101176).

Author information

Contributions

H.P. and M. Harandi conceived the initial idea and planned the experiments. H.P. developed the core theory, designed the model and computational framework, and analysed the data. M. Harandi directed the project. H.P., M. Harandi and M. Hayat interpreted the results. M. Harandi, Z.C., G.E. and M. Hayat provided scientific insights on the applications and supervised the study. H.P., M. Harandi and M. Hayat wrote the manuscript, with feedback from all other authors. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Himashi Peiris.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Xiaosong Wang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Comparison of qualitative segmentation masks.

Comparison generated from models trained on 10% of NIH Pancreas CT labelled data.

Extended Data Fig. 2 Comparison of qualitative segmentation masks.

Comparison generated from models trained on 20% of NIH Pancreas CT labelled data.

Extended Data Fig. 3 Experimental results analysis with semi-supervised baselines on the 2018 Left Atrial Segmentation Challenge dataset.

a, Comparison of accuracy scores for models trained with 10% and 20% of labelled data. b, Patient MRI scan on three anatomical planes: axial, sagittal and coronal. Co-BioNet produces a segmentation mask on the three anatomical planes for the region of interest. c, Performance comparison on distance measurement scores for models trained on 10% and 20% labelled data. d, The clustered bar chart shows performance scores for Co-BioNet against a fully supervised learning method on three labelled data settings (10%, 20% and 100%). e, A reel of images shows a qualitative comparison of representative images from the LA MRI dataset, overlaid with the ground truth (expert annotation), the predicted segmentation masks and the corresponding segmented volume. Predictions generated from models trained with 10% labelled data. f, Predictions generated from models trained with 20% labelled data. (See Extended Data Figs. 4 and 5 for more qualitative results.)
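
The accuracy and distance scores referred to in these captions are the standard overlap and surface metrics for 3D segmentation (ref. 53). As a point of reference, here is a minimal numpy/scipy sketch of the two most common ones, the Dice coefficient and the 95th-percentile Hausdorff distance; it assumes boolean masks and isotropic unit voxel spacing, and is not the evaluation code behind the reported numbers.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice overlap between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b):
    """Distances from the surface voxels of mask a to the nearest voxel of b."""
    surface = a & ~binary_erosion(a)        # boundary voxels of a
    return distance_transform_edt(~b)[surface]

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between boolean masks."""
    d = np.concatenate([_surface_distances(a, b), _surface_distances(b, a)])
    return np.percentile(d, 95)
```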

Extended Data Fig. 4 Comparison of qualitative segmentation masks.

Comparison generated from the models trained on 10% of 2018 Left Atrial Segmentation Challenge labelled data.

Extended Data Fig. 5 Comparison of qualitative segmentation masks.

Comparison generated from the models trained on 20% of 2018 Left Atrial Segmentation Challenge labelled data.

Extended Data Fig. 6 Predictions generated from segmentation network and critic network.

Predictions for Co-BioNet trained on 20% of Pancreas CT labelled data.

Extended Data Fig. 7 Quantitative trend analysis on the Co-BioNet framework.

Quantitative trend analysis for different Pancreas CT and LA MRI labelled data portions.

Extended Data Table 1 Quantitative comparison of segmentation accuracy scores for the predictions generated for the multi-modal BraTS 2022 dataset

Supplementary information

Supplementary Information

Supplementary Tables 1–7.

Reporting Summary

Supplementary Data 1

Co-BioNet performance scores for individual Pancreas CT test cases.

Supplementary Data 2

MC-Net+ performance scores for individual Pancreas CT test cases.

Supplementary Data 3

Co-BioNet performance scores for individual LA MRI test cases.

Supplementary Data 4

MC-Net+ performance scores for individual LA MRI test cases.

Supplementary Data 5

Co-BioNet performance scores for MSD BraTS MRI test cases.

Supplementary Data 6

MC-Net+ performance scores for MSD BraTS MRI test cases.

Supplementary Data 7

P-value calculation.
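
Supplementary Data 7 documents the actual calculation; purely as an illustration of how per-case scores from two methods can be compared, a paired non-parametric test in scipy might look as follows (the scores below are made-up placeholders, not values from the paper).

```python
from scipy.stats import wilcoxon

# Hypothetical per-case Dice scores for two methods; the real per-case values
# are in Supplementary Data 1-6 and the actual test is in Supplementary Data 7.
co_bionet = [0.82, 0.79, 0.85, 0.81, 0.78, 0.84]
baseline = [0.78, 0.77, 0.80, 0.79, 0.75, 0.81]
stat, p = wilcoxon(co_bionet, baseline)  # paired Wilcoxon signed-rank test
print(f"statistic={stat:.3f}, p={p:.4f}")
```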

Supplementary Data 8

Trend analysis on Pancreas CT test cases.

Supplementary Data 9

CKA values for model layers trained on 10% of Pancreas CT labelled data.

Supplementary Data 10

CKA values for model layers trained on 20% of Pancreas CT labelled data.
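
CKA here is the centred kernel alignment similarity of ref. 33, used to compare the representations learned by the two views. A minimal numpy sketch of the linear variant is shown below; the feature matrices are assumed to be flattened layer activations of shape (examples, features), and this is not necessarily the exact estimator used for Supplementary Data 9 and 10.

```python
import numpy as np

def linear_cka(x, y):
    """Linear centred kernel alignment between feature matrices x and y,
    each of shape (n_examples, n_features); cf. Kornblith et al. (ref. 33)."""
    x = x - x.mean(axis=0, keepdims=True)  # centre each feature
    y = y - y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(y.T @ x, 'fro') ** 2  # ||Y^T X||_F^2
    return hsic / (np.linalg.norm(x.T @ x, 'fro')
                   * np.linalg.norm(y.T @ y, 'fro'))
```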

Supplementary Data 11

Values used to construct the box plots.

Supplementary Code 1

Zip file containing all the scripts and instructions to train and test Co-BioNet.

Supplementary Video 1

Video on end-to-end training of Co-BioNet.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Peiris, H., Hayat, M., Chen, Z. et al. Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation. Nat Mach Intell 5, 724–738 (2023). https://doi.org/10.1038/s42256-023-00682-w
