Abstract
In machine learning for image-based medical diagnostics, supervised convolutional neural networks are typically trained with large and expertly annotated datasets obtained using high-resolution imaging systems. However, the network’s performance can degrade substantially when applied to a dataset with a different distribution. Here, we show that adversarial learning can be used to develop high-performing networks trained on unannotated medical images of varying image quality. Specifically, we used low-quality images acquired using inexpensive portable optical systems to train networks for the evaluation of human embryos, the quantification of human sperm morphology and the diagnosis of malarial infections in the blood, and show that the networks performed well across different data distributions. We also show that adversarial learning can be used with unlabelled data from unseen domain-shifted datasets to adapt pretrained supervised networks to new distributions, even when data from the original distribution are not available. Adaptive adversarial networks may expand the use of validated neural-network models for the evaluation of data collected from multiple imaging systems of varying quality without compromising the knowledge stored in the network.
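The adversarial adaptation described above builds on domain-adversarial training, in which a domain discriminator is trained to tell source from target images while the feature extractor is trained to fool it. The sketch below is not the authors’ MD-nets implementation; it is a minimal NumPy illustration of the gradient-reversal trick at the core of domain-adversarial training (Ganin et al.), with hypothetical function names, showing how the discriminator’s gradient is negated before reaching the feature extractor.

```python
import numpy as np

# Illustrative sketch only: the gradient-reversal operation used in
# domain-adversarial training. Function names are hypothetical and do
# not correspond to the MD-nets code released with this article.

def grad_reverse_forward(x):
    # Forward pass is the identity: features flow unchanged into the
    # domain discriminator.
    return x

def grad_reverse_backward(grad_output, lam=1.0):
    # Backward pass negates and scales the discriminator's gradient,
    # so the shared feature extractor learns domain-invariant features
    # (it is updated to *confuse* the discriminator), while the
    # discriminator itself is trained normally on the un-reversed loss.
    return -lam * grad_output

features = np.array([0.5, -1.2, 3.0])
assert np.allclose(grad_reverse_forward(features), features)

grad_from_discriminator = np.array([0.2, -0.4, 1.0])
assert np.allclose(
    grad_reverse_backward(grad_from_discriminator, lam=0.5),
    np.array([-0.1, 0.2, -0.5]),
)
```

In a full training loop this reversed gradient is what allows unlabelled target-domain images (for example, low-quality smartphone micrographs) to shape the feature extractor without any target annotations.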
Data availability
Deidentified data collected and annotated for this study are available for research use online (https://osf.io/3kc2d/). The public datasets used in this study can be accessed via information in the relevant cited publications.
Code availability
The code and algorithms developed for this study, in particular MD-nets and its variants, are available on GitHub (https://github.com/shafieelab/Medical-Domain-Adaptive-Neural-Networks). Custom software and scripts that are supplementary in nature and specific to particular subsections of the study (in particular, the smartphone application for sperm annotation) are available from the corresponding author on reasonable request.
References
Esteva, A. et al. A guide to deep learning in healthcare. Nat. Med. 25, 24–29 (2019).
Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Morvant, E. Advances in Domain Adaptation Theory: Available Theoretical Results (Elsevier, 2019).
Khosravi, P. et al. Deep learning enables robust assessment and selection of human blastocysts after in vitro fertilization. NPJ Digit. Med. 2, 21 (2019).
Zech, J. R. et al. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 15, e1002683 (2018).
Badgeley, M. A. et al. Deep learning predicts hip fracture using confounding patient and healthcare variables. NPJ Digit. Med. 2, 31 (2019).
Beede, E. et al. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In Proc. 2020 CHI Conference on Human Factors in Computing Systems 1–12 (Association for Computing Machinery, 2020).
Hosny, A. & Aerts, H. J. W. L. Artificial intelligence for global health. Science 366, 955–956 (2019).
Goodfellow, I. J. et al. Generative adversarial networks. In Adv. Neural Inf. Process. Syst. (eds Ghahramani, Z. et al.) (Curran Associates, Inc., 2014).
Long, M., Cao, Z., Wang, J. & Jordan, M. I. Conditional adversarial domain adaptation. In Adv. Neural Inf. Process. Syst. (eds Bengio, S. et al.) (Curran Associates, Inc., 2018).
Ganin, Y. et al. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17, 1–35 (2016).
Kanakasabapathy, M. K. et al. Development and evaluation of inexpensive automated deep learning-based imaging systems for embryology. Lab Chip 19, 4139–4145 (2019).
Bormann, C. L. et al. Consistency and objectivity of automated embryo assessments using deep neural networks. Fertil. Steril. 113, 781–787 (2020).
Thirumalaraju, P. et al. Evaluation of deep convolutional neural networks in classifying human embryo images based on their morphological quality. Heliyon 7, e06298 (2021).
Bormann, C. L. et al. Performance of a deep learning based neural network in the selection of human blastocysts for implantation. eLife 9, e55301 (2020).
Curchoe, C. L. & Bormann, C. L. Artificial intelligence and machine learning for human reproduction and embryology presented at ASRM and ESHRE 2018. J. Assist. Reprod. Genet. 36, 591–600 (2019).
Hardarson, T., Van Landuyt, L. & Jones, G. The blastocyst. Hum. Reprod. 27, i72–i91 (2012).
Saenko, K., Kulis, B., Fritz, M. & Darrell, T. Adapting visual category models to new domains. In 11th European Conference on Computer Vision (eds Daniilidis, K. et al.) 213–226 (Springer Berlin Heidelberg, 2010).
Tzeng, E., Hoffman, J., Saenko, K. & Darrell, T. Adversarial discriminative domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2962–2971 (IEEE, 2017).
Long, M., Cao, Y., Wang, J. & Jordan, M. I. Learning transferable features with deep adaptation networks. In Proc. 32nd International Conference on Machine Learning (eds Bach, F. & Blei, D.) 97–105 (PMLR, 2015).
Bousmalis, K., Silberman, N., Dohan, D., Erhan, D. & Krishnan, D. Unsupervised pixel-level domain adaptation with generative adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 95–104 (IEEE, 2017).
Wei, K.-Y. & Hsu, C.-T. Generative adversarial guided learning for domain adaptation. In British Machine Vision Conference 2018 100 (BMVA Press, 2018).
Kang, G., Jiang, L., Yang, Y. & Hauptmann, A. G. Contrastive adaptation network for unsupervised domain adaptation. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 4888–4897 (IEEE, 2019).
Wilson, G. & Cook, D. J. A survey of unsupervised deep domain adaptation. ACM Trans. Intell. Syst. Technol. 11, 51 (2020).
WHO Laboratory Manual for the Examination and Processing of Human Semen (WHO, 2010).
Kose, M., Sokmensuer, L. K., Demir, A., Bozdag, G. & Gunalp, S. Manual versus computer-automated semen analysis. Clin. Exp. Obstet. Gynecol. 41, 662–664 (2014).
Mortimer, S. T., van der Horst, G. & Mortimer, D. The future of computer-aided sperm analysis. Asian J. Androl. 17, 545–553 (2015).
Thirumalaraju, P. et al. Automated sperm morphology testing using artificial intelligence. Fertil. Steril. 110, e432 (2018).
Thirumalaraju, P. et al. Human sperm morphology analysis using smartphone microscopy and deep learning. Fertil. Steril. 112, e41 (2019).
Kanakasabapathy, M. K. et al. An automated smartphone-based diagnostic assay for point-of-care semen analysis. Sci. Transl. Med. 9, eaai7863 (2017).
Agarwal, A. et al. Home sperm testing device versus laboratory sperm quality analyzer: comparison of motile sperm concentration. Fertil. Steril. 110, 1277–1284 (2018).
Rajaraman, S. et al. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images. PeerJ 6, e4568 (2018).
World Malaria Report 2018 (WHO, 2018).
Parasites—Malaria (CDC, 2019); https://www.cdc.gov/parasites/malaria/index.html
Treatment of Malaria: Guidelines For Clinicians (United States) (CDC, 2020); https://www.cdc.gov/malaria/diagnosis_treatment/clinicians1.html
Guidelines for the Treatment of Malaria (WHO, 2015).
Global Technical Strategy for Malaria 2016–2030 (WHO, 2015).
Poostchi, M., Silamut, K., Maude, R. J., Jaeger, S. & Thoma, G. Image analysis and machine learning for detecting malaria. Transl. Res. 194, 36–55 (2018).
Kelly, C. J., Karthikesalingam, A., Suleyman, M., Corrado, G. & King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 17, 195 (2019).
Kim, D. W., Jang, H. Y., Kim, K. W., Shin, Y. & Park, S. H. Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: results from recently published papers. Korean J. Radiol. 20, 405–410 (2019).
Winkler, J. K. et al. Association between surgical skin markings in dermoscopic images and diagnostic performance of a deep learning convolutional neural network for melanoma recognition. JAMA Dermatol. 155, 1135–1141 (2019).
D’Amour, A. et al. Underspecification presents challenges for credibility in modern machine learning. Preprint at https://arxiv.org/abs/2011.03395 (2020).
Kazeminia, S. et al. GANs for medical image analysis. Artif. Intell. Med. 109, 101938 (2020).
Rivenson, Y. et al. Deep learning enhanced mobile-phone microscopy. ACS Photonics 5, 2354–2364 (2018).
Shin, H.-C. et al. in Simulation and Synthesis in Medical Imaging Vol. 11037 (eds Gooya, A. et al.) 1–11 (Springer, 2018).
Ghorbani, A., Natarajan, V., Coz, D. & Liu, Y. DermGAN: synthetic generation of clinical skin images with pathology. In Proc. Machine Learning for Health NeurIPS Workshop (eds Dalca, A. V. et al.) 155–170 (PMLR, 2020).
Rivenson, Y. et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat. Biomed. Eng. 3, 466–477 (2019).
Rivenson, Y., Wu, Y. & Ozcan, A. Deep learning in holography and coherent imaging. Light Sci. Appl. 8, 85 (2019).
Belthangady, C. & Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215–1225 (2019).
Sankaranarayanan, S., Balaji, Y., Castillo, C. D. & Chellappa, R. Generate to adapt: aligning domains using generative adversarial networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 8503–8512 (IEEE, 2018).
Wood, C. S. et al. Taking connected mobile-health diagnostics of infectious diseases to the field. Nature 566, 467–474 (2019).
DPDx—Laboratory Identification of Parasites of Public Health Concern (CDC, 2020); https://www.cdc.gov/dpdx/malaria/index.html
Mirza, M. & Osindero, S. Conditional generative adversarial nets. Preprint at https://arxiv.org/abs/1411.1784 (2014).
Chollet, F. Xception: deep learning with depthwise separable convolutions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1800–1807 (IEEE, 2017).
Caron, M., Bojanowski, P., Joulin, A. & Douze, M. Deep clustering for unsupervised learning of visual features. In 15th European Conference on Computer Vision (eds Ferrari, V. et al.) 139–156 (Springer, 2018).
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2818–2826 (IEEE, 2016).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (IEEE, 2016).
Szegedy, C., Ioffe, S., Vanhoucke, V. & Alemi, A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proc. Thirty-First AAAI Conference on Artificial Intelligence 4278–4284 (AAAI Press, 2017).
Acknowledgements
We thank the staff members of the Massachusetts General Hospital (MGH) IVF laboratory and the MGH clinical pathology laboratory for their support and assistance in data collection and annotation; the American Association of Bioanalysts Proficiency Testing Services for providing sperm image data and clinical performance information; and staff at the Massachusetts General Hospital and Brigham and Women’s Hospital Centre for Clinical Data Science (CCDS) for providing access to additional compute power. The work reported here was partially supported by the National Institutes of Health under award numbers R01AI118502, R01AI138800 and R61AI140489; the Brigham and Women’s Hospital through the Precision Medicine Development Grant; and the Mass General Brigham through Partners Innovation Discovery grant.
Author information
Contributions
M.K.K., P.T. and H.S. designed the study. H.S. supervised the overall study. P.T., H.K., F.D., D.K., R.G. and R.P. developed the scripts and algorithms used in this study. P.T. and A.D.S. developed the different imaging systems used in this study. A.M.T. and J.A.B. provided the malaria samples and confirmatory tests for this study. D.R.K. provided supervision as a clinical infectious disease expert. J.C.P. provided supervision and resources for the sperm analysis section of the study. C.L.B. provided sperm and embryo data, supervision and annotations for this study. M.K.K. and P.T. performed the data analysis. M.K.K., P.T. and H.S. wrote the manuscript. All coauthors edited the manuscript.
Ethics declarations
Competing interests
M.K.K., P.T., C.L.B. and H.S. have submitted patent applications (WO2019068073) and invention disclosures related to this work through Brigham and Women’s Hospital and Mass General Brigham. All other authors declare no competing interests.
Additional information
Peer review information Nature Biomedical Engineering thanks the anonymous reviewers for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary methods, figures and tables.
About this article
Cite this article
Kanakasabapathy, M.K., Thirumalaraju, P., Kandula, H. et al. Adaptive adversarial neural networks for the analysis of lossy and domain-shifted datasets of medical images. Nat Biomed Eng 5, 571–585 (2021). https://doi.org/10.1038/s41551-021-00733-w
This article is cited by
- The prospect of artificial intelligence to personalize assisted reproductive technology. npj Digital Medicine (2024)
- Smartphone-based platforms implementing microfluidic detection with image-based artificial intelligence. Nature Communications (2023)
- Rapidly adaptable automated interpretation of point-of-care COVID-19 diagnostics. Communications Medicine (2023)
- Advancements in the future of automating micromanipulation techniques in the IVF laboratory using deep convolutional neural networks. Journal of Assisted Reproduction and Genetics (2023)
- Proceedings of the first world conference on AI in fertility. Journal of Assisted Reproduction and Genetics (2023)