

Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks

A preprint version of the article is available at arXiv.

Abstract

A unique advantage of magnetic resonance imaging (MRI) is its mechanism for generating various image contrasts depending on tissue-specific parameters, which provides useful clinical information. Unfortunately, a complete set of MR contrasts is often difficult to obtain in a real clinical environment. Recently, there have been claims that generative models such as generative adversarial networks (GANs) can synthesize MR contrasts that are not acquired. However, the poor scalability of existing GAN-based image synthesis poses a fundamental challenge to understanding the nature of MR contrasts: which contrasts matter, and which cannot be synthesized by generative models? Here, we show that these questions can be addressed systematically by learning the joint manifold of multiple MR contrasts using collaborative generative adversarial networks. Our experimental results show that the exogenous contrast provided by contrast agents is not replaceable, but endogenous contrasts such as T1 and T2 can be synthesized from other contrasts. These findings provide important guidance for the acquisition-protocol design of MR in clinical environments.
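To make the imputation setup concrete, the sketch below illustrates the multi-input, single-output idea described in the abstract: a single generator receives the N-1 acquired contrasts (zero-filled at the missing slot) together with a one-hot mask identifying the target contrast, and synthesizes that missing contrast. This is a minimal illustrative sketch in PyTorch; the layer sizes, names and class are assumptions for exposition, not the authors' actual generator architecture.

```python
# Minimal sketch (illustrative, not the authors' code) of multi-contrast
# imputation: N-1 available MR contrasts plus a target-domain mask go in,
# the single missing contrast comes out.
import torch
import torch.nn as nn

N_CONTRASTS = 4  # e.g. T1, T2, T2-FLAIR, T1-Gd

class ImputationGenerator(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Input: N contrast channels (missing one zero-filled) concatenated
        # with an N-channel one-hot mask encoding the target domain.
        self.net = nn.Sequential(
            nn.Conv2d(2 * N_CONTRASTS, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),  # the synthesized contrast
        )

    def forward(self, contrasts, target_mask):
        # contrasts:   (B, N, H, W), missing contrast zeroed out
        # target_mask: (B, N, H, W), one-hot over the domain to synthesize
        return self.net(torch.cat([contrasts, target_mask], dim=1))

# Toy forward pass: impute the fourth contrast from the other three.
x = torch.randn(1, N_CONTRASTS, 64, 64)
x[:, 3] = 0.0                               # pretend T1-Gd was not acquired
mask = torch.zeros(1, N_CONTRASTS, 64, 64)
mask[:, 3] = 1.0
fake_t1gd = ImputationGenerator()(x, mask)  # shape (1, 1, 64, 64)
```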


Fig. 1: Concept diagram of multidomain imputation task on MR contrast images using the proposed CollaGAN.
Fig. 2: BraTS segmentation results for quantitative evaluation of CollaGAN.
Fig. 3: Quantitative results for segmentation performance using the following data sets: original BraTS, T1Colla, T2Colla, T2FColla and T1GdColla.
Fig. 4: Comparison of the MR contrast imputation results using CollaGAN, CycleGAN and StarGAN.
Fig. 5: Architecture of the generator used for MR contrast imputation.
Fig. 6: Reconstruction results with lesions by CollaGAN.
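Figures 2 and 3 score the clinical usefulness of the synthesized contrasts indirectly, through the performance of a downstream brain-tumour segmentation model on the imputed data sets. Segmentation overlap of this kind is commonly summarized with the Dice coefficient; the snippet below is a minimal illustrative sketch of that metric only, not the authors' evaluation pipeline.

```python
# Minimal sketch of the Dice overlap metric used to score tumour
# segmentations; illustrative only, not the paper's evaluation code.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping square "lesions" on a 64 x 64 slice.
a = np.zeros((64, 64)); a[10:30, 10:30] = 1
b = np.zeros((64, 64)); b[15:35, 15:35] = 1
print(dice_score(a, b))  # ~0.56
```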


Data availability

BraTS data are available at https://www.smir.ch/BraTS/Start2015. The MAGiC data sets are available at https://github.com/jongcye/CollaGAN_MRI.

Code availability

The CollaGAN code, together with the hyperparameters and the training procedure, can also be found at https://doi.org/10.5281/zenodo.3567003.


Acknowledgement

This research was supported by the National Research Foundation (NRF) of Korea grant NRF-2016R1A2B3008104.

Author information


Contributions

J.C.Y. supervised the project, including its conception and discussion. D.L. and J.C.Y. designed the experiments and analysis. D.L. performed all experiments and analysis. W.-J.M. prepared the MAGiC MRI databases and performed the qualitative assessment of the results. D.L., W.-J.M. and J.C.Y. wrote the manuscript.

Corresponding author

Correspondence to Jong Chul Ye.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information


About this article


Cite this article

Lee, D., Moon, W.-J. & Ye, J. C. Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks. Nat. Mach. Intell. 2, 34–42 (2020). https://doi.org/10.1038/s42256-019-0137-x


