Abstract
A unique advantage of magnetic resonance imaging (MRI) is its mechanism for generating various image contrasts depending on tissue-specific parameters, which provides useful clinical information. Unfortunately, a complete set of MR contrasts is often difficult to obtain in a real clinical environment. Recently, it has been claimed that generative models such as generative adversarial networks (GANs) can synthesize MR contrasts that were not acquired. However, the poor scalability of existing GAN-based image synthesis poses a fundamental challenge to understanding the nature of MR contrasts: which contrasts matter, and which cannot be synthesized by generative models? Here, we show that these questions can be addressed systematically by learning the joint manifold of multiple MR contrasts using collaborative generative adversarial networks. Our experimental results show that the exogenous contrast provided by contrast agents is not replaceable, but endogenous contrasts such as T1 and T2 can be synthesized from other contrasts. These findings provide important guidance for the acquisition-protocol design of MR in clinical environments.
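The many-to-one imputation setting described above can be sketched as follows. This is an illustrative simplification of the collaborative-GAN input construction, not the authors' implementation (which is available via the code-availability link below); the function name `build_generator_input` and the four-contrast setup are assumptions made for this example.

```python
import numpy as np

# CollaGAN-style many-to-one imputation (illustrative sketch only).
# Four contrasts, e.g. T1, T2, contrast-enhanced T1 and FLAIR; the generator
# receives all available contrasts plus a one-hot code of the missing target.
CONTRASTS = ["T1", "T2", "T1Gd", "FLAIR"]

def build_generator_input(images, missing_idx):
    """Zero out the missing contrast and append a one-hot target mask.

    images: array of shape (n_contrasts, H, W)
    returns: array of shape (2 * n_contrasts, H, W)
    """
    n, h, w = images.shape
    x = images.copy()
    x[missing_idx] = 0.0  # the contrast to be synthesized is unavailable
    mask = np.zeros((n, h, w), dtype=images.dtype)
    mask[missing_idx] = 1.0  # one-hot channels telling the generator the target
    return np.concatenate([x, mask], axis=0)

# Toy example: four contrasts of a 2x2 "image"; impute the T1Gd channel.
imgs = np.arange(16, dtype=np.float64).reshape(4, 2, 2)
inp = build_generator_input(imgs, missing_idx=2)
print(inp.shape)  # (8, 2, 2)
```

Conditioning a single generator on a target mask in this way is what lets one network handle every missing-contrast combination, rather than training a separate GAN per contrast pair.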
Data availability
BraTS data are available at https://www.smir.ch/BraTS/Start2015. The MAGiC data sets are available at https://github.com/jongcye/CollaGAN_MRI.
Code availability
The CollaGAN code, together with the hyperparameters and the training procedure, can be found at https://doi.org/10.5281/zenodo.3567003.
Acknowledgements
This research was supported by the National Research Foundation (NRF) of Korea grant NRF-2016R1A2B3008104.
Author information
Contributions
J.C.Y. supervised the project in conception and discussion. D.L. and J.C.Y. designed the experiments and analysis. D.L. performed all experiments and analysis. W.-J.M. prepared the MAGiC MRI databases and performed the qualitative assessment of the results. D.L., W.-J.M. and J.C.Y. wrote the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Lee, D., Moon, WJ. & Ye, J.C. Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks. Nat Mach Intell 2, 34–42 (2020). https://doi.org/10.1038/s42256-019-0137-x