
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation

Abstract

Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing, for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions, on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
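The self-configuration described above can be illustrated with a minimal, hypothetical sketch: a dataset "fingerprint" extracted from the training cases is mapped through heuristic rules to a pipeline configuration. The class names, rules and thresholds below are illustrative assumptions for exposition only, not the actual values or code used by nnU-Net.

```python
# Schematic illustration (not nnU-Net's implementation) of rule-based
# self-configuration: dataset properties -> pipeline configuration.
from dataclasses import dataclass
from typing import List


@dataclass
class DataFingerprint:
    median_spacing: List[float]  # voxel spacing (mm) per axis, median over cases
    median_shape: List[int]      # median image shape after resampling
    modality: str                # e.g. "CT" or "MRI" (assumed labels)


def estimated_memory_gb(patch_size: List[int]) -> float:
    # Crude stand-in for a memory estimate of a 3D U-Net pass; constant is illustrative.
    voxels = 1
    for s in patch_size:
        voxels *= s
    return voxels * 4e-8


def configure_pipeline(fp: DataFingerprint) -> dict:
    """Derive a segmentation pipeline configuration from dataset properties."""
    # Rule 1 (assumed): resample all cases to the dataset's median voxel spacing.
    target_spacing = fp.median_spacing

    # Rule 2 (assumed): choose intensity normalization based on modality.
    normalization = "ct_clip_zscore" if fp.modality == "CT" else "zscore"

    # Rule 3 (assumed): start from the median image shape and shrink the patch
    # size until a hypothetical GPU memory budget is met.
    patch_size = list(fp.median_shape)
    while estimated_memory_gb(patch_size) > 8.0:
        largest_axis = patch_size.index(max(patch_size))
        patch_size[largest_axis] = max(32, patch_size[largest_axis] // 2)

    return {
        "target_spacing": target_spacing,
        "normalization": normalization,
        "patch_size": patch_size,
    }
```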


Fig. 1: nnU-Net handles a broad variety of datasets and target image properties.
Fig. 2: Proposed automated method configuration for deep learning-based biomedical image segmentation.
Fig. 3: nnU-Net outperforms most specialized deep learning pipelines.
Fig. 4: Pipeline fingerprints from KiTS 2019 leaderboard entries.
Fig. 5: Data fingerprints across different challenge datasets.
Fig. 6: Evaluation of design decisions across multiple tasks.


Data availability

All 23 datasets used in this study are publicly available and can be accessed via their respective challenge websites as follows. D1–D10 Medical Segmentation Decathlon, http://medicaldecathlon.com/; D11 Beyond the Cranial Vault (BCV)-Abdomen, https://www.synapse.org/#!Synapse:syn3193805/wiki/; D12 PROMISE12, https://promise12.grand-challenge.org/; D13 ACDC, https://acdc.creatis.insa-lyon.fr/; D14 LiTS, https://competitions.codalab.org/competitions/17094; D15 MSLes, https://smart-stats-tools.org/lesion-challenge; D16 CHAOS, https://chaos.grand-challenge.org/; D17 KiTS, https://kits19.grand-challenge.org/; D18 SegTHOR, https://competitions.codalab.org/competitions/21145; D19 CREMI, https://cremi.org/; D20–D23 Cell Tracking Challenge, http://celltrackingchallenge.net/.

Code availability

The nnU-Net repository is available as Supplementary Software. Updated versions can be found at https://github.com/mic-dkfz/nnunet. Pretrained models for all datasets used in this study are available for download at https://zenodo.org/record/3734294.
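As a usage sketch, the released tool is driven through command-line entry points; the names below (nnUNet_plan_and_preprocess, nnUNet_train, nnUNet_predict) follow the repository's documentation at the time of writing, and the task identifier, fold and paths are placeholders that may differ between versions.

```python
# Hypothetical end-to-end invocation of the released tool from Python.
import subprocess

task_id = "005"  # placeholder task number for a dataset converted to the expected format

# 1. Extract the dataset fingerprint and create the pipeline configuration.
subprocess.run(["nnUNet_plan_and_preprocess", "-t", task_id], check=True)

# 2. Train the 3D full-resolution configuration on fold 0 of the cross-validation.
subprocess.run(["nnUNet_train", "3d_fullres", "nnUNetTrainerV2", task_id, "0"], check=True)

# 3. Predict segmentations for new images with the trained model.
subprocess.run([
    "nnUNet_predict",
    "-i", "/path/to/input_images",
    "-o", "/path/to/output_segmentations",
    "-t", task_id,
    "-m", "3d_fullres",
], check=True)
```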


Acknowledgements

This work was co-funded by the National Center for Tumor Diseases (NCT) in Heidelberg and the Helmholtz Imaging Platform (HIP). We thank our colleagues at DKFZ who were involved in the various challenge contributions, especially A. Klein, D. Zimmerer, J. Wasserthal, G. Koehler, T. Norajitra and S. Wirkert, who contributed to the Decathlon submission. We also thank the MITK team, which supported us in producing all medical dataset visualizations, and all the challenge organizers, whose work provided an important basis for ours. We especially mention N. Heller, who enabled the collection of all the details from the KiTS challenge through excellent challenge design; E. Kavur from the CHAOS team, who generated comprehensive leaderboard information for us; C. Petitjean, who provided detailed leaderboard information on the SegTHOR entries from ISBI 2019; and M. Maška, who patiently supported us during our Cell Tracking Challenge submission. We thank M. Wiesenfarth for his helpful advice concerning the ranking of methods and the visualization of rankings. We further thank C. Pape and T. Wollman for their crucial introductions to the CREMI and Cell Tracking Challenges, respectively. Last but not least, we thank O. Ronneberger and L. Maier-Hein for their important feedback on this manuscript.

Author information


Contributions

F.I. and P.F.J. conceptualized the method and planned the experiments with the help of S.A.A.K., J.P. and K.H.M.-H. F.I. implemented and configured nnU-Net and conducted the experiments on the 23 selected datasets. F.I. and P.F.J. analyzed the results and performed the KiTS analysis. P.F.J., S.A.A.K. and K.H.M.-H. conceived the communication and presentation of the method. P.F.J. designed and created the figures. P.F.J., F.I. and K.H.M.-H. wrote the paper with contributions from J.P. and S.A.A.K. K.H.M.-H. managed and coordinated the overall project. S.A.A.K. started work on this research as a PhD student at the German Cancer Research Center.

Corresponding author

Correspondence to Klaus H. Maier-Hein.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Rita Strack was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Notes 1–9

Reporting Summary

Supplementary Software

Supplementary Software 1


About this article


Cite this article

Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 18, 203–211 (2021). https://doi.org/10.1038/s41592-020-01008-z

