
  • Brief Communication

U-Net: deep learning for cell counting, detection, and morphometry

An Author Correction to this article was published on 25 February 2019


Abstract

U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows U-Net to be adapted to new tasks with only a few annotated samples.


Fig. 1: Pipelines of the U-Net software.
Fig. 2: Example applications of U-Net for 2D and 3D detection and segmentation.


Data availability

Datasets F1-MSC, F2-GOWT1, F3-SIM, F4-HeLa, DIC1-HeLa, PC1-U373 and PC2-PSC are from the ISBI Cell Tracking Challenge 2015 (ref. 17). Information on how to obtain the data can be found at http://celltrackingchallenge.net/datasets.html, and free registration for the challenge is currently required. Datasets PC3-HKPV, BF1-POL, BF2-PPL and BF3-MiSp are custom and are available from the corresponding author upon reasonable request. Datasets for the detection experiments partially contain unpublished sample-preparation protocols and are currently not freely available. After protocol publication, datasets will be made available on an as-requested basis. Details on sample preparation for our life science experiments can be found in Supplementary Note 3 and the Life Sciences Reporting Summary.

Change history

  • 25 February 2019

    In the version of this paper originally published, one of the affiliations for Dominic Mai was incorrect: "Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany" should have been "Life Imaging Center, Center for Biological Systems Analysis, Albert-Ludwigs-University, Freiburg, Germany." This change required some renumbering of subsequent author affiliations. These corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.

References

  1. Sommer, C., Strähle, C., Koethe, U. & Hamprecht, F. A. Ilastik: interactive learning and segmentation toolkit. in IEEE Int. Symp. Biomed. Imaging 230–233 (IEEE, Piscataway, NJ, USA, 2011).

  2. Arganda-Carreras, I. et al. Bioinformatics 33, 2424–2426 (2017).


  3. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015 Vol. 9351, 234–241 (Springer, Cham, Switzerland, 2015).

  4. Rusk, N. Nat. Methods 13, 35 (2016).


  5. Webb, S. Nature 554, 555–557 (2018).


  6. Sadanandan, S. K., Ranefall, P., Le Guyader, S. & Wählby, C. Sci. Rep. 7, 7860 (2017).


  7. Weigert, M. et al. Nat. Methods https://doi.org/10.1038/s41592-018-0216-7 (2018).

  8. Haberl, M. G. et al. Nat. Methods 15, 677–680 (2018).


  9. Ulman, V. et al. Nat. Methods 14, 1141–1152 (2017).


  10. Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. Nat. Methods 9, 671–675 (2012).


  11. Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) 3431–3440 (IEEE, Piscataway, NJ, USA, 2015).

  12. Simonyan, K. & Zisserman, A. Preprint at https://arxiv.org/abs/1409.1556 (2014).

  13. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016 Vol. 9901, 424–432 (Springer, Cham, Switzerland, 2016).

  14. Jia, Y. et al. Preprint at https://arxiv.org/abs/1408.5093 (2014).

  15. He, K., Zhang, X., Ren, S. & Sun, J. Preprint at https://arxiv.org/abs/1502.01852 (2015).

  16. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. & Zisserman, A. Int. J. Comput. Vis. 88, 303–338 (2010).


  17. Maška, M. et al. Bioinformatics 30, 1609–1617 (2014).



Acknowledgements

This work was supported by the German Federal Ministry for Education and Research (BMBF) through the MICROSYSTEMS project (0316185B) to T.F. and A.D.; the Bernstein Award 2012 (01GQ2301) to I.D.; the Federal Ministry for Economic Affairs and Energy (ZF4184101CR5) to A.B.; the Deutsche Forschungsgemeinschaft (DFG) through the collaborative research center KIDGEM (SFB 1140) to D.M., Ö.Ç., T.F. and O.R., and (SFB 746, INST 39/839,840,841) to K.P.; the Clusters of Excellence BIOSS (EXC 294) to T.F., D.M., R.B., A.A., Y.M., D.S., T.L.T., M.P., K.P., M.S., T.B. and O.R.; BrainLinks-Brain-Tools (EXC 1086) to Z.J., K.S., I.D. and T.B.; grants DI 1908/3-1 to J.D., DI 1908/6-1 to Z.J. and K.S., and DI 1908/7-1 to I.D.; the Swiss National Science Foundation (SNF grant 173880) to A.A.; the ERC Starting grant OptoMotorPath (338041) to I.D.; and the FENS-Kavli Network of Excellence (FKNE) to I.D. We thank F. Prósper, E. Bártová, V. Ulman, D. Svoboda, G. van Cappellen, S. Kumar, T. Becker and the Mitocheck consortium for providing a rich diversity of datasets through the ISBI segmentation challenge. We thank P. Fischer for manual image annotations. We thank S. Wrobel for tobacco microspore preparation.

Author information


Contributions

T.F., D.M., R.B., Y.M., Ö.Ç., T.B. and O.R. selected and designed the computational experiments. T.F., R.B., D.M., Y.M., A.B. and Ö.Ç. performed the experiments: R.B., D.M., Y.M. and A.B. (2D), and T.F. and Ö.Ç. (3D). R.B., Ö.Ç., A.A., T.F. and O.R. implemented the U-Net extensions in caffe. T.F. designed and implemented the Fiji plugin. D.S. and M.S. selected, prepared and recorded the keratinocyte dataset PC3-HKPV. T.F. and O.R. prepared the airborne-pollen dataset BF1-POL. A.D., S.W., O.T., C.D.B. and K.P. selected, prepared and recorded the protoplast and microspore datasets BF2-PPL and BF3-MiSp. T.L.T. and M.P. prepared, recorded and annotated the data for the microglial proliferation experiment. J.D., K.S. and Z.J. selected, prepared and recorded the optogenetic dataset. I.D., J.D. and Z.J. manually annotated the optogenetic dataset. I.D., T.F., D.M., R.B., Ö.Ç., T.B. and O.R. wrote the manuscript.

Corresponding author

Correspondence to Olaf Ronneberger.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1 The U-Net architecture, illustrated with a 2D cell segmentation network.

(Left) Input: an image tile of 540×540 pixels with C channels (blue box). (Right) Output: the K-class soft-max segmentation of 356×356 pixels (yellow box). Blocks show the computed feature hierarchy. Numbers atop each block: number of feature channels; numbers to the left of each block: spatial size of the feature maps in pixels. Yellow arrows: data flow.
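The layout in the figure can be summarized in a few dozen lines. Below is a minimal PyTorch sketch of the encoder–decoder structure; it is an illustration only, not the authors' Caffe implementation. The channel widths (64–1024), the unpadded ("valid") 3×3 convolutions, and the resulting shrink from a 540×540 input tile to a 356×356 output follow the figure and the original U-Net paper (ref. 3); the framework choice and all names are assumptions of this sketch.

```python
# Minimal U-Net sketch (illustration of Supplementary Fig. 1, NOT the
# authors' Caffe implementation). Valid (unpadded) 3x3 convolutions shrink
# each feature map, so encoder features are center-cropped before
# concatenation; a 540x540 input tile yields a 356x356 K-class output.
import torch
import torch.nn as nn

def block(c_in, c_out):
    # two valid 3x3 convolutions with ReLU, as in each U-Net level
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3), nn.ReLU(inplace=True))

def center_crop(t, h, w):
    # crop encoder features to the (smaller) decoder feature size
    dh, dw = (t.shape[2] - h) // 2, (t.shape[3] - w) // 2
    return t[:, :, dh:dh + h, dw:dw + w]

class UNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=64):
        super().__init__()
        ch = [base * 2 ** i for i in range(5)]        # 64,128,256,512,1024
        self.enc = nn.ModuleList([block(in_ch, ch[0])] +
                                 [block(ch[i], ch[i + 1]) for i in range(4)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(ch[i + 1], ch[i], 2, stride=2) for i in range(4)])
        self.dec = nn.ModuleList([block(2 * ch[i], ch[i]) for i in range(4)])
        self.head = nn.Conv2d(ch[0], n_classes, 1)    # K-class scores

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < 4:                                 # keep features for skips
                skips.append(x)
                x = self.pool(x)
        for i in range(3, -1, -1):                    # decoder, coarse to fine
            x = self.up[i](x)
            x = torch.cat([center_crop(skips[i], x.shape[2], x.shape[3]), x], 1)
            x = self.dec[i](x)
        return self.head(x)                           # softmax applied in the loss

# a 540x540 input tile maps to a 356x356 output, matching the figure
net = UNet()
out = net(torch.zeros(1, 1, 540, 540))
assert out.shape[-2:] == (356, 356)
```

Because the convolutions are unpadded, each encoder feature map is larger than its decoder counterpart and must be center-cropped before concatenation; this is what produces the 184-pixel difference between input and output tile sizes.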

Supplementary Figure 2 Separation of touching cells using pixel-wise loss weights.

(a) Generated segmentation mask with a one-pixel-wide background ridge between touching cells (white: foreground; black: background). (b) Map of pixel-wise loss weights that forces the network to learn the separation of touching cells.
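For readers who want to reproduce such a map, the following is a sketch of the weighting scheme from the original U-Net paper (ref. 3), where w(x) = w_c(x) + w0 · exp(−(d1(x) + d2(x))² / (2σ²)), with d1 and d2 the distances to the nearest and second-nearest cell. This is not the plugin's code; the instance-label input format, the uniform class-balancing term, and the parameter values w0 = 10 and σ = 5 px (the paper's suggestions) are assumptions of this illustration.

```python
# Sketch of a pixel-wise loss-weight map in the spirit of Supplementary
# Fig. 2, following Ronneberger et al. (2015): background pixels squeezed
# between two cells receive high weight, forcing the network to learn the
# separating ridge. `labels` (0 = background, 1..n = cell instances) is an
# assumed input format; w0 and sigma are the paper's suggested values.
import numpy as np
from scipy.ndimage import distance_transform_edt

def weight_map(labels, w0=10.0, sigma=5.0):
    ids = np.unique(labels)
    ids = ids[ids > 0]
    if len(ids) < 2:                       # need two cells for a border term
        return np.ones(labels.shape)
    # distance from every pixel to each cell (one plane per cell)
    dists = np.stack([distance_transform_edt(labels != i) for i in ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]            # nearest and second-nearest cell
    w = w0 * np.exp(-(d1 + d2) ** 2 / (2 * sigma ** 2))
    w[labels > 0] = 0                      # border term applies to background only
    # class-balancing term w_c simplified to a uniform weight of 1 here
    return w + 1.0
```

Note that this builds one full distance map per cell, which is simple but memory-hungry for crowded images; a production implementation would restrict the computation to a neighborhood around each background pixel.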

Supplementary Figure 3 Training data augmentation through random smooth elastic deformation.

(a) Upper left: raw image; upper right: labels; lower left: loss weights; lower right: 20-μm grid (for illustration purposes only). (b) Deformation field (black arrows) generated by bicubic interpolation from a coarse grid of displacement vectors (blue arrows; magnification: 5×). Vector components are drawn from a Gaussian distribution (σ = 10 px). (c) The images from (a) warped backward through the deformation field.
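A deformation of this kind can be sketched in a few lines of Python with SciPy. The displacement statistics (Gaussian, σ = 10 px) come from the legend; the grid spacing, the cubic-spline upsampling (approximating the bicubic interpolation described above), and all function names are assumptions of this illustration, not the caffe_unet implementation.

```python
# Sketch of the elastic deformation augmentation in Supplementary Fig. 3:
# random displacement vectors on a coarse grid are upsampled to a dense,
# smooth deformation field, and the image is sampled backward through it.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def elastic_deform(image, grid_spacing=100, sigma=10.0, seed=None):
    # image: 2D grayscale array; grid_spacing is an illustrative choice
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # coarse grid of random displacement vectors (one per grid node)
    gh, gw = h // grid_spacing + 2, w // grid_spacing + 2
    coarse = rng.normal(0.0, sigma, size=(2, gh, gw))
    # upsample to a dense deformation field (cubic spline ~ bicubic)
    dense = np.stack([zoom(c, ((h + 1) / gh, (w + 1) / gw), order=3)
                      for c in coarse])
    dy, dx = dense[0][:h, :w], dense[1][:h, :w]
    # backward warping: for each output pixel, look up the displaced source
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(image, [ys + dy, xs + dx], order=1, mode="reflect")
```

The same field must be applied to the raw image, the labels, and the loss weights, as shown in panel (c); for label images, nearest-neighbor sampling (order=0 in map_coordinates) keeps the values integer.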

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–3 and Supplementary Notes 1–3

Reporting Summary

Supplementary Software 1

Caffe_unet binary package (GPU version with cuDNN; recommended). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04, CUDA 8.0.61 (https://developer.nvidia.com/cuda-80-ga2-download-archive) and cuDNN 7.1.4 for CUDA 8.0 (https://developer.nvidia.com/rdp/cudnn-archive). At the time of publication, downloading cuDNN from the NVIDIA website required free registration as an NVIDIA developer.

Supplementary Software 2

Caffe_unet binary package (GPU version, no cuDNN). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04 and CUDA 8.0.61 (https://developer.nvidia.com/cuda-80-ga2-download-archive).

Supplementary Software 3

Caffe_unet binary package (CPU version). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04.

Supplementary Software 4

The source-code difference (patch file) against the open-source caffe deep-learning software (https://github.com/BVLC/caffe.git, commit d1208dbf313698de9ef70b3362c89cfddb51c520). Check out that commit and apply the patch with "git apply" to obtain the full source for custom builds.


About this article


Cite this article

Falk, T., Mai, D., Bensch, R. et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods 16, 67–70 (2019). https://doi.org/10.1038/s41592-018-0261-2

