Abstract

U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurement in biomedical image data. We present an ImageJ plugin that enables researchers without machine-learning expertise to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows U-Net to be adapted to new tasks on the basis of a few annotated samples.

Data availability

Datasets F1-MSC, F2-GOWT1, F3-SIM, F4-HeLa, DIC1-HeLa, PC1-U373 and PC2-PSC are from the ISBI Cell Tracking Challenge 2015 (ref. 17). Information on how to obtain the data can be found at http://celltrackingchallenge.net/datasets.html; free registration for the challenge is currently required. Datasets PC3-HKPV, BF1-POL, BF2-PPL and BF3-MiSp are custom datasets and are available from the corresponding author upon reasonable request. The datasets for the detection experiments rely in part on unpublished sample-preparation protocols and are therefore not yet freely available; after protocol publication, they will be made available upon request. Details on sample preparation for our life-science experiments can be found in Supplementary Note 3 and the Life Sciences Reporting Summary.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Sommer, C., Strähle, C., Koethe, U. & Hamprecht, F. A. Ilastik: interactive learning and segmentation toolkit. in IEEE Int. Symp. Biomed. Imaging 230–233 (IEEE, Piscataway, NJ, USA, 2011).

  2. Arganda-Carreras, I. et al. Bioinformatics 33, 2424–2426 (2017).

  3. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015 Vol. 9351, 234–241 (Springer, Cham, Switzerland, 2015).

  4. Rusk, N. Nat. Methods 13, 35 (2016).

  5. Webb, S. Nature 554, 555–557 (2018).

  6. Sadanandan, S. K., Ranefall, P., Le Guyader, S. & Wählby, C. Sci. Rep. 7, 7860 (2017).

  7. Weigert, M. et al. Nat. Methods https://doi.org/10.1038/s41592-018-0216-7 (2018).

  8. Haberl, M. G. et al. Nat. Methods 15, 677–680 (2018).

  9. Ulman, V. et al. Nat. Methods 14, 1141–1152 (2017).

  10. Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. Nat. Methods 9, 671–675 (2012).

  11. Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) 3431–3440 (IEEE, Piscataway, NJ, USA, 2015).

  12. Simonyan, K. & Zisserman, A. Preprint at https://arxiv.org/abs/1409.1556 (2014).

  13. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016 Vol. 9901, 424–432 (Springer, Cham, Switzerland, 2016).

  14. Jia, Y. et al. Preprint at https://arxiv.org/abs/1408.5093 (2014).

  15. He, K., Zhang, X., Ren, S. & Sun, J. Preprint at https://arxiv.org/abs/1502.01852 (2015).

  16. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. & Zisserman, A. Int. J. Comput. Vis. 88, 303–338 (2010).

  17. Maška, M. et al. Bioinformatics 30, 1609–1617 (2014).

Acknowledgements

This work was supported by the German Federal Ministry for Education and Research (BMBF) through the MICROSYSTEMS project (0316185B) to T.F. and A.D.; the Bernstein Award 2012 (01GQ2301) to I.D.; the Federal Ministry for Economic Affairs and Energy (ZF4184101CR5) to A.B.; the Deutsche Forschungsgemeinschaft (DFG) through the collaborative research center KIDGEM (SFB 1140) to D.M., Ö.Ç., T.F. and O.R., and (SFB 746, INST 39/839,840,841) to K.P.; the Clusters of Excellence BIOSS (EXC 294) to T.F., D.M., R.B., A.A., Y.M., D.S., T.L.T., M.P., K.P., M.S., T.B. and O.R.; BrainLinks-Brain-Tools (EXC 1086) to Z.J., K.S., I.D. and T.B.; grants DI 1908/3-1 to J.D., DI 1908/6-1 to Z.J. and K.S., and DI 1908/7-1 to I.D.; the Swiss National Science Foundation (SNF grant 173880) to A.A.; the ERC Starting grant OptoMotorPath (338041) to I.D.; and the FENS-Kavli Network of Excellence (FKNE) to I.D. We thank F. Prósper, E. Bártová, V. Ulman, D. Svoboda, G. van Cappellen, S. Kumar, T. Becker and the Mitocheck consortium for providing a rich diversity of datasets through the ISBI segmentation challenge. We thank P. Fischer for manual image annotations. We thank S. Wrobel for tobacco microspore preparation.

Author information

Author notes

    • Dominic Mai
      Present address: SICK AG, Waldkirch, Germany

    • Robert Bensch
      Present address: ANavS GmbH, München, Germany

    • Alexander Dovzhenko, Olaf Tietz & Sean Walsh
      Present address: ScreenSYS GmbH, Freiburg, Germany

    • Olaf Ronneberger
      Present address: DeepMind, London, UK

  1. These authors contributed equally: Thorsten Falk, Dominic Mai, Robert Bensch.

Affiliations

  1. Department of Computer Science, Albert-Ludwigs-University, Freiburg, Germany
     Thorsten Falk, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, Thomas Brox & Olaf Ronneberger

  2. BIOSS Centre for Biological Signalling Studies, Freiburg, Germany
     Thorsten Falk, Dominic Mai, Robert Bensch, Yassine Marrakchi, Deniz Saltukoglu, Marco Prinz, Klaus Palme, Matias Simons, Thomas Brox & Olaf Ronneberger

  3. CIBSS Centre for Integrative Biological Signalling Studies, Albert-Ludwigs-University, Freiburg, Germany
     Thorsten Falk, Yassine Marrakchi, Marco Prinz & Thomas Brox

  4. Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany
     Dominic Mai, Deniz Saltukoglu & Matias Simons

  5. University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
     Ahmed Abdulkadir

  6. Optophysiology Lab, Institute of Biology III, Albert-Ludwigs-University, Freiburg, Germany
     Jan Deubner, Zoe Jäckel, Katharina Seiwald & Ilka Diester

  7. BrainLinks-BrainTools, Albert-Ludwigs-University, Freiburg, Germany
     Jan Deubner, Zoe Jäckel, Tuan Leng Tay, Ilka Diester & Thomas Brox

  8. Institute of Biology II, Albert-Ludwigs-University, Freiburg, Germany
     Alexander Dovzhenko, Olaf Tietz, Cristina Dal Bosco, Sean Walsh & Klaus Palme

  9. Renal Division, University Medical Centre, Freiburg, Germany
     Deniz Saltukoglu & Matias Simons

  10. Spemann Graduate School of Biology and Medicine (SGBM), Albert-Ludwigs-University, Freiburg, Germany
      Deniz Saltukoglu

  11. Institute of Neuropathology, University Medical Centre, Freiburg, Germany
      Tuan Leng Tay & Marco Prinz

  12. Institute of Biology I, Albert-Ludwigs-University, Freiburg, Germany
      Tuan Leng Tay

  13. Paris Descartes University-Sorbonne Paris Cité, Imagine Institute, Paris, France
      Matias Simons

  14. Bernstein Center Freiburg, Albert-Ludwigs-University, Freiburg, Germany
      Ilka Diester

Authors

Thorsten Falk, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, Jan Deubner, Zoe Jäckel, Katharina Seiwald, Alexander Dovzhenko, Olaf Tietz, Cristina Dal Bosco, Sean Walsh, Deniz Saltukoglu, Tuan Leng Tay, Marco Prinz, Klaus Palme, Matias Simons, Ilka Diester, Thomas Brox & Olaf Ronneberger

Contributions

T.F., D.M., R.B., Y.M., Ö.Ç., T.B. and O.R. selected and designed the computational experiments. T.F., R.B., D.M., Y.M., A.B. and Ö.Ç. performed the experiments: R.B., D.M., Y.M. and A.B. (2D), and T.F. and Ö.Ç. (3D). R.B., Ö.Ç., A.A., T.F. and O.R. integrated the U-Net extensions into caffe. T.F. designed and implemented the Fiji plugin. D.S. and M.S. selected, prepared and recorded the keratinocyte dataset PC3-HKPV. T.F. and O.R. prepared the airborne-pollen dataset BF1-POL. A.D., S.W., O.T., C.D.B. and K.P. selected, prepared and recorded the protoplast and microspore datasets BF2-PPL and BF3-MiSp. T.L.T. and M.P. prepared, recorded and annotated the data for the microglial proliferation experiment. J.D., K.S. and Z.J. selected, prepared and recorded the optogenetic dataset. I.D., J.D. and Z.J. manually annotated the optogenetic dataset. I.D., T.F., D.M., R.B., Ö.Ç., T.B. and O.R. wrote the manuscript.

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Olaf Ronneberger.

Integrated supplementary information

  1. Supplementary Figure 1 The U-Net architecture, shown for the example of a 2D cell segmentation network.

    (left) Input: an image tile of 540 × 540 pixels with C channels (blue box). (right) Output: the K-class soft-max segmentation of 356 × 356 pixels (yellow box). Blocks show the computed feature hierarchy. Numbers atop each block: number of feature channels; numbers to the left of each block: spatial feature-map shape in pixels. Yellow arrows: data flow. The relation between input and output tile sizes is determined by the unpadded convolutions (see the first code sketch after this list).

  2. Supplementary Figure 2 Separation of touching cells by using pixel-wise loss weights.

    (a) Generated segmentation mask with a one-pixel-wide background ridge between touching cells (white: foreground; black: background). (b) Map of pixel-wise loss weights that force the network to separate touching cells (see the second code sketch after this list).

  3. Supplementary Figure 3 Training-data augmentation through random smooth elastic deformation.

    (a) Upper left: raw image; upper right: labels; lower left: loss weights; lower right: 20-μm grid (for illustration purposes only). (b) Deformation field (black arrows) generated by bicubic interpolation from a coarse grid of displacement vectors (blue arrows; magnification: 5×). Vector components are drawn from a Gaussian distribution (σ = 10 px). (c) Backward-warped versions of the images in (a) using the deformation field (see the third code sketch after this list).
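
The input and output tile sizes quoted in Supplementary Figure 1 follow directly from the unpadded (valid) convolutions of the U-Net architecture (ref. 3). The following minimal Python sketch traces a tile through the standard layout, assuming two valid 3 × 3 convolutions per resolution level and four 2 × 2 max-pooling/up-convolution steps; the function name is ours, and the code is illustrative rather than the plugin's implementation.

```python
def unet_output_size(in_size: int, depth: int = 4) -> int:
    """Trace the spatial size of a tile through a U-Net built from valid
    (unpadded) 3x3 convolutions, 2x2 max-pooling and 2x2 up-convolutions."""
    size = in_size
    for _ in range(depth):   # contracting path
        size -= 4            # two valid 3x3 convolutions shrink by 2 px each
        assert size % 2 == 0, "feature map must be even before 2x2 pooling"
        size //= 2           # 2x2 max-pooling halves the resolution
    size -= 4                # two convolutions at the bottleneck
    for _ in range(depth):   # expanding path
        size = size * 2 - 4  # 2x2 up-convolution, then two 3x3 convolutions
    assert size > 0, "input tile too small for this network depth"
    return size

print(unet_output_size(540))  # prints 356, matching Supplementary Fig. 1
```

Because all convolutions are unpadded, only input sizes whose intermediate feature maps stay even are admissible; 540 × 540 → 356 × 356 is one such pair.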
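
A weight map like the one in Supplementary Figure 2b can be computed along the lines of the weighting scheme proposed in ref. 3, in which background pixels between two cells receive an extra weight of w0·exp(−(d1 + d2)²/(2σ²)), where d1 and d2 are the distances to the nearest and second-nearest cell. The sketch below is our illustrative rendering under those assumptions (the defaults w0 = 10 and σ = 5 px follow ref. 3; class-balancing weights are omitted for brevity), not the plugin's actual implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, label

def separation_weights(mask, w0=10.0, sigma=5.0):
    """Pixel-wise loss weights emphasising thin background ridges between
    touching cells (mask: 2D array, 0 = background, >0 = foreground)."""
    labels, n = label(mask > 0)
    if n < 2:                    # fewer than two cells: nothing to separate
        return np.ones(mask.shape)
    # Distance of every pixel to each individual cell instance.
    dists = np.stack([distance_transform_edt(labels != i)
                      for i in range(1, n + 1)])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]  # nearest and second-nearest cell
    weights = 1.0 + w0 * np.exp(-(d1 + d2) ** 2 / (2.0 * sigma ** 2))
    weights[labels > 0] = 1.0    # extra weight only on background pixels
    return weights
```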
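
The augmentation of Supplementary Figure 3 amounts to drawing displacement vectors on a coarse grid from a Gaussian distribution (σ = 10 px, as stated in the legend), interpolating them to a dense smooth field and backward-warping the image. In the sketch below, the grid spacing of 32 px is our assumption (the legend does not state one), and scipy's cubic-spline zoom stands in for the bicubic interpolation described; labels and weight maps would be warped with the same field.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def elastic_deform(image, grid_spacing=32, sigma=10.0, rng=None):
    """Random smooth elastic deformation: Gaussian displacement vectors on a
    coarse grid, upsampled to a dense field and applied by backward warping."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # Coarse grid of random displacement vectors; components ~ N(0, sigma).
    coarse = rng.normal(0.0, sigma,
                        size=(2, h // grid_spacing + 2, w // grid_spacing + 2))
    # Upsample each component to a dense, smooth field (order=3: cubic spline).
    dense = np.stack([zoom(c, (h / c.shape[0], w / c.shape[1]), order=3)
                      for c in coarse])
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward warping: sample the source image at the displaced coordinates.
    return map_coordinates(image, [ys + dense[0], xs + dense[1]],
                           order=3, mode="reflect")
```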

Supplementary information

  1. Supplementary Text and Figures

    Supplementary Figures 1–3 and Supplementary Notes 1–3

  2. Reporting Summary

  3. Supplementary Software 1

    Caffe_unet binary package (GPU version with cuDNN; recommended). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04, CUDA 8.0.61 (https://developer.nvidia.com/cuda-80-ga2-download-archive) and cuDNN 7.1.4 for CUDA 8.0 (https://developer.nvidia.com/rdp/cudnn-archive). At the time of publication, downloading cuDNN from the NVIDIA website requires free registration as an NVIDIA developer

  4. Supplementary Software 2

    Caffe_unet binary package (GPU version without cuDNN). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04 and CUDA 8.0.61 (https://developer.nvidia.com/cuda-80-ga2-download-archive)

  5. Supplementary Software 3

    Caffe_unet binary package (CPU version). Pre-compiled binary version of the caffe_unet backend software for Ubuntu 16.04

  6. Supplementary Software 4

    The source-code difference (patch file) relative to the open-source caffe deep-learning software (https://github.com/BVLC/caffe.git, commit hash d1208dbf313698de9ef70b3362c89cfddb51c520). Check out the corresponding commit and apply the patch using “git apply” to obtain the full source for custom builds; a sketch of the procedure follows below
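
For custom builds, the steps described above amount to cloning caffe, checking out the pinned commit and applying the patch. A minimal Python sketch of this procedure follows; the patch file name "caffe_unet.patch" is a placeholder for the file actually shipped with Supplementary Software 4.

```python
import subprocess

# Commit hash pinned in Supplementary Software 4.
COMMIT = "d1208dbf313698de9ef70b3362c89cfddb51c520"

# Clone the upstream caffe repository and check out the pinned commit.
subprocess.run(["git", "clone", "https://github.com/BVLC/caffe.git"], check=True)
subprocess.run(["git", "checkout", COMMIT], cwd="caffe", check=True)

# Apply the U-Net patch (placeholder file name) to obtain the full source.
subprocess.run(["git", "apply", "../caffe_unet.patch"], cwd="caffe", check=True)
```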

About this article

Publication history

Received

Accepted

Published

DOI

https://doi.org/10.1038/s41592-018-0261-2