
  • Analysis

The multimodality cell segmentation challenge: toward universal solutions

Abstract

Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyperparameters across different experimental settings. Here, we present a multimodality cell segmentation benchmark comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only outperforms existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustment. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
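
The algorithms were compared by instance-level segmentation accuracy (Figs. 3 and 4). As a minimal illustration of how such a score can be computed (the exact challenge implementation is not reproduced here; the label-image input format, the greedy matching, and the IoU threshold of 0.5 are assumptions based on common cell segmentation benchmark conventions), the following Python sketch computes an instance-level F1 score between a ground-truth and a predicted segmentation:

```python
import numpy as np

def instance_f1(gt: np.ndarray, pred: np.ndarray, iou_thr: float = 0.5) -> float:
    """Instance-level F1 between two label images (0 = background).

    A predicted cell is a true positive if it can be greedily matched
    to an unmatched ground-truth cell with IoU >= iou_thr.
    """
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    matched, tp = set(), 0
    for p in pred_ids:
        pmask = pred == p
        # Only ground-truth labels overlapping this prediction can match.
        for g in np.unique(gt[pmask]):
            if g == 0 or g in matched:
                continue
            gmask = gt == g
            inter = np.logical_and(pmask, gmask).sum()
            union = np.logical_or(pmask, gmask).sum()
            if inter / union >= iou_thr:
                matched.add(g)
                tp += 1
                break
    fp, fn = len(pred_ids) - tp, len(gt_ids) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```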


Fig. 1: Overview of the challenge task and pipeline.
Fig. 2: Dataset overview.
Fig. 3: Evaluation results of 28 algorithms on the holdout testing set.
Fig. 4: Quantitative and qualitative comparison between the top three algorithms and state-of-the-art (SOTA) generalist cell segmentation algorithms: KIT-GE (the top solution in the segmentation benchmark of the Cell Tracking Challenge (CTC)), Cellpose, Omnipose and their variants under different training strategies.


Data availability

The dataset is available on the challenge website at https://neurips22-cellseg.grand-challenge.org/. It is also available on Zenodo at https://zenodo.org/records/10719375 (ref. 64). Source data are provided with this paper.
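
For scripted access, the Zenodo record can be retrieved through Zenodo's public REST API. The sketch below is a minimal example of listing and downloading the record's files; the field names follow Zenodo's current API, and the record's file names are deliberately not hard-coded because they are not enumerated here:

```python
import requests

# Fetch record metadata from Zenodo's public REST API.
record = requests.get("https://zenodo.org/api/records/10719375", timeout=30).json()

for entry in record["files"]:
    name, url = entry["key"], entry["links"]["self"]
    print(f"downloading {name} ...")
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        with open(name, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
```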

Code availability

The top ten teams have made their code publicly available at https://neurips22-cellseg.grand-challenge.org/awards/. The code is also archived on Zenodo at https://zenodo.org/records/10718351.

References

  1. Jackson, H. W. et al. The single-cell pathology landscape of breast cancer. Nature 578, 615–620 (2020).


  2. Capolupo, L. et al. Sphingolipids control dermal fibroblast heterogeneity. Science 376, eabh1623 (2022).


  3. Lin, J.-R. et al. Multiplexed 3D atlas of state transitions and immune interaction in colorectal cancer. Cell 186, 363–381 (2023).


  4. Hollandi, R. et al. Nucleus segmentation: towards automated solutions. Trends Cell Biol. 32, 295–310 (2022).

  5. Greenwald, N. F. et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nat. Biotechnol. 40, 555–565 (2021).

  6. Lee, M. Y. et al. CellSeg: a robust, pre-trained nucleus segmentation and pixel quantification software for highly multiplexed fluorescence images. BMC Bioinform. 23, 1–17 (2022).


  7. Kempster, C. et al. Fully automated platelet differential interference contrast image analysis via deep learning. Sci. Rep. 12, 1–13 (2022).


  8. Cutler, K. J. et al. Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. Nat. Methods 19, 1438–1448 (2022).


  9. Bunk, D. et al. YeastMate: neural network-assisted segmentation of mating and budding events in Saccharomyces cerevisiae. Bioinformatics 38, 2667–2669 (2022).


  10. Dietler, N. et al. A convolutional neural network segments yeast microscopy images with high accuracy. Nat. Commun. 11, 1–8 (2020).


  11. Stringer, C., Wang, T., Michaelos, M. & Pachitariu, M. Cellpose: a generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106 (2021).


  12. Pachitariu, M. & Stringer, C. Cellpose 2.0: how to train your own model. Nat. Methods 19, 1634–1641 (2022).


  13. Ulman, V. et al. An objective comparison of cell-tracking algorithms. Nat. Methods 14, 1141–1152 (2017).


  14. Maška, M. et al. The cell tracking challenge: 10 years of objective benchmarking. Nat. Methods 20, 1010–1020 (2023).

  15. Caicedo, J. C. et al. Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat. Methods 16, 1247–1253 (2019).


  16. Graham, S. et al. CoNIC challenge: pushing the frontiers of nuclear detection, segmentation, classification and counting. Med. Image Anal. 92, 103047 (2024).


  17. Tajbakhsh, N. et al. Embracing imperfect datasets: a review of deep learning solutions for medical image segmentation. Med. Image Anal. 63, 101693 (2020).


  18. Ma, J. & Wang, B. Towards foundation models of biological image segmentation. Nat. Methods 20, 953–955 (2023).


  19. Gupta, A. et al. SegPC-2021: a challenge & dataset on segmentation of multiple myeloma plasma cells from microscopic images. Med. Image Anal. 83, 102677 (2023).


  20. Falk, T. et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat. Methods 16, 67–70 (2019).


  21. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K. & Yuille, A. L. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2017).


  22. Vaswani, A. et al. Attention is all you need. in Advances in Neural Information Processing Systems, vol. 30 (NeurIPS, 2017).

  23. Dosovitskiy, A. et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. in International Conference on Learning Representations (ICLR, 2021).

  24. Ma, J. et al. Segment anything in medical images. Nat. Commun. 15, 654 (2024).


  25. Lee, G., Kim, S., Kim, J. & Yun, S.-Y. MEDIAR: harmony of data-centric and model-centric for multi-modality microscopy. Proc. Mach. Learn. Res. 212, 1–16 (2023).

  26. Xie, E. et al. SegFormer: simple and efficient design for semantic segmentation with transformers. in Advances in Neural Information Processing Systems, vol. 34 (NeurIPS, 2021).

  27. Fan, T., Wang, G., Li, Y. & Wang, H. MA-Net: a multi-scale attention network for liver and tumor segmentation. IEEE Access 8, 179656–179665 (2020).


  28. Chaudhry, A., Gordo, A., Dokania, P., Torr, P. & Lopez-Paz, D. Using hindsight to anchor past knowledge in continual learning. in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pages 6993–7001 (AAAI, 2021).

  29. Lou, W. et al. Multi-stream cell segmentation with low-level cues for multi-modality images. Proc. Mach. Learn. Res. 212, 1–10 (2023).

  30. Liu, Z. et al. A ConvNet for the 2020s. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11976–11986 (IEEE, 2022).

  31. Upschulte, E., Harmeling, S., Amunts, K. & Dickscheid, T. Uncertainty-aware contour proposal networks for cell segmentation in multi-modality high-resolution microscopy images. Proc. Mach. Learn. Res. 212, 1–12 (2023).

  32. Xie, S., Girshick, R., Dollár, P., Tu, Z. & He, K. Aggregated residual transformations for deep neural networks. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1492–1500 (IEEE, 2017).

  33. Scherr, T., Löffler, K., Böhland, M. & Mikut, R. Cell segmentation and tracking using CNN-based distance predictions and a graph-based matching strategy. PLoS ONE 15, e0243219 (2020).


  34. Parisi, G. I., Kemker, R., Part, J. L., Kanan, C. & Wermter, S. Continual lifelong learning with neural networks: a review. Neural Netw. 113, 54–71 (2019).


  35. De Lange, M. et al. A continual learning survey: defying forgetting in classification tasks. IEEE Trans. Pattern Anal. Mach. Intell. 44, 3366–3385 (2021).


  36. Pena, F. A. G. et al. J regularization improves imbalanced multiclass segmentation. in IEEE 17th International Symposium on Biomedical Imaging, 1–5 (IEEE, 2020).

  37. Isensee, F., Jaeger, P. F., Kohl, S. A. A., Petersen, J. & Maier-Hein, K. H. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203–211 (2021).


  38. He, K., Gkioxari, G., Dollár, P. & Girshick, R. Mask R-CNN. in Proceedings of the IEEE International Conference on Computer Vision, 2961–2969 (IEEE, 2017).

  39. Wangkai, L. et al. MAUNet: modality-aware anti-ambiguity U-Net for multi-modality cell segmentation. Proc. Mach. Learn. Res. 212, 1–12 (2023).

  40. Bochkovskiy, A., Wang, C.-Y. & Liao, H.-Y. M. YOLOv4: optimal speed and accuracy of object detection. Preprint at arXiv https://doi.org/10.48550/arXiv.2004.10934 (2020).

  41. Jeong, J., Lee, S., Kim, J. & Kwak, N. Consistency-based semi-supervised learning for object detection. in Advances in Neural Information Processing Systems, vol. 32 (NeurIPS, 2019).

  42. Chen, S., Bortsova, G., Juárez, A.G.-U., Van Tulder, G. & De Bruijne, M. Multi-task attention-based semi-supervised learning for medical image segmentation. in Medical Image Computing and Computer Assisted Intervention, 457–465 (MICCAI, 2019).

  43. Liu, Y.-C., Ma, C.-Y. & Kira, Z. Unbiased teacher v2: semi-supervised object detection for anchor-free and anchor-based detectors. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9819–9828 (IEEE, 2022).

  44. Sofroniew, N. et al. napari: a multi-dimensional image viewer for Python. Zenodo https://doi.org/10.5281/zenodo.3555620 (2022).

  45. Risom, T. et al. Transition to invasive breast cancer is associated with progressive changes in the structure and composition of tumor stroma. Cell 185, 299–310 (2022).


  46. Fu, S. et al. Field-dependent deep learning enables high-throughput whole-cell 3D super-resolution imaging. Nat. Methods 20, 459–468 (2023).


  47. Misra, D. Mish: a self-regularized non-monotonic activation function. in British Machine Vision Conference (2020).

  48. Edlund, C. et al. LIVECell—a large-scale dataset for label-free live cell segmentation. Nat. Methods 18, 1038–1045 (2021).


  49. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778 (IEEE, 2016).

  50. Schmidt, U., Weigert, M., Broaddus, C. & Myers, G. Cell detection with star-convex polygons. in Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 265–273 (MICCAI, 2018).

  51. Ren, S., He, K., Girshick, R. & Sun, J. Faster R-CNN: towards real-time object detection with region proposal networks. in Advances in Neural Information Processing Systems, vol. 28 (NeurIPS, 2015).

  52. Graham, S. et al. HoVer-Net: simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med. Image Anal. 58, 101563 (2019).


  53. Upschulte, E., Harmeling, S., Amunts, K. & Dickscheid, T. Contour proposal networks for biomedical instance segmentation. Med. Image Anal. 77, 102371 (2022).


  54. Kuhl, F. P. & Giardina, C. R. Elliptic Fourier features of a closed contour. Comput. Graph. Image Process. 18, 236–258 (1982).


  55. Rezatofighi, H. et al. Generalized intersection over union: a metric and a loss for bounding box regression. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2019).

  56. Lee, Y. et al. Localization uncertainty estimation for anchor-free object detection. in Computer Vision – ECCV 2022 Workshops, 27–42 (ECCV, 2023).

  57. Adams, R. & Bischof, L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 16, 641–647 (1994).


  58. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. in International Conference on Medical Image Computing and Computer-assisted Intervention, 234–241 (MICCAI, 2015).

  59. Maier-Hein, L. et al. Metrics reloaded: recommendations for image analysis validation. Nat. Methods https://doi.org/10.1038/s41592-023-02151-z (2024).

  60. Hirling, D. et al. Segmentation metric misinterpretations in bioimage analysis. Nat. Methods https://doi.org/10.1038/s41592-023-01942-8 (2023).

  61. Maier-Hein, L. et al. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9, 5217 (2018).


  62. Wiesenfarth, M. et al. Methods and open-source toolkit for analyzing and visualizing challenge results. Sci. Rep. 11, 1–15 (2021).


  63. Kendall, M. G. A new measure of rank correlation. Biometrika 30, 81–93 (1938).


  64. Ma, J. et al. NeurIPS 2022 Cell Segmentation Competition Dataset. Zenodo https://doi.org/10.5281/zenodo.10719375 (2024).


Acknowledgements

This work was supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2020-06189 and DGECR-2020-00294) and CIFAR AI Chair programs. This research was enabled, in part, by computing resources provided by the Digital Research Alliance of Canada. We thank P. Byrne, M. Kost-Alimova, S. Singh and A.E. Carpenter for contributing U2OS and adipocyte images. We thank A.J. Radtke and R. Germain for contributing adenoid and tonsil whole-slide fluorescent images. We thank S. Banerjee for providing multiple myeloma plasma cell annotations in stained brightfield images. The platelet DIC images collected by C. Kempster and A. Pollitt were supported by the British Heart Foundation/NC3Rs (NC/S001441/1) grant. A.G. thanks the Department of Science and Technology, Government of India for the SERB-POWER fellowship (grant no. SPF/2021/000209) and the Infosys Centre for AI, IIIT-Delhi for the financial support to run this challenge. M.L., V.G., M.S. and S.J.R. were supported by SNSF grants CRSK-3_190526 and 310030_204938 awarded to S.J.R. E.U. and T.D. received funding from Priority Program 2041 (SPP 2041) ‘Computational Connectomics’ of the German Research Foundation and the Helmholtz Association’s Initiative and Networking Fund through the Helmholtz International BigBrain Analytics and Learning Laboratory under the Helmholtz International Laboratory grant agreement InterLabs-0015. The authors gratefully acknowledge the computing time granted through JARA on the supercomputer JURECA at Forschungszentrum Jülich. We also thank the grand-challenge platform for hosting the competition.

Author information


Contributions

J.M. conceived and designed the analysis, collected and cleaned the data, contributed analysis tools, managed challenge registration and evaluation, performed the analysis, wrote the initial manuscript and revised the manuscript; R.X. conceived and designed the analysis, managed challenge registration and evaluation and revised the manuscript. S.A., C.G., A.G., R.G., S.G. and Y.Z. conceived and designed the analysis, cleaned data, contributed labeled images, managed challenge registration and evaluation, performed the analysis and revised the manuscript. G.L., J.K., W.L., H.L., E.U. and T.D. participated in the challenge, developed the top-three algorithms and made the code publicly available. J.G.A., Y.W., L.H. and X.Y. cleaned data, contributed labeled images and managed challenge registration and evaluation. M.L., V.G., M.S., S.J.R., C.K., A.P., L.E., T.M., J.M.M. and J.-N.E. contributed new labeled data to the competition. W.L., Z.L., X.C. and B.B. participated in the challenge, developed algorithms and made the code publicly available. N.F.G., D.V.V., E.W., B.A.C. and O.B. contributed public or unlabeled data to the competition. T.C. managed the challenge registration and evaluation. G.D.B. and B.W. conceived and designed the analysis and wrote and revised the manuscript.

Corresponding author

Correspondence to Bo Wang.

Ethics declarations

Competing interests

S.G. is employed by Nanjing Anke Medical Technology Co. J.M.M. and J.-N.E. are co-owners of Cancilico. D.V.V. is a co-founder and chief scientist of Barrier Biosciences and holds equity in the company. O.B. declares the following competing financial interests: consultancy fees from Novartis, Sanofi and Amgen, outside the submitted work; research grants from Pfizer and Gilead Sciences, outside the submitted work; and stock ownership (Hematoscope) outside the submitted work. All other authors declare no competing interests.

Peer review

Peer review information

Nature Methods thanks Yi Wang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Rita Strack, in collaboration with the Nature Methods team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Example segmentation results for four microscopy image modalities.

Brightfield (the 1st-2nd rows), fluorescent (the 3rd-4th rows), phase-contrast (the 5th-6th rows), and DIC images (the 7th-8th rows). Cellpose-pretrain: Cellpose pretrained model (‘cyto2’). Cellpose-scratch: Cellpose model trained from scratch on the challenge dataset. Omnipose-pretrain: Omnipose pretrained model (‘cyto2’). Omnipose-scratch: Omnipose model trained from scratch on the challenge dataset.

Extended Data Fig. 2 Example segmentation results for the post-challenge testing images.

Cellpose-pretrain: Cellpose pretrained model (‘cyto2’). Cellpose-scratch: Cellpose model trained from scratch on the challenge dataset. Cellpose-finetune: Cellpose fine-tuned model on the challenge dataset. Omnipose-pretrain: Omnipose pretrained model (‘cyto2’). Omnipose-scratch: Omnipose model trained from scratch on the challenge dataset. Omnipose-finetune: Omnipose fine-tuned model on the challenge dataset.
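
For context, the Cellpose baselines named above ('pretrain', 'scratch' and 'finetune') differ only in how the model weights are obtained; inference is identical in each case. A minimal sketch of running the pretrained 'cyto2' model with the standard Cellpose Python API (versions 1.x-3.x) follows; the input file name and channel settings are illustrative assumptions:

```python
from cellpose import io, models

# Load a microscopy image (hypothetical file name).
img = io.imread("example_cell_image.png")

# Cellpose-pretrain: the generalist 'cyto2' model, used as-is.
model = models.Cellpose(gpu=False, model_type="cyto2")

# channels=[0, 0] treats the input as single-channel grayscale;
# diameter=None asks Cellpose to estimate the cell diameter itself.
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

print(f"detected {masks.max()} cells")  # masks is an instance label image
```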

Extended Data Fig. 3 Statistics of image size.

a, Distribution of image size across training (labeled (n=1000 independent images) and unlabeled (n=1725 independent images)), tuning (n=101 independent images), and testing sets (n=422 independent images). b, Distribution of image width and height across training (labeled (n=1000 independent images) and unlabeled (n=1725 independent images)), tuning (n=101 independent images), and testing sets (n=422 independent images).
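
A minimal sketch of how such per-split size statistics can be tabulated from image folders follows; the folder names and file extension are hypothetical:

```python
from pathlib import Path
import imageio.v3 as iio

def size_stats(folder: str) -> None:
    """Print the image-size range for every image in a folder."""
    widths, heights = [], []
    for path in sorted(Path(folder).glob("*.tif")):
        h, w = iio.imread(path).shape[:2]
        widths.append(w)
        heights.append(h)
    if not widths:
        print(f"{folder}: no images found")
        return
    print(f"{folder}: n={len(widths)}, "
          f"width {min(widths)}-{max(widths)} px, "
          f"height {min(heights)}-{max(heights)} px")

# Hypothetical folder layout for the four dataset splits.
for split in ["train_labeled", "train_unlabeled", "tuning", "testing"]:
    size_stats(split)
```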


Source data

Statistical source data are provided for Figs. 2c, 2d, 2f, 2g, 3a, 3b, 3c,d, 3e, 3f, 4a, 4b, 4c, 4d, 4e and 4g, and for Extended Data Figs. 3a and 3b.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ma, J., Xie, R., Ayyadhury, S. et al. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods (2024). https://doi.org/10.1038/s41592-024-02233-6

