The success of deep learning in analyzing bioimages comes at the expense of biologically meaningful interpretations. We review the state of the art of explainable artificial intelligence (XAI) in bioimaging and discuss its potential in hypothesis generation and data-driven discovery.
Acknowledgements
This research was supported by the Israeli Council for Higher Education (CHE) via the Data Science Research Center, Ben-Gurion University of the Negev, Israel (to A.Z.), and by the Rosetrees Trust (to A.Z.). We thank Nadav Rappoport, Meghan Driscoll, Orit Kliper-Gross and Kevin Dean for critically reading this Comment.
Author information
Contributions
O.R. and A.Z. conceived and wrote this Comment.
Ethics declarations
Competing interests
The authors declare no competing interests.
About this article
Cite this article
Rotem, O., Zaritsky, A. Visual interpretability of bioimaging deep learning models. Nat Methods 21, 1394–1397 (2024). https://doi.org/10.1038/s41592-024-02322-6