Machine learning methods have proved powerful in particle physics, but without interpretability there is no guarantee the outcome of a learning algorithm is correct or robust. Christophe Grojean, Ayan Paul, Zhuoni Qian and Inga Strümke give an overview of how to introduce interpretability to methods commonly used in particle physics.
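As a concrete illustration of the kind of post-hoc interpretability the authors discuss, the sketch below attributes the predictions of a boosted-decision-tree classifier to its input features using Shapley values. This is a minimal, hypothetical example, assuming the Python xgboost and shap packages; the "kinematic" features are random toy stand-ins, not real collider data, and the variable names are illustrative only.

import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
n = 5000

# Toy event-level features (illustrative names only, not real collider data)
X = np.column_stack([
    rng.exponential(50.0, n),    # transverse momentum of the leading jet
    rng.uniform(-2.5, 2.5, n),   # pseudorapidity of the leading jet
    rng.exponential(100.0, n),   # invariant-mass proxy
])

# Toy signal/background label, correlated with the invariant-mass proxy
y = (X[:, 2] + rng.normal(0.0, 30.0, n) > 120.0).astype(int)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles, attributing
# each prediction to the individual input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

Ranking features by mean absolute SHAP value gives a global picture of what drives the classifier, which can then be checked against physics expectations.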

Acknowledgements
This work benefited from support by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy EXC 2121 "Quantum Universe" (390833306). The work of A.P. is funded by the Volkswagen Foundation within the initiative "Corona Crisis and Beyond: Perspectives for Science, Scholarship and Society". I.S. is grateful to the Norwegian Research Council for support through the EXAIGON project, Explainable AI systems for gradual industry adoption (grant no. 304843).
Ethics declarations
Competing interests
The authors declare no competing interests.
About this article
Cite this article
Grojean, C., Paul, A., Qian, Z. et al. Lessons on interpretable machine learning from particle physics. Nat Rev Phys 4, 284–286 (2022). https://doi.org/10.1038/s42254-022-00456-0