Abstract
Causal inference has recently attracted substantial attention in the machine learning and artificial intelligence community. It is usually positioned as a distinct strand of research that can broaden the scope of machine learning from predictive modelling to intervention and decision-making. In this Perspective, however, we argue that ideas from causality can also be used to improve the stronghold of machine learning, predictive modelling, if predictive stability, explainability and fairness are important. With the aim of bridging the gap between the tradition of precise modelling in causal inference and black-box approaches from machine learning, stable learning is proposed and developed as a source of common ground. This Perspective clarifies a source of risk for machine learning models and discusses the benefits of bringing causality into learning. We identify the fundamental problems addressed by stable learning, as well as the latest progress from both causal inference and learning perspectives, and we discuss relationships with explainability and fairness problems.
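To make the technical core concrete: one line of work on stable learning (stable prediction, Kuang et al., KDD 2018; stable learning via sample reweighting, Shen et al., AAAI 2020) learns per-sample weights under which the covariates are nearly uncorrelated, and then fits an ordinary weighted model on the reweighted sample, so that spurious training-time correlations contribute less to the fit. The following is a minimal sketch of that idea, not the authors' implementation; the function names, the exponential weight parameterization and the toy data are illustrative assumptions.

```python
# Minimal, hypothetical sketch of stable learning via sample reweighting:
# learn sample weights that decorrelate the covariates, then run weighted
# least squares on the reweighted sample.
import numpy as np
from scipy.optimize import minimize


def decorrelation_loss(theta, X):
    """Sum of squared off-diagonal entries of the weighted covariance
    matrix of X; theta parameterizes positive weights via exp."""
    n = X.shape[0]
    w = np.exp(theta)
    w = n * w / w.sum()                         # positive weights, mean 1
    mu = (w[:, None] * X).mean(axis=0)          # weighted mean
    Xc = X - mu
    cov = (w[:, None] * Xc).T @ Xc / n          # weighted covariance
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum()


def stable_linear_fit(X, y):
    """Learn decorrelating weights, then fit weighted least squares."""
    n = X.shape[0]
    # Finite-difference gradients keep the sketch short; a real
    # implementation would use an analytic or autodiff gradient.
    res = minimize(decorrelation_loss, x0=np.zeros(n), args=(X,),
                   method="L-BFGS-B")
    w = np.exp(res.x)
    w = n * w / w.sum()
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # solves X'WX beta = X'Wy
    return beta, w


# Toy usage: x2 carries no signal of its own but is spuriously
# correlated with the stable covariate x1 in the training sample.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=100)
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 0.1 * rng.normal(size=100)
beta, w = stable_linear_fit(X, y)
```

The learned weights drive the weighted correlation between x1 and x2 towards zero, so the downstream regression sees covariates that behave as if they were generated independently; the analyses cited in this Perspective (for example, Kuang et al. on stable prediction under model misspecification, and Xu et al. on why stable learning works) study when such decorrelation yields stability under agnostic distribution shift.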
Acknowledgements
Peng Cui’s research is supported by the National Key R&D Program of China (no. 2018AAA0102004), the National Natural Science Foundation of China (no. U1936219), the Beijing Academy of Artificial Intelligence (BAAI) and the Guoqiang Institute of Tsinghua University.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review information
Nature Machine Intelligence thanks Kush Varshney and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Cui, P. & Athey, S. Stable learning establishes some common ground between causal inference and machine learning. Nat. Mach. Intell. 4, 110–115 (2022). https://doi.org/10.1038/s42256-022-00445-z
This article is cited by
- Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals. npj Digital Medicine (2023)
- Big Data in Earth system science and progress towards a digital twin. Nature Reviews Earth & Environment (2023)
- Evaluation of digital resource service platform architecture based on machine learning. Soft Computing (2023)
- The insight of why: Causal inference in Earth system science. Science China Earth Sciences (2023)
- Feature importance measure of a multilayer perceptron based on the presingle-connection layer. Knowledge and Information Systems (2023)