
The imperative of interpretable machines

As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.





Corresponding authors

Correspondence to Julia Stoyanovich, Jay J. Van Bavel or Tessa V. West.

Ethics declarations

Competing interests

The authors declare no competing interests.


Cite this article

Stoyanovich, J., Van Bavel, J.J. & West, T. The imperative of interpretable machines. Nat Mach Intell 2, 197–199 (2020).
