
Perspective

Introducing contextual transparency for automated decision systems


As automated decision systems (ADS) become more deeply embedded in business processes worldwide, there is a growing need for practical ways to establish meaningful transparency. Here we argue that universally perfect transparency is impossible to achieve. We introduce the concept of contextual transparency as an approach that integrates social science, engineering and information design to help improve ADS transparency for specific professions, business processes and stakeholder groups. We demonstrate the applicability of the contextual transparency approach by applying it to a well-established ADS transparency tool: nutritional labels that display specific information about an ADS. Empirically, our analysis focuses on the profession of recruiting. Presenting data from an ongoing study of ADS use in recruiting alongside a typology of ADS nutritional labels, we suggest a nutritional label prototype for ADS-driven rankers such as LinkedIn Recruiter, before closing with directions for future work.
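To make the core idea concrete: a nutritional label for an ADS-driven ranker can be thought of as a small structured record summarizing what the system uses and what it leaves out. The sketch below is purely illustrative; the field names and example values are assumptions for demonstration, not the label schema proposed in this article (which is shown in Figs. 3 and 4).

```python
from dataclasses import dataclass


@dataclass
class RankerNutritionalLabel:
    """Illustrative sketch of a 'nutritional label' for an ADS-driven ranker.

    All field names here are hypothetical, chosen only to show the general
    shape of such a label; they are not the schema from the article.
    """
    system_name: str
    purpose: str
    inputs_used: list[str]
    factors_excluded: list[str]
    known_limitations: list[str]

    def render(self) -> str:
        """Format the label as plain text for display alongside results."""
        return "\n".join([
            f"Nutritional label: {self.system_name}",
            f"Purpose: {self.purpose}",
            "Inputs used: " + ", ".join(self.inputs_used),
            "Factors excluded: " + ", ".join(self.factors_excluded),
            "Known limitations: " + ", ".join(self.known_limitations),
        ])


# A hypothetical label for a candidate-ranking system.
label = RankerNutritionalLabel(
    system_name="Example candidate ranker",
    purpose="Rank candidate profiles for a recruiter's search query",
    inputs_used=["skills listed on profile", "years of experience"],
    factors_excluded=["name", "photo"],
    known_limitations=["rankings reflect historical hiring patterns"],
)
print(label.render())
```

The point of such a record is that its contents can be tailored to a specific profession and business process, which is what the contextual transparency approach calls for.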


Fig. 1: Contextual transparency Venn diagram.
Fig. 2: The CTP matrix.
Fig. 3: Generating the slate.
Fig. 4: Nutritional label in slate generation.





This research was supported in part by National Science Foundation awards 1916505, 1922658 and 1928627.

Author information

Authors and Affiliations


Corresponding author

Correspondence to Mona Sloane.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Aurelia Tamo-Larrieux and Silvia Milano for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sloane, M., Solano-Kamaiko, I.R., Yuan, J. et al. Introducing contextual transparency for automated decision systems. Nat Mach Intell 5, 187–195 (2023).




