
Measuring algorithmically infused societies


It has been the historic responsibility of the social sciences to investigate human societies. Fulfilling this responsibility requires social theories, measurement models and social data. Most existing theories and measurement models in the social sciences were not developed with the deep societal reach of algorithms in mind. The emergence of ‘algorithmically infused societies’—societies whose very fabric is co-shaped by algorithmic and human behaviour—raises three key challenges: the insufficient quality of measurements, the complex consequences of (mis)measurements, and the limits of existing social theories. Here we argue that tackling these challenges requires new social theories that account for the impact of algorithmic systems on social realities. To develop such theories, we need new methodologies for integrating data and measurements into theory construction. Given the scale at which measurements can be applied, we believe measurement models should be trustworthy, auditable and just. To achieve this, the development of measurements should be transparent and participatory, and include mechanisms to ensure measurement quality and identify possible harms. We argue that computational social scientists should rethink what aspects of algorithmically infused societies should be measured, how they should be measured, and the consequences of doing so.
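The measurement-quality challenge described above can be made concrete with a deliberately simple feedback-loop sketch. This is a hypothetical toy model, not a method from the article: a recommender exposes items in proportion to their previously *measured* popularity, users click in proportion to exposure times intrinsic appeal, and the resulting click shares become the next round's measurement. All names and numbers are illustrative assumptions.

```python
# Toy model of an algorithmically infused measurement (illustrative only):
# the platform's record of "popularity" feeds back into exposure, so the
# measurement drifts away from the underlying social reality it purports
# to capture.

def normalize(xs):
    total = sum(xs)
    return [x / total for x in xs]

true_appeal = normalize([0.30, 0.25, 0.20, 0.15, 0.10])  # assumed ground truth
measured = list(true_appeal)  # round 0: measurement matches reality

for _ in range(50):
    exposure = measured                           # algorithmic amplification
    clicks = [a * e for a, e in zip(true_appeal, exposure)]
    measured = normalize(clicks)                  # what the platform records

print("true appeal:", [round(x, 3) for x in true_appeal])
print("measured:   ", [round(x, 3) for x in measured])
```

Because each round multiplies shares by appeal again, the measured distribution concentrates almost entirely on the single most-appealing item, while the true preference spread (30/25/20/15/10) is modest. A naive analyst reading the platform's logs as a direct measurement of preferences would be measuring the algorithm as much as the society.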


Fig. 1: Measuring algorithmically infused societies.




We thank the reviewers for their contributions to this work.

Author information

Authors and Affiliations



All authors have contributed to the formation, discussion, and writing of this article.

Corresponding author

Correspondence to Claudia Wagner.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature thanks Ceren Budak, Johan Ugander and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article


Cite this article

Wagner, C., Strohmaier, M., Olteanu, A. et al. Measuring algorithmically infused societies. Nature 595, 197–204 (2021).


