Quantitative methods are commonly used to study a wide range of phenomena, including different types of Artificial Intelligence (AI). However, scholars from various disciplines are recognizing the limitations of this approach (Rahwan et al., 2019; Adadi and Berrada, 2018; Afnan et al., 2021; Bathaee, 2017; Carabantes, 2020). Importantly, Marda and Narayan (2021) argue that the uncritical and positivist use of quantitative methods fails to consider contextual factors. Consequently, these methods often fail to uncover the underlying causes and power asymmetries that contribute to the outcomes produced by automated systems (Dourish and Bell, 2011; O’Neil, 2016; Köchling and Wehner, 2020). This lack of understanding contributes to the blackboxing of AI, leading to inaccurate perceptions of these technologies. As a result, people may place either excessive or insufficient trust in AI. Moreover, organizations and institutions can exploit this blackboxing to evade legal responsibility (Sartori and Theodorou, 2022).

Increasingly, ethnography is recognized as a method that could help scientists investigate AI in a more holistic and critical manner. In fact, the call for ethnography as a method to study AI has grown louder over the past decade (Suchman, 1987; Forsythe, 2002; Seaver, 2017; Marda and Narayan, 2021; Sartori and Theodorou, 2022). We agree with these and other authors that ethnography, a qualitative method par excellence, can offer insights into technology as a sociotechnical phenomenon (Hess, 2001; van Voorst, 2024; Ahlin, 2023). Additionally, rather than defining variables in advance and testing hypotheses, ethnographic fieldwork allows for surprising and unexpected discoveries. Researchers can conduct open-ended, in-depth interviews and engage in (participant) observation of how individuals and institutions design, create, and use technologies (Pols, 2012).

As the enthusiasm for ethnographically studying the dynamic and urgent field of algorithmic society grows, it becomes even more important to do justice to the depth of this methodology. This is an ongoing point of consideration as technologies also influence and reshape the norms of ethnographic research (Ahlin and Li, 2019).

In this commentary, we first highlight three key aspects of an ethnography of AI that contribute to the success of ethnographic studies: committed fieldwork, establishing trusting relationships in the field, and developing attentiveness to subtle and nuanced data. We argue that by taking these requirements seriously, AI researchers will be able to grasp the crucial contexts in which different types of AI are produced, deployed, and sometimes exploited.

By sharing examples from our own ethnographic experiences and those of other anthropologists, we also offer insights into some realistic challenges that researchers must consider when studying AI ethnographically, while promoting an encouraging approach to these endeavors.

Three methodological focus points

During committed fieldwork, researchers engage in participant observation, which involves actively participating in the everyday lives of the individuals they study. As Vidushi Marda and Shivangi Narayan put it, ethnography is about “making connections between what people say and what they do” (2021). This involves cross-checking data obtained through interviews against observations of people’s practices, along with the ethnographer’s personal reflections on their co-lived experiences. Tanya Luhrmann adds that ethnography is “the most fine-grained practice of observation and listening in the world” (2020). Doing this effectively requires a significant investment of time: anthropologists often spend several months or even years with the people they study. The same applies to anthropological studies that focus on (digital) materiality, such as computers, software systems, and algorithms (Seaver, 2017), as well as ethnographic studies that explore how people interact with objects (Latour, 2007; Lynteris and Poleykett, 2018) or, most relevant to this piece, how humans interact with computers (Pink et al., 2016; Christin, 2017; Hoeyer, 2023).

For example, Nick Seaver’s (2021) ethnographic fieldwork on the development of recommendation algorithms in the music industry spanned several years. This extended timeframe allowed him to observe significant shifts within the industry, including the emergence and subsequent decline of startups. In addition, Seaver immersed himself in the companies he studied, enabling him to gather data from individuals at all levels of these organizations, from entry-level interns to top-tier executives. Such deep involvement gave Seaver insight into the specific strategies engineers employed to balance care and scale in the creation of AI systems, a finding that departs from the prevalent understanding of care and scale as essentially contradictory values.

Despite the importance of long-term fieldwork, shorter research periods are not necessarily less valuable. As Pink and Morgan (2013) explain, short fieldwork can be characterized by intense moments that result in deep and valid ways of understanding (see also Marcus and Okely, 2007; Vad Karsten, 2019). Nevertheless, as Pigg notes, spending attentive time with people through “a practice of patient ethnographic ‘sitting’ as a means of understanding” remains crucial for gaining deeper insight into individuals’ beliefs and actions (Pigg, 2013, p. 127). Similarly, Luhrmann argues that the strength of ethnography lies in our ability to “sit and watch and listen, at length” (Luhrmann, 2020).

By investing time and energy, ethnographers can establish trusting relationships with their research participants. Spending extended periods of time in a particular field allows ethnographers to move beyond the initial awkwardness, social norms, and self-consciousness that often characterize first meetings and interviews. In cases where long-term fieldwork is not possible, working closely with local research assistants who are respected and trusted within their community is essential. In such circumstances (for example, when conducting research in unsafe environments), local research assistants not only act as gatekeepers to the field but also help manage access and may even co-design or manage the research. This is significant work for which they should receive credit in subsequent publications, as long as doing so does not endanger their safety or well-being (van Voorst and Hilhorst, 2018).

Through intensive immersion and the establishment of trusting relationships, ethnographers can obtain valuable data that might otherwise be easily missed, including information conveyed through subtle hints and silences. In the following sections, we offer several concrete examples from our own fieldwork experiences, highlighting the kinds of nuanced and crucial information they have led us to uncover.

Sensitive or hidden data

In contrast to quantitative methods, ethnographic research sheds light on relationships, practices, and beliefs that are difficult to articulate, such as moments of silence during interviews or the inside jokes understood only by insiders (Wikan, 1991; Marabello and Parisi, 2020). A case study conducted by the first author in a flood-prone slum in Jakarta, Indonesia, serves as an illustrative example. Only after living in the community for over a year did she discover why residents refused to use the government-provided flood protection technology, despite it being free of charge and highly praised by scholars and non-governmental organizations. It transpired that, more than fearing floods, the slum dwellers feared being identified and traced by the authorities through the personal data these technologies collected. These fears were well-founded, as many residents were undocumented or engaged in criminalized work, such as sex work or gang membership. If their personal data became known, they risked losing their homes, their livelihoods, or even their freedom through imprisonment. In the first few months of her fieldwork, the researcher struggled to comprehend these fears. Outsiders were generally mistrusted or seen as potential spies for the authorities, and this skepticism extended to outsider researchers. While slum dwellers politely responded to her questions, many of their answers revealed themselves to be merely socially acceptable responses once she had established more trusting relationships, or even friendships, with the interviewees. For example, it took the researcher six months to realize that she was living with an undocumented sex worker who carried out her work in utmost secrecy.

In an entirely different research context, Featherstone and Northcott (2021) conducted a five-year ethnographic study within acute wards in the United Kingdom, focusing on individuals with dementia who required immediate, unplanned hospital care. The researchers followed these individuals throughout the admission process and closely observed hospital staff as they interacted with them across various shifts. The extensive duration of their fieldwork enabled them to identify instances where individuals with dementia could not access the assistance they needed because they were unable to communicate in the ways ambulance and hospital personnel expected. For instance, people with dementia were unable to verbally indicate their pain level on a standardized “pain ladder from 1 to 10.” The ethnographers, however, who were present on the ward even during the late hours of the night, were attentive to their cries of agony. Their research provides a meticulous and unwavering ethnographic analysis of life within contemporary hospitals, shedding light on the institutional and ward cultures that shape the provision and organization of everyday care.

Gaps and silences

Another type of subtle knowledge that becomes available through well-conducted ethnography is shaped by the gaps and silences in conversations. This is similar to what Law (2002) and others have called the “absent present”: elements or factors that are invisible in the making of an object, such as a technological device, yet essential to its functioning. The notion can also be extended to the practice of fieldwork (e.g., Gergen, 2002). For example, through fieldwork an ethnographer may discover that a certain political discourse can be understood as a warning: although it is never translated into law, it nevertheless affects people, who adjust their behavior by censoring themselves (Yonucu, 2018).

Or consider this example from our own fieldwork. When visiting nursing homes in the Netherlands, where she studies healthcare interventions for older people, the second author was surprised to discover that while some care home residents enjoyed interacting with robot cats and dogs, others were just as content with plush animals. The researcher came across a female resident who was particularly attached to a plush cat. Observation and conversations with her and her carers revealed that the cat had a very concrete therapeutic effect: it reduced the resident’s anxiety so effectively that she no longer needed any medication for restlessness, which had previously been severe. However, the plush animal and its impact did not feature in the woman’s healthcare records; on paper, the cat did not exist. The plush animal was thus an effective healthcare intervention, yet it was invisible as such to healthcare policy and funding schemes. Ethnographic fieldwork can uncover such unexpected relations, which in turn can lead to new insights about effective interventions, technological or otherwise.

Complexity of human–nonhuman interaction

A final example highlights the crucial yet easily overlooked data that can be recognized through robust ethnographic fieldwork: the complexity of human–nonhuman interaction. As Seaver (2018) demonstrates, most studies on AI, even anthropological ones, tend to reinforce a binary narrative of humans versus algorithms, culture versus technology, or humans versus computers. However, this view oversimplifies the relationship between humans and AI, as they are intricately interconnected. Therefore, it is impossible to study one without considering the other. Merely filling the “analog slot” with anthropological insights on the social while centering the digital is inadequate and only perpetuates these dichotomies. Recognizing the complexity of human–AI relations requires moving beyond the stereotypical narratives that currently dominate discussions around technology.

Another case study illustrates this complexity. During the first author’s fieldwork on a health application in the United Kingdom, she found that measurable data told only half the story. The application was implemented in a medium-sized organization to motivate employees to increase their physical activity, drink more water, and reduce their calorie intake. In exchange for “fit points” earned by following these health recommendations, employees received gift vouchers.

Over time, the ethnographer developed trusting relationships with the coders who supported the application, spending time with them at work and on informal occasions, joining them for lunches, afternoon drinks, events, and hikes, where conversations ranged from daily job matters to family life and the weather. During one such informal meeting, the researcher discovered that while the application’s users believed that an algorithm determined who received gift vouchers, these decisions were actually made by the people behind the application. The programmers explained that they had initially attempted to build the algorithm according to their managers’ requests, but it made so many mistakes during the testing phase that it proved more practical for them to decide manually which users would receive vouchers. Problematically, management was so pleased with the application that they considered scaling it up, which caused considerable anxiety among the programmers. Within a few weeks, however, they managed to develop a functioning algorithm.

This finding exemplifies recent assertions by Barchetta and Raffaetà (Forthcoming) regarding the benefits of ethnography for examining AI and other technological phenomena. By attending to the everyday conversations that take place in the field, ethnographic research can expose the tensions inherent in scientific knowledge creation that arise during the development and implementation of technology. We further concur with Raffaetà et al. (2023) that in-depth ethnography, which examines technology alongside human behaviors and social realities, facilitates interdisciplinary research on data, computation, algorithms, and AI.

Conclusion

Agreeing with the increasing calls for more ethnographic research on AI, we argue that ethnography can make a significant contribution to the study of data and AI technologies. Specifically, we believe that ethnography is well suited to exploring the underlying causes and power asymmetries that are often difficult or impossible to uncover with other methods. The data produced through ethnographic research are not mere anecdotes but crucial insights that can elucidate why automated systems produce the outcomes they do, as ethnography offers a deeper understanding of technology as a sociotechnical phenomenon.

At the same time, we maintain that robust ethnography requires specific approaches. In this commentary, we have outlined three key requirements that we believe offer the best chance of making a valuable contribution: (1) committed fieldwork with adequate time investment or, if time is limited, close attention and appropriate local support; (2) trusting relationships between researchers and their interlocutors; and (3) attentiveness to subtle, ambiguous, or absent-present data. It is crucial for all scientists interested in incorporating ethnography into their research design to keep these points in mind; failing to do so risks distorting the outcomes of their ethnographic research.

Finally, in line with the work of ethnographers mentioned in this piece, we advocate for research that examines not only human or technological elements in isolation, but rather explores the interactions, collaborations, frictions, and failures in human–machine relations.