Introduction

Artificial Intelligence (AI) technologies are increasingly being developed for image-based diagnostics. For pathology, these technologies promise to support pathologists in time-consuming and repetitive tasks1,2 and may also move the field forward towards new knowledge, discoveries and “breakthroughs”1,3. Such algorithm-based technologies have been developed for metastasis detection, Ki67 scoring, tumor-infiltrating lymphocyte (TIL) scoring and Gleason grading, as well as for predicting the status of molecular markers on H&E slides1. Some scholars even see the development of AI as a potential “revolution” of the field, since AI could provide pathology with substantial new knowledge and new ways of operating1,4. Whether or not the implementation of AI in the pathologist’s diagnostic process will indeed cause a revolution5, it will likely herald changes in image-based diagnosis1,2,3,4,5,6,7,8,9,10.

The implementation of AI within pathology depends on a successful digital transition, in which departments shift from using traditional light microscopes to assessing digitized tissue slides on a computer screen6,7,8,9. This transition can positively impact the pathologist’s daily workflow. For instance, digital images can be stored in and quickly accessed from a digital archive, which makes it possible to easily consult colleagues remotely6,7. Yet the current state of digital pathology also presents several challenges to the adoption of AI-based applications. For instance, images can currently be digitized in a 2D format, but fast and high-quality 3D imaging remains largely infeasible, meaning some diagnostic tasks (e.g. cytology) that require 3D images must still be conducted with a microscope10. Furthermore, conditions for developing AI technologies that can analyze a diverse range of images are not yet optimal. Most AI algorithms require “labelling” (i.e. annotation) by a pathologist, preferably an expert, who manually delineates the area of interest (i.e. anomaly or malignancy) on which the algorithm can be trained6. Because of the time constraints and financial burden of labelling, systematically expert-annotated images are still scarce. In addition, long-term storage of digital images for potential future development requires large and costly data storage facilities, which can constitute a roadblock for pathology departments wanting to digitalize their workflows and needing large data sets for training AI. Finally, pathology images of several basic types of tissue are characterized by a pervasive variability of patterns, which can make comparisons across large data sets more difficult6.
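
To make this annotation dependency concrete, the sketch below shows, in schematic Python, how expert labels typically enter the supervised training of a patch-level classifier. It is a minimal illustration under our own assumptions (the class name PatchDataset, the tensor shapes and the binary labels are hypothetical) and does not describe any tool discussed in this article.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class PatchDataset(Dataset):
    """Image patches cut from whole-slide images, each carrying a label
    assigned by an expert pathologist (e.g. 0 = benign, 1 = malignant)."""
    def __init__(self, patches, labels):
        self.patches = patches  # float tensor of shape (N, 3, 224, 224)
        self.labels = labels    # long tensor of shape (N,)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.patches[idx], self.labels[idx]

def train(model, dataset, epochs=5):
    """Standard supervised loop: the loss is computed against the expert
    annotations, so without them the model cannot be trained at all."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, labels in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(patches), labels)
            loss.backward()
            optimiser.step()
    return model

# Deliberately tiny stand-ins for a real model and real annotated data:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
data = PatchDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 2, (64,)))
train(model, data)
```

The loop illustrates why scarce annotation is a structural bottleneck: the expert-provided labels are the only supervision signal the model receives.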

Despite these challenges, AI can still constitute a highly impactful technology. Scholars have therefore recommended involving pathologists in development, implementation, and governance processes in order to optimize this impact2,6,11. Previous empirical studies (two surveys and two qualitative interview studies) have explored the views of pathologists concerning AI12,13,14,15. These studies have thus far focused on general attitudes towards AI and have concluded that pathologists are, overall, positively inclined towards it. They also indicate how necessary, and how poorly understood, pathologists’ views and insights are concerning AI’s place within their field13. Little is known, for instance, about pathologists’ perspectives on the current and future integration of AI within their daily work, and about how responsibility should be approached when implementing AI within the diagnostic process.

The current study aims to fill this gap in knowledge and presents the results of, to our knowledge, the first in-depth interview study on the integration of AI within pathology. In addition to gaining insight into professionals’ stances towards possibilities for AI integration, our goal was to analyze their views in connection with the broader social and ethical context of AI development. In this article, we will focus primarily on the issue of responsibility. First, we will describe pathologists’ views concerning possibilities, prerequisites and conditions for AI integration; we will then situate these views within the broader context of AI implementation. Finally, we will formulate three concrete recommendations to support successful AI implementation in the clinical decision-making processes of pathologists.

Materials and methods

We conducted a qualitative interview study to investigate the perspectives of pathologists on the development and implementation of AI. The study design is in accordance with the consolidated criteria for reporting qualitative research (COREQ)16. We opted for qualitative methodology, specifically semi-structured interviews, since these are particularly suited to investigating complex phenomena encountered in health care practices, such as AI, by elucidating different perspectives16. By adopting this methodology, the study is able to contribute to our understanding of AI’s potential impact on the work of pathologists, lab technicians and computer scientists, and could help apply their perspectives to future AI systems within pathology.

Research design

This study constitutes part of the Responsible Artificial Intelligence in Clinical DecisIOn making (RAIDIO) study. In order to gain insight into the integration of AI within the decision-making processes of pathologists and other professionals working in pathology labs, we conducted an inductive qualitative analysis of recorded conversations with pathologists, lab technicians, and computer scientists17,18,19,20.

Sampling and data collection

For this analysis we interviewed professionals working at the pathology departments of the UMC Utrecht and the Radboudumc in the Netherlands. These sites were chosen because both had completed the transition to a primarily digital workspace21,22. This made it possible to talk with professionals about the impending integration of AI within their work practices, as well as their hopes and expectations concerning AI’s functioning. Both institutions have their own computer science teams and collaborate with different external parties on AI development; the AI applications being proposed, implemented, or fine-tuned at each site therefore vary in the role they play in the decision-making process. We expected that combining both institutions in this study would yield a rich variety of perspectives on the potential use of AI in pathology.

Interviews were conducted between June 2020 and February 2021. Because of the COVID-19 pandemic, the conversations were conducted via telephone; JD and MM conducted interviews both individually and as a team. A semi-structured topic list was used to guide the conversations. The recorded interviews were transcribed verbatim by a professional transcription service and checked for reliability by JD. The transcripts were then coded for confidentiality and identifying information was removed. The interviewees were invited to perform a member check of their own transcript. The interviews were conducted in Dutch and translated into English by JD and MM.

Data analysis

The data selection and analysis occurred inductively and iteratively23 by means of constant comparison24. The software program NVivo 12 supported the data analysis. JD and MM read individual interview transcripts and independently identified conversation fragments, or units of meaning17,18,19,20, they considered relevant to the research question; after each interview they met to compare their observations. After four interviews, they began grouping these fragments into descriptive categories, resulting in the first code tree. They then discussed this code tree with other members of the research team (KJ, SV, and AB) as a means of refining it further. Next, JD and MM sampled and independently coded 15 transcripts. These independent coding results were compared multiple times and discussed to refine the code tree further. JD then coded the remaining transcripts, adjusting the code tree when necessary. Finally, MM and JD performed an intercoder reliability check by recoding four transcripts (two pathologists, one lab technician, and one computer scientist) and comparing their results. This final step also served as a means of checking for meaning saturation25.
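
The comparison of coding results described above was qualitative; the study does not report an agreement statistic. For readers wanting to quantify such an intercoder reliability check, Cohen’s kappa is one common choice. The sketch below is purely illustrative, with hypothetical code labels, and is not part of this study’s methods.

```python
# Hypothetical sketch of quantifying intercoder agreement with
# Cohen's kappa; the labels below are illustrative, not study data.
from sklearn.metrics import cohen_kappa_score

# Codes assigned by each researcher to the same sequence of fragments
# from a recoded transcript.
codes_jd = ["workflow", "trust", "workflow", "roles", "trust"]
codes_mm = ["workflow", "trust", "roles", "roles", "trust"]

kappa = cohen_kappa_score(codes_jd, codes_mm)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```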

Data statement

The data have been presented by means of illustrative quotes, which were carefully selected to represent the arguments presented in the interviews and to do justice to the variety of perspectives shown within them. In the selection, we also considered whether the quotes could be understood without the context in which they were originally uttered. The complete datasets themselves are not publicly available because the individual privacy of the participants could be compromised. The individual privacy of the participants is particularly important as their statements included opinions and beliefs regarding the ways in which AI should be adopted. These are deemed sensitive and therefore fall under the protection of the General Data Protection Regulation (GDPR: article 9).

Ethical considerations

Ethical approval for the RAIDIO study was obtained from the Medical Research Ethics Committee (MREC) of the University Medical Center Utrecht and Radboudumc (WAG/mb/20/014090). The MRECs determined that this study was exempt from the Medical Research Involving Humans Act. Written informed consent was obtained from all participating respondents.

Results

In this study, 45 professionals were invited to participate by means of a department-wide email. Additionally, some professionals were approached directly by either the research team or a contact person at the department, in order to assemble a representative group: a mixture of experienced pathologists and pathologists in training, and professionals with an active or a more passive role in digitalization and AI development. Twenty-four professionals responded to our messages and were interviewed (15 pathologists, 7 lab technicians, 2 computer scientists) (Table 1). The interviews provided a varied sample of perspectives concerning the transition to digital pathology and the possibility of AI-based image analysis.

During the interviews, respondents moved back and forth between two aspects of the digitalization process: they reflected upon their departments’ recent digitalization, and they discussed the potential value AI might have for the future of pathology as a field. Regarding digitalization, multiple respondents described how digital pathology had already significantly advanced and improved their field by increasing time efficiency, facilitating easier communication, and ensuring that, unlike physical slides, digital images could not be misplaced or accidentally switched. At the same time, respondents also described technical challenges when using whole-slide imaging (WSI) to analyze tissue samples. For example, the digital screen cannot always display images at the same level of definition as a microscope, nor can current scanners create high-quality 3D digital images fast enough for cytology specimens. Moreover, specialists working with larger tissue samples mentioned that the digital image could not be viewed in detail in its entirety on the screen; it therefore took them longer to assess such samples accurately.

Similarly, responses about the potential value of AI took many forms. Within the interviews, AI was used as an umbrella term for image recognition tasks, other automated tasks, and applications based on machine learning or deep learning, but also for “simpler” algorithms and calculations. The picture that emerges from the interviews is of AI as a rather amorphous entity, which reflects the ambiguity of the term in broader scholarly and popular discourse. This variety of definitions and understandings could be due, in part, to the fact that not all participants were directly involved in AI development. Most participants only knew of the AI tools expected to be integrated in the short term26 and had to speculate on the longer-term applications and possibilities of AI. In doing so, the interviewees seemed to draw on their experiences with currently developed tools and with digital pathology in general when expressing expectations about future AI applications.

In the following sections, we will further explicate how participants viewed the emergence of AI within the field of pathology, specifically themes related to the future roles and possibilities of AI. In order to illustrate how respondents view AI’s future place in pathology and connect it to current digital developments, we have identified four themes related to the potential value of AI: (1) prerequisites and considerations for AI integration, (2) AI in the daily workflow, (3) envisioned roles and responsibilities for AI and (4) envisioned roles and responsibilities for pathologists. The interview extracts referred to in the body of the text can be found in Tables 1–4.

Table 1 Background characteristics of participants.
Table 2 Illustrative quotes for theme 1.
Table 3 Illustrative quotes for theme 2.
Table 4 Illustrative quotes for theme 4.

Prerequisites and considerations for AI integration

When reflecting on the potential use of AI within digital pathology, respondents’ comments revealed four categories of considerations. First, their responses substantiated the intrinsic relation between digital pathology interfaces and AI. As a digital workflow is necessary for implementing AI in the decision-making process of a pathologist (Table 2, Quote 1A), AI is dependent on the extent to which a pathology lab is digitalized. Furthermore, the quality of digitized scans affects the ability to train and validate new AI applications. Respondents also described the value of digital pathology and AI as closely intertwined. As one respondent explained, digital pathology enables pathologists to share medical images, along with medical expertise, nationally and internationally (Table 2, Quote 1B). Similarly, once digital archives are created, AI can be implemented to analyze the images on a larger scale. The combination of digital pathology and AI could thus broaden medical expertise and, at the same time, open up new means of acquiring knowledge.

Second, although they also believed in the great potential of AI for pathology, some respondents noted that the implementation of AI applications within their departments will likely be determined by an application’s ultimate contribution to patient care, weighed against the costs of developing or purchasing it. Both pathologists and computer scientists reported that, despite the great promises of AI in the literature and at conferences, an application’s value depends on the measurable improvements it delivers (e.g., in effectiveness or in diagnostic quality) and on who is willing to pay for these improvements (Table 2, Quote 1C). According to several professionals, practical feasibility therefore constitutes a key component of successful AI development.

Third, when reflecting on the possible impact of AI on pathology, some respondents emphasized the importance of maintaining a realistic stance towards its potential value. This reserved stance stemmed from their experience and (sub)specialism within the field. For instance, when compared with earlier technological innovations in pathology, AI may be ‘just’ another step towards understanding the complexity of the human body. Respondents also compared the current hype around AI to previous technologies that promised to fundamentally change the field; the electron microscope and DNA research are two examples cited within the interviews. These innovations have contributed to advancements in the field, but they have also resulted in more complex knowledge of the ways in which disease mechanisms work and can therefore make the interpretation of clinical cases even more difficult (Table 2, Quote 1D). Furthermore, AI applications may not be relevant to all (sub)specialisms within pathology (Table 2, Quote 1E). Many respondents also emphasized that a large part of their work consists of integrating and interpreting information from a diverse range of sources (such as tissue samples, histological images and molecular data) into a diagnosis. Their tempered views on AI development are hence guided by the already highly technical nature of their work.

Fourth, we found that respondents took either a passive or an active stance towards the digital transition and potential AI applications. As Fig. 1 illustrates, a passive stance was often accompanied by a ‘wait and see’ attitude and was mainly adopted when the respondent was not involved in AI development or not able to make executive decisions concerning its future implementation. Respondents showed an active attitude towards AI when they were interested in innovation, initiated it within their departments, or were personally involved in AI research and development. Similarly, when discussing the possible consequences of AI implementation, respondents showed either a more idealistic or a more pragmatic perspective towards the future. Idealistic perspectives focused only on the promises and benefits of AI for pathology, while pragmatic perspectives considered the benefits of AI as well as important hurdles to its development and implementation. Moreover, a distinguishing feature among respondents was whether they mainly worked in the context of oncological or inflammatory diseases. The promise of AI seems to be more apparent in diagnosing oncological diseases1,27 than inflammatory diseases, which could explain the more optimistic stance towards AI development amongst respondents mainly working with oncological tissue samples.

Fig. 1
figure 1

Positioning of respondents concerning digital transitions.

AI in the daily workflow

Respondents often had clear and specific hopes and expectations for AI with regard to their daily tasks and workflow. Overall, respondents were almost unanimous in their expectation that AI would increase the efficiency of their workflows. Efficiency is of great relevance in pathology, a point respondents repeatedly underscored. For example, one pathologist remarked that their time is costly, and is therefore best spent on complex cases. For some respondents, the ideal AI application would be one that could complete or support simple, routine, or repetitive tasks. Examples frequently mentioned by respondents were counting mitoses in a digital image or diagnosing basal cell carcinoma. These kinds of tasks are reportedly not intellectually taxing (Table 3, Quote 2A) and can take up around 20 to 25 percent of a pathologist’s time (Table 3, Quote 2B). A few respondents made a different argument, namely that efficiency could be increased if AI could help triage and generate initial reports on a medical case (Table 3, Quote 2C). According to this group, the routine and repetitive tasks can be completed quickly, whereas writing a report is time-consuming and therefore worth allocating to an AI application.
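
To illustrate what delegating such a routine counting task could look like in software, the following is a minimal, hypothetical sketch of a mitosis-counting tool that counts confident detections and defers borderline ones to the pathologist. The detector output, thresholds and coordinates are our own illustrative assumptions, not an application described by respondents.

```python
# Illustrative sketch only: a (hypothetical) detector returns a confidence
# score per candidate mitotic figure; the tool reports a count and flags
# uncertain regions for human review.
from dataclasses import dataclass

@dataclass
class Detection:
    x: int          # position on the whole-slide image
    y: int
    score: float    # detector confidence, 0..1

def count_mitoses(detections, accept=0.9, review=0.5):
    """Count confident detections; collect uncertain ones for review."""
    counted = [d for d in detections if d.score >= accept]
    flagged = [d for d in detections if review <= d.score < accept]
    return len(counted), flagged

counted, flagged = count_mitoses(
    [Detection(120, 340, 0.97), Detection(88, 412, 0.62), Detection(50, 90, 0.31)]
)
print(f"{counted} mitotic figures counted, {len(flagged)} regions need review")
```

The design mirrors the division of labour respondents describe: the repetitive counting is automated, while judgment on ambiguous cases stays with the pathologist.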

Despite this agreement on the time-saving possibilities of AI, respondents differed in the degree to which they thought AI might support more complex diagnoses. Some believed that AI should eventually make automated decisions for straightforward diagnoses, while others said that AI should perform a certain pre-screening of a medical image and make suggestions for diagnosis. For more complex cases, pathological decision making requires the integration of information from various sources, and some respondents expressed the hope that AI may help to integrate these relevant sources of knowledge (Table 3, Quote 2D). Others hoped that AI might become able to perform diagnostic tasks that are not (easily) performed by pathologists. Some respondents believed that AI could eventually assist in detecting rare cases, which the average pathologist might miss due to a lack of experience with those specific disease patterns. Others expressed a wish for AI that could provide prognostic analyses to determine whether a patient would develop a (progressive) disease.

Some respondents also envisioned AI as a means of assessing images more consistently; this could help pathology adopt a more evidence-based approach that would be less dependent on individual execution (Table 3, Quote 2E). Furthermore, some pathologists wanted to learn from AI applications, especially if such tools could describe how they arrived at their decisions (Table 3, Quote 2F). This would mean that pathologists (in training) would no longer be solely reliant on the explanation and expertise of their supervising pathologist when learning how diagnoses are made. Similarly, a large number of respondents harbored the hope that AI would help make pathological practices, which are inherently based on individual and expert interpretations, more standardized (Table 3, Quote 2G). Respondents who desired more standardization often cited the fact that an AI application, unlike a human expert, cannot tire and would always analyze a sample in the same way, uninfluenced by external factors (Table 3, Quote 2H). Nevertheless, some respondents suggested that the desire for more objectivity was misplaced or even untenable (Table 3, Quote 2I). These respondents noted that AI applications are developed and validated by drawing on expert opinion and can therefore never become truly objective. Still, they admitted that such AI systems may foster discussions between pathologists and thereby indirectly lead to new perspectives or insights in specific cases, provided the systems would not simply further complicate already complex decision-making processes.

Envisioned roles and responsibilities for AI

Respondents envisioned AI in a wide range of roles, with corresponding responsibilities, within the diagnostic process. Some of these have been mentioned already: AI could perform an autonomous pre-screening to support pathological decision-making processes, AI could take over routine tasks (with or without a final check by the pathologist), and AI could teach pathologists how it arrived at certain outcomes. The possible roles and responsibilities for AI according to respondents have been categorized in Fig. 2.

The envisioned roles and responsibilities for AI fall roughly into two categories: (1) roles in which AI is ascribed anthropomorphic traits, and (2) roles in which AI is seen as a non-humanlike technology. The first category focuses on the expert qualities of AI, where AI can take responsibility for (a part of) the diagnostic process and the health care professional is likely to take a backseat role when relying on AI outcomes. In this category, respondents described AI as becoming an extra expert in the diagnostic process, adding another view to the pathologist’s judgment, or as functioning as a teacher or advisor to the pathologist, providing information the pathologist does not have herself. Also, some respondents envisioned AI as a super eye, able to ‘see’ more in a digital image than human experts. Finally, respondents hoped to delegate simple, routine tasks to AI, describing it as a workhorse. The second category, on the other hand, describes roles in which AI might support pathologists in the same way as any other technology or tool. In these roles, pathologists would retain responsibility over the complete diagnostic process and actively assess algorithmic outcomes, taking on a driver-seat role as users. Possible supporting roles for AI are as a triage tool, selecting cases where the pathologist’s judgment is required, or as a counting tool, for example counting mitoses in a tissue sample. Other general non-humanlike roles respondents mentioned were AI as a supportive tool to help pathologists analyze large data sets, and AI as a time saver, given its potential to make pathologists more efficient. These envisioned roles and responsibilities are not mutually exclusive; respondents sometimes advocated for several of these roles at once and were uncertain to what extent AI might indeed be able to take responsibility in the diagnostic process.

Fig. 2
figure 2

Envisioned roles for AI and end-users.

Envisioned roles and responsibilities for pathologists

Most respondents assumed that pathologists would continue to be ultimately responsible for the diagnosis. Several respondents affirmed that while AI might take on many roles in supporting pathologists, they would feel comfortable overruling AI if necessary (Table 4, Quote 4A). Some described this metaphorically by comparing AI to an autopilot function in an airplane or to machines in clinical chemistry: both the pilot and the clinical chemist must take responsibility for the machine’s outcomes and step in when the machine fails to function properly (Table 4, Quotes 4B and 4C).

Even though many respondents believed the responsibility for an AI-assisted diagnosis would likely remain with pathologists, they were simultaneously not (very) interested in understanding the inner workings of algorithms. When talking about the way they would use and interpret AI, pathologists indicated they did not think it necessary to know every step in an algorithm’s decision-making process as long as the outcomes could be validated and therefore trusted. Several respondents mentioned the fear pathologists and lay people have concerning the black box nature of deep learning systems, but argued that the reproducibility of AI outcomes is much more important for diagnostic purposes than understanding the black box itself (Table 4, Quote 4D). In other words, being able to check an algorithm’s performance and consistency would be sufficient. To illustrate this point, some compared AI to mobile phones and explained that one can use such a device correctly with just a basic understanding of the principles underlying its functioning (Table 4, Quotes 4D and 4E). One of the computer scientists interviewed also pointed out that black box technologies are present in everyone’s daily lives, and that we trust them even though we do not know what happens inside (Table 4, Quote 4F). As multiple respondents noted, the current discussion on AI’s black boxes stems primarily from fear of the unknown, rather than from a need for a deep understanding of technical details.
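
The kind of check respondents describe, verifying performance and consistency while treating the algorithm as a black box, can be made concrete with a small sketch. This is an illustrative outline under our own assumptions; the model, the images and the expert labels below are hypothetical stand-ins, not any system evaluated in this study.

```python
def validate_black_box(model, images, expert_labels, runs=3):
    """Check a diagnostic model without opening it up: (1) agreement with
    expert-labelled ground truth, (2) reproducibility of its outputs."""
    predictions = [model(img) for img in images]
    accuracy = sum(p == t for p, t in zip(predictions, expert_labels)) / len(images)
    consistent = all(
        model(img) == p
        for img, p in zip(images, predictions)
        for _ in range(runs)
    )
    return accuracy, consistent

# Stand-in "model": classifies an image by its mean pixel intensity.
model = lambda img: int(sum(img) / len(img) > 0.5)
images = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.3], [0.6, 0.9, 0.4]]
expert_labels = [1, 0, 1]

accuracy, consistent = validate_black_box(model, images, expert_labels)
print(f"accuracy={accuracy:.2f}, reproducible={consistent}")
# A department might accept the tool only if accuracy clears a pre-set
# threshold and the outputs are reproducible.
```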

Nevertheless, some respondents advocated for a middle ground between fully explainable AI and black box AI. Should AI be used for simple and routine tasks within pathology, it might be less important for pathologists to understand how it arrives at a certain conclusion. However, if AI were used in the future to completely diagnose a complex disease or produce a prognosis, then it would be important to know more about how it works (Table 4, Quote 4G). Also, if pathologists are to be responsible for algorithmic failures, they should have at least a basic understanding of AI design (Table 4, Quote 4H). According to these respondents, the need for transparency and explainability should relate directly to the degree of responsibility AI would bear and to the severity of the consequences in case of a diagnostic mistake.

Discussion

Respondents provided a range of possible ways in which AI could be embedded within pathology. In this sense, they affirmed the assertion that AI holds great promise to improve the field. In the following discussion, their perspectives will be further contextualized within the social and ethical literature, which has described several conditions for the implementation of medical AI28,29,30,31,32,33,34,35,36,37,38,39. By connecting the results of this study to broader challenges in AI implementation, we will also shed light on important insights provided by the participants. Gaining a better view of the way participants perceived the possibilities of AI, as well as important prerequisites and conditions, can assist future AI implementation. Specifically, the results of this study point to three concrete recommendations for departments interested in integrating AI in a way that optimally aligns with end-users’ expectations and daily responsibilities.

Recommendation 1: Foster a pragmatic attitude toward AI development

Contrary to much of the existing literature on the promises and pitfalls of medical AI1,2,3,4,5,6,7,8,9,10,11,28,29,30,31,32,33,34,35,36,37,38,39 and despite their belief in the promise of AI, respondents in this study showed a pragmatic attitude towards its actual implementation. Although some of the respondents did show idealistic tendencies when focusing on the future benefits of AI, they were also conscious of the problems facing implementation and focused on the practical usefulness of the technology for certain diagnostic tasks. Also, while they were generally open to the possibility that AI would become an integrated part of their standard workspace, they did not describe it as a wonder tool or silver bullet for problems related to efficiency or objectivity. This pragmatic stance toward AI is due, in part, to previous technological developments and “revolutions” in the field, which may have normalized the introduction of new technological changes4,9,29,33,40. It is also intrinsically related to the complex nature of pathologists’ work and the collaborative and multidisciplinary nature of most diagnostic processes.

This pragmatism can contribute to the responsible introduction of AI technologies, given that these depend heavily on interaction with skilled medical experts. As members of pathology departments will likely take responsibility for AI outcomes used within the diagnostic process, it is essential for them to assess technological possibilities in a realistic light. The success of a technology such as AI depends not only on the accuracy with which it performs its functions, but also on its fit with the medical and social context41. If pathologists are to take responsibility for AI outcomes, AI would, for instance, need to be able to function in real contexts and with real-time data, and medical norms should be applicable to the way it analyzes medical content. The significance of a good fit between AI and its end-users is often described in terms of AI alignment or human-AI cooperation; these terms highlight the importance of aligning the values, needs and wishes of practitioners with the technological design of AI41,42. The pragmatism of respondents therefore points to the necessary compatibility between AI design and the real social-medical contexts in which it will be implemented.

Recommendation 2: Provide task-sensitive information and training to health care professionals working at pathology departments

The range of envisioned roles and responsibilities for AI in pathologists’ daily practices indicates that a wide variety of applications can potentially be developed or integrated. It also reveals that many members of pathology labs could and would like to learn more regarding the actual possibilities and limits of AI. Task-sensitive training should therefore be provided for all members of a department integrating AI. This training should include basic information about what AI is, how it works, the possibilities and limits of its applicability, what kinds of data it uses, and the types of metadata it generates. A shared understanding of what AI is and how it works can help establish the necessary support base to foster development and implementation that best aligns with the expectations and needs of the department.

Moreover, as our results emphasize, respondents often compared AI to ‘simple’ technologies such as a mobile phone or an autopilot function, or described it as a black box. Although these comparisons can help describe the ways in which users interact with such technologies, they neglect or downplay the complexity of the technologies themselves and may prevent a more nuanced understanding of their impact on daily practices43. Respondents indicated that they have a broad understanding of the knowledge they need to work with and trust AI in their workplace; however, many of them also emphasized the need for a clearer idea of AI design if it were to be implemented in complex decision-making processes. It may therefore be important to focus more on providing nuanced knowledge of AI functioning, so that members of pathology departments can build trust and feel confident taking responsibility for algorithm-supported diagnoses.

Interestingly, respondents did not always regard ‘explainability’ of the inner workings of AI as an essential requirement for AI integration, which suggests that they are less concerned about the opaque nature of AI than certain discussions imply44,45,46,47,48. It is important to note here that the participants of this study interpreted ‘explainability’ in a broad sense; namely, that the specific reasons underlying an outcome could be made transparent. We acknowledge that many definitions of explainability are possible, and that specific use cases may demand kinds of explainability which were not discussed in this analysis. It may therefore be salient to further investigate how pathologists would view specific forms of explainability in various contexts. Within the context of this study, in any case, respondents were more interested in gaining an understanding of AI’s underlying principles and hoped AI would introduce more standardized knowledge to pathology. This indicates an interest in furthering pathological knowledge by means of AI. Such progress in the state of knowledge can only be accomplished when professionals are equipped with task-sensitive knowledge of the way AI functions and when AI tools are co-designed by pathologists, lab technicians and computer scientists representing diverse professional knowledge, values and standards49,50. An inclusive, department-wide approach towards future AI development and implementation can stimulate the sharing of expertise and work towards creating AI that truly represents a standardized account of pathological knowledge.

Recommendation 3: Take time to reflect upon users’ changing roles and responsibilities

Lastly, the results of this study indicate that reflection upon users’ roles and responsibilities is necessary to attain a clearer idea of the changes members of pathology departments, especially pathologists, will undergo when AI is added to the diagnostic process. One danger of waiting until AI is implemented before reflecting upon these issues is that the burden of AI, that is, what will need to change in the department, will fall on the individuals who work with it. This reflection is especially important, as many members must adjust to the possibility of AI in their daily workflows at the same time as they adjust to a recent transition to working with digital systems. As research on computer use within other medical fields has emphasized, digitalization is not a value-neutral process and can create new power dynamics in which certain perspectives are more included in development and implementation processes than others39,43.

The future of AI in pathology is not a question of if it will be implemented, but of when and how. The timing of and approach to implementation are paramount for the successful integration of AI. On the one hand, some relatively simple AI tools, for instance for counting mitoses26,51, are already qualified for implementation. Still, deliberation on their role within, and impact on, decision-making processes would be advisable. On the other hand, many AI tools are still being developed and, while the development process is ongoing, it is essential to think simultaneously about the ways in which fundamental design choices affect AI’s roles and responsibilities52. Socio-technical challenges associated with the implementation of AI can benefit from early reflection, since anticipating future roles and responsibilities can prevent unwanted consequences for users. By establishing wanted and unwanted conditions for implementation, possible unintended effects or even abuse may be detected52. This helps to retain decisional authority over the way AI is integrated, instead of ‘letting it just happen’53.

In order to generate effective reflection, it is important to adopt an open approach in which the adaptations to pathology generated by AI are scrutinized, as can be seen in ethics parallel research or Value Sensitive Design (VSD)52,54. As these approaches emphasize, it is essential to include reflection not just at one point in the development and implementation process, but to ask different normative questions at several stages of AI integration. This ensures that computational challenges such as agency, privacy and bias can be continuously targeted and that values are intentionally embedded within a technology52,54. Reflection thus becomes an iterative process in which stakeholders think about the ways in which values are incorporated at several steps in the process of integration. Moreover, within the literature on AI development, there is an increasing call for inter- or multidisciplinary efforts to analyze value changes, as problems with algorithmic bias, trust and responsibility have broader societal consequences54. We therefore recommend reflection within pathology labs such that (1) value changes are openly approached, (2) reflection is seen as an iterative process throughout development and implementation, and (3) reflection is practiced at all levels (including patients and members of other disciplines, where appropriate).

Study limitations and recommendations for further research

In this study, we investigated the views of respondents working in pathology departments that have already adopted digital pathology. Due to the pandemic, we had limited access to the departments themselves. Furthermore, some nuance may have been lost in translating the interviews from Dutch to English. Although we reached saturation in the themes and codes identified, it would be interesting to include the perspectives of professionals who work at non-digital and non-academic pathology departments for comparison. This study included the perspectives of lab technicians and computer scientists; future studies might consider focusing specifically on their professional roles in, and attitudes towards, future AI development. Because this study focused on the potential of AI for pathology, it did not go deeply into respondents’ interpretations of important concepts, such as what it means for AI to gain responsibility in the diagnostic process. It would be highly relevant to further investigate the ways in which pathology professionals give substance to normative concepts and what kinds of normative frameworks can be developed to fit the wishes of medical practitioners.

Concluding remarks

This study responded to the widely held belief that pathology, centered around image-based diagnostics, is one of the medical specializations most suited to implementing AI within the decision-making process1,2,3,4,5. The large number of images processed by pathology labs can be digitally uploaded and then analyzed by AI tools on several parameters for diagnosis1. This is confirmed by the large number of AI tools currently being developed to support pathologists in their diagnostic process. On paper, it is therefore mostly a ‘simple’ question of opportunity: when are the circumstances right for AI to be implemented in pathology? In reality, this question proves harder to answer; technical as well as ethical challenges to AI implementation have been formulated and require a clear strategy to tackle them8,11,31. Specifically, it requires members of pathology labs, with their extensive knowledge of practices, roles and responsibilities within pathology, to reflect on the way in which AI can and should be implemented in the diagnostic process.

In order to gain insight into their perspectives, this article has presented the findings of the first in-depth interview study in which the expectations of pathologists, lab technicians and computer scientists concerning AI development and implementation are explicated. By discussing the future of AI within pathology, the participants have contributed to conceptualizing AI challenges by formulating the perceived possibilities of AI, as well as some important prerequisites and conditions that may be necessary for successful AI implementation. Specifically, the results of this study point to three concrete recommendations for departments interested in integrating AI in a way that optimally aligns with end-users’ expectations and daily responsibilities. These recommendations are targeted at strengthening the compatibility of AI design and implementation processes with the social and medical norms guiding pathological practice.

As the literature also points out52,54, it is important to reflect on the ways in which technologies such as AI impact medical practice, including during the stages between development and implementation. Moving from the theoretical possibility of AI to practical implementation demands a change from members of pathology departments; this change can be guided, and concrete steps can be taken to make it more manageable. Moreover, deciding on the when, as well as the if and how, of AI implementation requires a variety of perspectives and knowledge to enable AI to live up to its promises.