Impact of a deep learning assistant on the histopathologic classification of liver cancer

Artificial intelligence (AI) algorithms continue to rival human performance on a variety of clinical tasks, yet their actual impact on human diagnosticians, when incorporated into clinical workflows, remains relatively unexplored. In this study, we developed a deep learning-based assistant to help pathologists differentiate between two subtypes of primary liver cancer, hepatocellular carcinoma and cholangiocarcinoma, on hematoxylin and eosin-stained whole-slide images (WSI), and evaluated its effect on the diagnostic performance of 11 pathologists with varying levels of expertise. Our model achieved accuracies of 0.885 on a validation set of 26 WSI, and 0.842 on an independent test set of 80 WSI. Although use of the assistant did not change the mean accuracy of the 11 pathologists (p = 0.184, OR = 1.281), it significantly improved the accuracy (p = 0.045, OR = 1.499) of a subset of nine pathologists who fell within well-defined experience levels (GI subspecialists, non-GI subspecialists, and trainees). In the assisted state, model accuracy significantly impacted the diagnostic decisions of all 11 pathologists. As expected, when the model’s prediction was correct, assistance significantly improved accuracy (p < 0.001, OR = 4.289), whereas when the model’s prediction was incorrect, assistance significantly decreased accuracy (p < 0.001, OR = 0.253), with both effects holding across all pathologist experience levels and case difficulty levels. Our results highlight the challenges of translating AI models into the clinical setting, and emphasize the importance of taking into account potential unintended negative consequences of model assistance when designing and testing medical AI-assistance tools.


Model Selection
Model selection consisted of three steps. First, 50 networks with randomly sampled hyperparameters were trained on the TCGA training dataset, and evaluated on the tuning set.
From these, the 10 best-performing networks were selected and evaluated on the internal validation set to assess generalizability to unseen data. The network with the highest accuracy on the internal validation set was used to create the assistant. The model selection process is summarized in Supplementary Figure 1.
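The three-step selection loop above can be sketched as follows. This is an illustrative outline only: `train` and `accuracy` are hypothetical stand-ins for network training and split evaluation, and the hyperparameter ranges are assumptions, not those used in the study.

```python
import random

def sample_hyperparameters(rng):
    # Assumed search space; the actual ranges are not given in the text.
    return {
        "learning_rate": 10 ** rng.uniform(-5, -2),
        "weight_decay": 10 ** rng.uniform(-6, -3),
        "batch_size": rng.choice([16, 32, 64]),
    }

def select_model(train, accuracy, n_models=50, n_finalists=10, seed=0):
    rng = random.Random(seed)
    # Step 1: train candidates with randomly sampled hyperparameters
    candidates = [train(sample_hyperparameters(rng)) for _ in range(n_models)]
    # Step 2: keep the best performers on the tuning set
    finalists = sorted(candidates, key=lambda m: accuracy(m, "tuning"),
                       reverse=True)[:n_finalists]
    # Step 3: pick the finalist with the highest validation accuracy
    return max(finalists, key=lambda m: accuracy(m, "validation"))
```

Evaluating only the ten tuning-set finalists on the validation set limits how much the validation set is reused during selection.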

Assistant Web Application Architecture
The assistant's web architecture comprises an HTML5 front end and a Python back end.
The front end communicates with the back end via a JSON-based REST interface. The front end is responsible for authenticating users and allowing them to upload patches, view the model's output probabilities and explanatory CAMs in real time, and provide feedback regarding the model's output.
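The exact request/response schema is not specified in the text; the sketch below shows what one such JSON exchange might look like. All field names (`patch_id`, `probabilities`, `cam_png_b64`, etc.) are hypothetical, and the handler returns fixed placeholder probabilities rather than running a model.

```python
import json

def classify_patch(request: dict) -> dict:
    # Hypothetical back-end handler for a patch-classification request.
    # A real back end would authenticate the user, decode the uploaded
    # image, and run the network; here we return placeholder values for
    # the two classes (HCC vs. CC).
    probs = {"HCC": 0.82, "CC": 0.18}
    return {
        "patch_id": request["patch_id"],
        "probabilities": probs,
        "cam_png_b64": None,  # base64-encoded CAM overlay in a real system
    }

response = classify_patch({"user_token": "t", "patch_id": "p1", "image_b64": ""})
print(json.dumps(response))
```

Returning the CAM alongside the class probabilities in a single response lets the front end render both in real time without a second round trip.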

Model Explanations
Class activation maps (CAMs) were used to highlight regions with the greatest influence on the model's decision (see Supplementary Figure 4). For a given patch, the CAM was computed for both classes (HCC and CC) by taking the weighted average across the final convolutional feature map, with weights determined by the linear layer. The CAM was then scaled according to the output probability, so that more confident predictions appeared brighter. Finally, the map was upsampled to the input image resolution, and overlaid onto the input image.
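The CAM computation described above can be sketched in NumPy. This is a minimal sketch, assuming a standard CAM setup (final conv feature map weighted by the linear layer's class weights); the function name and the nearest-neighbor upsampling are illustrative choices, not details from the original system.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx, prob, out_size):
    # features: (C, H, W) final convolutional feature map
    # fc_weights: (num_classes, C) weights of the final linear layer
    w = fc_weights[class_idx]                 # (C,) weights for the class
    cam = np.tensordot(w, features, axes=1)   # (H, W) weighted combination
    cam = np.maximum(cam, 0)                  # keep positive evidence only
    cam = cam / (cam.max() + 1e-8)            # normalize to [0, 1]
    cam = cam * prob                          # scale by output probability
    # Nearest-neighbor upsample to the input image resolution
    sh = out_size[0] // cam.shape[0]
    sw = out_size[1] // cam.shape[1]
    return np.kron(cam, np.ones((sh, sw)))
```

Scaling by the output probability means a confident prediction yields a brighter overlay than an uncertain one, matching the behavior described above.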

Supplementary Figures
Supplementary Figure 1. Model development and selection
Fifty models were trained with randomly selected hyperparameters. The ten best-performing models on the tuning set were evaluated on the validation set to assess their generalizability. The model with the highest accuracy on the validation set was deployed in the assistant, and evaluated during the pathologist experiment on the independent test (Stanford) dataset.

Supplementary Figure 2. Data preprocessing
The model was trained on 512 x 512 pixel patches, which were randomly sampled from tumor regions segmented by the reference GI pathologist. The sample WSI depicts segmented tumor regions (red), with three randomly sampled patches (patches not drawn to scale). A total of 1,000 training patches were sampled from each WSI.
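The patch-sampling step can be sketched as follows. This is a simplified sketch: patches are anchored at randomly chosen tumor-mask pixels and kept only if they fit inside the slide; a real pipeline might additionally require a minimum tumor coverage per patch, which is not specified in the text.

```python
import numpy as np

def sample_patches(mask, patch=512, n=1000, rng=None):
    # mask: boolean array marking segmented tumor regions on the WSI
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)          # candidate anchor pixels (tumor only)
    coords = []
    while len(coords) < n:
        i = rng.integers(len(ys))
        y, x = ys[i], xs[i]
        # keep the patch only if it lies fully inside the slide
        if y + patch <= mask.shape[0] and x + patch <= mask.shape[1]:
            coords.append((y, x))
    return coords
```

With n = 1000, this yields the 1,000 training patches per WSI described above.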

Supplementary Figure 3. Web architecture
The assistant's web architecture comprises an HTML5 front end and a Python back end. The front end communicates with the back end via a JSON-based REST interface. The front end is responsible for authenticating users and allowing them to upload patches, view the model's results and explanatory CAMs in real time, and provide feedback about the model's output.

Tables
Supplementary Table 1. Average diagnostic accuracies, sensitivities, and specificities for individual pathologists, with (Asst) and without (Unasst) assistance, as well as for the model alone (Algo).
* p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001
Note: Total percentages may not add up to 100% due to rounding error. The unit n corresponds to a single observation (e.g., one whole-slide image read). Pathologist diagnosis = final diagnosis entered on a given WSI by the pathologist during the experiment. Model error = whether the model's prediction was wrong (based on the patch(es) input by each pathologist).