Introduction

In the United Kingdom, the Care Quality Commission recently reported that – over the preceding 12 months – a total of 23,000 chest X-rays (CXRs) were not formally reviewed by a radiologist or clinician at Queen Alexandra Hospital alone. Furthermore, three patients with lung cancer suffered significant harm because their CXRs had not been properly assessed1. Queen Alexandra Hospital is most likely not the only hospital that struggles to provide an expert reading for every CXR, and growing populations and increasing life expectancies are expected to drive the demand for CXR readings even higher.

In computer vision, deep learning has already shown its power for image classification with superhuman accuracy2,3,4,5. The medical image processing community, too, is actively exploring deep learning. However, one major problem in the medical domain is the availability of large datasets with reliable ground-truth annotation. Transfer learning approaches, as proposed by Bar et al.6, are therefore often employed to overcome this limitation.

Two larger X-ray datasets have recently become available: the CXR dataset from Open-i7 and the ChestX-ray14 dataset from the National Institutes of Health (NIH) Clinical Center8. Figure 1 illustrates four selected examples from ChestX-ray14. Due to its size – 112,120 frontal CXR images from 30,805 unique patients – ChestX-ray14 has attracted considerable attention in the deep learning community. Triggered by the work of Wang et al.8 using convolutional neural networks (CNNs) from the computer vision domain, several research groups have begun to address the application of CNNs for CXR classification. Yao et al.9 presented a combination of a CNN and a recurrent neural network to exploit label dependencies; as a CNN backbone, they used a DenseNet10 model which was adapted and trained entirely on X-ray data. Li et al.11 presented a framework for pathology classification and localization using CNNs. More recently, Rajpurkar et al.12 proposed transfer learning with fine-tuning, using a DenseNet-12110, which raised the AUC results for multi-label classification on ChestX-ray14 even further.

Figure 1

Four examples from the ChestX-ray14 dataset. ChestX-ray14 consists of 112,120 frontal chest X-rays from 30,805 patients. All images are labeled with up to 14 pathologies or “No Finding”. The dataset includes not only acute findings, such as the pneumothorax in (c), but also already treated patients, e.g. a case labeled “pneumothorax” in which a chest drain is visible (d).

Unfortunately, a faithful comparison of approaches remains difficult. Most reported results were obtained with differing experimental setups, including (among others) the employed network architecture, loss function, and data augmentation. In addition, differing dataset splits were used, and only Li et al.11 reported 5-fold cross-validated results. In contrast, our experiments (Sec. 3) demonstrate that the performance of a network depends significantly on the selected split. To enable a fair comparison, Wang et al.8 later released an official split, for which Yao et al.13 and Guendel et al.14 reported results; Guendel et al.14 hold the state-of-the-art results in all fourteen classes with a location-aware DenseNet-121.

To provide better insights into the effects of distinct design decisions for deep learning, we perform a systematic evaluation using a 5-fold re-sampling scheme. We empirically analyze three major topics:

  1. weight initialization, pre-training and transfer learning (Section 2.1)

  2. network architectures such as ResNet-50 with large input size (Section 2.2)

  3. non-image features such as age, gender, and view position (Section 2.3)

Prior work on ChestX-ray14 has been limited to the analysis of image data. In clinical practice, however, radiologists employ a broad range of additional information during diagnosis. To leverage the complete information of the dataset (i.e. age, gender, and view position), we propose in Section 2.3 a novel architecture integrating this information in addition to the learned image representation.

Methods

In the following, we cast pathology detection as a multi-label classification problem. All images \(X=\{{\overrightarrow{x}}_{1},\ldots ,{\overrightarrow{x}}_{N}\},\,{\overrightarrow{x}}_{i}\in {\mathscr{X}}\) are associated with a ground truth label \({\overrightarrow{y}}_{i}\), and we seek a classification function \(\overrightarrow{f}:{\mathscr{X}}\to {\mathscr{Y}}\) that minimizes a specific loss function \(\ell \) using N training sample-label pairs (\({\overrightarrow{x}}_{i}\), \({\overrightarrow{y}}_{i}\)), i = 1 … N. Here, we encode the label for each image as a binary vector \(\overrightarrow{y}\in {\{0,1\}}^{M}={\mathscr{Y}}\) (with M labels). We encode “No Finding” as an explicit additional label and hence have M = 15 labels. After an initial investigation of weighted loss functions such as positive/negative balancing8 and class balancing, we observed no significant difference and decided to employ the class-averaged binary cross entropy (BCE) as our objective:

$$\ell (\overrightarrow{y},\overrightarrow{f})=\frac{1}{M}{\sum }_{m=1}^{M}H[{y}_{m},{f}_{m}],\,{\rm{with}}\,H[y,f]=-\,y\,\mathrm{log}\,f-(1-y)\mathrm{log}(1-f).$$
(1)
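As a minimal illustration of Eq. (1), the following sketch computes the class-averaged BCE for sigmoid outputs. It is written in PyTorch purely for illustration (our models are implemented in CNTK, see Section 3), and the tensor shapes are assumptions.

```python
import torch

def class_averaged_bce(f: torch.Tensor, y: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """f: sigmoid outputs of shape (batch, M); y: binary labels of the same shape."""
    f = f.clamp(eps, 1.0 - eps)                                # numerical stability of the logarithm
    h = -(y * torch.log(f) + (1.0 - y) * torch.log(1.0 - f))   # per-label cross entropy H[y, f]
    return h.mean(dim=1).mean()                                # average over the M labels, then over the batch

loss = class_averaged_bce(torch.rand(4, 15), torch.randint(0, 2, (4, 15)).float())
```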

Prior work on the ChestX-ray14 dataset concentrates primarily on the ResNet-50 and DenseNet-121 architectures. Due to its outstanding performance in the computer vision domain10, we focus our experiments on the ResNet-50 architecture15. To adapt the network to the new task, we replace the last dense layer of the original architecture with a new dense layer matching the number of labels and add a sigmoid activation function for our multi-label problem (see Table 1).

Table 1 Architecture of the original, off-the-shelf, and fine-tuned ResNet-50.
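The following minimal sketch illustrates this adaptation on torchvision's ResNet-50. It is for illustration only; our implementation uses CNTK, so the torchvision API and layer names are assumptions rather than the original code.

```python
import torch.nn as nn
from torchvision import models

NUM_LABELS = 15                                     # 14 pathologies + "No Finding"

model = models.resnet50(weights=None)               # weight initialization is discussed in Sec. 2.1
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, NUM_LABELS),    # new dense layer matching the number of labels
    nn.Sigmoid(),                                   # independent per-label probabilities
)
```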

Weight Initialization and Transfer Learning

We investigate two distinct initialization strategies for the ResNet-50. First, we follow the scheme described by He et al.5, where the network parameters are initialized with random values and thus the model is trained from scratch. Second, we initialize the network with pre-trained weights, where knowledge is transferred from a different domain and task. Furthermore, we distinguish between off-the-shelf (OTS) and fine-tuning (FT) in the transfer-learning approach.

A major drawback in medical image processing with deep learning is the limited size of datasets compared to the computer vision domain. Hence, training a CNN from scratch is often not feasible. One solution is transfer-learning. Following the notation in the work of Pan et al.16, a source domain \({{\mathscr{D}}}_{s}=\{{{\mathscr{X}}}_{s},{P}_{s}({X}_{s})\}\) with task \({{\mathscr{T}}}_{s}=\{{{\mathscr{Y}}}_{s},{f}_{s}(\,\cdot \,)\}\) and a target domain \({{\mathscr{D}}}_{t}=\{{{\mathscr{X}}}_{t},{P}_{t}({X}_{t})\}\) with task \({{\mathscr{T}}}_{t}=\{{{\mathscr{Y}}}_{t},{f}_{t}(\,\cdot \,)\}\) are given with \({{\mathscr{D}}}_{s}\ne {{\mathscr{D}}}_{t}\) and/or \({{\mathscr{T}}}_{s}\ne {{\mathscr{T}}}_{t}\). In transfer-learning, the knowledge gained in \({{\mathscr{D}}}_{s}\) and \({{\mathscr{T}}}_{s}\) is used to help learn the prediction function \({f}_{t}(\,\cdot \,)\) in \({{\mathscr{D}}}_{t}\).

Employing an off-the-shelf approach17,18, the pre-trained network is used as a feature extractor, and only the weights of the last (classifier) layer are adapted. In fine-tuning, one or more layers are re-trained with samples from the new domain. For both approaches, we use the weights of a ResNet-50 network trained on ImageNet as a starting point19. In our fine-tuning experiment, we re-trained all convolutional layers, as shown in Table 1.
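A hedged sketch of the two transfer-learning variants is given below, again using torchvision's ImageNet weights for illustration; the freezing strategy and parameter names are assumptions and not our CNTK implementation.

```python
import torch.nn as nn
from torchvision import models

def build_resnet50(fine_tune: bool, num_labels: int = 15) -> nn.Module:
    # Start from ImageNet weights in both transfer-learning variants.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if not fine_tune:
        # Off-the-shelf (OTS): use the network as a fixed feature extractor.
        for p in model.parameters():
            p.requires_grad = False
    # In both variants, the classifier layer is replaced and trained.
    model.fc = nn.Sequential(nn.Linear(model.fc.in_features, num_labels), nn.Sigmoid())
    return model

ots_model = build_resnet50(fine_tune=False)   # only the new classifier layer is adapted
ft_model = build_resnet50(fine_tune=True)     # all convolutional layers are re-trained as well
```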

Architectures

In addition to the original ResNet-50 architecture, we employ two variants. First, we reduce the number of input channels to one (the ResNet-50 is designed for processing RGB images from the ImageNet dataset), which should facilitate the training of an X-ray-specific CNN. Second, we increase the input size by a factor of two (i.e. 448 × 448). To keep the model architectures similar, we only add a new max-pooling layer after the first bottleneck block; this max-pooling layer has the same parameters as the “pooling1” layer (i.e. 3 × 3 kernel, stride 2, and padding). Our changes to the image branch are illustrated in Fig. 2. A higher effective resolution could be beneficial for the detection of small structures that are indicative of a pathology (e.g. masses and nodules). In the following, we use the suffixes “-1channel” and “-large” to refer to these model changes.
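The two modifications can be sketched as follows on a torchvision ResNet-50; the layer names ("conv1", "layer1") and the padding value of 1 are assumptions taken from torchvision, not from our CNTK implementation.

```python
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)

# ResNet-50-1channel: accept single-channel X-ray input instead of RGB.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# ResNet-50-large: for the 448 x 448 input, add a max-pooling layer with the same
# parameters as "pooling1" (3 x 3 kernel, stride 2, padding 1) after the first
# bottleneck block ("layer1" in torchvision's naming).
extra_pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
model.layer1 = nn.Sequential(model.layer1, extra_pool)
```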

Figure 2

Patient-data adapted model architecture: ResNet-50-large-meta. Our architecture is based on the ResNet-50 model. Because of the enlarged input size, we added a max-pooling layer after the first three ResBlocks. In addition, we fuse image features and patient features at the end of the model to incorporate patient information.

Finally, we investigate different model depths with the best performing setup. First, we implement a shallower ResNet-38, in which we reduce the number of bottleneck blocks for conv2_x, conv3_x, and conv4_x to two, two, and three, respectively. Second, we test a ResNet-101, in which the number of conv_3 blocks is increased from 5 to 22 compared to the ResNet-50.

Non-Image Features

ChestX-ray14 contains information about patient age, gender, and view position (i.e. whether the X-ray image was acquired posterior-anterior (PA) or anterior-posterior (AP)). Radiologists use such information beyond the image to decide which pathologies are present. The view position changes the expected position of organs in the X-ray image (i.e. PA images are horizontally flipped compared to AP). In addition, organs (e.g. the heart) are magnified in an AP projection, as their distance to the detector is increased.

As illustrated in Fig. 2, we concatenate the image feature vector (i.e. the output of the last pooling layer with dimension 2048 × 1) with the new non-image feature vector (with dimension 3 × 1). To this end, view position and gender are encoded as {0,1}, and the age is linearly scaled \([\,{\rm{\min }}({X}_{{\rm{pa}}}),\,{\rm{\max }}({X}_{{\rm{pa}}})]\mapsto \mathrm{[0,1]}\) in order to avoid a bias towards features with a large range of values. In our experiments, we use the suffix “-meta” to refer to model architectures with non-image features.
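An illustrative sketch of the “-meta” fusion and the age scaling is given below. The feature dimension and module structure follow the description above, while the PyTorch code itself is an assumption for illustration rather than our actual implementation.

```python
import torch
import torch.nn as nn

class MetaFusionHead(nn.Module):
    """Concatenates the image feature vector with [view position, gender, scaled age]."""
    def __init__(self, img_dim: int = 2048, num_labels: int = 15):
        super().__init__()
        self.classifier = nn.Sequential(nn.Linear(img_dim + 3, num_labels), nn.Sigmoid())

    def forward(self, img_feat: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat([img_feat, meta], dim=1))

def scale_age(age: float, age_min: float, age_max: float) -> float:
    # Linear mapping [min, max] -> [0, 1] to avoid a bias toward features with a large value range.
    return (age - age_min) / (age_max - age_min)

head = MetaFusionHead()
out = head(torch.randn(4, 2048), torch.rand(4, 3))   # batch of 4: image features + non-image features
```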

ChestX-ray14 Dataset

To evaluate our approaches for multi-label pathology classification, the entire corpus of ChestX-ray14 (Fig. 1) is employed. In total, the dataset contains 112,120 frontal chest X-rays from 30,805 patients. The dataset does not include the original DICOM images; instead, Wang et al.8 performed a simple preprocessing based on the encoded display settings and reduced the pixel depth to 8 bit. In addition, each image was resized to 1024 × 1024 pixels without preserving the aspect ratio. Table 2 and Fig. 3 show the distribution of each class and the statistics of the non-image information. The prevalence of individual pathologies is generally low and varies between 0.2% and 17.74% (Table 2). The distribution of patient gender and view position, by contrast, is fairly even, with ratios of 1.3 and 1.5, respectively (see Table 3). Figure 3 shows the distribution of patient age in ChestX-ray14; the average patient age is 46.87 years with a standard deviation of 16.60 years.

Table 2 Overview of label distributions in the ChestX-ray14 dataset.
Table 3 Overview of the non-image feature distributions (patient gender and view position) in the ChestX-ray14 dataset.
Figure 3

Distribution of patient age in the ChestX-ray14 dataset. Each bin covers a width of two years. The average patient age is 46.87 years with a standard deviation of 16.60 years.

To determine whether the provided non-image features carry information relevant for disease classification, we performed an initial experiment. We trained a very simple multi-layer perceptron (MLP) classifier with only the three non-image features as input. The MLP classifier reaches a low average AUC of 0.61, but this still indicates that the non-image features could help to improve classification results when provided to our novel model architecture.
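For illustration, a comparable sanity check could be set up with scikit-learn as sketched below; the hidden-layer sizes and the random placeholder data are assumptions and do not reflect our actual MLP configuration or results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data: (N, 3) non-image features [scaled age, gender, view position]
# and (N, 15) binary label matrices; in practice these come from the ChestX-ray14 metadata.
X_train, X_test = rng.random((1000, 3)), rng.random((200, 3))
Y_train, Y_test = rng.integers(0, 2, (1000, 15)), rng.integers(0, 2, (200, 15))

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300)   # layer sizes are an assumption
clf.fit(X_train, Y_train)                                        # multi-label indicator targets
scores = clf.predict_proba(X_test)                               # per-label probabilities
print("mean AUC:", roc_auc_score(Y_test, scores, average="macro"))
```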

Experiments and Results

For an assessment of the generalization performance, we perform a 5 times re-sampling scheme20. Within each split, the data is divided into 70% training, 10% validation, and 20% testing. In deep learning, hyper-parameter tuning without a validation set and/or cross-validation can easily result in over-fitting. Since individual patients have multiple follow-up acquisitions, all data from a patient is assigned to a single subset only. This leads to some variation in patient counts across splits (e.g. split two contains 5,817 patients and 22,420 images, whereas split five contains 6,245 patients and the same number of images). We use the average validation loss over all re-samples to determine the best models. Finally, our results are calculated on the test set of each fold and averaged afterwards.
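The patient-wise splitting can be sketched with scikit-learn's GroupShuffleSplit as below; the metadata file name and the “Patient ID” column are assumed from the public ChestX-ray14 release, and the ratios follow the description above.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("Data_Entry_2017.csv")   # ChestX-ray14 metadata (file and column names assumed)

def patient_split(df: pd.DataFrame, seed: int):
    # 20% of the patients go to the test set; 1/8 of the remainder (10% overall) to validation,
    # so that all images of one patient end up in exactly one subset.
    gss_test = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=seed)
    train_val_idx, test_idx = next(gss_test.split(df, groups=df["Patient ID"]))
    train_val = df.iloc[train_val_idx]
    gss_val = GroupShuffleSplit(n_splits=1, test_size=0.125, random_state=seed)
    train_idx, val_idx = next(gss_val.split(train_val, groups=train_val["Patient ID"]))
    return train_val.iloc[train_idx], train_val.iloc[val_idx], df.iloc[test_idx]

train_df, val_df, test_df = patient_split(df, seed=0)
```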

To allow a fair comparison with other groups, we conduct an additional evaluation in Section 3.1, using the best performing architecture at different depths on the official split of Wang et al.8.

Implementation

In all experiments, we use a fixed setup. To augment ChestX-ray14, we use the same geometric data augmentation as in the work of Szegedy et al.3. During training, we sample patches covering between 8% and 100% of the image area, with the aspect ratio distributed evenly between 3:4 and 4:3. In addition, we employ random rotations of ±7° and horizontal flipping. For validation and testing, we rescale the images to 256 × 256 and 480 × 480 pixels for the small and large spatial size, respectively, and use the center crop as input image. As in the work of He et al.5, dropout is not employed21. As optimizer, we use ADAM22 with default parameters β1 = 0.9 and β2 = 0.999. The learning rate lr is set to 0.001 for transfer-learning and 0.01 for training from scratch. During training, we reduce the learning rate by a factor of 2 when the validation loss does not improve. Due to model architecture variations, we use a batch size of 16 for transfer-learning and 8 for training from scratch with a large input size. The models are implemented in CNTK and trained on GTX 1080 GPUs, yielding a processing time of around 10 ms per image.
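For illustration, the described augmentation and optimization setup roughly corresponds to the following PyTorch/torchvision sketch; the actual implementation is in CNTK, so all names, crop sizes, and defaults shown here are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(448, scale=(0.08, 1.0), ratio=(3/4, 4/3)),  # 8-100% area patches
    transforms.RandomRotation(7),                                            # rotations within +/- 7 degrees
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
eval_tf = transforms.Compose([
    transforms.Resize(480),        # 256 for the small-input models
    transforms.CenterCrop(448),    # assumed center-crop size for the large-input models
    transforms.ToTensor(),
])

model = models.resnet50(weights=None)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 15), nn.Sigmoid())

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))   # lr = 1e-2 from scratch
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5)   # halve lr on plateau
```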

Results

Table 4 summarizes the outcome of our evaluation. In total, we evaluate eight different experimental setups with varying weight initialization schemes and network architectures as well as with and without non-image features. We perform an ROC analysis using the area under the curve (AUC) for all pathologies, compare the classifier scores by Spearman’s pairwise rank correlation coefficient, and employ the state-of-the-art method Gradient-weighted Class Activation Mapping (Grad-CAM)23 to gain more insight into our CNNs. Grad-CAM is a method for visually assessing CNN model predictions. The method highlights important regions in the input image for a specific classification result by using the gradient of the final convolutional layer.

Table 4 AUC result overview for all our experiments.
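The per-pathology ROC analysis can be reproduced along the following lines; scikit-learn is assumed and the arrays are random placeholders standing in for ground-truth labels and model scores.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, (500, 15))                                 # placeholder label matrix
y_score = np.clip(y_true * 0.6 + rng.random((500, 15)) * 0.6, 0, 1)    # placeholder model scores

per_class_auc = [roc_auc_score(y_true[:, m], y_score[:, m]) for m in range(y_true.shape[1])]
print("average AUC:", float(np.mean(per_class_auc)))
```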

The results indicate a high variability of the outcome with respect to the selected dataset split. Especially for “Hernia”, which is the class with the smallest number of positive samples, we observe a standard deviation of up to 0.05. As a result, an assessment of existing approaches and comparison of their performance is difficult, since prior work focuses mostly on a single (random) split.

With respect to the different initialization schemes, we already observe reasonable results for OTS networks optimized on natural images. Using fine-tuning, the results improve considerably, from 0.730 to 0.819 AUC on average. Training the ResNet-50-1channel entirely on CXRs results in comparable performance. Only the high-resolution ResNet-50-large variant outperforms the FT approach, by 0.002 average AUC. In particular, an improvement is observed for small pathologies such as nodules and masses (0.018 and 0.006 AUC increase, respectively), while for other pathologies a similar or slightly lower performance is obtained.

Finally, all experiments with non-image features yield a slight increase in average AUC compared to their counterparts without non-image features. Our from-scratch trained ResNet-50-large-meta yields the best overall performance with an average AUC of 0.822.

To better understand why the non-image features only slightly increased the AUC of our fine-tuned and from-scratch trained models, we investigated how well the non-image features can be predicted from the extracted image features. We used our from-scratch trained model (i.e. ResNet-50-large) as a feature extractor and trained three models to predict patient age, patient gender, and view position (VP) – i.e. ResNet-50-large-age, ResNet-50-large-gender, and ResNet-50-large-VP – employing the same training setup as in our previous experiments. First, our ResNet-50-large-VP model predicts the correct VP (encoding AP as true and PA as false) with a very high AUC of 0.9983 ± 0.0002. After choosing the optimal threshold based on the Youden index, we obtain a sensitivity and specificity of 99.3% and 99.1%, respectively. Second, the ResNet-50-large-gender model also predicts the patient gender very precisely, with a high AUC of 0.9435 ± 0.0067 and a likewise high sensitivity and specificity of 87.8% and 85.9%, respectively. Finally, to evaluate the performance of the ResNet-50-large-age model, we report the mean absolute error (MAE) with standard deviation, because age prediction is a regression task; the model achieves an MAE of 9.13 ± 7.05 years. These results show that the image features already encode information about the non-image features, which might explain why our proposed architecture with access to the non-image features did not improve the performance by a large margin.
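The threshold selection via the Youden index can be sketched as follows; scikit-learn is assumed and the arrays are random placeholders standing in for the view-position labels and model scores.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                                  # placeholder VP labels (AP = 1, PA = 0)
y_score = np.clip(y_true * 0.7 + rng.random(500) * 0.5, 0, 1)     # placeholder model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                                     # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print("threshold:", thresholds[best], "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```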

Furthermore, the similarity between the trained models in terms of their predictions was investigated. To this end, Spearman's rank correlation coefficient was computed for the predictions of all model pairs and averaged over the folds. The pairwise correlation coefficients for the models are given in Table 5. Based on the degree of correlation, three groups can be identified. First, the from-scratch models (i.e. “1channel” and “large”) without non-image features have the highest correlation of 0.93 amongst each other, followed by the fine-tuned models with 0.81 and 0.80 for “1channel” and “large”, respectively. Second, the OTS model surprisingly correlates more strongly with the from-scratch models than with the fine-tuned models. Third, for models with non-image features, no such correlation is observed; their values lie between 0.32 and 0.47. This indicates that models trained exclusively on X-ray data not only achieve the highest accuracy, but are also the most consistent.

Table 5 Spearman’s rank correlation coefficient is calculated between all model pairs and is averaged over all five splits.
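The pairwise rank correlation between two models can be computed along the following lines; SciPy is assumed, the score matrices are random placeholders, and flattening the per-label scores before correlating them is an illustrative choice.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores_a = rng.random((200, 15))                                       # placeholder predictions, model A
scores_b = np.clip(scores_a + rng.normal(0, 0.1, (200, 15)), 0, 1)     # correlated placeholder, model B

rho, _ = spearmanr(scores_a.ravel(), scores_b.ravel())
print(f"Spearman rank correlation: {rho:.2f}")
```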

While our proposed network architecture achieves high AUC values in all categories of the ChestX-ray14 dataset, the applicability of such a technology in a clinical environment depends considerably on the availability of data for model training and evaluation. In particular, for the NIH dataset, the reported label noise8 and the medical interpretation of the labels are important issues. As noted by Luke Oakden-Rayner24, the class “pneumothorax” in ChestX-ray14 is often assigned to already treated cases (i.e. a drain used to treat the pneumothorax is visible in the image). We employ Grad-CAM to investigate whether our trained CNN picked up the drain as a main feature for “pneumothorax”. Grad-CAM visualizes the areas most responsible for the final prediction as a heatmap. In Fig. 4, we show two examples from our test set in which the highest activations lie around the drain. This indicates that the network learned to detect not only an acute pneumothorax but also the presence of chest drains. Therefore, the utility of the ChestX-ray14 dataset for the development of clinical applications remains an open issue.
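A simplified Grad-CAM sketch using PyTorch hooks is given below; it is not our implementation, and the hooked layer, class index, and input image are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 15), nn.Sigmoid())
model.eval()

activations, gradients = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: activations.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

image = torch.rand(1, 3, 448, 448)   # placeholder input image
pneumothorax_idx = 0                 # placeholder: the real index depends on the label ordering

score = model(image)[0, pneumothorax_idx]
score.backward()                     # gradient of the "pneumothorax" score w.r.t. the last conv block

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)                 # global-average-pooled gradients
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))     # weighted sum over feature maps
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear")        # upsample heatmap to image size
```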

Figure 4

Grad-CAM results for two example images. In the first row, the location of the pneumothorax is marked with a yellow box; as shown in the Grad-CAM image next to it, the model's highest activation for the prediction lies within the correct area. The second row shows a negative example, in which the highest activation responsible for the final prediction “pneumothorax” lies at the drain (marked with yellow arrows). This indicates that our trained CNN picked up drains as a main feature for “pneumothorax”.

Comparison to other approaches

In our evaluation, we noticed a considerable spread of the results in terms of AUC values. Next to the employed data splits, this could be attributed to the (random) initialization of the models, and the stochastic nature of the optimization process.

When ChestX-ray14 was made publicly available, only the images and no official dataset split were released. Hence, researchers started to train and test their proposed methods on their own dataset splits. We noticed a large diversity in performance across the different splits of our re-sampling. Therefore, a direct comparison to other groups may be misleading with regard to state-of-the-art results. For example, Rajpurkar et al.12 reported state-of-the-art results for all 14 classes on their own split. In Fig. 5, we compare our best performing model architecture (i.e. ResNet-50-large-meta) of the re-sampling experiments to Rajpurkar et al. and other groups. For our model, we plot the minimum and maximum AUC over all re-samplings as error bars to illustrate the effect of random splitting. When directly comparing our AUC (i.e. averaged over the 5 times re-sampling) to former state-of-the-art results, we achieve state-of-the-art results for “effusion” and “consolidation”. Comparing the maximum AUC over all re-sampling splits yields state-of-the-art performance for “effusion”, “pneumonia”, “consolidation”, “edema”, and “hernia”, indicating that a comparison between groups without a common split may be inconclusive.

Figure 5

Comparison of our best model to other groups. Pathologies are sorted by increasing average AUC over all groups. For our model, we report the minimum and maximum over all folds as error bars to illustrate the effect of splitting.

Later, Wang et al.8 released an official split of the ChestX-ray14 dataset. To allow a fair comparison with other groups, we report results on this split for our best performing architecture at different depths – ResNet-38-large-meta, ResNet-50-large-meta, and ResNet-101-large-meta – in Table 6. First, we compare our results to Wang et al.8 and Yao et al.13, because Guendel et al.14 used an additional dataset – the PLCO dataset25 – with 185,000 images. While the ResNet-101-large-meta already achieves a higher average AUC of 0.785 and a higher individual AUC in 12 out of 14 classes, its performance is lower than that of our ResNet-38-large-meta and ResNet-50-large-meta. Reducing the number of layers increases the average AUC from 0.785 to 0.795 for the ResNet-50-large-meta and to 0.806 for the ResNet-38-large-meta. Hence, our results indicate that training a model with fewer parameters on ChestX-ray14 is beneficial for the overall performance. Second, Guendel et al.14 reported state-of-the-art results for the official split in all 14 classes with an average AUC of 0.807. Although our ResNet-38-large-meta is trained with 185,000 fewer images, it still achieves state-of-the-art results for “Emphysema”, “Edema”, “Hernia”, “Consolidation”, and “Pleural Thicken.”, and a slightly lower average AUC of 0.806.

Table 6 AUC result overview for our experiments on the official split. In this table, we present results for our best performing architecture at different depths (i.e. ResNet-38-large-meta, ResNet-50-large-meta, and ResNet-101-large-meta) and compare them to other groups.

Discussion and Conclusion

We present a systematic evaluation of different approaches for CNN-based X-ray classification on ChestX-ray14. While satisfactory results are obtained with networks optimized on the ImageNet dataset, the best overall results are achieved by the model that is trained exclusively on CXRs and incorporates non-image data (i.e. view position, patient age, and gender).

Our optimized ResNet-38-large-meta architecture achieves state-of-the-art results in five out of fourteen classes compared to Guendel et al.14 (who report state-of-the-art results in all fourteen classes on the official split). For other classes, even higher scores are reported in the literature (see e.g. Rajpurkar et al.12). However, a comparison of the different CNN methods with respect to their performance is inherently difficult, as most evaluations have been performed on individual (random) partitions of the dataset. We observed substantial variability in the results when different splits are considered; this becomes especially apparent for “Hernia”, the class with the fewest samples in the dataset (see also Fig. 5).

While the obtained results suggest that training deep neural networks in the medical domain is a viable option as more and more public datasets become available, the practical use of deep learning in clinical routine is still an open issue. In particular, for the ChestX-ray14 dataset, the rather high label noise8 of 10% makes an assessment of the true network performance difficult. Therefore, a clean test set without label noise is needed to evaluate the clinical impact. As discussed by Oakden-Rayner24, the quality of the (automatically generated) labels and their precise medical interpretation may be a limiting factor, in addition to the presence of treated findings. Our Grad-CAM results support Oakden-Rayner's concerns about the “pneumothorax” label. In a clinical setting, i.e. for the detection of critical findings, the focus would be on the reliable identification of acute cases of pneumothorax, whereas a network trained on ChestX-ray14 would also respond to cases with a chest drain.

Future work will include the investigation of other model architectures, new architectures for leveraging label dependencies, and the incorporation of segmentation information.