Introduction

Lung cancer is the leading cause of cancer death, and lung adenocarcinoma (ADC) accounts for about half of all cases; ADC is remarkably heterogeneous in morphological features1,2 and highly variable in prognosis. Through careful visual inspection of tumor pathology images, ADC can be further classified into subtypes with drastically different prognoses. Some contributing morphological features, such as tumor size and vascular invasion, have been recognized in lung ADC. However, there is a lack of systematic studies on the relationship between tumor shape in pathology images and patient prognosis.

Tumor tissue image scanning is becoming part of routine clinical practice for the acquisition of high-resolution tumor histological details. In recent years, several computer algorithms for hematoxylin and eosin (H&E) stained pathology image analysis have been developed to aid pathologists in objective clinical diagnosis and prognosis3,4,5,6,7. Examples include an algorithm to extract stromal features8 and an algorithm to assess cellular heterogeneity6 as a prognostic factor in breast cancer. More recently, studies have shown that morphological features are associated with patient prognosis in lung cancer as well4,5,7. Deep learning methods, such as convolutional neural networks (CNNs), have been widely used in image segmentation, object classification, and recognition9,10,11 and are now being adopted in biomedical image analysis to facilitate cancer diagnosis. For some tasks, the performance of deep learning algorithms is comparable to, or even better than, that of humans12,13. For analysis of H&E-stained pathology images, deep learning methods have been developed to distinguish tumor regions14, detect metastasis15, predict mutation status16, and classify tumors17 in breast cancer as well as in other cancers. However, due to the complexity of lung cancer tissue structures (such as microscopic alveoli and microvessels), deep learning methods for automatic lung cancer region detection from H&E-stained pathology images are not currently available.

Automatic tumor region detection allows for tumor size calculation and tumor shape estimation. Tumor size is a well-established lung cancer prognostic factor18,19,20; the effect of tumor shape has also been investigated in relation to drug delivery21,22 and prognosis prediction23,24,25,26,27,28. In X-ray and computed tomography (CT) image studies, a rough tumor boundary has been reported as a marker of malignancy in breast cancer29 and found to be associated with local tumor progression and worse prognosis in lung cancer patients24,28. Compared with CT images, which are most commonly used to evaluate tumor shape, pathology images have much higher spatial resolution30. Thus, automatic tumor region detection in pathology images allows us to better characterize tumor region boundaries and extract tumor shape and boundary-based features.

In this study, we developed a deep CNN model to automatically recognize tumor regions for lung ADC from H&E pathology images. More importantly, based on a systematic study of the detected tumor regions of lung ADC patients from the National Lung Screening Trial (NLST) cohort (n = 150), we found that many features that characterize the shape of the tumors are significantly associated with tumor prognosis. Finally, we developed a risk-prediction model for lung cancer prognosis using the tumor shape-related features from the NLST lung ADC patient cohort. The prognostic model was then validated in lung ADC patients from The Cancer Genome Atlas (TCGA) dataset. The design of this study is summarized in a flow chart (Fig. 1).

Figure 1

Flow chart of analysis process. CNN, convolutional neural network; NLST, the National Lung Screening Trial; TCGA, The Cancer Genome Atlas.

Results

CNN model distinguishes tumor patches from non-malignant and white (empty region) patches

In total, 5,344 tumor, non-malignant, and white image patches were extracted from 27 lung ADC H&E pathology images (Supplemental Fig. 1). The image patches were split into training, validation, and testing datasets (see Methods section). The CNN model was trained on the training set; training stopped at the 28th epoch, after validation accuracy had failed to improve for 10 epochs. The learning curves for the CNN model in the training and validation sets are shown in Supplemental Fig. 2. The overall prediction accuracy of the CNN model in the testing set was 89.8%; the accuracy was 88.1% for tumor patches and 93.5% for non-malignant patches (Supplemental Table 1).

Tumor region recognition for pathology images

In the NLST dataset, the pathology images range in size from 5,280 × 4,459 pixels to 36,966 × 22,344 pixels (median 24,244 × 19,261 pixels). To identify tumor regions, each image was partitioned into 300 × 300 pixel image patches. To speed up prediction, tissue regions were first identified (see Methods section) and only the image patches within the tissue regions were classified by the CNN model (Supplemental Fig. 3). The predicted probabilities of the image patches were summarized into heatmaps of tumor probability (Fig. 2). An example of a tumor probability heatmap is shown in Fig. 2B; the corresponding region label map, in which each patch is assigned the category with the highest predicted probability, is shown in Fig. 2C. Each pixel in the heatmaps corresponds to a 300 × 300 pixel image patch in the original 40X pathology image.

Figure 2

Example results of image-level tumor region detection. (A) Original image. (B) Predicted tumor probability. Each point in the heatmap corresponds to a 300 × 300 pixel image patch in the original 40X image. (C) Predicted region labels. Yellow: white (empty) region; green: tumor region; blue: non-malignant region.

Shape and boundary-based features from predicted tumor regions correlate with survival outcome

Based on the predicted tumor region heatmap, tissue samples were identified (Supplemental Fig. 4) and 22 shape and boundary-based features were extracted for each tissue sample (see Methods section). For each patient, the image features from multiple tissue samples were averaged. The associations between tumor region features and prognosis in the NLST dataset are summarized in Table 2; many features were associated with survival outcome. Most tumor area-related features, including area, perimeter, convex area, filled area, major axis length, and minor axis length, both for all tumor regions and for the main tumor region, were associated with poor survival outcome. Interestingly, the number of holes and the perimeter² to area ratio (an estimate of circularity and boundary roughness) were also associated with poor survival outcome (for all tumor regions: per 100 holes, hazard ratio [HR] = 1.087, p value = 0.033; per 1,000 perimeter² to area ratio, HR = 1.15, p value = 0.016; similar results for the main tumor region; Table 2). Examples comparing tumor regions with high and low values of eccentricity and perimeter² to area ratio of the main tumor region are illustrated in Fig. 3. As expected, the angle between the X-axis and the major axis of the main region was not correlated with survival; this feature serves as a negative control for the feature extraction process.

Table 1 Patient characteristics of training and validation datasets.
Table 2 Univariate analysis of tumor region features in NLST training dataset.
Figure 3

Comparison of tumor shapes with high or low values of eccentricity and PA ratio of the main tumor region. Original heatmaps are cropped to the same size at the same image scale. Yellow, main tumor region; green, non-main tumor region; dark blue, non-malignant tissue; blue, blank part of pathology image. PA ratio, perimeter² to area ratio.

Development and validation of prognostic model

Utilizing the tumor shape features extracted from the pathology images in the NLST dataset, we developed a prognostic model to predict patient survival outcome. The model was then independently validated in the TCGA cohort. Each patient was assigned to a predicted high- or low-risk group based on the patient's extracted tumor shape and boundary features (see Methods section). The survival curves for the predicted high- and low-risk groups are shown in Fig. 4. The patients in the predicted high-risk group had significantly worse survival outcomes than those in the predicted low-risk group (log-rank test, p value = 0.0029). Multivariate analysis shows that the predicted risk group predicted survival outcome independently (high- vs. low-risk, HR = 2.25, 95% CI 1.34–3.77, p value = 0.0022, Table 3) after adjusting for age, gender, smoking status, and stage. This indicates that the risk group defined by tumor shape features is a prognostic factor independent of other clinical variables.

Figure 4

Prognostic performance in TCGA validation dataset illustrated by Kaplan-Meier plot. Patients are dichotomized according to median predicted risk score. Difference between the two risk groups: log-rank test, p value = 0.0029.

Table 3 Multivariate analysis of predicted risk and clinical variables in TCGA.

Discussion

In this study, we developed image processing, tumor region recognition, image feature extraction, and risk prediction algorithms for pathology images of lung ADC. The algorithms successfully visualized tumor regions in the pathology images and provide a prognostic method independent of other clinical variables. The patient prognostic model was trained in the NLST cohort and independently validated in the TCGA cohort, which indicates the generalizability of the model to other lung ADC patient cohorts.

For tumor region detection, each pathology image was divided into 300 × 300 pixel image patches, which were then classified into tumor, non-malignant, or white categories using a CNN model. The CNN model was trained on 3,848 image patches and tested on 1,068 patches, with 89.8% accuracy in the testing set. Among the 109 incorrectly predicted patches, 27 contained an insufficient number of cells, which caused confusion between tissue and background. For the other 82 cases, where non-malignant patches were misclassified as tumor or vice versa, the cause appears to be interference from red blood cells, stroma, macrophages, and necrosis (Supplemental Table 1). The prediction errors related to out-of-focus tissues (such as macrophages and stroma cells) could be reduced by improved image scanning quality and training set labelling. A similar problem has been reported in breast cancer recognition14.

The patch-level tumor prediction results were then assembled into tumor region heatmaps (Fig. 2). In total, 22 well-defined image features were extracted for each tissue region and averaged to generate patient-level features (Supplemental Fig. 3). Fifteen of the 22 features were significantly correlated with survival outcome in the NLST dataset (Table 2). The features related to tumor shape, boundary, and perimeter were associated with worse prognosis.

Interestingly, both for all tumor regions and for the main tumor region, the perimeter² to area ratio was negatively correlated with survival outcome. The perimeter² to area ratio quantifies the smoothness of the tumor boundary: a large ratio indicates a long boundary relative to the tumor area and thus a rough tumor boundary. The negative correlation between the perimeter² to area ratio and survival outcome is consistent with studies conducted on lung cancer CT images, which reported that a more irregular shape predicted worse survival24,28. New biomarkers continue to be identified, and several genes have been reported to be associated with tumor shape31,32. Since cancer hallmark-associated genes can boost patient survival prediction performance33,34, incorporating pathological image features into the cancer hallmark framework and understanding the relationship among gene expression, tumor shape, and survival outcome can provide insight into tumor development and guide therapeutic decisions35,36.
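For geometric intuition, the isoperimetric inequality gives the floor of this ratio: among all planar shapes, a disk minimizes it,

$$\frac{P^2}{A} \;\ge\; 4\pi \approx 12.6,$$

with equality only for a perfect disk. A square of side s gives (4s)²/s² = 16 by comparison, and increasingly irregular boundaries push the ratio higher still.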

This is the first study to quantify tumor shape-related features using a CNN-based model in lung cancer. In addition, both the main tumor body and tumor spread through air spaces (STAS, sometimes referred to as aerogenous spread with floating cancer cell clusters [ASFC]) can be easily detected in the heatmaps37,38. Since the median size of the 40X pathology images in the NLST dataset is 24,244 × 19,261 pixels and a STAS usually occupies only a single image patch (300 × 300 pixels), it is labor-intensive for human pathologists to outline accurate tumor boundaries and mark every STAS. Thus, automatically generated tumor region heatmaps will help pathologists find tumor regions and quantify STASs. More importantly, our study developed a computation-based method to quantify tumor shape, circularity, irregularity, and surface smoothness, which can be an essential tool for studying the underlying biological mechanisms. Although tumor size is a well-known prognostic factor, quantifying tumor area and perimeter-related features from pathology images is challenging and time-consuming for human pathologists. Thus, it is a natural step to extract image features directly from the predicted tumor heatmaps, thereby avoiding subjective assessment by a human pathologist.

There are several limitations to our tumor region detection and image feature extraction pipeline. First, pathology images capture the characteristics of only part of the tumor and may not be representative of the whole tumor in terms of size, shape, and other characteristics. Furthermore, pathology images are 2-dimensional, so 3-dimensional spatial information is lost. Combining the tumor prediction and feature extraction algorithms with other imaging techniques, such as CT or X-ray, may produce more comprehensive descriptions of the tumor region and improve the performance of the current risk prediction model. Second, as mentioned above, our CNN model is sensitive to out-of-focus tissue such as red blood cells, macrophages, and stroma cells. Better pathology image scanning quality and more comprehensive labeling of the training set would help solve this problem. Third, the image features can be affected by slide preparation artifacts, such as artificially damaged tumor tissue, and by failure to select images that faithfully represent the tumor. Thus, to ensure that the predicted risk score is meaningful, a representative tumor image is required.

Conclusion

Our pipeline for tumor region recognition and risk-score prediction based on tumor shape features serves as an objective prognostic method independent of other clinical variables, including age, gender, smoking status and stage. The tumor region heatmaps generated by our model can help pathologists locate tumor regions in pathology tissue images swiftly and accurately. The model development pipeline can also be used in other cancer types, such as breast and kidney cancer.

Methods

Datasets

The pathology images, together with the corresponding clinical data, were obtained from two independent datasets: 267 40X images from 150 lung ADC patients were acquired from the NLST dataset, and 457 40X images from 389 lung ADC patients were acquired from the TCGA dataset. In the NLST dataset, the H&E stained images were sampled from lung tumor tissues resected during diagnosis and treatment of lung cancer; more information can be found on the NLST website, https://biometry.nci.nih.gov/cdas/learn/nlst. Clinical characteristics of the patients in this study are presented in Table 1. The prognostic model was trained on the NLST dataset and independently validated on the TCGA dataset.

Image patch generation

A CNN model was trained to classify non-malignant tissue, tumor tissue, and white regions based on image patches of H&E stained pathology images. The patch size was set to 300 × 300 pixels at 40X magnification to ensure at least 20 cells within one patch (Supplemental Fig. 1). Tumor and non-malignant patches were randomly extracted from tumor regions and non-malignant regions labeled by a pathologist, respectively. A patch was classified as white if the mean of all its pixel values was larger than a threshold determined from sample images. Examples of each patch class are shown in Supplemental Fig. 1. In total, 2,139 non-malignant, 2,475 tumor, and 730 white patches were generated. Pixel values were scaled to the range [0, 1] by dividing by 255 before being fed into the model.
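A minimal sketch of the white-patch rule and intensity scaling, assuming 8-bit RGB patches; the threshold value here is a placeholder, since the study derived its threshold from sample images:

```python
import numpy as np

WHITE_THRESHOLD = 200  # placeholder; the study set its threshold from sample images

def is_white_patch(patch: np.ndarray) -> bool:
    """patch: uint8 RGB array of shape (300, 300, 3)."""
    return patch.mean() > WHITE_THRESHOLD  # bright patches are mostly empty slide

def scale_patch(patch: np.ndarray) -> np.ndarray:
    """Scale 8-bit pixel values to [0, 1] for the CNN."""
    return patch.astype(np.float32) / 255.0
```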

CNN training process

The Inception (V3) architecture39 with input size 300 × 300 and weights pre-trained on ImageNet was used to train our CNN model. The network was trained with a stochastic gradient descent algorithm in Keras with the TensorFlow backend40. The batch size was set to 32, the learning rate to 0.0001 without decay, and the momentum to 0.9. From the 5,344 extracted image patches, 3,848 patches (72%) were allocated to the training set, 428 patches (8%) to the validation set, and the remaining 1,068 patches (20%) to the testing set, with equal proportions among the three classes. Keras image generators were used to normalize the images and to flip them, both horizontally and vertically, to augment the training and validation datasets. The maximum number of training epochs was set to 50. To avoid overfitting, training automatically stopped once validation accuracy had failed to improve for 10 epochs.
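A minimal sketch of this training setup in Keras (TensorFlow backend); the directory layout and generator wiring are illustrative assumptions, while the architecture and hyperparameters follow the description above:

```python
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator

# Inception V3 pre-trained on ImageNet, adapted to 300x300 inputs and 3 classes.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(300, 300, 3))
x = GlobalAveragePooling2D()(base.output)
out = Dense(3, activation="softmax")(x)  # tumor / non-malignant / white
model = Model(inputs=base.input, outputs=out)

# SGD with learning rate 0.0001 (no decay) and momentum 0.9.
model.compile(optimizer=SGD(lr=1e-4, momentum=0.9, decay=0.0),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Augmentation: rescale to [0, 1] plus random horizontal and vertical flips.
gen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True, vertical_flip=True)
train = gen.flow_from_directory("patches/train", target_size=(300, 300), batch_size=32)
val = gen.flow_from_directory("patches/val", target_size=(300, 300), batch_size=32)

# Train for at most 50 epochs; stop once validation accuracy
# fails to improve for 10 epochs.
model.fit_generator(train, validation_data=val, epochs=50,
                    callbacks=[EarlyStopping(monitor="val_acc", patience=10)])
```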

Prediction heatmap generation

To avoid prediction on large empty image areas and to speed up the prediction process, Otsu thresholding followed by morphological operations such as dilation and erosion was first applied to each pathology image to generate a tissue region mask (Supplemental Fig. 2)41,42. A 300 × 300 pixel window was then slid over the entire mask without overlap between windows. The image patches were predicted with batch size 32; each patch was predicted only once, without rotation or flipping. For each image patch, the probabilities of belonging to each of the three classes were predicted, and a heatmap of the predicted probabilities was generated for each pathology image (Fig. 2). For each image patch, the class with the highest probability was taken as the predicted class.
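A sketch of this step using scikit-image for the tissue mask; the closing radius and the tissue-coverage cutoff for keeping a window are illustrative assumptions:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk

def tumor_probability_heatmap(slide, model, patch=300):
    """slide: RGB whole-slide image as a uint8 array of shape (H, W, 3)."""
    gray = rgb2gray(slide)
    mask = gray < threshold_otsu(gray)       # tissue is darker than the background
    mask = binary_closing(mask, disk(5))     # dilation + erosion to fill small gaps

    rows, cols = slide.shape[0] // patch, slide.shape[1] // patch
    heatmap = np.zeros((rows, cols, 3))      # per-patch class probabilities
    coords, batch = [], []
    for i in range(rows):                    # non-overlapping sliding window
        for j in range(cols):
            win = mask[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            if win.mean() > 0.5:             # keep mostly-tissue windows (assumed cutoff)
                coords.append((i, j))
                batch.append(slide[i * patch:(i + 1) * patch,
                                   j * patch:(j + 1) * patch] / 255.0)
    if not coords:
        return heatmap
    probs = model.predict(np.asarray(batch), batch_size=32)
    for (i, j), p in zip(coords, probs):
        heatmap[i, j] = p                    # one heatmap pixel per image patch
    return heatmap
```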

Image feature extraction

A pathology image sometimes contains multiple tissue samples. To distinguish different tissue samples in the same image, disconnected tissue regions were first identified by morphological operations on the heatmaps of predicted classes (Supplemental Fig. 3A–C)42. To remove the effect of very small tissue samples, tissue regions with an area smaller than half of the largest tissue region in the same image were excluded from the analysis. Within each tissue region, the tumor region with the largest area was regarded as the "main tumor region" (Supplemental Fig. 3D). The following tumor region features were estimated for each tissue sample: number of regions, area, convex area, filled area, perimeter, major axis length, minor axis length, number of holes, and perimeter² to area ratio, for all tumor regions and for the main tumor region separately; and eccentricity, extent, solidity, and the angle between the X-axis and the major axis for the main tumor region (22 features in total)43. Here, 8-connectivity was used to determine disconnected tumor regions and disconnected holes43. When multiple tissue regions were available for one patient, either because of multiple tissue samples within one image or multiple images for one patient, the 22 image features were averaged to generate patient-level image features.
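Most of these quantities map directly onto region properties in scikit-image. The sketch below, assuming `tumor_mask` is the binary tumor mask of one tissue sample in the heatmap, computes a subset of the 22 features; holes are counted via the Euler number:

```python
from skimage.measure import label, regionprops

def tumor_shape_features(tumor_mask):
    """tumor_mask: 2-D boolean array marking predicted tumor pixels in the heatmap."""
    labeled = label(tumor_mask, connectivity=2)   # 8-connectivity for 2-D images
    regions = regionprops(labeled)
    if not regions:
        return None
    main = max(regions, key=lambda r: r.area)     # "main tumor region"
    area = sum(r.area for r in regions)
    perimeter = sum(r.perimeter for r in regions)
    return {
        "n_regions": len(regions),
        "area": area,
        "perimeter": perimeter,
        "pa_ratio": perimeter ** 2 / area,        # perimeter^2 to area ratio
        "main_area": main.area,
        "main_convex_area": main.convex_area,
        "main_filled_area": main.filled_area,
        "main_major_axis": main.major_axis_length,
        "main_minor_axis": main.minor_axis_length,
        "main_eccentricity": main.eccentricity,
        "main_extent": main.extent,
        "main_solidity": main.solidity,
        "main_angle": main.orientation,           # angle to the axis (negative control)
        "main_n_holes": 1 - main.euler_number,    # holes = 1 - Euler number per region
    }
```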

Prognostic model development

A univariate Cox proportional hazards model was used to study the association between each of the 22 tumor shape features and patient survival outcome in the NLST dataset. The image features significantly associated with survival outcome were selected to build the prognostic model. To avoid overfitting, a Cox proportional hazards model with an elastic-net penalty44 was used; the penalty coefficient λ was determined through 10-fold cross-validation in the NLST cohort.
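The study fits this model in R with glmnet44; as an illustrative Python analogue, lifelines exposes the same elastic-net penalty family. The input file, column names, and penalty strength below are assumptions:

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: selected shape features plus follow-up time and event indicator.
df = pd.read_csv("nlst_features.csv")            # hypothetical input file

# Elastic-net-penalized Cox model; the study tuned the penalty by 10-fold
# cross-validation, whereas the value here is just a placeholder.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time", event_col="event")

risk_score = cph.predict_partial_hazard(df)      # risk score from the linear predictor
```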

Model validation in an independent cohort

The model developed from the NLST cohort was then independently validated in the TCGA cohort (n = 389) for prognostic performance. First, the tumor region(s) in each pathology image from the TCGA dataset were detected using the CNN model; the tumor shape features were then extracted from the detected tumor region(s) in each image and summarized to the patient level. Finally, a risk score was calculated based on the extracted tumor features. The patients were dichotomized into high- and low-risk groups based on their predicted risk scores, using the median as the cutoff. A log-rank test was used to compare the survival difference between the predicted high- and low-risk groups. The survival curves were estimated using the Kaplan-Meier method. A multivariate Cox proportional hazards model was used to test the prognostic value of the predicted risk score after adjusting for other clinical factors, including age, gender, tobacco history, and stage. Overall survival, defined as the period from diagnosis until death or last contact, was used as the response. Survival analysis was performed with R software, version 3.3.245, using the R packages "survival" (version 2.40-1) and "glmnet" (version 2.0-5)44,46. Results were considered significant if the two-tailed p value was ≤ 0.05.
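Continuing the illustrative lifelines sketch above (the study itself used the R "survival" package), the median split, log-rank test, and Kaplan-Meier curves could be reproduced as follows; `tcga_df` is an assumed patient-level table with the same columns as the training data:

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

risk = cph.predict_partial_hazard(tcga_df)       # risk scores for TCGA patients
high = risk > risk.median()                      # dichotomize at the median

# Log-rank test between the predicted high- and low-risk groups.
result = logrank_test(tcga_df.loc[high, "time"], tcga_df.loc[~high, "time"],
                      tcga_df.loc[high, "event"], tcga_df.loc[~high, "event"])
print("log-rank p value:", result.p_value)

# Kaplan-Meier survival curves for the two risk groups.
kmf = KaplanMeierFitter()
for grp, name in [(high, "high risk"), (~high, "low risk")]:
    kmf.fit(tcga_df.loc[grp, "time"], tcga_df.loc[grp, "event"], label=name)
    kmf.plot_survival_function()
plt.show()
```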

Data availability

Pathology images that support the findings of this study are available online in NLST (https://biometry.nci.nih.gov/cdas/nlst/) and The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD, https://wiki.cancerimagingarchive.net/display/Public/TCGA-LUAD).

Code availability

The code is available upon request and will be shared through GitHub following manuscript acceptance.