An Advanced Deep Learning Approach for Ki-67 Stained Hotspot Detection and Proliferation Rate Scoring for Prognostic Evaluation of Breast Cancer

Being a non-histone protein, Ki-67 is one of the essential biomarkers for the immunohistochemical assessment of the proliferation rate in breast cancer screening and grading. The Ki-67 signature is always sensitive to radiotherapy and chemotherapy. Due to random morphological, color and intensity variations of cell nuclei (immunopositive and immunonegative), manual/subjective assessment of the Ki-67 score is error-prone and time-consuming. Hence, several machine learning approaches have been reported; nevertheless, none of them has addressed deep learning based hotspot detection and proliferation scoring. In this article, we propose an advanced deep learning model for computerized recognition of candidate hotspots and subsequent proliferation rate scoring by quantifying Ki-67 expression in breast cancer immunohistochemical images. Unlike existing Ki-67 scoring techniques, our methodology uses a Gamma mixture model (GMM) with Expectation-Maximization for seed point detection and patch selection, and a deep learning network comprising a decision layer for hotspot detection and proliferation scoring. Experimental results show 93% precision, 88% recall and a 91% F-score. The model's performance has also been compared with the pathologists' manual annotations and with recently published articles. We anticipate that the proposed deep learning framework will be highly reliable and beneficial to junior and senior pathologists for fast and efficient Ki-67 scoring.

Identification of the accurate grade (grade I, grade II and grade III) remains a persistent challenge for pathologists. Till now, the clinical decision on BC grading has mostly been made manually, based on both predictive and prognostic pathological markers. The manual assessment of Ki-67 is subjective, error-prone and affected by intra- and inter-observer ambiguities. Moreover, in rural and urban areas with little or no advanced instrumentation, manual inspection of the Ki-67 score may give wrong results. Hence, automated assessment of the Ki-67 score is highly desirable; automatic scoring provides high-throughput, more objective and reproducible results in comparison with manual evaluation. The proliferation score is calculated as the ratio between the total number of immunopositive nuclei and the total number of nuclei present in the image 11 . The immunopositive (brown) and immunonegative (blue) nuclei together are called hotspots. Figure 1 shows Ki-67 stained BC images grouped by proliferation score, together with their corresponding color distribution maps generated using the open-source ImageJ software. To date, automated Ki-67 assessment has mainly been based on conventional imaging techniques. However, owing to the heterogeneous and massive datasets in medical imaging and biomedical applications, scientists are increasingly turning to deep learning, a versatile biomedical research tool with numerous potential applications. P. Mamoshina et al. (2016) proposed a deep learning framework for biomedicine applications 12 , and Y. Xu et al. (2014) reported deep learning for medical image analysis 13 . This paper is structured as an introduction, literature review, experimental setup, results & discussion and, finally, a conclusion.
Literature review. Many machine learning techniques have been published for Ki-67 scoring using IHC stained BC images. To the best of our knowledge, most Ki-67 scoring methods are based on conventional machine learning techniques. From our extensive literature survey, we conclude that no reports are available to date that specifically apply deep learning approaches to exploit the much finer information inherent in the microscopic images. Table 1 characterizes the different Ki-67 scoring methods.
Review on conventional techniques. M. Abubakar et al. 14 proposed a computer vision algorithm for Ki-67 scoring in BC tissue microarray images. Their algorithm shows promising performance in comparison with other scoring techniques; the authors achieved 90% classification accuracy with a kappa value of 0.64. Automated quantification of Ki-67 in nasopharyngeal carcinoma has been described by P. Shi et al. 15 . Their algorithm mainly consists of smoothing, decomposition, feature extraction, K-means clustering, and quantification, and achieves 91.8% segmentation accuracy. F. Zhong et al. (2016) compared visual and automated Ki-67 scoring on BC images 16 . The authors used a total of 155 Ki-67 immunostained slides of invasive BC. Scoring was performed based on hotspots and on the average score, and the correlation coefficient was employed to analyze the consistency and to minimize the errors between the two techniques. Z. Swiderska et al. 17 employed a computer vision algorithm for hotspot selection from whole-slide meningioma images, using color channel selection, Otsu thresholding, morphological filtering, feature selection and classification, and achieved a high correlation between manual and automated hotspot detection. F. Xing et al. (2014) proposed an automated machine learning algorithm for Ki-67 counting and scoring using neuroendocrine tumor images 11 . Their algorithm has three stages: Stage I comprises seed point detection, segmentation, feature extraction and cell-level probability measurement; Stage II consists of tumor/non-tumor cell classification, probability map generation, and feature extraction; finally, Stage III provides immunopositive/immunonegative cell classification along with Ki-67 scoring. This approach achieved 89% precision, 91% recall and a 90% F-score. J. Konsti et al. 18 reported a virtual application for Ki-67 assessment in BC, developed mainly using the ImageJ software.
At first, images were processed using color deconvolution to separate the hematoxylin and diaminobenzidine stain color channels. Then a mask was moved over the images to extract the target objects. This approach showed 87% agreement and a kappa value of 0.57. Ki-67 scoring using a Gamma-Gaussian Mixture Model (GGMM) has not been attempted yet; Khan et al. (2012) proposed such a mixture model for mitosis identification from histopathological images 19 .
Review on deep learning techniques. To the best of our knowledge, automatic Ki-67 scoring and hotspot detection using a deep learning approach have not been attempted yet.
The major contributions of this paper include: (i) the development of an efficient deep learning model comprising a decision layer for automated hotspot detection, and (ii) the development of an automatic proliferation rate scoring technique for Ki-67 stained BC images.
Convolutional network (CN). A CN can be viewed as a sequence of N layers or functions (f 1 , f 2 , …, f N ). The mapping between the input vector w and the output vector u of a CN can be represented as 20 :

u = f N (… f 2 (f 1 (w; X 1 ); X 2 ) …; X N ) (1)

where X N denotes the bias and weight vector for the N th layer f N . Conventionally, each f N performs a convolution, a non-linear activation or a spatial pooling operation. Given a set of η training data pairs (w i , u i ), the network parameters are learned by minimizing

arg min X 1 ,…,X N (1/η) Σ i=1..η f LOSS (f N (… f 1 (w i ; X 1 ) …; X N ), u i ) (2)

where f LOSS indicates the loss function. Equation 2 can be minimized using stochastic gradient descent and backpropagation.
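The layer composition described above can be illustrated with a minimal sketch. This is not the authors' implementation; the layer functions and parameters below are toy stand-ins chosen only to show how u = f_N(…f_1(w; X_1)…; X_N) is computed.

```python
# Hypothetical sketch: a convolutional network as a composition of N layer
# functions f_1 ... f_N, each carrying its own weight/bias vector X_i.
def make_network(layers):
    """layers: list of (f, X) pairs; returns the composed map w -> u."""
    def forward(w):
        u = w
        for f, X in layers:          # apply f_1, then f_2, ..., then f_N
            u = f(u, X)
        return u
    return forward

# Toy scalar "layers": an affine map followed by a ReLU-like clamp.
affine = lambda u, X: X[0] * u + X[1]    # X = (weight, bias)
relu   = lambda u, _: max(0.0, u)        # non-linear activation, no parameters
net = make_network([(affine, (2.0, -1.0)), (relu, None)])
print(net(3.0))  # 2*3 - 1 = 5, then ReLU -> 5.0
```

In a real CN the layer functions would be convolutions, activations and pooling operations, and X_1 … X_N would be learned by minimizing the loss in Equation 2.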
Convolution layers. In deep learning, the convolution operation extracts low-level features (e.g., lines, edges, and corners) and higher-level hierarchical features from the input images. In our proposed framework, multiple layers are stacked so that the input of the h th layer is the output of the (h−1) th layer. A convolutional layer learns convolutional filters to calculate feature maps as 21 :

FM j h = Σ m=1..FM in h−1 (FM m h−1 * α jm h ) + G j h , j = 1, 2, …, FM out h (3)

where FM in h−1 and FM out h represent the number of input and output feature maps at the h th level, and G j h and α jm h represent the biases and the corresponding kernels, respectively. In each convolution layer there are two components that create feature maps: the local receptive field and the shared weights. A feature map is the output of one filter applied to the previous layer; each unit in a feature map looks for the same feature, but at different positions of the input image.
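A single output feature map, as produced by one kernel with its bias, can be sketched as follows. The kernel and image values are illustrative only, not taken from the paper.

```python
# Illustrative sketch (assumed, not the paper's implementation): one output
# feature map from a "valid" 2-D convolution of an input map with one kernel
# alpha plus a bias G, following the convolution-layer equation.
def conv2d_valid(img, kernel, bias=0.0):
    H, W = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            # correlate the kernel with the local receptive field at (i, j)
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s + bias)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
edge = [[1, -1]]                       # simple horizontal-difference kernel
print(conv2d_valid(img, edge))         # [[-1, -1], [-1, -1], [-1, -1]]
```

The same kernel (shared weights) slides over every position, which is why each unit of the feature map responds to the same pattern at different locations.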
Max-pooling layer. The pooling layers have been employed to obtain spatial invariance by reducing the resolution of the feature maps; the pooling operation makes the features more robust against distortion and noise. Two types of pooling are mostly used in deep learning: average pooling and max-pooling. In both cases, the input is divided into non-overlapping two-dimensional regions. Based on our requirements and the image characteristics, we have chosen max-pooling in the proposed framework. The advantages of this layer are its capability of downsampling the input and of creating position invariance over local regions. The max-pooling function is calculated as 22,23 :

f max (ψ) = max (a,b)∈z ψ(a, b) (4)

where ψ is the input image, z denotes the window function, and n × n = 71 × 71 is the input patch size. In general, the max-pooling window can overlap and be of arbitrary size.
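Non-overlapping max-pooling can be sketched as below. A 2 × 2 window is used here purely for illustration; the paper's 71 × 71 patches would be pooled the same way.

```python
# Hedged sketch of non-overlapping n x n max-pooling: each output value is
# the maximum over one n x n window of the input map.
def max_pool(img, n):
    H, W = len(img), len(img[0])
    return [[max(img[i + a][j + b] for a in range(n) for b in range(n))
             for j in range(0, W - n + 1, n)]
            for i in range(0, H - n + 1, n)]

img = [[1, 3, 2, 4],
       [5, 6, 1, 0],
       [7, 2, 9, 8],
       [1, 4, 3, 5]]
print(max_pool(img, 2))  # [[6, 4], [7, 9]]
```

Note how small shifts of a feature within each 2 × 2 window leave the output unchanged, which is the position-invariance property mentioned above.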
Rectified Linear Unit. In deep learning, Rectified Linear Units (ReLUs) are used as activation functions. The ReLU is defined as 24 :

q(r) = max(0, r) (5)

where q denotes the model's output function with input r. The input and output of this layer have the same size. The ReLU enhances the performance of the network without disturbing the receptive fields, increases the nonlinearity of the decision function, and trains the CN much faster than other existing non-linear functions (e.g., sigmoid, hyperbolic tangent and absolute value of hyperbolic tangent).
Fully Connected (FC) Layer. The FC layer is often used as the final layer of a CN in classification problems. It computes a weighted sum of the features of the previous layer and works like a classifier. This layer is not spatially located and serves as a simple vector. In the proposed model, the height and width of each blob in the FC layer are set to 1.
Dropout Layer. Dropout is a regularization technique mostly used to reduce overfitting and prevent complex co-adaptations on the training data. With this layer, the learned weights of the nodes become less sensitive to the weights of the other nodes. The layer helps to increase the accuracy of the model by switching off unnecessary nodes in the network; dropped-out neurons contribute to neither the forward pass nor backpropagation.
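The dropout mechanism can be sketched as follows. This is a minimal "inverted dropout" illustration with assumed names, not the CAFFE implementation used in the paper.

```python
# Minimal sketch of inverted dropout: each unit is kept with probability
# keep_p and rescaled by 1/keep_p so the expected activation is unchanged;
# dropped units output 0 and contribute nothing to forward/backward passes.
import random

def dropout(activations, keep_p, training=True, rng=random.random):
    if not training:
        return list(activations)   # identity at test time
    return [a / keep_p if rng() < keep_p else 0.0 for a in activations]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0], keep_p=0.5))
```

With keep_p = 0.5 (the ratio found best for this dataset, see the model description), roughly half of the activations are zeroed in each training pass.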

Decision Layer. An additional decision layer, comprising decision trees, has been introduced in the proposed deep learning framework. To the best of our knowledge, the concept of a decision layer has not been used in a deep learning framework so far. The inclusion of the decision layer increases the performance of the proposed model. Our decision trees are inspired by P. Kontschieder et al. 26 . The proposed decision layer consists of decision nodes and prediction nodes. The decision layer algorithm is recursive and is implemented in C++ 27 . The columns of the blob data table are split based on information gain (least entropy).
Let the input and finite output spaces be denoted by X and Y, respectively. In a decision tree, decision nodes are the internal nodes of the tree and are indexed by d ∈ D; prediction nodes are the terminal nodes and are indexed by p ∈ P. Each decision node d ∈ D is assigned a decision function f d (·; Θ): X → [0, 1], and each prediction node p ∈ P possesses a probability distribution π p over Y. When a sample x ∈ X reaches a decision node d, it is sent to the right or left subtree based on the output of f d (x; Θ). In conventional decision trees the f d are binary and the routing is deterministic. The final prediction for sample x from tree T, with decision nodes parametrized by Θ, is

P T [y | x, Θ, π] = Σ p∈P μ p (x; Θ) π py (6)

Here π py is the probability of class y at leaf p, π = (π p ) p∈P , and μ p (x; Θ) denotes the probability of sample x reaching leaf p (the routing function). In our decision nodes, the decision function works based on a stochastic routing and is defined as

d r (x; Θ) = σ(f r (x; Θ)) (7)

where σ(x) = (1 + e −x ) −1 is the sigmoid function and f r is a real-valued function. An ensemble of decision trees is called a decision forest. The learning of the decision trees, along with their decision nodes and prediction nodes, has been performed using the CAFFE stochastic gradient descent approach. The pictorial representation of the decision layer connections in CAFFE is shown in Fig. 2.
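The stochastic routing can be illustrated with a small numeric sketch. The depth-2 tree, node functions and leaf distributions below are hypothetical examples, not parameters from the trained model.

```python
# Hypothetical sketch of soft decision-tree prediction: each internal node
# routes left with probability sigmoid(f(x)); the class distribution is the
# routing-probability-weighted mixture of the leaf distributions pi_p.
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def soft_tree_predict(x, node_fns, leaf_dists):
    # node_fns: [root, left child, right child] real-valued functions f_d
    # leaf_dists: 4 leaf class distributions pi_p, ordered left to right
    d0, d1, d2 = (sigmoid(f(x)) for f in node_fns)
    # mu_p(x): probability of reaching each of the 4 leaves
    mu = [d0 * d1, d0 * (1 - d1), (1 - d0) * d2, (1 - d0) * (1 - d2)]
    n_classes = len(leaf_dists[0])
    return [sum(mu[p] * leaf_dists[p][y] for p in range(4))
            for y in range(n_classes)]

leaves = [[1.0, 0.0], [0.7, 0.3], [0.2, 0.8], [0.0, 1.0]]
fns = [lambda x: x, lambda x: -x, lambda x: 2 * x]
probs = soft_tree_predict(0.0, fns, leaves)
print(probs)  # a valid class distribution (sums to 1)
```

Because every routing decision is a differentiable sigmoid, the whole tree can be trained end-to-end with stochastic gradient descent, which is what allows it to sit as a layer inside the CAFFE network.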
Patch selection. Patch selection is an essential part of the proposed methodology; the overall patch selection workflow is shown in Fig. 3. Owing to variations in nuclei size, shape and localization, image patches may vary. In the case of overlapping nuclei, it is very complicated to crop a patch that contains only a single nucleus (immunopositive or immunonegative). Hence, we detect seed points using a Gamma mixture model (GMM) with the Expectation-Maximization (EM) algorithm. EM is an iterative method used to find maximum-likelihood or maximum a posteriori parameter estimates; each iteration alternates between an expectation (E) step and a maximization (M) step for each parameter.
Let I be an image I = (I 1 , I 2 , I 3 , …, I Q ), where Q represents the number of pixels and I q denotes the gray-level intensity of pixel q, and let K = (K 1 , …, K Q ) be the configuration of labels to be inferred. As a first hypothesis, one may assume that the intensity of a segmented region follows a single Gaussian distribution with parameters θ i = (μ i , σ i ); however, this hypothesis is unable to model real-life objects, so for complex distributions a mixture model is the better choice. A mixture model with c components is represented as 28 :

p(I q ) = Σ i=1..c ω i g(I q | μ i , σ i ), with Σ i=1..c ω i = 1 (12)

where the component density with parameters (μ i , σ i ) can be written as

g(I q | μ i , σ i ) = (1 / (σ i √(2π))) exp(−(I q − μ i ) 2 / (2σ i 2 )) (13)

Combining equation 12 with equation 13, we obtain the weighted (posterior) probability that pixel q belongs to component i:

p(i | I q ) = ω i g(I q | μ i , σ i ) / Σ j=1..c ω j g(I q | μ j , σ j ) (14)

The seed points of immunopositive and immunonegative nuclei are marked in red and yellow, respectively. Finally, a patch of size 71 × 71 is cropped around the centroid of each selected seed point, and the patches are fed into our proposed deep learning framework.
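The EM alternation for a mixture over pixel intensities can be sketched as below. This toy uses two Gaussian components (matching Eqs. 12–14) on a handful of assumed intensity values; it is not the paper's Gamma-mixture implementation.

```python
# Toy sketch of EM for a two-component 1-D Gaussian mixture over intensities:
# the E-step computes posterior responsibilities (Eq. 14), and the M-step
# re-estimates the weights, means and variances from those responsibilities.
import math

def gaussian(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_two_component(data, iters=50):
    w = [0.5, 0.5]
    mu = [min(data), max(data)]        # crude initialization
    sigma = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel intensity
        resp = []
        for x in data:
            p = [w[i] * gaussian(x, mu[i], sigma[i]) for i in range(2)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: update weights, means and standard deviations
        for i in range(2):
            n_i = sum(r[i] for r in resp)
            w[i] = n_i / len(data)
            mu[i] = sum(r[i] * x for r, x in zip(resp, data)) / n_i
            var = sum(r[i] * (x - mu[i]) ** 2 for r, x in zip(resp, data)) / n_i
            sigma[i] = max(math.sqrt(var), 1e-3)
    return w, mu, sigma

# Assumed intensities clustered near two modes (e.g. blue vs. brown nuclei):
data = [0.1, 0.12, 0.09, 0.11, 0.8, 0.82, 0.79, 0.81]
w, mu, sigma = em_two_component(data)
print(mu)  # means converge near the two cluster centers
```

In the actual pipeline the fitted components separate nuclei from background, and the component memberships yield the seed points around which 71 × 71 patches are cropped.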

Proposed Deep Learning Model (DLM).
The proposed DLM has been developed using the CAFFE deep learning framework and the CUDA-enabled parallel computing platform 29 . The architectural details of the proposed DLM are given in Table 2. The model includes one decision layer, two fully connected layers, four max-pooling layers, five convolution layers and six ReLUs. The decision layer is added after the fifth convolutional layer. A ReLU is employed after each convolutional layer to speed up training, and a dropout layer is inserted after the first FC layer to avoid over-fitting. After rigorous experimentation, a dropout ratio of 0.5 was found to give the best result on this dataset. The workflow of the proposed DLM is illustrated in Fig. 4. Our proposed model learns from labeled data.

Results and Discussion
In this section, we assess the efficacy of our proposed deep learning framework. We randomly divided the image patch dataset into five subsets (5-fold cross-validation), each containing 20% of the total data. During each training phase we performed patch selection, model learning and classification using four subsets; the selected patches and the trained model were then used to assess performance on the left-out test subset. Five-fold cross-validation results are shown in Table 3, and performance for various combinations of training and testing data is given in Table 4. The seed point selection and object detection algorithms were developed using MATLAB and Python on a machine with an AMD Opteron processor, 128 GB RAM and an NVIDIA Titan X Pascal GPU. The proposed cascaded framework achieved a training accuracy of approximately 0.974 with a loss of 0.0945.
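The 5-fold protocol above can be sketched as follows. The patch identifiers are hypothetical stand-ins; only the split logic is illustrated.

```python
# Sketch of the 5-fold cross-validation protocol: each fold holds out 20% of
# the patches for testing and trains on the remaining four folds.
def five_fold_splits(items, k=5):
    folds = [items[i::k] for i in range(k)]    # round-robin fold assignment
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

patches = list(range(10))                      # stand-in for patch indices
for train, test in five_fold_splits(patches):
    assert len(test) == 2 and len(train) == 8
    assert sorted(train + test) == patches     # folds partition the dataset
print("5-fold split OK")
```

In the actual experiments, patch selection and model training are re-run within each fold so that no test patch influences training.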
Quantitative evaluation. The quantitative assessment results are shown in Table 5. Figure 5 illustrates the regression curve (R 2 = 0.9991) between automatic and manual hotspot detection; the graph indicates that the model's immunopositive and immunonegative nuclei counts are almost identical to the pathologists' counts. The model is evaluated using precision (Pr), recall (Re) and F-score, defined as 30 :

Pr = TP / (TP + FP), Re = TP / (TP + FN), F-score = (2 × Pr × Re / (Pr + Re)) × 100 (19)

Our proposed model achieved 93% precision, 88% recall and a 91% F-score. We also provide a confusion matrix for a better understanding of the results. Figure 6 shows the precision and recall curves across the 5-fold cross-validation.
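The reported metrics follow directly from the true-positive, false-positive and false-negative counts. The counts below are illustrative only (not the paper's data); they are chosen so the precision and recall round to the reported 93% and 88%.

```python
# Hedged sketch of the evaluation metrics: precision, recall and F-score
# computed from detection counts, as fractions.
def precision_recall_f(tp, fp, fn):
    pr = tp / (tp + fp)                # fraction of detections that are correct
    re = tp / (tp + fn)                # fraction of true nuclei that are found
    f = 2 * pr * re / (pr + re)        # harmonic mean of precision and recall
    return pr, re, f

# Illustrative counts only, not the paper's data:
pr, re, f = precision_recall_f(tp=88, fp=7, fn=12)
print(round(pr, 2), round(re, 2), round(f, 2))
```

Multiplying the fractions by 100 gives the percentage figures quoted in the text.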
Qualitative evaluation. The feature maps of the first through fifth convolution layers for immunopositive and immunonegative nuclei patches are displayed in Fig. 7, which presents two immunopositive and immunonegative example images. The feature maps generated by the various kernels in the convolution layers decode the expression level of the brown (immunopositive) and blue (immunonegative) color content of the nuclei; the Ki-67 expression differs across filters for immunopositive versus immunonegative nuclei. From the feature maps alone we can conjecture whether a nucleus is immunopositive or not, but for confirmation the patch must be classified. The proliferation score is computed as

Proliferation score = (TIP / (TIP + TIN)) × 100 (20)

where TIP is the total number of immunopositive nuclei and TIN is the total number of immunonegative nuclei. Table 6 shows the overall proliferation scores given by two pathologists and by our automated technique; the reference range column lists the standard proliferation categories and their ranges, which are the accepted gold standard in pathology. Comparing the proliferation scores of both pathologists with those of the proposed technique, the error rate is negligible in both cases: 0.06% in the less proliferated category, 0.01% in the average proliferated category, and 0% in the highly proliferated category. The proposed deep learning framework thus provides consistent and efficient results, as evident from the similar performance. The training performance graph is shown in Fig. 10; after 297,000 iterations the accuracy and loss curves saturate, so the graph is shown only up to 297,000 iterations. Figure 11 illustrates the ROC curve; the area under the curve (AUC) is 0.91.
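The proliferation-score computation of equation 20 reduces to a single ratio. The nuclei counts below are illustrative, not results from the paper.

```python
# Sketch of the proliferation score (Eq. 20): the percentage of immunopositive
# nuclei (TIP) among all counted nuclei (TIP + TIN).
def proliferation_score(tip, tin):
    return 100.0 * tip / (tip + tin)

score = proliferation_score(tip=30, tin=170)   # illustrative counts
print(score)  # 15.0
```

The resulting percentage is then compared against the standard reference ranges (Table 6) to assign the patient to a proliferation category.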
Comparison with the existing methods. From the exhaustive literature review, it is evident that quantification and proliferation rate scoring of Ki-67 stained BC (or other cancer) images using a deep learning approach has not been attempted so far. Moreover, there are a few limitations, e.g., non-standard datasets and conventional imaging approaches, because of which we cannot directly compare the performance of our proposed method with the existing methodologies. Based on technical similarities of the images, we compare the qualitative and quantitative performance with two recently published articles 15,31 on Ki-67 scoring in Table 7. Furthermore, we measured the efficiency of our proposed framework against other conventional configurations. Table 8 compares the performance of several combinations: the proposed method without the decision layer, GMM with a random forest but no deep network, GMM with an SVM but no deep network, the decision layer replaced by an additional FC layer, and the full proposed methodology. From Table 8, it is evident that the proposed method including the decision layer provides better precision, recall and F-score than the other techniques.

Conclusion
In this manuscript, our contribution is twofold: (i) the development of an efficient deep learning model comprising a decision layer for automated hotspot detection, and (ii) the development of an automatic proliferation rate scoring technique for Ki-67 positively stained BC images. The proposed deep learning model can compute the scoring index for any IHC image, provided that immunopositive nuclei manifest as brown and immunonegative nuclei as blue. The framework starts with seed point detection using a GMM, which makes the algorithm more robust and substantially eliminates unnecessary background objects. Our deep learning model considers both the pathologists' information and spatial similarity while detecting hotspots. The quantitative and qualitative evaluations show that the model achieves higher learning accuracy and better precision, recall and F-score than existing conventional techniques for Ki-67 scoring; its performance has also been compared with the pathologists' manual annotations. Prospectively, this model will be highly beneficial to pathologists for fast and efficient Ki-67 scoring from breast IHC (cancer) images.