Diagnosing thyroid nodules with atypia of undetermined significance/follicular lesion of undetermined significance cytology with a deep convolutional neural network

To compare the diagnostic performances of physicians and a deep convolutional neural network (CNN) in predicting malignancy from ultrasonography (US) images of thyroid nodules with atypia of undetermined significance (AUS)/follicular lesion of undetermined significance (FLUS) results on fine-needle aspiration (FNA). This study included 202 patients with 202 nodules ≥ 1 cm that were diagnosed as AUS/FLUS on FNA and underwent surgery at one of 3 institutions. Diagnostic performances were compared between 8 physicians (4 radiologists, 4 endocrinologists) with varying experience levels and the CNN, and the AUS and FLUS subgroups were analyzed separately. Interobserver variability was assessed among the 8 physicians. Of the 202 nodules, 158 were AUS and 44 were FLUS; 86 were benign and 116 were malignant. The areas under the curve (AUCs) of the 8 physicians and the CNN were 0.680–0.722 and 0.666, without significant differences (P > 0.05). In the subgroup analysis, the AUCs of the 8 physicians and the CNN were 0.657–0.768 and 0.652 for AUS, and 0.469–0.674 and 0.622 for FLUS. Interobserver agreement was moderate (k = 0.543) among the 8 physicians, substantial (k = 0.652) among the 4 radiologists, and moderate (k = 0.455) among the 4 endocrinologists. For thyroid nodules with AUS/FLUS cytology, the diagnostic performance of the CNN in differentiating malignancy on US images was comparable to that of physicians with variable experience levels.

Although US features can help stratify the risk of Bethesda class III lesions3,4, US assessment is limited in application owing to its inherently poor reproducibility5.
Recently, machine learning and deep learning methods have been developed and have rapidly become the methodology of choice for medical image analysis6,7. The deep convolutional neural network (CNN) is trained through an automated process using raw image pixels rather than the engineered features extracted by experts in traditional machine learning algorithms7. Many machine learning and deep learning techniques have been applied to thyroid cancer diagnosis8-12. When machine learning techniques using support vector machines were compared with an experienced radiologist, they showed lower accuracy13, whereas deep learning techniques showed accuracies similar to those of experienced radiologists and higher than those of inexperienced radiologists12,14.
Recently, we developed a computer-aided program that uses a deep CNN to diagnose thyroid nodules according to US features14. The CNN offers an objective, operator-independent method for differentiating benign from malignant lesions, and these advantages should be especially helpful for nodules with AUS/FLUS cytology on FNA in predicting the risk of malignancy and determining the next management step.
The purpose of this study was to compare the diagnostic performances of physicians with varying experience levels and a CNN in predicting malignancy using US images of thyroid nodules with Bethesda class III results on FNA.

Results
Table 1 summarizes the demographic features of the 202 included nodules. Surgery confirmed 86 (42.6%) benign nodules and 116 (57.4%) malignancies. The pathologic results after surgery are shown in Table 2. Of the 202 nodules, preoperative FNA found 158 with AUS cytology and 44 with FLUS cytology. There was no statistical difference between benign and malignant nodules in sex or age. Malignant nodules were significantly smaller than benign ones (P = 0.009) and had higher CNN-calculated cancer probabilities (P < 0.001).

Discussion
The AUS/FLUS category includes a heterogeneous and broad spectrum of diagnoses that show more pronounced architectural and/or nuclear atypia than benign lesions, but not enough to be considered malignant; its malignancy risk is 6-18% after NIFTP is excluded, which can make it difficult for clinicians to decide on further management2. For nodules in this category, repeat FNA/CNB or molecular tests can be performed as supplementary evaluation methods instead of proceeding to surgery; however, even repeated FNA yields the same cytology in 10-30% of nodules15. In nodules with AUS/FLUS cytology, US features can help stratify the malignancy risk3,4,16-18. A meta-analysis showed that the more suspicious US features a nodule has, the more likely it is to be malignant3; similar results were observed in nodules with AUS cytology, but not in those with FLUS cytology16,17. However, the US examination itself is highly subjective, operator dependent, and less reproducible than other imaging methods5,19.
CNN is a typical deep learning algorithm based on feature recognition9,20,21. It can automatically extract regular features from 2D images, including thyroid US, to achieve good diagnostic results; thus, CNN-assisted diagnosis is more objective and more reproducible than US assessment alone20,22-25. Several recent studies have shown comparable diagnostic performance between radiologists and CNNs for evaluating thyroid nodules on US22-25. This study mainly aimed to suggest a possible supportive role of CNN in predicting malignancy in AUS/FLUS lesions. Past studies have compared the diagnostic performances of CNNs and physicians, but to our knowledge, all of the physicians in those studies were radiologists22,24-26.
Our study compared the diagnostic performances of 8 physicians and CNN for diagnosing thyroid malignancy and the physicians in our study were a heterogeneous group of 4 radiologists and 4 endocrinologists with variable levels of experience.
Among newly developed machine learning and deep learning methods, CNN showed the highest accuracy and specificity in differentiating Bethesda category III nodules from Bethesda IV/V/VI nodules using US images27. That study was designed to support treatment decisions, but diagnostic accuracy was not compared between clinicians and the machine or deep learning approaches. In contrast, in our study both radiologists and endocrinologists with varying levels of experience performed US analyses to predict malignancy in thyroid nodules with AUS/FLUS cytology. We found the AUC of the CNN to be similar to those of the 8 physicians for diagnosing malignancy. The CNN showed higher sensitivity and lower specificity for diagnosing malignancy in AUS/FLUS lesions than the 8 physicians, consistent with other recent studies that reported higher sensitivity and lower specificity for CNNs compared with radiologists13,22,25,26. However, our results for both the CNN and the radiologists showed relatively lower sensitivity, higher specificity, and lower AUC values than other studies22,25,26. Our study included only nodules with AUS/FLUS confirmed at FNA, whereas other studies included thyroid nodules regardless of their FNA cytology. Furthermore, CNN architectures vary across studies, as do the cut-off values used to convert a CNN's probability output into a decision (there are diverse approaches to determining the cut-off value). The absolute values of the diagnostic performances are therefore affected by these differences, and it is more appropriate to compare trends than to weigh absolute values. Moreover, most of our study population consisted of AUS nodules (78.2%), and the CNN showed similar diagnostic performances for AUS and FLUS.
Interobserver variability is an important issue because US is highly subjective and operator dependent, as mentioned above, and diagnosis using captured JPEG images is even more subjective5,19. One study evaluated the interobserver variability of three radiologists with various experience levels (a resident, a fellow, and a staff radiologist) and observed moderate agreement for each US characteristic (k = 0.473-0.634) except shape (k = 0.034)26. Ko et al. reported fair interobserver variability between two radiologists using the TI-RADS of Kwak et al. and the criteria of Kim et al.25. We analyzed interobserver variability only for the risk levels of the ACR TI-RADS system, not for each US feature. Our results showed moderate interobserver variability among the 8 physicians. Agreement among the 4 radiologists was substantial, slightly superior to that of all 8 physicians and to that of the 4 endocrinologists. Although our 4 radiologists had different levels of experience with thyroid US, their daily work exposes them far more to US images, making them much more familiar with US images and the ACR TI-RADS system than the endocrinologists.
Our study has several limitations. First, there was selection bias due to the retrospective study design. Second, the total sample size was not large despite the multicenter design, and only 44 nodules (21.8%) had FLUS cytology, which is too few to generalize the findings to the entire population. Third, the malignancy rate after surgery was 57.4%, much higher than the rate expected under the Bethesda system2. For AUS/FLUS cytology, excision can be considered when repeat FNA/CNB or molecular tests are not helpful or when nodules show suspicious US characteristics; because we included only lesions that underwent surgery, a higher malignancy rate was expected. Fourth, we compared only the risk levels of the ACR TI-RADS system without considering each US feature, which was itself a point of disagreement among the 8 physicians (Supplementary Table 1).
In conclusion, the diagnostic performance of the CNN was comparable to that of physicians with variable experience levels in differentiating malignant from benign thyroid nodules with AUS/FLUS cytology on US.

Methods
This multicenter study was based on patient data collected from three tertiary referral institutions in South Korea. The institutional review board (IRB) of each institution approved this retrospective observational study, and the need for informed consent for the review of patient images and records was waived by all three IRBs (Kangbuk Samsung Hospital Institutional Review Board, 2020-03-020; Yonsei University Health System, Severance Hospital, Institutional Review Board, 4-2020-0106; and Seoul National University College of Medicine/Seoul National University Hospital Institutional Review Board, 1911-039-1076). This study was performed in accordance with relevant guidelines and regulations.
US examinations and imaging interpretation. US examinations were performed using several types of US machines (Supplementary Information 1). One clinician at each hospital reviewed the preoperative thyroid US images, selected the most representative image of each thyroid nodule, and saved the images as JPEG files (Fig. 3). A square region-of-interest (ROI) was then drawn to cover each whole nodule using the Microsoft Paint program (version 6.1; Microsoft Corporation, Redmond, WA, USA). The saved images from the 3 hospitals were randomly mixed and numbered by an experienced radiologist (Fig. 3). They were independently reviewed by the following 8 physicians, none of whom had information on the cytopathologic results of each thyroid nodule: 2 faculty radiologists (7 and 10 years of experience in thyroid imaging), 2 less experienced radiologists (2 and 4 years of experience), 2 faculty endocrinologists (more than 5 years of experience each), and 2 less experienced endocrinologists (1 year of experience each). Before reviewing the captured images, all 8 physicians were trained using the ACR TI-RADS user's guide28. The 8 physicians evaluated the following US features using the TI-RADS system proposed by the ACR28: composition (cystic or almost completely cystic, spongiform, mixed cystic and solid, solid or almost completely solid), echogenicity (anechoic, hyperechoic or isoechoic, hypoechoic, very hypoechoic), shape (wider-than-tall, taller-than-wide), margin (smooth, ill-defined, lobulated or irregular, extrathyroidal extension), and echogenic foci (none or large comet-tail artifacts, macrocalcifications, peripheral calcifications, punctate echogenic foci).
The 8 physicians determined malignancy risk using the ACR TI-RADS system; the assigned risk levels ranged from TI-RADS (TR) 1 (benign, 0 points) through TR2 (not suspicious, 2 points), TR3 (mildly suspicious, 3 points), and TR4 (moderately suspicious, 4-6 points) to TR5 (highly suspicious, 7 or more points) (Supplementary Table 2)28.
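As an illustration only (not part of the study's software), the point-to-level scheme above can be sketched as a small Python function; the handling of a 1-point total, which the scheme does not list explicitly, is an assumption made here:

```python
def tirads_level(points: int) -> str:
    """Map a total ACR TI-RADS point score to a TR risk level.

    Thresholds follow the scheme quoted in the text:
    0 -> TR1, 2 -> TR2, 3 -> TR3, 4-6 -> TR4, >= 7 -> TR5.
    A total of 1 point is not listed in the scheme; treating it
    as TR1 is an assumption of this sketch.
    """
    if points >= 7:
        return "TR5"  # highly suspicious
    if points >= 4:
        return "TR4"  # moderately suspicious
    if points == 3:
        return "TR3"  # mildly suspicious
    if points == 2:
        return "TR2"  # not suspicious
    return "TR1"      # benign
```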
Deep convolutional neural network. In this study, we used a computer-aided diagnosis (CAD) program to differentiate malignant from benign lesions, which was recently developed with 13,560 US images of thyroid nodules using a deep convolutional neural network14. The CAD program was based on a transfer learning technique with fine-tuning in order to overcome the limited amount of data and maximize accuracy through a combination of big data and deep learning. Four sophisticated pre-trained networks (AlexNet, SqueezeNet, GoogLeNet, and Inception-ResNet-v2) were used, and a weighted average process was performed (see Supplementary Information 2 and Supplementary Fig. 1 for details on the averaging process). To train the networks with the fine-tuning process, stochastic gradient descent with momentum was used as the solver, and the parameter values (initial learning rate, learning-rate dropping periods, maximum epochs, mini-batch sizes, etc.) were chosen through a selection process that included Bayesian optimization.
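The study's actual averaging process is described in its supplement; as a minimal sketch only, a weighted average of per-network malignancy probabilities (the probabilities and weights below are hypothetical, not taken from the study) could look like:

```python
def ensemble_probability(probs, weights):
    """Weighted average of per-network malignancy probabilities.

    probs   - one probability in [0, 1] from each pre-trained network
    weights - one non-negative weight per network (need not sum to 1)
    """
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Hypothetical outputs from the four fine-tuned networks for one image:
combined = ensemble_probability([0.80, 0.60, 0.70, 0.90],
                                [1.0, 1.0, 1.0, 1.0])
```

With equal weights this reduces to a plain mean; unequal weights let better-performing networks contribute more to the combined probability.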

Statistical analysis.
We collected data on the final diagnosis of each thyroid nodule after surgery as recorded in the electronic medical records of each hospital. Cancer probabilities were calculated using the CNN and presented as percentages (0-100%). Categorical data were summarized as frequencies and percentages, and continuous variables were presented as means ± standard deviations or medians (interquartile ranges). The Shapiro-Wilk test was performed to assess the normality of continuous variables. We evaluated differences in variables using the independent two-sample t-test, Mann-Whitney U test, Chi-square test, or Fisher's exact test. Sensitivities and specificities of the 8 physicians and the CNN for predicting malignancy were evaluated and compared by generalized estimating equation (GEE). Of the risk levels of the ACR TI-RADS system, we used a cut-off point of TR5 for the 8 physicians. The cut-off values of the CNN were determined with Youden's index. Receiver operating characteristic (ROC) curve analysis was performed, and areas under the curve (AUCs) were compared with DeLong's test. The diagnostic performances of the 8 physicians and the CNN were also evaluated separately in the AUS and FLUS groups and compared using ROC curve analysis.
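As an illustrative sketch only (the study's analyses were run in SAS and R), Youden's index selects the probability cut-off that maximizes sensitivity + specificity − 1:

```python
def youden_cutoff(scores, labels):
    """Choose the cut-off on predicted probabilities that maximizes
    Youden's J = sensitivity + specificity - 1.

    scores - predicted malignancy probabilities, one per nodule
    labels - 1 for malignant, 0 for benign (surgical pathology)
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        j = tp / pos + tn / neg - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

Nodules scoring at or above the returned cut-off would then be classified as malignant.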
We evaluated interobserver variability among all 8 physicians using Fleiss' Kappa, and then divided the physicians into 2 groups to also compare interobserver variability among the 4 radiologists and among the 4 endocrinologists separately with Fleiss' Kappa. A kappa value (k) of less than 0 indicated no agreement; 0-0.20, slight agreement; 0.21-0.40, fair agreement; 0.41-0.60, moderate agreement; 0.61-0.80, substantial agreement; and 0.81-1.00, almost perfect agreement 29 .
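As an illustrative sketch only (again, not the study's SAS/R code), Fleiss' kappa can be computed from a table of per-item category counts, where each nodule is an item, each TR risk level a category, and each physician a rater:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from per-item category counts.

    counts[i][c] = number of raters who assigned item i to category c;
    every item must be rated by the same number of raters.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # Overall proportion of all assignments falling in each category.
    p = [sum(row[c] for row in counts) / (n_items * n_raters)
         for c in range(n_cats)]
    # Observed agreement: pairwise rater agreement per item, averaged.
    p_obs = sum((sum(x * x for x in row) - n_raters) /
                (n_raters * (n_raters - 1)) for row in counts) / n_items
    # Agreement expected by chance.
    p_exp = sum(q * q for q in p)
    return (p_obs - p_exp) / (1 - p_exp)
```

Perfect agreement yields kappa = 1, and chance-level agreement yields kappa near 0, matching the interpretation scale quoted above.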
All P values were two-tailed, and P < 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SAS software, version 9.4 (SAS Institute, Inc., Cary, NC, USA) and R (R Core Team, 2020; R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/).

A 12-mm thyroid nodule was diagnosed as FLUS on US-guided FNA. The cancer probability calculated by the CNN was 88.1%. The patient underwent surgery, and pathology confirmed encapsulated angioinvasive follicular carcinoma.