Introduction

In hospital emergency rooms, radiologists are often asked to examine patients with fractures in various parts of the body, such as the wrist and arm. Fractures are generally classified as open or closed: an open fracture occurs when the bone pierces the skin, while a closed fracture leaves the skin intact despite the broken bone. Before performing surgery, the surgeon must take the patient's medical history and conduct a thorough examination to diagnose the fracture. In modern medical imaging, three types of devices are commonly used to diagnose fractures: X-ray, Magnetic Resonance Imaging (MRI), and Computed Tomography (CT)1. Among these, X-ray is the most widely used due to its cost-effectiveness.

Fractures of the distal radius and ulna account for the majority of wrist trauma in pediatric patients2,3. Large hospitals in developed areas employ many experienced radiologists who can correctly analyze X-ray images, whereas small hospitals in underdeveloped regions often have only young and inexperienced surgeons who may misinterpret them. A shortage of radiologists therefore seriously jeopardizes timely patient care4,5. In particular, some hospitals in Africa have only limited access to specialist reports6, which badly affects the probability of surgical success. According to surveys7,8, the percentage of misinterpreted X-ray images has reached 26%.

With the advancement of deep learning, neural network models have been introduced in medical image processing9,10,11,12. In recent years, researchers have started to apply object detection models13,14,15 to fracture detection16,17,18,19, which is a popular research topic in computer vision (CV).

Deep learning methods in the field of object detection are divided into two-stage and one-stage algorithms. Two-stage models such as R-CNN13 and its improved variants20,21,22,23,24,25,26 generate locations and class probabilities in two stages, whereas one-stage models produce them directly, which improves inference speed. In addition to classical one-stage models such as SSD27, RetinaNet28, CornerNet29, CenterNet30, and CentripetalNet31, the You Only Look Once (YOLO) series32,33,34 is preferred for real-time applications35 due to its good balance between accuracy and inference speed.

In this paper, we first use the YOLOv8 algorithm36 to train models of different sizes on the GRAZPEDWRI-DX37 dataset. After evaluating the performance of these YOLOv8 models, we train the models with data augmentation to detect wrist fractures in children. We compare YOLOv8 models trained with our method against YOLOv7 and its improved models, and the experimental results demonstrate that our models achieve the highest mean average precision (mAP 50).

The contributions of this paper are summarized as follows:

  • We use data augmentation to improve the performance of the YOLOv8 model. The experimental results show that the mean average precision of the YOLOv8 model trained with our method for fracture detection on the GRAZPEDWRI-DX dataset reaches the state-of-the-art (SOTA) level.

  • This work develops an application to detect wrist fractures in children, which aims to help pediatric surgeons interpret X-ray images without the assistance of a radiologist and to reduce the probability of X-ray image analysis errors.

This paper is structured as follows: Section "Related work" describes deep learning methods for fracture detection and the application of the YOLOv5 model in medical image processing. Section "Proposed method" introduces the whole training process and the architecture of our model. Section "Experiments" presents the improved performance of the YOLOv8 model trained with our method compared with YOLOv7 and its improved models. Section "Application" describes our proposed application to assist pediatric surgeons in analyzing X-ray images. Finally, Section "Conclusions and future work" discusses the conclusions and future work of this paper.

Table 1 Experimental results of other studies on fracture detection in various parts of the body based on deep learning method.

Related work

In recent years, neural networks have been widely applied to fracture detection in image data. Guan et al.38 achieved an average precision of 82.1% on 3,842 thigh fracture X-ray images using the Dilated Convolutional Feature Pyramid Network (DCFPN). Wang et al.39 employed ParallelNet, a novel R-CNN13-style network, as the backbone for fracture detection on the same 3,842 thigh fracture X-ray images. For arm fracture detection, Guan et al.40 used R-CNN on the Musculoskeletal-Radiograph (MURA) dataset41 and obtained an average precision of 62.04%. Ma and Luo43 used Faster R-CNN21 for fracture detection on a subset of 1,052 bone images of their dataset and their proposed CrackNet model for fracture classification on the whole dataset. Wu et al.42 proposed the Feature Ambiguity Mitigate Operator (FAMO) model, based on ResNeXt10147 and FPN48, for bone fracture detection on 9,040 radiographs of various body parts. Qi et al.49 utilized Fast R-CNN20 with ResNet5050 as the backbone to detect nine different types of fractures on 2,333 fracture X-ray images. Xue et al.44 applied the Faster R-CNN model to hand fracture detection on 3,067 hand trauma X-ray images, achieving an average precision of 70.0%. Sha et al.45,46 used the YOLOv251 and Faster R-CNN21 models for fracture detection on 5,134 CT images of spine fractures. Their experiments showed that YOLOv2 reached an average precision of 75.3%, higher than the 73.3% of Faster R-CNN, with an inference time of 27 ms per CT image, much faster than the 381 ms of Faster R-CNN. As Table 1 shows, even though most of the works using R-CNN series models reported excellent results, their inference speed is not satisfactory.

Figure 1

Flowchart of model training, validation, and testing on the dataset. The training set is extended by data augmentation, which doubles its number of X-ray images.

Figure 2

The architecture of the YOLOv8 algorithm, which is divided into four parts: backbone, neck, head, and loss.

YOLO series models32,33,34 offer a good balance between accuracy and inference speed, which makes them suitable for real-time X-ray image detection on mobile devices. Hržić et al.52 proposed a machine learning model based on the YOLOv4 method to help radiologists diagnose fractures and demonstrated that the AUC-ROC (area under the receiver operating characteristic curve) value of their YOLO 512 Anchor model-AI was significantly higher than that of radiologists. The YOLOv5 model53, proposed by Ultralytics in 2021, has been deployed on mobile phones as the "iDetection" application. On this basis, Yuan et al.54 employed external attention and 3D feature fusion techniques in the YOLOv5 model to detect skull fractures in CT images. Warin et al.55 used the YOLOv5 model to detect maxillofacial fractures in 3,407 maxillofacial bone CT images and classified the fracture conditions into frontal, midfacial, mandibular, and no fracture. Rib fractures are a precursor injury of physical abuse in children, and chest X-ray (CXR) images are preferred for effective diagnosis of rib fracture conditions because of their convenience and low radiation dose. Tsai et al.56 used data augmentation with the YOLOv5 model to detect rib fractures in CXR images, and Burkow et al.57 applied the YOLOv5 model to detect rib fractures in 704 pediatric CXR images, obtaining an F2 score of 0.58. To identify and detect mandibular fractures in panoramic radiographs, Warin et al.58 used convolutional neural networks (CNNs) and the YOLOv5 model. Fatima et al.59 used the YOLOv5 model to localize vertebrae, which is important for detecting spinal deformities and fractures, and obtained an average precision of 0.94 at an IoU (Intersection over Union) threshold of 0.5. Moreover, Mushtaq et al.60 applied the YOLOv5 model to localize the lumbar spine and obtained an average precision of 0.975. Nevertheless, relatively little research has been reported on pediatric wrist fracture detection using the YOLOv5 model. Since YOLOv8 was proposed by Ultralytics in 2023, we use this algorithm to train a model for pediatric wrist fracture detection for the first time.

Proposed method

In this section, we introduce the process of model training, validation, and testing on the dataset, the architecture of the YOLOv8 model, and the data augmentation technique employed during training. Figure 1 illustrates the flowchart of the model training process and performance evaluation. We randomly divide the 20,327 X-ray images of the GRAZPEDWRI-DX dataset into training, validation, and test sets, where the training set is expanded from the original 14,204 X-ray images to 28,408 by data augmentation. We design our model according to the YOLOv8 algorithm, whose architecture is shown in Fig. 2.

Data augmentation

During the model training process, data augmentation is employed in this work to extend the dataset. Specifically, we adjust the contrast and brightness of the original X-ray images to enhance the visibility of the bone-anomaly class. This is achieved using the addWeighted function in OpenCV (Open Source Computer Vision Library). The equation is presented below:

$$\begin{aligned} Output = Input_1 \times \alpha + Input_2 \times \beta + \gamma , \end{aligned}$$
(1)

where \(Input_1\) and \(Input_2\) are two input images of the same size, \(\alpha \) is the weight assigned to the first input image, \(\beta \) is the weight assigned to the second input image, and \(\gamma \) is a scalar added to each weighted sum. Since our purpose is to adjust the contrast and brightness of the original input image, we use the same image as both \(Input_1\) and \(Input_2\) and set \(\beta \) to 0, so that \(\alpha \) and \(\gamma \) control the contrast and brightness of the image, respectively. An image after adjusting the contrast and brightness is shown in Fig. 3. After comparing different settings, we set \(\alpha \) to 1.2 and \(\gamma \) to 30 to avoid the output image being too bright.
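A minimal sketch of this augmentation, assuming 8-bit X-ray images (the file names are illustrative; \(\alpha \), \(\beta \), and \(\gamma \) are the values chosen above):

```python
import cv2

# Load the original X-ray image (file name is illustrative).
image = cv2.imread("wrist_xray.png")

# Eq. (1) with Input_1 = Input_2 = image, alpha = 1.2 (contrast),
# beta = 0 (second image unused), gamma = 30 (brightness offset).
augmented = cv2.addWeighted(image, 1.2, image, 0.0, 30)

# Save the adjusted image alongside the original to double the training set.
cv2.imwrite("wrist_xray_aug.png", augmented)
```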

Figure 3

Examples of pediatric wrist X-ray images using data augmentation. (a) the original images, (b) the adjusted images.

Figure 4

Detailed illustration of YOLOv8 model architecture. The Backbone, Neck, and Head are the three parts of our model, and C2f, ConvModule, DarknetBottleneck, and SPPF are modules.

Model architecture

Our model architecture consists of a backbone, neck, and head, as shown in Fig. 4. In the following subsections, we introduce the design concepts of each part of the architecture and its modules.

Backbone

The backbone of the model uses Cross Stage Partial (CSP)61 architecture to split the feature map into two parts. The first part uses convolution operations, and the second part is concatenated with the output of the previous part. The CSP architecture improves the learning ability of the CNNs and reduces the computational cost of the model.

YOLOv836 introduces the C2f module, which combines the C3 module with the ELAN concept from YOLOv732 to give the model richer gradient flow information. The C3 module consists of three ConvModules and n DarknetBottlenecks, while the C2f module consists of two ConvModules and n DarknetBottlenecks connected through Split and Concat operations, as illustrated in Fig. 4, where a ConvModule consists of Conv-BN-SiLU and n is the number of bottlenecks. Unlike YOLOv553, we use the C2f module instead of the C3 module.
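The following is a minimal PyTorch sketch of the C2f structure described above (class and argument names are ours for illustration, not the official Ultralytics implementation):

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """Conv-BN-SiLU block, as described in the text."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class DarknetBottleneck(nn.Module):
    """Two 3x3 ConvModules with a residual connection."""
    def __init__(self, c):
        super().__init__()
        self.cv1 = ConvModule(c, c, 3)
        self.cv2 = ConvModule(c, c, 3)

    def forward(self, x):
        return x + self.cv2(self.cv1(x))

class C2f(nn.Module):
    """Split after the first ConvModule, run n bottlenecks, concatenate
    every intermediate output, and fuse with a second ConvModule."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = ConvModule(c_in, 2 * self.c)
        self.cv2 = ConvModule((2 + n) * self.c, c_out)
        self.m = nn.ModuleList(DarknetBottleneck(self.c) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))   # Split into two halves
        for m in self.m:
            y.append(m(y[-1]))                  # each bottleneck feeds the next
        return self.cv2(torch.cat(y, dim=1))    # Concat all branches, then fuse

# Quick shape check: a C2f block with 3 bottlenecks, as in Stage 1.
block = C2f(64, 64, n=3)
print(block(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```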

Furthermore, we reduce the number of blocks in each stage compared to YOLOv5 to further reduce the computational cost. Specifically, our model uses 3, 6, 6, and 3 blocks in Stage 1 to Stage 4, respectively. Additionally, we adopt the Spatial Pyramid Pooling-Fast (SPPF) module in Stage 4, an improvement over Spatial Pyramid Pooling (SPP)62, to improve the inference speed of the model. These modifications give our model better learning ability and shorter inference time.

Neck

Generally, deeper networks obtain more feature information, resulting in better dense prediction. However, excessively deep networks lose the location information of objects, and too many convolution operations lead to information loss for small objects. It is therefore necessary to use the Feature Pyramid Network (FPN)48 and Path Aggregation Network (PAN)63 architectures for multi-scale feature fusion. As illustrated in Fig. 4, the neck of our model uses multi-scale feature fusion to combine features from different layers of the network. The upper layers acquire more feature information due to the additional network layers, whereas the lower layers preserve location information due to fewer convolution layers.

As in YOLOv5, the FPN upsamples from top to bottom to increase the amount of feature information in the bottom feature maps, while the PAN downsamples from bottom to top to preserve more location information in the top feature maps. The two feature outputs are merged to ensure precise predictions for objects of various sizes. We adopt this FP-PAN (Feature Pyramid-Path Aggregation Network) in our model and remove the convolution operations in upsampling to reduce the computational cost.

Head

Unlike the YOLOv5 model, which uses a coupled head, we use a decoupled head33, where the classification and detection heads are separated. As Fig. 4 illustrates, our model removes the objectness branch and retains only the classification and regression branches. Anchor-based detection places a large number of anchors in the image and determines the four offsets of the regression object relative to those anchors, adjusting the precise object location using the corresponding anchors and offsets. In contrast, we adopt an anchor-free approach64, which identifies the center of the object and estimates the distances from the center to the bounding box.

Loss

For positive and negative sample assignment, our model training uses the Task Aligned Assigner of Task-aligned One-stage Object Detection (TOOD)65, which selects positive samples based on the weighted scores of classification and regression, as shown in Eq. (2) below:

$$\begin{aligned} t=s^\alpha \times u^\beta , \end{aligned}$$
(2)

where s is the predicted score of the labeled class, u is the IoU between the predicted and the ground truth bounding boxes, and \(\alpha \) and \(\beta \) are hyperparameters weighting the classification and localization terms.

In addition, our model has classification and regression branches, where the classification branch uses the Binary Cross-Entropy (BCE) Loss, given by:

$$\begin{aligned} Loss_{n}=-w\left[ y_{n} \log x_{n}+\left( 1-y_{n}\right) \log \left( 1-x_{n}\right) \right] , \end{aligned}$$
(3)

where w is the weight, \(y_n\) is the labeled value, and \(x_n\) is the predicted value of the model.
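As a quick numeric check of Eq. (3), with illustrative values and w = 1:

```python
import math

# One positive sample (y_n = 1) predicted with probability x_n = 0.9, weight w = 1.
w, y_n, x_n = 1.0, 1.0, 0.9
loss = -w * (y_n * math.log(x_n) + (1 - y_n) * math.log(1 - x_n))
print(loss)  # ~0.105; the loss approaches 0 as x_n approaches the label
```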

The regression branch uses the Distribution Focal Loss (DFL)66 and the Complete IoU (CIoU) Loss67, where DFL concentrates the predicted probability mass on the values around the target y. Its equation is shown as follows:

$$\begin{aligned} DFL(\mathscr {S}_{n}, \mathscr {S}_{n+1})=-((y_{n+1}-y) \log (\mathscr {S}_{n}) +(y-y_{n}) \log (\mathscr {S}_{n+1})), \end{aligned}$$
(4)

where the equations of \(\mathscr {S}_n\) and \(\mathscr {S}_{n+1}\) are shown below:

$$\begin{aligned} \mathscr {S}_{n}=\frac{y_{n+1}-y}{y_{n+1}-y_n}, \; \mathscr {S}_{n+1}=\frac{y-y_n}{y_{n+1}-y_n}. \end{aligned}$$
(5)
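As a worked example of Eqs. (4) and (5) with illustrative values: for a continuous target y = 4.3 lying between the discrete bins y_n = 4 and y_{n+1} = 5, Eq. (5) gives soft labels of 0.7 and 0.3, and the loss reaches its minimum when the predicted probabilities match them:

```python
import math

# Continuous regression target y between discrete bins y_n and y_{n+1}.
y, y_n, y_n1 = 4.3, 4.0, 5.0

# Eq. (5): interpolation weights for the two neighboring bins.
s_n = (y_n1 - y) / (y_n1 - y_n)    # 0.7
s_n1 = (y - y_n) / (y_n1 - y_n)    # 0.3

# Eq. (4): DFL evaluated at these probabilities (its minimum for this target).
dfl = -((y_n1 - y) * math.log(s_n) + (y - y_n) * math.log(s_n1))
print(round(dfl, 3))  # ~0.611
```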

CIoU Loss adds an influence factor to the Distance IoU (DIoU) Loss68 by considering the aspect ratio of the predicted and ground truth bounding boxes. The equation is shown below:

$$\begin{aligned} CIoU_{Loss}=1-IoU+\frac{Distance_2^2}{Distance_C^2}+\frac{\nu ^2}{(1-IoU)+\nu }, \end{aligned}$$
(6)

where \(\nu \) is the parameter that measures the consistency of the aspect ratio, defined as follows:

$$\begin{aligned} \nu =\frac{4}{\pi ^2}\left( \arctan \frac{w^{gt}}{h^{gt}}-\arctan \frac{w^{p}}{h^{p}}\right) ^2, \end{aligned}$$
(7)

where \(w^{gt}\) and \(h^{gt}\) are the width and height of the ground truth bounding box, and \(w^{p}\) and \(h^{p}\) are those of the predicted bounding box.
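A self-contained sketch of Eqs. (6) and (7) for axis-aligned boxes (a hypothetical helper for illustration, not the training implementation):

```python
import math

def ciou_loss(p, g):
    """Eqs. (6)-(7) for boxes in (x1, y1, x2, y2) form (illustrative helper)."""
    # Plain IoU of the two boxes.
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (p[2] - p[0]) * (p[3] - p[1])
    area_g = (g[2] - g[0]) * (g[3] - g[1])
    iou = inter / (area_p + area_g - inter)

    # Distance_2: squared distance between the two box centers.
    d2 = ((p[0] + p[2]) - (g[0] + g[2])) ** 2 / 4 + \
         ((p[1] + p[3]) - (g[1] + g[3])) ** 2 / 4

    # Distance_C: squared diagonal of the smallest enclosing box.
    dc2 = (max(p[2], g[2]) - min(p[0], g[0])) ** 2 + \
          (max(p[3], g[3]) - min(p[1], g[1])) ** 2

    # Eq. (7): aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (math.atan((g[2] - g[0]) / (g[3] - g[1]))
                              - math.atan((p[2] - p[0]) / (p[3] - p[1]))) ** 2

    return 1 - iou + d2 / dc2 + v ** 2 / ((1 - iou) + v)

print(round(ciou_loss((10, 10, 50, 60), (12, 14, 48, 58)), 3))  # ~0.208
```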

Ethics approval

This research does not involve human participants and/or animals.

Experiments

Dataset

The Medical University of Graz provides a public dataset named GRAZPEDWRI-DX37, which consists of 20,327 X-ray images of wrist trauma in children. These images were collected from 6,091 patients between 2008 and 2018 by multiple pediatric radiologists at the Department of Pediatric Surgery of the University Hospital Graz. The images are annotated with bounding boxes in 9 different classes.

To perform the experiments shown in Table 5 and Table 6, we divide the GRAZPEDWRI-DX dataset randomly into three sets: training, validation, and test. The sizes of these sets are approximately 70%, 20%, and 10% of the original dataset, respectively. Specifically, our training set consists of 14,204 images (69.88%), our validation set consists of 4,094 images (20.14%), and our test set consists of 2,029 images (9.98%). The code for splitting the dataset can be found on our GitHub. We also provide csv files of the training, validation, and test data on our GitHub, but note that each split is random and therefore not reproducible.
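A minimal sketch of such a random 70/20/10 split (the directory layout and file names are hypothetical; the authors' actual script is on their GitHub):

```python
import random
from pathlib import Path

# Gather all image paths (directory name is illustrative) and shuffle them.
images = sorted(Path("GRAZPEDWRI-DX/images").glob("*.png"))
random.shuffle(images)

# 70% / 20% / 10% split; note the result differs on every run.
n = len(images)
n_train, n_valid = int(0.7 * n), int(0.2 * n)
splits = {
    "train": images[:n_train],
    "valid": images[n_train:n_train + n_valid],
    "test": images[n_train + n_valid:],
}

# Write one file list per split.
for name, files in splits.items():
    with open(f"{name}.txt", "w") as f:
        f.write("\n".join(str(p) for p in files))
```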

Evaluation metric

Intersection over Union (IoU)

Intersection over Union (IoU) is a classical metric for evaluating the performance of an object detection model. It calculates the ratio of the overlap to the union between the generated candidate bounding box and the ground truth bounding box. The IoU is given by the following equation:

$$\begin{aligned} IoU=\frac{area(C \cap G)}{area(C \cup G)}, \end{aligned}$$
(8)

where C represents the generated candidate bounding box, and G represents the ground truth bounding box containing the object. The performance of the model improves as the IoU value increases, with higher IoU values indicating less difference between the generated candidate and ground truth bounding boxes.
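For concreteness, Eq. (8) for axis-aligned boxes in (x1, y1, x2, y2) form can be computed as follows (a simple illustrative helper):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the overlap.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```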

Precision-recall curve

Precision-Recall Curve (P-R Curve)69 is a curve with recall on the x-axis and precision on the y-axis. Each point represents a different threshold value, and all points are connected as a curve. The recall (R) and precision (P) are calculated according to the following equations:

$$\begin{aligned} Recall=\frac{T_P}{T_P+F_N}, \; Precision=\frac{T_P}{T_P+F_P}, \end{aligned}$$
(9)

where a True Positive (\(T_P\)) is a positive prediction that matches the ground truth; a False Positive (\(F_P\)) is a positive prediction that is incorrect; and a False Negative (\(F_N\)) is a ground truth object that the model fails to predict.

Table 2 Validation results of YOLOv8 for each class on the GRAZPEDWRI-DX dataset when the input image size is 1024.
Table 3 Validation results of our model for each class on the GRAZPEDWRI-DX dataset when the input image size is 1024.
Table 4 Model performance comparison of YOLOv8 models using SGD and Adam optimizers.
Table 5 Quantitative comparison of fracture detection when the input image size is 640.
Table 6 Quantitative comparison of fracture detection when the input image size is 1024.
Figure 5

Detailed illustration of the validation at the input image size of 1024, (a) is our model, and (b) is YOLOv8 model.

Figure 6

Examples of pediatric wrist fracture detection on X-ray images. (a) manually labeled images, (b) predicted images.

Table 7 Evaluation of wrist fracture detection with other state-of-the-art (SOTA) models on the GRAZPEDWRI-DX dataset.
Figure 7

Example of using the application "Fracture Detection Using YOLOv8 App" on the macOS operating system.

F1-score

The F-score is a commonly used metric for evaluating model accuracy; it provides a balanced measure of performance by incorporating both precision and recall. The F-score equation is as follows:

$$\begin{aligned} {\text {F-score }} = \frac{\left( 1+\beta ^{2}\right) \times Precision \times Recall}{\beta ^{2} \times Precision + Recall} \end{aligned}$$
(10)

When \(\beta \) = 1, the F1-score is determined by the harmonic mean of precision and recall, and its equation is as follows:

$$\begin{aligned} F_{1}=\frac{2 \times Precision \times Recall}{Precision + Recall}=\frac{2T_P}{2T_P+F_P+F_N} \end{aligned}$$
(11)
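As a numeric check of Eqs. (9) and (11), with illustrative counts:

```python
# Illustrative detection counts, not results from the paper.
tp, fp, fn = 80, 20, 40

precision = tp / (tp + fp)        # 0.8
recall = tp / (tp + fn)           # ~0.667
f1 = 2 * tp / (2 * tp + fp + fn)  # ~0.727, the harmonic mean of P and R
print(precision, recall, f1)
```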

Experiment setup

During the model training process, we utilize YOLOv8 models pre-trained on the MS COCO (Microsoft Common Objects in Context) val2017 dataset72. The research reports provided by Ultralytics36,53 suggest that YOLOv5 training requires 300 epochs, while training YOLOv8 requires 500 epochs. Since we use pre-trained models, we initially set the total number of epochs to 200 with a patience of 50, meaning that training ends early if no improvement is observed for 50 epochs. In the experiment comparing the effect of the optimizer on model performance, we notice that the best epoch of every model is within the first 100, as shown in Table 4, mostly concentrated between epochs 50 and 70. Therefore, to save computing resources, we reduce the number of epochs for our model training to 100.

As suggested by Glenn36, for model training hyperparameters, the Adam73 optimizer is more suitable for small custom datasets, while the SGD74 optimizer performs better on larger datasets. To verify this conclusion, we train YOLOv8 models using the Adam and SGD optimizers respectively and compare their effects on model performance. The comparison results are shown in Table 4.

For the experiments, we choose the SGD optimizer with an initial learning rate of 1\(\times 10^{-2}\), a weight decay of 5\(\times 10^{-4}\), and a momentum of 0.937 for our model training. We set the input image size to 640 and 1024 and train on a single GeForce RTX 3080 Ti 12GB GPU with a batch size of 16. We train the model using Python 3.8 and PyTorch 1.8.2 and recommend using Python 3.7 or higher and PyTorch 1.7 or higher. Notably, due to GPU memory limitations, we use 3 worker threads to load data when training on this GPU; using GPUs with larger memory and more computing power can therefore effectively increase the speed of model training.
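Under these settings, a training run with the Ultralytics API would look roughly as follows (the dataset YAML name is a placeholder; the hyperparameter arguments mirror the values above):

```python
from ultralytics import YOLO

# Start from the COCO pre-trained YOLOv8s checkpoint.
model = YOLO("yolov8s.pt")

# "grazpedwri.yaml" is a hypothetical dataset configuration file.
model.train(
    data="grazpedwri.yaml",
    epochs=100,
    patience=50,
    imgsz=1024,          # or 640 for the smaller input size
    batch=16,
    optimizer="SGD",
    lr0=0.01,            # initial learning rate, 1e-2
    momentum=0.937,
    weight_decay=0.0005, # 5e-4
    workers=3,           # limited by the 12 GB of GPU memory
)
```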

Ablation study

To demonstrate the positive effect of our training method on the performance of the YOLOv8 model, we conduct an ablation study on the YOLOv8s model by calculating each evaluation metric for each class, as shown in Table 2. Among all classes, the YOLOv8s model detects fracture, metal, and text accurately, with mAP 50 above 0.9 for each. In contrast, its ability to detect bone-anomaly is poor, with an mAP 50 of 0.11. Therefore, we increase the contrast and brightness of the X-ray images to make bone anomalies easier to detect. Table 3 presents the predictions of the YOLOv8s model trained with our method for each class. Compared with the baseline YOLOv8s model, the mAP value for bone-anomaly increases from 0.11 to 0.169, an improvement of 53.6%. Figure 5 also shows that our model performs better in detecting bone-anomaly, which improves the overall model performance. This ablation study demonstrates that model performance can be improved by our training method (data augmentation). In addition to data augmentation, researchers can also improve model performance by adding modules such as the Convolutional Block Attention Module (CBAM)70.

Experimental results

Before training our model, in order to choose the optimizer with the more positive effect on model performance, we compare the performance of models trained with the SGD74 and Adam73 optimizers. As shown in Table 4, training with the SGD optimizer requires fewer epochs of weight updates. Specifically, for the YOLOv8m model with an input image size of 1024, the model trained with the SGD optimizer achieves its best performance at the 35th epoch, while the model trained with the Adam optimizer peaks at the 70th epoch. In terms of mAP and inference time, there is little difference between the two optimizers. For example, when the input image size is 640, the mAP value of the YOLOv8s model trained with the SGD optimizer is 0.007 higher than that of the model trained with the Adam optimizer, while its inference time is 0.1 ms slower. Therefore, according to these experimental results and the suggestion by Glenn36,53, we choose the Adam optimizer for YOLOv8 model training on the original training set of 14,204 X-ray images. However, after data augmentation the training set grows to 28,408 X-ray images, so we switch to the SGD optimizer to train our model.

After applying data augmentation, our models achieve better mAP values than the corresponding YOLOv8 models, as shown in Table 5 and Table 6. Specifically, when the input image size is 640, compared with the YOLOv8m and YOLOv8l models, the mAP 50 of our models improves from 0.621 to 0.629 and from 0.623 to 0.637, respectively. Although the inference time on the CPU increases from 536.4 ms and 1006.3 ms to 685.9 ms and 1370.8 ms, respectively, the number of parameters and FLOPs remain the same, which means that our models can be deployed on the same computing platform. In addition, we compare the performance of our model with YOLOv7 and its improved models. As shown in Table 7, the mAP value of our model is higher than those of YOLOv732, YOLOv7 with the Convolutional Block Attention Module (CBAM)70, and YOLOv7 with the Global Attention Mechanism (GAM)71, which demonstrates that our model achieves SOTA performance.

This paper aims to design a pediatric wrist fracture detection application, so we use our model for fracture detection. Figure 6 shows the results of manual annotation by the radiologist and the results predicted by our model. These results demonstrate that our model detects fractures well in single-fracture cases, but metal punctures and dense multiple fractures severely affect the prediction accuracy.

Application

After completing model training, we use PySide6, a Python library that includes the Qt toolkit, to develop a Graphical User Interface (GUI) application. Specifically, PySide6 is the Qt6-based version of the PySide GUI library from the Qt Company.

According to the model performance evaluation results in Tables 5 and 6, we choose our model with the YOLOv8s algorithm and an input image size of 1024 to perform fracture detection. Our model is exported to ONNX format and applied in the GUI application. Figure 7 depicts the flowchart of the GUI application operating on macOS. As can be seen from the illustration, our application is named "Fracture Detection Using YOLOv8 App". Users can open images, run predictions, and save the results in this application. In summary, our application is designed to assist pediatric surgeons in analyzing fractures on pediatric wrist trauma X-ray images.
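With the Ultralytics API, the export step is a one-liner (the checkpoint path is a placeholder for the trained weights):

```python
from ultralytics import YOLO

# Load the trained YOLOv8s weights (path is illustrative) and export to ONNX,
# which the PySide6 GUI then loads for inference.
model = YOLO("runs/detect/train/weights/best.pt")
model.export(format="onnx", imgsz=1024)
```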

Conclusions and future work

Ultralytics proposed the latest version of the YOLO series (YOLOv8) in 2023. Although there are relatively few research works on the YOLOv8 model for medical image processing, we apply it to fracture detection and use data augmentation to improve model performance. We randomly divide the dataset, consisting of 20,327 pediatric wrist trauma X-ray images from 6,091 patients, into training, validation, and test sets to train the model and evaluate its performance.

Furthermore, we develop an application named “Fracture Detection Using YOLOv8 App” to analyze pediatric wrist trauma X-ray images for fracture detection. Our application aims to assist pediatric surgeons in interpreting X-ray images, reduce the probability of misclassification, and provide a better information base for surgery. The application is currently available for macOS, and in the future, we plan to deploy different sizes of our model in the application, and extend the application to iOS and Android. This will enable inexperienced pediatric surgeons in hospitals located in underdeveloped areas to use their mobile devices to analyze pediatric wrist X-ray images.

In addition, we provide the specific steps for training the model, as well as the trained model itself, on our GitHub. If readers wish to use the YOLOv8 model to detect fractures in parts of the body other than the pediatric wrist, they can use our trained model as a pre-trained model, which can greatly improve performance.