Machining feature recognition based on deep neural networks to support tight integration with 3D CAD systems

Recently, studies applying deep learning to recognize the machining features of three-dimensional (3D) computer-aided design (CAD) models have been increasing. Because the data structure of boundary representation (B-rep) models makes them difficult to use directly as input to neural networks, B-rep models are generally converted into voxel, mesh, or point cloud models before being fed to neural networks. However, the model's resolution decreases during this format conversion, causing the loss of some features or making it difficult to identify the areas of the converted model that correspond to a specific face of the B-rep model. To solve these problems, this study proposes a method that enables tight integration of a 3D CAD system with a deep neural network by using feature descriptors as the inputs for recognizing machining features. A feature descriptor is an explicit representation of the main property items of a face. We constructed 2236 data samples to train and evaluate the deep neural network. Of these, 1430 were used for training, 358 for validation, and 448 for evaluating the performance of the trained network. In addition, we conducted an experiment to recognize a total of 17 types (16 types of machining features and a non-feature) from B-rep models, and the types of all 75 test cases were successfully recognized.

www.nature.com/scientificreports/

The graph-based method represents the adjacency relations between the faces and edges of the whole shape and of the features as graph structures, and then finds and recognizes the sub-graphs of the features within the graph of the whole shape. Joshi and Chang 3 used heuristics to solve feature intersection problems in machining feature recognition. Chuang and Henderson 4 applied the graph-based method using vertices and edges to search for patterns in machining features. Gavankar and Henderson 5 proposed a method for separating the connections after showing that the graph connection between the protrusion and depression areas is doubly constructed. Graph-based methods have the advantage that new features to be recognized can easily be added and that they can be applied to various domains. However, they are difficult to apply when the topology of features is variable or when features intersect. In addition, since graph search takes exponential time, they are difficult to apply to complicated shapes.
In convex decomposition and cell-based decomposition methods, features are recognized from simple shapes after complicated shapes are decomposed into simple ones. The convex decomposition method decomposes the target shape using the convex hull and delta volume. Tang and Woo 6 proposed the alternating sum of volumes (ASV) method, which recognizes features by decomposing shapes via convex decomposition. However, ASV has the problem that the decomposition of particular shapes does not converge. Kim 7 proposed alternating sum of volumes with partitioning (ASVP) decomposition to solve this non-convergence problem of ASV and used it to recognize features. The convex decomposition method can recognize features well even when they intersect, but since it cannot be applied to shapes with curved surfaces, fillets or rounds need to be removed and curved parts need to be converted to polyhedra in advance.
In cell-based decomposition, shapes are decomposed into simple cells, and the decomposed cells are recombined to form a maximum volume in which features are found. Sakurai and Dave 8 proposed a method of decomposing shapes into small cells with simple shapes and recombining these cells to form large volumes. Woo 9 presented a method to perform cell-based decomposition faster than the traditional approach. Cell-based decomposition methods can also be applied when features intersect, and feature recognition is possible when second-order (quadric) curved surfaces are included. However, they cannot be applied to complicated shapes because the process of recombining cells has high time complexity.
The hint-based method starts with minimal traces, or hints, for feature recognition instead of finding complete feature patterns, and finds features through a geometric inference process over the surrounding shape. Vandenbrande and Requicha 10 developed the object-oriented feature finder (OOFF), an algorithm that explores hints from faces regarding slots, holes, and pockets. Regli 11 developed an algorithm to explore hints using edges and vertices rather than faces. Han and Requicha 12 developed the incremental feature finder (IF²), which extends the functionality of OOFF. The disadvantage of hint-based methods is that recognition rules need to be defined individually for each feature.
The similarity-based method recognizes features by examining how similar two shapes under comparison are. Hong et al. 13 generated low- and high-resolution models from the B-rep model via multi-resolution modeling. They used the low-resolution model for comparing the whole shape and the high-resolution model for comparing detailed shapes. Ohbuchi and Furuya 14 and Liu et al. 15 proposed methods to compare the similarity of the shapes contained in images generated from the 3D model from multiple viewpoints. Sánchez-Cruz and Bribiesca 16 compared similarities after converting 3D models to voxel formats. However, these methods cannot consider the properties of faces or other properties that features have, such as adjacency relations.
Recently, methods have been proposed for recognizing features in 3D models using artificial neural networks [17][18][19][20][21]. Jian et al. 22 combined a graph-based method with an improved novel bat algorithm (NBA), which was developed to address the long training times of existing neural networks. Zhang et al. 23 recognized 24 kinds of machining features by applying 3D convolutional neural networks. Shi et al. 24 proposed MsvNet, a deep learning technique based on a multiple sectional view (MSV) representation, which was used to recognize machining features. Peddireddy et al. 25 proposed a method to identify machining processes based on 3D convolutional neural networks and transfer learning. Zhang et al. 26 constructed PointwiseNet based on 3D point clouds and showed high performance by applying the model to 3D shape retrieval. However, in artificial neural network-based methods, tight integration with 3D CAD systems is difficult because the B-rep model cannot be used directly and must be converted to another format, such as a voxel model. In particular, even when areas corresponding to features are detected in a voxel model, it is difficult to accurately identify the corresponding faces of the B-rep model.

System construction and process
In this study, the target range of machining methods is limited to turning, milling, and drilling, as shown in Fig. 2a. The sixteen machining features of Fig. 2b are to be recognized. They are classified as five hole-related features, three slot-related features, two pocket-related features, two island-related features, two fillet-related features, and two chamfer-related features.
The proposed method using deep learning-based machining feature recognition is shown in Fig. 3, comprising online and offline processes. The online process generates feature descriptors for each face of the B-rep model loaded into a 3D CAD system, inputs them into deep neural networks, and then classifies feature types of the face. Then, the recognized type is returned to the 3D CAD system. The offline process generates feature descriptors, builds a training dataset composed of them, and then trains the deep neural networks for feature recognition.
Previous feature recognition studies identified whether the associative pattern of the faces and edges that make up the B-rep model is similar to the pattern of a particular feature type; the associative patterns between many faces and edges are used for the comparison. In this study, by contrast, we defined a base face for each feature type and recognized the target face as a feature's base face by identifying whether the target face's attributes are similar to those of the particular feature's base face. As shown in Fig. 4, the feature descriptor explicitly represents and stores the main attributes of a face. Therefore, in this study, machining feature recognition determines whether each face corresponds to the base face of some type of machining feature. The use of feature descriptors remains effective even when interference between features makes it difficult or ambiguous to match faces or edges. Moreover, the recognizable feature types can be extended by extending the descriptor.

The biggest challenge in developing deep neural networks that directly use B-rep models as inputs for recognizing machining features is the hierarchical complexity of B-rep models and the variability of their data size. Thus, previous studies used neural networks after converting B-rep models into fixed-sized voxels or multiple images. Converted voxels or images with low resolution may not properly represent curves or surfaces. Furthermore, the conversion may result in the loss of features (e.g., a hole, pocket, fillet, or chamfer). Even if feature areas in a converted voxel or image are detected, recognizing the features in the B-rep model is challenging because of the resolution differences between the B-rep model and the converted voxel or image. Additionally, the lack of training datasets of 3D models segmented by feature area makes it difficult to conduct relevant research.
In the proposed method of machining feature recognition, input data of deep neural networks is the feature descriptor generated on each face of the B-rep model. Therefore, the proposed method is analogous to directly using the B-rep model information without conversion. Furthermore, this method can fix the input data format and size of deep neural networks because feature descriptors generated according to predefined structure for each face rather than those for a set of faces are used in feature recognition. The proposed method enables tight integration between a 3D CAD system and a deep learning model for machining feature recognition due to these characteristics, as shown in Fig. 5.

Machining feature recognition using deep learning technology
Feature descriptor. Base face of a machining feature. This study introduces the concept of a base face of a machining feature. Existing studies 3-26 recognized most machining features from the relationships between faces and the geometrical characteristics of each face. Referring to these studies, we selected a base face for each feature. We then devised a method to express the relationships between the faces constituting the feature, together with their geometrical characteristics, as a descriptor defined on this base face. The base face of a feature can be used as a reference for recognizing the feature even if the topology or geometry that makes up the feature partly changes. In other words, among the many faces that make up a feature, the base face best represents the feature's characteristics. Figure 2 shows the base face for each type of feature covered in this study. For a hole, the base face depends on the number of faces that make up the hole. A simple hole or taper hole is a rotational shape formed by a cylinder or a cone, respectively. A cylinder (cone) can be represented as either one cylindrical (conical) face or two half-cylindrical (half-conical) faces; this study assumes that a cylinder (cone) is represented by two half-cylindrical (half-conical) faces. The base face of a countersink hole or counterdrilled hole is the conical face. The base face of a counterbore hole is the planar face between the two cylindrical faces.

Feature descriptor definition. The faces of a B-rep model have information about the faces themselves, about the edges that make up the boundaries of the faces, about the vertices that make up the edges, and about the relations with adjacent faces. The feature descriptor uses the face type (e.g., planar face, cylindrical face, toroidal face), the normal vector, and the loop type (inner or outer loop) as face information. Edges have an edge type (e.g., linear or curved) and a length.
Vertices have coordinate information. The relation information for an adjacent face includes the angle with the adjacent face, the convexity type, and the continuity type. Convexity is classified as concave or convex depending on the angle between the two adjacent faces: if the angle is less than 180°, the relation is concave; otherwise, it is convex. Continuity is classified as C0, G1, C1, G2, or C2 according to the tangency and curvature between the two adjacent faces; all continuities except C0 involve a tangency condition. In a previous study 30, we proposed feature descriptors to recognize machining features based on similarity comparison. Those feature descriptors, as shown in Table 1, include the type of base face, information on the relation of the base face with adjacent faces, parallelism information of adjacent faces, and the distance between parallel adjacent faces. The feature descriptors in the previous study 30 were used as follows. First, a feature descriptor D_f of the base face is defined for each feature type; only the minimum information necessary for distinguishing features is stored in D_f. In the feature recognition phase, a feature descriptor D_i is generated for each face F_i of the B-rep model, and the similarity of D_i with D_f is computed for each feature type. If the calculated similarity is higher than a predefined threshold, F_i is determined to be the base face of that feature.
In this study, as shown in Table 2, a new feature descriptor suitable for applying deep learning technology was constructed by referring to the feature descriptor proposed in the previous study 30 . There are ten types of faces: Bezier, BSpline, Rectangular Trimmed, Conical, Cylindrical, Planar, Spherical, Toroidal, Linear Extrusion, and Revolution. If any face is not of analytic type, we mark it as Unknown. The curvature of the target face is represented as positive (if the target face is convex in the normal direction), negative (if the target face is concave in the normal direction), or flat (if the target face is flat).
The width of a target face carries two types of information, the face-machining and edge-machining types, which mark whether the target face is longer (marked as Longer) or shorter (marked as Shorter) than a predefined threshold. The width of the target face is the distance between two adjacent faces that are parallel to each other. If there are no adjacent faces parallel to each other, the minimum distance between adjacent faces that are not in contact is calculated instead.

Table 1. The feature descriptor used in a previous study 30.

In the face-machining type, the threshold means the maximum diameter used in machining. This item was defined to distinguish among the face-machining features (holes, slots, pockets, and islands). Among the face-machining features, only the width of a slot has a value smaller than the threshold 31.
If the width of the target face is less than the threshold, the target face is more likely to be determined to be a slot. In the edge-machining type, the threshold is defined to distinguish the edge-machining features (fillets and chamfers) from the face-machining features. The threshold for edge-machining is specified by the user under the following condition: since the width of edge-machining is generally shorter than that of face-machining, the edge-machining threshold should be smaller than the face-machining threshold. If the width of the target face is smaller than the edge-machining threshold, the target face is more likely to be determined to be an edge-machining feature. The adjacent face information in an outer loop records the number of (adjacent face type, convexity) pairs for the adjacent faces of the target face. Convexity is marked as Concave, Convex, or Unknown. If two faces have a tangent relation, convexity is generally not considered. However, even in this case, convexity can be calculated using the cross product of the normal vectors of the two faces and the direction vector of the edge they share 32. If convexity cannot be calculated, it is marked as Unknown. The adjacent face information in an outer loop also includes the number of (adjacent face type, convexity) pairs that have a C0 continuity relation and a perpendicular relation with the target face.
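As a sketch of the convexity test described above, the convexity at a shared edge can be decided from the cross product of the two face normals and the direction vector of the shared edge. The function below is a hypothetical illustration; the sign convention (which orientation of the edge direction counts as convex) is our assumption and depends on the modeling kernel's conventions.

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def convexity(n1, n2, edge_dir, eps=1e-9):
    """Classify the edge shared by two faces as Convex, Concave, or Unknown.

    n1, n2   -- outward unit normals of the two faces
    edge_dir -- direction vector of the shared edge (orientation convention
                is assumed; a real kernel fixes it explicitly)
    """
    s = dot(cross(n1, n2), edge_dir)
    if abs(s) < eps:
        return "Unknown"  # e.g., tangent faces with parallel normals
    return "Convex" if s > 0 else "Concave"

# Example: a face with normal +z meeting a face with normal +x along an
# edge running in +y (under the assumed orientation convention).
edge_convexity = convexity((0, 0, 1), (1, 0, 0), (0, 1, 0))
```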
Finally, the number of (location, convexity) pairs represents the location of each inner loop in the target face and the convexity relation between the target face and the inner loop. If the inner loop is a decomposition shape, such as a hole or pocket, it is marked as Convex; if the inner loop is a composition shape, such as an island, it is marked as Concave. If the center of the inner loop coincides with the center of the target face, the location is marked as Center; otherwise, it is marked as Anywhere. Table 3 shows the feature descriptor created for the base face of an Opened island according to the descriptor structure defined in this study. As the table shows, the descriptor used in this study differs from that of the previous study 30 in its items. When feature descriptors are created, values for all the items that make up the descriptor are recorded.
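To make the descriptor structure of Table 2 concrete, the sketch below models its items as a small data class. All field names and the example values are our own invention for illustration; the actual items and notation follow Table 2 and Table 3.

```python
from dataclasses import dataclass, field

# Face types named in the paper, plus Unknown for non-analytic faces.
FACE_TYPES = ["Bezier", "BSpline", "Rectangular Trimmed", "Conical",
              "Cylindrical", "Planar", "Spherical", "Toroidal",
              "Linear Extrusion", "Revolution", "Unknown"]

@dataclass
class FeatureDescriptor:
    face_type: str                # one of FACE_TYPES
    curvature: str                # "Positive", "Negative", or "Flat"
    width_face_machining: str     # "Longer"/"Shorter" vs. face threshold
    width_edge_machining: str     # "Longer"/"Shorter" vs. edge threshold
    # (face type, convexity) counts over adjacent faces in the outer loop
    outer_adjacent: dict = field(default_factory=dict)
    # counts restricted to C0-continuity / perpendicular adjacent faces
    outer_c0: dict = field(default_factory=dict)
    outer_perpendicular: dict = field(default_factory=dict)
    # (location, convexity) flags of inner loops
    inner_anywhere_concave: bool = False  # e.g., islands
    inner_center_convex: bool = False     # e.g., counterbore holes

# Hypothetical descriptor for the base face of an Opened island (cf. Table 3).
opened_island = FeatureDescriptor(
    face_type="Planar", curvature="Flat",
    width_face_machining="Longer", width_edge_machining="Longer",
    inner_anywhere_concave=True)
```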
Deep neural network for feature recognition. Feature descriptor encoding. We used the integer encoding technique to apply feature descriptors to deep learning models. Integer encoding is a natural language processing technique wherein data are converted from natural language to integers. In this section, we describe the encoding using the contents of Table 3 as an example, dividing the feature descriptor items into face, outer loop, and inner loop information.
For the type, curvature, and width items of the target face, which correspond to the face information, different integer values were assigned according to the value of each descriptor item, as shown in Table 4. Table 5 shows an encoding example of the feature descriptor items regarding the target face information. As shown in the table, the descriptor's items regarding a target face are represented by four integer values.
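A minimal sketch of this integer encoding of the face information items follows. The concrete integer assignments below are placeholders of our own; the actual assignments are given in Table 4.

```python
# Hypothetical integer codes; the real assignments follow Table 4.
FACE_TYPE_CODE = {t: i for i, t in enumerate(
    ["Bezier", "BSpline", "Rectangular Trimmed", "Conical", "Cylindrical",
     "Planar", "Spherical", "Toroidal", "Linear Extrusion", "Revolution",
     "Unknown"])}
CURVATURE_CODE = {"Positive": 0, "Negative": 1, "Flat": 2}
WIDTH_CODE = {"Shorter": 0, "Longer": 1}

def encode_face_info(face_type, curvature, width_face, width_edge):
    """Encode the four face-information items as four integers."""
    return [FACE_TYPE_CODE[face_type],
            CURVATURE_CODE[curvature],
            WIDTH_CODE[width_face],
            WIDTH_CODE[width_edge]]

face_part = encode_face_info("Planar", "Flat", "Longer", "Shorter")
```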
The descriptor item values regarding the outer loop information are normalized into ratio form, as shown in Fig. 6, and then encoded as integers. These item values represent the number of adjacent faces of a specific type in the defined feature descriptor. Thus, the more adjacent faces there are, the larger an item's value naturally becomes. To prevent this issue, we normalized over all adjacent faces that compose the outer loop by calculating the ratio of the adjacent faces of a specific type, as shown in Fig. 6b. Note that the items for adjacent faces with C0 continuity in an outer loop and for perpendicular adjacent faces in an outer loop count only adjacent faces with concave convexity. If two faces in contact with each other are perpendicular, the convexity must be concave. If two faces in contact with each other are in a C0 continuity relation, the convexity can be either convex or concave; however, most machining features have concave convexity because machining features are shaped by removing volume from the stock. Thus, in these items, the ratio of adjacent faces of Concave convexity is calculated. A further point to note is that the ratio is computed with respect to the total number of adjacent faces. For example, for the perpendicular adjacent faces in an outer loop shown in Fig. 6b, the ratio becomes 50% because the number of Planar face types is 3 and the total number of adjacent faces is 6. When the normalization of the outer loop item values is completed, the normalized values are encoded as shown in Fig. 6c. In the encoding process, the normalized percentage is rounded to the nearest ten and then multiplied by 0.1, so that the resulting value is an integer between 0 and 10.
Encoding values were separately represented according to the type of faces (11 types) and the descriptor items (5 types) considering convexity, as shown in Fig. 6c. This process will result in 55 (11 × 5) descriptor values corresponding to the outer loop.
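The normalization and rounding step above can be sketched as follows; for instance, 3 planar faces out of 6 adjacent faces give a ratio of 50%, which is encoded as the integer 5. This is an illustrative sketch, not the authors' code.

```python
def encode_ratio(count, total_adjacent):
    """Normalize a face count to a percentage of all adjacent faces in the
    outer loop, round to the nearest ten, and scale to an integer 0-10."""
    if total_adjacent == 0:
        return 0
    percent = 100.0 * count / total_adjacent
    return round(percent / 10.0)  # e.g., 50% -> 5

# Fig. 6b example: 3 perpendicular Planar faces out of 6 adjacent faces.
planar_ratio_code = encode_ratio(3, 6)
```

Repeating this for the 11 face types over the 5 outer-loop item groups yields the 55 outer-loop values.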
Features that utilize inner loop information in the machining feature classification are islands and counterbore holes. Counterbore holes typically have an inner loop of the form "Center|Convex," whereas islands must have an inner loop of the form "Anywhere|Concave." Accordingly, the inner loop-related descriptor items were subdivided into "Anywhere|Concave" and "Center|Convex." In the encoding of an inner loop-related descriptor item, as shown in Table 6, the item is recorded as 1 if a corresponding inner loop is present and as 0 if it is not. Table 6 shows two inner loops of "Anywhere|Concave" and no inner loop of "Center|Convex" inside the target face; thus, the two items were recorded as 1 and 0, respectively.
After feature descriptor encoding, a feature descriptor (Fig. 7a) is represented as an array of 61 integers (Fig. 7b): 4 values for the face information, 55 values for the outer loop information, and 2 values for the inner loop information. This encoded feature descriptor has the same size regardless of the types of faces that make up the B-rep model, making it easy to use as input for deep neural networks.
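Putting the three parts together, every face yields a fixed-length vector of 4 + 55 + 2 = 61 integers, which is what makes the descriptor directly usable as a neural-network input. A hypothetical assembly sketch (the specific values are illustrative only):

```python
def encode_descriptor(face_info, outer_loop, inner_loop):
    """Concatenate the encoded parts into the fixed 61-integer input vector.

    face_info  -- 4 integers (face type, curvature, two width items)
    outer_loop -- 55 integers (11 face types x 5 item groups, each 0-10)
    inner_loop -- 2 integers (Anywhere|Concave flag, Center|Convex flag)
    """
    assert len(face_info) == 4
    assert len(outer_loop) == 55
    assert len(inner_loop) == 2
    return face_info + outer_loop + inner_loop

# Illustrative vector: arbitrary face codes, empty outer loop, one inner loop.
vector = encode_descriptor([5, 2, 1, 0], [0] * 55, [1, 0])
```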

Development of the deep neural network for feature recognition.
In this study, we developed a deep neural network of the standard feed-forward, fully connected type for feature recognition. Here, a deep neural network denotes an artificial neural network comprising one input layer, one output layer, and n hidden layers [33][34][35][36].
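As an illustration of this architecture, the sketch below runs one forward pass of a 61-value descriptor through fully connected hidden layers with ReLU and a 17-way softmax output. The hidden-layer node sizes and the random weights are placeholders of our own; the actual sizes were determined by the layer-wise search procedure described in the paper (Fig. 8).

```python
import math
import random

def dense(x, weights, biases, activation):
    """One fully connected layer: y = activation(W x + b)."""
    z = [sum(w * xi for w, xi in zip(row, x)) + b
         for row, b in zip(weights, biases)]
    return activation(z)

def relu(z):
    return [max(0.0, v) for v in z]

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Placeholder layer sizes: 61 inputs, five hidden layers, 17 output classes.
sizes = [61, 60, 55, 50, 45, 40, 17]
random.seed(0)
layers = [([[random.uniform(-0.1, 0.1) for _ in range(n_in)]
            for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(sizes, sizes[1:])]

def predict(descriptor):
    """Forward pass: ReLU on hidden layers, softmax on the output layer."""
    x = descriptor
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b, softmax if i == len(layers) - 1 else relu)
    return x  # 17 class probabilities summing to 1

probs = predict([1.0] * 61)
```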
To determine the optimal number of layers and node sizes of the deep neural network, training was performed while varying the number of hidden layers and node sizes according to the procedure given in Fig. 8a. We first fixed the node sizes of the input and output layers at 61 and 17, respectively. As the activation functions of the hidden and output layers, we used ReLU and Softmax, respectively; as the loss function and optimizer of the neural network, we used cross entropy loss and the Adam optimizer. We then created the first hidden layer and trained while reducing its node size in decrements of 5 from 60, as shown in Fig. 8b, selecting the node size that gave the highest validation accuracy. Once the number of nodes in the first hidden layer was determined, a new hidden layer was added and its optimal node size was selected in the same way, and this procedure was repeated to determine the remaining hidden layers.

To further improve the deep neural network constructed in this study, we also trained with the dropout 37 and batch normalization 38 techniques. Table 7 shows the validation accuracy when a dropout layer or a batch normalization layer is placed behind the nth hidden layer. As shown in Table 7, the validation accuracy is lower when these optimization techniques are applied than when they are not. Based on this, we concluded that dropout and batch normalization are not suitable for the deep neural network configured in this study. Through these training experiments for the optimal configuration, we finally developed the deep neural network shown in Fig. 9.

To construct training data for feature descriptors, we generated about 170,000 B-rep models through parametric modeling techniques using CATIA V5 39 and Microsoft Excel 40, as shown in Fig. 10. All the generated B-rep models had one or more machining features, and the base face of each feature was given a different color.
The reason for assigning different colors to the base faces in the modeling process is to make it easy to identify, when creating a descriptor from the B-rep model, whether a particular face corresponds to the base face of a feature. Since machining feature recognition aims to evaluate manufacturability, it is important in this study to distinguish between machinable and unmachinable features. Therefore, the generated dataset also contains unrealistic B-rep models.
Among the 170,000 generated B-rep models, many had different shapes but identical descriptors, as shown in Fig. 11. We eliminated the duplicate data to prevent overfitting during the neural network's training.
The training dataset for developing the deep neural network for machining feature recognition comprises 2236 feature descriptors, as shown in Fig. 12. This dataset can be downloaded from the EIF lab homepage 41. The composition of the training dataset is described in Fig. 12a. In developing the deep neural network, as shown in Fig. 12b, the entire dataset was divided in a ratio of 8:2, and 1788 feature descriptors were used as the training set. There is no established optimal division ratio between training and test sets; most existing studies randomly divide them in a 7:3 or 8:2 ratio [42][43][44][45]. Therefore, in this study, we randomly divided the entire dataset into training and test sets in a ratio of 8:2, and the training set was in turn randomly divided into a real training set and a validation set in a ratio of 8:2. As a result, the entire dataset was randomly divided into the real training set, validation set, and test set in a ratio of 64:16:20. For training, we set the batch size to 8 and the number of epochs to 1000. During training, the deep neural network showed a training accuracy of 0.9517, a training loss of 0.0946, a validation accuracy of 0.9609, and a validation loss of 0.1018, as shown in Fig. 13.
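The nested 8:2 splits can be sketched with simple slicing; applied to 2236 descriptors, integer truncation reproduces the 1788/448 and 1430/358 counts reported in this paper. This is an illustrative sketch with an assumed random seed, not the authors' code.

```python
import random

def split_dataset(data, test_ratio=0.2, val_ratio=0.2, seed=42):
    """Randomly split into real training, validation, and test sets.

    First hold out `test_ratio` of the whole dataset as the test set,
    then take `val_ratio` of the remaining training set for validation,
    giving an overall 64:16:20 split.
    """
    data = list(data)
    random.Random(seed).shuffle(data)
    n_train_val = int(len(data) * (1 - test_ratio))   # 2236 -> 1788
    train_val, test = data[:n_train_val], data[n_train_val:]
    n_train = int(n_train_val * (1 - val_ratio))      # 1788 -> 1430
    train, val = train_val[:n_train], train_val[n_train:]
    return train, val, test

train, val, test = split_dataset(range(2236))
```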
After the training of the deep neural networks, we validated the performance of the trained model with 448 feature descriptors that were not used in training. A confusion matrix was calculated, as shown in Fig. 14. The confusion matrix is primarily used to evaluate the performance of classification models and represents performance measures including accuracy, precision, and recall. As a result of validation, the trained deep neural network showed an accuracy of 0.9308, a mean precision of 0.9224, and a mean recall of 0.9108.
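The reported measures can be computed from the confusion matrix as follows (a toy sketch with 3 classes; treating the mean precision and recall as unweighted class averages is our assumption about the averaging used):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i][j] = number of samples of true class i predicted as class j."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

def metrics(cm):
    """Accuracy plus macro-averaged precision and recall from a matrix."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total
    precisions, recalls = [], []
    for i in range(n):
        pred_i = sum(cm[j][i] for j in range(n))   # column sum
        true_i = sum(cm[i])                        # row sum
        precisions.append(cm[i][i] / pred_i if pred_i else 0.0)
        recalls.append(cm[i][i] / true_i if true_i else 0.0)
    return accuracy, sum(precisions) / n, sum(recalls) / n

# Toy example with 3 classes and one misclassified sample of class 1.
cm = confusion_matrix([0, 0, 1, 1, 2, 2], [0, 0, 1, 0, 2, 2], 3)
accuracy, mean_precision, mean_recall = metrics(cm)
```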

Implementation and experimentation
Implementation. According to the proposed method for machining feature recognition, a prototype system was developed, as shown in Fig. 15. We implemented a software module for feature recognition based on the deep neural network in Python on the Windows 10 operating system. The PyQt library was utilized to build the module's GUI, and the deep neural network was implemented using the TensorFlow-based Keras and Scikit-learn libraries. We integrated the developed recognition module into the 3D CAD system for feature recognition developed in the previous study 30 using the Python embedding method, as shown in Fig. 15. For module development and recognition experiments, a computer with an Intel Core i7 CPU, 64 GB RAM, and an NVIDIA GeForce GTX 760 graphics card was used.
Experimentation. The training dataset comprises descriptors generated from B-rep models by parametric modeling. These B-rep models have relatively simple shapes compared to the actual 3D models used in the manufacturing field. Therefore, this experiment tested machining feature recognition on the complicated 3D models used in the field.
We prepared 15 B-rep models for the recognition experiment, as shown in Fig. 16, referring to previous studies 30,46-48 on machining feature recognition. These 3D models are parts manufactured by turning, milling, and drilling, and they possess 57 machining features and 18 non-features; accordingly, the experiment was conducted on 75 test cases. The 75 test cases are descriptors generated according to the procedure explained in "Feature descriptor encoding". Nos. 15, 25, 38, and 52 of the test cases were recognition failures in the previous study 30. Table 8 shows the results of the machining feature recognition experiment on the test cases. The probability in columns 3 and 6 of Table 8 is the highest value among the probabilities of the feature labels, where each probability represents the likelihood that a particular feature label is correct. The results show that the true feature type was recognized as the first priority for all 75 test cases.
To analyze the features recognized with a probability just over 70%, we plotted the descriptors of the two features (4 Opened pocket and 43 Opened slot) that exhibited the lowest probability, as shown in Table 9. The graph's horizontal axis represents the index of the integer-encoded feature descriptor, and the vertical axis represents the value at each index. Column 1 of the table represents all descriptors corresponding to a particular type in the training data. Column 2 (column 3) represents the feature descriptors of the features recognized with a probability of 90% or more (70% or more) in the experimental results.
The feature descriptor graphs in row 2 show that sections A, B, C, and D are important for determining the recognition target face to be an Opened pocket. We can confirm that the graphs for the training data and the recognized faces correspond closely to each other in these sections.

The feature descriptor graphs in row 3 show that sections E, F, G, and H are important for determining the recognition target face to be an Opened slot. We can confirm that all graphs for the training data and the recognized faces completely correspond to each other in section E. Sections F, G, and H show that the recognized face graphs tend to be similar to those of the training data. However, the values in sections F and H of the 43 Opened slot are located at different indexes. Consequently, the 43 Opened slot appears to output a relatively low probability, although it tends to be similar to the training data.

Conclusions
We proposed a method of machining feature recognition based on a deep neural network using feature descriptors to ensure tight integration with 3D CAD systems. The proposed method supports the recognition of 16 types of machining features. To recognize the machining features, the proposed method generates feature descriptors from the faces of the B-rep model, recognizes feature types by inputting the descriptors into the deep neural network, and returns the recognized feature types to the 3D CAD system. The feature descriptor used in this study was newly constructed by referring to the descriptor proposed in the previous study 30. Moreover, we used the integer encoding technique to apply the feature descriptor to the deep learning model. Since this technique creates feature descriptors with the same structure and size for every face composing the B-rep model, they can easily be used as input for the deep neural network.
The standard feed-forward fully connected method was applied to develop the deep neural network for machining feature recognition. The deep neural network has five hidden layers in addition to input and output layers. As the activation functions of the hidden and output layers, we used the ReLU and Softmax, respectively. In addition, as loss and optimization functions of the neural network, we used Cross entropy loss and Adam optimizer, respectively.
The training dataset used in the development of the deep neural network has a total of 2236 feature descriptors. We used 1788 training data for the learning of the deep neural network. We then tested the performance of the model with 448 feature descriptors that were not used for training. Consequently, the trained deep neural network showed an accuracy of 0.9308, a mean precision of 0.9224, and a mean recall of 0.9108.
In the experiment, we prepared 75 test cases for 15 B-rep models, referring to existing machining feature recognition studies 30,[46][47][48]. In the recognition experiment on the test cases, the true feature type was recognized as the first priority with over 90% probability in 68 cases and with over 70% probability in the remaining seven cases.
It is not easy to prove that the training data are sufficient. For most DNN training, researchers therefore generate as much varied data as possible and ensure that the resulting data distribution is uniform. After creating 3D CAD models in this study, we generated varied data by transforming the shapes of the models through the parametric modeling method. The generated data distribution was made uniform by adjusting the number of shape variations for each 3D CAD model type. After training the proposed deep neural network with the generated dataset, we performed the feature recognition experiment on the 75 test cases of Fig. 16. Through this experiment, we confirmed that the deep neural network was trained well.
In the future, we will further subdivide the items that constitute the feature descriptor to expand the recognizable machining feature types. We will also improve the recognition probability of the three feature types (Closed pocket, Opened pocket, and Opened slot) that showed relatively low probabilities of just over 70%. As the items constituting the feature descriptors are subdivided, we will increase the amount of training data, because the 2236 training samples in this study may be insufficient. Finally, we plan to conduct a study to recognize machining features by applying deep learning models with higher performance than the network used in this study, such as convolutional or recurrent neural networks.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.