Abstract
To extract more discriminative morphological features from neuron images and achieve accurate classification of neuron types, a method is proposed that uses the Sugeno fuzzy integral to fuse three optimized deep learning models: AlexNet, VGG11_bn, and ResNet50. First, the pretrained AlexNet model is used and its output layer is fine-tuned to improve performance. Second, in the VGG11_bn network, Global Average Pooling (GAP) replaces the traditional fully connected layer to reduce the number of parameters, and transfer learning improves the model's generalization ability. Third, the SE (squeeze-and-excitation) module is added to ResNeXt50, a variant of ResNet50, to adjust channel weights and capture key information in the input data, and the GELU activation function is used to better fit the data distribution. Finally, the Sugeno fuzzy integral fuses the outputs of the models to obtain the final classification result. Experimental results show that on the Img_raw, Img_resample and Img_XYalign datasets, 4-category classification accuracy reached 98.04%, 91.75% and 93.13%, respectively, and 12-category classification accuracy reached 97.82%, 85.68% and 87.60%, respectively. The proposed method achieves good performance in the morphological classification of neurons.
Introduction
Neurons are the fundamental units of the brain, the essential tissue of the nervous system. They transmit information through electrical and chemical signals, passing it from one neuron to another or to other types of cells; the transmission and processing of information is crucial to brain function. Neurons in the brain take various forms and serve various functions, and they are connected through complex networks to form intricate neural circuits and functional networks^{1,2,3}. Solving the neuronal classification problem can reveal the characteristics of different neuronal types, helping researchers understand the pathogenesis of neurological diseases. It is also of great significance to research on the basic structure and function of the brain, on human development and aging, and to related work in pathology and pharmacology^{4,5,6,7}.
Deep learning is developing rapidly in biomedical processing and has made many important advances^{8,9}. However, neuronal morphology is diverse and complex, and the morphological differences between neurons, such as the number, length and shape of branches, can be small, which increases the difficulty of classification. Moreover, a single model's ability to extract features is limited and cannot provide comprehensive enough information for classification. Integrating multiple models through ensemble learning can reduce the bias and variance of single-model classification^{10}. However, traditional integration techniques, such as simple averaging, random weight averaging and majority voting, usually assume that each model is independent and carries the same weight, ignoring differences among models, and their fixed integration and weight-allocation schemes lack flexibility^{11}. Therefore, when dealing with neuron data of the same type but different subclasses, it is important not only to extract more neuron image features, but also to adopt an appropriate integration method to fuse them effectively. The main contributions of this research are as follows.

In this paper, a method of multi-classifier fusion using the Sugeno fuzzy integral is proposed for morphological classification of neurons.

Before model fusion, the three basic networks are optimized to improve classification accuracy.

Experimental results of multi-classifier fusion on the NeuroMorpho-rat dataset show that this model achieves better classification accuracy than existing neuron morphology classification models.
The remainder of the paper is organized as follows. "Related work" introduces related work on morphological classification of neurons. "Methodology" introduces the three improved network structures and the Sugeno fuzzy integral algorithm. "Experiments" discusses the experimental results and analysis. Finally, "Conclusions" summarizes the work accomplished.
Related work
In recent years, remarkable progress has been made in applying deep learning to neuron morphological classification; commonly employed models in this field include DenseNet^{12}, GoogLeNet^{13}, ResNet^{14}, etc. A deep learning model can process raw morphological image data directly and, by training large-scale neural networks, learn more distinctive feature representations. However, the structure of these models is too simple to fully account for the complexity of neuronal morphology, resulting in limited classification accuracy.
Alavi et al.^{15} performed two-dimensional and three-dimensional (3D) image analysis of neurons to obtain neuronal morphological information, and then classified dopaminergic neurons in the rodent brain stem. They compared the performance of three commonly used classification methods: Support Vector Machines (SVM), Back Propagation Neural Networks (BPNNs), and polynomial logistic regression. Han et al.^{16} proposed a new method for classifying neurons based on their spatial structure: SVM was used to classify neurons, treating them as irregular fragments and using fractal geometry to describe their spatial structure. Evelyn et al.^{17} proposed a neuronal morphological analysis method that characterizes and classifies neuronal cells by considering different levels of the dendritic tree. The method employs various strategies to dismantle the hierarchy of dendritic trees, identifying components crucial to the classification task that are potentially linked to specific neuronal functions. Song et al.^{18} proposed a neuron model with dendrite morphology, the Logical Dendrite Neuron Model (LDNM), for neuron classification.
Lin et al.^{19} proposed two classification methods, one based on a locally cumulative connected deep neural network and the other on a fully cumulative connected deep neural network, and used an adaptive neuron projection algorithm to project three-dimensional voxel coordinates into two dimensions, thereby obtaining two-dimensional neuron images. Reference^{20} used full 3D neuronal voxel data for scaling and conducted neuron geometric morphology classification experiments with a 3D convolutional neural network. Reference^{21} used naive Bayes and other machine learning methods to classify neurons on a dataset of 200 three-dimensional neurons. Zhang et al.^{22} proposed that different types of deep neural network modules handle different types of features well, and built a TreeRNN module for compressed but unstructured SWC-format data and a convolutional neural network (CNN) module for 2D or 3D slice-format data, finally fusing the features extracted by the two modules for classification. Ofek et al.^{23} proposed a method for classifying neuronal cell types using local sparse networks, based on the electrophysiological activity of neurons.
These deep learning-based methods for classifying neuronal morphology play a crucial role in brain neuroscience. However, they have not completely solved the problem of accurately classifying neuron morphology, and accuracy still needs to improve. This paper proposes a method that uses the AlexNet^{24}, VGG11_bn^{25}, and ResNet50^{14} networks as neural network classifiers; the outputs of these classifiers are then fused using the Sugeno fuzzy integral^{26,27}. By synthesizing the outputs of multiple information sources, more comprehensive and accurate results can be obtained.
Methodology
The AlexNet network uses the ReLU activation function and Dropout regularization to avoid the vanishing gradient problem and reduce the risk of overfitting. The VGG11_bn model achieves good generalization by using small convolution kernels and max pooling layers. The advantage of ResNet50 is its residual connections, which allow a deeper network structure that can extract higher-level features, suiting it to complex image classification tasks. All three network models have distinct advantages and significant differences, ensuring that their predictions are not biased towards any specific class. As a result, the AlexNet, VGG11_bn and ResNet50 networks are selected as the basic networks for constructing the multi-classifier model for neuron morphology classification.
In addition, to improve classification performance, each network is improved according to its own characteristics. AlexNet's pretrained model is used and its output layer fine-tuned to speed up training and improve performance. In the VGG11_bn network, GAP^{28} replaces the traditional fully connected layer to reduce the number of parameters. The SE^{29} module is introduced into ResNeXt50^{30} to better extract neuron cell features, and the activation function is changed to GELU^{31} to help the model adapt to changes in the input data. The resulting network is named the Multiple Classifiers Fusion Network (MCFNet). The overall structure of MCFNet is shown in Fig. 1. First, neuron dataset images are fed into the improved AlexNet, improved VGG11_bn and improved ResNet50 networks for feature extraction, and each model's predicted classification probabilities are saved to a CSV file. Then, the outputs of the models are fused using the Sugeno fuzzy integral algorithm. Finally, the classification output for the test set is obtained.
Improved AlexNet
AlexNet model was proposed by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton^{24}. The model has 8 layers, of which 5 are convolutional layers and 3 are fully connected layers. This model has achieved breakthrough results in the ImageNet^{32} image classification challenge. The kernels of the second, fourth, and fifth convolution layers are only connected to the kernel maps in the previous layer located on the same GPU (as shown in Fig. 2). The kernels of the third convolution layer are connected to all the kernel maps in the second layer, and the neurons in the fully connected layer are connected to all the neurons in the previous layer.
Because of its simple structure, the AlexNet model has some limitations when applied directly to complex neurons. Therefore, AlexNet's structure is modified to remove two of its fully connected layers, leaving only one fully connected layer whose weights are randomly initialized. The convolutional layer weights from the pretrained model are then used to initialize the modified network, which is fine-tuned to fit the new task. The AlexNet network structure is shown in Fig. 2.
Improved VGG11_bn
The VGG11_bn model, proposed by Karen Simonyan and Andrew Zisserman^{25}, stacks multiple 3\(\times \)3 convolutional layers to increase the depth of the network. It also includes pooling layers for downsampling, reducing the size of the feature maps while retaining important features. In addition, the VGG11_bn model adds a batch normalization layer between each convolutional layer and the fully connected layer, which accelerates the network's convergence and improves the model's stability and generalization ability.
However, an excessively large number of training parameters is not conducive to model training, and a model structure that is too deep is prone to vanishing or exploding gradients. To address the problem of excessive parameters, inspired by GAP and transfer learning^{33}, GAP is adopted to replace the traditional fully connected layer, followed by a Softmax classification layer with a fine-tuned output layer, forming a new training network structure. Finally, transfer learning is performed with the weights pretrained by VGG11_bn on ImageNet. Replacing the dense fully connected layer with GAP not only reduces model parameters but also helps prevent overfitting. The VGG11_bn network is shown in Fig. 3. The traditional fully connected layer and the global average pooling layer are shown in Fig. 4.
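The parameter saving from GAP can be illustrated with a small NumPy sketch; the channel count, map size and class count below are illustrative, not the paper's exact values.

```python
import numpy as np

def global_average_pool(feature_maps: np.ndarray) -> np.ndarray:
    """Collapse each (H, W) feature map to its mean: (C, H, W) -> (C,)."""
    return feature_maps.mean(axis=(1, 2))

# A VGG-style final feature volume: 512 channels of 7x7 maps.
features = np.random.rand(512, 7, 7)
pooled = global_average_pool(features)  # shape (512,)

# GAP itself has no trainable weights; the only parameters left in the head
# belong to the final classification layer, e.g. 512 x 4 for 4 classes,
# versus 512*7*7*4096 for a dense layer acting on the flattened volume.
dense_params = 512 * 7 * 7 * 4096
gap_head_params = 512 * 4
print(pooled.shape, dense_params // gap_head_params)
```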
Improved ResNet50
ResNet50 has 50 layers, including stacked convolution layers, pooling layers, fully connected layers and residual blocks. In the residual blocks, ResNet50 uses 1\(\times \)1 convolution kernels for dimensionality reduction and expansion, which reduces the number of parameters and improves computational efficiency. After ResNet50, researchers made further improvements to the model. For example, the ResNeXt network combined the residual network with Inception^{34} by introducing group convolution, which increased the diversity and expressiveness of the network.
To further optimize the ResNet50 network and improve its classification performance, the ResNeXt network is taken as a variant of ResNet50, and the SENet module is added to each basic unit of ResNeXt so that the network pays more attention to important features, improving the model's expressiveness and accuracy. The SE module is shown in Fig. 5: Squeeze, Excitation and Scale operations are performed on the input feature map in turn to obtain a feature map with channel weights; the relevant expressions are given in Eqs. (1)–(3). In addition, GELU replaces the original ReLU activation function, which helps improve the model's generalization ability and makes it easier to adapt to different data distributions. Figure 6 shows the residual structures of ResNet50 and ResNeXt50.
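The Squeeze, Excitation and Scale steps of Eqs. (1)–(3) can be sketched in NumPy as below. The reduction ratio r = 16 and the random weights are illustrative assumptions; the original SE paper uses ReLU in the excitation branch, and GELU is substituted here only to mirror the activation change described in the text.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation applied to a (C, H, W) feature map.

    Squeeze:    global average pooling per channel          -> (C,)
    Excitation: two dense layers (C -> C/r -> C) + sigmoid  -> channel gates in (0, 1)
    Scale:      multiply each channel by its learned gate
    """
    z = x.mean(axis=(1, 2))          # squeeze: (C,)
    s = sigmoid(w2 @ gelu(w1 @ z))   # excitation: channel weights
    return x * s[:, None, None]      # scale

rng = np.random.default_rng(0)
C, r = 64, 16
x = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (64, 8, 8)
```

Each channel is rescaled by a single learned weight, which is how the module emphasizes informative channels and suppresses less useful ones.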
Sugeno fuzzy integral
The Sugeno fuzzy integral is a nonlinear integration method based on fuzzy set theory that can effectively integrate multiple fuzzy sets to obtain a comprehensive and more representative output. Compared with other integration techniques, it shows significant advantages in pattern recognition problems such as image classification^{11,35}. Especially when dealing with ambiguity and uncertainty, the Sugeno fuzzy integral can handle noise, fuzzy boundaries and missing information more flexibly by introducing fuzzy sets and membership functions. It can also adapt to different data distributions and characteristics, capturing and processing nonlinear relationships to ensure the accuracy and reliability of classification results.
Let \(X=\left\{ x_{1},...,x_{n} \right\} \) be a nonempty finite set with n elements, and let P(X) be the power set of X. If the set function \(\Omega : P(X)\rightarrow [0,1]\) satisfies the following conditions:

1.
\(\Omega (\Phi )=0,\Omega (X )=1\)

2.
\(A\subset B\subset X\), implies \(\Omega (A)\le \Omega (B)\)
Then \(\Omega \) is a fuzzy measure defined on X^{36,37,38}.
The Sugeno fuzzy \(\lambda \) measure was introduced by Tahani et al.^{39}. Let g be a fuzzy measure on the set \(X=\left\{ x_{1},...,x_{n} \right\} \), let N be the number of information sources (in this case, N=3), and let \(A,B\subseteq X\). The Sugeno \(\lambda \) measure is the function \(f_{\lambda }:2^{X} \rightarrow [ 0, 1]\) satisfying the following conditions:

1.
\(f_{\lambda }(X)=1\)

2.
If \(A\cap B=\Phi \), then \(\exists \lambda > -1\) such that Expression (4) holds true.
where \(\lambda > -1\) and \(\lambda \ne 0\); the fuzzy measure g is then denoted \(g_{\lambda } \). When g is a \(g_{\lambda }\) fuzzy measure, the fuzzy measure value \(g(\left\{ x_{i} \right\} )\) of the singleton set \(\left\{ x_{i} \right\} (i=1,2,...,n)\) is also referred to as the fuzzy density^{35,40}.
The \(\lambda \) value of the \(g_{\lambda }\) fuzzy measure can be obtained from expression (5).
The Sugeno integral is an integral with respect to a fuzzy measure^{41}; the Sugeno fuzzy integral of a function \(f:X\rightarrow [0,1]\) with respect to the fuzzy measure \(\Omega \) is given in expression (6).
where \(\Omega (A_{i} )=\Omega (\left\{ x_{i},x_{i+1},...,x_{n} \right\} ) \) and the values \(\left\{ f(x_{1}),f(x_{2}),f(x_{3}),...,f(x_{n})\right\} \) are sorted such that \(f(x_{1})\le f(x_{2})\le f(x_{3})\le ... \le f(x_{n})\). The Sugeno fuzzy integral fusion algorithm flow is shown in Fig. 7.
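The fusion step can be sketched as follows. The fuzzy densities below are illustrative assumptions (in practice they would be derived, e.g., from each classifier's validation accuracy): \(\lambda\) is found as the nontrivial root of \(\prod_i (1+\lambda g_i) = 1+\lambda\), and the per-class score is the max-min Sugeno integral over the sorted classifier outputs.

```python
import numpy as np

def solve_lambda(densities, tol=1e-10):
    """Solve prod(1 + lam*g_i) = 1 + lam for lam > -1, lam != 0, by bisection
    on F(lam) = prod(1 + lam*g_i) - (1 + lam)."""
    f = lambda lam: np.prod([1 + lam * g for g in densities]) - (1 + lam)
    # If sum(g_i) > 1 the nontrivial root lies in (-1, 0); if sum < 1, in (0, inf).
    lo, hi = (-1 + 1e-9, -1e-9) if sum(densities) > 1 else (1e-9, 1e6)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def sugeno_integral(scores, densities):
    """Sugeno fuzzy integral of N classifier scores for one class."""
    lam = solve_lambda(densities)
    order = np.argsort(scores)             # f(x_1) <= ... <= f(x_n)
    h = np.asarray(scores)[order]
    g = np.asarray(densities)[order]
    best = 0.0
    measure = 1.0                          # Omega(A_1) = Omega(X) = 1
    for i in range(len(h)):
        best = max(best, min(h[i], measure))
        # Omega(A_{i+1}): measure of the remaining sources, built with the
        # lambda-measure recursion g(a U B) = g_a + g(B) + lam * g_a * g(B).
        if i + 1 < len(h):
            rest = g[i + 1:]
            measure = rest[0]
            for gi in rest[1:]:
                measure = measure + gi + lam * measure * gi
    return best

# Three classifiers' softmax scores for one class, with assumed fuzzy densities.
scores = [0.9, 0.6, 0.8]
densities = [0.5, 0.3, 0.4]
print(round(sugeno_integral(scores, densities), 4))  # 0.8
```

Running this over every class and taking the argmax of the fused scores gives the final prediction, which is the role the algorithm plays in Fig. 7.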
Experiments
Evaluation index
In the experiment, a variety of criteria are used to evaluate the classification performance of the model, including the confusion matrix, the ROC curve and its AUC, accuracy, precision, recall and F1 score. The confusion matrix intuitively shows the specific classification behavior of the model and is shown in Table 1. TP, FP, TN and FN represent true positives, false positives, true negatives and false negatives, respectively.
Accuracy is the ratio of the number of samples correctly predicted by the model to the total number of samples, measuring the overall correctness of the model's predictions. Precision is the ratio of samples correctly predicted as positive to all samples predicted as positive. Recall is the ratio of samples correctly predicted as positive to all samples that are actually positive. The F1 score is the harmonic mean of precision and recall, used to comprehensively evaluate the classification performance of the model. The expressions for these evaluation metrics are given in Eqs. (7)–(10).
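The metrics of Eqs. (7)–(10) follow directly from the confusion matrix counts; a sketch with illustrative counts (not taken from the paper's results):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, precision, recall and F1 from confusion matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts for a single class in a one-vs-rest evaluation.
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, tn=880, fn=20)
print(acc, prec, round(rec, 4), round(f1, 4))
```

For the multi-class setting, these per-class values are typically averaged (macro or weighted) across classes.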
Neuron data set
NeuroMorpho.Org^{42} is an open neuromorphology database that collects and shares neuronal morphological data from different species, brain regions, and cell types. It contains morphological data on thousands of neurons and is currently the largest publicly accessible collection of 3D neuronal reconstructions and related neuronal data.
The two-dimensional neuron image dataset from the NeuroMorpho-rat dataset prepared by Zhang Tielin's team^{22} is used for the experiments, including the original neuron images obtained by web crawler (Img_raw), the neuron images with Z-jumping issues repaired using the TREES toolbox (Img_resample), and the neuron image data with XY alignment (Img_XYalign). In the experiments, 4-category and 12-category classification were carried out. The 4-category classification covers principal cells, interneurons, glial cells, and sensory receptor cells; the 12-category classification consists of 12 subclasses, including six kinds of principal cells, three interneurons, two glial cells, and one sensory receptor cell.
Experimental results and analysis
Improved tests of AlexNet, VGG11_bn and ResNet50
To verify the effectiveness of each improved network, the AlexNet, VGG11_bn and ResNet50 models before and after improvement were tested on the three datasets under the same experimental conditions. The 4-category classification results are shown in Fig. 8, and the 12-category classification results in Fig. 9.
As can be seen from Fig. 8, the accuracies of the improved AlexNet, improved VGG11_bn and improved ResNet50 networks were all better than before the improvements in the neuron morphology classification experiment. The accuracy of the improved AlexNet increased by 3.45%, 3.41% and 3.06% over the original on the Img_raw, Img_resample and Img_XYalign datasets, respectively. The accuracy of the improved VGG11_bn increased by 1.78%, 1.89% and 0.83% on the three datasets, respectively, while that of the improved ResNet50 increased by 0.91%, 0.09% and 0.20%. This shows that the improved models perform better in 4-category neuron morphology classification.
As shown in Fig. 9, in the 12-category classification experiment, the accuracy of the improved AlexNet increased by 6.91%, 3.77% and 10.09% over the original on the Img_raw, Img_resample and Img_XYalign datasets, respectively. The accuracy of the improved VGG11_bn improved by 2.63%, 5.13% and 1.60% on the three datasets, respectively, and that of the improved ResNet50 improved by 2.29%, 0.50% and 0.08%.
Analysis of results of ablation experiment and MCFNet experiment
Under the same experimental conditions, four different fusion combinations of the AlexNet, VGG11_bn and ResNet50 networks were used for ablation experiments on the Img_raw, Img_resample and Img_XYalign datasets. The four fusion schemes are: (1) AlexNet, VGG11_bn and ResNet50 before improvement; (2) AlexNet and VGG11_bn before improvement with the improved ResNet50; (3) the improved AlexNet with VGG11_bn and ResNet50 before improvement; (4) the improved AlexNet, VGG11_bn and ResNet50. For each fusion scheme, the experiment follows these steps: (a) train the three network models to extract features from the images in the datasets; (b) save the probability predicted by each classifier for each category in CSV format; (c) use the Sugeno fuzzy integral to fuse the outputs of the classifiers; (d) use the fused results for testing and compute the evaluation metrics. The experimental results are shown in Tables 2, 3 and 4.
From Tables 2, 3 and 4, it can be seen that fusing the improved AlexNet, VGG11_bn and ResNet50 models, i.e., the MCFNet model, yields the highest classification accuracy on the three datasets, reaching 97.82%, 91.75% and 93.13%, respectively. This shows that the proposed MCFNet fusion scheme obtains comparatively optimal results, that the proposed method is effective, and that the improvement of each single network still plays its role after fusion.
To analyze the classification performance of the MCFNet network, the confusion matrix and ROC curves were plotted on the Img_raw dataset to compare classification accuracy across neuron types. The confusion matrix is shown in Fig. 10, and the ROC curves in Fig. 11. To simplify the graphs, in the 4-category confusion matrix and ROC curves, 0 represents Glia, 1 represents Interneuron, 2 represents Principal cell, and 3 represents Sensory receptor. In the 12-category confusion matrix and ROC curves, the labels are: 0 GABAergic, 1 Nitrergic, 2 Pakachromaffin, 3 Purkinje, 4 Astrocyte, 5 Basket, 6 Ganglion, 7 Granule, 8 Medium spiny, 9 Microglia, 10 Pyramidal, and 11 Sensory receptor.
Figure 10a indicates that, in 4-category classification on the Img_raw dataset, all categories are classified correctly except for a slightly lower accuracy on sensory receptor cells. Figure 10b shows that the MCFNet model correctly predicts most types of neurons, with Nitrergic, Pakachromaffin, and Pyramidal cells reaching an accuracy of 1. The area under the ROC curve (AUC) is a common indicator of classifier performance, ranging from 0 to 1; the closer it is to 1, the better the classifier. As can be seen from Fig. 11, the AUC of the multi-classifier fusion method is relatively high, indicating that the method classifies neurons more accurately.
Comparison with existing methods
To further verify the validity of the model, we compared it with existing classical classification models, including the models used for integration in this paper, on the three neuron morphology datasets under both 4-category and 12-category classification. Zhang Tielin's team^{22} used a CNN (ResNet18) to classify the neuron image dataset into 12 categories. The 4-category comparison results are shown in Table 5, and the 12-category results in Table 6. According to the experimental results, the MCFNet model achieves higher accuracy in neuron morphology classification than any single network, indicating that combining the classification results of multiple models captures the characteristics of neuron morphology better. In addition, the experimental results of MCFNet are verified on all three datasets, further demonstrating the validity and reliability of the model. In general, fusing the outputs of multiple classifiers with the Sugeno fuzzy integral algorithm achieves good morphological classification of neurons.
As shown in Table 7, different integration techniques were compared for classifying neurons on the Img_raw dataset, including the majority voting rule, the simple average method, and the multiplication rule. The experimental results show that the Sugeno fuzzy integral algorithm achieves higher classification accuracy and good classification performance in neuron classification.
The computational complexity analysis
The number of parameters refers to the total count of all weights and bias terms in the neural network, which are optimized during training to improve the model's predictive power. It depends mainly on the number of filters in the CNN, the size of each filter, and the number of layers in the network; together these determine the model's complexity, which in turn has a significant impact on memory requirements and training time. The testing time of a CNN model, on the other hand, refers to the time required to predict or classify new, unseen data after training is complete, reflecting the model's response speed and performance in practical applications.
Table 8 presents detailed information on the three optimized basic networks used in the Sugeno fuzzy integral fusion, including their number of parameters, test time and test accuracy. Compared to these three basic networks, the MCFNet model has a significantly higher number of parameters, almost equal to the sum of the parameters of the three. The model contains 37.5M parameters and its test time is 252 seconds. As shown in the table, the testing duration of the MCFNet model comprises the individual testing times of the three basic networks plus the time needed for model integration. Since the integration time is relatively short, the testing time of the MCFNet model can be regarded as mainly the cumulative testing time of the three basic networks. Although the MCFNet model has a large number of parameters and requires the longest testing time, it demonstrates high accuracy and successfully achieves the goal of neuron classification when computational cost and experimental conditions are not limiting constraints.
GradCAM analysis
Grad-CAM^{43} is a method for visualizing the image regions that a convolutional neural network (CNN) model attends to. By combining the convolutional layer features and gradient information of the CNN model, it generates gradient-weighted activation maps that highlight the image regions with an important influence on the classification task.
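The core Grad-CAM computation is small: average the gradients over each channel to obtain channel weights, take the weighted sum of the feature maps, and apply ReLU. A NumPy sketch with random stand-ins for the real activations and gradients (in practice these come from a forward and backward pass through the network):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and the
    gradients of the target class score w.r.t. those activations."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels: (H, W)
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(1)
acts = rng.random((256, 13, 13))        # e.g. the spatial size of AlexNet's last conv output
grads = rng.standard_normal((256, 13, 13))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (13, 13)
```

The resulting heatmap is upsampled to the input image size and overlaid on it to produce figures like Fig. 12.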
Figure 12 shows the Grad-CAM activation maps generated by the three models that serve as base learners for the ensemble, using a sample image from the Img_raw dataset. As can be seen from Fig. 12, different models focus on different parts of the same neuron image. For example, Fig. 12a shows a Glia image, and Fig. 12b–d show the class activation maps of AlexNet, VGG11_bn, and ResNeXt50, respectively. AlexNet focuses on the neuron's left and right boundaries, VGG11_bn focuses more on the upper-right and lower-left regions of the neuron, and ResNeXt50 focuses on the middle of the neuron.
That different models attend to different regions of the same neuron image shows that the models are both distinct and complementary, which makes model fusion meaningful. From the examples, it is clear that the three models focus on different image feature areas. Therefore, the multi-classifier fusion method can combine the advantages of multiple models, improving accuracy and stability and achieving better results in practical applications.
Conclusions
Morphological classification of neurons not only helps to understand the relationship between neuronal structure and function, but can also be used in the diagnosis and study of nervous system diseases. We propose a method that fuses the classification results of multiple classifiers using the Sugeno fuzzy integral algorithm. First, the pretrained AlexNet model was used and its output layer fine-tuned to reduce training time. Then, in the VGG11_bn network, GAP was substituted for the traditional fully connected layer to reduce the number of parameters, and transfer learning was used to improve generalization. In addition, adding the SE module to ResNeXt50 enables the network to weight channel features and better extract neuron morphological features, and changing the activation function to GELU improves the model's ability to fit the data distribution. Finally, the classification results of the three improved networks are fused using the Sugeno fuzzy integral algorithm. MCFNet was evaluated in multiple experiments on the three NeuroMorpho-rat datasets: 4-category classification accuracy was 98.04%, 91.75% and 93.13%, respectively, and 12-category classification accuracy was 97.82%, 85.68% and 87.60%, respectively. The experimental results show that, compared with other commonly used methods, MCFNet performs better in neuron classification tests on the three datasets, effectively improving the accuracy of neuron morphological classification and demonstrating the effectiveness of our method. However, our method still has shortcomings when classifying neurons with similar morphology and structure; to address this, more data will be collected for further research.
Although Sugeno fuzzy integration improves the accuracy of neuron morphology classification, it remains difficult to distinguish neuron cells with similar morphology. To address this, future studies can focus on classifying neuron morphology in three-dimensional space, extracting more comprehensive features from three-dimensional slices, and verifying the generalization of the model. In addition, to improve the computational efficiency of neuron morphological classification, ensemble techniques such as snapshot ensembling can be considered. The combination with other neuroscience research methods, such as electrophysiology and imaging techniques, can also be actively explored to understand neuron morphology and function more fully.
Data availability
The datasets generated during the current study are available at https://github.com/thomasaimondy/neuromorpho_neuron_type_classification/tree/master/data_model.
References
Shepherd, G. M. The Synaptic Organization of the Brain (Oxford University Press, 2003).
Swanson, L. W. Brain Architecture: Understanding the Basic Plan (Oxford University Press, 2012).
Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492 (2015).
Ascoli, G. A. Mobilizing the base of neuroscience data: The case of neuronal morphologies. Nat. Rev. Neurosci. 7, 318–324 (2006).
Deitcher, Y. et al. Comprehensive morpho-electrotonic analysis shows 2 distinct classes of L2 and L3 pyramidal neurons in human temporal cortex. Cereb. Cortex 27, 5398–5414 (2017).
Gillette, T. A. & Ascoli, G. A. Topological characterization of neuronal arbor morphology via sequence representation: I. Motif analysis. BMC Bioinformatics 16, 1–15 (2015).
Wei, Y., He, F., Qian, Y. & Feng, F. Neuronal morphology classification based on improved residual network. In 2023 IEEE 6th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) Vol. 6 (ed. Wei, Y.) 1371–1375 (IEEE, 2023).
Ching, T. et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 15, 20170387 (2018).
Min, S., Lee, B. & Yoon, S. Deep learning in bioinformatics. Brief. Bioinform. 18, 851–869 (2017).
Deb, S. D. & Jha, R. K. Breast ultrasound image classification using fuzzy-rank-based ensemble network. Biomed. Signal Process. Control 85, 104871 (2023).
Kundu, R., Singh, P. K., Mirjalili, S. & Sarkar, R. COVID-19 detection from lung CT-scans using a fuzzy integral-based CNN ensemble. Comput. Biol. Med. 138, 104895 (2021).
Huang, G., Liu, Z., Van Der Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 4700–4708 (2017).
Szegedy, C. et al. Going deeper with convolutions. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1–9 (2015).
He, K., Zhang, X., Ren, S. & Sun, J. Identity mappings in deep residual networks. In Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 (ed. He, K.) 630–645 (Springer, 2016).
Alavi, A. et al. Automated classification of dopaminergic neurons in the rodent brain. In 2009 International Joint Conference on Neural Networks (ed. Alavi, A.) 81–88 (IEEE, 2009).
Fengqing, H. & Jie, Z. Research for neuron classification based on support vector machine. In 2012 Third International Conference on Digital Manufacturing & Automation (ed. Fengqing, H.) 646–649 (IEEE, 2012).
Cervantes, E. P., Comin, C. H., Junior, R. M. C. & Costa, L. F. Morphological neuron classification based on dendritic tree hierarchy. Neuroinformatics 17, 147–161 (2019).
Song, S., Chen, X., Song, S. & Todo, Y. A neuron model with dendrite morphology for classification. Electronics 10, 1062 (2021).
Lin, X. & Zheng, J. A neuronal morphology classification approach based on locally cumulative connected deep neural networks. Appl. Sci. 9, 3876 (2019).
Lin, X. & Zheng, J. A 3D neuronal morphology classification approach based on convolutional neural networks. In 2018 11th International Symposium on Computational Intelligence and Design (ISCID) Vol. 2 (ed. Lin, X.) 244–248 (IEEE, 2018).
Zheng, J. & Lin, X. Quantitative analysis of influence of morphological feature selection on neuron classification. In 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC) Vol. 2 (ed. Zheng, J.) 211–216 (IEEE, 2019).
Zhang, T. et al. Neuron type classification in rat brain based on integrative convolutional and tree-based recurrent neural networks. Sci. Rep. 11, 7291 (2021).
Ophir, O., Shefi, O. & Lindenbaum, O. Neuronal cell type classification using locally sparse networks. In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW) (ed. Ophir, O.) 1–5 (IEEE, 2023).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2017).
Ioffe, S. & Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448–456 (PMLR, 2015).
Sugeno, M. & Murofushi, T. Pseudo-additive measures and integrals. J. Math. Anal. Appl. 122, 197–222 (1987).
Yang, W.J., Xue, H.R. & Bai, J. Classification and recognition of milk somatic cells based on fusion of fuzzy integral multiple classifiers. In 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA) (ed. Yang, W.J.) 386–389 (IEEE, 2021).
Lin, M., Chen, Q. & Yan, S. Network in network. Preprint at arXiv:1312.4400 (2013).
Hu, J., Shen, L. & Sun, G. Squeeze-and-excitation networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 7132–7141 (2018).
Xie, S., Girshick, R., Dollár, P., Tu, Z. & He, K. Aggregated residual transformations for deep neural networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1492–1500 (2017).
Hendrycks, D. & Gimpel, K. Gaussian error linear units (GELUs). Preprint at arXiv:1606.08415 (2016).
Deng, J. et al. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (ed. Deng, J.) 248–255 (IEEE, 2009).
Ribani, R. & Marengoni, M. A survey of transfer learning for convolutional neural networks. In 2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T) (ed. Ribani, R.) 47–57 (IEEE, 2019).
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2818–2826 (2016).
Wu, S.-L. et al. Fuzzy integral with particle swarm optimization for a motor-imagery-based brain-computer interface. IEEE Trans. Fuzzy Syst. 25, 21–28 (2016).
Sugeno, M. Fuzzy measure and fuzzy integral. Trans. Soc. Instr. Control Eng. 8, 218–226 (1972).
Liu, X., Ma, L. & Mathew, J. Machinery fault diagnosis based on fuzzy measure and fuzzy integral data fusion techniques. Mech. Syst. Signal Process. 23, 690–700 (2009).
Keller, J. M., Liu, D. & Fogel, D. B. Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation (Wiley, 2016).
Tahani, H. & Keller, J. M. Information fusion in computer vision using the fuzzy integral. IEEE Trans. Syst. Man Cybern. 20, 733–741 (1990).
Martínez, G. E. et al. Comparison between Choquet and Sugeno integrals as aggregation operators for pattern recognition. In 2016 Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS) (ed. Martínez, G. E.) 1–6 (IEEE, 2016).
Ralescu, D. & Adams, G. The fuzzy integral. J. Math. Anal. Appl. 75, 562–570 (1980).
Ascoli, G. A., Donohue, D. E. & Halavi, M. NeuroMorpho.Org: A central resource for neuronal morphologies. J. Neurosci. 27, 9247–9251 (2007).
Selvaraju, R. R. et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proc. IEEE International Conference on Computer Vision, 618–626 (2017).
Acknowledgements
This work was partly supported by the National Natural Science Foundation of China (No. 62062014), the grant from Guangxi Key Laboratory of Brain-inspired Computing and Intelligent Chips (No. BCIC23Z1), the Key Scientific Research Project of Guangxi Normal University (No. 2018ZD007), and the Innovation Project of Guangxi Graduate Education (No. XYCSR2024093).
Author information
Authors and Affiliations
Contributions
F.H. and G.L. wrote the main manuscript text. G.L. prepared all figures. H.S. provided guidance on the article. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
He, F., Li, G. & Song, H. Morphological classification of neurons based on Sugeno fuzzy integration and multi-classifier fusion. Sci Rep 14, 16003 (2024). https://doi.org/10.1038/s41598-024-66797-1