Meningioma, glioblastoma, and pituitary tumors are distinct forms of brain tumors. Meningiomas are often non-cancerous tumors that grow in the meninges, the thin membranes that surround the brain. Brain tumors are among the disorders that directly endanger human lives. Precise knowledge of the brain tumor stage is crucial for disease prevention and treatment. This study aimed to determine whether a brain is healthy or abnormal and, if an anomaly is detected, to specify the type of tumor. With the advent of machine learning, MRI image processing has become essential for the rapid and accurate identification of brain tumors. There are several types of meningioma; however, clival, convexity, and suprasellar meningiomas are the main types. The first two types are classified as mild, and the third type as severe. In the United States, approximately 38% of patients are affected by meningioma brain tumors, and the disease mostly affects older people, as stated in the World Health Organization (WHO) report 2021 (https://www.cancer.net/cancer-types/meningioma/statistics). Figure 1 shows an MRI image of a brain meningioma1.

Figure 1. Meningioma brain MRI1.

The diagnosis of a patient depends on a doctor's manual assessment of the patient as well as the patient's test findings. Because automated tools that can assist with diagnosis are lacking and the number of available physicians is limited, there is a greater possibility that a doctor may make an incorrect diagnosis, and patients face longer waiting periods. Instead of spending time with the patient, doctors must manually evaluate test findings and images, which consumes valuable appointment time. Improved medical technology based on automatic learning is therefore vital to increase the efficiency of doctors, which in turn reduces the time patients spend in hospitals and the time it takes for them to recover. This study aimed to create automated methods that assist physicians in identification, reduce the time patients must wait for treatment, and avoid incorrect diagnoses. In particular, this study automates the process by categorizing different forms of brain tumors from images of the patient's brain. When analyzing images, a physician must examine several image slices to identify potential health problems, which is time-consuming and draws attention away from more difficult diagnoses. These constraints are eliminated by applying deep learning algorithms to the tumor image identification process. In this study, the structure of the deep learning network was modified with respect to the centric technique; the centric method requires a shorter computing time for the categorization of brain images than current deep learning methods.

Literature survey

Although machine learning has applications in a wide variety of sectors, the vast majority of research has concentrated on its use in agriculture2 and healthcare to detect, predict, and classify illnesses3,4. The study of breast cancer takes precedence over research in other areas of medicine. The detection and segmentation of lung and colon cancers, the detection and segmentation of lung and brain tumors, and the categorization and diagnosis of respiratory and brain tumors have been presented5. The diagnostic procedure, which involves excision and clinical investigation using a variety of cellular (histological) testing methods, is considered the gold standard for brain tumor diagnosis. Unfortunately, identification using biopsy is invasive and may lead to bleeding and tissue damage, which can result in a loss of function, as stated by Roberts et al.6.

Consequently, noninvasive diagnosis of brain tumors using magnetic resonance imaging is the backbone of contemporary neuroimaging. This allows physicians to evaluate the morphological, molecular, metabolic, and functional characteristics of brain tumors, as stated by Roberts et al.6. White matter (WM), gray matter (GM), and cerebrospinal fluid are the three components that may be observed in a typical MRI scan of a healthy brain, as stated by Rosenbloom et al.7. In a functional MRI scan, the degree to which these tissues differ is determined mostly by the amount of water they contain. White matter, which is composed of myelinated axons and approximately 70 percent water, connects the cerebral cortex to other parts of the brain; it also acts as a conduit for transmitting information among nerve cells and links the right and left sides of the brain. Gray matter, which is composed of approximately 80 percent water, consists of glial and neuronal cells that control brain activity, as well as nuclei located deep within the brain.

Fuzzy C-Means (FCM) was used to determine the grade value of tumors, as stated by Tiwari et al.8. A fuzzy cognitive mapping soft-computing system was used to represent and simulate the professional data; this method was used for classification and precise grading. Another approach comprises two stages: the first stage charts a spider-web plot based on wavelet information for feature extraction, and the second stage performs classification by means of a probabilistic neural network applied to the extracted features9. For classification, a backpropagation neural network approach was adopted10. Wavelet decomposition was used for feature extraction, and principal component analysis was used for feature selection to reduce the data and obtain improved outcomes. The findings of this approach were 100% accurate and required 0.0451 s to complete. In contrast, the Support Vector Machine (SVM) classification approach was used by another author11. This Internet-based brain tumor library provides an MRI image database of 140 brain tumors. A large dataset was employed when analyzing the data for tumor detection, which significantly enhanced the quality. Shape, intensity, and texture are the three criteria used in the feature extraction process. Bal et al.12 used computerized MRI segmentation with the FCM clustering approach to create segments. A total of 820 images were retrieved. A MATLAB toolbox implementation of an SVM classifier was used for classification and achieved an accuracy of 97.95%. Radiologists can make diagnostic decisions based on the information provided by the system. Machine learning and deep learning techniques have also been used by Bruntha et al.13 and Andrushia et al.14 for image classification to aid in the early detection of diseases.

Kumar et al.15 enhanced the accuracy by utilizing an image thickening and background thinning method to extract the performance calculation measurements. According to Elayaraja et al.16, a Genetic Algorithm (GA)-based convolutional neural network (CNN) classification process for segmenting particular regions was developed, achieving 90.37% sensitivity, 98.9% specificity, and 95.21% accuracy. Thiyaneswaran et al.17 used k-means clustering on skin images to detect and segment cancerous regions, achieving an average accuracy of 90.0% on open-access datasets. Kumarganesh et al.18 suggested an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier to detect tumors in brain images and obtained a classification accuracy of 96.6%. Thiyaneswaran et al.19 reported that AlexNet with an ADAM solver achieved a system accuracy of 98.21%. Kumarganesh et al.20 proposed an ANFIS classifier method to classify tumors from brain images and attained a sensitivity of 93.07%, a specificity of 98.79%, and a cancer segmentation accuracy of 97.63%.

The novelty of this paper is stated below.

  • A novel CNN architecture is proposed by modifying the conventional CNN architecture.

  • A novel meningioma brain tumor segmentation algorithm is proposed to segment tumor pixels more accurately.

Proposed methods

In the existing meningioma brain tumor detection process, a conventional CNN architecture is used for brain image classification. The conventional CNN is structured using a large number of convolutional, pooling, and dense layers, which reduces the classification rate and increases the classification time. These drawbacks are overcome by proposing a novel and highly efficient HCNN classifier to detect meningioma brain images and distinguish them from non-meningioma brain images. The HCNN classification technique consists of the Ridgelet transform, feature computations, the HCNN classifier, and a segmentation algorithm. Figure 2 shows the proposed HCNN classifier for the classification of brain images.

Figure 2. HCNN classifier-based brain image classification system.

Ridgelet transform

Although wavelets have been used for image denoising and decomposition over the past two decades, the pixel stability during the wavelet decomposition process is low. In addition, the singular directivity of the wavelet transform is poor. To reduce the error rate during the decomposition process, the singular directivity and pixel stability should be as high as possible.

Therefore, the Ridgelet transform was used in this study instead of the wavelet transform to decompose the source brain image into a number of subbands.

The Ridgelet transform is defined in the following equation.

$$R\left(i,j\right)=\left\{\left(i,j\right):j={k}_{i}+l \left(mod p\right)\right\};$$
(1)

where \({k}_{i}\) is the Radon projection factor and \(p\) is the histogram count.

The Ridgelet approach decomposes the image into Ridgelet coefficients \(R\left(i,j\right)\).
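As an illustration of Eq. (1), the sketch below shows one common discrete realization of a ridgelet decomposition: a finite Radon transform over the line set \(j={k}_{i}+l\,(mod\ p)\), followed by a 1-D wavelet transform of each projection. The paper does not give its implementation, so the prime image size, the `db2` wavelet, and the function names are assumptions.

```python
# Minimal sketch of a finite ridgelet decomposition (finite Radon transform
# followed by a 1-D wavelet transform of each projection). Assumes the image
# is p x p with p prime, as the finite Radon transform requires.
import numpy as np
import pywt  # PyWavelets

def finite_radon(img: np.ndarray) -> np.ndarray:
    """Finite Radon transform: row k holds sums over the lines
    j = k*i + l (mod p) of Eq. (1); the last row is the vertical projection."""
    p = img.shape[0]
    assert img.shape == (p, p), "image must be square, p x p with p prime"
    proj = np.zeros((p + 1, p))
    i = np.arange(p)
    for k in range(p):                 # slope k_i
        for l in range(p):             # intercept l
            proj[k, l] = img[i, (k * i + l) % p].sum()
    proj[p, :] = img.sum(axis=0)       # extra projection for vertical lines
    return proj

def ridgelet_coefficients(img: np.ndarray, wavelet: str = "db2"):
    """Ridgelet coefficients R(i, j): a 1-D wavelet decomposition of each
    Radon projection."""
    return [pywt.wavedec(row, wavelet) for row in finite_radon(img)]

# Usage on a toy 17 x 17 patch (17 is prime); real brain slices would be
# padded or resized to a prime size before decomposition.
coeffs = ridgelet_coefficients(np.random.rand(17, 17))
```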

Feature computations

From the Ridgelet coefficients \(R\left(i,j\right)\), the features related to the pixel intensity in the brain image are computed. In this study, Pixel Intensity Feature (PIF), Pixel Variation Feature (PVF), Pixel Mean Feature (PMF), first-order Intensity Feature (FIF), and second-order Intensity Feature (SIF) were used.

$$Pixel\; Intensity\; Feature \left(PIF\right)=\frac{\sum_{i=1}^{C1}\sum_{j=1}^{C2}{R(i,j)}^{2}}{{i}^{2}\times {j}^{2}}$$
(2)

where \(R(i,j)\) is the coefficient of the ridgelet-transformed image and C1 and C2 represent the number of rows and columns in \(R(i,j)\).

$$Pixel \;Variation\; Feature \left(PVF\right)=\frac{\sum_{i=1}^{C1}\sum_{j=1}^{C2}{((R\left(i,j\right)-i)}^{2}+\sum_{i=1}^{C1}\sum_{j=1}^{C2}{((R\left(i,j\right)-j)}^{2}}{i\times j}$$
(3)
$$Pixel\; Mean \;Feature \left(PMF\right)=\frac{\sum_{i=1}^{C1}\sum_{j=1}^{C2}R(i,j)}{i\times j}$$
(4)
$$First \;order\; Intensity \;Feature \left(FIF\right)=\frac{\sum_{i=1}^{C1}\sum_{j=1}^{C2}R\left(i,j\right)\times i\times j}{\left(i+1\right)(j+1)}$$
(5)
$$Second\; order \;Intensity\; Feature \left(SIF\right)=\frac{\sum_{i=1}^{C1}\sum_{j=1}^{C2}R\left(i,j\right)\times {i}^{2}*{j}^{2}}{{\left(i+1\right)}^{2}{(j+1)}^{2}}$$
(6)

These computed pixel intensity features are fed into the proposed HCNN classifier for an effective classification process.
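As a concrete reading of Eqs. (2)–(6), the short sketch below computes the five features from a ridgelet coefficient matrix. The equations leave the free indices \(i\) and \(j\) in the denominators implicit, so the sketch assumes they stand for the matrix dimensions C1 and C2; this interpretation, and the function name, are ours.

```python
# Sketch of the pixel-intensity feature computations in Eqs. (2)-(6),
# assuming the free i and j in each denominator denote C1 and C2.
import numpy as np

def intensity_features(R: np.ndarray) -> dict:
    """PIF, PVF, PMF, FIF and SIF from a ridgelet coefficient matrix R."""
    C1, C2 = R.shape
    i = np.arange(1, C1 + 1)[:, None]   # row indices as a column vector
    j = np.arange(1, C2 + 1)[None, :]   # column indices as a row vector

    pif = np.sum(R ** 2) / (C1 ** 2 * C2 ** 2)                          # Eq. (2)
    pvf = (np.sum((R - i) ** 2) + np.sum((R - j) ** 2)) / (C1 * C2)     # Eq. (3)
    pmf = np.sum(R) / (C1 * C2)                                         # Eq. (4)
    fif = np.sum(R * i * j) / ((C1 + 1) * (C2 + 1))                     # Eq. (5)
    sif = np.sum(R * i**2 * j**2) / ((C1 + 1)**2 * (C2 + 1)**2)         # Eq. (6)
    return {"PIF": pif, "PVF": pvf, "PMF": pmf, "FIF": fif, "SIF": sif}

# Usage on a toy coefficient matrix
features = intensity_features(np.random.rand(64, 64))
```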

Proposed HCNN classifier

Classifiers play an important role in brain-image classification. In the existing meningioma image classification process, the conventional CNN architecture is used to perform meningioma and non-meningioma image classification. The conventional CNN architecture receives brain images as an input pattern and produces output using the internal features generated through its internal layers. Although this approach achieves a reasonable classification rate, it does not provide optimal meningioma image classification. Therefore, the conventional CNN architecture was modified into the HCNN classification architecture, which combines deep learning and machine learning modules, as depicted in Fig. 3.

Figure 3. Proposed HCNN classifier using SFCM layer pattern.

The proposed HCNN architecture for the meningioma and non-meningioma image classification system consists of two convolutional layers (Convolutional layer-1 and Convolutional layer-2), two pooling layers (Pooling layer-1 and Pooling layer-2), and a Spatial Fuzzy C-Means (SFCM) layer at the output. Convolutional layer-1 in the proposed HCNN architecture consists of 512 filters with a 5 × 5 stride function. Convolutional layer-2 consists of 1024 filters with a 7 × 7 stride function. Pooling layer-1 is placed between these two convolutional layers to reduce the output size of Convolutional layer-1, and Pooling layer-2 is placed at the output of Convolutional layer-2 to reduce its output size. The pooling layer responses are then transferred to the SFCM layer to produce the classification result (either meningioma or non-meningioma).
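The following PyTorch sketch mirrors the layer pattern just described: two convolutional layers with 512 and 1024 filters, two pooling layers, and a fuzzy c-means step standing in for the SFCM output layer. The kernel sizes, padding, pooling windows, and the plain (non-spatial) fuzzy c-means used here are assumptions; the paper does not specify these details.

```python
# Sketch of the HCNN layer pattern: conv/pool feature extraction followed by
# a fuzzy-c-means clustering step in place of a dense output layer.
import torch
import torch.nn as nn

class HCNNBackbone(nn.Module):
    """Convolution/pooling pattern of the proposed HCNN."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 512, kernel_size=5, padding=2)     # Convolutional layer-1
        self.pool1 = nn.MaxPool2d(2)                                  # Pooling layer-1
        self.conv2 = nn.Conv2d(512, 1024, kernel_size=7, padding=3)   # Convolutional layer-2
        self.pool2 = nn.MaxPool2d(2)                                  # Pooling layer-2

    def forward(self, x):
        x = self.pool1(torch.relu(self.conv1(x)))
        x = self.pool2(torch.relu(self.conv2(x)))
        return torch.flatten(x, start_dim=1)       # feature vectors for the SFCM step

def fuzzy_cmeans_labels(features: torch.Tensor, n_clusters: int = 2,
                        m: float = 2.0, iters: int = 50) -> torch.Tensor:
    """Plain fuzzy c-means over the backbone features, standing in for the
    SFCM output layer (two clusters: meningioma vs. non-meningioma)."""
    x = features.detach()
    centers = x[torch.randperm(x.shape[0])[:n_clusters]]
    for _ in range(iters):
        d = torch.cdist(x, centers).clamp_min(1e-9)        # distances to cluster centers
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u = u / u.sum(dim=1, keepdim=True)                 # fuzzy membership degrees
        um = u ** m
        centers = (um.T @ x) / um.sum(dim=0)[:, None]      # update cluster centers
    return u.argmax(dim=1)

# Usage on a toy batch of single-channel 32 x 32 patches
images = torch.rand(8, 1, 32, 32)
labels = fuzzy_cmeans_labels(HCNNBackbone()(images))
```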

Figure 4a,b illustrate the images classified using the HCNN classifier-based meningioma detection system.

Figure 4. (a) Meningioma images; (b) non-meningioma images1.

Segmentation

After the classification process is completed, the tumor pixels in the meningioma images are segmented using the probability-based morphological algorithm proposed in this study. The steps of the proposed segmentation approach are as follows.

Step 1:

Apply a morphological opening to the meningioma brain image \(M (i,j)\) using the following equation:

$${M}_{o}=open (M\left(i,j\right))$$
(7)

Step 2:

Apply a morphological closing to the meningioma brain image \(M (i,j)\) using the following equation:

$${M}_{c}=close (M\left(i,j\right))$$
(8)

Step 3:

Find the probability density functions of the opened and closed images using the following equations:

$$p1=pdf({M}_{o})$$
(9)
$$p2=pdf({M}_{c})$$
(10)

Step 4:

Find the average of the computed probability density functions of the opened and closed images using the following equation:

$${A}_{t}=\frac{1}{2}(p1+p2)$$
(11)

Step 5:

Compute the probable open and closed images using the following equations:

$${P}_{open}=\frac{{M}_{O}}{{A}_{t}}$$
(12)
$${P}_{close}=\frac{{M}_{c}}{{A}_{t}}$$
(13)

Step 6:

Compute the difference image between the probable open and closed images using the following equation:

$${M}_{diff}=\left| {P}_{open}-{P}_{close} \right|$$
(14)
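A compact sketch of Steps 1–6 using SciPy's grey-scale morphology is given below. The structuring-element size, the histogram-based estimate of the probability density functions, and the final thresholding of the difference image into a tumor mask are assumptions not fixed by the paper.

```python
# Sketch of the probability-based morphological segmentation, Steps 1-6.
import numpy as np
from scipy import ndimage

def segment_tumor(M: np.ndarray, size: int = 5) -> np.ndarray:
    Mo = ndimage.grey_opening(M, size=(size, size))     # Step 1: morphological open, Eq. (7)
    Mc = ndimage.grey_closing(M, size=(size, size))     # Step 2: morphological close, Eq. (8)

    def pdf(img):
        # Step 3: per-pixel probability density, approximated by a normalized
        # histogram looked up at each pixel's intensity (Eqs. (9)-(10)).
        hist, edges = np.histogram(img, bins=256, density=True)
        idx = np.clip(np.digitize(img, edges[:-1]) - 1, 0, 255)
        return hist[idx]

    p1, p2 = pdf(Mo), pdf(Mc)
    At = 0.5 * (p1 + p2)                                # Step 4: average density, Eq. (11)
    At = np.where(At > 0, At, np.finfo(float).eps)      # guard against division by zero

    P_open = Mo / At                                    # Step 5: probable open image, Eq. (12)
    P_close = Mc / At                                   # Step 5: probable closed image, Eq. (13)

    M_diff = np.abs(P_open - P_close)                   # Step 6: difference image, Eq. (14)
    return M_diff > M_diff.mean()                       # assumed threshold for the tumor mask

# Usage on a toy 240 x 240 "slice"
mask = segment_tumor(np.random.rand(240, 240))
```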

Results and discussions

The MATLAB R2020 version was used to simulate the HCNN method in this study, and the simulation dataset was constructed by obtaining images from the BRATS 20191 and Nanfang datasets21.

The BRATS 2019 dataset contains 350 meningioma images and 360 non-meningioma images. Of these, 175 non-meningioma and 180 meningioma images were taken from the dataset and used to train the proposed system, and another 175 non-meningioma and 180 meningioma images were taken from the dataset and used to test it. The BRATS 2019 images are approximately 240 × 240 pixels and are quantized to 8-bit pixel resolution.

The Nanfang University dataset provides a total of 600 non-meningioma images and 512 meningioma images for evaluating the proposed system. Of these, 300 non-meningioma and 256 meningioma brain MRI images were taken from the dataset and used to train the proposed system, and another 300 non-meningioma and 256 meningioma brain MRI images were taken from the dataset and used to test it. The Nanfang University images are approximately 512 × 512 pixels and are quantized to 8-bit pixel resolution.

Table 1 shows the analysis of classification accuracy based on different classification algorithms on the BRATS 2019 dataset. The meningioma brain tumor detection methodology stated in this paper achieved a classification accuracy of 99.7% for the BRATS 2019 dataset.

Table 1 Analysis of classification accuracy based on different classification algorithms on BRATS 2019 dataset.

The HCNN method was tested by replacing the proposed HCNN classifier with conventional machine learning classification algorithms to verify the effectiveness of the proposed meningioma detection process on the BRATS 2019 dataset brain images.

On the BRATS 2019 dataset brain images, the tumor segmentation technique obtains a classification accuracy of 97.2% with the Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier, 95.9% with the SVM classifier, 94.3% with the Neural Network (NN) classifier, 94.8% with the Adaboost classifier, and 93.9% with the Fuzzy C-Means classifier.

Furthermore, the Nanfang dataset was used in this study to verify the effectiveness of the HCNN-based meningioma classification technique.

The proposed system received the test brain MRI image from the testing dataset, and the testing function of the proposed algorithm was executed against the trained patterns. The HCNN methodology proposed in this study achieved 99.36% classification accuracy on the Nanfang dataset brain images.

Table 2 shows the classification accuracy analysis based on the feature combinations (HCNN classification results) for the Nanfang dataset.

Table 2 Analysis of classification accuracy based on different classification algorithms on Nanfang dataset.

The HCNN-based meningioma classification technique was tested by replacing the proposed HCNN classifier with conventional machine learning classification algorithms in this study on the Nanfang dataset brain images.

With this substitution, the ANFIS classifier obtains 95.29% classification accuracy, the SVM classifier 93.98%, the NN classifier 92.19%, the Adaboost classifier 90.76%, and the Fuzzy C-Means classifier 91.76% on the Nanfang dataset brain images.

Furthermore, the proposed method was applied and tested on the recent BRATS 2022 dataset, and the experimental results for this dataset are compared with those for the BRATS 2019 and Nanfang datasets in this paper. Table 3 shows the classification accuracy comparisons with respect to the different datasets used in this paper.

Table 3 Classification accuracy comparisons with respect to different datasets used in this paper.

Transforms are an important processing module in the meningioma image detection system; hence, the proposed HCNN-based classification technique was analyzed based on different transforms. In this study, different transforms were applied to decompose the brain images, and their performances were compared in terms of classification accuracy.

Table 4 presents the analysis of classification accuracy based on different transforms on the BRATS 2019 dataset. The proposed HCNN-based classification technique using the Ridgelet transform attained 99.7% classification accuracy, whereas the proposed technique attained 93.8% using the Gabor transform, 92.1% using the Discrete Wavelet Transform (DWT), and 94.8% using the Non-Subsampled Contourlet Transform (NSCT).

Table 4 Analysis of classification accuracy based on different transforms on BRATS 2019 dataset.

Table 5 illustrates the impact of different transforms on Nanfang dataset images.

Table 5 Analysis of classification accuracy based on different transforms on Nanfang dataset.

Table 6 lists the classification accuracies on the BRATS 2019 dataset. The proposed meningioma detection system obtains a classification accuracy of 47.9% using PIF, 49.0% using PVF, 51.9% using PMF, 50.6% using FIF, 55.3% using SIF, 71.9% using PIF and PVF, 74.3% using PIF and PMF, 75.1% using PIF and FIF, 93.8% using PIF + PVF + PMF + FIF, and 93.8% using the PIF + PVF + PMF + FIF + SIF features.

Table 6 Analysis of classification accuracy based on different features on BRATS 2019 dataset.

Table 7 presents an analysis of the classification accuracy based on different features on the Nanfang dataset. The HCNN approach obtains 56.9% classification accuracy using PIF, 55.3% classification accuracy using PVF, 59.1% classification accuracy using PMF, 63.9% classification accuracy using FIF, 65.2% classification accuracy using SIF, 70.1% classification accuracy using PIF and PVF, 68.6% classification accuracy using PIF and PMF, 75.8% classification accuracy using PIF and FIF, 94.3% classification accuracy using PIF + PVF + PMF + FIF, and 99.36% classification accuracy using PIF + PVF + PMF + FIF + SIF.

Table 7 Analysis of classification accuracy based on different features on Nanfang dataset.

The following Eqs. (15)–(20) were used to analyze the meningioma model:

$$Sensitivity =\frac{D}{C+D}*100\%$$
(15)
$$Specificity=\frac{B}{A+B}*100\%$$
(16)
$$Segmentation\;Accuracy=\frac{B+D}{A+B+C+D}$$
(17)
$$Precision \left(pr\right)=\frac{A}{A+C}*100\%$$
(18)
$$True\; Positive\; Rate \left(TPR\right)=\frac{A}{A+D}*100\%$$
(19)
$$False\; Positive \;Rate \left(FPR\right)=\frac{A}{A+D}*100\%$$
(20)

The true negative pixel pattern is represented by A, false positive pixel pattern by B, false negative pixel pattern by C, and true positive pixel pattern by D.
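For completeness, a small evaluation helper is sketched below. It counts the pixel patterns A–D exactly as defined above and then evaluates the standard forms of these measures; it is only an illustrative sketch, and the function name, dictionary keys, and use of the standard formulas are our assumptions rather than the paper's MATLAB implementation.

```python
# Sketch of a metric computation from predicted and ground-truth tumor masks,
# using the A-D pixel counts defined above (A = TN, B = FP, C = FN, D = TP)
# and the standard definitions of each measure.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    pred, truth = pred.astype(bool), truth.astype(bool)
    A = np.sum(~pred & ~truth)   # true negative pixels
    B = np.sum(pred & ~truth)    # false positive pixels
    C = np.sum(~pred & truth)    # false negative pixels
    D = np.sum(pred & truth)     # true positive pixels
    total = A + B + C + D
    return {
        "sensitivity (%)": 100.0 * D / (C + D),
        "specificity (%)": 100.0 * A / (A + B),
        "accuracy (%)":    100.0 * (A + D) / total,
        "precision (%)":   100.0 * D / (B + D),
        "TPR (%)":         100.0 * D / (C + D),
        "FPR (%)":         100.0 * B / (A + B),
    }

# Usage on toy masks
print(segmentation_metrics(np.random.rand(64, 64) > 0.5,
                           np.random.rand(64, 64) > 0.5))
```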

Table 8 presents the performance estimation of the brain tumor segmentation method using the BRATS 2019 dataset.

Table 8 Performance estimation of HCNN method on BRATS 2019 dataset.

The HCNN-based meningioma classification technique achieved 99.31% sensitivity, 99.37% specificity, 99.24% segmentation accuracy, 99.23% precision, 99.03% TPR, and 99.05% FPR on the BRATS 2019 images.

Table 9 presents the performance estimation of the brain tumor segmentation method for the Nanfang dataset.

Table 9 Performance estimation of HCNN method on Nanfang dataset.

The proposed HCNN technique achieved 99.35% sensitivity, 99.22% specificity, 99.04% segmentation accuracy, 99.14% precision, 99.17% TPR, and 99.36% FPR on the brain MRI images in the Nanfang dataset.

In this study, different segmentation methods were applied to segment the tumor pixels in the brain images, and their performances were compared in terms of classification accuracy.

Table 10 presents the simulation results of the meningioma tumor segmentation method on the BRATS 2019 dataset with respect to the different segmentation algorithms. The meningioma tumor detection method using the proposed morphological algorithm achieved 99.31% sensitivity, 99.37% specificity, 99.24% segmentation accuracy, 99.23% precision, 99.03% TPR, and 99.05% FPR.

Table 10 Simulation results of the proposed HCNN method on different segmentation algorithms.

The meningioma tumor detection method using the existing morphological algorithm achieved 97.98% sensitivity, 97.38% specificity, and 97.12% segmentation accuracy.

The meningioma tumor detection method using the region-growing algorithm yielded 95.39% sensitivity, 95.19% specificity, and 96.03% segmentation accuracy.

Table 11 lists the impact of different segmentation algorithms on the Nanfang dataset. The meningioma tumor detection method using the proposed morphological algorithm achieved 99.35% sensitivity, 99.22% specificity, 99.04% segmentation accuracy, 99.14% precision, 99.17% TPR, and 99.36% FPR.

Table 11 Simulation results of meningioma tumor segmentation method on Nanfang dataset with respect to different segmentation algorithms.

The meningioma tumor detection method using the existing morphological algorithm achieved 97.98% sensitivity, 97.13% specificity, and 97.37% segmentation accuracy.

The meningioma tumor detection method using the region-growing algorithm yielded 95.39% sensitivity, 95.98% specificity, and 96.05% segmentation accuracy.

In this study, different classifiers were applied to classify the brain images, and their performances were compared in terms of classification accuracy.

Table 12 shows the simulation results of the meningioma detection methods on the BRATS 2019 dataset.

Table 12 Simulation results of meningioma detection methods on BRATS 2019 dataset.

Figure 5 shows the graphical simulation results of meningioma detection methods using the BRATS 2019 dataset.

Figure 5. Graphical simulation results of meningioma detection methods on the BRATS 2019 dataset.

Table 13 presents the simulation results of the meningioma detection methods for the Nanfang dataset.

Table 13 Simulation results of meningioma detection methods on Nanfang dataset.

Figure 6 shows the graphical simulation results of the meningioma detection methods on the Nanfang dataset.

Figure 6. Graphical simulation results of meningioma detection methods on the Nanfang dataset.

This meningioma detection framework was compared with conventional tumor segmentation methods, as shown in Table 14, with respect to the brain MRI images in the Nanfang dataset. As shown in Table 14, the proposed HCNN technique produces the best simulation results compared with conventional methods27,29,30,31,32,33,34.

Table 14 Comparisons of proposed simulation results with conventional method simulation results on Nanfang dataset images.

This meningioma detection framework was compared with conventional tumor segmentation methods, as shown in Table 15, with respect to the brain MRI images in the BRATS 2019 dataset.

Table 15 Comparisons of proposed simulation results with conventional method simulation results on BRATS 2019 dataset.

The proposed system obtains 99.81% classification accuracy, 99.2% sensitivity, 99.7% specificity, and 99.8% segmentation accuracy on the BRATS 2022 dataset. Table 16 shows the comparisons of the proposed simulation results with conventional method simulation results on the BRATS 2022 dataset.

Table 16 Comparisons of proposed simulation results with conventional method simulation results on BRATS 2022 dataset.

Conclusion

In this study, an HCNN classifier was proposed for the classification of brain images. The proposed HCNN technique uses the Ridgelet transform to decompose the brain image, and the pixel intensity features are then computed from the decomposed coefficients. The computed pixel intensity features were trained and classified using the HCNN classifier. The proposed HCNN-based meningioma detection system achieved 99.31% sensitivity, 99.37% specificity, and 99.24% segmentation accuracy on the BRATS 2019 dataset. The proposed HCNN technique achieved 99.35% sensitivity, 99.22% specificity, and 99.04% segmentation accuracy on the brain MRI images in the Nanfang dataset, and it obtained 99.81% classification accuracy, 99.2% sensitivity, 99.7% specificity, and 99.8% segmentation accuracy on the BRATS 2022 dataset. In addition, the impact of the proposed morphological method on tumor segmentation was compared with that of other existing tumor segmentation algorithms. The major contribution of this paper is a complete computer-based automated method for identifying meningioma and non-meningioma images using an efficient deep learning architecture. Moreover, the classification accuracy and performance analysis parameters of the proposed deep learning architecture are consistently high compared with existing deep learning models.