Enhanced multimodal medical image fusion based on Pythagorean fuzzy set: an innovative approach

Medical image fusion combines multi-modality images into a single output image with superior information and a better visual appearance, free of vagueness and uncertainty, making it well suited to diagnosis. This manuscript proposes Pythagorean fuzzy set (PFS)-based medical image fusion. In the first phase, a two-scale Gaussian filter decomposes the source images into base and detail layers. In the second phase, a spatial frequency (SF)-based fusion rule is applied to the detail layers to preserve edge-oriented details. The base layer images, in turn, are converted into Pythagorean fuzzy images (PFIs) using the optimum value obtained by Pythagorean fuzzy entropy (PFE). In the third phase, a blackness and whiteness count fusion rule is applied to image blocks decomposed from the two PFIs. Finally, the enhanced fused image is obtained by reconstructing the fused PFI blocks and performing defuzzification. The proposed method was evaluated on different datasets for disease diagnosis and achieved better mean (M), standard deviation (SD), average gradient (AG), SF, modified spatial frequency (MSF), mutual information (MI), and fusion symmetry (FS) values than state-of-the-art methods. This advancement matters for healthcare and medical imaging, including enhanced diagnostics and treatment planning.

and cerebral blood flow information. Several image fusion algorithms have emerged that operate at the pixel, feature, and decision levels. In feature-level fusion, independent data points are extracted from the modalities to form two point sets that can be adapted for connection. Decision-level fusion associates divergent inputs with a common phenomenon using decision labels. Pixel-level fusion is further divided into spatial- and spectral-domain approaches [7][8][9]. The maximum, median, and minimum methods belong to the spatial-domain methods.
These methods produce a degraded fused image with unwanted noise. Subspace-based statistical techniques, namely principal component analysis (PCA) 10 and independent component analysis (ICA) 11, also fall under the spatial approach and produce spectral distortions and low contrast. Pyramid-based decomposition methods [12][13][14], such as the Gaussian and Laplacian pyramids, yield fused images with a loss of spatial data.
To address these drawbacks, wavelet transform-based fusion methods were introduced, such as the discrete wavelet transform (DWT) 15, stationary wavelet transform (SWT) 16, and dual-tree complex wavelet transform (DTCxWT) 17. In addition, multiscale transforms such as the curvelet and contourlet transforms produce edge information in the fused image. These methods, however, are not shift-invariant and have limited directionality. To overcome this, the nonsubsampled contourlet transform (NSCT) 18 and nonsubsampled shearlet transform (NSST) 19 were employed.
Medical images are poorly illuminated: various structures are not visible, and some parts are vague in nature. In image processing applications, the fuzzy set plays a tremendous role in improving image quality in various aspects such as contrast, highlighting the region of interest, and sharpening edges. Zadeh proposed the fuzzy set as a mathematical tool in 1965 20. Manchanda et al. 21 proposed image fusion using the fuzzy transform, which produces a fused image with edge distortions. The intuitionistic fuzzy set (IFS), a generalization of the fuzzy set, was introduced by Atanassov in 1986 22. IFS removes vagueness and uncertainty by utilizing a hesitation degree.
Balasubramaniam et al. 23 and Tirupal et al. 24 proposed intuitionistic fuzzy set-based fusion methods; these remove vagueness but do not cover all uncertainties. As discussed above, the major issues with previous techniques are undesirable artifacts, distortions, and unwanted errors. This paper therefore proposes a Gaussian filtering-based image fusion method built on a two-layer decomposition. The decomposition extracts significant features from the original images, so the fused image contains more reliable information without distortions and artifacts, which aids clinical diagnosis. The base layer images are fused using the Pythagorean fuzzy set (PFS) for high contrast with maximum information, and the detail layer images are fused based on spatial frequency for clear representation. Lastly, the enhanced fused image is reconstructed by summing the fused base and detail layer images.
The major contributions of the proposed work are as follows:
• The two-layer decomposition model decomposes the original images into base and detail layers for the extraction of structural and detailed information.
• To deal with uncertainty, a new Pythagorean fuzzy approach is used for medical image fusion, producing an enhanced fused image without artifacts and uncertainties.
• The proposed fusion method produces better fusion results in terms of visual appearance and quantitative measures than other state-of-the-art methods.
The construction of an IFS includes membership (µ) and non-membership (ν) degrees with a hesitation margin (π), such that µ + ν ≤ 1 and µ + ν + π = 1. The IFS concept offers a flexible framework for handling vagueness and uncertainty and has proven useful in many real-time settings such as selection processes, medical diagnosis, and multi-criteria decision-making 25. Situations where µ + ν ≥ 1 cannot be modeled by an IFS, which led to a new mathematical tool, the Pythagorean fuzzy set (PFS) 26. The PFS deals with vagueness and uncertainty more accurately and sufficiently than the IFS by allowing membership (µ) and non-membership (ν) grades with either µ + ν ≤ 1 or µ + ν ≥ 1, provided that µ² + ν² ≤ 1; it follows that µ² + ν² + π² = 1, where π is the PFS index or indeterminacy degree. The Pythagorean fuzzy set has attracted many researchers in application areas such as segmentation, enhancement, medical diagnosis, and decision-making.
The remainder of the paper is arranged as follows: Section "Pythagorean fuzzy approach for image fusion" covers the Pythagorean fuzzy approach for image fusion. Section "Image enhancement" describes the proposed medical image fusion in grayscale and color. Section "Experimental analysis" presents the experimental analysis. In Section "Fusion results and analysis", the fusion results are analyzed. Lastly, the conclusion is presented in Section "Conclusion".

Construction of a Pythagorean fuzzy set
Let H be a finite set. A Pythagorean fuzzy set (PFS) 27 G in H is defined as G = {⟨h, µG(h), νG(h)⟩ | h ∈ H}, where µG(h) and νG(h) are the membership and non-membership degrees of the element h ∈ H, with the condition that 0 ≤ (µG(h))² + (νG(h))² ≤ 1. The indeterminacy degree πG(h) of the PFS reflects the uncertainty of the membership and non-membership functions and is defined as: (1) πG(h) = √(1 − (µG(h))² − (νG(h))²). The concepts of PFSs and IFSs are compared in Table 1.

Pythagorean fuzzy image
PFS is an extended version of the IFS 30. Its construction starts from a generating function 31 N satisfying N(1) = 0 and N(0) = 1.
Based on the IFS, the membership degree of the PFS is estimated by Eq. (5), the non-membership degree by Eq. (6), and the indeterminacy degree by Eq. (7). As mentioned above, α is not a fixed value for all images and is optimized by Pythagorean fuzzy entropy (PFE). In this article, the Pythagorean fuzzy entropy suggested by Peng X 32 is adopted, and PFE is calculated using Eq. (8) for α values ranging over [0.1, 1.0]. The α value corresponding to the highest PFE is treated as the optimum value, denoted α_opt (Eq. (9)), as shown in Table 2. Substituting α_opt into Eq. (5) creates a Pythagorean fuzzy image (PFI).
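The α search described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulas: the PFS generating function and Peng's entropy expression were not fully reproduced in the text, so `pfs_degrees` and `pfe` below use assumed stand-in forms that satisfy the Pythagorean conditions; only the grid search over α ∈ [0.1, 1.0] with an entropy maximiser follows the paper directly.

```python
import numpy as np

def fuzzify(img):
    """Standard fuzzification: scale gray levels to [0, 1]."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def pfs_degrees(mu, alpha):
    """Illustrative PFS construction (assumed form, not the paper's exact
    generating function): raise membership/complement to alpha, then
    rescale so the Pythagorean condition mu^2 + nu^2 <= 1 holds."""
    mu_p = mu ** alpha
    nu_p = (1.0 - mu) ** alpha
    s = np.sqrt(np.maximum(mu_p ** 2 + nu_p ** 2, 1.0))
    mu_p, nu_p = mu_p / s, nu_p / s
    pi_p = np.sqrt(np.clip(1.0 - mu_p ** 2 - nu_p ** 2, 0.0, 1.0))
    return mu_p, nu_p, pi_p

def pfe(mu, nu):
    """Stand-in Pythagorean fuzzy entropy: maximal when mu^2 and nu^2
    are closest, i.e. when the set is most ambiguous."""
    return 1.0 - float(np.mean(np.abs(mu ** 2 - nu ** 2)))

def optimal_alpha(img, alphas=None):
    """Grid-search alpha over [0.1, 1.0] and return the PFE maximiser."""
    alphas = np.arange(0.1, 1.01, 0.1) if alphas is None else alphas
    mu = fuzzify(img)
    scores = [pfe(*pfs_degrees(mu, a)[:2]) for a in alphas]
    return float(alphas[int(np.argmax(scores))])
```

Whatever entropy is plugged in, the key property is that the selected degrees always satisfy µ² + ν² ≤ 1 and µ² + ν² + π² = 1, so the resulting PFI is a valid Pythagorean fuzzy image.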

Image enhancement
The parameter 33 is a fuzzy hedge, and it varies according to the image. It is used to modify the Pythagorean fuzzy image µ PFS G (h) and also controls the contrast of the image, as in Eq. (10), with α > 0 as in Eq. (2).

Table 1. IFSs and PFSs 27.
Then, contrast stretching is applied to the modified PFI using the INT operator 30, represented mathematically in Eq. (11). Applying Eq. (11) yields the contrast-enhanced image.
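Since Eq. (11) itself is not reproduced in the extracted text, the sketch below uses the classical Zadeh contrast intensification (INT) operator, which is the standard form cited in fuzzy image enhancement: values below the 0.5 crossover are compressed toward 0 and values above it pushed toward 1.

```python
import numpy as np

def int_operator(mu):
    """Classical INT (contrast intensification) operator on membership
    values in [0, 1]: 2*mu^2 below the 0.5 crossover, 1 - 2*(1-mu)^2
    above it, stretching contrast around 0.5."""
    mu = np.asarray(mu, dtype=np.float64)
    return np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
```

Applying the operator repeatedly sharpens the contrast further; a single pass is the usual choice for enhancement before fusion.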

Proposed fusion method description
This section describes the proposed fusion algorithm based on the Pythagorean fuzzy set in detail. First, the pre-registered source images are decomposed into two layers using a Gaussian filter. Second, the base layers are fused based on PFSs, and the detail layers are fused using spatial frequency (SF). Lastly, the enhanced fused image is rebuilt by combining the fused base and detail layer images. The schematic flowchart of the proposed grayscale image fusion method is shown in Fig. 1.

Two-Layer decomposition using Gaussian filter
Let X1 and X2 be two pre-registered input images of dimension A × B. These images are decomposed into base and detail layers. The base layer is obtained using (12), and the detail layer is obtained by subtracting the corresponding base layer from the original image, as specified in (13).
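The two-layer decomposition of Eqs. (12) and (13) can be sketched as below. The Gaussian filter scale `sigma` is an assumption (the paper does not state its parameter); the blur is implemented with numpy only so the sketch is self-contained.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel truncated at 3*sigma, normalised to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing with edge padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(img.astype(np.float64), r, mode='edge')
    # convolve each row, then each column, keeping the original size
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, rows)

def two_layer_decompose(img, sigma=2.0):
    """Base layer = Gaussian-filtered source (Eq. 12);
    detail layer = source minus base (Eq. 13)."""
    base = gaussian_blur(img, sigma)
    return base, img.astype(np.float64) - base
```

By construction, summing the two layers recovers the source exactly, which is what makes the final reconstruction step (base fused + detail fused) lossless with respect to the decomposition.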

Base layer image fusion
The base layer contains most of the structural information from the source images. In general, medical images are poorly illuminated and some parts are not visible, so PFS is used to enhance the fusion results and remove uncertainties. The schematic flowchart of the base layer fusion method is shown in Fig. 2. The base layer fusion algorithm is summarized as follows: 1. Initially, the base layer images of dimension A × B are fuzzified separately using the membership function (14),
where Y(a, b) is the gray level of the image at pixel (a, b), and Ymax and Ymin are the maximum and minimum gray levels of the image, respectively. 2. Calculate the optimum value α using Eqs. (8) and (9) for the two base layer images separately; this α value varies from image to image. 3. Based on the optimum value α, calculate the membership, non-membership, and indeterminacy degrees of the two base layer images separately using Eqs. (5), (6), and (7).
4. Finally, the enhanced PFI images are obtained from Eq. (11) and represented as Y_PFI1 and Y_PFI2. 5. Decompose the two PFI images Y_PFI1 and Y_PFI2 into i × j blocks; the l-th block of each image is represented as Y_PFI1^l and Y_PFI2^l. 6. The blackness and whiteness counts of each block of the two images Y_PFI1^l and Y_PFI2^l are calculated using min, max, and average operations, as given below. 7. The base layer fused image is obtained by reconstructing the Y_fused image blocks and then performing defuzzification to obtain a crisp image, which is the inversion of (14).
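The block-wise steps above can be sketched as follows. The precise blackness/whiteness count formulas were lost in extraction, so `count_fuse_blocks` encodes one plausible reading, labeled as such: pixels above a block's mean count as "white" (bright, informative), the block with the larger white count wins, and ties fall back to the average operation mentioned in step 6.

```python
import numpy as np

def count_fuse_blocks(b1, b2):
    """Assumed reading of the blackness/whiteness count rule: pixels
    above the block mean are 'white', below are 'black'; keep the block
    with more white pixels, averaging on a tie."""
    w1 = int(np.sum(b1 > b1.mean()))
    w2 = int(np.sum(b2 > b2.mean()))
    if w1 > w2:
        return b1
    if w2 > w1:
        return b2
    return (b1 + b2) / 2.0

def fuse_base_layers(pfi1, pfi2, block=8):
    """Tile the two enhanced PFIs into block x block regions and fuse
    block-wise; image sides are assumed to be multiples of `block`."""
    out = np.empty(pfi1.shape, dtype=np.float64)
    for i in range(0, pfi1.shape[0], block):
        for j in range(0, pfi1.shape[1], block):
            s = (slice(i, i + block), slice(j, j + block))
            out[s] = count_fuse_blocks(pfi1[s], pfi2[s])
    return out
```

Defuzzification (step 7) then inverts Eq. (14), i.e. maps the fused membership values back to the gray-level range Ymin..Ymax of the crisp image.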

Detail layer image fusion
Detail layers contain edge-related information of the sub-images. Spatial frequency (SF) 34 measures the clarity and activity level of an image and is sensitive to changes in image intensity. For this reason, the SF-based fusion rule is used to combine the detail layers and retain more edge information in the fused image.
The schematic flowchart of the detail layer fusion method is shown in Fig. 3. The steps of the detail layer fusion algorithm are as follows: 1. Decompose the detail layer sub-images Z_i into i × j blocks; the l-th block of each decomposed sub-image is represented as Z_1l and Z_2l. 2. Calculate the SF of each l-th block as given below, where RF and CF are the row and column frequencies, respectively. 3. These blocks are fused using the SF-based fusion rule, given below:
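The SF computation and selection rule above can be sketched as follows, using the standard definition SF = √(RF² + CF²) with RF and CF the root-mean-square horizontal and vertical first differences (the usual form of the cited metric 34).

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): RF/CF are the RMS first differences
    along rows (horizontal) and columns (vertical)."""
    b = np.asarray(block, dtype=np.float64)
    rf2 = np.mean(np.diff(b, axis=1) ** 2)  # row frequency (squared)
    cf2 = np.mean(np.diff(b, axis=0) ** 2)  # column frequency (squared)
    return float(np.sqrt(rf2 + cf2))

def sf_fuse_block(z1, z2):
    """SF-based rule: keep whichever detail-layer block has the higher
    spatial frequency, i.e. the stronger edge activity."""
    return z1 if spatial_frequency(z1) >= spatial_frequency(z2) else z2
```

A flat block has SF 0, while an alternating 0/1 block has SF 1, so the rule always prefers the block carrying edge detail.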

Fused image reconstruction
Lastly, the final enhanced fused image U is obtained by summing the fused base (Y_B) and detail (Z_D) images.

Color medical image fusion
In addition to grayscale image fusion, color images (PET and SPECT) were fused with MRI images, which plays an essential role in clinical diagnosis and medical treatment. During the fusion process, PET and SPECT images are treated as RGB images. The most common method for fusing MRI with PET/SPECT images is to first split the PET/SPECT image into R, G, and B channels and then combine each with the MRI image. However, this approach introduces color distortions and complicates the fusion process. Therefore, PET/SPECT images were converted into the YUV color space, a highly efficient way of obtaining more complementary information for better diagnosis. In this article, color medical image fusion proceeds as follows: First, the color image is transformed into the YUV color space, comprising a luminance component (Y) and chrominance components (U and V). Second, the MRI image is fused with the Y component through the proposed grayscale fusion method to obtain the fused Y component, as shown in Fig. 4. Lastly, the fused Y and the unchanged U and V components are transformed back to RGB to obtain the enhanced fused color image.
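The luminance-only fusion scheme above can be sketched as below. The BT.601-style conversion matrices are an assumption, since the paper does not specify which YUV variant it uses; any invertible luma/chroma transform would work the same way.

```python
import numpy as np

# BT.601-style RGB <-> YUV matrices (an assumed choice; the paper does
# not state which YUV variant it uses).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def fuse_color(mri_gray, pet_rgb, fuse_gray):
    """Convert the PET/SPECT RGB image to YUV, fuse only the luminance
    (Y) channel with the MRI image via the supplied grayscale fusion
    routine, and convert back with chrominance (U, V) untouched."""
    yuv = np.asarray(pet_rgb, dtype=np.float64) @ RGB2YUV.T
    yuv[..., 0] = fuse_gray(np.asarray(mri_gray, dtype=np.float64),
                            yuv[..., 0])
    return yuv @ YUV2RGB.T
```

Because only Y is modified, the chrominance of the functional image survives the round trip exactly, which is why this route avoids the color distortions of per-channel RGB fusion.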

Experimental analysis
The experiments were executed on a personal PC with an i3 processor using MATLAB R2016a. The proposed method was evaluated on 16 medical image datasets, shown in Figs. 5, 6, 7, 8, and 9, which are publicly available 28,29. These were obtained from different medical imaging modality pairs, namely T1-MR and T2-MR, MRI and CT, T1-weighted MR and MRA, MRI and PET, and MR-T2 and SPECT images. The experimental analysis was carried out to confirm the efficiency and superiority of the proposed method. Furthermore, we compared the proposed method's results with existing methods; Figs. 10, 11, 12, 13, and 14 show how the proposed method effectively identifies the tumor regions.

Objective measures
Objective metrics are used to assess the quality of the fused image. In this article, seven objective metrics were used, namely mean (M) 34, standard deviation (SD) 34, average gradient (AG) 35, spatial frequency (SF) 34, modified spatial frequency (MSF) 36, mutual information (MI) 37, and fusion symmetry (FS) 38. A higher value of each quality metric indicates a more effective fused image. The quality measures are as follows: 1. Mean (M) represents the average pixel intensity, which reflects the brightness of the fused image; here U_ab denotes the intensity of the fused output image at pixel (a, b). 2. Standard deviation (SD) represents the degree of variation of gray levels in the fused image; a higher SD indicates no artifacts and good overall contrast. 3. Average gradient (AG) measures the directional changes of the pixels in the fused image. 4. Spatial frequency (SF) is computed as defined in the detail layer fusion section 34. 5. Modified spatial frequency (MSF) follows its cited formulation 36. 6. Total mutual information is the sum of MI_UX1, the mutual information between source image X1 and fused image U, and MI_UX2, the mutual information between source image X2 and fused image U. 7. Fusion symmetry (FS) represents the symmetry of the fused image with respect to the source images. An FS value near 2 indicates that both source images contribute equally to the fused image, so the fused image has better quality.
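A few of the metrics above can be sketched directly. M and SD are the ordinary mean and standard deviation; AG below uses one common root-mean-square gradient formulation (the paper's exact equation was lost in extraction); MI is estimated from the joint gray-level histogram of a source image and the fused image.

```python
import numpy as np

def mean_metric(u):
    """M: average pixel intensity (overall brightness)."""
    return float(np.mean(u))

def std_metric(u):
    """SD: spread of gray levels around the mean (overall contrast)."""
    return float(np.std(u))

def avg_gradient(u):
    """AG: mean magnitude of local intensity changes (one common
    formulation, assumed here)."""
    u = np.asarray(u, dtype=np.float64)
    gx = np.diff(u, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(u, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def mutual_information(x, u, bins=32):
    """MI between a source image x and the fused image u, estimated
    from their joint gray-level histogram."""
    h, _, _ = np.histogram2d(np.ravel(x), np.ravel(u), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    pu = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ pu)[nz])))
```

The fused-image score MI = MI_UX1 + MI_UX2 then follows by calling `mutual_information` once per source image and summing.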

Fusion results and analysis
Various existing methods are included in this experiment, such as PCA, DWT, the average (AVG) method, IFS 23, SIFS 24, GFF 39, CSE 40, and JBF-LGE 41. In this paper, all comparative methods were applied under the criteria and settings given in their publications. The proposed method shows superiority and efficiency in all aspects.
information but fails to highlight the minute details. The CSE and JBF-LGE-based methods produce a fused image in which misplacement of primary details and distortions may be observed. Finally, the proposed fusion method is superior to the other methods and has greater luminance. Database-2 contains MRI and CT image fusion. CT scans consist of a sequence of X-ray images collected from various angles that show hard tissue, such as bone structure.
On the other hand, an MRI scan employs magnetic fields and radio waves to display the details of internal organs and delicate tissues. As a result, combining these two images produces a single fused image with more complementary information and salient features. The proposed and existing results can be clearly observed in Fig. 11. For database-2, the PCA method does not reproduce the CT details. The DWT and AVG methods are not good in terms of contrast. It should be emphasized that IFS generates undesired artifacts that distort local characteristics. SIFS renders the soft tissues incorrectly. The GFF method produces undesirable artifacts and misses complementary information. Some distortion is visible in the fusion results of the CSE and JBF-LGE methods. However, the proposed fusion method outperforms the other state-of-the-art methods in terms of contrast.
Database-3 comprises the fusion of T1-weighted MR and MRA images, with some diseases appearing as a white structure, as observed in Fig. 12. The T1-weighted MR image provides delicate tissue detail but fails to identify the abnormalities in the image. MRA images can detect abnormalities easily, but not soft tissue information.
Therefore, combining the T1-MR and MRA images provides more reliable information in the integrated image, which can aid medical diagnosis. First, the PCA result shows a loss of white-structure information. DWT and AVG provide blurred and degraded fused images. Moreover, the textural changes are not reproduced by the IFS and SIFS-based fusion results. The soft tissue information is also not visible in the GFF-based fused image. In addition, the anatomical information and structural details are misaligned in the CSE and JBF-LGE fusion results. However, the proposed fusion method is superior to the other methods in terms of appearance and clarity.
Database-4 addresses MRI and PET fusion, as shown in Fig. 13. In this paper, the MRI image is perfectly registered with the corresponding PET image. The MRI image shows anatomical brain tissue information but no functional information, whereas the PET image shows the functionality of the brain but has limited spatial resolution. Fusing these two images yields more functional information without distortions. The proposed fused images carry more quantitative complementary information, show the size of the tumor, and are more visible than those of other existing methods, which can help doctors diagnose diseases earlier, as observed in Fig. 13. The fused images of PCA have low anatomical information, while the DWT and AVG methods produce distorted fused images with low contrast.
It is noted that the IFS and SIFS methods produced fused images with color distortions. Sufficient information is not present in the GFF-based fusion results. In addition, the CSE results display the structural details poorly. The JBF-LGE method provides good results. Nevertheless, the proposed method obtained better quality and enhanced features in the fused image than the other methods.
Finally, the fifth database comprises MR-T2 and SPECT images obtained from the Whole Brain database of Harvard Medical School, exhibited in Fig. 14 for the assessment of different fusion methods. The MRI image provides anatomical information, whereas the SPECT image provides a functional understanding of the human brain. To combine anatomical and functional data into a single resultant image, the source images must be fused. Compared to the other existing methods, the proposed method clearly enhances the tumor regions and obtains good complementary and redundant information from the source images. Furthermore, the proposed results reveal better fusion performance in terms of contrast, luminance, and clarity.

Quantitative assessment
Visual evaluation alone cannot determine the quality of the integrated image, so it is essential to compute the fused image's objective values. The objective evaluation of eight existing fusion approaches and the proposed method on 16 pairs of medical datasets is shown in Tables 3, 4, 5, 6, and 7. In each table, the highest value of each quality parameter is shown in bold. The proposed fusion method provides better results and superiority over the other methods. However, some quality metrics have low values in certain databases: in database-1, the MI value of pair-3 (see Table 3); in database-2, the MI value of pair-2 (see Table 4); in database-4, the MI and FS values of pair-4 and the MI value of pair-2 (see Table 6); and in database-5, the MI and FS values of pairs 1 and 3 (see Table 7). The remaining pairs achieve high values with the proposed method, as observed in Tables 3, 4, 5, 6, and 7. The graphical representation of the fusion methods with quality metric values for pair-1 of each database is shown in Fig. 15.
The proposed fusion method achieved remarkable performance, especially for pairs 1 and 4 of database-1, pair-1 of database-2, pair-1 of database-3, pair-1 of database-4, and pair-5 of database-5, extracting the required details from the source images to produce a fused image with good contrast. Ultimately, the proposed fusion method removes vagueness and uncertainty in the fused image and gives better-quality results than the other fusion methods.

Conclusion
This article presents a novel approach to medical image fusion using a Pythagorean fuzzy set for better clinical diagnosis. The core of the proposed method comprises four stages. In the first stage, the source images are decomposed into base and detail layer images using a two-scale decomposition. Then, the base layers of the two source images are fused using a Pythagorean fuzzy set to extract more structural information. After that, SF is employed to fuse the detail layers. Finally, the fused base and detail images are combined to obtain an enhanced fused image. Five medical databases were used for the fusion process, and the experimental results prove the method's superiority both visually and quantitatively compared to other methods, as shown in Figs. 10, 11, 12, 13, and 14 and Tables 3, 4, 5, 6, and 7. In Fig. 10, pair-4 of database-1 shows that the edges and overall clarity of the proposed fused image are good compared with the other methods, with correspondingly high quantitative values in Table 3. Moreover, the tumor region is clearly enhanced in Fig. 13 and has quantitatively high values for pair-1 (93.8402 for SD, 14.5608 for AG, 52.5326 for SF, and 1.9955 for FS) in Table 6. In summary, the proposed method provides a good-contrast fused image without artifacts compared to other methods and removes uncertainties. The method is suitable for grayscale and color medical images. In the future, this work will be extended with novel fuzzy sets and fusion rules for better diagnosis.

Figure 1. Schematic flowchart of the proposed grayscale image fusion.

Figure 2. Schematic flowchart of the base layer fusion.

Figure 3. Schematic flowchart of the detailed layer fusion.

Figure 4. Schematic flowchart of the color medical image fusion.

Table 3. Fusion results of various methods with performance metric values for Database-1. Significant values are in bold.

Table 4. Fusion results of various methods with performance metric values for Database-2. Significant values are in bold.

Table 5. Fusion results of various methods with performance metric values for Database-3. Significant values are in bold.

Table 6. Fusion results of various methods with performance metric values for Database-4. Significant values are in bold.

Table 7. Fusion results of various methods with performance metric values for Database-5. Significant values are in bold.