Underwater image quality assessment method based on color space multi-feature fusion

The complex and challenging underwater environment leads to degradation in underwater images. Measuring the quality of underwater images is a significant step for subsequent image processing. Existing Image Quality Assessment (IQA) methods do not fully consider the characteristics of degradation in underwater images, which limits their performance in underwater image assessment. To address this problem, an Underwater IQA (UIQA) method based on color space multi-feature fusion is proposed that focuses on underwater images. The proposed method converts underwater images from the RGB color space to the CIELab color space, which correlates more strongly with human subjective perception of underwater visual quality. The proposed method extracts histogram features, morphological features, and moment statistics from the luminance and color components and concatenates them to obtain fusion features that better quantify the degradation in underwater image quality. After feature extraction, Support Vector Regression (SVR) is employed to learn the relationship between the fusion features and image quality scores and to obtain the quality prediction model. Experimental results on the SAUD and UIED datasets show that the proposed method performs well in underwater image quality assessment. Performance comparisons on the LIVE, TID2013, LIVEMD, LIVEC, and SIQAD datasets demonstrate the applicability of the proposed method.

The underwater world is abundant in resources, and images obtained through underwater observation equipment are crucial for exploring the underwater world and exploiting its resources. In practical underwater applications, high-quality underwater images can provide sufficient and accurate information to facilitate the realization of underwater tasks. However, the underwater environment is complex and variable, and factors such as light scattering, absorption, and environmental noise can affect the quality of underwater images, leading to issues such as color shift, low contrast, and low definition 1 . When underwater images suffer from quality degradation, their application value may be reduced 2 , which in turn can impact their performance in practical underwater application tasks. In recent years, numerous effective methods have been proposed for enhancing underwater images [3][4][5][6][7][8][9] . However, in the field of underwater image enhancement, there is currently no effective, robust, and widely accepted method for assessing the quality of underwater images. Therefore, investigating an effective and robust UIQA method is of significant research importance. Such an assessment method can help measure the application value of underwater images and provide a credible and effective reference for the selection of original underwater images and underwater image enhancement methods in practical applications.
There are currently two categories of IQA methods: subjective IQA methods and objective IQA methods 10 . Subjective IQA methods rely on the subjective perception of the observer, which is time-consuming, costly, unstable, and dependent on expert knowledge 11 . As such, subjective methods are only suitable for small-scale image datasets, which hinders the efficient and large-scale quality assessment of underwater images. Therefore, objective IQA methods, which utilize computer-designed algorithms to automatically and accurately assess image quality, are more suitable for evaluating the quality of underwater images.
IQA methods can be classified according to the amount of reference information required into Full-Reference Image Quality Assessment (FR-IQA), Reduced-Reference Image Quality Assessment (RR-IQA), and No-Reference Image Quality Assessment (NR-IQA) 12,13 . The FR-IQA method requires all the reference information of the original image and compares the difference between the original image and the distorted image to obtain the quality of the distorted image. The RR-IQA method requires only partial feature information of the original image.

To address the issues with existing methods, this paper proposes a No-Reference Underwater Image Quality Assessment based on Multi-feature Fusion in Color Space (NMFC). The NMFC method utilizes a color space transformation to extract luminance histogram, Local Binary Pattern (LBP), moment statistics, and morphological features in the CIELab color space, representing underwater image information through feature fusion. Support Vector Regression (SVR) is subsequently used to learn the relationship between the fused features and underwater image quality, enabling the establishment of a model and the realization of accurate UIQA. Experimental results obtained on the SAUD 32 and UIED 29 underwater image quality assessment datasets demonstrate that the proposed method effectively and accurately assesses underwater image quality and exhibits high consistency with human subjective perception. The generality of the method is further demonstrated on several traditional image datasets, including LIVE 33 , TID2013 34 , LIVEMD 35 , LIVEC 36 and SIQAD 37 .

Methods
The proposed method, illustrated in Fig. 1, begins by converting underwater images from the RGB to the CIELab color space. In this color space, six quantified underwater image distortion features (detailed in Table 1) are extracted and fused to form a feature vector. Finally, SVR is utilized to learn the mapping relationship between the feature vectors and the quality of underwater images, resulting in a model for UIQA.

Color space conversion
The CIELab color space 38 is specifically designed to match human color perception and achieve perceptual uniformity. Consequently, to extract features more effectively, the proposed method converts the RGB underwater images to the CIELab color space. Specifically, the images are first transformed from the RGB color space to the XYZ color space and subsequently converted to the CIELab color space. Eq. (1) is used to accomplish the RGB-to-XYZ conversion, and the equations for converting the XYZ color space to the CIELab color space follow, where X_n, Y_n, Z_n are the CIE XYZ tristimulus values of the reference white point [0.9504, 1.0000, 1.0888], simulating noon sunlight with an associated color temperature of 6504 K.
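The two-stage conversion described above can be sketched per pixel as follows. This is a minimal illustration, assuming linear RGB input in [0, 1] and the sRGB/D65 primaries; the matrix in the paper's Eq. (1) may use slightly different constants, and the white point is the one quoted in the text.

```python
def _f(t):
    # CIELab nonlinearity: cube root above the linear-segment threshold.
    d = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0

def rgb_to_lab(r, g, b):
    """Convert one linear RGB pixel (components in [0, 1]) to CIELab."""
    # Stage 1: RGB -> XYZ (assumed sRGB/D65 matrix).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # Stage 2: normalize by the reference white point quoted in the text.
    xn, yn, zn = 0.9504, 1.0000, 1.0888
    fx, fy, fz = _f(x / xn), _f(y / yn), _f(z / zn)
    # XYZ -> CIELab.
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b_lab = 200.0 * (fy - fz)
    return L, a, b_lab
```

As a sanity check, a pure white pixel maps to approximately L = 100, a = 0, b = 0, i.e. the reference white of the space.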

L component feature extraction
Underwater images are impacted by the absorption and scattering of light by water, often resulting in insufficient brightness, low visibility, and low contrast, thereby degrading underwater image quality. The luminance channel (the L component) plays a crucial role in this regard. Therefore, features must be extracted that can effectively characterize changes in underwater image quality, with particular emphasis on the L component.

L component luminance histograms
The luminance histogram features are calculated by Eq. (4). The process is: (1) divide the range of image luminance values into k bins; (2) traverse the image pixels and count the number of pixels falling in each bin; (3) divide the count in each bin by the total number of pixels in the image to obtain the probability of each bin. The result is the luminance histogram feature.
The proposed method applies Eq. (4) to the luminance component map to obtain the corresponding histogram of the underwater image, which is used as the histogram feature f_4.
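Steps (1)–(3) above can be sketched in a few lines. The bin count k and the luminance range are placeholders here; the paper does not state its choice of k in this extract.

```python
def luminance_histogram(pixels, k=10, lo=0.0, hi=255.0):
    """Normalized k-bin histogram of luminance values, as in steps (1)-(3)."""
    counts = [0] * k
    width = (hi - lo) / k
    for v in pixels:
        # Map the value to a bin; clamp the top edge into the last bin.
        idx = min(int((v - lo) / width), k - 1)
        counts[idx] += 1
    total = float(len(pixels))
    return [c / total for c in counts]  # probabilities summing to 1
```

Because each count is divided by the total number of pixels, the returned probabilities always sum to 1, which makes histograms comparable across images of different sizes.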

L component morphological parameters
Previous studies have suggested that performing nonlinear operations on images helps eliminate the correlation between pixels 39 . Hence, to extract features from an underwater image I, this study utilizes the Mean Subtracted Contrast Normalized (MSCN) transform (Eq. (5)) and extracts statistical features based on the MSCN coefficients 16 . The extraction process is performed as follows.
For an underwater image I of size M × N, the MSCN coefficients are defined as

\hat{I}(i, j) = \frac{I(i, j) - \mu(i, j)}{\sigma(i, j) + C},

where \mu(i, j) is the local mean after Gaussian filtering, \sigma(i, j) is the local standard deviation, and C is a constant (set to 1) that prevents the denominator from being 0. The variables \mu(i, j) and \sigma(i, j) are defined as

\mu(i, j) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l}\, I(i+k, j+l), \qquad \sigma(i, j) = \sqrt{\sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l}\, \left[ I(i+k, j+l) - \mu(i, j) \right]^2},

where w = \{w_{k,l}\} is a two-dimensional circularly symmetric Gaussian weighting function and K = L = 3. The statistical features are obtained by fitting the MSCN coefficients with the Asymmetric Generalized Gaussian Distribution (AGGD) model shown in Eq. (9), where \Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\, dt is the gamma function and t ranges over [0, +\infty).
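The MSCN computation can be sketched with numpy as below. The 7 × 7 window matches K = L = 3 from the text; the Gaussian sigma and the edge-replication padding are assumptions, since the paper does not specify them in this extract. The local standard deviation is computed via the identity E[I²] − E[I]², which with the same weights equals the subtract-then-square form in the equation above.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=7.0 / 6.0):
    """Circularly symmetric 2-D Gaussian weights, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def mscn(image, C=1.0):
    """Mean Subtracted Contrast Normalized coefficients of a 2-D array."""
    img = np.asarray(image, dtype=np.float64)
    w = gaussian_kernel()
    k = w.shape[0] // 2
    pad = np.pad(img, k, mode="edge")  # edge padding is an assumption
    mu = np.zeros_like(img)
    mu2 = np.zeros_like(img)
    # Explicit correlation with the Gaussian window (avoids a scipy dependency).
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            s = pad[k + di:k + di + img.shape[0], k + dj:k + dj + img.shape[1]]
            mu += w[di + k, dj + k] * s
            mu2 += w[di + k, dj + k] * s ** 2
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 0.0))
    return (img - mu) / (sigma + C)
```

On a constant image the numerator vanishes everywhere, so the MSCN map is (numerically) zero, which is a convenient correctness check.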
The MSCN coefficients are fitted by the AGGD using the moment matching method 40 to obtain the shape parameter α, the left and right scale parameters σ_l^2 and σ_r^2, and the mean η. The skewness S and the kurtosis K 41 of the MSCN image are calculated by Eqs. (14) and (15). The shape parameter α, the left and right scale parameters σ_l^2 and σ_r^2, the mean η, the skewness S, and the kurtosis K are taken as the morphological features F_S^∀ = (α^∀, η^∀, σ_l^∀, σ_r^∀, S^∀, K^∀), where ∀ denotes any one image.
Downsampling is applied to the luminance component map: for the image I of size M × N, the proposed method downsamples it to obtain a map Ĩ with a resolution of (M/2) × (N/2). The morphological features of I and of the downsampled map Ĩ are combined as the feature f_2 = [F_S^I, F_S^Ĩ].

L component moment statistics
The moment statistics 42 of the luminance component map, including the first-order moment (mean μ), second-order moment (variance σ), third-order moment (skewness s), and fourth-order moment (kurtosis k), are calculated using Eqs. (16)–(19).
where p_{ij} represents the pixel at position (i, j) of the image and N represents the number of pixels in the image.
The four moment statistics are used as the feature f_3, that is, f_3 = [μ, σ, s, k].
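The four moment statistics can be computed directly from their definitions; a minimal sketch (population form, i.e. dividing by N rather than N − 1, with skewness and kurtosis as standardized central moments):

```python
def moment_statistics(values):
    """First four moment statistics: mean, variance, skewness, kurtosis."""
    n = float(len(values))
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    sd = var ** 0.5
    # Standardized third and fourth central moments; zero for a flat signal.
    skew = sum(((v - mu) / sd) ** 3 for v in values) / n if sd else 0.0
    kurt = sum(((v - mu) / sd) ** 4 for v in values) / n if sd else 0.0
    return mu, var, skew, kurt
```

For the symmetric sample [1, 2, 3, 4, 5] this yields mean 3, variance 2, skewness 0, and kurtosis 1.7, which is a useful hand-checkable case.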

L component LBP histograms
The proposed method uses the rotation-invariant uniform LBP operator 43 on the image I and its downsampled map Ĩ. The rotation-invariant uniform LBP model is defined as follows, where p_{ij} represents the pixel value of the center point (i, j), p_n represents the nth pixel value of the neighborhood, P is the number of neighbors, R is the radius of the neighborhood, and U_{ij} is the uniformity measure of the pixel point (i, j), counting the transitions among its neighboring pixels. After processing the two maps with the rotation-invariant uniform LBP, the LBP histograms are built from the two LBP maps by Eqs. (23) and (24). The obtained LBP histogram features effectively perceive underwater image quality and are therefore used as the luminance component feature f_1.
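The operator above can be sketched for the common P = 8, R = 1 case. A pattern is "uniform" when its circular bit string has at most two 0/1 transitions; uniform patterns are coded by their number of ones (0..P) and all others share a single code P + 1, giving a P + 2 bin histogram. The tie rule (neighbor ≥ center counts as 1) is an assumption.

```python
def lbp_riu2_histogram(img):
    """Rotation-invariant uniform LBP (P=8, R=1) histogram of a 2-D list."""
    P = 8
    # Clockwise 8-neighborhood offsets at radius 1.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    counts = [0] * (P + 2)  # codes 0..P for uniform patterns, P+1 otherwise
    h, w = len(img), len(img[0])
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            bits = [1 if img[i + di][j + dj] >= img[i][j] else 0
                    for di, dj in offs]
            # U: number of 0/1 transitions around the circular pattern.
            u = sum(bits[n] != bits[(n + 1) % P] for n in range(P))
            code = sum(bits) if u <= 2 else P + 1
            counts[code] += 1
    total = float(sum(counts))
    return [c / total for c in counts]
```

On a constant image every neighbor ties with the center, so every pixel gets the all-ones uniform pattern (code 8) and the histogram concentrates in that single bin.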

AB component feature extraction
In order to quantify the degradation of underwater image quality resulting from color changes, our proposed method extracts features from the chromatic channels.

Chromatic feature maps
Inspired by the NUIQ method 32 , which builds chromatic descriptor maps from two chromatic components (O_1 and O_2) to extract the chromatic features of underwater images, the proposed method constructs five color feature maps from the A and B chromatic components, including the AB difference map, the saturation map, the AB angle map, and the AB derivation angle map. These maps represent the color perception information of underwater images from different perspectives and effectively describe the features of the chromatic channels in underwater images.
The construction process of the chromatic feature maps is defined in Eq. (25), where ∇_x and ∇_y are the gradients along the horizontal and vertical directions and c is a small constant. The five feature maps a_1–a_5 are illustrated in Fig. 3. To confirm the effectiveness of the proposed chromatic feature maps, three underwater images of varying quality (shown in Fig. 4) were selected. The corresponding five chromatic feature maps were fitted with a normal distribution function, and the resulting curves are shown in Fig. 5.
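Since Eq. (25) itself is not reproduced in this extract, the sketch below uses assumed per-pixel definitions chosen only to match the map names in the text (difference, saturation, and angle of the A/B components); the paper's actual formulas may differ.

```python
import math

def chromatic_feature_maps(A, B, c=1e-6):
    """Illustrative chromatic descriptors from the a/b components.

    Assumed definitions (NOT the paper's Eq. (25)):
    difference A-B, saturation sqrt(A^2+B^2), angle atan2(B, A+c).
    """
    h, w = len(A), len(A[0])
    diff = [[A[i][j] - B[i][j] for j in range(w)] for i in range(h)]
    sat = [[math.hypot(A[i][j], B[i][j]) for j in range(w)] for i in range(h)]
    ang = [[math.atan2(B[i][j], A[i][j] + c) for j in range(w)] for i in range(h)]
    return diff, sat, ang
```

The small constant c plays the same stabilizing role as in the text, guarding the angle computation when both chromatic components are near zero.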

AB component morphological parameters
In Fig. 5, each subfigure displays the fitted curves of the color feature maps corresponding to three different quality levels of underwater images. Each fitted curve has a distinct shape, highlighting the variations in the color component morphological features. This confirms that the five chromatic feature maps constructed in this study successfully reflect variations in underwater image quality. Consequently, it is reasonable to use the morphological parameters of the fitted curves of these chromatic feature maps as morphological features for the analysis of underwater image quality.
AGGD fitting (Eq. (9)) is applied to the five chromatic feature maps to obtain the shape parameter α, the left and right scale parameters σ_l^2 and σ_r^2, and the mean of the distribution η. For each of the five chromatic feature maps, the skewness S and kurtosis K are calculated according to Eqs. (14) and (15). The parameters corresponding to the five feature maps are combined as the feature f_6.

AB component moment statistics
Similar to the luminance channel, this method also calculates moment statistics in the chromatic channels. Eqs. (16)–(19) are used to calculate the first-order moment (mean μ), second-order moment (variance σ), third-order moment (skewness s), and fourth-order moment (kurtosis k) on the A-component and B-component maps of the chromatic channels, yielding a moment vector that serves as the chromatic channel moment statistic f_5.

Building image quality assessment model
Feature fusion makes the chromatic and luminance features of underwater images complementary and describes quality changes more effectively. After the two types of features are extracted, feature fusion is performed by concatenating the multiple features. The remaining task is to build the IQA model. Considering the impressive generalization ability of SVR and its widespread use in IQA methods 14,16,17 , it is well suited to establishing relationships between image quality-related features and image scores. Therefore, in this study, the LIBSVM toolkit 44 was utilized to implement SVR with a Radial Basis Function (RBF) kernel. The mapping between the fused features and the subjective quality scores of underwater images was established using SVR to obtain a UIQA model. To evaluate the quality of an underwater image, features are extracted from the image to be measured according to the method outlined in this paper, fused, and input into the UIQA model to obtain the quality score.
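The fuse-then-regress pipeline can be sketched as below. This is a stand-in, not the paper's setup: scikit-learn's SVR replaces the LIBSVM toolkit (both implement RBF-kernel epsilon-SVR), the feature-group dimensions are invented, and the data are synthetic. Only the structure (concatenation of f_1..f_6, then RBF-SVR regression onto scores) mirrors the text.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical dimensions for the six feature groups f1..f6, 40 images.
features = [rng.normal(size=(40, d)) for d in (10, 12, 4, 10, 8, 24)]
X = np.concatenate(features, axis=1)      # feature fusion by concatenation
y = X[:, 0] + 0.1 * rng.normal(size=40)   # synthetic "subjective scores"

# RBF-kernel SVR, as in the text (hyperparameters here are illustrative).
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
pred = model.predict(X)                   # quality scores for each image
```

In the actual method, X would hold the fused features of the training images and y their subjective scores; prediction on a new image is a single `model.predict` call on its fused feature vector.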

Dataset and assessment metric
To analyze the performance of the proposed method, comparative experiments were conducted on publicly available image datasets: SAUD 32 , UIED 29 , LIVE 33 , LIVEMD 35 , LIVEC 36 , TID2013 34 , and SIQAD 37 . Each dataset provides subjective IQA results for each image in the form of a mean opinion score (MOS), difference mean opinion score (DMOS), or Bradley-Terry score (B-T score) 45 .
The UIED dataset 29 includes 100 images of real underwater scenes, comprising 16 coral images, 26 marine life images, 14 seabed rock images, 12 sculpture images, 10 wreck images, and 22 diver images, with resolutions ranging from 183 × 275 to 1350 × 1800. Typical images in the database are shown in Fig. 6. Ten representative underwater image enhancement algorithms were used to process the original images, and the resulting dataset comprises 1000 images. The SAUD database 32 is similar to the UIED database in that it is also an underwater image quality evaluation dataset containing 100 images of real underwater scenes; however, different enhancement methods were selected for processing the underwater images. The LIVE 33 database contains 29 distortion-free high-resolution images as reference images and 779 corresponding distorted images covering five distortion types; the distortion details of LIVE and the remaining traditional datasets are given at the end of this section.

To evaluate the performance of various objective IQA methods, the subjective assessment scores of images must be compared with the IQA results obtained by these methods. If an objective IQA method demonstrates high agreement with the subjective image assessment scores, the method is more consistent with human subjective perception and has better performance. This paper utilizes three widely adopted metrics, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the Pearson linear correlation coefficient (PLCC), to measure the performance of IQA methods. The SROCC and KROCC values measure the monotonicity between a method's assessment results and the subjective assessment scores, while the PLCC value measures the accuracy of the method. The closer the absolute values of these three metrics are to 1, the higher the agreement between the assessment results of the corresponding objective quality assessment method and the subjective assessment scores, indicating better performance by the method.
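The three metrics can be sketched from their definitions: PLCC is the Pearson correlation, SROCC is the Pearson correlation of the ranks, and KROCC compares concordant and discordant pairs (the tie-free tau-a form is shown for brevity).

```python
def pearson(x, y):
    """PLCC: Pearson linear correlation coefficient."""
    n = float(len(x))
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def _ranks(v):
    # Average ranks (handles ties), 1-based.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """SROCC: Pearson correlation computed on the ranks."""
    return pearson(_ranks(x), _ranks(y))

def kendall(x, y):
    """KROCC: (concordant - discordant) pairs over all pairs (tau-a)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (x[i] - x[j]) * (y[i] - y[j])
            s += (a > 0) - (a < 0)
    return 2.0 * s / (n * (n - 1))
```

A monotone but nonlinear relationship illustrates the distinction drawn in the text: SROCC and KROCC remain exactly 1 while PLCC drops below 1.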

Performance analysis of the method
To evaluate the effectiveness of the proposed NMFC method, comparisons were conducted between our method and mainstream IQA methods on two datasets, the SAUD dataset and the UIED dataset. Seven existing NR-IQA methods were selected for comparison, namely BIQI 14 , SSEQ 16 , BRISQUE 18 , NIQE 19 , ILNIQE 20 , BIQME 17 , and NUIQ 32 , together with classical UIQA methods, namely UIQM 26 , UCIQE 27 , FDUM 28 , and UIF 29 . Among these methods, NIQE and ILNIQE do not require subjective scores and are thus regarded as fully blind IQA methods, which therefore do not require retraining. The remaining methods were retrained on the SAUD and UIED datasets. To ensure the fairness of the experiments, the authors' published source code was used, and the score assessment models were trained using SVR in all experiments.
For dataset division, the images were randomly split into a training set and a test set containing 80% and 20% of the images, respectively, with no duplicate images between the two parts. The score assessment model was trained using the features extracted from the training images along with their corresponding subjective assessment scores. The model was then used to evaluate the images in the test set and to calculate the assessment metrics. To ensure the accuracy and reliability of the experimental results, this random split, training, and testing process was repeated 50 times for each method. After the 50 rounds, the average value of each assessment metric was calculated and taken as the overall assessment result of the corresponding IQA method.
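The evaluation protocol above can be sketched as a small harness. The `metric_fn` callback is a hypothetical placeholder standing in for "train the SVR on the training indices, test it, and return SROCC/KROCC/PLCC".

```python
import random

def repeated_evaluation(n_images, metric_fn, rounds=50, train_frac=0.8, seed=0):
    """Average a metric over repeated random 80/20 train-test splits.

    metric_fn(train_idx, test_idx) -> float stands in for training the
    quality model on the training indices and scoring it on the test set.
    """
    rng = random.Random(seed)
    idx = list(range(n_images))
    scores = []
    for _ in range(rounds):
        rng.shuffle(idx)
        cut = int(train_frac * n_images)
        train, test = idx[:cut], idx[cut:]
        assert not set(train) & set(test)  # the two parts never share images
        scores.append(metric_fn(train, test))
    return sum(scores) / len(scores)
```

Shuffling the index list and slicing it guarantees disjoint train/test parts in every round, matching the no-duplicate requirement stated in the text.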
Table 3 gives the overall performance comparison of the proposed method with other methods on the SAUD and UIED datasets; the best and second-best values for each assessment metric are highlighted. From Table 3, the following conclusions can be drawn. First, the method proposed in this paper outperforms the other IQA methods on the underwater image datasets: the SROCC, KROCC, and PLCC values of its overall assessment results on the SAUD and UIED datasets are all optimal, while the NUIQ method achieves sub-optimal SROCC, KROCC, and PLCC values on both datasets. Compared with the sub-optimal method, the SROCC, KROCC, and PLCC values on the SAUD dataset are higher by 0.0463, 0.0443, and 0.0397, and on the UIED dataset by approximately 0.0215, 0.0188, and 0.0055. Second, the UCIQE, UIQM, and FDUM algorithms exhibit subpar performance on the SAUD and UIED databases, as evidenced by their low SROCC, KROCC, and PLCC values. It is worth noting that the UIF method underperforms specifically on the SAUD database; however, it ranks third in terms of SROCC (0.6117), KROCC (0.4428), and PLCC (0.631) on the UIED dataset. This discrepancy can be attributed to these methods' reliance on combining measurement component scores with weights to derive quality scores for underwater images. As described in the related work above, a limitation of such approaches lies in their dependence on weight distributions constrained by specific databases, where the coefficients differ across datasets. This is also one of the factors that motivated the design of our proposed method.
Lastly, the performance of some methods is poor, with SROCC and PLCC values below 0.5, a large gap compared with the proposed method. From the above conclusions, it can be seen that the IQA method proposed in this paper has clear advantages over other IQA methods on different underwater image datasets, and its consistency with human subjective perception is higher, allowing it to efficiently and accurately evaluate the quality of underwater images.

Intuitive comparison
To facilitate a more intuitive comparison between the proposed method and other methods, a series of underwater images with varying subjective quality scores is presented in Fig. 7, arranged in ascending order from left to right by score. Various quality assessment methods were employed to evaluate these images, and the resulting subjective quality scores, NMFC values, UCIQE values, UIQM values, and BIQI values are depicted in Fig. 8. Line charts are used to emphasize the effectiveness of the proposed method in comparison to other IQA methods.
From the figure, it can be observed that the traditional underwater image quality evaluation methods UCIQE and UIQM primarily focus on the color components of the image. As a result, images with rich colors (such as Fig. 7c) often receive higher scores, leading to the highest evaluation being assigned to an image of medium quality. On the other hand, the traditional in-air IQA method BIQI is designed for assessing image quality in natural scenes and does not take into account the factors that affect image quality in the underwater environment. Consequently, its performance is limited, and it is unable to differentiate the quality of underwater images. In contrast, the proposed NMFC method captures features that effectively describe the degradation of underwater image quality, considering both the luminance and chromatic components. The scores obtained through this method align better with the subjective quality scores, thus achieving superior performance.

Ablation experiments
To validate the choice of the CIELab color space for evaluating underwater image quality, ablation experiments were conducted on the SAUD and UIED databases using the CIELab color space, the opponent color space, and the YCbCr color space. The experimental results are presented in Table 4.
Based on the results in Table 4, the following conclusions can be drawn. First, on the SAUD database, the method utilizing the CIELab color space outperforms the sub-optimal method employing the opponent color space, with SROCC, KROCC, and PLCC values higher by 0.0051, 0.0037, and 0.0014, respectively. Second, on the UIED database, the method utilizing the CIELab color space achieves the highest SROCC and KROCC values; its PLCC value is slightly lower than that of the method using the opponent color space, but only by a marginal 0.0125. Moreover, the YCbCr color space yields unsatisfactory outcomes on both datasets, with lower SROCC, KROCC, and PLCC values than the other two color spaces. This disparity suggests that the YCbCr color space inadequately captures color information and therefore cannot depict image intricacies and color variations, diminishing the performance of underwater image quality evaluation. In summary, the findings highlight the superior efficacy of the CIELab color space for underwater image quality assessment compared with the alternative color spaces.
Furthermore, a series of ablation experiments was conducted on the SAUD dataset to demonstrate the effect of the multi-feature fusion strategy. The consistency between the subjective and objective assessments in these ablation experiments is reported in Table 5.
As indicated in Table 5, increasing the number of fused features leads to improved performance compared with using a single feature, particularly relative to the methods using only luminance or only chromatic features; the values of the three evaluation metrics, namely SROCC, KROCC, and PLCC, all improve as more features are fused.

Sensitivity to image type
To verify the generalizability of the proposed method, comparison experiments were conducted on the singly distorted synthetic datasets LIVE and TID2013, the screen content dataset SIQAD, and the multiply distorted synthetic dataset LIVEMD, comparing the performance of the proposed NMFC method with existing classical IQA methods such as SSIM 46 , PSNR 47 , FSIM 48 , and VIF 49 . The overall assessment results of the proposed method and the other six classical image quality methods on these datasets are given in Table 6, with the better two results bolded. According to Table 6, the proposed method achieves good results on these datasets. First, on the SIQAD dataset, the proposed method outperforms the other image quality evaluation methods, with SROCC and PLCC higher by almost 0.2 than the worst method. On the LIVE dataset, the method remains competitive: its SROCC, KROCC, and PLCC values are below those of the FSIM and VIF methods, but the differences are small, being only 0.027, 0.0422, and 0.02 lower than the optimal results on the three assessment metrics. On the TID2013 dataset, the performance of the method is worse only than the FSIM method, achieving sub-optimal results; however, the method in this paper is a no-reference method, and compared with FSIM, which requires reference information, it has a wider applicable scope. Meanwhile, although the evaluation performance of the FSIM and VIF methods surpasses that of the proposed method on certain datasets, their performance on the SIQAD dataset is suboptimal, leading to a significant performance gap with the proposed method. Finally, in the evaluation on the LIVEMD dataset, the proposed method demonstrates slightly lower performance compared to the VIF and BIQI methods. VIF is a full-reference algorithm that relies on more reference information, while the proposed method operates without any reference information and still achieves comparable results: the maximum difference in each evaluation metric between our method and VIF is less than 0.035. Additionally, the SROCC, KROCC, and PLCC values of our method are 0.0163, 0.0319, and 0.0196 lower than those of the no-reference method BIQI. However, considering that the proposed method is specifically designed for underwater images, a certain performance gap is still acceptable.
To demonstrate the generalizability of the proposed method, scatter plots were used to illustrate the correlation between the quality prediction results and the subjective scores across various image categories, including underwater images, synthetic distortion images, screen content images, and real images in the wild. Fig. 9 displays the scatter plots of the predicted scores versus the subjective scores of the proposed method on the SAUD, LIVE, SIQAD, and LIVEC datasets. From Fig. 9a-c, it can be observed that the quality predictions of the proposed method align closely with the subjective evaluation results for underwater images, synthetic distortion images, and screen content images, with most samples gathering compactly around the linear correlation line. This demonstrates that the predictions of the proposed method are highly consistent with human subjective perception and provides evidence that the method can adapt to the task of image quality assessment in these scenarios. However, on the real-scene dataset LIVEC, the SROCC, KROCC, and PLCC values are only 0.6435, 0.4599, and 0.6660, respectively, indicating relatively poor performance. Notably, Fig. 9d clearly illustrates that the points are relatively dispersed and do not cluster tightly around the linear correlation line, highlighting a substantial disparity between subjective and objective consistency. Therefore, further improvements are necessary to enhance the performance of the proposed method for image quality assessment in authentic settings. In summary, the proposed method is resilient and can basically adapt to various distortion types and changes in image characteristics across different scenarios, demonstrating a degree of generalizability.

Stability analysis
Performance comparisons were conducted on the SAUD and UIED datasets to verify the stability of the proposed NMFC method. Fig. 10 gives box plots of the assessment results of this method and different IQA methods on the SAUD and UIED datasets. The horizontal axis of the box plots represents the various IQA methods, while the vertical axis depicts the objective assessment scores obtained by each method for the given set of images. The shape of each box describes the distribution of the data and reflects the assessment behavior of the different IQA methods. As shown in Fig. 10, on both the SAUD and UIED datasets, the boxes corresponding to the proposed method on each assessment metric are smaller than those of the other IQA techniques. A smaller box indicates less fluctuation in the method's assessment of image quality. Moreover, the distance between the upper and lower edges of the boxes for this method is small, and there are no outliers, demonstrating that the proposed method is more stable in assessing the quality of underwater images than other IQA methods. Additionally, the median score of the proposed method is higher than that of the other methods, indicating better assessment accuracy. Overall, the proposed method exhibits high stability and effectively assesses the quality of underwater images.

Conclusion
In this paper, a UIQA method based on color space multi-feature fusion is proposed. The proposed method extracts morphological features, histogram features, moment statistics, and other features from the color-space-transformed image and performs multi-feature fusion to assess the quality of underwater images. Experimental results demonstrate that the proposed method achieves high accuracy and robustness for UIQA and is consistent with human subjective perception. Additionally, the method can satisfy the demands of both natural image and screen content IQA.
The method proposed in this paper relies on hand-crafted features and overlooks the impact of complex real distortions on image quality in real scenes, posing a challenge for evaluating image quality in realistic environments. Enhancing the performance of the method constitutes the primary objective for future research. To address this issue, our focus will be on semantically mining the feature maps proposed in this paper in combination with deep neural networks, aiming to achieve a more efficient and effective approach to image quality assessment.

Figure 1. The detailed procedure of the proposed method.
Figure 2b-d show the L-component, A-component, and B-component images obtained from the original image in Fig. 2a after the color space conversion.
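The RGB-to-CIELab conversion behind the component images in Fig. 2 can be implemented directly. The sketch below is a standard sRGB → XYZ → Lab pipeline under a D65 white point, written with NumPy for illustration; it is not the paper's own code, and libraries such as OpenCV or scikit-image provide equivalent conversions.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (H, W, 3), values in [0, 1], to CIELab.

    L lies in [0, 100]; a and b are roughly in [-128, 127].
    """
    rgb = np.asarray(rgb, dtype=np.float64)

    # 1. Undo the sRGB gamma to obtain linear RGB.
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)

    # 2. Linear RGB -> XYZ (sRGB primaries, D65 white point).
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ M.T

    # 3. Normalize by the D65 reference white and apply the Lab transfer function.
    white = np.array([0.95047, 1.0, 1.08883])
    t = xyz / white
    delta = 6.0 / 29.0
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)

    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Splitting the result along the last axis yields the L-, a-, and b-component images shown in Fig. 2b-d.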

Figure 2. Examples of underwater images and the corresponding component images. (a) Raw image, (b) L-component image, (c) A-component image, (d) B-component image.

Figure 3. Illustration of chromatic feature maps. (a) Pristine underwater image; (b-f) five chromatic feature maps of (a).

Figure 7. (a-e) A collection of underwater images, each accompanied by its subjective rating. The images are ranked in ascending order of subjective score.

Figure 8. Predicted quality values of different methods on the underwater images in Fig. 7. Q is the predicted quality score.

Table 1. Details of the features extracted by the proposed method.
The LIVE database contains 29 reference images and 779 distorted images covering five distortion types: (1) 175 JPEG2000 (JP2K) distorted images; (2) 169 JPEG Compression (JPEG) distorted images; (3) 145 Gaussian Blur (GB) distorted images; (4) 145 White Gaussian Noise (WN) distorted images; and (5) 145 Fast Fading error (FF) distorted images. The LIVE database provides a DMOS for each image in the range [0, 100], with higher scores representing poorer image quality. The TID2013 34 database contains 25 reference images and their corresponding distorted images of 24 distortion types, each with five distortion levels, for a total of 3000 images; MOS values in the range [0, 9] indicate perceptual quality. The LIVEMD 35 database is a mixed-distortion image database containing two types of multiple distortion: blur distortion mixed with JPEG compression distortion, and Gaussian blur distortion mixed with Gaussian white noise. Fifteen reference images were used to generate 450 distorted images across the two multiple-distortion scenarios, 225 of each type, and each image is given a DMOS in the range [0, 100] as its subjective score. LIVE Challenge 36 is an authentic IQA database containing 1162 images. Each image was captured by a different photographer using distinct camera equipment, encompassing a wide range of real-world scenes, and these images suffer from complex realistic distortions. The MOS value for each image, obtained through an online crowdsourcing platform, lies in the range [0, 100]. SIQAD 37 is a commonly used screenshot image database that contains 20 reference images and 980 corresponding distorted images, with seven distortion levels for each distortion type. The database covers seven distortion types: GN, GB, Motion Blurring (MB), Contrast Change (CC), JPEG, JP2K, and Layer Segmentation based Compression (LSC). SIQAD gives a DMOS for each image in the range [0, 100]. The specific information for each dataset is presented in Table 2. Experimental analysis on the SAUD and UIED image datasets verifies the effectiveness of the proposed method for UIQA, while the experiments on LIVE, LIVEMD, LIVEC, TID2013, and SIQAD demonstrate the applicability of the proposed method to different types of image data.

Table 2. Detailed information of the benchmark datasets.

Table 3. Performance comparison of the proposed method on the two underwater image datasets.

Table 4. Performance comparison of the proposed method in different color spaces. The best values are in bold.

With an increase in the number of features utilized, the proposed method improves on all assessment metrics, with SROCC, KROCC, and PLCC reaching 0.8354, 0.6467, and 0.8401, respectively. These gains are particularly evident when compared against the variants that use only luminance features and only chromatic features: SROCC improves by 0.1274 and 0.1886, KROCC by 0.1229 and 0.1761, and PLCC by 0.113 and 0.1613, respectively. The analysis indicates that, through the fusion of luminance and color component features, the proposed NMFC method achieves the best performance on all assessment metrics, with the highest degree of subjective-objective agreement. The experimental results illustrate the contribution of each individual feature and demonstrate the effectiveness of multi-feature fusion for UIQA.

Table 5. Performance of the ablation experiments on the SAUD dataset. The best values are in bold.