A dynamic multi-objective optimization method based on classification strategies

Dynamic multi-objective optimization problems (DMOPs) are common in real life. They are characterized by conflicting objectives whose Pareto front (PF) and Pareto set (PS) change with the environment. Various dynamic multi-objective algorithms have been proposed to solve such problems, but most of them fail to balance population diversity with convergence. Prediction-based methods are a common approach to DMOPs, but they only search for probabilistic models of the optimal values of the decision variables and do not consider whether those variables are related to diversity or to convergence. Consequently, we present a prediction method based on the classification of decision variables for dynamic multi-objective optimization (DVC): the decision variables are first pre-classified in the static phase, and the classification is then adjusted and used for prediction to adapt to environmental changes. Compared with other advanced prediction strategies, this classification-based approach is better able to balance population diversity and convergence. Experimental results show that the proposed DVC algorithm can handle DMOPs effectively.

Based on the above problems, this paper proposes a dynamic multi-objective optimization method based on the classification of decision variables. The classified decision variables allow better prediction of the population after environmental changes, and different prediction strategies effectively balance the diversity and convergence of the population. The main contributions of this paper are as follows:
• A static classification strategy is proposed that divides the decision variables into three categories, enabling the algorithm to better identify the role of each variable. The different decision variables are then explored more effectively to balance the diversity and convergence of the new intermediate population.
• A strategy is proposed to correct the classification of decision variables in a dynamic environment. The population is prompted to respond to environmental changes and the evolutionary direction of the decision variables is adjusted to ensure that the population evolves in the right direction.
• Different prediction strategies are provided for the different categories of decision variables, and an adaptive prediction strategy is selected according to each category, thus enabling adaptation to dynamic environments.

Related work
DMOPs are defined as minimization problems, and their mathematical formulation can be stated as follows:

min F(x, t) = (f_1(x, t), . . . , f_m(x, t))^T
s.t. h_i(x, t) = 0, i = 1, 2, . . . , n_h,
     g_i(x, t) ≤ 0, i = 1, 2, . . . , n_g,    (1)

where x = (x_1, x_2, . . . , x_n) represents the n-dimensional decision vector, m is the number of objectives, h_i and g_i represent the equality and inequality constraints, and n_h and n_g represent the numbers of those constraints, respectively. The variable t in the objective functions of a DMOP is the time variable, calculated as

t = (1/n_t) ⌊τ/τ_t⌋,    (2)

where τ is the iteration counter of the evolutionary algorithm, which directly affects the value of t, n_t is the severity of change, and τ_t is the frequency of change.
Definition 1 (Dynamic Pareto dominance). At time t, an individual x_1 is said to dominate an individual x_2 if and only if

∀i ∈ {1, . . . , m} : f_i(x_1, t) ≤ f_i(x_2, t)  ∧  ∃j ∈ {1, . . . , m} : f_j(x_1, t) < f_j(x_2, t).    (3)

Diversity introduction methods. In recent years, various DMOAs have been proposed to address DMOPs, and their core idea is to balance diversity and convergence in response to environmental changes. The first concern after an environmental change is to maintain population diversity by introducing random or mutated individuals once a change is detected, so diversity introduction has become one family of solutions. Diverse individuals cannot be introduced purely at random, however, because the Pareto set and Pareto front of a dynamic problem change with the environment. In other words, population diversity must be increased in a principled way; blindly adding random points will only push the population in a bad direction. Consequently, Deb et al.21 introduced diverse individuals by tracking the Pareto front, so that the newly formed population adapts better. An extension of dynamic vector-evaluated particle swarm optimization (VEPSO) was proposed by Harrison et al.22 to address the shortcoming that change-detection mechanisms rely on observing changes in the objective space. Based on NSGA-II23,24, the DNSGA-II25 algorithm was proposed, with two adaptive variants, DNSGA-II-A26 and DNSGA-II-B, but it is not well suited to problems with complex environmental changes.
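As a minimal illustration, the time variable and the dominance test above can be sketched in a few lines of Python (the function names are ours, not the paper's):

```python
def time_value(tau: int, n_t: int, tau_t: int) -> float:
    """Time variable t = (1/n_t) * floor(tau / tau_t): n_t is the severity of
    change and tau_t the change frequency (in iterations)."""
    return (tau // tau_t) / n_t

def dominates(f1, f2) -> bool:
    """Dynamic Pareto dominance at a fixed time t for minimization: f1 and f2
    are the objective vectors of two individuals evaluated at that time."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))
```

With n_t = 10 and τ_t = 25 (the settings used later in the experiments), t stays at 0 for the first 25 iterations and then advances in steps of 0.1.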
Diversity maintenance methods. The diversity maintenance approach, also called a memory-based strategy, focuses on storing historical information about past environments and reusing it after the environment changes. In contrast to diversity introduction methods, diversity maintenance mechanisms usually store historical PS directly as initial populations. Nevertheless, this approach is limited to continuous optimization problems, and it also struggles to cope with severe environmental changes. Li et al.27 proposed a novel dynamic multi-objective optimization algorithm (DMOA-DM) based on regional local search and memory, using NSGA2-DM to store useful information (memory) to guide the optimization of the population. Liang et al.28 proposed a novel dynamic multi-objective evolutionary algorithm that combines a hybrid memory-and-prediction strategy (HMPS)29,30 with the decomposition-based multi-objective evolutionary algorithm (MOEA/D31). Differential prediction based on two consecutive population centroids is used if the detected change is not similar to any historical change; otherwise, memory-based techniques are applied to predict the new position of the population.
Prediction-based methods. Prediction-based dynamic multi-objective optimization algorithms also exploit historical information, but in contrast to the diversity maintenance approach, they rely more heavily on the prediction strategy, and the quality of the optimal solution after an environmental change depends on the strength of the prediction model. Unfortunately, this class of methods only accounts for changes in the center of the manifold and uses only the historical information of the previous moment, so it has significant limitations for population updating. Zhou et al.32 proposed a population prediction strategy (PPS) that initializes the entire population by combining the predicted center and the estimated manifold shape. Zou et al.33 combined special points and centroids to create a prediction mechanism that uses the adjacent time interval as the prediction step and directly predicts the set of non-dominated solutions. Jiang et al.34 developed a knee-point-based transfer learning method called KT-DMOEA. In that method, a trend prediction model (TPM) generates predicted knee points, and an imbalanced transfer learning method together with the TPM-predicted knee points is used to generate high-quality initial populations.
Decision variable classification methods. Diversity introduction methods and prediction methods do not consider the properties of the decision variables during the iteration; they can be regarded as probabilistic models that search for optimal values of the decision variables. In other words, these methods assume that all decision variables contribute with the same probability to the diversity and the convergence of the population. Nonetheless, in most DMOPs the probabilities differ, and decision variables of different natures should be matched with different search models to obtain better solutions.
In static MOPs, the categories of the decision variables can be determined by perturbing them to produce a large number of individuals and then applying fitness evaluation35-38. However, decision variable classification in static MOEAs is performed only once and the categories are not revised after classification, so these strategies lose their effectiveness on dynamic MOPs. Liang et al.39 proposed a dynamic multi-objective evolutionary algorithm based on the classification of decision variables, in which the Spearman rank correlation coefficient (SRCC)18 determines the categories of the decision variables in the static classification, and a non-parametric t-test further corrects the variable categories after environmental changes. Liu et al.16 introduced a new classification method for decision variables based on the monotonicity of the optimization objectives, which does not use dominance relations but instead uses reference vectors to guide the analysis of the decision variables.

Proposed algorithm
To address the above problems, we propose a dynamic multi-objective optimization algorithm based on classification prediction. The algorithm consists of three important parts. The first part is the static classification strategy, which divides the decision variables into two groups: diversity-related and convergence-related. The diversity-related decision variables become an important reference after environmental changes, preventing the population from falling into a local optimum and converging prematurely. The second part is the dynamic classification adjustment strategy applied after the environment changes, which further determines the predictability of the decision variables. The third part applies different prediction strategies according to the results of the dynamic classification. The components of the algorithm are shown in Fig. 1.

RM-MEDA.
Under mild conditions, the Karush-Kuhn-Tucker conditions40 imply that the POS of a continuous multi-objective optimization problem is a piecewise continuous (m − 1)-dimensional manifold. The algorithm proceeds as shown in Algorithm 1. Categories a and b are defined as diversity-related variables, while category c is defined as convergence-related. The classification after perturbation is shown in Fig. 2. Consequently, the key step in solving DMOPs is to determine the type of each decision variable.
This section presents a classification method for decision variables in static environments, as shown in Fig. 3. First, representative points are selected. Viewing the population in two-dimensional objective coordinates, the points with extreme values on each objective are found and defined as boundary points. Considering the first objective, the range from the minimum boundary point to the maximum boundary point is divided evenly into three equal parts; in each part, the non-dominated point nearest to the line through the boundary points and the non-dominated point farthest from it are selected (if a part contains no point, two points are generated randomly). A total of six points are thus selected, and their decision variables are perturbed (the number of perturbations is 5).
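For two objectives, this selection step can be sketched as follows. This is a simplification under our reading of the procedure: empty regions are skipped here rather than filled with random points, and the function name is ours.

```python
import numpy as np

def select_representatives(front):
    """From a 2-objective non-dominated set, form the line through the two
    boundary (extreme) points, split the first objective's range into three
    equal regions, and in each region keep the points nearest to and farthest
    from that line."""
    f = np.asarray(front, dtype=float)
    b1 = f[f[:, 0].argmin()]            # boundary point with minimal f1
    b2 = f[f[:, 0].argmax()]            # boundary point with maximal f1
    line = b2 - b1
    norm = np.linalg.norm(line)
    # perpendicular distance of each point to the boundary line (0 if degenerate)
    dist = (np.abs(line[0] * (f[:, 1] - b1[1]) - line[1] * (f[:, 0] - b1[0])) / norm
            if norm > 0 else np.zeros(len(f)))
    edges = np.linspace(b1[0], b2[0], 4)   # three equal regions over f1
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.flatnonzero((f[:, 0] >= lo) & (f[:, 0] <= hi))
        if idx.size:
            chosen.append(f[idx[dist[idx].argmin()]])   # nearest to the line
            chosen.append(f[idx[dist[idx].argmax()]])   # farthest from the line
    return np.array(chosen)
```

On a front such as f2 = 2 − f1, each of the three regions yields two points, giving the six representatives described above.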
To determine the class of a decision variable, the values of the perturbation points generated for that variable are normalized and fitted to a reference line L_fitted. We consider that the normal to the hyperplane represents the direction of convergence, so the normal L_normal of the hyperplane (whose dimension is determined by the number of objectives) is computed. As shown in Fig. 4, for clarity of presentation we give an example with two selected points, point 1 and point 2, represented by green and blue dots, respectively. The decision variables x_1, x_2, x_3, x_4 are perturbed separately, and after perturbation the fitted line L_fitted and the hyperplane normal L_normal form 8 different angles (only acute angles are taken). An angle is denoted θ_ij, where i indexes the selected point and j indexes the decision variable. Whether decision variable x_1 is related to convergence can be determined from the angles: summing all θ_i1 and taking the mean, a smaller Σ_i θ_i1 means that L_fitted is closer to L_normal, in which case x_1 is determined to be related to convergence; otherwise it is related to diversity. In short, a larger angle contributes more to diversity and a smaller angle contributes more to convergence. Since some decision variables affect both diversity and convergence, k-means is used to divide the correlation angles of the decision variables x_1, x_2, x_3, x_4 into three categories, so as to avoid an overly absolute division. As shown in Fig. 5, the four decision variables are divided into three different categories. Algorithm 2 presents the detailed steps of the static classification.
It is environmental change in DMOPs that affects the evolution of the population: changes in the environment lead to problems such as objective conflict and hence the need to respond during evolution. In the absence of environmental change, the population can only converge to the optimum of the current environment. To prevent the population from being trapped in such an optimum and failing to respond to environmental changes, the categorization of the decision variables must be adjusted in dynamic environments. This adaptive adjustment enables the population to respond efficiently and effectively to environmental changes, and is therefore crucial for convergence in a changing environment. In a static environment, the decision variables are divided into three categories: those related to diversity, those related to convergence, and those related to both. This static classification serves as a guide for classification in the dynamic environment, where the categories must be adjusted as the environment changes and different categories are matched with different prediction strategies. If a decision variable remains essentially constant across several consecutive environmental changes, it is likely not to change at the next one either; such variables are considered similar variables and need not be initialized in the prediction process. A variable that changes significantly between two consecutive environments is identified as needing a longer prediction process, whereas a variable showing no significant change is assigned a shorter one. In response to environmental changes, we employ a non-parametric t-test to adjust the categorization of the decision variables to the different prediction strategies.
The non-parametric t-test statistic for decision variable i is

t-test_i = |x̄_i(t) − x̄_i(t−1)| / √(Var(x_i(t))/n + Var(x_i(t−1))/n),    (4)

where x̄_i(t) denotes the mean value of decision variable i of the selected points at time t and Var(x_i(t)) its variance. A predetermined threshold β is set as the criterion for testing the attributes of the decision variables. If t-test_i ≤ β, the decision variable is considered to remain essentially constant over two successive environmental changes and is defined as a similar variable; the selected points and perturbation points on these variables are added directly to the new population without initialization. If t-test_i > β, the decision variable must be further subdivided into diversity-related or convergence-related, that is, it must be determined which prediction strategy should be assigned to it. This further determination is based on historical information. First, let C_t be the centroid of the non-dominated solution set, obtained as

C_t = (1/|P_t^Non-dom|) Σ_{x_t ∈ P_t^Non-dom} x_t,    (5)

where C_t denotes the centroid of the non-dominated set at time t, |P_t^Non-dom| the number of non-dominated individuals at time t, and x_t a non-dominated individual at time t. The historical classification results of the decision variables are then consulted. Subsequently, n points X_t[i] (i = 1, 2, 3, . . . , n) are generated by replacing the i-th decision variable of C_t with the values of the selected points and the perturbation points on the i-th decision variable at time t − 1. Each generated point is compared with C_t for dominance: if X_t[i] dominates C_t, the category of decision variable i is adjusted to Con_t′, otherwise to Div_t′. The corresponding selected points and perturbation points on the decision variables in Con_t′ and Div_t′ form two point sets, pop_t^Con′ and pop_t^Div′, respectively.
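The centroid computation and a Welch-style variant of the t-statistic can be sketched as follows. The paper does not spell out its exact non-parametric form, so treat this statistic as an assumption of the sketch:

```python
import numpy as np

def centroid(non_dominated):
    """C_t: the mean of the non-dominated individuals at time t."""
    return np.asarray(non_dominated, dtype=float).mean(axis=0)

def t_statistic(xi_t, xi_prev):
    """Welch-style statistic comparing the values of decision variable i at
    times t and t-1; a value at or below the threshold beta marks the
    variable as 'similar' (essentially unchanged)."""
    a = np.asarray(xi_t, dtype=float)
    b = np.asarray(xi_prev, dtype=float)
    denom = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return abs(a.mean() - b.mean()) / denom if denom > 0 else 0.0
```

A variable whose sampled values are identical in both environments yields a statistic of zero, while a clear shift in the mean drives the statistic well above any small β.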
The two sets are matched with different prediction strategies, described in the next section. The detailed steps of the dynamic classification of decision variables are shown in Algorithm 3.
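The static angle computation and the k-means grouping described in the previous subsection can be sketched as follows. The SVD line fit and the tiny 1-D k-means are our simplifications, not the paper's exact procedure:

```python
import numpy as np

def fitted_angle(perturbed_objs, normal):
    """Acute angle (degrees) between the line fitted to the normalized perturbed
    objective values of one decision variable and the hyperplane normal L_normal."""
    pts = np.asarray(perturbed_objs, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                                   # leading principal direction
    cos = abs(direction @ normal) / (np.linalg.norm(direction) * np.linalg.norm(normal))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

def kmeans_1d(values, k=3, iters=30):
    """Tiny 1-D k-means used to split the mean angles into three categories
    (convergence-related, diversity-related, and both)."""
    vals = np.asarray(values, dtype=float)
    centers = np.linspace(vals.min(), vals.max(), k)
    labels = np.zeros(len(vals), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    return labels, centers
```

Perturbations that move the objective values along the normal give an angle near 0° (convergence-related), perturbations that move them along the hyperplane give an angle near 90° (diversity-related), and the middle k-means cluster captures variables related to both.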
Predictive strategy. As described above, the selected points and the perturbation points fall into three categories according to the classification of the decision variables. (1) For similar decision variables, the point set pop_t^similar is added directly to the next iteration without initialization. (2) The point set pop_t^Div′ corresponding to the diversity-related decision variables uses the strategy

pop_{t+1}^Div = pop_t^Div′ + rand · dir,  dir = C_t − C_{t−1},    (6)

where dir is the step between the centroid at time t and the centroid at time t − 1, used to predict pop_{t+1}^Div, and rand ∈ [0.5, 1.5] is a random value. (3) The prediction strategy for the point set pop_t^Con′ corresponding to the convergence-related decision variables is

pop_{t+1}^Con = pop_t^Con′ + Gaussian(0, d),  kept within [L_i(t), U_i(t)] in each dimension i,    (7)

where U_i(t) and L_i(t) represent the maximum and minimum values of the i-th dimension at time t, and Gaussian(0, d) is a Gaussian perturbation with mathematical expectation 0 and variance d.
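The two non-trivial responses, the random step along the centroid direction and the bounded Gaussian perturbation, can be sketched as follows. The per-dimension clipping to [L_i(t), U_i(t)] is our reading of the bounds' role, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def predict_diversity(points, c_t, c_prev):
    """Shift diversity-related points along dir = C_t - C_(t-1), each point
    scaled by its own random factor rand in [0.5, 1.5]."""
    pts = np.asarray(points, dtype=float)
    step = np.asarray(c_t, dtype=float) - np.asarray(c_prev, dtype=float)
    rand = rng.uniform(0.5, 1.5, size=(len(pts), 1))   # one factor per individual
    return pts + rand * step

def predict_convergence(points, lower, upper, d=0.1):
    """Perturb convergence-related points with Gaussian noise of variance d
    and keep them inside the per-dimension bounds [L_i(t), U_i(t)]."""
    pts = np.asarray(points, dtype=float)
    perturbed = pts + rng.normal(0.0, np.sqrt(d), size=pts.shape)  # scale = std dev
    return np.clip(perturbed, lower, upper)
```

Similar points are simply copied into the next population, so no function is needed for case (1).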

Test problems and performance indicators
Test instances. We tested 12 benchmarks drawn from the FDA1-FDA4 43 test suite, the dMOP1-dMOP3 44 test suite, and the F5-F10 45 test suite. Among them, the FDA1-FDA4 problems are non-convex, continuous or discontinuous, and time-varying or non-time-varying. FDA4 and F8 are three-objective test problems; the others have two objectives. Both the FDA and dMOP test suites have linearly correlated decision variables, whereas nonlinearly correlated decision variables are present in the F5-F10 test suite.

Performance indicators. Inverted generational distance (IGD).
This is a comprehensive evaluation metric whose underlying idea is to compute the mean of the minimum distances between the individuals of the true POF and the individuals generated by the algorithm under evaluation; the smaller the distance, the better the algorithm's convergence and distribution. The IGD46 is calculated as

IGD(POF_t, P_t) = (Σ_{v ∈ POF_t} d(v, P_t)) / |POF_t|,    (8)

where, at time t, POF_t denotes a uniformly distributed set of Pareto-optimal points and P_t is the approximation of POF_t. This can be interpreted as measuring the shortest distance between the true POF and the algorithm's optimal solutions; d(v, P_t) is the minimum Euclidean distance between v and the points in P_t.
The average of IGD (MIGD). For DMOPs, MIGD provides a better evaluation of MOEAs. MIGD is expressed as

MIGD = (1/|T|) Σ_{t ∈ T} IGD(POF_t, P_t),    (9)

where T is a set of discrete time points in a run and |T| is the cardinality of T.
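Both indicators reduce to a few lines (the function names are ours):

```python
import numpy as np

def igd(pof, approx):
    """IGD: mean, over the true front POF_t, of each point's minimum Euclidean
    distance to the approximation set P_t."""
    pof = np.asarray(pof, dtype=float)
    approx = np.asarray(approx, dtype=float)
    d = np.linalg.norm(pof[:, None, :] - approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def migd(igd_values):
    """MIGD: the average of the IGD values over the time points in T."""
    return sum(igd_values) / len(igd_values)
```

An approximation set that coincides with the true front yields IGD = 0, and MIGD simply averages the per-time-point IGD values over a run.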
Hypervolume difference (HVD). HVD45 measures the distance between the hypervolume of the POF obtained by the algorithm and that of the real POF. HVD is calculated as

HVD = |HV(POF_t) − HV(P_t)|,    (10)

where HV(POF_t) represents the volume of the region in objective space enclosed by the set of non-dominated solutions and the reference point; the larger the HV value, the better the overall performance of the algorithm. P_t is the approximation set of the POF at time t.
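For two objectives, HV and hence HVD can be sketched with the standard sweep. This assumes the input set is non-dominated (minimization) and that a reference point is supplied; both are assumptions of the sketch:

```python
import numpy as np

def hv_2d(points, ref):
    """Hypervolume (minimization, 2 objectives) of a non-dominated set w.r.t.
    a reference point: sweep points by ascending f1 and sum the rectangles."""
    pts = sorted(tuple(p) for p in np.asarray(points, dtype=float)
                 if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangle added by this point
        prev_f2 = f2
    return hv

def hvd(pof, approx, ref):
    """HVD = |HV(POF_t) - HV(P_t)| for a shared reference point."""
    return abs(hv_2d(pof, ref) - hv_2d(approx, ref))
```

For the two-point front {(0, 1), (1, 0)} with reference point (2, 2), the dominated region is the union of two 1×2 rectangles overlapping in a unit square, giving HV = 3.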

Experimental results and analysis
Parameter settings. In this paper, the proposed algorithm DVC is compared with six dynamic multi-objective algorithms, all of which use RM-MEDA as the underlying optimizer. The six compared algorithms are: the feed-forward prediction strategy (FPS), the population prediction strategy (PPS), the centroid and knee-point-based prediction strategy (CKPS), the special-point-based prediction strategy (SPPS), the change response mechanism combining hybrid prediction and mutation strategies (HPPCM), and a prediction method based on diversity screening and special-point prediction (DSSP). For a fair comparison, the parameters of each algorithm were set essentially to the values provided in the original papers, with some parameters adjusted to ensure comparison under the same conditions as the present method. The experimental parameters are summarized as follows.
Each problem was tested 20 times independently, with 100 environmental changes generated in each run. The severity of environmental change n_t is set to 10, the change frequency τ_t to 25, and the number of iterations to 2500. The population size is 100 and the decision space dimension is 20.
• Feed-forward prediction strategy (FPS)47: the order of the AR(p) model was p = 3, the number of clusters was set to 5, and the maximum length of the historical center-point sequence was M = 23. In the initial population there are 3(m + 1) predicted points; 70% of the remaining points are inherited from the previous population and the other 30% are randomly sampled from the search space.
• Population prediction strategy (PPS)32: PPS uses the same AR(p) model as FPS, with p = 3 and M = 23.
• A novel prediction strategy based on center points and knee points (CKPS)33: the number of knee points is 9.
• A predictive strategy based on special points (SPPS)48: in predicting the non-dominated set from feed-forward center points, the Gaussian perturbation d is set to 0.1.
• Change response mechanism combining hybrid prediction and mutation strategies (HPPCM)49: the number of generations of autonomous evolution ∆t is set to 2.
• A dynamic multi-objective evolutionary algorithm based on prediction (DSSP)42: the number of knee points is 9 and the number of perturbations is set to 5.

Comparison of performance evaluation results.
Analysis of FDA and dMOP evaluation results. Table 2 shows the mean and variance of the MIGD values for all compared algorithms on the FDA1-FDA4 and dMOP1-dMOP3 test problems. During the experiments we ensured that the population underwent 100 environmental changes, which were observed in three stages and assessed with the evaluation metrics. The first 20 environmental changes form the 1st stage, the middle 40 the 2nd stage, and the last 40 the 3rd stage.
From an overall perspective, the DVC algorithm performs well in all stages: although it is slightly inferior to HPPCM on the FDA3 and dMOP3 problems, it performs better on all the other test problems, demonstrating a strong ability to respond to environmental changes. FDA3 is a nonlinear test problem whose Pareto solution set fluctuates strongly after environmental changes. HPPCM uses a precisely controlled polynomial mutation strategy that effectively governs the updating and retention of new populations, whereas our proposed DVC relies more on the effect of the decision variables in the population; this may be why HPPCM outperforms DVC there. In the dMOP3 problem there is no correlation between the decision variables, which affects the results of DVC to a certain extent; on closer inspection, however, DVC is only slightly inferior to HPPCM, which shows that DVC and HPPCM have their own advantages. HPPCM performed slightly better on FDA3, but DVC's MIGD values were nearly on par with HPPCM's. In the first stage, DVC was only marginally inferior to HPPCM on FDA3 and slightly worse than the other methods on the dMOP problems, but performed well on FDA1, FDA2, FDA4, dMOP1, and dMOP2, indicating that DVC responds faster in the early stages of environmental change. The MIGD values on FDA2 in stages 2 and 3 are superior to those of the other strategies.
Table 3 shows the MHVD metrics of the seven algorithms on the FDA and dMOP problems; as can be seen from the table, the results are similar to those for MIGD. Overall, DVC is strongly competitive; on the FDA4 problem in particular, DVC's results are clearly better than those of the other algorithms.
DVC differs from CKPS, SPPS, HPPCM, and DSSP in that it does not focus on the selection and prediction of particular points but on the classification of the decision variables and their changes with the environment. CKPS, SPPS, HPPCM, and DSSP adopt special-point strategies, in which special points are employed to increase population diversity. In contrast to adding diversity to the population at random, increasing diversity with special points removes the instability of random additions; nevertheless, the definition of special points is, to some extent, coarse-grained.
The advantage of DVC is that only the categories of decision variables need to be analyzed and then the attributes of decision variables are judged by historical information, and different attributes correspond to different prediction strategies, which can balance the diversity and convergence of populations.
Analysis of F5-F10 evaluation results. Tables 4 and 5 show the MIGD and MHVD values on the F5-F10 test suite, respectively. It is clear that HPPCM, DSSP, and DVC are much superior to the rest of the seven compared algorithms, and among these three, DVC obtains the better results overall. F5-F7 are nonlinearly correlated problems, on which DVC clearly performs better than HPPCM and DSSP; nonlinear correlation implies complex PF changes and demands a more capable strategy for responding to environmental changes. For the three-objective problem F8, DSSP outperforms DVC to some extent because DSSP takes more objectives into account when sampling special points, but DVC's results are not far behind, and its MIGD values outperform DSSP in the first and third stages. The resilience of the DVC approach is also demonstrated by the standard deviations presented in Tables 2, 3, 4 and 5: the DVC method exhibits remarkable robustness and can swiftly and efficiently adapt to environmental variations, tackling problems of varying complexity with ease.
Distribution of the final population. Figure 6 visualizes the POF tracking ability of DVC and the six comparison algorithms on the FDA1 problem. It can be seen from Fig. 6 that DVC has a more uniform POF distribution than the six comparison algorithms, which shows that DVC better balances diversity and convergence. The dMOP2 problem has both a changing POF and a changing POS. On dMOP2 we show the different algorithms in the late stage of the environmental changes; as seen in Fig. 7, most of the algorithms track the POF reasonably well, but DVC exhibits a more uniform distribution.

Discussion
To examine the effectiveness and robustness of the algorithm, we conducted a specific comparison with its strongest competitor, HPPCM. In this experiment, we keep the value of n_t unchanged while varying τ_t, the frequency of environmental changes, to observe the algorithm's adaptability to dynamic environments. From Table 6, it is clear that DVC significantly outperforms HPPCM in both the MIGD and MHVD metrics, which further illustrates that DVC adapts better to environments with varying change frequencies. Over 100 staged environmental changes, HPPCM indeed demonstrates results comparable to DVC; however, when the frequency of environmental changes is altered, DVC exhibits a clear advantage. HPPCM's precise and controllable mutation strategy holds some advantage in rapidly changing environments, whereas DVC's stability relies on the relationships between the decision variables and shows superiority across different change frequencies. Both algorithms have their respective strengths and weaknesses, but DVC's stability is a robust competitive advantage.

Conclusions and future work
This paper fully considers the role of the decision variables under environmental change and proposes DMOP-DVC, based on decision variable classification, by integrating the historical information of the static classification of the decision variables. The proposed method reflects the evolutionary direction of the decision variables in the current environment, and prediction strategies for the different classes are used to optimize the evolution of the population so as to balance convergence and diversity.
The experiments show that DMOP-DVC has many advantages, but the algorithm also has some drawbacks. For example, the accuracy of the prediction strategy can be further improved; future work could introduce trained models that take global historical changes into account. Whether different prediction strategies are needed when the environment changes only slightly or very drastically is also a question worth exploring in future work.

Type | POS              | POF
I    | Change with time | Remain the same
II   | Remain the same  | Change with time
III  | Change with time | Change with time
IV   | Remain the same  | Remain the same

Scientific Reports | (2023) 13:15221 | https://doi.org/10.1038/s41598-023-41855-2

Figure 1 .
Figure 1. General flowchart of the algorithm.

Figure 3 .
Figure 3. Points are selected from the decision space and perturbed.

Figure 4 .
Figure 4. Convergence-related and diversity-related variables are identified using angles.

Figure 5 .
Figure 5. Three categories are classified according to the angle: diversity-related, convergence-related, and both diversity- and convergence-related.

Figure 7 .
Figure 7. (a-g) exhibit the results of the algorithms FPS, PPS, CKPS, SPPS, HPPCM, DSSP, and DVC with n_t = 10 and τ_t = 25 on the dMOP2 problem at the late stage of environmental changes, respectively.

Table 2 .
Mean and standard deviation of MIGD values of the compared strategies on the FDA and dMOP test suites. The values in bold face denote the best result among the strategies.

Table 3 .
Mean and standard deviation of MHVD values of the compared strategies on the FDA and dMOP test suites. The values in bold face denote the best result among the strategies.

Table 4 .
Mean and standard deviation of MIGD values of the compared strategies on F5-F10. The values in bold face denote the best result among the strategies.

Table 5 .
Mean and standard deviation of MHVD values of the compared strategies on F5-F10. The values in bold face denote the best result among the strategies.

Table 6 .
Mean and standard deviation of MIGD and MHVD values of HPPCM and DVC for different τ_t values. Significant values are in bold.