Introduction

Many-objective optimization problems (MaOPs), i.e., multi-objective optimization problems (MOPs) with more than three objectives, have been attracting considerable attention in many fields. On the one hand, many real-world applications can be modeled as MaOPs, such as parameter estimation1,2, engine calibration3, and community detection4. On the other hand, MOPs are not well solved when the number of objectives increases. To overcome this challenge, a large number of many-objective evolutionary algorithms, which adopt an evolutionary algorithm as their optimizer, have been proposed to improve the performance in solving MaOPs. However, nearly all evolutionary algorithms are designed for single-objective optimization problems, so many-objective evolutionary algorithms (MaOEAs) must add other many-objective processing approaches, including Pareto-based approaches, decomposition-based approaches, and indicator-based approaches. A many-objective evolutionary algorithm therefore contains the two following major parts: the evolutionary algorithm and the many-objective processing approach.

In recent years, many promising many-objective evolutionary algorithms have been proposed. Bader et al. proposed the hypervolume (HV) estimation algorithm (HypE)5. Yang et al. proposed a grid-based evolutionary algorithm (GrEA)6, which introduces the concepts of grid dominance and grid difference and evaluates the relationships among individuals in the grid environment. Yuan et al. proposed a new theta-dominance7, in which the weight vectors are used to divide the solutions into niches, and the evaluation and selection of solutions are completed within each niche. Zhang et al. proposed a many-objective optimization algorithm based on an improved r-dominance8, which adopts a dynamic strategy with a nonlinearly decreasing non-r-dominance threshold based on r-dominance. Cheng et al. proposed a reference vector-guided evolutionary algorithm9, which introduces a new aggregation function, APD, to judge the quality of a solution by the angle between the solution and its associated weight vector. Zhu et al. presented a new linear weighted minimal/maximal dominance (LWM-dominance) and a new many-objective optimization algorithm based on LWM-dominance10. Sun et al. proposed an inverted generational distance (IGD) indicator-based evolutionary algorithm11. Dabba et al. applied the artificial fish swarm optimization algorithm to solve MaOPs12. Zhao et al. developed a decomposition-based many-objective artificial bee colony algorithm with reinforcement learning13. Wu et al. introduced a novel many-objective Brain Storm optimization algorithm14. Zhang et al. proposed a hybrid multi-agent Coordination Optimization Algorithm (MCO)15, which applies a coordination mechanism to accelerate convergence. Guo et al. presented a many-objective optimization method with an improved shuffled frog leaping algorithm16. Liu et al. proposed a novel multi-objective optimization algorithm based on the Bacterial Foraging algorithm17. Uzman et al. developed a many-objective hybrid Bacterial Foraging algorithm18 and a multi-objective artificial butterfly optimization algorithm19. Li et al. proposed a many-objective optimization algorithm based on the R2 indicator and objective space partition20. This algorithm uses a double-layer archive correction strategy to assign each solution a different priority when selecting candidate solutions, in order to give priority to the discrete solutions, and it was shown to alleviate the serious loss of diversity. To present the composition of the above representative many-objective evolutionary algorithms more clearly, we summarize these algorithms in Table 1.

Table 1 Representative many-objective evolutionary algorithms.

The following conclusions can be drawn from the above literature. Compared with Pareto-based and indicator-based approaches, decomposition-based approaches are more widely used in handling MaOPs. In addition, most MaOEAs are proposed by improving the many-objective processing approach or by adopting a recently proposed evolutionary strategy with excellent performance. MaOEAs adopting evolutionary strategies with excellent convergence accuracy and speed, such as the Bat evolution strategy and the Cuckoo strategy, show better convergence and diversity than those with classical evolutionary strategies, such as GA, PSO, and ABC. MaOEAs that simultaneously combine an improved many-objective processing approach with a novel, high-performing evolutionary strategy are therefore likely to achieve promising performance.

To improve the performance of MaOEAs in solving MaOPs when the number of objectives increases, we propose a many-objective optimization method based on an improved Farmland Fertility algorithm (MOIFF). Our main innovations and contributions can be summarized as follows.

  (1)

    The Farmland Fertility algorithm (FF), a novel bio-inspired meta-heuristic method proposed in 2018, is employed as the optimization strategy of MOIFF. FF performs better than many well-known meta-heuristic methods (including GA, DE, PSO, and ABC) in terms of convergence accuracy, stability, and speed. An improved FF algorithm (IFF) was proposed in 202025. However, FF and IFF are designed for solving complex single-objective optimization problems. To handle MaOPs effectively, FF has been tailored in this study from the following aspects. First, we propose a novel individual fitness assessment approach based on the cumulative ranking value to distinguish the quality of each individual in MaOPs. Second, according to the characteristics of MaOPs, we propose a novel method based on the individual cumulative ranking value to constitute and update the global memory and local memory of each individual, and a hybrid search mode combining subspace search and full-space search to update individuals at the stages of soil optimization and soil fusion.

  (2)

    Experimental results have shown that dual aggregation functions-based environmental selection is a representative and promising many-objective processing approach. However, satisfactory diversity is hard to obtain with it, since offspring individuals are selected randomly and holes may appear in the Pareto front (PF). We propose a novel adaptive environmental selection method to address these issues. It not only avoids the blindness of random selection but also satisfies the requirements of convergence and diversity of MaOEAs at different stages of evolution.

Finally, the proposed MOIFF is compared with four state-of-the-art many-objective evolutionary algorithms on many test problems with various characteristics, including the DTLZ and WFG test suites. Experimental results demonstrate that the proposed algorithm has competitive convergence and diversity on MaOPs.

The rest of the paper is organized as follows: “Methods” introduces the relevant theoretical knowledge, including the basic principles of the original FF and a representative many-objective processing approach; “Proposed method” describes the innovations, principle, procedures, and detailed operations of the proposed MOIFF; “Results and discussion” compares the performance of MOIFF against four state-of-the-art many-objective evolutionary algorithms; “Conclusions” concludes the paper and points out issues to be studied in the future.

Methods

Farmland fertility algorithm

In the real world, farmers apply different fertilizers to farmland sections with different soil qualities. By simulating this behavior, the Farmland Fertility algorithm (FF) was proposed to handle single-objective optimization problems. In FF, the fertilization schemes and the soil qualities of the farmland sections correspond to individuals and their fitness values, respectively. The section of the farmland with the worst soil quality is assigned the best fertilization scheme, while the fertilization schemes of the other sections are selected randomly. In this way, the soil quality of the farmland is effectively improved through continuous improvement of the fertilization schemes. The pseudo-code of FF is shown in Algorithm 1.

figure a
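To make the workflow concrete, the following Python sketch is provided for illustration only: it mirrors the behavior described above (the population is split into farmland sections, each section's soil quality is rated by the mean fitness of its schemes, the worst section learns from the best scheme found so far, and the other sections learn from randomly selected schemes). The detailed update equations and memory handling of Algorithm 1 are deliberately replaced by simple guided random steps, and all function and parameter names are illustrative assumptions rather than the reference implementation of FF.

```python
import numpy as np

def farmland_fertility_sketch(obj, dim, n_sections=4, n_per_section=5,
                              iters=200, lb=-5.0, ub=5.0, seed=0):
    """Structural sketch of the FF workflow described above.

    obj is a single-objective function to minimize. The detailed update
    equations of Algorithm 1 are replaced by simple guided random steps;
    only the section/soil-quality logic is kept.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n_sections * n_per_section, dim))
    for _ in range(iters):
        fit = np.array([obj(x) for x in pop])        # soil quality of every scheme
        sections = pop.reshape(n_sections, n_per_section, dim)
        sec_quality = fit.reshape(n_sections, n_per_section).mean(axis=1)
        global_best = pop[fit.argmin()].copy()       # best fertilization scheme so far
        worst = int(sec_quality.argmax())            # section with the worst soil quality
        for s in range(n_sections):
            # The worst section learns from the best scheme; the other sections
            # learn from randomly selected schemes.
            guide = global_best if s == worst else pop[rng.integers(len(pop))].copy()
            step = rng.uniform(-1.0, 1.0, (n_per_section, dim))
            sections[s] = np.clip(sections[s] + step * (guide - sections[s]), lb, ub)
        pop = sections.reshape(-1, dim)
    fit = np.array([obj(x) for x in pop])
    return pop[fit.argmin()], float(fit.min())

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = farmland_fertility_sketch(lambda x: float(np.sum(x ** 2)), dim=10)
```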

A representative many-objective processing approach

In 2019, Zhang et al. proposed a novel many-objective processing approach in the decomposition-based coevolutionary algorithm (DECAL), named dual aggregation functions-based environmental selection, which consists of the two following aggregation functions with complementary strengths: the volume (VOL) function and the KNEE function. The VOL function is strong at facilitating the convergence of individuals, while the KNEE function is designed to maintain the diversity of the population. Unlike the most commonly used PBI approach, the VOL and KNEE functions proposed in DECAL are parameter-free and, more importantly, obtain better convergence and diversity on MaOPs. For the above reasons, we adopt the dual aggregation functions-based environmental selection proposed in DECAL as the many-objective processing approach of MOIFF. The pseudo-code of DECAL is shown in Algorithm 2.

figure b

Proposed method

We propose a many-objective optimization algorithm based on an improved Farmland Fertility algorithm (MOIFF) to improve the convergence and diversity of MaOEAs. FF is employed as the main evolutionary strategy of MOIFF, and dual aggregation functions-based environmental selection with the VOL and KNEE functions is improved as the many-objective processing approach of MOIFF. The basic procedure of the proposed MOIFF is similar to those of most decomposition-based MaOEAs. The pseudo-code of MOIFF is shown in Algorithm 3.

figure c

Novel individual fitness assessment approach based on cumulative ranking value

To evaluate the quality of individuals in MaOPs effectively, we propose a novel individual fitness assessment approach based on the cumulative ranking value. The main motivation is as follows. For decomposition-based MaOEAs, individuals associated with the same weight vector can be compared in performance. Suppose that all individuals in an MaOP are associated with the same weight vector; then all of them can be compared. However, decomposition-based approaches commonly contain a set of evenly spread weight vectors, and almost all individuals are associated with different weight vectors. The performance of each individual should therefore be evaluated by all weight vectors instead of only one. In addition, at the beginning of the iteration process, MaOEAs should facilitate the convergence of individuals; at the end, attention should shift to diversity.

Based on the above motivation, we propose the following method to evaluate the quality of each individual for MaOPs. Step 1: At the beginning of the iteration process, for each individual, calculate the VOL function value on each weight vector using Eq. (10); at the end of the iteration process, calculate the KNEE function value using Eq. (11). Step 2: For each weight vector, sort the individuals with respect to the VOL function or the KNEE function; thus, each individual is assigned N ranking values, where N is the size of the set of weight vectors. Step 3: For each individual, accumulate all ranking values as its novel assessment fitness value, recorded as s_sort(i). Obviously, the smaller the cumulative ranking value s_sort(i) of an individual, the better the individual.
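A minimal sketch of Steps 1–3 is given below, assuming the VOL or KNEE values have already been evaluated for every individual on every weight vector and, for illustration, that smaller aggregation values are better; the matrix name `agg` is ours.

```python
import numpy as np

def cumulative_ranking(agg):
    """Cumulative ranking value s_sort for each individual.

    agg is an (NP x N) matrix: agg[i, j] is the VOL (early iterations) or
    KNEE (late iterations) value of individual i on weight vector j. Smaller
    aggregation values are assumed to be better (an illustrative assumption);
    ties are broken arbitrarily.
    """
    # Rank the individuals on every weight vector (rank 1 = best on that vector).
    ranks = agg.argsort(axis=0).argsort(axis=0) + 1
    # Accumulate the N ranks of every individual: the smaller s_sort, the better.
    return ranks.sum(axis=1)

# Example with 5 individuals and 3 weight vectors:
rng = np.random.default_rng(1)
s_sort = cumulative_ranking(rng.random((5, 3)))
best = int(np.argmin(s_sort))                     # smallest s_sort -> best individual
```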

Figure 1 displays an example of handling the two-objective DTLZ1 to further illustrate the effectiveness of the above fitness assessment approach based on the cumulative ranking value. The final distribution of the solutions shows that the individuals with the five smallest cumulative ranking values are distributed close to the true Pareto front (PF); the larger the cumulative ranking value of an individual, the farther it is from the true PF. This finding shows that the proposed individual fitness assessment approach based on the cumulative ranking value is effective.

Figure 1
figure 1

Principle and effect of the novel individual fitness assessment approach based on cumulative ranking value.

Mechanism of updating global memory and local memory

Decomposition-based MaOEAs decompose a many-objective optimization problem into a number of single-objective sub-problems via an evenly spread set of weight vectors. Each sub-problem is defined by a weight vector, and if two weight vectors are adjacent to each other, the optimal solutions of the sub-problems associated with them are also very close. Therefore, as evolution proceeds, the individuals associated with neighboring weight vectors become similar to some extent. Each individual and its neighboring individuals thus naturally form a region, so dividing the population into regions as in FF is not necessary. However, a new mechanism must be proposed to compose and update the local memory and global memory in MaOPs. Based on the above analysis, we propose the following mechanism based on individual cumulative ranking values to compose and update the local memory and global memory.

  (1)

    Mechanism of composing and updating global memory

    At each iteration, the MGlobal individuals with the smallest s_sort values in the current population are selected and stored in the global memory directly.

  (2)

    Mechanism of composing and updating local memory

    Unlike FF, in MOIFF a local memory is assigned to each individual rather than to each region. That is, the number of local memories is equal to the number of individuals. The local memory of each individual is updated with the help of its neighbor population (shown in Fig. 2) as follows (a code sketch of this mechanism is given after the description of Fig. 2). First, for each individual Xi, the weight vector associated with it is determined, and then the T weight vectors closest to this associated weight vector are selected as the neighbor weight vectors. Second, the individuals associated with each neighbor weight vector are identified to form the neighbor population of Xi. Finally, the MLocal individuals with the smallest cumulative ranking values in the neighbor population of Xi are selected directly as the local memory of Xi.

Figure 2
figure 2

Schematic of the neighbor population.

In Fig. 2, the dotted line represents the true PF; w1, w2, and w3 represent different weight vectors; and x1, x2, x3, and x4 represent four individuals in the current population, where x1 is associated with weight vector w1, x2 and x3 with weight vector w2, and x4 with weight vector w3. For individual x1, w2 and w3 are the neighbor weight vectors of its associated weight vector w1. Since x2 and x3 are associated with w2 and x4 is associated with w3, individuals x2, x3, and x4 compose the neighbor population of individual x1.
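A minimal sketch of both mechanisms is given below, reusing the s_sort values computed earlier; the array names (`assoc` for the index of each individual's associated weight vector, `weights` for the weight-vector set) and the choice to exclude a weight vector from its own neighborhood are our assumptions, not details taken from the paper.

```python
import numpy as np

def update_memories(pop, s_sort, assoc, weights, m_global=5, m_local=3, T=5):
    """Compose the global memory and the per-individual local memories.

    pop:     (NP x D) decision vectors
    s_sort:  (NP,) cumulative ranking values (smaller is better)
    assoc:   (NP,) index of the weight vector each individual is associated with
    weights: (N x M) weight vectors
    Both memories are stored best-first (smallest s_sort first).
    """
    # Global memory: the m_global individuals with the smallest s_sort values.
    global_memory = pop[np.argsort(s_sort)[:m_global]].copy()

    # Neighborhood of every weight vector: its T closest weight vectors
    # (the vector itself is excluded here, which is an assumption).
    dist = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    neighbors = np.argsort(dist, axis=1)[:, 1:T + 1]

    local_memories = []
    for i in range(len(pop)):
        # Neighbor population of X_i: individuals associated with the neighbor
        # weight vectors of X_i's own associated weight vector.
        cand = np.flatnonzero(np.isin(assoc, neighbors[assoc[i]]))
        if cand.size == 0:                     # fallback if the neighborhood is empty
            cand = np.arange(len(pop))
        # Local memory of X_i: the m_local best neighbors by s_sort.
        best = cand[np.argsort(s_sort[cand])[:m_local]]
        local_memories.append(pop[best].copy())
    return global_memory, local_memories
```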

New individual-updating mechanism for soil optimization

To handle MaOPs, we propose a novel individual-updating mechanism for soil optimization in view of the characteristics of decomposition-based MaOEAs, described by Eq. (13). The main motivation and idea are as follows: to facilitate the convergence and promote the diversity of MOIFF, individuals with poorer convergence should learn from the excellent individuals in their neighborhood, while the other individuals should explore new locations through further communication with individuals different from themselves.

$$X_{{inew}} = \left\{ {\begin{array}{*{20}l} {\alpha \times ( - 1 + 2 \times rand) \times (X_{i} - X_{{i\_ML}} (best)) + X_{i} ,} \hfill & {s\_sort(X_{i} ) \in N/k} \hfill \\ {\beta \times rand \times (X_{i} - X_{{i\_ML}} (others)) + X_{i} ,} \hfill & {else,} \hfill \\ \end{array} } \right.$$
(13)

where Xi_ML(best) and Xi_ML(others) represent the best individual and another random individual besides the best one in Xi's local memory, respectively; and k is a constant, generally k = 4.

The above individual-updating mechanism for soil optimization has the following advantages. On the one hand, the N/k individuals with poor convergence and diversity learn randomly from the excellent individuals in their local memory, which provides an evolution direction within the neighborhood and maintains the diversity of the whole population. On the other hand, the remaining individuals learn from a random individual other than themselves, which favors the exploration of new locations and allows weight vectors without associated individuals to become associated with some individuals. Therefore, the diversity of MOIFF is improved.
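A sketch of Eq. (13) under the memory layout of the previous sketch (local memories stored best-first) is given below. Treating the condition s_sort(Xi) ∈ N/k as "Xi belongs to the N/k individuals with the largest s_sort" is our reading of the equation, and the default values of α and β are illustrative.

```python
import numpy as np

def soil_optimization_update(pop, s_sort, local_memories, alpha=0.6, beta=0.4,
                             k=4, rng=None):
    """Sketch of Eq. (13): update every individual at the soil-optimization stage."""
    rng = rng or np.random.default_rng()
    NP, _ = pop.shape
    # The N/k individuals with the largest s_sort are treated as the poorer ones
    # (our reading of the condition s_sort(X_i) in N/k).
    worst = set(np.argsort(s_sort)[-(NP // k):].tolist())
    new_pop = np.empty_like(pop)
    for i in range(NP):
        x, mem = pop[i], local_memories[i]        # memories are stored best-first
        if i in worst:
            # Poorer individuals learn from the best individual of their local memory.
            new_pop[i] = alpha * (-1.0 + 2.0 * rng.random()) * (x - mem[0]) + x
        else:
            # The others learn from another random individual of the local memory.
            other = mem[rng.integers(1, len(mem))] if len(mem) > 1 else mem[0]
            new_pop[i] = beta * rng.random() * (x - other) + x
    return new_pop
```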

A new individual-updating mechanism for soil fusion

As shown in Eq. (5), in the soil fusion stage of FF, all individuals learn only from the best individual in the global memory or in the local memory, which improves the convergence speed of FF to a certain extent. Unfortunately, this is unsuitable for decomposition-based MaOEAs. Therefore, we propose a new individual-updating method for soil fusion, described by Eqs. (14) and (15). The main motivation and idea are as follows: in the soil fusion stage, only the individuals in the global memory and the best individual in each local memory are exploited repeatedly, rather than all individuals, and minor perturbations are made around them. However, the number of these individuals is small, i.e., the evolutionary information is very limited, and each individual has carried some evolutionary information related to the convergence of its corresponding sub-problem during the iterations. Therefore, each excellent individual should be updated by minor perturbations around itself, complemented with some evolutionary information from other individuals.

$$X_{{inew}} = \left\{ {\begin{array}{*{20}l} {X_{{MG}} (random) \times randn,} \hfill & {Q > rand} \hfill \\ {X_{{i\_ML}} (best) \times randn,} \hfill & {else,} \hfill \\ \end{array} } \right.$$
(14)
$$X_{{inew,j}} = \left\{ {\begin{array}{*{20}l} {X_{{inew,j}} ,} \hfill & {CR > rand} \hfill \\ {X_{{i,j}} ,} \hfill & {else,} \hfill \\ \end{array} } \right.$$
(15)

where \(X_{MG} (random)\) represents a random individual selected from the global memory, \(X_{i\_ML} (best)\) represents the best individual stored in Xi's local memory, and CR is the crossover probability, generally \(CR \in [0.6,0.8]\).
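A companion sketch of Eqs. (14) and (15) under the same data layout follows; the default value of Q and the use of per-dimension random numbers for the randn perturbation are illustrative assumptions.

```python
import numpy as np

def soil_fusion_update(pop, global_memory, local_memories, Q=0.5, CR=0.7, rng=None):
    """Sketch of Eqs. (14)-(15): perturb an excellent individual, then cross it with X_i."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # Eq. (14): with probability Q, perturb a random global-memory individual;
        # otherwise perturb the best individual of X_i's local memory.
        if Q > rng.random():
            base = global_memory[rng.integers(len(global_memory))]
        else:
            base = local_memories[i][0]
        trial = base * rng.normal(size=D)         # randn-style perturbation (per dimension)
        # Eq. (15): dimension-wise crossover between the trial vector and X_i.
        keep = CR > rng.random(D)
        new_pop[i] = np.where(keep, trial, pop[i])
    return new_pop
```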

Adaptive environmental selection

As described in Algorithm 2, dual aggregation functions-based environmental selection in DECAL consists of the two following steps. Step 1: for each weight vector, select the individual with the best VOL function value and the individual with the best KNEE function value from all individuals associated with it. Step 2: for each weight vector, randomly select one of the individuals identified in Step 1 as an individual of the offspring population. Obviously, some weight vectors may not be associated with any individual; therefore, the number of individuals in the offspring population may be smaller than the initial population size, and satisfactory diversity is hard to obtain because holes may appear in the PF. In addition, the random selection in Step 2 is blind to a certain degree, which does not satisfy the requirements of convergence and diversity of many-objective optimization algorithms at different stages of evolution.

To address the above issues, this article proposes a new adaptive environmental selection, as shown in Algorithm 4.

figure d

As shown in Algorithm 4, the adaptive environmental selection proposed in this paper has the following advantages. First, when the size of the offspring population is less than the initially set number of weight vectors, additional individuals are selected into the offspring population, and vice versa. As a result, the size of the offspring population obtained by adaptive environmental selection is equal to the initially set number of weight vectors, which further guarantees that the PF is evenly distributed. Second, at the beginning of the iterations, the offspring population obtained by adaptive environmental selection contains many individuals with better VOL fitness, while at the end of the iterations it contains many individuals with better KNEE fitness. This not only avoids the blindness of random selection but also satisfies the requirements of convergence and diversity of many-objective optimization algorithms at different stages of evolution.
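Algorithm 4 is not reproduced here, but the behavior described above can be approximated by the following rough sketch; the linear decay of the probability of preferring the VOL-best candidate and the padding rule based on s_sort are our own simplifications, so the actual selection in Algorithm 4 may differ in detail.

```python
import numpy as np

def adaptive_environmental_selection(pop, vol, knee, s_sort, assoc, n_weights,
                                     gen, max_gen, rng=None):
    """Rough sketch of the adaptive environmental selection described above.

    vol, knee: (NP,) VOL and KNEE values of each individual on its associated
    weight vector (smaller assumed better); assoc: associated weight-vector index.
    Early generations favour the VOL-best candidate, late generations the
    KNEE-best one, and the offspring population is padded to n_weights members.
    """
    rng = rng or np.random.default_rng()
    p_vol = 1.0 - gen / max_gen              # probability of preferring convergence (VOL)
    chosen = []
    for w in range(n_weights):
        cand = np.flatnonzero(assoc == w)
        if cand.size == 0:                   # this weight vector has no associated individual
            continue
        if p_vol > rng.random():
            chosen.append(int(cand[np.argmin(vol[cand])]))
        else:
            chosen.append(int(cand[np.argmin(knee[cand])]))
    # Pad with the best remaining individuals (by s_sort) up to n_weights.
    taken = set(chosen)
    for idx in np.argsort(s_sort):
        if len(chosen) >= n_weights:
            break
        if int(idx) not in taken:
            chosen.append(int(idx))
            taken.add(int(idx))
    return pop[np.array(chosen[:n_weights])]
```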

Hybrid subspace-full space search mode

For MaOPs, at the beginning of the iteration process, the set of all solutions is commonly far from the Pareto set (PS). If each individual performs a complete search in the D-dimensional search space, called the full-space search mode, then the individual can obtain some promising evolutionary directions from this huge search space. As the iterations progress, the individuals gradually approach the true PS. At this point, some individuals have reached the global optimum in most dimensions, whereas only a few dimensions remain far from the global optimum. In this case, if individuals still adopt the full-space search mode of FF to evolve, the good dimensions may change greatly, which may cause the population to deviate from the excellent evolution directions, so that the convergence of the algorithm may be slow or the true PF may not be obtained. Such individuals are better suited to small-scale searches. Thus, the above issues can easily be addressed if individuals do not adopt the full-space search mode but instead search along only one dimension, called the subspace search mode. In summary, in the full-space search mode, all dimensions of an individual are updated in accordance with the established search strategy in the stages of soil optimization and soil fusion, whereas in the subspace search mode, only one dimension of an individual is updated in accordance with the established search strategy, while the other dimensions remain unchanged.

The subspace search mode updates only one dimension of an individual per iteration. Thus, if individuals adopt the subspace search mode for many iterations, the convergence of the algorithm may instead slow down. Therefore, when the subspace search mode does not obtain a better PF, the full-space search mode should be used again for further search; that is, the full-space and subspace search modes should alternate. Based on the above ideas, we propose a hybrid subspace-full space search mode. The conversion condition between the subspace and full-space search modes is as follows.

Conversion condition between subspace and full-space search modes

As shown in “Novel individual fitness assessment approach based on cumulative ranking value”, for each weight vector, the associated sub-problem is solved better as the smallest s_sort value associated with it becomes smaller; the corresponding individual is denoted as pbest. As the iterations progress, the change in pbest can be used to judge whether the evolution slows down, fails to update individuals into better ones, or falls into a local optimum. Considering the above, we take the change in pbest as the conversion condition between the subspace and full-space search modes, as follows. First, initialize the parameters c = 0, c1, and c2, where c1 is used to determine whether pbest has changed before and after an iteration, and c2 determines the number of iterations during which pbest remains unchanged. Second, for each weight vector, calculate the Euclidean distance Dis(i) between the pbesti obtained at this iteration and that obtained at the last iteration, and then calculate the mean (denoted as average_Dis) of all Dis(i), as shown in Eq. (16). If average_Dis is less than c1, then c = c + 1; otherwise, c = 0. When c is equal to c2, the search mode is converted, i.e., the full-space search mode is converted to the subspace search mode, and vice versa.

$$average\_Dis = \frac{1}{N}\sum\limits_{i = 1}^{N} {\sqrt {\sum\limits_{j = 1}^{D} {(pbest_{i}^{d} (j) - pbest_{i}^{d - 1} (j))^{2} } } } ,$$
(16)

where \(pbest_{i}^{d} (j)\) represents the j-th dimension of the individual with the smallest s_sort value associated with the i-th weight vector (Wi) at the d-th iteration.
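A sketch of the conversion test built on Eq. (16) is given below; `pbest_prev` and `pbest_curr` hold, for every weight vector, the decision vector of its best associated individual at the previous and the current iteration, and the default values of c1 and c2 as well as the reset of c after a switch are our assumptions.

```python
import numpy as np

def switch_search_mode(pbest_prev, pbest_curr, c, full_space, c1=1e-4, c2=5):
    """Mode-conversion test built on Eq. (16).

    pbest_prev, pbest_curr: (N x D) matrices holding, for every weight vector,
    the decision vector of its best associated individual (smallest s_sort) at
    the previous and the current iteration.
    """
    dis = np.linalg.norm(pbest_curr - pbest_prev, axis=1)   # Dis(i) for each weight vector
    average_dis = float(dis.mean())                         # Eq. (16)
    c = c + 1 if average_dis < c1 else 0
    if c == c2:
        full_space = not full_space                         # flip between the two search modes
        c = 0                                               # resetting the counter is an assumption
    return full_space, c
```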

Description of subspace search mode

When the subspace search mode is adopted, selecting the dimension to be updated in the order of the dimensions better guarantees that each dimension is searched finely within the limited iterations, compared with selecting it randomly. Therefore, in our proposed subspace search mode, when converting to the subspace search mode, the dimension to be updated for each individual is selected in dimension order at each iteration, which is similar to the method proposed in25 and is detailed as follows: if dimension d was selected at the last iteration, dimension d + 1 is selected at this iteration. Unlike the method proposed in25, when all dimensions have been updated in the subspace search mode, all individuals adopt the full-space search mode in the next iteration, even if the conversion condition between the subspace and full-space search modes is not met.

Suppose that the flag-th dimension needs to be updated; the subspace search modes for soil optimization and soil fusion are then as follows.

At the stage of soil optimization, the subspace search mode is given by Eq. (17).

$$X_{{inew,flag}} = \left\{ {\begin{array}{*{20}l} {\alpha \times ( - 1 + 2 \times rand) \times (X_{{i,flag}} - X_{{i\_ML,flag}} (best)) + X_{{i,flag}} ,} \hfill & {s\_sort(X_{i} ) \in N/k} \hfill \\ {\beta \times rand \times (X_{{i,flag}} - X_{{i\_ML,flag}} (others)) + X_{{i,flag}} ,} \hfill & {else.} \hfill \\ \end{array} } \right.$$
(17)

At the stage of soil fusion, to guarantee that the new offspring individuals are different from themselves and to facilitate convergence, each individual performs the crossover operation between the perturbed individual and the perturbed selected excellent individual, as shown in Eq. (18).

$$X_{{inew,flag}} = \left\{ {\begin{array}{*{20}l} {X_{{MG,flag}} (random) \times randn,} \hfill & {Q > rand} \hfill \\ {X_{{i\_ML,flag}} (best) \times randn,} \hfill & {else.} \hfill \\ \end{array} } \right.$$
(18)
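For the soil-optimization case, the subspace variant of Eq. (17) only touches the flag-th dimension and then advances flag; Eq. (18) is handled analogously. A brief sketch, reusing the conventions and illustrative parameter values of the earlier soil-optimization sketch:

```python
import numpy as np

def subspace_soil_optimization(pop, s_sort, local_memories, flag,
                               alpha=0.6, beta=0.4, k=4, rng=None):
    """Sketch of Eq. (17): update only the flag-th dimension of every individual."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    worst = set(np.argsort(s_sort)[-(NP // k):].tolist())
    new_pop = pop.copy()
    for i in range(NP):
        mem, x = local_memories[i], pop[i, flag]
        if i in worst:
            new_pop[i, flag] = alpha * (-1.0 + 2.0 * rng.random()) * (x - mem[0, flag]) + x
        else:
            other = mem[rng.integers(1, len(mem))] if len(mem) > 1 else mem[0]
            new_pop[i, flag] = beta * rng.random() * (x - other[flag]) + x
    # The next subspace iteration updates dimension flag + 1; once every dimension
    # has been visited, the full-space search mode is used in the next iteration.
    return new_pop, (flag + 1) % D
```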

Complexity analysis

Given an MaOP with M objectives in a D-dimensional decision space, assume that the population size is NP. The time complexity of the proposed MOIFF is dominated by the operators in the for loop (lines 08–19 in Algorithm 3). In each iteration, determining the search mode (line 08) requires \(O(D \times NP)\) time. Calculating the assessment fitness values of the individuals (line 09) costs \(O(NP + M \times NP^{2})\) time. Updating the global memory and local memory (line 10) takes \(O(M_{Global} + M_{Local})\) time, where \(M_{Global}\) and \(M_{Local}\) represent the sizes of the global memory and the local memory, respectively. The soil optimization component (line 11) requires \(O(D \times NP)\) time. The component of line 12 costs \(O(M \times NP^{2})\) time. Adaptive environmental selection (line 14) requires \(O(NP^{2})\) time. Soil fusion takes \(O(D \times NP)\) time. Therefore, the overall time complexity of the proposed MOIFF is \(O(M \times NP^{2})\).
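Summing the per-iteration terms above makes the dominant cost explicit (assuming, as is typical in these benchmark settings, that \(D\) does not exceed \(M \times NP\) and that \(M_{Global}\) and \(M_{Local}\) are small constants):

$$O(D \times NP) + O(NP + M \times NP^{2} ) + O(M_{Global} + M_{Local} ) + O(D \times NP) + O(M \times NP^{2} ) + O(NP^{2} ) + O(D \times NP) = O(M \times NP^{2} + D \times NP) = O(M \times NP^{2} ).$$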

Results and discussion

Test problems

In the empirical studies, two well-known test suites for many-objective optimization, the DTLZ test suite26 and the WFG test suite27, are chosen. To obtain a full comparison, all test instances in DTLZ and WFG are considered in this paper. All these test problems are scalable to any number of objectives, where \(M \in \left\{ {3,5,8,10,15} \right\}\) in this paper. According to26, the number of decision variables is V = M + k − 1 for the DTLZ test suite, where M represents the number of objectives, k = 5 for DTLZ1, and k = 10 for DTLZ2–7. For all WFG test problems, the number of decision variables is set as V = r + l, where the number of position-related variables is r = 2 × (M − 1) and the number of distance-related variables is l = 20, as suggested in27.

Algorithm and parameter

To verify the validity of the MOIFF proposed in this paper, we consider four state-of-the-art many-objective evolutionary algorithms: NSGA-III28, MOEA/D23, RVEA9, and ARMOEA29. To ensure the fairness of the comparison, the population size and the termination condition of each algorithm are kept consistent on the same test instance. The population size N is equal to the number of reference vectors, which are used for different numbers of objectives and summarized in Table 2, where H1 and H2 are the numbers of divisions of the boundary layer and the inside layer, respectively. The termination condition of each algorithm is the maximum number of fitness evaluations (MFE), summarized in Table 3. In addition, the other parameters used in each algorithm follow the original literature, as shown in Table 4.

Table 2 Population size setting.
Table 3 Maximum number of fitness evaluations for different test problems.
Table 4 Setting of parameters in each algorithm.

Results on DTLZ test suite

In this section, we conduct three experiments on the DTLZ test suite to verify the validity of the proposed MOIFF: the first verifies the effectiveness of MOIFF in convergence accuracy, the second its effectiveness in convergence speed, and the third its effectiveness in convergence stability.

  (1)

    Verification of the effectiveness of MOIFF in convergence accuracy

In our empirical studies, IGD and HV are employed to evaluate the performance of each algorithm. Each algorithm is run 30 times independently on each test instance to avoid the unfavorable effect on algorithm evaluation caused by the randomness of a single run. Tables 5 and 6 show the average and standard deviation of the IGD and HV values over 30 independent runs for the five compared MaOEAs, respectively, where the best average among the five compared MaOEAs is highlighted in bold. In addition, to test the differences for statistical significance, the Wilcoxon rank-sum test with a 5% significance level is performed between MOIFF and each of the compared algorithms over each test instance. Symbols “+”, “−” and “=” indicate that the compared algorithm performs significantly better than, worse than, or equivalently to MOIFF in the corresponding column, respectively. The Friedman rank-sum test is performed on the data of Tables 5 and 6 to analyze the overall average performance of the above algorithms. The results are shown in Table 7, where “avg.rank” represents the average rank of each algorithm and “rank” is the overall rank of the five algorithms in average performance.

Table 5 Average and standard deviation of the IGD values obtained by the five algorithms on the DTLZ test suite with different numbers of objectives.
Table 6 Average and standard deviation of the HV values obtained by the five algorithms on the DTLZ test suite with different numbers of objectives.
Table 7 Friedman-test of 5 algorithms.
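For reference, the IGD indicator used throughout this section measures the mean distance from a set of points sampled on the true PF to their nearest obtained solutions (smaller is better); a standard implementation is sketched below, while HV is computed with existing tools and is not reproduced here. The names `ref_pf` and `obtained` are ours.

```python
import numpy as np

def igd(ref_pf, obtained):
    """Inverted generational distance: mean distance from each reference
    Pareto-front point to its nearest obtained solution (smaller is better)."""
    ref_pf, obtained = np.asarray(ref_pf), np.asarray(obtained)
    d = np.linalg.norm(ref_pf[:, None, :] - obtained[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Example: a coarse sample of the bi-objective DTLZ2 front versus three solutions.
t = np.linspace(0.0, np.pi / 2.0, 100)
true_pf = np.c_[np.cos(t), np.sin(t)]
print(igd(true_pf, [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]))
```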

Based on the IGD results of the DTLZ test instances shown in Table 5, the proposed MOIFF shows the best overall performance on the DTLZ2 and DTLZ4 problems compared with the four other MaOEAs. For the DTLZ1 problem, MOEA/D obtains the smallest IGD value on the fifteen-objective test instance, RVEA performs best on the three-objective test instance, and MOIFF works best on the five-, eight-, and ten-objective test instances. For DTLZ3, MOIFF is slightly worse than the four other algorithms on the three-objective test instance but has obvious advantages over them on the remaining test instances. For DTLZ5, NSGA-III achieves the best results on the three-objective test instance, whereas MOEA/D works best on the five-, eight-, ten-, and fifteen-objective test instances; here the overall performance of MOIFF is significantly outperformed by NSGA-III and ARMOEA. For DTLZ6, the relative performance of the algorithms is similar to that on the DTLZ5 instances. For DTLZ7, MOEA/D obtains the smallest IGD value on the fifteen-objective test instance, ARMOEA performs best on the five-objective test instance, and NSGA-III works best on the remaining test instances. According to Table 7, the overall performance of MOIFF over all test instances in terms of IGD is the best among the compared algorithms.

The HV results of the DTLZ test instances are listed in Table 6. MOIFF achieves the best results on all DTLZ instances except for the ten- and fifteen-objective ones. According to Table 7, compared with the competitors, the overall performance of MOIFF over all test instances in terms of HV is the best.

  (2)

    Verification of the effectiveness of MOIFF in convergence speed

To compare the complexity of each algorithm more intuitively, Table 8 records the response time of each algorithm on each test instance, where the parameters of each algorithm are set as above.

Table 8 The response time of the five algorithms on the DTLZ test suite with different numbers of objectives (/s).

As seen from Table 8, RVEA needs the shortest run time on each test instance, ARMOEA has the longest response time on each test instance, and MOIFF needs a longer run time than NSGA-III and MOEA/D on each test instance.

To compare the convergence process of each algorithm intuitively, Figs. 3, 4, 5 and 6 show the iterative process curves of each algorithm, where the parameters of each algorithm are set as above. We can find that MOIFF is excellent in convergence speed.

Figure 3
figure 3

The convergence process curves of the IGD values obtained by the five algorithms on DTLZ2.

Figure 4
figure 4

The convergence process curves of the HV values obtained by the five algorithms on DTLZ2.

Figure 5
figure 5

The convergence process curves of the IGD values obtained by the five algorithms on DTLZ3.

Figure 6
figure 6

The convergence process curves of the HV values obtained by the five algorithms on DTLZ3.

  (3)

    Verification of the effectiveness of MOIFF in convergence stability

To compare the stability of each algorithm, Table 9 shows the success rate of each algorithm, where the success rate denotes how often each algorithm reaches the preset IGD convergence accuracy over 30 independent runs. From Table 9, we can see that MOIFF obtains the best success rate on 22 out of the 35 test instances, which indicates that MOIFF is excellent in convergence stability.

Table 9 The success rate of the five algorithms on the DTLZ test suite with different numbers of objectives (/%).

To further compare the stability of each algorithm visually, Figs. 7 and 8 show the box plots of the statistical IGD and HV values obtained by the above five algorithms over 30 runs. Limited by the length of the paper, we only select the HV values obtained by the five algorithms on DTLZ2 and the IGD values obtained by the five algorithms on DTLZ3 to draw the box plots. We can see that MOIFF is the most stable of the five algorithms in terms of HV. For the five- and ten-objective DTLZ3 instances, the stability of the IGD values obtained by MOIFF is slightly inferior to that of the other algorithms, but its IGD values are significantly the best.

Figure 7
figure 7

The box plots of the HV values obtained by the five algorithms on DTLZ2 over 30 runs.

Figure 8
figure 8

The box plots of the IGD values obtained by the five algorithms on DTLZ3 over 30 runs.

Results on WFG test suite

Comparison results of MOIFF with the four other MaOEAs in terms of IGD values on the WFG test suite are listed in Table 10. In terms of IGD, MOIFF is significantly outperformed by the four other MaOEAs on the WFG4, WFG5, WFG6, WFG7, WFG8, and WFG9 instances. For WFG1, RVEA obtains the smallest IGD value on the fifteen-objective test instance, ARMOEA performs best on the eight-objective test instance, and MOIFF works best on the remaining test instances. For WFG2, RVEA performs best on the five-objective test instance, and MOIFF works best on the remaining test instances. For WFG3, except for the three-objective instance, MOIFF achieves the best performance on all test instances. MOEA/D does not obtain the best result on any WFG test instance.

Table 10 Average and standard deviation of the IGD values obtained by the five algorithms on the WFG test suite with different numbers of objectives.

The HV results of the WFG test instances are listed in Table 11. NSGA-III obtains the best HV values on the three-, five-, and fifteen-objective WFG1 instances, the five-objective WFG2 instance, and the eight-objective WFG3 instance. MOEA/D works best only on the ten-objective WFG3 instance. RVEA performs best on the fifteen-objective WFG5 instance and the ten- and fifteen-objective WFG9 instances. ARMOEA works best on the eight- and ten-objective WFG1 instances; the eight-, ten-, and fifteen-objective WFG2 instances; the fifteen-objective WFG4, WFG6, and WFG7 instances; and the eight-, ten-, and fifteen-objective WFG8 instances. Among the 45 test instances, MOIFF obtains the best HV values on 24 test instances.

Table 11 Average and standard deviation of the HV values obtained by the five algorithms on the WFG test suite with different numbers of objectives.

Figure 9 shows the parallel coordinates of the final non-dominated solutions obtained by the five algorithms on the five-objective WFG8 test instance. These plots clearly demonstrate that the PF obtained by MOIFF is close to the true PF and maintains a good distribution.

Figure 9
figure 9

Parallel coordinates of the final non-dominated solutions obtained by the five algorithms on the five-objective WFG8 test instance.

Conclusions

A novel algorithm named MOIFF is proposed in this paper for handling MaOPs, with the aim of improving comprehensive performance in terms of convergence and diversity. FF, which has excellent convergence performance, is employed as the optimization strategy of MOIFF. To handle MaOPs effectively, FF has been tailored in the following aspects. First, to distinguish the quality of each individual in MaOPs, we propose a novel individual fitness assessment approach based on the cumulative ranking value. Second, considering the characteristics of MaOPs, we propose a novel method based on the individual cumulative ranking value to constitute and update the global memory and local memory of each individual, and a hybrid search mode combining subspace search and full-space search to update individuals at the stages of soil optimization and soil fusion. In addition, we improve the dual aggregation functions-based environmental selection. Finally, the results on the DTLZ and WFG test suites show that MOIFF has excellent convergence and diversity compared with four state-of-the-art many-objective evolutionary algorithms.