Introduction

Multi-objective optimization problems (MOPs) contain two or more conflicting objectives that need to be optimized simultaneously. MOPs with more than three objectives are referred to as many-objective optimization problems (MaOPs). Due to the conflicting nature of the objectives, the optimization of an MOP results in a set of trade-off solutions instead of a single optimal solution. For better decision-making, the trade-off solutions are expected to be optimal and well spread so as to cover the entire decision range. Multi/Many-objective Optimization Evolutionary Algorithms (MOEAs), due to their population-based nature, can provide the entire set of trade-off solutions in a single run and are extensively used to solve MOPs. MOEAs achieve this with the help of three leading operators, namely mating selection, recombination, and environmental selection. Mating selection is responsible for selecting high-quality parents that form a mating pool employed to generate offspring solutions. Recombination generates quality offspring solutions by combining multiple solutions (crossover) and/or by perturbing a single solution (mutation) from the mating pool. Finally, the goal of environmental selection is to select prospective solutions from the combination of parent and offspring solutions for further evolution. As the performance of MOEAs significantly depends on environmental selection, various environmental selection operators have been proposed, based on which MOEAs are classified as Pareto-based1, Decomposition2,3 or Reference Vector-based, and Indicator-based MOEAs4. Each class of MOEAs has its own advantages and disadvantages in achieving a proper balance between convergence and diversity, depending on the number of objectives (i.e., MaOPs) and the characteristics of the problem.

Since the output of an MOEA is a set of Pareto-optimal solutions, the Pareto dominance relation naturally becomes a criterion to distinguish solutions during the evolutionary process. In Pareto-based MOEAs (PMOEAs)5,6,7,8, when the solutions are not distinguishable by Pareto dominance, a diversity criterion is employed in which diverse solutions are preferred over crowded ones. In MaOPs, as the number of objectives increases, the proportion of non-dominated solutions in a population tends to increase exponentially9, causing the Pareto dominance-based criterion to fail in distinguishing solutions. This places more emphasis on the density-based secondary selection criterion, referred to as ‘active diversity promotion’10. The ineffectiveness of Pareto dominance in providing the required selection pressure as the number of objectives increases limits the scalability of PMOEAs. In addition, as the number of objectives increases, diversity-preserving mechanisms such as crowding distance and clustering become computationally expensive. Therefore, reference vectors are employed to reduce the computational complexity of diversity maintenance11.

Unlike PMOEAs that compare individuals using two criteria (i.e., dominance relation and density), Indicator-based MOEAs (IMOEAs)12 adopt a single value, referred to as an indicator, to measure both convergence and diversity (IBEA12, ISDE+13). However, developing an indicator that balances both diversity and convergence is challenging, and it becomes more so as the number of objectives increases. Some indicators12 are biased towards convergence, while others favour diversity14. Therefore, in 14, a stochastic combination of convergence- and diversity-biased indicators is considered. The IMOEA based on hypervolume15 is effective but computationally expensive. ISDE+ is computationally efficient but fails to preserve the essential corner solutions.

In Reference Vector-based MOEAs (RV-MOEAs)16,17, the population members are guided towards the optimal Pareto Front (PF) in the directions specified by the weight or reference vectors. In general, the reference vectors are selected by sampling a uniform set of points on the hyperplane \(\sum_{i}{f}_{i}=1\) in the normalized M-objective space, referred to as the Normal Boundary Intersection (NBI) method18. In other words, it is implicitly assumed that the optimal PF is bounded by a unit simplex of reference vectors and is non-degenerate, continuous, and smooth without significant nonlinearities. However, there exist several MOPs characterized by degenerate and discontinuous PFs. As a result, several of the uniform weight vectors fail to get associated with any of the solutions and are referred to as ineffective weight vectors. In addition, it has been observed that the number of non-dominated solutions obtained by MOEA/D19, a primitive RV-MOEA, is often much smaller than the number of weight vectors because (1) multiple weight vectors can share a single good solution, and (2) all solutions are not always non-dominated. Therefore, the Pareto dominance criterion has been integrated into RV-MOEAs (NSGA-III11,20, RVEA21, MOEA/DD22, TDEA8, PMEA23). On the other hand, a set of uniform weight vectors may not be able to approximate the different sizes and shapes of PFs. In other words, the initialization of the weight vectors should depend on the shape and size of the PF, which may not be known in advance. Therefore, RV-MOEAs24,25,26,27 with weight vector adaptation during the evolution were proposed to effectively handle MOPs with regular as well as irregular PFs. Instead of adapting the weight vectors, a combination of uniform weight vectors and a secondary criterion (e.g., polar-metric23) to select the solutions corresponding to the ineffective weight vectors has also been investigated.
Motivated by the work in23, we propose a hybridized framework, referred to as HS-MOEA, that employs ISDE+ as the secondary criterion to select solutions corresponding to the ineffective weight vectors, in addition to the Pareto dominance. In other words, the aim of this study is to develop a new environmental selection strategy that benefits from the advantages of Pareto-, decomposition- and indicator-based approaches. First, Pareto dominance avoids the selection of dominated solutions. Second, weight vectors assist in the selection of well-diversified and convergent solutions in each generation. Third, if the weight vectors fail to differentiate the high-quality parent solutions, then the indicator assists the selection process by considering both convergence and diversity. The ability of ISDE+ to select a set of converged and diverse solutions from the unselected ones with respect to a set of already selected solutions is expected to aid the uniform weight vectors in achieving better convergence and diversity.

The rest of this paper is organized as follows. The second section presents the preliminaries. The third section introduces related work and motivation for the current study. The fourth section contains details of HS-MOEA. The fifth section presents experimental setup and comparison results of HS-MOEA with a number of state-of-the-art MOEAs. The last section presents the conclusions and future directions.

Preliminaries

Generally, MOP is formulated as:

$$ \begin{gathered} \min f\left( x \right) = \left( {f_{1} \left( x \right), f_{2} \left( x \right), \ldots ,f_{M} \left( x \right)} \right) \hfill \\ {\text{s.t.}}\,\,x = \left( {x_{1} , x_{2} , \ldots ,x_{D} } \right) \in {\Omega } \subset {\varvec{R}}^{{\varvec{D}}} \hfill \\ \end{gathered} $$
(1)

where \(x\) represents a \(D\)-dimensional decision vector in \(\Omega \), and \(M\) is the number of objectives.

In multi-objective optimization, the following concepts have been well defined and widely applied.

  1.

    Pareto Dominance:

    For any two solutions \(x\) and \(y\), \(x\) is said to dominate \(y,\) denoted as \(x\prec y\) iff \({f}_{i}(x)\le {f}_{i}(y)\) \(\forall \) \(i\in \{\mathrm{1,2},\dots ,M\}\) and \({f}_{j}(x)<{f}_{j}(y)\) for at least one \(j\in \{\mathrm{1,2},\dots ,M\}\).

  2.

    Pareto Optimality:

    A solution \({\mathrm{x}}^{*}\) is said to be Pareto-optimal if there is no other solution \(\mathrm{x}\in\Omega \) such that \(\mathrm{x}\prec {\mathrm{x}}^{*}\).

  3.

    Pareto-optimal Set (PS):

    It is the set of all Pareto-optimal solutions and is defined as \(\mathrm{PS}=\{x\in\Omega \mid x\ \mathrm{is\ Pareto\ optimal}\}\).

  4.

    Pareto-optimal Front (PF):

    It is the image of the Pareto-optimal set in the objective space and is defined as \(PF=\left\{f(x)\in {R}^{M}\right|x\in PS\}\).

  5.

    Ideal point:

    Ideal point is a vector \({\mathrm{z}}^{*}=({\mathrm{z}}_{1}^{*},{\mathrm{z}}_{2}^{*},\dots ,{\mathrm{z}}_{\mathrm{M}}^{*} )\), which is the infimum of \({f}_{i}\) for each \(\mathrm{i}\in \{\mathrm{1,2},\dots ,\mathrm{M}\}\) in the PF.

  6.

    Nadir point:

    Nadir point is a vector \({\mathrm{z}}^{\mathrm{nad}}=({\mathrm{z}}_{1}^{\mathrm{nad}},{\mathrm{z}}_{2}^{\mathrm{nad}},\dots ,{\mathrm{z}}_{\mathrm{M}}^{\mathrm{nad}} )\), which is the supremum of \({f}_{i}\) for each \(\mathrm{i}\in \{\mathrm{1,2},\dots ,\mathrm{M}\}\) in the PF.

  7.

    Weight vector:

    A weight vector \(w\) is an \(M\)-dimensional vector \(w=({w}_{1},{w}_{2},\dots ,{w}_{M})\) such that \({\sum }_{i=1}^{M}{w}_{i}=1\) and \({w}_{i}\ge 0\ \forall i\in \{\mathrm{1,2},\dots ,M\}\). The Normal Boundary Intersection (NBI) method is a systematic approach that places points on a normalized hyper-plane, i.e., on an \((M-1)\)-dimensional unit simplex. It generates \(\left(\genfrac{}{}{0pt}{}{H+M-1}{M-1}\right)\) weight vectors, where \(M\) is the number of objectives of the problem and \(H\) is the number of divisions considered along each objective coordinate. Moreover, all the generated points are equally and uniformly distributed on that lattice structure.

  8.

    Penalty-based Boundary Intersection (PBI) Operator:

    Let \(\mathrm{w}=({\mathrm{w}}_{1},{\mathrm{w}}_{2},\dots ,{\mathrm{w}}_{\mathrm{M}})\) be a direction vector, where \(\sum_{\mathrm{i}}{\mathrm{w}}_{\mathrm{i}}=1\). The PBI operator is defined as:

    $$ \begin{aligned} g^{pbi} \left( {x,w,z^{*} } \right) &= d_{1} + \theta d_{2} \hfill \\ {\text{where}},\,d_{1} &= \left( {F\left( x \right) - z^{*} } \right)^{T} w/\|w\| \hfill \\ d_{2} &= \| F\left( x \right) - z^{*} - \left( {d_{1} /\|w\|} \right)w\| \hfill \\ \end{aligned} $$
    (2)

    where \(\| \cdot \| \) denotes \({L}_{2}\) norm and \(\theta \) is a penalty parameter.
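The preliminaries above can be sketched in code. The following NumPy snippet implements the Pareto dominance check of Definition 1 and the PBI operator of Eq. (2); the function names `dominates` and `pbi`, and the default penalty \(\theta=5\), are our own illustrative choices, not part of any referenced implementation.

```python
import numpy as np

def dominates(fx, fy):
    """Pareto dominance: fx dominates fy iff fx is no worse in every
    objective and strictly better in at least one (Definition 1)."""
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def pbi(fx, w, z_star, theta=5.0):
    """PBI scalarization of Eq. (2): convergence distance d1 along the
    direction vector w plus a penalized perpendicular distance d2."""
    diff = fx - z_star
    d1 = diff @ w / np.linalg.norm(w)
    d2 = np.linalg.norm(diff - (d1 / np.linalg.norm(w)) * w)
    return d1 + theta * d2
```

A smaller \(g^{pbi}\) value indicates a solution that is both closer to the ideal point along \(w\) and better aligned with \(w\).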

Related work and motivation

In RV-MOEAs, the objective vector corresponding to each solution is converted into a scalar value based on a series of uniformly distributed weight vectors. To maintain population diversity during the evolution, RV-MOEAs assign the same search space preference to each direction vector. However, the performance of RV-MOEAs strongly depends on the shape of the PF28, which is not known in advance. In addition, the size and shape of the PF vary over generations. Therefore, it is essential to adapt the weight vectors during the evolution process or employ a secondary selection criterion to aid the uniform weight vectors.

In the literature, attempts have been made to improve the performance of RV-MOEAs on MaOPs with both regular and irregular PFs. In NSGA-III11, significant changes to the selection operator were made compared to its predecessor NSGA-II, where the diversity promotion among population members is achieved by a set of well-spread reference points. The employment of reference vectors improves the scalability of the algorithm by reducing the computational complexity that arises due to the increase in the number of objectives. NSGA-III was further modified (referred to as A-NSGA-III20), where ineffective reference points are re-allocated based on the distribution and association of the solutions. Ineffective reference vectors are the ones that do not have any population members associated with them. RVEA21 employs a scalarization approach, termed the Angle Penalized Distance, that assesses convergence by calculating the distance between the candidate solution and the weight vector. In27, a weight vector adaptation strategy was employed to enhance the performance of RVEA. TDEA8 enhances the convergence of NSGA-III in high-dimensional objective spaces by (1) incorporating a new dominance scheme and (2) employing the aggregation function-based fitness evaluation scheme of MOEA/D. In the Polar Metric based Evolutionary Algorithm (PMEA)23, a metric inspired by the PBI operator, referred to as the polar-metric (p-metric), is proposed to measure convergence and diversity. During environmental selection, a weight vector adjustment strategy is employed to select the well-diversified solutions. The environmental selection of PMEA is demonstrated in Fig. 1, where xi and wi represent the solutions and weight vectors, respectively. The values on the perpendicular lines represent the p-metric values of the solutions with respect to the corresponding weight vectors. As in Fig. 1a, PMEA assigns \({x}_{1},{x}_{4}, {x}_{5}\) to \({w}_{2},{w}_{3}, {w}_{1}\), respectively, and selects them for further evolution. Solutions \({x}_{2}\) and \({x}_{3}\) are not selected. In addition, \({w}_{4}\) does not have any associated solutions and is considered ineffective. The ineffective weight vector (\({w}_{4}\)) is re-initialized (\({w}_{5}\)) to pass through the nearest non-selected solution (x3), as shown in Fig. 1b. After the re-initialization, solution x3, associated with weight vector \({w}_{5}\), is selected. In other words, starting with uniform weight vectors, PMEA first selects solutions based on the p-metric. Later, a weight vector adaptation is made to select the rest of the solutions necessary to create the population for the next generation. However, during the next generation, the weight vectors are uniformly re-initialized before evaluating the p-metric. From Fig. 1b, it can be noticed that the weight vectors are not well diversified, which can affect the population diversity during the evolution process. In other words, adapting weights to select the solutions in each generation based on the distribution of the population is not appropriate.

Figure 1
figure 1

Selection of PMEA.

Motivated by these observations, we propose to combine the advantages of Pareto, decomposition, and indicator methods in a single framework. In the proposed framework, the uniform weight vectors are assisted by ISDE+ indicator, which serves as a secondary criterion.

Hybrid selection based multi/many-objective evolutionary algorithm (HS-MOEA)

In this section, Hybrid Selection based Multi/Many-Objective Evolutionary Algorithm (HS-MOEA) is introduced. The different steps of HS-MOEA are detailed below.

Initialization

A set of uniform weight vectors (\(w\)) is generated using the NBI method; subsequently, a population of size \(N\) (\(N=\left|w\right|\)) is initialized within the permissible boundaries, as shown in Line 01 of Algorithm 1.
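The initialization step can be sketched as follows: a simplex-lattice (NBI/Das–Dennis) weight construction plus a random population. The helper name `das_dennis` and the decision-space dimension \(D=10\) are illustrative assumptions, not from the paper.

```python
import numpy as np
from itertools import combinations

def das_dennis(M, H):
    """Simplex-lattice (NBI / Das-Dennis) weights: C(H+M-1, M-1)
    vectors with non-negative components summing to 1."""
    W = []
    for c in combinations(range(H + M - 1), M - 1):
        cuts = [-1] + list(c) + [H + M - 1]
        W.append([(cuts[i + 1] - cuts[i] - 1) / H for i in range(M)])
    return np.array(W)

W = das_dennis(M=3, H=4)                      # 15 weight vectors for M = 3
# population of N = |W| random decision vectors in [0, 1]^D (D = 10 here)
pop = np.random.uniform(0.0, 1.0, size=(len(W), 10))
```

For \(M=3\) and \(H=4\) this yields \(\binom{6}{2}=15\) vectors, matching the formula in the preliminaries.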

Mating selection and offspring generation

In HS-MOEA, mating selection is performed using the ISDE+ indicator. The indicator value of each solution in the population \({P}_{t}\), corresponding to the \(t\)th generation, is evaluated using Eq. (3) (Line 04, Algorithm 1).

$$ {\text{I}}_{{{\text{SDE}}}}^{ + } \left( { x} \right) = \begin{array}{*{20}c} {min} \\ {y \in P_{SB\left( x \right)} , x \ne y} \\ \end{array} \left\{ {dist\left( {F\left( x \right),F\left( {y_{1}^{^{\prime}} } \right)} \right), dist\left( {F\left( x \right),F\left( {y_{2}^{^{\prime}} } \right)} \right), \ldots ,dist\left( {F\left( x \right),F\left( {y_{{N_{SB} \left( x \right)}}^{^{\prime}} } \right)} \right)} \right\} $$
(3)

where \({P}_{SB(x)}\subseteq {P}_{t}\) is the set of solutions \(y\) satisfying \(SB\left(y\right)<SB\left(x\right)\), \(SB\) represents the sum of normalized objectives, and \({N}_{SB(x)}\) represents the size of \({P}_{SB(x)}\).

$$ F\left( {y_{i}^{^{\prime}} \left( j \right)} \right) = \left\{ {\begin{array}{*{20}c} {F\left( {x\left( j \right)} \right)} & { if F\left( {y_{i} \left( j \right)} \right) < F\left( {x\left( j \right)} \right)} \\ {F\left( {y_{i} \left( j \right)} \right)} & { otherwise} \\ \end{array} } \right.\quad {\text{for}}\,\,j = 1,2, \ldots ,M $$
(4)

The solutions with the highest ISDE+ values are considered better. Binary tournament selection is then performed based on the ISDE+ values to generate the mating pool (Line 05, Algorithm 1). After the mating selection, the offspring population is generated using the variation operators, as shown in Line 06, Algorithm 1. In the current work, the mutation and crossover operators employed are Polynomial Mutation (PM)2 and Simulated Binary Crossover (SBX)2.
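Eqs. (3) and (4) can be sketched as follows. This is an illustrative NumPy version, not the authors' implementation; in particular, assigning the best-converged solution (no better-SB neighbours) the value infinity is our assumption about "the highest possible indicator value".

```python
import numpy as np

def isde_plus(F):
    """ISDE+ per Eqs. (3)-(4). F is an (N, M) array of normalized
    objective vectors; returns one fitness value per solution."""
    N, M = F.shape
    sb = F.sum(axis=1)                  # SB: sum of normalized objectives
    order = np.argsort(sb)              # ascending convergence rank
    fitness = np.full(N, np.inf)        # best-converged gets the largest value
    for rank, i in enumerate(order):
        if rank == 0:
            continue                    # no better-converged neighbours
        better = order[:rank]           # solutions y with SB(y) < SB(x)
        # Eq. (4): shift y onto x in the coordinates where y is better
        shifted = np.maximum(F[better], F[i])
        # Eq. (3): minimum distance to the shifted neighbours
        fitness[i] = np.linalg.norm(shifted - F[i], axis=1).min()
    return fitness
```

A solution dominated by a better-converged one obtains a shifted distance of zero (the worst fitness), so ISDE+ penalizes both poor convergence and crowding.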

Normalization

Normalization (Line 07, Algorithm 1) is an essential tool that maps the unscaled search space to a scaled one so as to handle badly scaled objectives. In HS-MOEA, the normalization of the jth population member is given in Eq. (5).

$$ F_{i}^{j} = \frac{{f_{i}^{j} - z_{i}^{*} }}{{z_{i}^{nad} - z_{i}^{*} }}, \forall i = 1,2, \ldots ,M $$
(5)

where \({z}_{i}^{*}\) and \({z}_{i}^{nad}\) are the lowest and highest values of the \({i}^{th}\) objective function, respectively.
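Eq. (5) amounts to a per-objective min–max rescaling; a minimal sketch, assuming the ideal and nadir points are estimated from the current population:

```python
import numpy as np

def normalize(objs):
    """Eq. (5): rescale each objective to [0, 1] using the population's
    minimum (ideal estimate z*) and maximum (nadir estimate z^nad)."""
    z_star = objs.min(axis=0)
    z_nad = objs.max(axis=0)
    return (objs - z_star) / (z_nad - z_star)
```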

figure a
figure b
figure c

Environmental Selection

The environmental selection selects a set of N converged but diversified solutions from a combined population (R) of size \(2N\) (Line 08, Algorithm 1). The working mechanism is detailed in Algorithm 2. Non-dominated sorting6 (Lines 01 ~ 02, Algorithm 2) is performed to classify the population \(R\) into several fronts (Fr) and identify the population \({P}^{ND}=\bigcup_{l=1:L}{Fr}_{l}\) (where \(L\) satisfies \(\left|\bigcup_{l=1:L}{Fr}_{l}\right|\ge N\) and \(\left|\bigcup_{l=1:L-1}{Fr}_{l}\right|<N)\).

Association

In HS-MOEA, the association procedure (Line 04, Algorithm 2) is performed in the normalized objective space, where the ideal point \({z}^{*}\) is shifted to the origin. At each generation, the individuals of the \({P}_{t}^{ND}\) population are associated with the reference vectors (\(w\)). For the association operator, the norm of each solution x in \({P}_{t}^{ND}\) is evaluated as:

$$ norm\left( {F\left( x \right)} \right) = \sqrt {\mathop \sum \limits_{i = 1}^{M} F_{i} \left( x \right)^{2} } $$
(6)

Then, the angle between \(F(x)\) and \({w}^{i}\) is defined as:

$$ angle\left( {F\left( x \right),w^{i} } \right) = {\text{arccos}}\left| {\frac{{F\left( x \right) \circ w^{i} }}{{norm\left( {F\left( x \right)} \right) \cdot norm\left( {w^{i} } \right)}}} \right| $$
(7)

where \(F\left(x\right)\circ {w}^{i}=\sum_{j=1}^{M}{F}_{j}(x)\cdot {w}_{j}^{i}\) is the dot product of \(F\left(x\right)=\left({F}_{1}\left(x\right), {F}_{2}\left(x\right),\dots , {F}_{M}\left(x\right)\right)\) and \({w}^{i}=({w}_{1}^{i}, {w}_{2}^{i},\dots , {w}_{M}^{i})\).

During association, each solution is assigned to its closest reference vector. \({K}_{i}\) denotes the set of solutions associated with the weight vector \({w}^{i}\) during the association process; its size can range from \(0\) to \(N\). Figure 2 represents the association operator, where the filled circles are the solutions associated with the corresponding nearest weight vector.

Figure 2
figure 2

Illustration of the association operator using an acute angle.
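The association of Eqs. (6)–(7) can be sketched as below; the helper name `associate` is our own, and the input is assumed to be already normalized with the ideal point at the origin.

```python
import numpy as np

def associate(F, W):
    """Eqs. (6)-(7): assign each normalized solution to the weight
    vector forming the smallest acute angle with it."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)   # Eq. (6) norms
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = np.abs(Fn @ Wn.T)        # |cos(angle)| per solution/weight pair
    return np.argmax(cos, axis=1)  # smallest angle = largest |cos|
```

Maximizing \(|\cos|\) is equivalent to minimizing the acute angle of Eq. (7), so no explicit `arccos` is needed.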

Two step selection

For each \(x\) and \(y\) \(\in \) \({K}_{i}\), \(x\) is preferred to \(y\) iff \(PBI\left(x,{w}^{i}\right)<PBI(y,{w}^{i})\) where \(PBI\left(x,w\right)={d}_{1}+\theta {d}_{2}\), \({d}_{2}=\) perpendicular distance of \(x\) over \(w\), \({d}_{1}=\) distance between the origin and the projected point of \(x\) over \(w\), \(\theta =\) penalty parameter. PBI refers to the penalty-based boundary intersection16.

First, one solution is selected from each non-empty \({K}_{i}\) \(\forall i=\mathrm{1,2},\dots ,N\) based on its \(PBI\) value and saved in \(S\) (referred to as the set of selected solutions). The remaining solutions are stored in \(U\) (referred to as the set of unselected solutions). If the size of \(S\) is \(N\), then the whole set \(S\) is declared as the parent population of the next generation (Line 06, Algorithm 2); otherwise, a second round of selection is performed. In the second round, \((N-\left|S\right|)\) solutions are to be selected from \(U\) using the ISDE+ indicator. For each \(x\in U\), the ISDE+ value, referred to as \({I}_{SDE}^{U+}\), is calculated (Line 08, Algorithm 2). To evaluate the indicator, the solutions in \(U\) are sorted in the ascending order of the normalized sum of objectives (SB) (Line 09, Algorithm 2). The solution with the least SB is assigned the highest possible indicator value of one. To evaluate the \({I}_{SDE}^{U+}\) of a given solution \(x\in U\), the solutions in \(U\) that are better in convergence (i.e., with smaller SB) than \(x\), together with the solutions in set \(S\), are shifted as in Eq. (8). Then, the \((N-\left|S\right|)\) solutions from \(U\) with the largest \({I}_{SDE}^{U+}\) are selected (P1) (Line 10, Algorithm 2) and added to \(S\) (Line 11, Algorithm 2), which becomes the population (P) for the next generation.

$$ I_{SDE}^{U + } \left( x \right) = \begin{array}{*{20}c} {\min } \\ {y \in A, x \ne y} \\ \end{array} \left\{ {dist\left( {F\left( x \right),F\left( {y_{1}^{^{\prime}} } \right)} \right), dist\left( {F\left( x \right),F\left( {y_{2}^{^{\prime}} } \right)} \right), \ldots ,dist\left( {F\left( x \right),F\left( {y_{\left| A \right|}^{^{\prime}} } \right)} \right)} \right\} $$
(8)

where \(A={U}_{SB}\left(x\right)\cup S\), \({U}_{SB}\left(x\right)\subseteq U\) is the set of solutions \(y\in U\) such that \(SB\left(y\right)<SB\left(x\right)\), and \(S\) and \(U\) represent the sets of solutions selected and unselected, respectively, by the weight vector association.

In other words, the use of the ISDE+ indicator enables the selection of converged yet diverse solutions with respect to the already selected solutions (S). First, Pareto dominance avoids the selection of dominated solutions. Second, weight vectors assist in the selection of well-diversified and convergent solutions in each generation. Third, if the weight vectors fail to differentiate the high-quality parent solutions, then the indicator assists the selection process by considering both convergence and diversity. The advantage of employing ISDE+ in the second step is that it selects solutions while accounting for those already selected through the weight vector association. In other words, HS-MOEA benefits from both the reference vectors and the indicator.
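The two-step selection can be condensed into the following sketch. It assumes a normalized, already non-dominated input with the ideal point at the origin, uses \(\theta=5\) for PBI, and assigns the least-SB solution the largest possible value via `np.inf`; the function name and these choices are illustrative, not the authors' reference implementation.

```python
import numpy as np

def two_step_selection(F, W, N, theta=5.0):
    """Sketch of HS-MOEA's two-step environmental selection on an
    (n, M) array F of normalized non-dominated objective vectors."""
    # association by acute angle (Eq. 7)
    cos = np.abs((F / np.linalg.norm(F, axis=1, keepdims=True))
                 @ (W / np.linalg.norm(W, axis=1, keepdims=True)).T)
    assoc = np.argmax(cos, axis=1)

    S = []
    for i in range(len(W)):                 # step 1: best PBI per cluster
        members = np.where(assoc == i)[0]
        if members.size == 0:
            continue                        # ineffective weight vector
        wn = W[i] / np.linalg.norm(W[i])
        d1 = F[members] @ wn
        d2 = np.linalg.norm(F[members] - np.outer(d1, wn), axis=1)
        S.append(members[np.argmin(d1 + theta * d2)])

    U = [j for j in range(len(F)) if j not in S]
    if len(S) < N:                          # step 2: fill by I_SDE^U+ (Eq. 8)
        vals = {}
        for x in U:
            refs = [y for y in U if F[y].sum() < F[x].sum()] + S
            vals[x] = (min(np.linalg.norm(np.maximum(F[y], F[x]) - F[x])
                           for y in refs)
                       if refs else np.inf)  # least-SB solution ranks first
        S += sorted(U, key=lambda x: -vals[x])[:N - len(S)]
    return sorted(S)
```

The second round only runs when some weight vectors are ineffective, which is precisely the case the ISDE+ criterion is meant to cover.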

Computational complexity

The non-dominated sorting (Line 01, Algorithm 2) of a population of size \(2N\) having M-dimensional objective vectors requires \(O(Nlo{g}^{M-2}N)\) computations11. All operations in Algorithm 2 for associating a maximum of \(2N\) population members with \((H\approx N)\) reference points require \(O(MNH)\). Thereafter, calculating the ISDE+ values of at most \(N\) population members requires \(O({N}^{2})\). Therefore, the computational complexity of one generation of HS-MOEA is \(O({N}^{2}lo{g}^{M-2}N)\) or \(O(M{N}^{2})\), whichever is larger.

Experimental setup, results and discussion

Experiments were conducted on 16 scalable test problems from the DTLZ29 and WFG30 test suites, comprising 7 and 9 problems, respectively. For each test problem, 2-, 4-, 6-, 8- and 10-objective instances are considered. The parameter values employed are presented in Table 1. On each instance, 30 independent runs were performed for each algorithm on a PC with a 3.30 GHz Intel (R) Core (TM) i7-8700 CPU and Windows 10 Pro 64-bit operating system with 16 GB RAM. As a stopping criterion, the maximum number of generations for DTLZ1 and WFG2 is set to 700, and for DTLZ3 and WFG1 it is set to 1000. For the other problems (DTLZ2, DTLZ4–7, and WFG3–9), it is set to 250. All algorithms considered employ a population size (N) of 100, 165, 182, 240 and 275 for 2-, 4-, 6-, 8- and 10-objectives, respectively13. Simulated binary crossover and polynomial mutation with distribution indices and probabilities set to \({n}_{m}=20\), \({n}_{c}=20\), \({p}_{c}=1.0\) and \({p}_{m}=1/D\), respectively, are employed.

Table 1 Parameter settings for DTLZ and WFG test suites.

In order to compare the efficiency of HS-MOEA with state-of-the-art algorithms such as PMEA, TDEA, RVEA, NSGAIII, MOEA/D, ISDE+, IBEA and ONEBYONE31, a quantitative indicator, namely HyperVolume (HV), is employed. A larger value of HV implies the superiority of the algorithm. The experimental results (mean and standard deviation values of normalized HV) on the benchmark suites are presented in Table 2. In addition, statistical tests (t-tests) at a 5% significance level were conducted to compare the significance of the difference between the mean metric values yielded by HS-MOEA and the state-of-the-art algorithms. The signs “+”, “−” and “≈” against the HV values indicate that HS-MOEA is statistically “better”, “worse” and “comparable” with respect to the corresponding algorithm, respectively. The last row of Table 2 represents the overall performance of HS-MOEA in terms of the number of instances in which it is better (Win-W), comparable (Tie-T) and worse (Loss-L) with respect to the corresponding algorithm.

Table 2 Comparison of HV and statistical results on DTLZ and WFG test problems (“+”—win, “≈”—TIE, “−”—loss).

As shown in Table 2 and Fig. 3, HS-MOEA significantly outperforms or is comparable to PMEA, TDEA, RVEA, NSGAIII, MOEA/D, MOEA/DD, ISDE+ and ONEBYONE in 72/80 ≈ 90%, 62/80 ≈ 77.5%, 67/80 ≈ 83.75%, 60/80 ≈ 75%, 65/80 ≈ 81.25%, 65/80 ≈ 81.25%, 47/80 ≈ 58.75% and 64/80 ≈ 80% of the cases, respectively. It can be observed that HS-MOEA consistently performs better than or similar to PMEA. Similarly, HS-MOEA outperforms ONEBYONE on most of the problems. However, in the case of WFG1 and WFG2, ONEBYONE performs better than HS-MOEA. On the other hand, TDEA, RVEA and NSGA-III perform better than HS-MOEA on the DTLZ1, WFG1 and WFG2 test problems. However, these improvements seem minimal compared to the advantages HS-MOEA achieves on other problems such as DTLZ5, DTLZ7 and WFG3 ~ 9. MOEA/D and MOEA/DD seem to perform similarly to HS-MOEA, performing slightly better on DTLZ1. On WFG3, MOEA/D performs better than all the state-of-the-art algorithms, including HS-MOEA. However, the degraded performance of MOEA/D on the remaining 15 problems seems to outweigh its superior performance on WFG3.

Figure 3
figure 3

Performance comparison of HS-MOEA with the state-of-the-art algorithms.

Among the state-of-the-art algorithms, ISDE+ exhibits competitive performance compared to HS-MOEA. The superiority of ISDE+ over HS-MOEA can be seen on DTLZ1, WFG1, WFG6 and WFG9. The performance improvement is significant; however, HS-MOEA remains close. HS-MOEA has a slight advantage over ISDE+ on problems such as DTLZ7 and WFG2 that have disconnected PFs.

To demonstrate the effectiveness of HS-MOEA, a more detailed analysis corresponding to DTLZ1 and DTLZ7 is presented. The GD and Delta indicators, which measure the convergence and diversity performance of MOEAs, are summarized in Tables 3 and 4, respectively. Lower values of both GD and Delta indicate the superiority of the algorithm. The convergence (GD) of HS-MOEA is consistently better than that of PMEA. However, the convergence of HS-MOEA lags behind ISDE+ on DTLZ1, which was designed to test the convergence performance of MOEAs. However, the diversity (Delta) of HS-MOEA is consistently better than that of ISDE+. On the other hand, HS-MOEA fails to maintain the diversity with respect to PMEA on DTLZ7, which has a discontinuous PF. In other words, the convergence of HS-MOEA is better than or comparable to PMEA, while its diversity is better than or comparable to ISDE+.

Table 3 Comparison of GD.
Table 4 Comparison of delta.

The improved performance of HS-MOEA arises because it benefits from the best qualities of each component: (1) Pareto dominance's ability to eliminate low-quality solutions, (2) the uniform weight vectors' maintenance of diversity, and (3) the ISDE+ indicator's promotion of both convergence and diversity on MOPs with irregular or discontinuous PFs. Therefore, the performance of HS-MOEA is competitive or better in most cases. The significance is more visible on problems with discontinuous PFs such as DTLZ7.

Figures 4 and 5 present the parallel coordinates that describe the distribution of the solutions corresponding to PMEA, ISDE+ and HS-MOEA on 8-objective instances of DTLZ1 and DTLZ7. From the figures, it is evident that HS-MOEA is able to provide a well converged and diverse set of solutions compared to PMEA on both the DTLZ1 and DTLZ7 instances. However, the parallel coordinates of ISDE+ and proposed HS-MOEA seem nearly identical on both DTLZ1 and DTLZ7. From the results in Table 4, it is evident that ISDE+ slightly outperforms HS-MOEA on DTLZ1, which has continuous linear PF, while HS-MOEA performs better on DTLZ7 that has discontinuities in the PF.

Figure 4
figure 4

Parallel coordinates of the best solution set of PMEA, ISDE+ and HS-MOEA on 8-objective instance of DTLZ1.

Figure 5
figure 5

Parallel coordinates of the best solution set of PMEA, ISDE+ and HS-MOEA on 8-objective instance of DTLZ7.

Conclusion

This paper proposes a Hybrid Selection based Multi/Many-objective Evolutionary Algorithm, named HS-MOEA. In HS-MOEA, a new environmental selection that benefits from the advantages of Pareto dominance, reference vectors and an indicator is proposed. HS-MOEA is compared with several state-of-the-art MOEAs on a number of widely used test instances. Experimental results demonstrate the superiority of HS-MOEA among all the compared algorithms, mainly on problems with discontinuous PFs such as DTLZ7. In the future, we would like to investigate the possibility of weight vector adaptation using the ISDE+ indicator. In other words, new positions of the ineffective weight vectors and the consequent adjustment of the effective weight vectors can be estimated by employing the indicator values.