## Introduction

Optimization is a numerical process for determining the decision variables that minimize or maximize an objective function value while satisfying the constraints of the decision space [1]. Optimization problems are inevitable in many real-world applications, and they usually involve non-linear objective functions and constraints with multiple local optima and small feasible regions [2]. These complex features make it difficult for traditional mathematical programming methods, such as conjugate gradient, sequential quadratic programming, Newton's method, and quasi-Newton methods, to find the optimum [3]. Meta-heuristic algorithms (MAs) have become prevalent in many applied disciplines in recent decades because of their higher performance and lower computing requirements compared with deterministic algorithms on various optimization problems [4,5,6,7,8,9,10,11,12]. As a branch of stochastic optimization, meta-heuristic algorithms can find a near-optimal solution using available resources, although they are not guaranteed to find the global optimum. Most MAs are inspired by human intelligence, the social behavior of biological groups, or the laws of natural phenomena. Classic representatives of MAs, such as the genetic algorithm (GA) [13], particle swarm optimization (PSO) [14], differential evolution (DE) [15], grey wolf optimizer (GWO) [16], Harris hawks optimizer (HHO) [17], bat algorithm (BA) [18], whale optimization algorithm (WOA) [19], salp swarm algorithm (SSA) [20], sine cosine algorithm (SCA) [21], and water cycle algorithm (WCA) [22], have been successfully used to solve complex optimization problems.

However, the No Free Lunch (NFL) theorem states that no single algorithm can solve all optimization problems [23]: an algorithm suited to one optimization problem may not suit another with different characteristics. Therefore, further research on MAs is needed to handle different optimization problems. Research directions for MAs include proposing new algorithms, improving existing algorithms, and hybridizing different algorithms. Hybridization has drawn attention because it can combine the respective advantages of algorithms to yield better performance. Various hybrid algorithms have achieved good results, such as the hybrid of particle swarm optimization and differential evolution proposed by Wang et al. [24], the hybrid of the sine cosine algorithm and differential evolution proposed by Li et al. [25], and the hybrid of particle swarm optimization and the grey wolf optimizer presented by Zhang et al. [26]. The fireworks algorithm (FWA) is a swarm intelligence optimization algorithm proposed in 2010, inspired by the process of a real firework exploding and generating a large number of sparks [27]. When a firework explodes, sparks scatter in all directions; this explosion process can be regarded as the search behavior of a search agent in a local space. The main idea of FWA is to use fireworks and sparks as different kinds of solutions to search the feasible space of the optimization function. As an effective algorithm, FWA has been hybridized with many other algorithms in recent years. Zhu et al. [28] hybridized the fireworks algorithm with particle swarm optimization to form DFWPSO, which performed competitively and effectively on numerical optimization problems. Yue et al. [29] proposed a new hybrid algorithm called FWGWO based on the grey wolf optimizer and the fireworks algorithm and achieved excellent results in global optimization. Guo et al. [30] added the differential evolution operator to the fireworks algorithm and proposed a hybrid fireworks algorithm with differential evolution operator (HFWA_DE) in 2019. Zhang et al. [31] introduced the migration operator of biogeography-based optimization into the fireworks algorithm to enhance information sharing among populations and presented a hybrid biogeography-based optimization and fireworks algorithm for global optimization.

The Political Optimizer (PO) is a recent meta-heuristic algorithm based on human behavior, inspired by the multi-stage political process. PO simulates the important steps of politics: party formation, constituency allocation, election campaigning, party switching, inter-party elections, and parliamentary affairs after the government is formed. In addition, PO introduces a new position update strategy, called the recent past-based position updating strategy (RPPUS), which models the lessons politicians learn from the last election [32]. Compared with traditional optimization algorithms, PO is highly competitive, so many researchers have applied it in different scientific fields since it was proposed. Askari et al. [33] employed PO to train feedforward neural networks for classification and regression problems, with good results. Durmus et al. [34] used PO to improve the far-field radiation properties of concentric circular antenna arrays (CCAAs), used for example in wireless communication for smart grids and the Internet of Things, and reached a lower sidelobe level (SLL) than other optimization methods. Manita et al. [35] proposed a binary version of PO to solve feature selection problems using gene expression data. Elsheikh et al. [36] presented a novel optimized predictive model based on PO for eco-friendly MQL-turning of AISI 4340 alloy with nano-lubricants. Moreover, some scholars have addressed the shortcomings of PO. Askari et al. [37] modified each stage of PO to improve its exploration ability and balance after finding that PO converges prematurely on complex problems. Zhu et al. [38] also found that PO has poor global exploration capability; they integrated PO with quadratic interpolation, advanced quadratic interpolation, cubic interpolation, Lagrange interpolation, Newton interpolation, and refraction learning, and proposed a sequence of novel PO variants.

As a recently proposed swarm intelligence algorithm, PO still has much room for improvement. The main idea of PO is to guide the movement of search agents through subgroup optimal solutions. However, the number of subgroup optimal solutions, such as party leaders and constituency winners, is limited, because it is determined directly by the initial population size; this leads to insufficient global exploration capability. In addition, the recent past-based position updating strategy (RPPUS) of PO lacks effective verification of the updated candidate solutions, which reduces the convergence speed of the algorithm. To address these issues, we design a new local leader based on bi-directional consideration, called the Converged Mobility Center (CMC), to guide the movement of search agents, which enhances exploration ability and maintains population diversity. Combining the above ideas, we propose a novel hybrid greedy political optimizer with fireworks algorithm, named GPOFWA, and verify its effectiveness and superiority on a well-studied set of diverse benchmark functions and three engineering optimization problems. In summary, the main contributions of this research are as follows:

1. We propose a new hybrid optimization algorithm named GPOFWA, which integrates the Political Optimizer (PO) and the Fireworks Algorithm (FWA). Using the spark explosion mechanism of FWA, GPOFWA performs explosion spark and Gaussian explosion spark operations on party leaders and constituency winners based on a greedy strategy, which enhances the exploitation capability of GPOFWA. At the same time, the Gaussian spark mechanism of FWA is used to explore areas with better fitness to ensure the effectiveness of RPPUS.

2. We adopt a new method, the Converged Mobility Center with bi-directional consideration, to generate the subgroup optimal solution of the current population, which enhances exploration ability and maintains population diversity.

3. We investigate the performance of the proposed algorithm on 30 basic benchmark functions in multiple dimensions (30 and 500), the CEC2019 benchmark functions, and three engineering optimization problems. To verify the feasibility and effectiveness of the scheme and the accuracy of the results from different aspects, we use experimental and statistical analyses, including qualitative analysis, quantitative analysis, convergence analysis, pairwise comparison (Wilcoxon signed-rank test), computational complexity, and parameter sensitivity analysis.

The remainder of this research is organized as follows: Section 2 reviews the basic political optimizer and fireworks algorithm. Section 3 proposes a novel hybrid greedy political optimizer with fireworks algorithm. Section 4 discusses the experiment results of different swarm intelligence optimization algorithms on basic benchmark functions and CEC2019 functions. Section 5 applies the algorithm to three different engineering optimization problems. Section 6 presents the conclusions of this work and directions for future work.

## Related work

The political optimizer and the fireworks algorithm are high-performing algorithms proposed in recent years, inspired by different social and natural phenomena, and both can effectively solve optimization problems. The hybrid algorithm proposed in this paper takes the political optimizer as its starting point and adds the explosion spark and Gaussian mutation spark mechanisms of the fireworks algorithm to its search process to enhance performance. This section briefly introduces the two algorithms.

### Political optimizer

The political optimizer (PO) is a novel intelligent optimization algorithm inspired by the political election process of human society. In PO, each party member can be viewed as a candidate solution, and the election behavior of party members can be seen as an evaluation function; the votes obtained by a party member map to the fitness value of the candidate solution. Unlike previous algorithms inspired by political elections, PO considers the complete election process, comprising five phases: party formation and constituency allocation, election campaign, party switching, inter-party election, and parliamentary affairs. PO seeks the optimal solution through a multi-stage iterative process, and its main flow is shown in Fig. 1. The five main stages of PO are introduced below.

#### Party formation and constituency allocation

At the beginning of PO, the entire population of $${n}^{2}$$ individuals is divided into n parties, each containing n members (candidate solutions). In addition, each party member also plays the role of an election candidate; that is, one member from each party is selected to form each constituency. As depicted in Fig. 2, the red dotted lines indicate the division into political parties, and the blue dotted lines indicate the division into constituencies. Mathematically, the entire population is divided into n political parties as shown in Eq. (1), and each party consists of n party members as represented in Eq. (2).

$$P = \left\{ {P_{1} ,P_{2} ,P_{3} , \ldots ,P_{n} } \right\}$$
(1)
$$P_{i} = \left\{ {p_{i}^{1} ,p_{i}^{2} ,p_{i}^{3} , \ldots ,p_{i}^{n} } \right\}$$
(2)

Each party member also plays the role of an election candidate, so the entire population can also be regarded as n constituencies, represented as Eq. (3). It should be emphasized that the members of each constituency are also party members; only the logical division differs. The membership of each constituency is given by Eq. (4).

$$C = \left\{ {C_{1} ,C_{2} ,C_{3} , \ldots ,C_{n} } \right\}$$
(3)
$$C_{j} = \left\{ {p_{1}^{j} ,p_{2}^{j} ,p_{3}^{j} , \ldots ,p_{n}^{j} } \right\}$$
(4)

Furthermore, after computing the fitness of all members, the leader of the ith party is denoted $$p_{i}^{*}$$, and the set of all party leaders is represented by $${P}^{*}$$ as shown in Eq. (5). Similarly, after the election, $${C}^{*}$$ collects the winners of all constituencies, known as the parliamentarians, as shown in Eq. (6), where $$c_{j}^{*}$$ denotes the winner of the jth constituency.

$$P^{*} = \left\{ {p_{1}^{*} ,p_{2}^{*} ,p_{3}^{*} , \ldots ,p_{n}^{*} } \right\}$$
(5)
$$C^{*} = \left\{ {c_{1}^{*} ,c_{2}^{*} ,c_{3}^{*} , \ldots ,c_{n}^{*} } \right\}$$
(6)
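To make the logical division concrete, the following Python sketch (function and variable names are ours, not from the paper) arranges an $$n^{2}$$-member population into parties and constituencies and picks the party leaders of Eq. (5) and the constituency winners of Eq. (6) by row-wise and column-wise argmin over fitness:

```python
import numpy as np

def form_parties_and_constituencies(population, fitness):
    """Logical division of an n*n population (Eqs. 1-6).

    population: (n*n, d) array; row i*n+j is member p_i^j,
    the jth member of the ith party. Parties are groups of n
    consecutive rows; the jth constituency gathers the jth
    member of every party.
    """
    n2, d = population.shape
    n = int(np.sqrt(n2))
    members = population.reshape(n, n, d)   # members[i, j] = p_i^j
    fit = fitness.reshape(n, n)
    # Party leaders: best (lowest-fitness) member within each party.
    leader_idx = fit.argmin(axis=1)
    P_star = members[np.arange(n), leader_idx]    # Eq. (5)
    # Constituency winners: best member within each column.
    winner_idx = fit.argmin(axis=0)
    C_star = members[winner_idx, np.arange(n)]    # Eq. (6)
    return P_star, C_star
```

Note that parties and constituencies are only two views of the same array: no individual is duplicated, which matches the dual role each member plays in PO.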

#### Election campaign

This is the core stage of the algorithm, responsible for updating the positions of the search agents. Concretely, party members change their positions according to the leader $${P}^{*}$$ of the party they belong to and the winner $${C}^{*}$$ of their constituency. In addition, they learn from the experience of the last election through a novel position update mechanism called the recent past-based position updating strategy (RPPUS), formulated in Eqs. (7) and (8). The main idea of RPPUS is to predict promising areas from the numerical relationship between the subgroup optimal solution (party leader or constituency winner) and the current and previous positions of the search agent.

$$p_{i,k}^{j} \left( {t + 1} \right) = \left\{ {\begin{array}{ll} {m^{*} + r\left( {m^{*} - p_{i,k}^{j} (t)} \right),} \hfill & {{\text{if}}\quad p_{i,k}^{j} (t - 1) \le p_{i,k}^{j} (t) \le m^{*} {\text{ or }}p_{i,k}^{j} (t - 1) \ge p_{i,k}^{j} \left( t \right) \ge m^{*} } \hfill \\ {m^{*} + (2r - 1)\left| {m^{*} - p_{i,k}^{j} (t)} \right|,} \hfill & {{\text{if}}\quad p_{i,k}^{j} (t - 1) \le m^{*} \le p_{i,k}^{j} (t){\text{ or }}p_{i,k}^{j} (t - 1) \ge m^{*} \ge p_{i,k}^{j} (t)} \hfill \\ {m^{*} + (2r - 1)\left| {m^{*} - p_{i,k}^{j} (t - 1)} \right|,} \hfill & {{\text{if}}\quad m^{*} \le p_{i,k}^{j} (t - 1) \le p_{i,k}^{j} (t){\text{ or }}m^{*} \ge p_{i,k}^{j} (t - 1) \ge p_{i,k}^{j} (t)} \hfill \\ \end{array} } \right.$$
(7)
$$p_{i,k}^{j} \left( {t + 1} \right) = \left\{ {\begin{array}{ll} {m^{*} + \left( {2r - 1} \right)\left| {m^{*} - p_{i,k}^{j} \left( t \right)} \right|,} \hfill & {{\text{if}}\quad p_{i,k}^{j} \left( {t - 1} \right) \le p_{i,k}^{j} \left( t \right) \le m^{*} {\text{ or }}p_{i,k}^{j} \left( {t - 1} \right) \ge p_{i,k}^{j} \left( t \right) \ge m^{*} } \hfill \\ {p_{i,k}^{j} \left( {t - 1} \right) + r\left( {p_{i,k}^{j} \left( t \right) - p_{i,k}^{j} \left( {t - 1} \right)} \right),} \hfill & {{\text{if}}\quad p_{i,k}^{j} \left( {t - 1} \right) \le m^{*} \le p_{i,k}^{j} \left( t \right){\text{ or }}p_{i,k}^{j} \left( {t - 1} \right) \ge m^{*} \ge p_{i,k}^{j} \left( t \right)} \hfill \\ {m^{*} + \left( {2r - 1} \right)\left| {m^{*} - p_{i,k}^{j} \left( {t - 1} \right)} \right|,} \hfill & {{\text{if}}\quad m^{*} \le p_{i,k}^{j} \left( {t - 1} \right) \le p_{i,k}^{j} \left( t \right){\text{ or }}m^{*} \ge p_{i,k}^{j} \left( {t - 1} \right) \ge p_{i,k}^{j} \left( t \right)} \hfill \\ \end{array} } \right.$$
(8)

where $$m^{*}$$ indicates the leader of a party or the winner of a constituency, $$r$$ represents a random number from 0 to 1, and $$t$$ represents the current iteration number.
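A minimal one-dimensional Python sketch of Eq. (7) may help clarify the case analysis (Eq. (8) differs only in the rules applied to the first two cases); the function name and signature are illustrative, not from the paper:

```python
import random

def rppus_update(m_star, x_t, x_prev):
    """One-dimensional RPPUS step (Eq. 7): pick the update rule from
    the ordering of the previous position x_prev, the current position
    x_t, and the local best m* (party leader or constituency winner)."""
    r = random.random()
    if (x_prev <= x_t <= m_star) or (x_prev >= x_t >= m_star):
        # m* lies ahead along the direction of travel: keep moving toward it.
        return m_star + r * (m_star - x_t)
    elif (x_prev <= m_star <= x_t) or (x_prev >= m_star >= x_t):
        # The member overshot m*: search on either side of m*.
        return m_star + (2 * r - 1) * abs(m_star - x_t)
    else:
        # The member moved away from m*: search around m*, scaled by the
        # distance to the previous position.
        return m_star + (2 * r - 1) * abs(m_star - x_prev)
```

In PO this rule is applied independently to every dimension k of member $$p_{i}^{j}$$.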

#### Party switching

The party switching phase mainly balances exploration and exploitation by introducing an adaptive parameter $$\lambda$$ called the party switching rate. Each party member may be selected and switched to a randomly chosen party. The probability of switching is determined by $$\lambda$$, which is initially $$\lambda_{{{\text{max}}}}$$ and linearly decreases to 0, as shown in Eq. (9).

$$\lambda = \left( {1 - \frac{t}{T}} \right)*\lambda_{{{\text{max}}}}$$
(9)
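The decay of $$\lambda$$ and the switching step can be sketched in Python as follows; note that the original PO replaces the least-fit member of the destination party, which we simplify here to a plain swap, so this is an illustrative approximation:

```python
import random

def party_switching(parties, t, T, lam_max=1.0):
    """Party switching sketch (Eq. 9): lambda decays linearly from
    lam_max to 0 over T iterations. Each member is moved, with
    probability lambda, into a randomly chosen other party (simplified
    here to a swap with a random member of that party)."""
    lam = (1 - t / T) * lam_max           # Eq. (9)
    n = len(parties)
    for i in range(n):
        for j in range(len(parties[i])):
            if random.random() < lam:
                k = random.choice([p for p in range(n) if p != i])
                q = random.randrange(len(parties[k]))
                parties[i][j], parties[k][q] = parties[k][q], parties[i][j]
    return lam, parties
```

Early in the run ($$t \approx 0$$, $$\lambda \approx \lambda_{\text{max}}$$) members move freely between parties, favoring exploration; late in the run switching stops and the search exploits the current party structure.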

#### Election

At this stage, the fitness of each candidate solution is determined and the party leaders and constituency winners are updated by Eqs. (10) and (11).

$$q = \mathop {{\text{argmin}}}\limits_{1 \leqslant j \leqslant n} f\left( {p_{i}^{j} } \right)\quad p_{i}^{*} = p_{i}^{q}$$
(10)
$$q = \mathop {{\text{argmin}}}\limits_{1 \leqslant i \leqslant n} f\left( {p_{i}^{j} } \right)\quad c_{j}^{*} = p_{q}^{j}$$
(11)

#### Parliamentary affairs

The party switching phase operates from the party's perspective, while the parliamentary affairs phase operates from the constituency's perspective. The constituency winners interact with each other to improve their fitness: each constituency winner updates its position relative to another randomly selected constituency winner. It should be noted that the movement is applied only if the fitness of $$c_{j}^{*}$$ improves.

### Fireworks algorithm

The fireworks algorithm (FWA) is a swarm intelligence optimization algorithm proposed in recent years, inspired by the explosion of fireworks. Fireworks are commonly used in celebrations; when a firework explodes, sparks scatter in all directions. This explosion process can be regarded as the search behavior of a search agent in a local space. The fireworks algorithm is based on this idea, and its flowchart is shown in Fig. 3.

It should be emphasized that fireworks of different qualities produce different sparks when they explode. A high-quality firework produces numerous sparks; its explosion forms a circle, with the sparks concentrated around the explosion center. Conversely, a bad firework produces fewer sparks, which spread out in irregular shapes. From the perspective of swarm intelligence, a firework is regarded as a candidate solution. A good firework means the candidate solution lies in a promising area close to the global optimum, so more sparks should be generated near it with a search radius as small as possible. A bad firework means the position of the candidate solution is not ideal, so the search radius should be larger and the number of sparks correspondingly smaller.

As mentioned earlier, good fireworks should produce more sparks, while bad fireworks produce fewer sparks. The calculation of the number of sparks produced by each firework is shown in Eq. (12). Good fireworks are closer to the global optimum, so the explosion amplitude is smaller, while bad fireworks are just the opposite. The amplitude of explosion for each firework is defined as Eq. (13).

$$S_{i} = \hat{S} \cdot \frac{{y_{{{\text{max}}}} - f({\varvec{x}}_{{\varvec{i}}} ) + \xi }}{{\mathop \sum \nolimits_{i = 1}^{n} (y_{{{\text{max}}}} - f({\varvec{x}}_{{\varvec{i}}} )) + \xi }}$$
(12)
$$A_{i} = \hat{A} \cdot \frac{{f({\varvec{x}}_{{\varvec{i}}} ) - y_{{{\text{min}}}} + \xi }}{{\mathop \sum \nolimits_{i = 1}^{n} (f({\varvec{x}}_{{\varvec{i}}} ) - y_{{{\text{min}}}} ) + \xi }}$$
(13)

where $$y_{{{\text{min}}}} = {\text{min}}(f({\varvec{x}}_{{\varvec{i}}} ))$$, $$y_{{{\text{max}}}} = {\text{max}}(f({\varvec{x}}_{{\varvec{i}}} ))$$, and $$\hat{S}$$ and $$\hat{A}$$ are constants that control the number of explosion sparks and the size of the explosion amplitude, respectively.
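As an illustration of Eqs. (12) and (13), the following Python sketch computes spark counts and amplitudes for a minimization problem (the function name and default constants are ours; FWA typically also clips $$S_{i}$$ to upper and lower bounds, omitted here):

```python
import numpy as np

def spark_counts_and_amplitudes(fitness, S_hat=50.0, A_hat=40.0,
                                xi=np.finfo(float).eps):
    """Eqs. (12)-(13): better (lower-fitness) fireworks get more sparks
    and a smaller explosion amplitude; worse fireworks get the reverse."""
    f = np.asarray(fitness, dtype=float)
    y_max, y_min = f.max(), f.min()
    S = S_hat * (y_max - f + xi) / ((y_max - f).sum() + xi)   # Eq. (12)
    A = A_hat * (f - y_min + xi) / ((f - y_min).sum() + xi)   # Eq. (13)
    return S, A
```

The two formulas are mirror images: spark count is proportional to the fitness gap from the worst firework, amplitude to the gap from the best, so the totals across the population stay close to $$\hat{S}$$ and $$\hat{A}$$.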

Note that FWA designs two ways of generating sparks: the explosion spark for normal search, shown in Algorithm 1, and the Gaussian spark, a mutation mechanism, shown in Algorithm 2.

## Proposed method

The original PO assigns dual roles to each agent and uses RPPUS to achieve excellent performance, but careful observation shows that the algorithm still has considerable room for improvement, in the following respects:

1. The main idea of PO is to guide the movement of the search agent through the subgroup optimal solutions. The number of subgroup optimal solutions, such as party leaders and constituency winners, is limited because it is determined directly by the initial population size, which leads to insufficient global exploration capability.

2. In RPPUS, member positions are updated based on the positions of members in the previous generation, the positions of party leaders or constituency winners, and the current positions of members. Considering the numerical relationship among these three indicators effectively predicts the favorable area for a member's next move, but the prediction rests on only three indicators, so its accuracy needs improvement. Moreover, after the update is completed, it is not verified whether the fitness has improved.

3. In the position update process, to account for the influence of both the party leader and the constituency winner, members are moved around the two subgroup optimal solutions in succession. If the two subgroup optimal solutions are themselves relatively close, updating twice differs little from updating once, yet it means every dimension of every member must be updated twice, which adds considerable time consumption.

The proposed algorithm addresses the above points and finally forms GPOFWA. For the first point, using the spark explosion mechanism of FWA, GPOFWA performs explosion spark and Gaussian explosion spark operations on party leaders and constituency winners, respectively, based on a greedy strategy, thereby optimizing the subgroup optimal solutions. For the second point, GPOFWA uses the Gaussian spark mechanism of the fireworks algorithm to explore areas with better fitness to ensure the effectiveness of RPPUS. For the third point, this article proposes a new subgroup optimal solution, called the Converged Mobility Center (CMC) with bi-directional consideration, which not only exploits the advantages of the party leader and the constituency winner but also maintains population diversity.

### Hybridizing political optimizer with fireworks algorithm

The most distinctive feature of FWA is that the firework explosion operator truly simulates the search process of the search agent: generating many sparks means generating many candidate solutions. PO updates the positions of search agents around subgroup optimal solutions, but the number of subgroup optimal solutions is limited by the size of the initial population. Meanwhile, the individuals performing the explosion operation in FWA are selected as the best of the entire population, and the subgroup optimal solutions of PO have already been screened out, so they can readily serve as explosion centers. Moreover, the two explosion methods of FWA correspond to the two kinds of subgroup optimal solutions of PO, and they complement each other. Here, the party leaders conduct the explosion spark operation, and the constituency winners conduct the Gaussian spark operation. The detailed process is shown in Fig. 4. In the figure, each dot represents a candidate solution, and each five-pointed star represents a spark produced by an explosion. Dots of the same color belong to the same political party, and the darkest dot of each color indicates the party leader. Dots in the same ellipse belong to one constituency, and dots marked with a "W" indicate the constituency winner. The party leader conducts the explosion spark operation (hexagonal firework), while the constituency winner conducts the Gaussian explosion operation (pentagonal firework).

Similar to FWA, the number of sparks generated by each subgroup optimal solution is calculated as in Eqs. (14) and (15). The difference is that only the subgroup optimal solutions are considered in the spark generation process: a better subgroup optimal solution generates more sparks, and a worse one generates fewer.

$$K_{i}^{p} = k \cdot \frac{{p_{{{\text{max}}}}^{*} - f\left( {p_{i}^{*} } \right) + \xi }}{{\mathop \sum \nolimits_{i = 1}^{N} \left( {p_{{{\text{max}}}}^{*} - f\left( {p_{i}^{*} } \right)} \right) + \xi }}$$
(14)
$$K_{j}^{c} = k \cdot \frac{{c_{{{\text{max}}}}^{*} - f\left( {c_{j}^{*} } \right) + \xi }}{{\mathop \sum \nolimits_{j = 1}^{N} \left( {c_{{{\text{max}}}}^{*} - f\left( {c_{j}^{*} } \right)} \right) + \xi }}$$
(15)

where $$K_{i}^{p}$$ indicates the number of sparks generated by the leader of the $$i$$th party, $$K_{j}^{c}$$ indicates the number of sparks generated by the winner of the $$j$$th constituency, k is a parameter controlling the total number of sparks generated by party leaders or constituency winners, $$p_{{{\text{max}}}}^{*} = {\text{max}}\left( {f\left( {p_{i}^{*} } \right)} \right)$$ ($$i = 1, 2, \ldots , N$$) is the maximum (worst) objective value among the N party leaders, $$c_{{{\text{max}}}}^{*} = {\text{max}}\left( {f\left( {c_{j}^{*} } \right)} \right)$$ ($$j = 1, 2, \ldots , N$$) is the maximum (worst) objective value among the N constituency winners, and $$\xi$$ denotes machine epsilon (the smallest representable positive constant), used to avoid division by zero.

Since the party leaders conduct the explosion spark operation, it is necessary to calculate the explosion range. The calculation formula is shown as Eq. (16).

$$R_{i}^{p} = R \cdot \frac{{f\left( {p_{i}^{*} } \right) - p_{{{\text{min}}}}^{*} + \xi }}{{\mathop \sum \nolimits_{i = 1}^{N} \left( {f\left( {p_{i}^{*} } \right) - p_{{{\text{min}}}}^{*} } \right) + \xi }}$$
(16)

where $$R_{i}^{p}$$ represents the explosion range of the leader of the $$i$$th party, R denotes the maximum explosion range, and $$p_{{{\text{min}}}}^{*} = {\text{min}}\left( {f\left( {p_{i}^{*} } \right)} \right)$$ ($$i = 1, 2, \ldots , N$$) is the minimum (best) objective value among the N party leaders.

It should be noted that after the party leaders and constituency winners perform the explosion operation, following the greedy strategy, each replaces itself with a generated spark only if that spark has better fitness. This process is carried out after party formation and constituency allocation; its pseudo-code is shown in Algorithm 3.
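Under the same notation, the greedy explosion step on party leaders can be sketched in Python as follows (Eqs. (14) and (16) plus greedy replacement; the uniform-offset spark generation is a simplification of Algorithm 1, and all names are illustrative):

```python
import numpy as np

def greedy_explosion(leaders, f, k=20, R=1.0,
                     xi=np.finfo(float).eps, rng=None):
    """Greedy explosion sketch: each party leader spawns sparks in
    proportion to its quality (Eq. 14) inside its own explosion range
    (Eq. 16) and is replaced only when a spark is fitter."""
    if rng is None:
        rng = np.random.default_rng()
    leaders = np.array(leaders, dtype=float)
    fit = np.array([f(x) for x in leaders])
    f_max, f_min = fit.max(), fit.min()
    K = k * (f_max - fit + xi) / ((f_max - fit).sum() + xi)    # Eq. (14)
    Rad = R * (fit - f_min + xi) / ((fit - f_min).sum() + xi)  # Eq. (16)
    for i in range(len(leaders)):
        for _ in range(max(1, int(round(K[i])))):
            spark = leaders[i] + rng.uniform(-Rad[i], Rad[i],
                                             size=leaders[i].shape)
            if f(spark) < fit[i]:          # greedy replacement
                leaders[i], fit[i] = spark, f(spark)
    return leaders, fit
```

Because a leader is only ever replaced by a strictly better spark, this step can never degrade a subgroup optimal solution; the constituency winners undergo the analogous step with Gaussian sparks.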

### Gaussian spark for verification of RPPUS

As mentioned earlier, RPPUS only predicts the favorable area where the search agent moves and lacks correctness verification after the update. In some cases, the fitness of the candidate solution after the update is worse than the fitness before the update. As shown in Fig. 5, RPPUS only roughly predicts based on three reference points. The green area is where we want the candidate solution to enter, but the candidate solution may enter the yellow area and cause the fitness to become worse. At this time, the candidate solution is regarded as a “problematic” solution and it should be corrected.

In this paper, the Gaussian spark of FWA is used to correct a candidate solution whose fitness becomes worse after the update. Specifically, three sparks are generated around the candidate solution; if any spark is better than the candidate solution before the update, the best spark becomes the new candidate solution. If all sparks are worse than the pre-update candidate solution, the pre-update candidate solution is retained unchanged. Note that the Gaussian spark here differs slightly from the original fireworks algorithm, because we stipulate that a "problematic" solution generates exactly three sparks. The pseudo-code of this process is shown in Algorithm 4.
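The verification step can be sketched in Python as follows; scaling randomly chosen dimensions by a Gaussian factor mirrors the Gaussian mutation of the basic FWA, but the helper name and details are illustrative assumptions:

```python
import numpy as np

def gaussian_spark_repair(x_old, x_new, f, rng=None):
    """RPPUS verification sketch: if the RPPUS move made the solution
    worse, generate three Gaussian sparks around the (worse) updated
    position and keep the best of the sparks and the old solution."""
    if rng is None:
        rng = np.random.default_rng()
    if f(x_new) <= f(x_old):
        return x_new                      # the move already helped
    # "Problematic" solution: scale randomly chosen dimensions by a
    # N(1, 1) factor, as in the Gaussian mutation of the basic FWA.
    candidates = [x_old]
    for _ in range(3):
        spark = x_new.copy()
        dims = rng.random(x_new.shape) < 0.5
        spark[dims] *= rng.normal(1.0, 1.0)
        candidates.append(spark)
    return min(candidates, key=f)
```

Since the pre-update solution is always among the candidates, the returned solution is never worse than the one RPPUS started from.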

### Converged mobility center with bi-directional consideration

In PO, the party leader and the constituency winner are successively regarded as the centers around which a member's position is moved. If the two centers are close to each other, updating twice is unnecessary. In response, we propose a new method to generate a new subgroup optimal solution as a mobility center: the Converged Mobility Center with Bi-directional Consideration (CMC), which exploits the advantages of both the party leader and the constituency winner while maintaining population diversity.

In order to improve their performance in the election, candidates not only learn from their party leaders but also compare themselves with the constituency winners, and these two considerations act simultaneously, not one after the other. The higher the leader of the candidate's party ranks among all party leaders, the more the candidate wants to be close to that party leader. In the same way, the better the winner of the candidate's constituency ranks among all constituency winners, the more the candidate will prefer the constituency winner. CMC is proposed based on this consideration. As shown in Fig. 6, $$P^{\prime}$$ denotes the first-ranked among all party leaders, $$P^{\prime\prime}$$ the second, and $$P^{\prime\prime\prime}$$ the third, while $$C^{\prime}$$, $$C^{\prime\prime}$$ and $$C^{\prime\prime\prime}$$ indicate the ranking among the constituency winners. The CMC is generated near the higher-ranked of the party leader and the constituency winner. The calculation of CMC is shown in Eq. (17).

$$center_{i,j}^{k} = PF* p_{i,k}^{*} + CF* c_{j,k}^{*}$$
(17)

where PF represents the party weighting factor, CF represents the constituency weighting factor, $$p_{i,k}^{*}$$ indicates the value of the kth dimension of the party leader $$p_{i}^{*}$$, and $$c_{j,k}^{*}$$ indicates the value of the kth dimension of the constituency winner $$c_{j}^{*}$$.

The party weighting factor PF and the constituency weighting factor CF are calculated as follows:

$$PF = r_{1} *\frac{{N - PartyRank\left( {p_{i}^{*} } \right)}}{N},\quad PartyRank = sort(P^{*} )$$
(18)
$$CF = r_{2} *\frac{{N - {\text{Constituency}}Rank\left( {c_{j}^{*} } \right)}}{N},\quad ConstituencyRank = sort(C^{*} )$$
(19)

where $$r_{1}$$ and $$r_{2}$$ denote random values in the interval [0, 1], and $$N$$ indicates the total number of parties or constituencies.
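Assuming ranks are 1-based (rank 1 is best), Eqs. (17)-(19) for a single member can be sketched as follows (the function name and the rank convention are our illustrative assumptions):

```python
import numpy as np

def converged_mobility_center(p_star, c_star, party_rank, cons_rank,
                              N, rng=None):
    """Eqs. (17)-(19): blend the member's party leader p* and
    constituency winner c* into one mobility center, weighting each
    by how well it ranks among all leaders/winners (rank 1 = best)."""
    if rng is None:
        rng = np.random.default_rng()
    PF = rng.random() * (N - party_rank) / N   # Eq. (18)
    CF = rng.random() * (N - cons_rank) / N    # Eq. (19)
    # Eq. (17), applied to every dimension at once.
    return PF * np.asarray(p_star) + CF * np.asarray(c_star)
```

A top-ranked leader contributes with a weight close to $$r_{1}$$, while the worst-ranked one contributes nothing, so the center drifts toward whichever of the two guides is globally stronger, and the random factors keep the population diverse.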

### Computational complexity

Time complexity is a key criterion for judging the quality of an algorithm. To demonstrate the computational efficiency of GPOFWA, this section analyzes the computational complexity of PO and GPOFWA. The time complexity analysis of PO mainly includes three parts:

1. The time complexity of the population initialization phase is $$O(ND)$$, where $$N$$ represents the population size and $$D$$ denotes the dimensionality of the problem.

2. The fitness of each candidate is evaluated initially, with time complexity $$O(NT_{obj} )$$, where $$T_{obj}$$ denotes the cost of the objective function.

3. The main loop dominates the running time. The time complexity of the election campaign stage is $$O(2ND)$$, that of the party switching phase is $$O(N)$$, that of the election stage is $$O(NT_{obj} )$$, and that of the parliamentary affairs stage is $$O\left( {\sqrt N D} \right)$$; each component is executed $$T_{{{\text{max}}}}$$ times in the main loop. Therefore, the time complexity of the basic PO for $$T_{{{\text{max}}}}$$ loops can be computed as follows:

$$O(PO) = O(ND) + O(NT_{obj} ) + T_{max} \times \left( {O(2ND) + O(N) + O(NT_{obj} ) + O\left( {\sqrt N D} \right)} \right)$$

In contrast, GPOFWA introduces the search strategy of the fireworks algorithm and adopts the Converged Mobility Center with bi-directional consideration, so the two algorithms differ in the main loop. GPOFWA performs explosion spark and Gaussian explosion spark operations on party leaders and constituency winners to optimize the subgroup optimal solutions; the time complexity of this process is $$O\left( {2\sqrt N DK} \right)$$, where $$K$$ represents the number of sparks generated by a subgroup optimal solution. The Gaussian spark for verification of RPPUS and the CMC are applied in the election campaign stage, with time complexity $$O(ND)$$. Therefore, the time complexity of GPOFWA for $$T_{{{\text{max}}}}$$ loops can be computed as follows:

$$O(GPOFWA) = O(ND) + O(NT_{obj} ) + T_{{{\text{max}}}} \times \left( {O(ND) + O(N) + O(NT_{obj} ) + O\left( {\sqrt N D} \right) + O\left( {2\sqrt N DK} \right)} \right)$$

From this detailed analysis, we can conclude that PO and GPOFWA have time complexity of the same order of magnitude.
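As a rough sanity check of this claim, the operation counts implied by the two expressions above can be tallied numerically (a sketch with an abstract cost model, not measured runtime; all parameter values below are arbitrary):

```python
import math

def po_cost(N, D, T_obj, T_max):
    # O(ND) init + O(N*T_obj) initial evaluation, then T_max main-loop iterations
    init = N * D + N * T_obj
    per_iter = 2 * N * D + N + N * T_obj + math.sqrt(N) * D
    return init + T_max * per_iter

def gpofwa_cost(N, D, T_obj, T_max, K):
    # Same skeleton plus the O(2*sqrt(N)*D*K) spark term; the campaign stage is O(ND)
    init = N * D + N * T_obj
    per_iter = N * D + N + N * T_obj + math.sqrt(N) * D + 2 * math.sqrt(N) * D * K
    return init + T_max * per_iter

# For a fixed spark count K, the cost ratio stays bounded as N grows:
# the two algorithms are of the same order of magnitude.
N, D, T_obj, T_max, K = 64, 30, 10, 500, 5
ratio = gpofwa_cost(N, D, T_obj, T_max, K) / po_cost(N, D, T_obj, T_max)
print(round(ratio, 2))
```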

## Experiments and discussion

The performance of GPOFWA is evaluated on 30 basic benchmark functions in multiple dimensions (30 and 500), on the CEC2019 benchmark functions, and on three engineering optimization problems, against a set of advanced swarm intelligence algorithms. These test cases cover various types of objective functions (linear, nonlinear, and quadratic) with different numbers of decision variables, as well as various types and numbers of constraints (linear inequalities, nonlinear equalities, and nonlinear inequalities). All simulation experiments are conducted on a computer with a Win10 operating system and an Intel(R) Core(TM) i7-10750H CPU with 16 GB RAM. The proposed algorithm is coded in MATLAB R2020a.

### Comparison with other algorithms in low-dimensional functions

To verify the good performance of GPOFWA, we first used thirty benchmark functions, equally divided into two groups: unimodal functions (F1–F15) and multimodal functions (F16–F30). The unimodal functions, each with a unique global optimum, reveal the exploitative capabilities of the algorithms, while the multimodal functions test the ability of an algorithm to avoid falling into local optima. Note that the multimodal test set also contains some fixed-dimensional functions, which model optimization problems from the real world.

The detailed information of the unimodal function is shown in Table 1, including mathematical expressions, test dimensions, search ranges, and theoretical optimal values. The same details of multimodal functions are presented in Table 2. Moreover, in order to reflect the superiority of GPOFWA, we compare it with the existing advanced optimization algorithms, including HHO, GWO, SCA, SSA, WCA, WOA, LSA, and the original PO. The algorithms used for comparison and their parameter settings are all shown in Table 3. It is worth mentioning that parameter settings are based on the parameters used by the original author or the parameters widely used by various researchers. To ensure the fairness of the experiment, we compare the performance of the algorithms after running each experiment independently 30 times and the maximum number of objective function evaluations for all algorithms is set to 30,000.

First, we tested all selected algorithms on F1–F15, using three statistics for the initial evaluation: the best fitness value (Best), the average fitness value (Mean), and the standard deviation (Std). Table 4 outlines the results obtained with these measures, with the best ones highlighted in bold. The table shows that the proposed GPOFWA is superior to the original PO and performs better than the other advanced optimization algorithms. In particular, for F4–F8 and F12, GPOFWA finds the theoretical optimal value of the function, while the other algorithms fall far short in optimization accuracy. For the remaining unimodal functions, GPOFWA also outperforms the other algorithms: it not only converges faster but also achieves the best results in finding the global optimum. To illustrate the convergence speed of GPOFWA, Fig. 7 shows convergence curves based on the average fitness value of each generation over the 30 experiments, together with box plots that reflect the stability of the algorithms. For most unimodal functions, GPOFWA finds the optimal value within a few iterations, which shows that its global optimization ability is stronger than that of the other algorithms.
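For reference, the three statistics are computed per algorithm over the independent runs; a minimal sketch (the run values below are placeholders, not results from Table 4):

```python
import statistics

def summarize(final_fitness):
    """Best, Mean and Std over independent runs of a minimization algorithm."""
    return {
        "Best": min(final_fitness),
        "Mean": statistics.mean(final_fitness),
        "Std": statistics.pstdev(final_fitness),  # population std; sample std is also common
    }

# Placeholder final fitness values of 5 independent runs
runs = [0.0, 1e-12, 3e-12, 0.0, 2e-12]
stats = summarize(runs)
print(stats["Best"], stats["Mean"])
```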

By testing the unimodal functions F1–F15, we can see the powerful exploitative capability of GPOFWA. To evaluate its exploration capability, we used the multimodal function set F16–F30. As with the unimodal tests, we use the best fitness value (Best), the average fitness value (Mean), and the standard deviation (Std) to illustrate the experimental results, which are shown in Table 5. The table shows that GPOFWA performs better on the multimodal function test set than the other advanced optimization algorithms; for example, on functions such as F16–F20 and F23, GPOFWA achieves higher optimization accuracy. Second, the standard deviation of GPOFWA's results is very small, mostly 0 or close to 0, which means that GPOFWA is highly stable over the 30 runs. In addition, Fig. 8 shows the convergence curves based on the 30 runs together with the corresponding box plots, from which the superior convergence speed, optimization accuracy, and stability of GPOFWA can be observed. Considering its performance on both the unimodal and multimodal test sets, GPOFWA not only has good exploitation capability but also performs well in exploration.

### Comparison with other algorithms in high-dimensional functions

To test the performance of the GPOFWA algorithm on high-dimensional problems, we tested unimodal and multimodal functions in 500 dimensions. Since the test set used above contains some fixed-dimension functions, we chose F1–F10 and F16–F25 for testing. For each function, the parameters are the same as those mentioned above. Figure 9 shows the qualitative analysis of the functions in 500 dimensions. We again use the best fitness value (Best), the average fitness value (Mean), and the standard deviation (Std) to illustrate the experimental results, which are shown in Table 6. Similar to the low-dimensional case, GPOFWA exhibits superior performance on high-dimensional functions. As shown in Fig. 9, for unimodal functions such as F2, F4, and F8, GPOFWA has faster convergence speed and higher convergence accuracy, while for multimodal functions such as F16, GPOFWA shows its ability to avoid local optima. These results demonstrate the scalability of the proposed algorithm with respect to the number of variables of the optimization problem.

### Comparison with other algorithms on CEC2019 benchmark functions

By testing 30 classic benchmark functions in low and high dimensions, we have already seen the excellent performance of GPOFWA. To further explore the effectiveness of the proposed method, we also use the CEC2019 benchmark functions for testing. The CEC2019 suite contains a number of shifted and rotated functions to test the stability of an algorithm against function shifts. It is worth mentioning that the comparison algorithms in this section are advanced and hybrid algorithms rather than the basic algorithms used above: FWHHO39, PPSO40, CLPPSO40, HHOHGSO41, DE15 and CMA-ES42. Their parameter settings are based on the parameters used by the original authors or those widely adopted by other researchers. To ensure the fairness of the experiment, each experiment is run independently 30 times. Figure 10 shows a qualitative analysis of some CEC2019 benchmark functions and Table 7 shows the results. From the experimental results, GPOFWA achieves better scores on F3, F6, F7, and F8 of CEC2019, and the box plots show that GPOFWA is more stable than the other algorithms. Although not optimal on the other functions, the results obtained by GPOFWA remain close to the best ones.

### Statistical analysis

To evaluate the proposed algorithm fairly and accurately, we perform statistical tests on the experimental results. To determine whether the optimization results of GPOFWA are significantly different from those of other algorithms, a Wilcoxon nonparametric test was performed at a significance level of 0.05; a $$p$$-value below 0.05 is considered sufficient evidence to reject the null hypothesis. The Wilcoxon tests for low dimensions (30 or less), 500 dimensions, and CEC2019 are given in Tables 8, 9 and 10, where values with $$p$$ greater than 0.05 are shown in bold and NaN indicates that the rank-sum test does not return a number. The last line shows the total counts in ($$+/\approx /-$$) format, where "$$+$$" indicates that the proposed GPOFWA outperforms the comparison algorithm at the 0.05 significance level, "$$-$$" indicates that GPOFWA performs worse, and "$$\approx$$" indicates no statistically significant difference between GPOFWA and the comparison algorithm. The last row thus allows an intuitive statistical comparison of the algorithms. From the last row of Table 8, GPOFWA outperforms the other algorithms: statistically, its performance on low-dimensional function optimization differs significantly from theirs. Table 9 shows the Wilcoxon test results for the 500-dimensional functions, where the vast majority of $$p$$-values are below 0.05, showing that GPOFWA retains a statistically significant advantage on high-dimensional problems. Table 10 shows the Wilcoxon test results for the CEC2019 functions: except against PPSO and HHOHGSO, GPOFWA still has a clear advantage over the other algorithms.
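For illustration, the rank-sum test behind these tables can be sketched in pure Python using the normal approximation (a simplified stand-in for the exact test implementation, with midranks for ties and no continuity correction):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Ties receive midranks; illustrative only, not the exact small-sample test."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    rank_of = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid = (i + j) / 2.0 + 1.0  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            rank_of[k] = mid
        i = j + 1
    n1, n2 = len(a), len(b)
    W = sum(r for r, (_, grp) in zip(rank_of, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value

# Two clearly separated result samples should be significantly different
p = rank_sum_p([0.10, 0.20, 0.15, 0.12, 0.18, 0.11, 0.14, 0.16],
               [1.00, 1.20, 0.90, 1.10, 1.30, 0.95, 1.05, 1.15])
print(p < 0.05)
```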

### Convergence analysis

In the original PO, the balance between exploration and exploitation is attained through party switching, which uses a parameter λ to control diversity, while the interaction between the constituency winners in the parliamentary affairs phase ensures the convergence of PO32. GPOFWA adds several mechanisms on top of PO to enhance performance. First, GPOFWA performs explosion spark and Gaussian explosion spark operations on party leaders and constituency winners based on a greedy strategy, and the Gaussian explosion spark mechanism of the fireworks algorithm is used to explore areas with better fitness to ensure the effectiveness of RPPUS. The greedy strategy enhances the exploitation capability of GPOFWA, and the Gaussian spark used to verify RPPUS prevents good solutions from being excluded. In addition, the Converged Mobility Center with bi-directional consideration enhances exploitation while maintaining population diversity, avoiding local optima. The convergence of GPOFWA can also be analyzed by observing the convergence curves of the numerous test functions: in most cases, GPOFWA reaches accurate solutions at a faster convergence rate than the comparison algorithms.

### Parameter sensitivity analysis

GPOFWA mainly includes 4 parameters: the parameter $$k$$ that controls the number of sparks generated, the parameter $$R$$ that controls the radius of the spark explosion, the number of parties (constituencies), and the initial party switching rate $$\lambda$$. Among them, $$k$$ and $$R$$ are unique to GPOFWA, so we analyze their influence on the performance of the algorithm. Experiments were conducted under the four sets of parameters in Table 11, with the number of parties (constituencies) set to 8 and the initial party switching rate λ set to 1. We selected several unimodal functions (F2 and F6), multimodal functions (F16 and F23), and fixed-dimension functions (F28 and F29) as representatives to test the performance of the algorithm under different parameters. The statistical results of GPOFWA are shown in Table 11, with the best results in bold. According to Table 11, when $$k=50$$ and $$R=50$$, the number of optimal values obtained is 5, more than in any other case. Hence, $$k=50$$ and $$R=50$$ is the best choice of parameters.
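The selection rule used here, counting which parameter pair wins most often across the representative functions, can be sketched as follows (the fitness values are placeholders, not Table 11 data, and only two of the four parameter sets are shown):

```python
from collections import Counter

# Hypothetical sketch: results[(k, R)][fn] holds the mean fitness obtained
# with that parameter pair on benchmark fn (minimization; values are made up).
results = {
    (25, 25): {"F2": 1e-8, "F6": 2e-6, "F16": 3e-4},
    (50, 50): {"F2": 1e-9, "F6": 1e-7, "F16": 1e-4},
}

# For each function, credit the parameter pair with the best (lowest) mean
wins = Counter()
for fn in ["F2", "F6", "F16"]:
    best_pair = min(results, key=lambda pr: results[pr][fn])
    wins[best_pair] += 1

print(wins.most_common(1)[0][0])
```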

## Engineering optimization problems

In this section, we apply GPOFWA to three well-known constrained engineering problems: the welded beam design problem, the spring design problem, and the three-bar truss design problem, to demonstrate its performance on practical problems. For the fairness and rationality of the experiment, each experiment is run independently 30 times with 500 iterations. These engineering problems are abstracted from real-world scenarios and consist of an objective function and multiple constraints, so a suitable method is needed to handle the constraints. In this section, we employ the penalty function method, in which solutions that violate any of the constraints are penalized with a large fitness value (in the case of minimization). The penalty function is defined as follows:

$$\begin{array}{*{20}c} {F(x) = f(x) + \lambda *\mathop \sum \limits_{i = 1}^{p} \left\{ {\max \left[ {0,{ }g_{i} (x)} \right]} \right\} + \lambda *\mathop \sum \limits_{j = 1}^{q} \left\{ {\left| {h_{j} (x)} \right|} \right\}} \\ \end{array}$$
(20)

where $$\lambda$$ is penalty factor, and it is initialized to $$10^{10}$$ in this section.
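Equation (20) can be sketched directly as a higher-order function (the function and constraint names below are illustrative):

```python
def penalized(f, ineq, eq, lam=1e10):
    """Static penalty of Eq. (20): add lam * (sum of inequality violations
    max(0, g_i(x)) plus sum of equality violations |h_j(x)|) to the raw
    objective f(x), for minimization."""
    def F(x):
        viol = sum(max(0.0, g(x)) for g in ineq) + sum(abs(h(x)) for h in eq)
        return f(x) + lam * viol
    return F

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
F = penalized(lambda x: x * x, ineq=[lambda x: 1.0 - x], eq=[])
print(F(2.0))        # feasible point: no penalty, returns 4.0
print(F(0.5) > 1e9)  # infeasible point: heavily penalized
```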

### Welded beam design problem

The goal of the welded beam design problem is to minimize the fabrication cost of a welded beam. As shown in Fig. 11, four parameters of the welded beam can be optimized: the weld thickness (h), the weld length (l), the bar height (t), and the bar thickness (b). Its constraints involve the shear stress ($$\tau$$), the beam bending stress ($$\sigma$$), the bar buckling load ($$P_{c}$$), the beam end deflection ($$\delta$$), and side constraints. The mathematical expression of the WBD problem is given by:

$$\begin{array}{*{20}l} {{\text{Consider}}} \hfill & {\vec{l} = \left[ {l_{1} l_{2} l_{3} l_{4} } \right] = \left[ {hltb} \right] = \left[ {x_{1} x_{2} x_{3} x_{4} } \right],} \hfill \\ {{\text{minimize}}} \hfill & {f\left( {\vec{l}} \right) = l_{1}^{2} l_{2} *1.10471 + 0.04811*l_{3} l_{4} *\left( {14.0 + l_{2} } \right),} \hfill \\ {{\text{Subject}}\;{\text{to}}} \hfill & {s_{1} \left( {\vec{l}} \right) = \tau \left( {\vec{l}} \right) - \tau_{{{\text{max}}}} \le 0,} \hfill \\ {} \hfill & {s_{2} \left( {\vec{l}} \right) = \sigma \left( {\vec{l}} \right) - \sigma_{{{\text{max}}}} \le 0,} \hfill \\ {} \hfill & {s_{3} \left( {\vec{l}} \right) = \delta \left( {\vec{l}} \right) - \delta_{{{\text{max}}}} \le 0,} \hfill \\ {} \hfill & {s_{4} \left( {\vec{l}} \right) = l_{1} - l_{4} \le 0,} \hfill \\ {} \hfill & {s_{5} \left( {\vec{l}} \right) = {{\rm P}} - P_{c} \left( {\vec{l}} \right) \le 0,} \hfill \\ {} \hfill & {s_{6} \left( {\vec{l}} \right) = 0.125 - l_{1} \le 0,} \hfill \\ {} \hfill & {s_{7} \left( {\vec{l}} \right) = 1.10471*l_{1}^{2} + 0.04811*l_{3} l_{4} \left( {14.0 + l_{2} } \right) - 5.0 \le 0,} \hfill \\ \end{array}$$

Decision variable interval values:

\begin{aligned} & 0.1 \le l_{1} \le 2, \\ & 0.1 \le l_{2} \le 10, \\ & 0.1 \le l_{3} \le 10, \\ & 0.1 \le l_{4} \le 2, \end{aligned}

where

\begin{aligned} \tau \left( {\vec{l}} \right) & = \sqrt {\tau^{{\prime}2} + 2\tau^{\prime} \tau^{\prime\prime} \frac{{l_{2} }}{2R} + \left( {\tau^{\prime\prime} } \right)^{2} } , \\ \tau^{\prime} & = \frac{P}{{\sqrt 2 l_{1} l_{2} }},\quad \tau^{\prime\prime} = \frac{MR}{J},\quad M = P\left( {L + \frac{{l_{2} }}{2}} \right), \\ R & = \sqrt {\frac{{l_{2}^{2} }}{4} + \left( {\frac{{l_{1} + l_{3} }}{2}} \right)^{2} } , \\ J & = 2\left\{ {\sqrt 2 l_{1} l_{2} \left[ {\frac{{l_{2}^{2} }}{4} + \left( {\frac{{l_{1} + l_{3} }}{2}} \right)^{2} } \right]} \right\}, \\ P_{c} \left( {\vec{l}} \right) & = \frac{{4.013E\sqrt {\frac{{l_{3}^{2} l_{4}^{6} }}{36}} }}{{L^{2} }}\left( {1 - \frac{{l_{3} }}{{2L}}\sqrt {\frac{E}{4G}} } \right), \\ \end{aligned}

where $$\sigma_{{{\text{max}}}} = 30000$$ psi, P = 6000 lb, L = 14 in, $$\delta_{{{\text{max}}}} = 0.25$$ in, $${{\rm E}} = 3 \times 10^{6}$$ psi, $$\tau_{{{\text{max}}}} = 13600$$ psi and $$G = 12 \times 10^{6}$$ psi.
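As a small sketch, the cost function above can be evaluated directly (the helper name and the sample design point are illustrative, not a solution from Table 12):

```python
def wbd_cost(h, l, t, b):
    """Welded beam fabrication cost: 1.10471*h^2*l + 0.04811*t*b*(14.0 + l)."""
    return 1.10471 * h * h * l + 0.04811 * t * b * (14.0 + l)

# Evaluate at an arbitrary in-bounds design point (no optimality claim);
# a full solver would add the penalized constraints s1..s7.
c = wbd_cost(0.2, 3.5, 9.0, 0.2)
print(round(c, 4))
```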

We compare the statistical results of 30 independent executions of GPOFWA with those of some other excellent algorithms; the values of the design variables obtained, together with the mean, best value, and variance of the optimal solution, are shown in Table 12. The results show that GPOFWA performs better than the other algorithms.

### Spring design problem

This constrained engineering problem is to design a tension/compression spring with minimum weight; its structure is shown in Fig. 12. Three variables can be optimized: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The spring design problem is mathematically formulated as follows:

$$\begin{array}{*{20}l} {{\text{Consider}}} \hfill & {\vec{l} = \left[ {l_{1} l_{2} l_{3} } \right] = \left[ {dDN} \right] = \left[ {x_{1} x_{2} x_{3} } \right],} \hfill \\ {{\text{Minimize}}} \hfill & {f\left( {\vec{l}} \right) = \left( {l_{3} + 2} \right)*l_{2} l_{1}^{2} ,} \hfill \\ {{\text{Subject}}\;{\text{to}}} \hfill & {s_{1} \left( {\vec{l}} \right) = 1 - \frac{{l_{2}^{3} l_{3} }}{{71785l_{1}^{4} }} \le 0,} \hfill \\ {} \hfill & {s_{2} \left( {\vec{l}} \right) = \frac{{4l_{2}^{2} - l_{1} l_{2} }}{{12566\left( {l_{2} l_{1}^{3} - l_{1}^{4} } \right)}} + \frac{1}{{5108l_{1}^{2} }} - 1 \le 0,} \hfill \\ {} \hfill & {s_{3} \left( {\vec{l}} \right) = 1 - \frac{{140.45l_{1} }}{{l_{2}^{2} l_{3} }} \le 0,} \hfill \\ {} \hfill & {s_{4} \left( {\vec{l}} \right) = \frac{{l_{2} + l_{1} }}{1.5} - 1 \le 0,} \hfill \\ \end{array}$$

Decision variable interval values:

\begin{aligned} & 0.05 \le l_{1} \le 2.00, \\ & 0.25 \le l_{2} \le 1.30, \\ & 2.00 \le l_{3} \le 15.0, \\ \end{aligned}
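A sketch of the objective and a feasibility check, following the widely used formulation of this benchmark (a constraint value ≤ 0 means satisfied; the sample point is illustrative, not a result from Table 13):

```python
def spring_weight(d, D, N):
    """Spring weight (N + 2) * D * d^2 (minimization objective)."""
    return (N + 2.0) * D * d * d

def spring_feasible(d, D, N):
    """All four constraints s1..s4 of the standard formulation, s <= 0 is feasible."""
    s = [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),
        (4.0 * D * D - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d * d) - 1.0,
        1.0 - 140.45 * d / (D * D * N),
        (D + d) / 1.5 - 1.0,
    ]
    return all(v <= 0.0 for v in s)

# An arbitrary clearly feasible (non-optimal) in-bounds design point
d, D, N = 0.06, 0.5, 10.0
print(round(spring_weight(d, D, N), 5), spring_feasible(d, D, N))
```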

We also compare the statistical results of 30 independent executions of GPOFWA with those of other excellent algorithms; the values of the design variables obtained, together with the mean, best value, and variance of the optimal solution, are shown in Table 13. The results show that GPOFWA obtains better results than the other algorithms. GPOFWA has performed well on these two engineering problems, which indicates that the algorithm better balances exploration and exploitation.

### Three bar truss design problem

The three-bar truss design problem is a classic design problem in the field of engineering structures. The optimization goal is to design a truss that is as light as possible while satisfying the constraints on stress, deflection, and buckling; mathematically, the volume of the truss structure is minimized subject to three stress constraints. The structural model and parameters of the three-bar truss design problem are shown in Fig. 13, and the mathematical formulation is given below:

$$\begin{array}{*{20}l} {{\text{Consider}}} \hfill & {\vec{x} = \left[ {x_{1} x_{2} } \right] = \left[ {A_{1} A_{2} } \right],} \hfill \\ {{\text{Minimize}}} \hfill & {f\left( {\vec{x}} \right) = \left( {2\sqrt 2 x_{1} + x_{2} } \right)l,} \hfill \\ {{\text{Subject}}\;{\text{to}}} \hfill & {\frac{{\sqrt 2 x_{1} + x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}p - \sigma \le 0,} \hfill \\ {} \hfill & {\frac{{x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}p - \sigma \le 0,} \hfill \\ {} \hfill & {\frac{1}{{x_{1} + \sqrt 2 x_{2} }}p - \sigma \le 0,} \hfill \\ \end{array}$$

Decision variable interval values:

\begin{aligned} & 0 \le x_{1} \le 1, \\ & 0 \le x_{2} \le 1, \\ & l = 100, \\ & p = 20, \\ & \sigma = 2.0, \\ \end{aligned}
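The volume objective with the stated parameter $$l$$ can be sketched as follows (the design point below is a commonly quoted near-optimal pair from the broader literature, not this paper's Table 14 result):

```python
import math

def truss_volume(x1, x2, l=100.0):
    """Three-bar truss volume: (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

# Area ratios close to the commonly reported optimum of this benchmark
v = truss_volume(0.78867, 0.40825)
print(round(v, 2))
```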

We compare the statistical results of 30 independent executions of GPOFWA with those of other excellent algorithms; the values of the design variables obtained, together with the mean, best value, and variance of the optimal solution, are shown in Table 14. The results show that the optimal values obtained by GPOFWA, PO, SSA, and WCA are identical, but the mean and variance of GPOFWA are the smallest among all algorithms, which indicates that the proposed GPOFWA is feasible and effective for solving the three-bar truss design problem.

## Conclusions

As an emerging swarm intelligence algorithm, PO has good exploration capability, exploitation capability, and convergence speed, but the subgroup optimal solutions used by the original PO are limited, and PO's recent past-based position updating strategy (RPPUS) has weaknesses. The explosion search mechanism of the fireworks algorithm has clear potential and unique advantages. In this paper, this explosion search mechanism is used to expand and optimize the subgroup optimal solutions in the political optimizer, while the Gaussian explosion spark of the fireworks algorithm compensates for some of the shortcomings of RPPUS. In addition, a new local leader called the Converged Mobility Center (CMC), based on bi-directional consideration, is designed to guide the movement of the search agents.

Based on these ideas, a hybrid algorithm called GPOFWA is obtained. To verify the good performance of GPOFWA, we conducted a two-part experiment. In the first part, we selected a set of well-studied benchmark functions and compared GPOFWA with swarm intelligence optimization algorithms including HHO, GWO, SCA, SSA, WCA, WOA, LSA, and the original PO. Compared with PO, the proposed algorithm shows significantly improved accuracy, convergence behavior, stability, and robustness on both unimodal and multimodal functions, and it also shows clear advantages over the other methods. In the second part, we applied GPOFWA to three constrained engineering problems; thanks to the improved explosion search mechanism, GPOFWA achieved the best results on all of them. These results show that GPOFWA has excellent performance on engineering design problems, and comparable performance can be expected on other, more complex engineering problems.

In addition to the qualities mentioned above, GPOFWA has some limitations that need to be highlighted. First, due to the addition of the explosion search mechanism, the time overhead of the algorithm has increased, although the CMC reduces this added overhead as much as possible. Second, the algorithm has a total of 4 parameters, which makes it relatively complex; this should be improved in the future. In future work, a binary version of GPOFWA could be developed to solve discrete practical problems such as antenna design and feature selection. The CMC could also be combined with other swarm optimization algorithms to further test its performance.