A hybrid particle swarm optimization algorithm for solving engineering problems

To overcome the disadvantages of premature convergence and entrapment in local optima, this paper proposes an improved particle swarm optimization algorithm (named the NDWPSO algorithm) based on multiple hybrid strategies. Firstly, the elite opposition-based learning method is utilized to initialize the particle position matrix. Secondly, dynamic inertia weight parameters are introduced to improve the global search speed in the early iterative phase. Thirdly, a new local-optimal jump-out strategy is proposed to overcome the "premature" problem. Finally, the algorithm applies the spiral shrinkage search strategy from the whale optimization algorithm (WOA) and the differential evolution (DE) mutation strategy in the later iterations to accelerate convergence. The NDWPSO is further compared with 8 other well-known nature-inspired algorithms (3 PSO variants and 5 other intelligent algorithms) on 23 benchmark test functions and three practical engineering problems. Simulation results show that the NDWPSO algorithm obtains better results than the other 3 PSO variants for all 49 sets of data. Compared with the 5 other intelligent algorithms, the NDWPSO obtains 69.2%, 84.6%, and 84.6% of the best results for the benchmark functions (f1-f13) in 3 kinds of dimensional spaces (Dim = 30, 50, 100) and 80% of the best optimal solutions for the 10 fixed-dimensional multimodal benchmark functions. In addition, the best design solutions are obtained by NDWPSO for all 3 classical practical engineering problems.


Improved PSO algorithms
Many researchers have proposed improved PSO algorithms to solve engineering problems in different fields. For instance, Yeh 17 proposed an improved particle swarm algorithm, which combines a new self-boundary search and a bivariate update mechanism, to solve the reliability redundancy allocation problem (RRAP). Solomon et al. 18 designed a collaborative multi-group particle swarm algorithm with high parallelism that was used to test the adaptability of Graphics Processing Units (GPUs) in distributed computing environments. Mukhopadhyay and Banerjee 19 proposed a chaotic multi-group particle swarm optimization (CMS-PSO) to estimate the unknown parameters of an autonomous chaotic laser system. Duan et al. 20 designed an improved particle swarm algorithm with nonlinear adjustment of the inertia weights to improve the coupling accuracy between laser diodes and single-mode fibers. Sun et al. 21 proposed a particle swarm optimization algorithm combined with non-Gaussian stochastic distributions for the optimal design of wind turbine blades. Based on a multiple-swarm scheme, Liu et al. 22 proposed an improved particle swarm optimization algorithm to predict the temperatures of steel billets in a reheating furnace. In 2022, Gad 23 analyzed 2140 existing papers on swarm intelligence published between 2017 and 2019 and pointed out that the PSO algorithm still needs further research. In general, the improved methods can be classified into four categories: (1) Adjusting the distribution of algorithm parameters. Feng et al. 24 used a nonlinear adaptive method for the inertia weights to balance local and global search and introduced asynchronously varying acceleration coefficients. (2) Changing the updating formula of the particle swarm position. Both papers 25 and 26 used chaotic mapping functions to update the inertia weight parameters and combined them with a dynamic weighting strategy to update the particle swarm positions; this approach equips the particle swarm algorithm with fast convergence. (3) Changing the initialization of the swarm. Alsaidy and Abbood 27 proposed hybrid task-scheduling algorithms that replace the random initialization of the meta-heuristic with the heuristic algorithms MCT-PSO and LJFP-PSO. (4) Combining with other intelligent algorithms. Liu et al. 28 introduced the differential evolution (DE) algorithm into PSO to increase the diversity of the particle swarm and reduce the probability of the population falling into local optima.

Particle swarm optimization (PSO)
The particle swarm optimization algorithm is a population intelligence algorithm for solving continuous and discrete optimization problems. It originated from the social behavior of individuals in bird and fish flocks 6. The core idea of the PSO algorithm is that an individual particle identifies potential solutions by flying through a defined constraint space, adjusts its exploration direction to approach the global optimal solution based on the information shared among the group, and finally solves the optimization problem. Each particle i has two attributes: a velocity vector $V_i = [v_{i1}, v_{i2}, \ldots, v_{ij}, \ldots, v_{iD}]$ and a position vector $X_i = [x_{i1}, x_{i2}, \ldots, x_{ij}, \ldots, x_{iD}]$. The velocity vector is used to modify the motion path of the swarm; the position vector represents a potential solution of the optimization problem. Here $j = 1, 2, \ldots, D$, where $D$ is the dimension of the constraint space. The equations for updating the velocity and position of the particle swarm are shown in Eqs. (1) and (2):

$$v_{ij}^{k+1} = \omega\, v_{ij}^{k} + c_1 r_1 \left(Pbest_{ij}^{k} - x_{ij}^{k}\right) + c_2 r_2 \left(Gbest_{j}^{k} - x_{ij}^{k}\right), \quad (1)$$

$$x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1}. \quad (2)$$

Here $Pbest_{i}^{k}$ represents the previous optimal position of particle $i$, and $Gbest$ is the optimal position discovered by the whole population; $i = 1, 2, \ldots, n$, where $n$ denotes the size of the particle swarm. $c_1$ and $c_2$ are the acceleration constants, which are used to adjust the search step of the particle 29. $r_1$ and $r_2$ are two random values uniformly distributed in the range [0, 1], which are used to improve the randomness of the particle search. $\omega$ is the inertia weight parameter, which is used to adjust the scale of the search range of the particle swarm 30. The basic PSO sets the inertia weight parameter as a time-varying parameter to balance global exploration and local exploitation. Its update equation is
$$\omega = \omega_{max} - (\omega_{max} - \omega_{min})\,\frac{k}{Mk},$$
where $\omega_{max}$ and $\omega_{min}$ represent the upper and lower limits of the inertia weight, and $k$ and $Mk$ are the current iteration and the maximum number of iterations, respectively.
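For readers who prefer code, the sketch below renders Eqs. (1) and (2) with the linearly decreasing inertia weight in Python. The objective function, bounds, and parameter values (swarm size, ω_max, ω_min, c1, c2) are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

def basic_pso(f, lb, ub, n=40, dim=30, max_iter=200,
              w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimal basic PSO; parameter values are illustrative assumptions."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, size=(n, dim))          # positions (candidate solutions)
    v = np.zeros((n, dim))                          # velocities
    pbest = x.copy()                                # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()          # global best position
    for k in range(max_iter):
        w = w_max - (w_max - w_min) * k / max_iter  # linearly decreasing inertia weight
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # Eq. (1)
        x = np.clip(x + v, lb, ub)                              # Eq. (2) plus bound handling
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Usage: minimize the sphere function in 30 dimensions.
best_x, best_f = basic_pso(lambda z: np.sum(z ** 2), lb=-100, ub=100)
```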

Improved particle swarm optimization algorithm
According to the no-free-lunch theorem 31, no single algorithm can solve every practical problem with high quality and efficiency, and optimization problems are becoming increasingly complex and diverse. In this section, several improvement strategies are proposed to improve the search efficiency of the basic PSO algorithm and overcome its shortcomings.

Improvement strategies
The optimization strategies of the improved PSO algorithm are as follows:

(1) The inertia weight parameter is updated by an improved chaotic-variables method instead of the linear decreasing strategy. Chaotic mapping performs the whole search at a higher speed and is more resistant to falling into local optima than a probability-dependent random search 32. However, a purely chaotic weight may cause particles to fly out of the boundary of the global optimum. To ensure that the population can converge to the global optimum, an improved variant of the Iterative chaotic map, whose base form is
$$\omega_{k+1} = \sin\!\left(\frac{b\pi}{\omega_{k}}\right),$$
is adopted. Here $\omega_{k}$ is the inertia weight parameter at iteration $k$, and $b$ is a control parameter in the range [0, 1].

(2) The acceleration coefficients are updated by a linear transformation. $c_1$ and $c_2$ represent the influence on a particle of its own information and of the population information, respectively. To improve the search performance of the population, $c_1$ and $c_2$ are changed from fixed values to time-varying parameters that are updated linearly with the number of iterations:
$$c_{1} = c_{max} - (c_{max} - c_{min})\,\frac{k}{Mk}, \qquad c_{2} = c_{min} + (c_{max} - c_{min})\,\frac{k}{Mk},$$
where $c_{max}$ and $c_{min}$ are the maximum and minimum values of the acceleration coefficients, respectively.

(3) The initialization scheme is based on elite opposition-based learning. A high-quality initial population accelerates the convergence of the algorithm and improves the accuracy of the optimal solution. Thus, the elite opposition-based learning strategy 33 is introduced to generate the position matrix of the initial population. Suppose the elite individual of the population is $X = [x_{1}, x_{2}, \ldots, x_{j}, \ldots, x_{D}]$; its elite opposition-based solution $X_{o} = [x_{o1}, x_{o2}, \ldots, x_{oj}, \ldots, x_{oD}]$ is computed as
$$x_{oj} = k_{r}\,(lx_{oj} + ux_{oj}) - x_{j},$$
where $k_{r}$ is a random value in (0, 1), and $lx_{oj}$ and $ux_{oj}$ are the dynamic boundaries of the elite opposition-based solution in the $j$-th dimension, taken from the current population in that dimension. The advantage of the dynamic boundary is that it reduces the exploration space of the particles, which is beneficial to the convergence of the algorithm. When an elite opposition-based solution falls out of bounds, it is reset within the dynamic boundaries:
$$x_{oij} = rand(lx_{oij},\, ux_{oij}).$$
After calculating the fitness values of the elite solutions and the elite opposition-based solutions, the $n$ highest-quality solutions are selected to form the new initial population position matrix.

(4) The position updating Eq. (2) is modified based on a dynamic-weight strategy. To improve the speed of the global search of the population, the dynamic-weight strategy from the artificial bee colony algorithm 34 is introduced to enhance the computational performance. In the new position updating equation, $\rho$ is a random value in (0, 1), $\psi$ is an acceleration coefficient, and $\omega'$ is the dynamic weight coefficient; both $\psi$ and $\omega'$ are updated from $f(i)$, the fitness value of particle $i$, and $u$, the average fitness of the population at the current iteration. These parameter-update equations, introduced into the position updating equation, attract the particles towards the position of the best-so-far solution in the search space.

(5) A new local-optimal jump-out strategy is added for escaping from local optima. When the fitness value of the best particle of the population does not change for $M$ iterations, the algorithm determines that the population has fallen into a local optimum. To jump out of it, the position information of 40% of the individuals in the population is reset; in other words, their position vectors are randomly regenerated within the search space. $M$ is set to 5% of the maximum number of iterations.

(6) A new spiral update search strategy is added after the local-optimal jump-out strategy. Since the whale optimization algorithm (WOA) is good at exploiting the local search space 35, the spiral update search strategy of the WOA 36 is introduced to update the positions of the particles after the swarm jumps out of a local optimum:
$$x_{i}(k+1) = D \cdot e^{Bl} \cdot \cos(2\pi l) + Gbest,$$
where $D = |x_{i}(k) - Gbest|$ denotes the distance between the particle and the global optimal solution found so far, $B$ is a constant that defines the shape of the logarithmic spiral, and $l$ is a random value in [-1, 1] representing the distance between the newly generated particle and the global optimal position: $l = -1$ means the closest distance and $l = 1$ the farthest, as can be observed directly in Fig. 1.

(7) The DE/best/2 mutation strategy is introduced to form mutant particles. Four individuals that differ from the current particle are randomly selected from the population; the vector differences between them are rescaled and combined with the global optimal position to form the mutant particle:
$$x^{*} = Gbest + F\,(x_{r_{1}} - x_{r_{2}}) + F\,(x_{r_{3}} - x_{r_{4}}),$$
where $x^{*}$ is the mutated particle, $F$ is the mutation scale factor, and $r_{1}, r_{2}, r_{3}, r_{4}$ are random integer values in (0, n], mutually distinct and not equal to $i$. Specific particles are selected for mutation according to the screening condition
$$rand(0, 1) < Cr \quad \text{or} \quad i = i_{rand},$$
where $Cr$ represents the probability of mutation, $rand(0, 1)$ is a random number in (0, 1), and $i_{rand}$ is a random integer value in (0, n].

The improved PSO incorporates the search ideas of other intelligent algorithms (DE, WOA), so the improved algorithm proposed in this paper is named NDWPSO. The overall procedure of the NDWPSO algorithm is sketched below.
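The paper presents its pseudo-code as a figure; the following is only a condensed, unofficial Python sketch of the loop structure described above. The parameter values (b, c_max, c_min, B, F, Cr), the greedy DE selection, the use of the base Iterative map, and the replacement of the dynamic-weight position update by the standard Eq. (2) update are all simplifying assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def elite_opposition_init(f, lb, ub, n, dim):
    """Strategy (3): elite opposition-based initialization (sketch)."""
    x = rng.uniform(lb, ub, size=(n, dim))
    lo, hi = x.min(axis=0), x.max(axis=0)            # dynamic boundaries per dimension
    xo = rng.random() * (lo + hi) - x                # opposition-based solutions
    reset = rng.uniform(lo, hi, size=xo.shape)       # fallback points inside the dynamic bounds
    xo = np.where((xo < lb) | (xo > ub), reset, xo)  # out-of-bounds handling
    pool = np.vstack([x, xo])
    fit = np.apply_along_axis(f, 1, pool)
    return pool[np.argsort(fit)[:n]]                 # keep the n best of the 2n candidates

def ndwpso_sketch(f, lb, ub, n=40, dim=30, max_iter=200, b=0.7,
                  c_max=2.5, c_min=0.5, B=1.0, F=0.5, Cr=0.9):
    x = elite_opposition_init(f, lb, ub, n, dim)
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g, gval = pbest[np.argmin(pval)].copy(), pval.min()
    w, stall, M = 0.9, 0, max(1, int(0.05 * max_iter))   # strategy (5): M = 5% of iterations
    for k in range(max_iter):
        w = max(abs(np.sin(b * np.pi / w)), 1e-6)    # strategy (1): base Iterative chaotic map
        c1 = c_max - (c_max - c_min) * k / max_iter  # strategy (2): time-varying coefficients
        c2 = c_min + (c_max - c_min) * k / max_iter
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # Eq. (1)
        x = np.clip(x + v, lb, ub)                   # Eq. (2); dynamic-weight variant omitted
        if stall >= M:                               # strategy (5): local-optimal jump-out
            idx = rng.choice(n, size=int(0.4 * n), replace=False)
            x[idx] = rng.uniform(lb, ub, size=(len(idx), dim))
            l = rng.uniform(-1, 1, size=(n, 1))      # strategy (6): WOA spiral update
            x = np.clip(np.abs(x - g) * np.exp(B * l) * np.cos(2 * np.pi * l) + g, lb, ub)
            stall = 0
        for i in range(n):                           # strategy (7): DE/best/2 mutation
            if rng.random() < Cr:
                r = rng.choice([j for j in range(n) if j != i], size=4, replace=False)
                trial = np.clip(g + F * (x[r[0]] - x[r[1]]) + F * (x[r[2]] - x[r[3]]), lb, ub)
                if f(trial) < f(x[i]):               # greedy selection (simplification)
                    x[i] = trial
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        if pval.min() < gval:
            g, gval, stall = pbest[np.argmin(pval)].copy(), pval.min(), 0
        else:
            stall += 1                               # no improvement: count towards jump-out
    return g, gval
```

The sketch is meant to make the interplay of the strategies concrete; the exact ordering and parameterization of the steps follow the description above only approximately.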

Comparing the distribution of inertia weight parameters
There are several improved PSO algorithms (such as CDWPSO 25 and SDWPSO 26) that adopt the dynamically weighted particle position update strategy as their improvement strategy. In the inertia-weight update equations of the CDWPSO and SDWPSO algorithms, A is a value in (0, 1], r_max and r_min are the upper and lower limits of the fluctuation range of the inertia weight parameters, k is the current number of algorithm iterations, and Mk denotes the maximum number of iterations.
Considering that the inertia-weight update method of our proposed NDWPSO is comparable to those of the CDWPSO and SDWPSO, a comparison experiment on the distribution of the inertia weight parameters is set up in this section. The maximum number of iterations in the experiment is Mk = 500. The distributions of the CDWPSO, SDWPSO, and NDWPSO inertia weights are shown sequentially in Fig. 2.
In Fig. 2, the inertia weight of CDWPSO is a random value in (0, 1]. This may make individual particles fly out of the search range in the late iterations of the algorithm. The inertia weight of SDWPSO, by contrast, tends asymptotically to zero, so that the swarm eventually can no longer move through the search space, making the algorithm extremely prone to falling into local optima. The distribution of the inertia weights of the NDWPSO, on the other hand, forms a gentle slope between two curves. Thus, the swarm can lock onto the region of the global optimum faster in the early iterations and locate the global optimum more precisely in the late iterations. The reason is that the inertia weight values of two adjacent iterations are inversely proportional to each other. Besides, the time-varying part of the inertia weight within NDWPSO is designed to reduce the chaotic character of the parameter. The inertia weight of NDWPSO thus avoids the disadvantages of the two schemes above, so its design is more reasonable.
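For intuition about how these schedules differ, the short trace below compares the basic PSO's linearly decreasing weight with the base form of the Iterative chaotic map; the paper's improved map adds a time-varying component, so this is only a qualitative approximation of the curves in Fig. 2, and the map parameter b = 0.7 is an assumption.

```python
import numpy as np

Mk, b = 500, 0.7                                            # iteration budget and map parameter (assumed)
linear = [0.9 - (0.9 - 0.4) * k / Mk for k in range(Mk)]    # basic PSO: linear decrease
w, chaotic = 0.9, []
for _ in range(Mk):
    w = abs(np.sin(b * np.pi / w))                          # adjacent weights depend inversely on each other
    chaotic.append(w)
print("linear head:", [round(v, 3) for v in linear[:4]])
print("chaotic head:", [round(v, 3) for v in chaotic[:4]])
```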

Experiment and discussion
In this section, three experiments are set up to evaluate the performance of NDWPSO: (1) a comparison on 23 classical benchmark functions 37 between NDWPSO and three particle swarm algorithms (PSO 6, CDWPSO 25, SDWPSO 26); (2) a comparison on the benchmark test functions between NDWPSO and other intelligent algorithms (Whale Optimization Algorithm (WOA) 36, Harris Hawks Optimization (HHO) 38, Grey Wolf Optimizer (GWO) 39, Archimedes Optimization Algorithm (AOA) 40, Equilibrium Optimizer (EO) 41, and Differential Evolution (DE) 42); (3) experiments on solving three real engineering problems (welded beam design 43, pressure vessel design 44, and three-bar truss design 38). All experiments are run on a computer with an Intel i5-11400F CPU at 2.60 GHz and 16 GB RAM, and the code is written in MATLAB R2017b.
The benchmark set consists of 23 classical functions: indefinite-dimensional unimodal functions (F1-F7), indefinite-dimensional multimodal functions (F8-F13), and fixed-dimensional multimodal functions (F14-F23). The unimodal benchmark functions are used to evaluate the global search performance of the different algorithms, while the multimodal benchmark functions reflect the ability of an algorithm to escape from local optima. The mathematical definitions of the benchmark functions are given in Supplementary Tables S1-S3 online.
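As a concrete illustration of the two categories, the sphere function (unimodal, typically F1 in this classical suite) and the Rastrigin function (multimodal, typically F9) can be written as follows; the exact definitions used in the paper are those in Supplementary Tables S1-S3.

```python
import numpy as np

def sphere(x):
    """Unimodal: a single global minimum f(0) = 0; probes convergence speed."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Multimodal: a regular grid of local minima; probes escape from local optima."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
```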

Experiments on benchmark functions between NDWPSO and other PSO variants
The purpose of this experiment is to show the performance advantages of the NDWPSO algorithm. Here, the dimensions and corresponding population sizes of the 13 benchmark functions (7 unimodal and 6 multimodal) are set to (30, 40), (50, 70), and (100, 130). The population size for the 10 fixed-dimensional multimodal functions is set to 40. Each algorithm is repeated 30 times independently, and the maximum number of iterations is 200. The performance of each algorithm is measured by the mean and the standard deviation (SD) of the results on the different benchmark functions. The parameters of the NDWPSO are set to the values given in the previous section. The experimental data are reported to two decimal places, except that additional digits are retained where needed for a more accurate comparison. The best result in each group of experiments is displayed in bold font. A result is recorded as 0 if its value is below 10^-323. The experimental parameter settings in this paper differ from those in the references (PSO 6, CDWPSO 25, SDWPSO 26), so the final experimental data differ from the data within those references. As shown in Tables 1 and 2, the NDWPSO algorithm obtains better results than the other PSO variants for all 49 sets of data, covering both the 13 indefinite-dimensional benchmark functions and the 10 fixed-dimensional multimodal benchmark functions. Remarkably, the SDWPSO algorithm reaches the same computational accuracy as NDWPSO on the unimodal functions f1-f4 and the multimodal functions f9-f11. The solution accuracy of NDWPSO is higher than that of the other PSO variants on the fixed-dimensional multimodal benchmark functions f14-f23. It can be concluded that the NDWPSO has excellent global search capability, local search capability, and capability for escaping local optima.
In addition, the convergence curves for the 23 benchmark functions are shown in Figs. 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 and 19. The NDWPSO algorithm has a faster convergence speed in the early stage of the search when processing functions f1-f6, f8-f14, f16, and f17, and finds the global optimal solution within a smaller number of iterations. On the remaining benchmark functions, the NDWPSO algorithm shows no outstanding convergence speed in the early iterations, for two reasons. On the one hand, the fixed-dimensional multimodal benchmark functions contain many disturbances and local optimal solutions across the whole search space. On the other hand, the initialization scheme based on elite opposition-based learning is still stochastic, which can place the initial positions far from the global optimal solution. The inertia weight based on chaotic mapping and the spiral updating strategy significantly improve the convergence speed and computational accuracy of the algorithm in the late search stage. As a result, the NDWPSO algorithm finds better solutions than the other algorithms in the middle and late stages of the search.
To evaluate the performance of the different PSO algorithms, a statistical test is conducted. Due to the stochastic nature of meta-heuristics, it is not enough to compare algorithms based only on the mean and standard deviation values. The optimization results cannot be assumed to obey a normal distribution; thus, it is necessary to judge whether the results of the algorithms differ from each other in a statistically significant way. Here, the Wilcoxon non-parametric statistical test 45 is used to obtain a p-value that verifies whether two sets of solutions differ to a statistically significant extent. Generally, p ≤ 0.05 is considered to indicate a statistically significant superiority of one set of results. The p-values calculated by the Wilcoxon rank-sum test comparing NDWPSO and the other PSO algorithms are listed in Table 3 for all benchmark functions. The p-values in Table 3 further confirm the superiority of the NDWPSO, as all of them are much smaller than 0.05.
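Such a test can be reproduced with SciPy; in the snippet below, the two result vectors are placeholders standing in for the best-fitness values of 30 independent runs of two algorithms on one benchmark function.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
ndwpso_runs = rng.normal(1e-8, 1e-9, size=30)   # placeholder results for NDWPSO, 30 runs
pso_runs = rng.normal(1e-2, 1e-3, size=30)      # placeholder results for a competitor

stat, p_value = ranksums(ndwpso_runs, pso_runs)  # Wilcoxon rank-sum test
print(f"p = {p_value:.3e}; significant at the 0.05 level: {p_value <= 0.05}")
```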
In general, Figs. 3-19 show that the NDWPSO has the fastest convergence rate in finding the global optimum.

Comparison experiments between NDWPSO and other intelligent algorithms
Experiments are conducted to compare NDWPSO with several other intelligent algorithms (WOA, HHO, GWO, AOA, EO, and DE). The experimental objects are the 23 benchmark functions, and the parameters of the NDWPSO algorithm are set as in the previous experiment. The maximum number of iterations is increased to 2000 to fully demonstrate the performance of each algorithm, and each algorithm is repeated 30 times independently. The parameters of the other intelligent algorithms in the experiments are set as shown in Table 4.
To ensure the fairness of the comparison, all parameters are taken from the original literature of the corresponding algorithms. The experimental results are shown in Tables 5, 6, 7 and 8 and Figs. 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35 and 36. The experimental data of NDWPSO and the other intelligent algorithms for the 30-, 50-, and 100-dimensional benchmark functions (f1-f13) are recorded in Tables 5, 6 and 7, respectively, and the comparison data for the fixed-dimensional multimodal benchmark functions (f14-f23) are recorded in Table 8. According to the data in Tables 5, 6 and 7, the NDWPSO algorithm obtains 69.2%, 84.6%, and 84.6% of the best results for the benchmark functions (f1-f13) in the three dimensional settings (Dim = 30, 50, 100), respectively. In Table 8, the NDWPSO algorithm obtains 80% of the optimal solutions on the 10 fixed-dimensional multimodal benchmark functions.
The convergence curves of each algorithm are shown in Figs. 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35 and 36. The NDWPSO algorithm demonstrates two convergence behaviors when calculating the benchmark functions in the 30-, 50-, and 100-dimensional search spaces. The first behavior is fast convergence within a small number of iterations at the beginning of the search, owing to the Iterative mapping strategy and the dynamically weighted position update scheme used in the NDWPSO algorithm. This scheme can quickly target the region of the search space where the global optimum is located and then precisely lock onto the optimal solution. This behavior is reflected in the convergence trends of the curves for the functions f1-f4 and f9-f11. The second behavior is that NDWPSO gradually improves its convergence accuracy and rapidly approaches the global optimum in the middle and late stages of the iteration. In these cases the NDWPSO algorithm does not converge quickly in the early iterations, which may prevent the swarm from falling into a local optimum. This behavior is demonstrated by the convergence trends of the curves for the functions f6, f12, and f13, and it also shows that the NDWPSO algorithm has an excellent local search ability.
Combining the experimental data with the convergence curves, it is concluded that the NDWPSO algorithm has a faster convergence speed, and that its effectiveness and global convergence are superior to those of the other intelligent algorithms.

Experiments on classical engineering problems
Three constrained classical engineering design problems (welded beam design, pressure vessel design 43, and three-bar truss design 38) are used to evaluate the NDWPSO algorithm. The experiments compare the NDWPSO algorithm with 5 other intelligent algorithms (WOA 36, HHO, GWO, AOA, EO 41). Each algorithm is given the same maximum number of iterations and population size (Mk = 500, n = 40) and is repeated 30 times independently. The parameters of the algorithms are set as in Table 4. The experimental results of the three engineering design problems are recorded in Tables 9, 10 and 11 in turn. The reported result data are the averages of the solved values.

Welded beam design
The target of the welded beam design problem is to find the design that minimizes the manufacturing cost of the welded beam under the problem constraints, as shown in Fig. 37. The design variables are the thickness of the weld seam (h), the length of the clamped bar (l), the height of the bar (t), and the thickness of the bar (b). The optimization problem minimizes the fabrication cost
$$f(h, l, t, b) = 1.10471\,h^{2}l + 0.04811\,t\,b\,(14.0 + l),$$
subject to constraints on the shear stress $\tau$, the bending stress $\sigma$, the buckling load $P_c$, the end deflection $\delta$, and side constraints on the design variables. In Table 9, the NDWPSO, GWO, and EO algorithms obtain the best optimal cost. Besides, the standard deviation (SD) of NDWPSO is the lowest, which means it produces very stable results in solving the welded beam design problem.
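A sketch of the welded-beam formulation as it commonly appears in the metaheuristics literature is given below; the constants (P = 6000 lb, L = 14 in, E = 30e6 psi, G = 12e6 psi) and the exact constraint forms follow that common literature version and may differ in minor details from the paper's.

```python
import numpy as np

# Standard welded-beam constants from the literature (assumed here).
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(x):
    """Return the g_i(x) <= 0 values of the common literature formulation."""
    h, l, t, b = x
    tau1 = P / (np.sqrt(2) * h * l)                          # primary shear stress
    M = P * (L + l / 2)                                      # bending moment at the weld
    R = np.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * np.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2) ** 2)
    tau2 = M * R / J                                         # torsional shear stress
    tau = np.sqrt(tau1**2 + 2 * tau1 * tau2 * l / (2 * R) + tau2**2)
    sigma = 6 * P * L / (b * t**2)                           # bending stress in the bar
    delta = 4 * P * L**3 / (E * t**3 * b)                    # end deflection of the bar
    pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36) / L**2
          * (1 - t / (2 * L) * np.sqrt(E / (4 * G))))        # buckling load
    return [tau - TAU_MAX, sigma - SIGMA_MAX, h - b,
            0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
            0.125 - h, delta - DELTA_MAX, P - pc]
```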

Pressure vessel design
Kannan and Kramer 43 proposed the pressure vessel design problem, shown in Fig. 38, to minimize the total cost, including the costs of material, forming, and welding. There are four optimized design variables: the thickness of the shell $T_s$; the thickness of the head $T_h$; the inner radius $R$; and the length of the cylindrical section without considering the head, $L$. The objective function and constraints are
$$f(T_s, T_h, R, L) = 0.6224\,T_s R L + 1.7781\,T_h R^{2} + 3.1661\,T_s^{2} L + 19.84\,T_s^{2} R,$$
$$g_1 = -T_s + 0.0193R \le 0, \qquad g_2 = -T_h + 0.00954R \le 0,$$
$$g_3 = -\pi R^{2} L - \tfrac{4}{3}\pi R^{3} + 1296000 \le 0, \qquad g_4 = L - 240 \le 0.$$

Three-bar truss design
The three-bar truss design problem 38 is one of the most widely used case studies, as shown in Fig. 39. There are two main design parameters: the cross-sectional area of bars 1 and 3 ($A_1 = A_3$) and that of bar 2 ($A_2$). The objective is to minimize the weight of the truss subject to stress, deflection, and buckling constraints. The problem is formulated as
$$\min f(A_1, A_2) = (2\sqrt{2}A_1 + A_2)\,l,$$
$$g_1 = \frac{\sqrt{2}A_1 + A_2}{\sqrt{2}A_1^{2} + 2A_1A_2}P - \sigma \le 0, \qquad g_2 = \frac{A_2}{\sqrt{2}A_1^{2} + 2A_1A_2}P - \sigma \le 0, \qquad g_3 = \frac{1}{A_1 + \sqrt{2}A_2}P - \sigma \le 0,$$
with $l = 100$ cm, $P = 2$ kN/cm², and $\sigma = 2$ kN/cm². From Table 11, NDWPSO obtains the best design solution for this engineering problem and has the smallest standard deviation of the result data. In summary, the NDWPSO shows very competitive results compared with the other intelligent algorithms.
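The three-bar truss problem is compact enough to state fully in code. The sketch below also shows a static-penalty wrapper, which is one common way to feed a constrained problem to PSO-type optimizers; the penalty coefficient and the variable bounds are illustrative choices, not values taken from the paper.

```python
import numpy as np

P_LOAD, SIGMA, L_TRUSS = 2.0, 2.0, 100.0   # load and stress limit (kN/cm^2), bar length (cm)

def truss_weight(a):
    a1, a2 = a
    return (2 * np.sqrt(2) * a1 + a2) * L_TRUSS

def truss_constraints(a):
    """Stress constraints g_i(a) <= 0 of the standard formulation."""
    a1, a2 = a
    d = np.sqrt(2) * a1**2 + 2 * a1 * a2
    return [(np.sqrt(2) * a1 + a2) / d * P_LOAD - SIGMA,
            a2 / d * P_LOAD - SIGMA,
            1 / (a1 + np.sqrt(2) * a2) * P_LOAD - SIGMA]

def penalized(a, rho=1e6):
    """Static-penalty objective usable with any of the compared optimizers."""
    viol = sum(max(0.0, g) ** 2 for g in truss_constraints(a))
    return truss_weight(a) + rho * viol

# The areas A1, A2 are typically bounded in [0, 1] in this benchmark.
```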

Conclusions and future works
An improved algorithm named NDWPSO is proposed to enhance the solving speed and improve the computational accuracy at the same time. The improved NDWPSO algorithm incorporates the search ideas of other intelligent algorithms (DE, WOA). Besides, several new hybrid strategies are proposed to adjust the distribution of algorithm parameters (such as the inertia weight parameter, the acceleration coefficients, the initialization scheme, and the position updating equation). The 23 classical benchmark functions, indefinite-dimensional unimodal (f1-f7), indefinite-dimensional multimodal (f8-f13), and fixed-dimensional multimodal (f14-f23), are applied to evaluate the effectiveness and feasibility of the NDWPSO algorithm. Firstly, NDWPSO is compared with PSO, CDWPSO, and SDWPSO. The simulation results demonstrate the exploitation, exploration, and local-optima-avoidance capabilities of NDWPSO. Secondly, the NDWPSO algorithm is compared with 5 other intelligent algorithms (WOA, HHO, GWO, AOA, EO), and it also performs better than these algorithms. Finally, 3 classical engineering problems are applied to show that the NDWPSO algorithm obtains superior results compared to the other algorithms on constrained engineering optimization problems.
Although the proposed NDWPSO is superior in many computational aspects, some limitations remain and further improvements are needed. The NDWPSO performs an additional initialization step for each particle through the "elite opposition-based learning" strategy, which takes extra computation time before the velocity update. Besides, the "local optimal jump-out" strategy also introduces extra randomness. How to reduce this randomness and how to improve the efficiency of the initialization are issues that need to be discussed further. In addition, in future work, we will try to apply the NDWPSO algorithm to wider fields to solve more complex and diverse optimization problems.

Figure 19. Evolution curve of NDWPSO and other PSO algorithms for f23.

Table 1. Optimization results and comparison for functions (f1-f13). Significant values in bold.

Table 2. Optimization results and comparison for functions (f14-f23). Significant values in bold.

Table 3. Results of the p-values of the Wilcoxon rank-sum test on the benchmark functions.

Table 4. Parameter settings for the algorithms.

Table 5. Optimization results and comparison for functions (f1-f13) with Dim = 30. Significant values in bold.

Table 6. Optimization results and comparison for functions (f1-f13) with Dim = 50. Significant values in bold.

Table 7. Optimization results and comparison for functions (f1-f13) with Dim = 100. Significant values in bold.

Table 8. Optimization results and comparison for functions (f14-f23). Significant values in bold.

Table 9. Comparison of results for the welded beam design problem. Significant values in bold.

Table 10. Comparison of results for the pressure vessel design problem. Significant values in bold.

Table 11. Comparison of results for the three-bar truss design problem. Significant values in bold.