Introduction

Metaheuristic technology is a product of the field of mathematics, and research progress in metaheuristics is often applicable to human life and production. Taking the particle swarm optimization (PSO) algorithm as an example, Wang et al.1 designed a new potential-aware fuzzing scheme named SYNTONY, which uses an efficient PSO to select effective seeds; experiments showed that SYNTONY significantly increases edge coverage. Liu et al.2 proposed a migration-based PSO algorithm that uses dynamic differential grouping for online decomposition; experiments showed that its solutions are superior to those of the current state-of-the-art algorithms on problems of up to 1000 dimensions. Raghav3 proposed an improved PSO (mPSO) algorithm to solve the economic scheduling problem, and the experimental results showed that the proposed algorithm achieves a more reasonable economic cost. Zhang et al.4 proposed a multi-objective collaborative PSO (MOCPSO), which combines a diversity search strategy (DSS) and a convergent search strategy (CSS) to enlarge the local search range of particles and uses a three-category framework to use DSS and CSS effectively. Zhang et al.5 proposed an improved discrete PSO (DPSO) algorithm to guide the underwater nozzle array layout of an artificial upwelling system; the experiments found that, as the number of iterations increases, the layout optimized by the DPSO algorithm becomes more reasonable. Parastoo et al.6 proposed a crossover-based multi-objective discrete PSO (CBMODPSO), applied to parameter evaluation in the multimodal routing problem; the experiments found that the error rate and spacing metric obtained with CBMODPSO were smaller, which proves the better convergence of the algorithm. Zhu et al.7 proposed a multi-strategy PSO based on exponential noise (MEPSO) to solve the problem of low-altitude penetration in safe space; the experiments found that the proposed algorithm can plan simpler and clearer paths in different complex environments. Zhu et al.8 also proposed an improved bare-bones PSO (IBPSO), applied to the deoxyribonucleic acid (DNA) design problem; the experiments found that the DNA sequences designed by the proposed algorithm can avoid secondary structures and effectively reduce the values of the H-measure and similarity combined constraints. Besides the PSO algorithm, other algorithms also have certain advantages.

Other algorithms have also achieved good experimental results on specific problems through improvement or hybridization, which demonstrates the research value of such improvements. Zhu et al.9 proposed a human memory optimization (HMO) algorithm, and the experiments found that the proposed algorithm obtains smaller optimal values. Zhao et al.10 proposed an improved grey wolf optimizer (IGWO), and simulation experiments found that it has good consistency. Zamani and Nadimi-Shahraki11 proposed an evolutionary crow search algorithm (ECSA) to optimize the hyperparameters of ANNs for diagnosing chronic diseases, and the experiments indicated the superiority of ECSA over competitor algorithms in optimizing the network. Nadimi-Shahraki et al.12 proposed a new binary optimizer (BSMO), based on the recently proposed starling murmuration optimizer (SMO), and evaluated its performance on four targeted medical datasets; the experiments show satisfactory results in selecting effective features from these datasets. Zhu et al.13 proposed a Jaya (JAYA) algorithm based on the normal cloud model, applied to the optimization of DNA sequence design, and found that the proposed algorithm can effectively control secondary structures and hybridization during DNA reactions. Xue et al.14 proposed a population-based dung beetle optimization algorithm and applied it to three well-known engineering design problems; experiments showed that the proposed algorithm can effectively deal with constrained problems. Zhu et al.15 proposed an improved manta ray foraging optimization algorithm (MGL-MRFO); experiments found that MGL-MRFO maintains good learning ability and adaptability in different environments and finds reasonable, feasible solutions. Hisham16 suggested combining the gravitational search algorithm (GSA) and sperm swarm optimization (SSO) into a hybrid algorithm called HSSOGSA; the experiments found that HSSOGSA can search and explore any search space with a fast convergence rate without being trapped in a local minimum. Xu et al.17 proposed an integrated clustering scheme that fuses both global and local structure information for ensemble clustering (FSEC) and used the alternating direction method of multipliers (ADMM) to solve the objective function optimization problem; the experiments found that the proposed FSEC outperforms many state-of-the-art ensemble clustering methods.

The Harris hawks optimization (HHO) algorithm is a recent bio-inspired optimization algorithm proposed by Heidari et al.18 and is one of the algorithms most frequently selected to solve optimization problems in recent years. Through in-depth research, scholars have improved and hybridized the HHO algorithm to varying degrees to overcome its inherent shortcomings, such as poor search performance and a tendency to fall into local optima in the later phase of optimization, and have applied it to a variety of complex engineering problems. Xie et al.19 proposed a data-driven method based on HHO and genetic programming (HHO-GP), applied to 12 prediction models of underground structure life under sulfate corrosion; the results showed that the average relative training error and prediction error of the new prediction model were small. Asad et al.20 proposed combining the HHO algorithm with vehicular ad hoc networks (VANETs) to form a new clustering algorithm (HHOCNET); comparing the proposed algorithm with other clustering algorithms showed that HHOCNET yields the smallest number of vehicle cluster heads in the whole network, representing larger coverage per vehicle and proving the higher reliability of the proposed algorithm. Sana et al.21 proposed further optimizing kernel Shapley value (kSV) extraction using the HHO algorithm; this kSV-HHO cancer classification method has the potential to improve interpretability, enhance performance and increase the efficiency of cancer classification. Zhang and Bao22 proposed an HHO phase division method based on hard sequence constraints to automatically determine the optimal number of phases; the advantage of this HHO-based phase division is that it can successfully find the optimal number of phases according to the percentage of performance improvement indicators. In recent years, improved HHO (IHHO) variants have been applied more and more frequently. Ayinla et al.23 proposed using IHHO to design proportional integral derivative (PID) and fractional-order proportional integral derivative (FOPID) controllers to realize optimal speed regulation of a direct current (DC) motor; experiments show that the proposed controller significantly improves the rise time, settling time and maximum overshoot during transients. Bibhuti24 proposed using an improved HHO (IHHO) and Taguchi-coupled additive ratio assessment (ARAS) technology to study the optimal cutting conditions and the variables affecting the cutting parameters of titanium alloy; it has been used to minimize tool wear, chip reduction coefficient and surface roughness. Liu et al.25 proposed a multi-leader HHO model with adaptive mutation (MIHHO-AM) and used the algorithm to optimize the parameters of an Elman neural network (ENN); the experiments found that the proposed model predicts the silicon content in blast furnace molten iron with higher accuracy and adapts better to changing trends.

The improved HHO algorithm has also been applied in information technology, electric power, medicine and other fields. Khatri et al.26 proposed a discrete nature-inspired HHO algorithm (DHHO) and applied it to the independent cascade model of information diffusion to evaluate influence on eight social networks; the experiments found that DHHO achieves a higher final influence on almost all datasets. Gharehchopogh et al.27 proposed a new binary multi-objective dynamic HHO algorithm (MODHHO) based on a mutation operator and applied it to identifying botnets in the Internet of Things; the experiments found that MODHHO achieves a low error rate and high accuracy in the optimization process. Hussein et al.28 proposed a boosted HHO algorithm (BHHO), applied to extracting the parameters of a single-diode photovoltaic (PV) model; experimental data under seven weather conditions were used to verify the performance of the algorithm, and the experiments proved that the proposed algorithm has high consistency and converges to the optimum under all environmental conditions. Ebrahim et al.29 proposed an optimization technique combining HHO and the sine cosine algorithm (SCA), applied to determining the parameters of the optimal DC bus voltage controller; the experiments found that the proposed algorithm improves the DC bus voltage and battery response and improves overall efficiency and fuel cell life. Zhou and Bian30 first proposed a multi-improved bi-objective HHO (MBOHHO) algorithm to solve the sustainable robotic disassembly line balancing problem; the experiments found that MBOHHO obtains more Pareto solutions and a lower inter-generational distance, proving that the algorithm has better convergence and search ability. Aneesh et al.31 proposed a new differential evolution adaptive HHO (DEAHHO), applied to multi-level image threshold segmentation; the experiments found that although DEAHHO requires more computation time, it obtains better fitness values. Zhang et al.32 proposed SSFSHHO, which integrates the Sobol sequence and the stochastic fractal search (SFS) mechanism, to classify Alzheimer's disease (AD) and early mild cognitive impairment (MCI); experiments show that this algorithm outperforms many classical machine learning algorithms and improves the classification performance of AD diagnosis to a certain extent.

The improved HHO algorithms above are clearly superior to the original algorithm in their respective papers, but several issues remain. First, the improved HHO algorithms still suffer from unstable optimization performance, a tendency to stagnate, and high standard deviations when dealing with high-dimensional problems. Second, the actual behaviour of Harris hawks and their prey in nature is often ignored in the design of the improvement strategies. Finally, basic mathematical structures and principles are not thoroughly exploited when innovations are added. It is therefore necessary to modify the algorithm and obtain a new improved version; by addressing the shortcomings of the original HHO algorithm, an algorithm with higher versatility, greater stability and further improved optimization performance can be obtained. The main contributions of this study are as follows:

  1.

    A new integrated improved Harris hawks optimization algorithm (IIHHO) is proposed. First, an intermittent energy adjustment factor is used to shape the nonlinear change of the escape energy; this more realistic intermittent energy regulator effectively improves the local search ability of the algorithm. Then, combined with a compound function formula, the random vector of the Lévy flight is modified into an attenuation vector to obtain a more accurate search range, which helps the algorithm jump out of local optima. Finally, the step size is adjusted by a function based on Cardano's formula, which improves the accuracy of the algorithm.

  2.

    The solutions generated by IIHHO on the 50-dimensional test functions are evaluated using the IEEE Congress on Evolutionary Computation 2013 (CEC 2013) test suite. First, an ablation experiment is used to verify the effectiveness of each innovative mechanism. Second, comparison with other improved algorithms shows that the optimal values and other data indicators obtained by the improved algorithm are the best on most functions. Finally, on the CEC 2022 test suite, the proposed IIHHO algorithm is compared with three recent SOTA algorithms to verify its effectiveness.

  3.

    The algorithm is applied to two engineering experiments, welded beam design and pressure vessel design. The experiments show that the parameter optimization performance of the IIHHO algorithm is better, and the cost value of 5884.74947 obtained for the pressure vessel problem is smaller than that of the other algorithms, which verifies the performance of the IIHHO algorithm.

The remainder of this paper is organized as follows: the basic steps of the original HHO algorithm are presented in the "Harris hawks optimization algorithm" section. The "Integrated improved Harris hawks optimization algorithm" section focuses on the improvement strategies for the HHO algorithm. In the "Experiments" and "Performance of IIHHO on engineering applications" sections, the experiments on the proposed method are carried out, and the results are demonstrated and analyzed; based on the experimental results, the proposed IIHHO algorithm and the other algorithms are analyzed and summarized. The "Conclusion" section is an elaboration and summary of the existing work and of future actions and expectations.

Harris hawks optimization algorithm

This section describes the initialization phase, exploration phase, transition phase and exploitation phase of the HHO algorithm.

Initialization phase

In the initialization phase of the algorithm, the position and fitness value of each individual in the population are initialized by random generation. The fitness value is constantly updated through the objective function in the iteration process, so the objective function for solving the fitness value also needs to be initialized. At the same time, all parameters in the function need to be set with corresponding boundary constraints.

Exploration phase

In the HHO model, the exploration phase simulates the random perching strategy of the Harris hawks. A random selection factor \({q}_{r}\) with value range [0,1] is set and, in each iteration, the probability value 0.5 serves as the boundary between two location selection strategies chosen according to \({q}_{r}\). When \({q}_{r}\ge 0.5\), the Harris hawk explores from a randomly selected location, while when \({q}_{r}<0.5\), the hawk selects its location based on the positions of the prey and the other family members. The mathematical expression is as follows:

$$X(t+1)=\left\{\begin{array}{ll}{X}_{rand}(t)-{r}_{d1}|{X}_{rand}(t)-2{r}_{d2}X(t)| & \quad {q}_{r}\ge 0.5\\ {(X}_{p}(t)-{X}_{m}(t))-{r}_{d3}(lb+{r}_{d4}(ub-lb)) & \quad {q}_{r} < 0.5 \end{array}\right.$$
(1)

where \({r}_{d1}\), \({r}_{d2}\), \({r}_{d3}\) and \({r}_{d4}\) are random numbers in [0,1] that are updated in each iteration and affect the position update of the hawk, and t denotes the t-th iteration. The vectors \(X(t)\), \(X(t+1)\), \({X}_{rand}(t)\), \({X}_{p}(t)\) and \({X}_{m}(t)\) are updated with the iteration counter t and represent, respectively, the current position of the selected hawk, its position in the next iteration, the position of a randomly selected hawk, the position of the prey and the average position of the Harris hawk population; ub and lb represent the upper and lower bounds of the variables. The average position vector \({X}_{m}(t)\) is obtained by summing the positions of all hawks in the t-th iteration and taking the average, as shown in Eq. (2), where the total number of hawks is \({N}_{sum}\):

$${X}_{m}(t)=\frac{1}{{N}_{sum}} \sum_{i=1}^{{N}_{sum}}{X}_{i}(t)$$
(2)
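As an illustration, the following minimal Python sketch (an assumption of this rewrite, not the authors' implementation) shows how the exploration update of Eqs. (1) and (2) can be coded; `X` is the population matrix, `X_p` the current prey (best) position, and `lb`, `ub` the bound vectors.

```python
import numpy as np

def exploration_step(X, i, X_p, lb, ub, rng):
    """Exploration-phase update of hawk i following Eqs. (1)-(2)."""
    N, dim = X.shape
    X_m = X.mean(axis=0)                      # Eq. (2): mean position of the population
    r1, r2, r3, r4, q = rng.random(5)
    if q >= 0.5:                              # explore from a randomly selected hawk
        X_rand = X[rng.integers(N)]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    else:                                     # locate based on prey and family members
        return (X_p - X_m) - r3 * (lb + r4 * (ub - lb))
```

For example, `exploration_step(X, i, X_p, lb, ub, np.random.default_rng(0))` returns the candidate position of hawk i for the next iteration.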

Transition phase

The switch performed in the transition phase is determined by the change of the escape energy of the prey. The initial energy of the prey, \({E}_{init}\), is updated during the iteration process within [-1,1]. The escape energy \({E}_{time}\) is defined as shown in Eq. (3); its absolute value is used to decide whether the algorithm executes the exploration or the exploitation phase, with the critical value 1. When \(|{E}_{time}|\ge 1\), the Harris hawks carry out the exploration phase; when \(|{E}_{time}|<1\), they enter the exploitation phase:

$${E}_{time}=2{E}_{init} \left (1-\frac{t}{T} \right)$$
(3)
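A minimal sketch of this energy schedule and phase switch, under the same assumptions as the previous snippet:

```python
import numpy as np

def escape_energy(t, T, rng):
    """Eq. (3): escape energy of the prey at iteration t of T iterations."""
    E_init = 2.0 * rng.random() - 1.0   # initial energy drawn from [-1, 1]
    return 2.0 * E_init * (1.0 - t / T)

# phase selection based on |E_time|:
# if abs(E_time) >= 1:  exploration phase
# else:                 exploitation phase
```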

Exploitation phase

After the Harris hawks detect the prey, the algorithm proceeds to the exploitation phase. A new random number g, representing the chance of the prey escaping before the surprise attack of the Harris hawks, is introduced with value range [0,1] and critical value 0.5. The attack strategy used in the exploitation phase is then determined by the combination of \(|{E}_{time}|\) and g.

Soft besiege in the exploitation phase

When g ≥ 0.5 and |\({E}_{time}\)|≥ 0.5, the prey still has enough energy to try to escape but is surrounded softly through a surprise attack. The corresponding formulas are as follows:

$$X\left(t+1\right)=\Delta X(t)-{E}_{time}|{B}_{r}{X}_{p}(t)-X(t)|$$
(4)
$$\Delta X(t)={X}_{p}(t)-X(t)$$
(5)
$${B}_{r}=2(1-{r}_{d5})$$
(6)

where \(\Delta X(t)\) is the difference between the position of the prey and the current position of the hawk in the t-th iteration, \({r}_{d5}\) is a random number in [0,1] and \({B}_{r}\) represents the random jump strength of the prey during its escape, with value range [0,2].
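A hedged sketch of the soft besiege update of Eqs. (4)-(6), reusing the names introduced above:

```python
import numpy as np

def soft_besiege(X_i, X_p, E_time, rng):
    """Soft besiege, Eqs. (4)-(6): the prey still has energy, hawks encircle softly."""
    B_r = 2.0 * (1.0 - rng.random())                     # Eq. (6): jump strength in [0, 2]
    delta_X = X_p - X_i                                  # Eq. (5)
    return delta_X - E_time * np.abs(B_r * X_p - X_i)    # Eq. (4)
```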

Hard besiege in the exploitation phase

When g ≥ 0.5 and |\({E}_{time}\)|< 0.5, the prey is exhausted after a large amount of energy consumption, is besieged more tightly, and is finally captured. The update formula is as follows:

$$X(t+1)={X}_{p}(t)-{E}_{time}|\Delta X(t)|$$
(7)

Soft besiege with progressive rapid dives

When g < 0.5 and |\({E}_{time}\)|≥ 0.5, the Harris hawks perform progressive rapid dives. After combining with the Lévy flight (LF) function, the update is realized as follows:

$$Y={X}_{p}(t)-{E}_{time}|{B}_{r}{X}_{p}(t)-X(t)|$$
(8)
$$Z=Y+S\times LF(dim)$$
(9)
$$LF\left(x\right)=0.01\times \frac{u\times \sigma }{{\left|v\right|}^{\frac{1}{\beta }}}$$
(10)
$$\sigma ={\left(\frac{{\Gamma}(1+\beta )\times {\text{sin}}(\frac{\pi \beta }{2})}{{\Gamma}(\frac{1+\beta }{2})\times \beta \times {2}^{(\frac{\beta -1}{2})}}\right)}^{\frac{1}{\beta }}$$
(11)

where dim is the dimension of the problem, S is a random vector of size 1 \(\times\) dim, LF is the Lévy flight function, u and v are random numbers in [0,1], and β is the Lévy exponent, fixed at 1.5 as the flight step parameter. The updated soft besiege strategy after introducing LF is:

$$X(t+1)=\left\{\begin{array}{ll}Y & \quad if \; F(Y) < F(X(t))\\ Z & \quad if \; F(Z) < F(X(t))\end{array}\right.$$
(12)
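For concreteness, the following Python sketch (our own illustrative reading of Eqs. (8)-(12), not the authors' code) implements the Lévy step and the dive-based soft besiege; `fitness` is the objective function to be minimized.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta, rng):
    """Lévy flight step of Eqs. (10)-(11)."""
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)                                   # random numbers in [0, 1] as in the text
    v = rng.random(dim)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def soft_besiege_dive(X_i, X_p, E_time, fitness, beta, rng):
    """Soft besiege with progressive rapid dives, Eqs. (8), (9) and (12)."""
    dim = X_i.size
    B_r = 2.0 * (1.0 - rng.random())
    Y = X_p - E_time * np.abs(B_r * X_p - X_i)            # Eq. (8)
    S = rng.random(dim)                                   # random 1 x dim vector
    Z = Y + S * levy_flight(dim, beta, rng)               # Eq. (9)
    if fitness(Y) < fitness(X_i):                         # Eq. (12)
        return Y
    if fitness(Z) < fitness(X_i):
        return Z
    return X_i                                            # otherwise keep the current position
```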

Hard besiege with progressive rapid dives

When g < 0.5 and |\({E}_{time}\)|< 0.5, still under a hard besiege, the hawks dive while reducing the distance between their average position and the prey, capturing a prey that cannot escape. The update formula is:

$$X(t+1)=\left\{\begin{array}{ll }Y & if \; F(Y)<F(X(t))\\ Z & if \; F(Z)<F(X(t))\end{array}\right.$$
(13)

Under these rules, Y and Z are adjusted accordingly. The corresponding formulas are:

$$Y={X}_{p}\left(t\right)-{E}_{time}|{B}_{r}{X}_{p}(t)-{X}_{m}(t)|$$
(14)
$$Z=Y+S\times LF(dim)$$
(15)
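Putting the four attack strategies together, a compact dispatch sketch could look as follows (a hypothetical arrangement assuming the helper functions defined in the previous snippets; `hard_besiege_dive` is an assumed analogue of `soft_besiege_dive` that uses the mean position \({X}_{m}(t)\) in Eq. (14)):

```python
import numpy as np

def exploitation_step(X, i, X_p, E_time, fitness, beta, rng):
    """Select the exploitation strategy from g and |E_time| (Eqs. (4)-(15))."""
    X_i = X[i]
    X_m = X.mean(axis=0)
    g = rng.random()                                    # escape-related random number
    if g >= 0.5 and abs(E_time) >= 0.5:                 # soft besiege, Eq. (4)
        return soft_besiege(X_i, X_p, E_time, rng)
    if g >= 0.5:                                        # hard besiege, Eq. (7)
        return X_p - E_time * np.abs(X_p - X_i)
    if abs(E_time) >= 0.5:                              # soft besiege with rapid dives
        return soft_besiege_dive(X_i, X_p, E_time, fitness, beta, rng)
    # hypothetical helper: same as soft_besiege_dive but built on X_m, Eqs. (13)-(15)
    return hard_besiege_dive(X_i, X_m, X_p, E_time, fitness, beta, rng)
```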

Pseudocode of HHO

See algorithm 1 for the pseudocode of HHO algorithm.

Algorithm 1
figure a

Pseudocode of HHO algorithm.

Integrated improved Harris hawks optimization algorithm

The basic HHO algorithm still has some shortcomings. First, the algorithm has poor search ability on some problems and easily falls into local optima. Second, the activity of Harris hawks in natural terrain needs to be considered when constructing the HHO model. The HHO algorithm therefore needs to be improved continuously to enhance its performance.

Intermittent energy regulator

The linear decreasing strategy of the prey energy in the original HHO algorithm cannot effectively describe the energy consumption process of real prey; the switch from the exploration to the exploitation phase is too monotonous, poorly balanced, and lacks a periodic execution pattern over the whole iteration process. To address these issues, this paper proposes an intermittent energy adjustment factor that executes the exploration and exploitation processes of the algorithm in stages, which further improves the local search ability of the algorithm. Accordingly, the original energy formula is modified as follows:

$${E}_{nl}=2{e}^{-\left(\pi \times \frac{t}{T}\right)}$$
(16)
$$InF=\mathrm{cos}\left(\frac{2k\pi t}{T}\right),\quad k=0,1,2,\ldots$$
(17)
$${E}_{time}=\left\{\begin{array}{ll}{E}_{init}\times {E}_{nl}\times InF & if \; InF\ge 0\\ 0 & if \; InF<0\end{array}\right.$$
(18)

An exponential function is used to design the nonlinear energy change \({E}_{nl}\), with \(\pi\) as the adjustment coefficient of the value range. Because the cosine function is periodic, a cosine disturbance is further added to the energy design so as to realize a periodic transformation of the energy; the intermittent parameter InF is designed as shown in Eq. (17). The principle is to multiply the nonlinear energy by this factor, further disturbing the escape energy, and to make an intermittent judgement: when InF ≥ 0, the energy decreases nonlinearly, while when InF < 0, the energy of the prey remains basically unchanged and is kept at 0, as shown in Eq. (18); after a period of time, the prey regains activity and a certain amount of energy, and the algorithm is executed according to this intermittent mechanism. In Eq. (17), k = 0,1,2,… indicates the number of decreasing cycles of the intermittent parameter; according to the experiments, the effect is best when k = 5. The energy before and after the improvement is shown in Fig. 1. After adding the intermittent parameter, the algorithm can quickly enter the exploitation phase in the early iterations. As the cycles continue, the energy regained by the prey keeps decreasing, the exploration stage is no longer executed, and the time spent on exploitation activities increases significantly, which greatly improves the local search ability of the algorithm.
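The following sketch (an assumption of this rewrite, with k fixed at the experimentally suggested value of 5) illustrates how the intermittent escape energy of Eqs. (16)-(18) could be computed:

```python
import numpy as np

def intermittent_energy(t, T, E_init, k=5):
    """Intermittent escape energy regulator, Eqs. (16)-(18)."""
    E_nl = 2.0 * np.exp(-np.pi * t / T)          # Eq. (16): nonlinear energy decay
    InF = np.cos(2.0 * k * np.pi * t / T)         # Eq. (17): cosine disturbance
    if InF >= 0:                                  # Eq. (18): active (decaying) period
        return E_init * E_nl * InF
    return 0.0                                    # resting period: prey energy stays at 0
```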

Figure 1
figure 1

Dynamic comparison diagram of energy change.

Attenuation vector

The original Harris hawks algorithm introduces the Lévy flight function into the progressive besiege forms of the exploitation phase, where S, an arbitrary 1 \(\times\) dim vector, randomly affects the position update of the hawks under progressive besiege. Since the prey has low energy when entering the exploitation phase, the impact of the Lévy flight should also be reduced in the middle and late phases of the iterations so as to match the activities performed in those phases. By introducing the concept of attenuation, the random vector S is improved as designed in Eq. (19). The formula uses a univariate quadratic function and selects a curve whose value gradually decreases as the number of iterations increases, finally realizing the attenuation effect, which further improves the local search ability of the algorithm. The attenuation curve produced by this design is shown in Fig. 2:

Figure 2
figure 2

Variation diagram of attenuation vector value.

$$S={\left(1-\frac{{\text{t}}}{{\text{T}}}\right)}^{2}$$
(19)

Examining the change of the attenuation vector shows that, compared with the original random vector, the improved vector effectively shortens the average distance between the Harris hawks and the prey; generating a new individual position vector with this strategy helps the algorithm jump out of local optima and overcome premature convergence.
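A one-line sketch of Eq. (19), again as an illustrative assumption rather than the authors' code:

```python
def attenuation_vector(t, T):
    """Eq. (19): attenuation factor replacing the random vector S (broadcast over all dimensions)."""
    return (1.0 - t / T) ** 2
```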

Step size updating combined with Cardano formula function

The traditional HHO algorithm uses a fixed step parameter of 1.5. If the step size is too large while the Harris hawks search for prey, the optimal solution may be missed. Therefore, in order to let the improved algorithm adjust the search distance effectively, improve the local exploitation ability of the HHO algorithm and increase its accuracy, the step size is improved so that the large step size at the initial phase of the search gradually decreases to a small step size in the later phase, achieving a smooth transition. Cardano's formula is a well-known closed-form solution for the roots of a univariate cubic equation. For the cubic equation in standard form with real coefficients shown in Eq. (20), one of the roots is given by Eq. (21), where p and q are known coefficients. When p = q, the resulting root can be regarded as a new univariate cubic function; accordingly, this paper updates the flight step size using this root obtained from Cardano's formula, giving Eq. (22). To solve the problem that the individuals of the population become trapped in a small range at the end of the iterations and stop improving, a critical step size value is set; according to the experiments, 1.2 performs best. When the step size is smaller than 1.2, a random disturbance is added to the original variance σ of the Lévy flight, as shown in Eq. (23), where \({\sigma }_{c}\) is the variance after disturbance:

$${x}^{3}+px+q=0$$
(20)
$${x}_{1}=\sqrt[3]{-\frac{q}{2}+\sqrt{\frac{{q}^{2}}{4}+\frac{{p}^{3}}{27}}}+\sqrt[3]{-\frac{q}{2}-\sqrt{\frac{{q}^{2}}{4}+\frac{{p}^{3}}{27}}}$$
(21)
$$\beta =1.5\times \left(1-\sqrt[3]{-\frac{t}{2T}+\sqrt{\frac{1}{4}\times {\left(\frac{t}{T}\right)}^{2}+\frac{1}{27}\times {\left(\frac{t}{T}\right)}^{3}}}\right)$$
(22)
$${\sigma }_{c}=\left\{\begin{array}{ll}\sigma & if\; \beta >1.2\\ \sigma \times rand() & if \; \beta \le 1.2\end{array}\right.$$
(23)
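A minimal sketch of the Cardano-based step-size schedule of Eqs. (22)-(23), with the 1.2 threshold taken from the text:

```python
import numpy as np

def cardano_step(t, T):
    """Eq. (22): flight step size derived from the Cardano root with p = q = t/T."""
    r = t / T
    inner = -r / 2.0 + np.sqrt(r ** 2 / 4.0 + r ** 3 / 27.0)
    return 1.5 * (1.0 - np.cbrt(inner))

def perturbed_sigma(sigma, beta, rng):
    """Eq. (23): perturb the Lévy variance when the step size falls to 1.2 or below."""
    return sigma if beta > 1.2 else sigma * rng.random()
```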

Comprehensive review of IIHHO

The IIHHO algorithm is mainly divided into two parts. One is to simulate the intermittent escape energy of the prey, updating Eq. (18) as in Algorithm 2. The other is to improve the progressive dive mode in the exploitation phase. Compared with other improved HHO algorithms, the IIHHO algorithm adjusts the position update formula of the progressive dives and the parameters of the Lévy flight, changes the random vector into an attenuation vector, modifies the original fixed step size by combining it with the Cardano-based function, and adds a random disturbance to the variance. These refinements help the algorithm jump out of local optima to a certain extent and greatly improve its search ability. Because IIHHO modifies the random vector and the fixed step size, the values of the vector and the step size in each run are more regular and diversified, which also helps the algorithm obtain richer results when searching for the individual optimum and improves the diversity of the algorithm. See Algorithm 2 for the pseudocode of the proposed IIHHO algorithm.

Algorithm 2
figure b

Pseudocode of IIHHO algorithm.

To analyze the algorithm more precisely, the time complexity of the IIHHO algorithm is further examined. In the classical HHO algorithm, initializing the swarm of hawks requires O(\({N}_{sum}\) × dim) time, where T represents the maximum number of iterations defined in the "Harris hawks optimization algorithm" section and dim is the dimension of the specific problem; updating the positions and fitness values over all iterations requires O(T × \({N}_{sum}\) × dim) time, so the computational complexity of the HHO algorithm is O(T × \({N}_{sum}\) × dim). Since IIHHO does not add any extra loop, its energy update mechanism, modified to the intermittent energy regulator, requires O(T × \({N}_{sum}\)) time, and the attenuation vector and the Cardano-based step update each require O(T × \({N}_{sum}\) × dim) time, so there is no difference in computational cost. In summary, the total computational time of IIHHO is O(T × \({N}_{sum}\) × dim) for T iterations.
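To tie the pieces together, the following compact Python skeleton (our own sketch based on the descriptions above, built from the hypothetical helpers introduced in the earlier snippets, not the authors' MATLAB implementation) outlines one possible main loop of IIHHO:

```python
import numpy as np

def iihho(fitness, lb, ub, dim, N_sum=100, T=5000, k=5, seed=0):
    """Sketch of the IIHHO main loop combining the three improvements."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N_sum, dim)) * (ub - lb)        # random initialization
    fit = np.array([fitness(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for t in range(T):
        E_init = 2.0 * rng.random() - 1.0
        E_time = intermittent_energy(t, T, E_init, k)    # intermittent energy regulator
        beta = cardano_step(t, T)                        # Cardano-based Lévy step size (Eq. (22))
        # in IIHHO, the random vector S inside the dive updates is assumed to be
        # replaced by attenuation_vector(t, T) (Eq. (19)); the dive helpers sketched
        # earlier would take it as an extra argument
        for i in range(N_sum):
            if abs(E_time) >= 1:                         # exploration phase
                X_new = exploration_step(X, i, best, lb, ub, rng)
            else:                                        # exploitation phase
                X_new = exploitation_step(X, i, best, E_time, fitness, beta, rng)
            X_new = np.clip(X_new, lb, ub)               # assumed boundary handling
            f_new = fitness(X_new)
            if f_new < fit[i]:                           # assumed greedy replacement
                X[i], fit[i] = X_new, f_new
        best = X[np.argmin(fit)].copy()
    return best, fit.min()
```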

Experiments

In this section, an experimental environment is designed to ensure the fairness of the experiments. The computer used for the simulations is configured with an Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz, 8 GB RAM and a 64-bit operating system, and all algorithms are implemented in MATLAB 2016a. The basic parameters are set as follows: population size \({N}_{sum}=100\), 30 independent runs, and dimension dim = 50.

Ablation experiment

In the ablation experiment, 28 benchmark functions are used to verify the effectiveness of the algorithm. These functions are selected from the CEC 2013 test suite33, where F1–F5 are unimodal functions, F6–F20 are basic multimodal functions, and F21–F28 are composition functions, and the number of evaluations is set to the dimension multiplied by 10,000. The improvements used in the IIHHO algorithm were decomposed and recombined to obtain a total of six variant algorithms, as shown in Table 1. Since the original HHO algorithm does not add extra parameters, none of the variant algorithms require parameter initialization. IIHHO is compared with HHO and the six variant algorithms under the same constraints. In the experiment, the full CEC 2013 function set is used as the test set to verify the algorithm performance; since the number of evaluations is the dimension multiplied by 10,000, the maximum number of function evaluations is 500,000 when the problem dimension is 50. Each algorithm is run 30 times on each function, and the best value, average value, worst value and standard deviation (Std) are taken as evaluation indexes. To analyze the data, the algorithms are ranked by their average values; if the average values are equal, the standard deviation is considered in order to obtain a clear ordering of the algorithms. The experimental results are shown in Table 2.

Table 1 Settings for 6 variant algorithms.
Table 2 Optimization results of each variant algorithm in 50 dimensions.

In the experiment, the change of the average value of each algorithm over 5000 iterations is plotted to show the experimental effect of IIHHO more concretely; the resulting iteration curves of the eight algorithms are shown in Fig. 3. As shown in Table 2, compared with the standard HHO algorithm, the proposed IIHHO algorithm is also superior to the other six IIHHO variant algorithms in the optimal value, worst value, average value and standard deviation of functions F1, F5, F11, F14, F17, F19 and F22. Similarly, on functions F2, F3, F9, F12, F15 and F21, the numerical results of IIHHO are also clearly dominant. The ablation experiments show that, among the three improvement strategies, the energy and step-size improvements can each have a certain effect on their own; combined with the ranking of the results in Table 2, the variants IIHHO-I and IIHHO-III are significantly better than the HHO algorithm. Although the variant IIHHO-II, which uses the attenuation vector alone, performs poorly, the overall analysis shows that the optimization results of IIHHO in the 50-dimensional case are similar to those of IIHHO-IV, and IIHHO is much better than the original HHO algorithm on the unimodal and basic multimodal functions, which proves that adding the attenuation vector helps reduce the influence of the Lévy flight function and jump out of local optima. Moreover, because IIHHO adds the step-size strategy based on Cardano's formula, it is often better than IIHHO-IV when searching for the optimal value, which shows that the algorithm achieves its best performance when each improvement strategy plays its corresponding role.

Figure 3
figure 3

Iterative convergence graph of each improved algorithm.

To further analyze whether there are differences between the proposed improved algorithm and the other algorithms, the Wilcoxon test is used to verify the improvement effect of the IIHHO algorithm. In this section, the Wilcoxon test compares each variant algorithm with the original HHO algorithm, and the final result is given as the Context value in Table 3. Taking 0.05 as the threshold, a Context value below 0.05 indicates a significant difference between the two algorithms in the optimal values obtained. First, after adding the intermittent energy regulator, the final IIHHO algorithm differs more strongly from the original HHO algorithm on functions F1–F5, as shown in Table 3, and combined with the experimental data in Table 2, all of its values are excellent. Second, the two improvement strategies of the attenuation vector and the Cardano formula achieve a similar effect, which proves the advantage of the improvement strategies in enhancing the ability of the algorithm. On the unimodal and basic multimodal functions, IIHHO shows larger differences, and according to the average-value index in Table 2, IIHHO can jump out of local optima and obtain a smaller average value.
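As an illustration of how such a per-function comparison can be computed (the paper does not specify its implementation; the rank-sum variant shown here is an assumption), the results of 30 independent runs of two algorithms on one function can be tested as follows:

```python
from scipy.stats import ranksums

def wilcoxon_context(results_variant, results_hho, alpha=0.05):
    """Rank-sum test on two sets of 30 independent run results for one function."""
    stat, p_value = ranksums(results_variant, results_hho)
    return p_value, p_value < alpha   # p-value and whether the difference is significant
```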

Table 3 Wilcoxon test results of each variant algorithm.

Performance comparison between IIHHO and improved algorithms

After determining the improvement strategies adopted, this section compares IIHHO with seven selected improved algorithms, LMHHO34, ISSA35, DPSO36, LHHO37, HHO_JOS38, CDO39 and SSO40; the corresponding parameter settings are shown in Table 4. All algorithms are again tested on CEC 2013, and the maximum number of function evaluations is 500,000. The experimental results are shown in Table 5, and the results of the Wilcoxon test are listed in Table 6.

Table 4 Parameter settings for each improved algorithm.
Table 5 Optimization results of each improved algorithm in 50 dimensions.
Table 6 Wilcoxon test results of each improved algorithm.

The experimental results of IIHHO are shown in Fig. 4. From the test results of the unimodal functions F1 to F5 in Table 5, the optimal value, average value, worst value and standard deviation of IIHHO on functions F1, F3, F4 and F5 are the best; although the optimal value obtained on function F2 is slightly inferior to that of the HHO_JOS algorithm, it is still better than those of the other algorithms. As shown in Fig. 4, the algorithm has the best iterative effect on all unimodal functions and can jump out of local optima to find a more accurate optimal solution. In general, the IIHHO algorithm is well suited to solving unimodal functions. Next, the improvement of the algorithm on the basic multimodal functions is analyzed from Table 5. From F6 to F20, the results of IIHHO on the relevant indicators of functions F6, F10, F11, F14 and F19 are the best. Although the values for functions F8, F9, F12, F13 and F15 are not optimal, the Wilcoxon test shows that the differences of the optimal solutions obtained by the algorithm are greater than for most algorithms and that its average values are better than those of most algorithms, proving that the results obtained by the algorithm are still better than those of most algorithms and that it shows superior performance on the basic multimodal functions. Finally, the improvement effect of the algorithm on F21 to F28 is analyzed. On the whole, the processing advantage of the algorithm is not large, but the proposed algorithm still obtains a more accurate optimal value, average value, worst value and standard deviation on function F22, and the average values on functions F21, F23 and F25 are still better than those of most algorithms. Combined with the Context value, the average ranking of the algorithm in 50 dimensions is 2.6429. In general, the IIHHO algorithm is competitive in the comparison with the seven improved algorithms, which also verifies that the IIHHO algorithm has certain research value.

Figure 4
figure 4

Iterative convergence graph of each improved algorithm.

Performance comparison between IIHHO and SOTA algorithms

The experimental results compared with the improved algorithms verify the performance of the algorithm. In this section, IIHHO is compared with three currently selected state-of-the-art (SOTA, i.e. top-ranking) algorithms, LSHADE_SPACMA41, LSHADE_cnEpSin42 and EA4eig43; the corresponding parameter settings are shown in Table 7. All algorithms are tested on CEC 202244. Unlike CEC 2013, CEC 2022 has only 12 single-objective test functions with boundary constraints, consisting of a unimodal function (F1), multimodal functions (F2–F5), hybrid functions (F6–F8) and composition functions (F9–F12), and only two dimension settings, 10 and 20, are provided. This paper selects 20 dimensions as the problem scale, and the Wilcoxon test is also performed, represented by the value of the Context parameter. The experimental results are shown in Tables 8 and 9.

Table 7 Parameter setting for each SOTA algorithm.
Table 8 Optimization results of each SOTA algorithm in 20 dimensions.
Table 9 Wilcoxon test results of each SOTA algorithm.

Comparing the results in Table 8 with those of the SOTA algorithms shows that the values obtained by the proposed IIHHO algorithm have no obvious advantage; its average ranking is 3.5833, so it is not clearly competitive with the other three SOTA algorithms. Nevertheless, the optimal values obtained by the proposed algorithm on functions F10 and F11 are better than those of the three SOTA algorithms, and the optimal value on function F2 is better than those of LSHADE_SPACMA and LSHADE_cnEpSin, which shows that the algorithm retains a certain advantage in jumping out of local optima to obtain a more accurate optimal value.

Performance of IIHHO on engineering applications

In this section, two different engineering benchmark problems, the welded beam design problem and the pressure vessel design problem, are used to evaluate the performance of IIHHO on practical problems. The IIHHO algorithm is compared with seven algorithms, LMHHO, ISSA, DPSO, LHHO, HHO_JOS, CDO and SSO, which are applied to the same problems, and the comparison results are obtained.

Welded beam engineering design problem

The well-known welded beam problem is a typical engineering design problem proposed by Ashutosh and Vikram45; the corresponding example is shown in Fig. 5. The purpose is to find the best design that minimizes the manufacturing cost of the welded beam under multiple constraints. The parameters needed in the design of the welded beam are the beam thickness (b), weld length (L), weld thickness (h) and beam height (t). The eight algorithms are applied to the welded beam problem for comparison. According to the results in Table 10, the IIHHO algorithm obtains a minimum production cost of 1.72493, which is second only to DPSO.

Figure 5
figure 5

Example of welding beam design47.

Table 10 Comparison results of welding beam design problems.

Pressure vessel engineering design problem

The pressure vessel design problem is an engineering design problem that minimizes the production cost of a pressure vessel; the corresponding example is shown in Fig. 6. L is the length of the cylindrical section, 2R is the inner diameter of the cylinder, and Th and Ts represent the thickness of the head and the thickness of the cylindrical shell, respectively. These four indicators are the four optimization variables of the pressure vessel problem. The IIHHO algorithm is also applied to the pressure vessel problem, and the experimental results are shown in Table 11. The IIHHO algorithm has an obvious optimization effect on the variables and achieves an optimal cost of 5887.74947.
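For reference, a hedged sketch of the pressure vessel cost function and constraints as commonly formulated in the metaheuristics literature (constants and constraint forms are the usual ones from that literature, assumed here rather than taken from this paper) is given below; constraint handling, such as a penalty added to the cost when any g(x) > 0, is left to the optimizer.

```python
import numpy as np

def pressure_vessel_cost(x):
    """Common pressure vessel cost; x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def pressure_vessel_constraints(x):
    """Inequality constraints g(x) <= 0 of the usual formulation."""
    Ts, Th, R, L = x
    return np.array([
        -Ts + 0.0193 * R,                                              # shell thickness limit
        -Th + 0.00954 * R,                                             # head thickness limit
        -np.pi * R ** 2 * L - (4.0 / 3.0) * np.pi * R ** 3 + 1296000.0,  # minimum volume
        L - 240.0,                                                     # length limit
    ])
```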

Figure 6
figure 6

Example of pressure vessel design46.

Table 11 Comparison results of pressure vessel design problems.

Conclusion

The original HHO algorithm is an algorithm with certain research value: because of its strong global search ability and few control parameters, it has been widely used to solve practical problems in recent years. However, the algorithm also has the disadvantages of poor search performance on low-dimensional problems and a tendency to fall into local optima in the later phase of the search. Therefore, this paper improves the HHO algorithm with an intermittent energy adjustment factor, an attenuation vector and a flight step size adjusted by the Cardano-based function, so as to improve its local search ability and computational accuracy. On the CEC 2013 function test set, the improved IIHHO has an obvious optimization effect on the 50-dimensional unimodal and basic multimodal functions, with faster convergence and better results. Compared with three SOTA algorithms on CEC 2022, IIHHO also shows a certain optimization effect, which indicates that the improved IIHHO algorithm has good stability and robustness. Finally, the algorithm is applied to two classical engineering problems to test its performance. Although the improvement effect of the IIHHO algorithm is stable, some problems remain: compared with the SOTA algorithms, there is no advantage in the average value and other related indicators; the results on the composition functions of CEC 2013 are poor; and the intermittent time of energy recovery is too long. Therefore, some adjustments can be made in future research:

  1.

    The cyclic mechanism of the energy can be modified. First, the rest time of the prey can be shortened appropriately so that exploration activities in the early cycles can still improve the global search ability. Second, the level of energy recovered can be considered more carefully in combination with the actual situation.

  2.

    The variance update formula of the Lévy flight function can be further improved. First, the influence of the random value should be appropriately reduced; second, the classical variance structure could be decomposed. The design of the parameters u and v also has certain research value.

  3.

    The application field of the improved algorithm is still limited to classical engineering problems, so it is necessary to constantly explore new fields to meet the needs of real life and production.