Introduction

The field of optimization research has experienced significant growth in recent decades, particularly with the widespread utilization of nature-inspired optimization algorithms (NIAs). These algorithms, derived from natural phenomena, are now employed across a multitude of research domains, including engineering design, management science, medical technology, social science, and others. While genetic algorithms (GA)1, differential evolution (DE)2, and particle swarm optimization (PSO)3 remain influential, the landscape has expanded with the introduction of numerous new algorithms inspired by different species and natural processes. This continuous innovation and the development of hybrid techniques underscore the dynamic nature of NIAs, showcasing their relevance and applicability in diverse problem-solving scenarios.

The field is broadly categorized into two main classes: evolutionary algorithms (EAs) and swarm intelligence algorithms (SIAs). EAs are rooted in fundamental natural processes, specifically drawing inspiration from Darwinian theory and natural selection. Prominent representatives include GA1, the memetic algorithm (MA)4, scatter search (SS)5, stochastic fractal search (SFS)6 and the fire-hawk algorithm (FHA)7, among others.

On the other hand, SIAs are built upon the collective behavior observed in various species. This category encompasses algorithms such as the red fox optimization algorithm (RFO)8, mud ring algorithm (MRA)9, sea horse optimizer (SHO)10, escaping bird search (EBS)11, golden eagle optimizer (GEO)12, clouded leopard optimization (CLO)13, hermit crab shell exchange (HCSE)14, honey badger algorithm (HBA)15, naked mole rat algorithm16, cuckoo search algorithm (CS)17, whale optimization algorithm (WOA)18, grey wolf optimization (GWO)19,20, equilibrium optimizer (EO)21, moth flame optimization (MFO)22 and others. These algorithms leverage the swarming behaviour of different species as the basis for their optimization strategies.

Most practical engineering design problems are highly challenging, and differential evolution (DE) has been applauded as an efficient problem solver by the evolutionary computing community due to its simple structure, few tuning parameters and versatile applicability23. A major reason for its popularity is its excellent performance and ranking in the IEEE Congress on Evolutionary Computation (CEC) competitions across various complex research scenarios and benchmark test suites (such as multi-modal, composite, single-objective, dynamic, constrained, multi-objective, etc.). Numerous efforts have been made to improve the working efficiency, scalability, speed, robustness and accuracy of DE. Unlike traditional evolutionary programming (EP) and evolutionary strategies (ES), DE generates new candidates from the differences between randomly chosen members of the current population; no probability distribution (Gaussian in EP and ES, Cauchy in fast EPs) is required to generate new offspring. Numerous recent modifications of DE include self-adaptive DE (SaDE)24, adaptive differential evolution with optional external archive (JADE)25, success-history based adaptive DE (SHADE)26, SHADE with population size reduction hybridized with semi-parameter adaptation of CMA-ES (LSHADE-SPACMA)27, hybrid ES-DE28 and others.

In this paper, the relatively new concept of iterative division is used to improve the exploration (expl) and exploitation (expt) operations and to overcome local optima stagnation29. In addition, four new modifications are added to the conventional DE to improve its overall performance. First, an adaptive proportional population size reduction mechanism, inspired by GA30, is followed. Second, a decreasing Weibull-distributed31 crossover rate CR is introduced so that the algorithm performs extensive expl during the initial stages and shifts to expt in the final stages. The next modification follows a Gaussian sampling mechanism, hybridizing the basic search equations to mitigate premature convergence and reinforce complementary searching capabilities32. Finally, instead of simple crossover and mutation operations, new hybridizations based on grey wolf optimization (GWO)20 and cuckoo search (CS)29 are incorporated to improve the overall performance of DE. The proposed algorithm is named the multi-hybrid differential evolution (MHDE) algorithm. The resulting framework is integrated with the basic DE and tested on the IEEE CEC 200533, CEC 201434 and CEC 201735 test suites, four engineering design problems and three frame design problems. The results indicate that adding hybridization and self-adaptivity helps in providing reliable results.

The rest of the article is organized as follows. "Frame design problems" section provides details about the basics of frame optimization problems. "The proposed algorithm" section describes the proposed approach, its motivation and implementation. In "Numerical examples" section, numerical results on the CEC 2005, CEC 2014 and CEC 2017 benchmark problems are presented, whereas in "Real-world applications I: engineering design problems" section, four engineering design problems, namely pressure vessel design, rolling element bearing design, tension/compression spring design and cantilever beam design, are discussed. In "Real-world applications II: frame design problems" section, the design of 1-bay 8-story, 3-bay 15-story and 3-bay 24-story frames is presented. Finally, in "Conclusion" section, conclusions and future recommendations are given.

Frame design problems

Frame design is one of the most significant structural engineering design problems and offers diversified design flexibility36. The generalized formulation for optimal frame design is given by

$$\begin{aligned} Find \hspace{10pt} X= [x_1, x_2,\ldots ,x_{ng}] \end{aligned}$$
(1)
$$\begin{aligned} to \hspace{5pt} minimize \hspace{5pt} f(X)=g(X)\times g_{penalty}(X) \end{aligned}$$
(2)

For W-sections, X is the design vector of cross-sectional areas; f(X) is the merit function; ng is the number of design variables; g(X) is the objective function, defined as the volume or weight of the frame structure; and \(g_{penalty}(X)\) is a penalty function resulting from constraint violations on the structural response.

The frame structure weight in the form of a function g(X) is given by

$$\begin{aligned} g(X)=\sum _{i=1}^{nm} \gamma _i. X_i. L_i \end{aligned}$$
(3)

where nm is the total number of members making up the frame; \(L_i\) is the length of the \(i_{th}\) member within the frame; and \(\gamma _i\) is the density of the material of the \(i_{th}\) member.

The penalty function, \(g_{penalty}(X)\) is given by28:

$$\begin{aligned} g_{penalty}(X)=(1+ \epsilon _1. v)^{\epsilon _2}, \hspace{10pt} v=\sum _{i=1}^n max [0, o_i] \end{aligned}$$
(4)

where n is the number of constraints of the design problem, \(\epsilon _1\) and \(\epsilon _2\) are constants based on expl and expt, and \(o_i\) is the displacement or stress constraint. If \(o_i\) has a positive value, the corresponding value is added to the constraint functions. These constraints consist of

Element stresses

$$\begin{aligned} o_i^{\sigma }=1-\Big |\frac{\sigma _i}{\sigma _i^a}\Big |\le 0, \hspace{10pt} i=1,2,\ldots ,nm \end{aligned}$$
(5)

Maximum lateral displacement

$$\begin{aligned} v^{\Delta }=R-\frac{\Delta T}{H} \le 0 \end{aligned}$$
(6)

Inter-story displacements

$$\begin{aligned} v_j^d=R_I-\frac{d_j}{h_j}\le 0, \hspace{10pt} j=1,2,\ldots ,ns \end{aligned}$$
(7)

where \(\sigma _i\) and \(\sigma _i^a\) are the stress and allowable stress in the ith member, respectively; \(\Delta T\) is the maximum lateral displacement; ns is the total number of stories; R and \(d_j\) are the maximum drift index and inter-story drift, respectively; H and \(h_j\) are the height of the frame structure and the story height of the jth floor; \(R_I\) represents the inter-story drift index allowed by AISC 200128 and is set to 1/300. The constraints as per the LRFD interaction formulas of AISC 2001 are given by

$$\begin{aligned} o_i^I= {\left\{ \begin{array}{ll} 1-\frac{P_u}{2\phi _c P_n}-\left( \frac{M_{ux}}{\phi _bM_{nx}}+\frac{M_{uy}}{\phi _bM_{ny}}\right) \le 0; \hspace{1pt} For \; \frac{P_u}{\phi _cP_n} < 0.2 \\ 1-\frac{P_u}{\phi _c P_n}-\frac{8}{9}\left( \frac{M_{ux}}{\phi _bM_{nx}}+\frac{M_{uy}}{\phi _bM_{ny}}\right) \le 0; \hspace{1pt} For \; \frac{P_u}{\phi _cP_n} \ge 0.2 \end{array}\right. } \end{aligned}$$
(8)

where \(P_u\) and \(P_n\) are the required and nominal axial (tension or compression) strengths, respectively; \(\phi _t =0.9\) and \(\phi _c=0.85\) are the resistance factors for tension and compression, respectively; \(\phi _b=0.90\) is the flexural resistance reduction factor; \(M_{ux}\) and \(M_{uy}\), and \(M_{nx}\) and \(M_{ny}\), are the required flexural strengths and nominal flexural strengths in the x and y directions, respectively. For a two-dimensional structure, \(M_{ny}=0\).

In order to find the Euler and compression stresses, the effective length factor K is required. For bracing and beam members, \(K=1\), and for column members it is calculated using SAP2000. For a generalized case, approximate effective length factors, accurate to within \(-1.0\%\) and \(+2.0\%\), are based on Dumonteil37 and are given by

$$\begin{aligned} K= {\left\{ \begin{array}{ll} \sqrt{\frac{1.6G_AG_B+4(G_A+G_B)+7.5}{G_A+G_B+7.5}}; \hspace{5pt} For \hspace{2pt} unbraced \hspace{2pt} members \\ \frac{3G_AG_B+1.4(G_A+G_B)+0.64}{3G_AG_B+2(G_A+G_B)+1.28}; \hspace{10pt} For \hspace{2pt} braced \hspace{2pt} members \end{array}\right. } \end{aligned}$$
(9)

where \(G_A\) and \(G_B\) are the stiffness ratios of the columns to the girders at the two end joints A and B of the column section, respectively.
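
To make the formulation concrete, the following minimal sketch evaluates the penalized merit function of Eqs. (2)-(4) for a candidate design; the function name, array layout and the values of \(\epsilon _1\) and \(\epsilon _2\) are illustrative assumptions, since the paper only states that they are constants based on expl and expt.

```python
import numpy as np

def frame_merit(areas, lengths, densities, violations, eps1=1.0, eps2=2.0):
    """Hedged sketch of the penalized frame weight of Eqs. (2)-(4).
    'violations' holds the constraint values o_i from Eqs. (5)-(8);
    eps1 and eps2 are assumed values."""
    weight = float(np.sum(densities * areas * lengths))     # g(X), Eq. (3)
    v = float(np.sum(np.maximum(0.0, violations)))          # accumulated violation, Eq. (4)
    penalty = (1.0 + eps1 * v) ** eps2                      # g_penalty(X), Eq. (4)
    return weight * penalty                                 # f(X) = g(X) * g_penalty(X), Eq. (2)
```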

The proposed algorithm

Although many new DE variants have been proposed, DE still suffers from various problems, including poor expl, an unbalanced expl versus expt operation and premature convergence23. Therefore, it becomes necessary to adapt, hybridize and add prospective modifications to the basic algorithm to overcome its inherent drawbacks and limitations. In the present work, the structure of DE is changed, and new adaptations are added in the crossover and mutation operations of the algorithm. Here, GWO based equations20 are added in the mutation phase to improve the expl operation, whereas the crossover operation is enhanced using CS based29 hybridization to improve the expt operation. Apart from these modifications, the concept of iterative division is added so that considerable expl and exhaustive expt are performed towards the start, whereas substantial expt and in-depth expl are performed within certain sections towards the end29. The proposed MHDE is presented in the following steps:

Initialization

As in any other algorithm, the first step of MHDE is the initialization phase, where new solutions are selected randomly within the search space. The general equation is given by

$$\begin{aligned} x_{i,j} = n_{min,j} + U(0,1) \times (n_{max,j}-n_{min,j}) \end{aligned}$$
(10)

where \(n_{min,j}\) and \(n_{max,j}\) are the lower and upper bounds of the jth variable, \(x_{i,j}\) is the jth component of the ith solution of a D-dimensional problem, and U(0, 1) is a uniform random number distributed over [0, 1]. A minimal sketch of this step follows.
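
The following sketch, assuming a NumPy-based implementation with illustrative function and variable names, shows how Eq. (10) can be realized for a whole population.

```python
import numpy as np

def initialize_population(pop_size, dim, lower, upper, rng=None):
    """Random initialization of Eq. (10): x_ij = n_min_j + U(0,1)*(n_max_j - n_min_j)."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random((pop_size, dim)) * (upper - lower)

# Example: 50 solutions of a 30-dimensional problem bounded in [-100, 100]
pop = initialize_population(50, 30, [-100.0] * 30, [100.0] * 30)
```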

Mutation operation

At each generation, DE employs a mutation operation, which is controlled by a scaling factor. The mutant (donor) vector is generated using different mutation strategies. The most popular strategies are given by23

$$\begin{aligned} o_i^t= x_{i}^t+ F.(x_{i}^t-x_{j}^t); \hspace{3pt} ``DE/rand/1" \end{aligned}$$
(11)
$$\begin{aligned} o_i^t= x_{best}^t+ F.(x_{i}^t-x_{j}^t); \hspace{3pt} ``DE/best/1" \end{aligned}$$
(12)

where \(x_i^t\) and \(x_j^t\) are random solutions corresponding to the ith and jth members with dimension D, \(o_i^t\) is the donor (mutant) vector corresponding to the target solution, F is the scaling factor, \(x_{best}\) is the best solution and t is the current iteration. The equation derived from DE/rand/1 is more exploratory in nature, with increased diversity among the search agents, whereas DE/best/1 has intensification properties that promote an exploitative search around the best solution. In the proposed MHDE, both equations are used in an adaptive manner, as explained below.

For the first half of the iterations, the DE/rand/1 equation is used along with GWO based equations to perform the global search operation. The scaling factor uses a Lévy distribution, as discussed in subsequent subsections. Here, GWO based equations are used because of the better expl capabilities of GWO20. The new solutions generated using the modified equations are thus given by

$$\begin{aligned} x_1=x_i-W_1\left( H_1.o_i^t-x_{i}^t\right) \end{aligned}$$
(13)
$$\begin{aligned} x_2=x_i-W_2\left( H_2.o_i^t-x_{i}^t\right) \end{aligned}$$
(14)
$$\begin{aligned} x_3=x_i-W_3\left( H_3.o_i^t-x_{i}^t\right) \end{aligned}$$
(15)
$$\begin{aligned} o_i^t=\frac{x_1+x_2+x_3}{3} \end{aligned}$$
(16)

where \(W_1, W_2, W_3\) and \(H_1,H_2,H_3\) are generated randomly from \(W=2a.e_1-a\) and \(H=2.e_2\), a decreases linearly from 2 to 0 over the iterations, and \(e_1\) and \(e_2\) are random numbers in [0, 1]. The whole search process thus consists of the DE/rand/1 equation and the new GWO inspired equations, which helps the algorithm enhance its expl properties.
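
One possible reading of this first-half mutation, combining Eq. (11) with Eqs. (13)-(16), is sketched below; the exact ordering of the operators is not fully specified in the text, so the function name and structure are assumptions.

```python
import numpy as np

def gwo_hybrid_mutation(pop, i, F, a, rng):
    """Assumed first-half mutation: a DE/rand/1 donor (Eq. 11) refined by the
    GWO-inspired updates of Eqs. (13)-(16); 'a' decreases from 2 to 0."""
    n, dim = pop.shape
    r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    donor = pop[i] + F * (pop[r1] - pop[r2])          # DE/rand/1 style donor, Eq. (11)

    candidates = []
    for _ in range(3):                                # Eqs. (13)-(15)
        W = 2.0 * a * rng.random(dim) - a             # W = 2a.e1 - a
        H = 2.0 * rng.random(dim)                     # H = 2.e2
        candidates.append(pop[i] - W * (H * donor - pop[i]))
    return np.mean(candidates, axis=0)                # average of x1, x2, x3, Eq. (16)
```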

For the second half of the iterations, the DE/best/1 equation along with Gaussian random sampling is used32. The general equation for this phase is the same as the DE/best/1 equation, with the additional advantage of Gaussian mutation to deal with the local best solution. Here, m new solutions are spawned and compared against \(x_{best}\); if a new solution is better than \(x_{best}\), \(x_{best}\) is replaced by it. This strategy is invoked only if the local best solution does not improve within an iteration. The general equation for this strategy is given by:

$$\begin{aligned} mutx_{t,d}=x_d\times (1-G(0,1)) \end{aligned}$$
(17)

where G(0, 1) is a random number drawn from a Gaussian distribution with mean 0 and standard deviation 1. Apart from this modification, the whole search operation is the same as the DE/best/1 equation. The main goal is to search for a potential global best solution without getting trapped in a local optimum. The search process is followed over consecutive iterations and, over the course of time, the final best solution is updated. This helps in reinforcing complementary searching capabilities32 and prevents the algorithm from stagnating in local optima. A minimal sketch of this stagnation-handling step is given below.
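
The following sketch, under the assumption that m perturbed copies are generated only when the best solution has stalled, illustrates the Gaussian sampling of Eq. (17); the function name, the value of m and the acceptance rule are illustrative.

```python
import numpy as np

def gaussian_resample_best(x_best, f_best, objective, m=5, rng=None):
    """Assumed second-half refinement: spawn m perturbations of the current
    best via Eq. (17) and keep any improvement."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(m):
        trial = x_best * (1.0 - rng.standard_normal(x_best.shape))   # Eq. (17)
        f_trial = objective(trial)
        if f_trial < f_best:                                         # greedy acceptance
            x_best, f_best = trial, f_trial
    return x_best, f_best
```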

Crossover operation

Crossover (which can be arithmetic, exponential or binomial) is the next step of DE and is meant for creating the trial (offspring) vector \(x_i^t\). Here, the most commonly used binomial crossover is employed. In this kind of crossover, each component of \(x_i^t\) comes either from the mutated vector \(o_i^t\) or from \(x_i^t\) itself, as given by

$$\begin{aligned} x_i^t= {\left\{ \begin{array}{ll} o_i^t, \hspace{10pt} if (rand_j[0,1]\le CR), \hspace{3pt} j=1,2,\ldots ,D \\ x_i^t, \hspace{10pt} otherwise \end{array}\right. } \end{aligned}$$
(18)

where \(rand_j[0,1] \in [0,1]\) is redrawn for the jth component of the ith member of the population, and CR is the crossover rate that controls the contribution of \(o_i^t\) and \(x_i^t\). This parameter is important for balancing the expl and expt operations. In the proposed MHDE algorithm, a modification is added to the solution \(x_i^t\) used in Eq. (18). Here, \(x_i^t\) is not simply the previous solution but is based on the local search equation of the CS algorithm, and the general equation is given by

$$\begin{aligned} x_{i}^{t}=x_i^t+\epsilon \otimes \left( x_{e_1}^t-x_{e_2}^t\right) \end{aligned}$$
(19)

Here all notations are the same as in the mutation operation, apart from \(\epsilon \), which is a uniformly distributed random number generated using an adaptive strategy (discussed in subsequent subsections) and lies in the range [0, 1]. The main aim is to balance local and global search equally without losing diversity among the search agents. Mutation of the base vector is performed using Eq. (19), which gives the algorithm extensive search capabilities; instead of reusing the previous solution directly, a new generalized solution is used to maintain diversity among the search agents (intensive expt operation). A minimal sketch of this crossover step follows, and the next step is the selection operation.
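
A sketch of Eqs. (18)-(19) under these modifications is given below; treating the CS-perturbed vector as the base of the binomial crossover, and forcing at least one component to come from the donor (a standard DE convention), are assumptions of this illustration.

```python
import numpy as np

def cs_binomial_crossover(pop, i, donor, CR, eps, rng):
    """Assumed crossover of Eqs. (18)-(19): the base vector is perturbed with
    the CS local-search move, then binomial crossover with the donor is applied."""
    n, dim = pop.shape
    e1, e2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    base = pop[i] + eps * (pop[e1] - pop[e2])     # CS-style local move, Eq. (19)
    mask = rng.random(dim) <= CR                  # binomial crossover mask, Eq. (18)
    mask[rng.integers(dim)] = True                # keep >= 1 donor gene (assumed convention)
    return np.where(mask, donor, base)
```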

Selection operation

For any minimization process, the selection between the trial solution and the target solution \(x_i^t\) is performed according to Eq. (20)

$$\begin{aligned} x_{new}^{t+1}= {\left\{ \begin{array}{ll} x_{new} \hspace{20pt}if f(x_{new})<f(x_i^t)\\ x_i^t \hspace{30pt} otherwise \end{array}\right. } \end{aligned}$$
(20)

Here a generalized Roulette wheel selection mechanism is followed to find the final best solution. The next section deals with the various parameters of the proposed algorithm.

Parametric adaptation

In DE, a balanced expl and expt operation is achieved by tuning F and CR. One of the earliest studies was conducted in38, where efficient values were \(0<\textit{CR}<0.2\) and \(0.4<\textit{F}<0.95\); in39, values of \(0.1<\textit{CR}<1.0\) and \(0.15<\textit{F}<0.5\) were used, and in24 self-adaptive F and CR provided better results. Overall, CR and \(F \in [0,1]\). The parameter F is meant for improving the expl properties of DE, and in the present work, Lévy flights are used to imitate this operation. The Lévy flight mechanism is highly efficient and generates larger step sizes, enhancing the expl properties. The step size based on Lévy flights is generated as

$$\begin{aligned} \ F(\lambda )\sim \frac{\lambda \Gamma (\lambda )\sin (\pi \lambda /2)}{\pi }\frac{1}{s^{1+\lambda }} \hspace{2pt}(s\gg s_0\gg 0) \end{aligned}$$
(21)

where \(s=\frac{U}{|V|^{1/\lambda }}\), \(U\sim N(0,\sigma ^2)\), \(V\sim N(0,1)\) and \(\sigma =\bigg \{\frac{\Gamma (1+\lambda )}{\lambda \Gamma [(1+\lambda )/2]}\cdot \frac{\sin (\pi \lambda /2)}{2^{(\lambda -1)/2}}\bigg \}^{1/\lambda }\). Here \(\lambda =1.5\) and \(\Gamma \) is the gamma function. The sample U is drawn from a Gaussian distribution with mean 0 and variance \(\sigma ^2\).
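
A minimal sketch of this Mantegna-style Lévy step, used here as the scaling factor F, is shown below; how the raw step is clipped or scaled into a usable F range is not stated in the paper, so the function returns the raw value.

```python
import math
import numpy as np

def levy_scaling_factor(dim, lam=1.5, rng=None):
    """Mantegna-style Levy step of Eq. (21) with lambda = 1.5."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, dim)          # U ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)            # V ~ N(0, 1)
    return u / np.abs(v) ** (1 / lam)        # s = U / |V|^(1/lambda)
```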

The second parameter, CR, is mainly meant for drifting the algorithm from expl to expt. Based on CR, new solutions are kept if they improve over subsequent iterations, and if there is no improvement in the new generation's solutions, the solutions are inspired by the CS based hybridization. Although work has been done on improving CR, it has been found that adding adaptive properties can provide more reliable results24. These conclusions pave the way for a new distribution, which can help MHDE transition gradually from expl to expt without losing the global solution. Here, the Weibull distribution has been used for this purpose31. The probability distribution function is given by

$$\begin{aligned} CR(t)=\frac{\beta }{\eta }\Big (\frac{t-\gamma }{\eta }\Big )^{\beta -1} \hspace{5pt} e^{-(\frac{t-\gamma }{\eta })^\beta } \end{aligned}$$
(22)

where \(f(t)\ge 0\), \(\beta >0\), \(t\ge \gamma \), \(\eta >0\) and \(-\infty<\gamma <\infty \). It has three main parameters, namely the shape (\(\beta \)), scale (\(\eta \)) and location (\(\gamma \)) parameters. In most cases, \(\gamma =0\); \(\beta \) helps in switching between different distributions, including an L-shaped distribution for \(\beta \le 1\), an approximately normal distribution for \(\beta =3.602\), a bell-shaped distribution for \(\beta >1\), and others. For the present case, the two-parameter Weibull distribution is used with \(\eta \) equal to the maximum number of iterations and \(\beta =2\). The values of the Weibull distribution are taken from the literature31.
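
A sketch of this CR schedule is given below; since the paper does not state how the Weibull density is scaled into the [0, 1] range expected of a crossover rate, normalizing by the density's peak is an assumption of this illustration.

```python
import numpy as np

def weibull_cr(t, t_max, beta=2.0, normalize=True):
    """Two-parameter Weibull pdf of Eq. (22) with eta = t_max and gamma = 0."""
    eta = float(t_max)
    pdf = (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-(t / eta) ** beta)
    if normalize:
        # peak of the pdf (for beta > 1) occurs at t* = eta * ((beta - 1)/beta)**(1/beta)
        t_star = eta * ((beta - 1) / beta) ** (1 / beta)
        peak = (beta / eta) * (t_star / eta) ** (beta - 1) * np.exp(-(t_star / eta) ** beta)
        pdf = pdf / peak                      # assumed scaling so CR stays in (0, 1]
    return pdf
```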

In the mutation phase, there is a new parameter \(\epsilon \), inspired by CS, which is meant for improving the local search capabilities of the algorithm. The parameter is adapted in accordance with the scaling factor, as in35. The general equations are given by

$$\begin{aligned} \varepsilon _i^{t+1}=\frac{1}{2}\times \Big (\sin (2\pi \times freq \times t + \pi )\times \frac{t_{max}-t}{t_{max}} +1\Big );\hspace{5pt}if\; e_1 >0.5 \end{aligned}$$
(23)
$$\begin{aligned} \varepsilon _i^{t+1}=\frac{1}{2}\times \Big (\sin (2\pi \times freq \times t)\times \frac{t_{max}-t}{t_{max}} +1\Big );\hspace{5pt} if\; e_1< 0.5 \end{aligned}$$
(24)

Here freq is a fixed frequency, and t and \(t_{max}\) are the current and maximum iterations, respectively. This parameter is used only during the mutation operation and is intended for exploitation. Thus, three adaptive parameters are used to improve the overall stability of the proposed MHDE algorithm.
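
A minimal sketch of Eqs. (23)-(24) follows; the paper states only that freq is fixed, so the value used here is an assumption for illustration.

```python
import math
import random

def adaptive_epsilon(t, t_max, freq=0.25, rng=random):
    """Sinusoidal adaptation of epsilon, Eqs. (23)-(24)."""
    decay = (t_max - t) / t_max
    e1 = rng.random()
    if e1 > 0.5:                                                     # Eq. (23)
        return 0.5 * (math.sin(2 * math.pi * freq * t + math.pi) * decay + 1.0)
    return 0.5 * (math.sin(2 * math.pi * freq * t) * decay + 1.0)    # Eq. (24)
```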

Population adaptation

Population-based algorithms require an initial set of random solutions to start their search operation. The population size determines three things: the total number of spawned solutions, the maximum number of function evaluations and the complexity of the algorithm. A static population keeps the total function evaluations constant, whereas an adaptively decreasing population can reduce them significantly. The concept of an adaptive population was formulated in40 and was extended to GA in30. In40, the population was decreased with increasing solution fitness and increased with decreasing solution fitness. The major drawback of this formulation was the formation of new clones of existing solutions, leading to reduced performance. In30, the opposite adaptation was followed, reducing the population if the best fitness is increasing. For a multimodal problem, the algorithm should be able to explore large landscapes. Initially, with a large population size, the improvement in fitness is very high. The algorithm explores the search space and, over the course of iterations, starts moving in a particular direction. Because of the high fitness improvement, chances are that new solutions lie in the same direction, and hence the population size can be reduced30. This reduced population helps to find potential solutions without losing the final best solution. Also, as the iterations increase, the variation in solution quality becomes marginal, and hence a smaller population provides more reliable results, because each member of a small population has a higher probability of becoming the local best and eventually the global best solution. The mathematical rule deduced by30 is given as

$$\begin{aligned} N_{t+1}= {\left\{ \begin{array}{ll} (1-\Delta f_t^{best})N_t, \hspace{10pt} if \; \Delta f_t^{best}\le \Delta f_{max}^{best}\\ (1-\Delta f_{max}^{best})N_t, \hspace{10pt} if \; \Delta f_t^{best}> \Delta f_{max}^{best}\\ {min}_{N}, \hspace{10pt} if \; N_{t+1} <{min}_{N} \end{array}\right. } \end{aligned}$$
(25)

Here \(N_t\) is the population size at the \(t_{th}\) generation, \(\Delta f_t^{best}=\Big (\frac{f_{t-1}^{best}-f_{t-2}^{best}}{|f_{t-2}^{best}|}\Big )\) is the change in the best fitness, and \(\Delta f_{max}^{best}\) is a threshold value. It should be noted that a minimum population size \({min}_{N}\) must be defined so that the negative effects of a very small population are minimized. A minimal sketch of this reduction rule is given below.
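
The following sketch implements Eq. (25); the threshold \(\Delta f_{max}^{best}\), the minimum population size and the use of the absolute relative change (so that the rule also shrinks the population under minimization) are assumptions of this illustration.

```python
def reduce_population_size(N_t, f_best_hist, delta_max=0.1, N_min=20):
    """Adaptive population reduction of Eq. (25); f_best_hist holds the best
    fitness value recorded at each generation."""
    if len(f_best_hist) < 2 or f_best_hist[-2] == 0:
        return N_t                                    # not enough history yet
    delta = abs((f_best_hist[-1] - f_best_hist[-2]) / abs(f_best_hist[-2]))
    factor = min(delta, delta_max)                    # cap the shrink rate at the threshold
    return max(int((1.0 - factor) * N_t), N_min)      # never drop below min_N
```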

Numerical examples

The proposed MHDE algorithm is analysed on numerical benchmark datasets and compared with other recent hybrid algorithms. Three benchmark sets have been used, namely the classical benchmark problems from the CEC 2005 test suite33 and the CEC 2014 and CEC 2017 test suites34,35. For comparison on CEC 2005, the major algorithms used are JADE25, the covariance matrix adaptation evolution strategy (CMA-ES)21, SaDE41, a sine cosine crow search algorithm (SCCSA)42, extended GWO (GWO-E)20, fractional-order calculus-based FPA (FA-FPO)43, SHADE26 and LSHADE-SPACMA21. On the other hand, for the CEC 2014 benchmark problems, blended biogeography-based optimization (B-BBO)44, laplacian BBO (LX-BBO)44, random walk GWO (RW-GWO)44, population-based incremental learning (PBIL)45, improved symbiotic organisms search (ISOS)46, variable neighbourhood BA (VNBA)45, chaotic cuckoo search (CCS)45, and improved elephant herding optimization (IMEHO)45 are used.

Table 1 Parameter settings of different algorithms.

For all test categories, the parametric details of the algorithms under comparison are given in Table 1. Apart from the basic parameters, a population size of 50, \(D = 30\) and a total of 51 runs are used for evaluation. For CEC 2005, the total number of function evaluations is taken as 15,000, whereas for CEC 2014, the maximum number of function evaluations is set34 to \(10^4\) \(\times \) D. The results for both test cases are reported as mean error and standard deviation (std)34. It must be noted that the bold values in all tables signify the best algorithm for that particular problem.

For statistical testing, two tests, namely the Friedman rank (f-rank) and Wilcoxon rank-sum tests47, are used. The results are presented as ranks found from p-values at the \(5\%\) level of significance. For every test function, the statistical results are presented as win(w)/loss(l)/tie(t). Here, win(w) denotes that the test algorithm is better than the MHDE algorithm, loss(l) denotes that the algorithm is worse than the MHDE algorithm, and the "-" sign denotes a tie(t), meaning that the two algorithms under consideration are statistically similar47. Apart from that, the f-rank is calculated for every function and the average of all ranks is presented. In the next subsections, the analysis on the CEC 2005 benchmark problems is presented.

Classical benchmarks

A comparison of MHDE is performed with well-known variants of DE, including JADE, SaDE, SHADE and LSHADE-SPACMA, as well as some recently introduced algorithms, including GWO-E, SCCSA, FA-FPO and CMA-ES, as given in Table 2. Here \(G_1-G_7\) are unimodal functions (for testing expt capabilities), \(G_8-G_{12}\) are multi-modal functions (for testing a balanced expl and expt operation), and \(G_{13}-G_{15}\) are fixed-dimension functions (for convergence analysis), testing the effectiveness and consistency of the MHDE algorithm in finding the optimal solution. These test functions are defined in48 and are not explicitly discussed in the present paper.

Table 2 Simulation results for CEC 2005 benchmarks.

The results are presented as mean and std values for 30 dimension size. For \(G_1\), \(G_3\), \(G_4\), \(G_5\), \(G_7\), \(G_{11}\), \(G_{12}\) and \(G_{13}\) functions, the algorithm performs better in comparison to others. For \(G_8\) and \(G_{10}\) functions, GWO-E, FA-FPO and the proposed MHDE performs equivalently whereas for function \(G_9\), SCCSA, FA-FPO have equivalent results with respect to MHDE. Apart from that, JADE is found to be better for \(G_{14}\) and SCCSA for \(G_{15}\) function. The statistical results show that MHDE converges to better solutions than JADE, SaDE, SHADE and LSHADE-SPACMA and others, which indicate that MHDE is an excellent algorithm.

Furthermore, the Friedman f-rank and Wilcoxon rank-sum tests are conducted to analyse the results of MHDE with respect to the other algorithms over 51 independent trials for each function. Taking JADE versus MHDE as an example, the w/l/t ratio and average f-rank of MHDE are better than those of JADE, which means that MHDE is significantly better than JADE at the 5% significance level (95% confidence level). Overall, the ranking analysis between the DE variants, MHDE and the other algorithms shows that the proposed MHDE is significantly better.

Table 3 Sensitivity analysis of parametric adaptations.

Sensitivity analysis is performed to check how the newly introduced parameters affect the performance and efficiency of MHDE. In Table 3, five different adjustments are made to the proposed modifications, and two statistical indicators (mean and std) are used to describe them. The same parameter settings and function evaluations are used as for the CEC 2005 benchmark testing. The improved crossover operation helps in performance enhancement for unimodal functions, leading to better expt properties. The addition of an adaptive F improves the global search capabilities and hence provides better expl. Adding adaptivity in CR and the mutation operation enhances the accuracy for multi-modal functions, whereas the adaptive population size N reduces the function evaluations. Furthermore, the results of MHDE at different parameter settings are all better than those of JADE, SaDE and the other hybrid versions of DE. To sum up, the performance of MHDE is robust and excellent. To further validate the superiority of MHDE with respect to some recently introduced algorithms, the CEC 2014 benchmark test suite is used, as explained in detail in the next subsection.

CEC 2014 benchmarks

For CEC 2014 benchmarks, proposed MHDE algorithm and eight recently introduced hybrid algorithms have been selected for comparison. All of these algorithms are enhanced versions of new population-based algorithms and are B-BBO44, LX-BBO44, RW-GWO44, PBIL45, ISOS46, VNBA45, CCS45, and IMEHO45. The mean error and std values of all of these variants on 30 dimension problems are listed in Table 4.

Table 4 Statistical results for CEC 2014 benchmark functions.

Here, the error is computed as the difference between the obtained solution and the known best solution; if this difference is less than \(10^{-8}\), the error is treated as zero. From Table 4, it is found that MHDE performs better than all the other algorithms under consideration. Out of the three uni-modal functions (\(G_1-G_3\)), MHDE performs best for two, showing superior capability in finding the global solution and, hence, better expl properties. Among the multi-modal functions (\(G_4-G_{10}\)), MHDE performs best for three functions, and for the rest it is ranked either second or third, again showing its superior performance in local optima avoidance. For the hybrid benchmarks (\(G_{11}-G_{20}\)) and composite benchmarks (\(G_{21}-G_{30}\)), MHDE is found to be the best among all the algorithms. This further proves the capability of MHDE in balancing the expl and expt operations to achieve the global best solution. Overall, MHDE is ranked first, RW-GWO second and IMEHO third among all the algorithms under comparison. In the next subsection, MHDE is evaluated on the CEC 2017 benchmarks.

CEC 2017 benchmarks

For a comprehensive evaluation of the proposed MHDE algorithm against other meta-heuristic algorithms, the SaDE35, SHADE52, JADE35, CV1.029, \(CV_{new}\)53, MVMO35, and CS17 algorithms have been utilized with 51 runs and a population size of 100. In order to have a fair comparison, a maximum of \(10,000 \times D\) function evaluations is used, where \(D = 30\) is the dimension size. The algorithms used for comparison are highly competitive and have proved their worth in various CEC competitions. A rank-sum test (in terms of w/l/t) and an f-test at the 0.05 level of significance47 are performed to evaluate the performance of MHDE, along with the experimental mean error and standard deviation. The mean error is evaluated as the difference between the obtained values and the global optimum of the problem. From the results in Table 5, the following observations are made. For the unimodal problems \(H_1\), \(H_2\) and \(H_3\), SHADE, JADE, SaDE, MVMO and LSHADE give highly efficient results; \(CV_{new}\), CV1.0, CS and MHDE have similar performance, and SHADE performs the best for these problems. For the multimodal problems \(H_4\) to \(H_{10}\), SHADE, MVMO, JADE and SaDE have similar performance, and LSHADE gives the best results. For the hybrid problems \(H_{11}\) to \(H_{20}\), CS, \(CV_{new}\) and CV1.0 are better than the DE variants, and MHDE is found to be the best overall. For the composite problems \(H_{21}\) to \(H_{30}\), MHDE gives the best performance and is the most significant algorithm among all those under comparison. The last line of Table 5 provides the statistical p-values and f-rank, and it is found that, with respect to MHDE, LSHADE gives better performance for 16 problems, SHADE for 14 problems, SaDE for 10 problems, JADE for 12 problems, MVMO for 15 problems and \(CV_{new}\) for 6 problems.

The overall comparison shows that MHDE is better than the other algorithms for most of the hybrid and composite problems, while its performance is comparatively weaker on the unimodal and multimodal problems. This further demonstrates the significance of MHDE, in both statistical and experimental terms, for challenging optimization problems.

Table 5 Statistical results for CEC 2017 benchmark problems.

Real-world applications I: engineering design problems

Here, the effectiveness of the MHDE algorithm is assessed across a range of real-world optimization problems with diverse constraints. To handle constraints, a variety of techniques, including decoder functions, repair algorithms, feasibility preservation and penalty functions, can be employed, as outlined in55. In this study, penalty functions are adopted due to their simplicity of implementation and widespread adoption. A common penalty-based constraint-handling method is given by the equations below.

$$\begin{aligned} Minimize \hspace{2pt} F(\textbf{x}) = f(\textbf{x}) + \left( \sum _{i=1}^p a_i G_i(\textbf{x})+ \sum _{j=1}^qb_jH_j(\textbf{x})\right) \end{aligned}$$
(26)
$$\begin{aligned} G_i(\textbf{x})= max(0, g_i(\textbf{x}))^n \end{aligned}$$
(27)
$$\begin{aligned} H_j(\textbf{x})= |h_j(\textbf{x})|^{\lambda } \end{aligned}$$
(28)

The inequality constraints are described by \(g_i(\textbf{x})\) and the equality constraints by \(h_j(\textbf{x})\); \(p\) and \(q\) are the numbers of inequality and equality constraints, respectively. The constants \(a_i\) and \(b_j\) are positive penalty coefficients, and \(n\) and \(\lambda \) are set to 1 or 2. Utilizing a penalty function elevates the objective function value when constraints are breached. This creates an incentive for the algorithm to steer clear of infeasible areas and prioritize the exploration of feasible regions within the search space. A minimal sketch of this scheme follows.
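
The sketch below illustrates Eqs. (26)-(28) for a minimization problem; the penalty coefficients, exponents and function names are assumptions, since the paper does not fix their values.

```python
def penalized_objective(f, ineq, eq, a=1e6, b=1e6, n=2, lam=2):
    """Static penalty of Eqs. (26)-(28): ineq holds g_i(x) <= 0 constraints,
    eq holds h_j(x) = 0 constraints."""
    def wrapped(x):
        G = sum(max(0.0, g(x)) ** n for g in ineq)   # Eq. (27)
        H = sum(abs(h(x)) ** lam for h in eq)        # Eq. (28)
        return f(x) + a * G + b * H                  # Eq. (26), minimization form
    return wrapped
```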

For performance evaluation, four engineering design problems are used: (1) pressure vessel design, (2) rolling element bearing design, (3) tension/compression spring design, and (4) cantilever beam design. The MHDE algorithm is compared with some well-known algorithms, including artificial rabbit optimization (ARO)55, the Taguchi search algorithm (TSA)56, the multi-strategy chameleon algorithm (MCSA)56, hybrid particle swarm optimization (HPSO)57, the equilibrium optimizer (EO)21, evolution strategies (ES)58, the grasshopper optimization algorithm (GOA)59, (\(\mu + \lambda \)) evolutionary search (ES)60, the harris hawk optimizer (HHO)56, cuckoo search (CS)55, GCAII55, ant colony optimization (ACO)55, co-evolutionary differential evolution (CDE)60, the bacterial foraging optimization algorithm (BFOA)61, symbiotic organisms search (SOS)62, passing vehicle search (PVS)63, the meerkat optimization algorithm (MOA)64, the red panda optimizer (RPO)65, the mine blast algorithm (MBA)66, the moth flame optimizer (MFO)56, thermal exchange optimization (TEO)67, GCAI55, the seagull optimization algorithm (SOA)68, the co-evolutionary particle swarm optimization approach (CPSO)57, and the dynamic opposition strategy taylor-based optimal neighbourhood strategy and crossover operator (DTCSMO)69.

Pressure vessel design

The pressure vessel design optimization problem is a widely acknowledged challenge in engineering. The fundamental objective is to minimize the costs linked to material, welding and the overall fabrication of the pressure vessel, as discussed in57. The problem involves four key design variables: the thickness of the cylindrical shell \(T_s\), the inside radius of the cylindrical shell R, the head thickness \(T_h\), and the length of the cylindrical segment L. The problem has four constraints and is formulated in Eqs. (29) and (30); the geometry is shown in Fig. 1.

Figure 1
figure 1

Pressure vessel design problem.

Consider, \(\textbf{P}=\left[ L_1L_2L_3L_4\right] =\left[ T_sT_hRL\right] \)

$$\begin{aligned} Optimize, f\left( \textbf{P}\right) = 0.6224L_1L_3L_4+1.778L_2L_3^2+3.1661L_1^2L_4+19.84L_1^2L_3 \end{aligned}$$
(29)
$$\begin{aligned} \begin{aligned} Subjected \hspace{2pt} to, g_1\left( \textbf{P}\right)&=-L_1+0.0193L_3\le 0 \\ {g}_2\left( \textbf{P}\right)&=-L_2+0.00954L_3\le 0 \\ {g}_3\left( \textbf{P}\right)&=-\pi L_3^2L_4-\frac{4}{3}\pi L_3^3+1296000\le 0 \\ {g}_4\left( \textbf{P}\right)&=L_4-240\le 0 \end{aligned} \end{aligned}$$
(30)

Varying range, \(0\le L_1\le 99,\ 0\le L_2\le 99,\ 10\le L_3\le 200,\ 10\le L_4\le 200\)
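A minimal sketch of evaluating this design with a simple quadratic penalty (an assumed handling, in the spirit of Eqs. (26)-(27)) is given below; the penalty weight is illustrative.

```python
import math

def pressure_vessel(P, penalty_weight=1e6):
    """Pressure vessel cost (Eq. 29) with the constraints of Eq. (30);
    P = [Ts, Th, R, L]."""
    L1, L2, L3, L4 = P
    cost = (0.6224 * L1 * L3 * L4 + 1.778 * L2 * L3 ** 2
            + 3.1661 * L1 ** 2 * L4 + 19.84 * L1 ** 2 * L3)
    g = [
        -L1 + 0.0193 * L3,
        -L2 + 0.00954 * L3,
        -math.pi * L3 ** 2 * L4 - (4.0 / 3.0) * math.pi * L3 ** 3 + 1296000.0,
        L4 - 240.0,
    ]
    return cost + penalty_weight * sum(max(0.0, gi) ** 2 for gi in g)

# Example: evaluate the design reported for MHDE in the paper
print(pressure_vessel([0.7781695, 0.3846499, 40.32966, 199.9994]))
```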

Table 6 Statistical outcomes for Pressure vessel design challenge.
Figure 2
figure 2

Convergence of pressure vessel design.

The outcomes pertaining to this design problem are presented in Table 6, where the results are evaluated using different algorithms for comparative analysis. These algorithms encompass MCSA56, ARO55, CPSO75, HPSO57, (\(\mu + \lambda \))ES60, ACO55, CDE60, HHO56, MOA64, RPO65, MFO56, TSA55, MVO56, and others. The convergence patterns are shown in Fig. 2.

After optimization, the values of the variables obtained through MHDE are given by \(x = (0.7781695, 0.3846499, 40.32966, 199.9994)\). The corresponding optimal cost for this design problem is \(f = 5885.3353\). This result significantly outperforms the outcomes achieved by all other algorithms examined in our comparative analysis. The demonstration of such competitive performance serves to affirm the effectiveness of the proposed MHDE algorithm in comparison to the alternative algorithms that were evaluated.

Rolling element bearing design

This optimization problem is associated with maximizing the load-bearing capacity of a rolling element bearing56 and is represented in Fig. 3. The design problem has ten design variables and ten constraints. It is mathematically given by

Consider \(\textbf{P} = [J_m, J_b, Z, f_i, f_o, K_{Dmin}, K_{Dmax}, \epsilon , e, \zeta ]\)

$$\begin{aligned} Maximise \left\{ {\begin{array}{*{20}{l}} f(\textbf{P})= f_cZ^{2/3}J_b^{1.8}&{} if\; J_b \le 25.4 mm\\ f(\textbf{P})= 3.647f_cZ^{2/3}J_b^{1.4}&{} if\; J_b > 25.4 mm \\ \end{array}}\right. \end{aligned}$$
(31)
$$\begin{aligned} \begin{aligned} Subject \hspace{10pt} to \\ g_1(\textbf{P})&= \frac{\phi _0}{2sin^{-1}(J_b/J_m)}-Z+ 1 \ge 0,\hspace{5pt} g_2(\textbf{P})= 2J_b- K_{Dmin}(J-j)\ge 0, \\ g_3(\textbf{P})&= K_{Dmax}(J-j)-2J_b \ge 0, \hspace{5pt} g_4(\textbf{P})= J_m-(0.5-e)(J+j)\ge 0, \\ g_5(\textbf{P})&= (0.5+e)(J+j)-J_m\ge 0, \hspace{5pt} g_6(\textbf{P})= J_m - 0.5(J+j)\ge 0, \\ g_7(\textbf{P})&= 0.5(J-J_m-J_b)-\epsilon J_b\ge 0, \hspace{5pt} g_8(\textbf{P}) = \zeta B_w -J_b \le 0, \\ g_9(\textbf{P})&= f_i \ge 0.515, \hspace{5pt} g_{10}(\textbf{P}) = f_o \ge 0.515 \end{aligned} \end{aligned}$$
(32)

where

$$\begin{aligned} f_c &= 37.91\left[ 1+\left\{ 1.04\left( \frac{1-\gamma }{1+\gamma }\right) ^{1.72}\left( \frac{f_i(2f_o-1)}{f_o(2f_i-1)}\right) ^{0.4}\right\} ^{10/3}\right] ^{-0.3} \times \left( \frac{\gamma ^{0.3}(1-\gamma )^{1.39}}{f_o(1+\gamma ^{\frac{1}{3}})}\right) \left( \frac{2f_i}{2f_i-1}\right) ^{0.41} \end{aligned}$$
(33)
$$\begin{aligned} \gamma &= \frac{J_b}{J_m}, \hspace{5pt} f_i = \frac{r_i}{J_b}, \hspace{5pt} f_o = \frac{r_o}{J_b}, \end{aligned}$$
(34)
$$\begin{aligned} \phi _o &= 2\pi - 2cos^{-1} \frac{\left\{ (J-j)/2 -3(T/4)\right\} ^2+ \left\{ J/2-(T/4)-J_b \right\} ^2- \left\{ j/2+ (T/4)\right\} ^2}{2\left\{ (J-j)/2-3(T/4)\right\} \left\{ J/2-(T/4)-J_b\right\} } \end{aligned}$$
(35)

\(T = J-j-2J_b\), \(J =160\), \(j= 90\), \(B_w = 30\), \(r_i = r_o = 11.033\)

Variable range

\(0.5(J+j)\le J_m \le 0.6(J+j), 0.15(J-j)\le J_b \le 0.45(J-j), 4\le Z \le 50,\)

\(0.515\le f_i \le 0.6, 0.515\le f_o \le 0.6, 0.4 \le K_{Dmin}\le 0.5, 0.6 \le K_{Dmax} \le 0.7, 0.3 \le \epsilon \le 0.4,\)

\(0.02 \le e \le 0.1, \hspace{5pt} 0.6 \le \zeta \le 0.85\)

Figure 3
figure 3

Rolling element bearing design challenges.

Table 7 Statistical outcomes for Rolling element bearing design challenges.

In this design example, the algorithms used for comparison are ARO55, GA263, MBA66, PVS63, TLBO76, SOA68, DTCSMO, PSO, and DE, and the results are given in Table 7. The design variables obtained by MHDE for this scenario are \( x = (125.7191, 21.2716, 11, 0.5150, 0.5150, 0.4195017, 0.6430438, 0.3000, 0.0310311, 0.6963122)\), and the optimized cost is \(f = 85549.2391\). From the results in Table 7, it can be seen that the proposed algorithm is highly competitive with respect to the other algorithms.

Tension/compression spring design

For a compression spring, there are three design variables, including the wire diameter (d), mean coil diameter (D), and the number of active coils (N). The design is given in Fig. 4. The mathematical formulation is as:

Consider, \(\textbf{P}=\left[ L_1L_2L_3\right] =\left[ dDN\right] \).

$$\begin{aligned} Minimize, \ f\left( \textbf{P}\right) =\left( L_3+2\right) L_2L_1^2 \end{aligned}$$
(36)
$$\begin{aligned} \begin{aligned}{}&Subjected \hspace{2pt} to, g_1\left( \textbf{P}\right) =1-\frac{L_2^3L_3}{71785L_1^4}\le 0 \\&\quad {g}_2\left( \textbf{P}\right) =\frac{4L_2^2-L_1L_2}{12566\left( L_2L_1^3-L_1^4\right) }+\frac{1}{5108L_1^2}\le 0 \\&\quad {g}_3\left( \textbf{P}\right) =1-\frac{140.45L_1}{L_3L_2^2}\le 0 \\&\quad {g}_4\left( \textbf{P}\right) =\frac{L_1+L_2}{1.5}-1\le 0 \end{aligned} \end{aligned}$$
(37)

Limits, \(0.005\le L_1\le 2.0,\ 0.25\le L_2\le 1.30,\ 2.0\le L_3\le 15.0\)

Table 8 Statistical outcomes for compression spring design challenges.
Figure 4
figure 4

Tension compression spring design challenge.

In this scenario, a comprehensive comparison is conducted with respect to RPO, CDE, GCAII55, MMA79, CPSO, SI, ARO55, SOS62, CS55, MFO56, GCAI55, BFOA, HHO, GOA59, and others, as outlined in Table 8. For this case, the optimal design variables derived using the MHDE algorithm, reported in Table 8 and Fig. 5, are \(x = (0.0526768, 0.380935, 10)\), and the resulting optimized cost is \(f = 0.012684\). These results demonstrate the significance of the proposed algorithm for the tension/compression spring design problem.

Figure 5
figure 5

Convergence of tension/compression spring design.

Cantilever beam design

This problem is aimed at minimizing the weight of a cantilever beam consisting of five distinct blocks, whose dimensions constitute the five design variables; it has one constraint and is shown in Fig. 6.

The design problem is mathematically given by Consider variable \(\textbf{P}= [L_1, L_2, L_3, L_4, L_5]\)

Minimize \(f(\textbf{P})= 0.0624(L_1+L_2+L_3+L_4+L_5)\)

Subject to \(g_1(\textbf{P})= \frac{61}{L_1^3}+\frac{37}{L_2^3}+\frac{19}{L_3^3}+\frac{7}{L_4^3}+\frac{1}{L_5^3}-1 \le 0\)

Variable range \(0.01 \le L_i \le 100, \hspace{5pt} i =1, \ldots , 5.\)

Table 9 Statistical outcomes for cantilever beam design challenges.
Figure 6
figure 6

Cantilever beam design challenge.

A comparison is performed with respect to MFO56, ARO55, SOS62, CS55, GCAII55, GCAI55, MMA79, and GOA59. The results in Table 9 show that the design variables obtained for this problem are \(x = (6.0140, 5.3128, 4.4914, 3.4993, 2.1563)\) and the optimized cost is \(f = 1.34000\). The convergence patterns are given in Fig. 7. Here too, the proposed algorithm is competitive with respect to the others.

Figure 7
figure 7

Convergence of cantilever beam design.

Real-world applications II: frame design problems

Here, the MHDE algorithm is used for the weight minimization of 1-bay 8-story, 3-bay 15-story and 3-bay 24-story structures, respectively. The optimization results are compared with recently introduced hybrid algorithms to establish the significance of MHDE. The frame structure benchmark problems are highly challenging to design because of the high level of difficulty in their implementation80. The figures of the three frames are taken from81.

Meta-heuristic algorithms (MHAs) have emerged as the core of modern optimization research and have set the trend for their use in almost every research domain. MHAs have been found to provide good solutions for frame design problems, and various algorithms have been presented in the literature for optimal frame structure design28,36,82. In the present work, the proposed MHDE is tested for weight optimization of frame structures. The termination criterion is based on the maximum number of function evaluations and is inspired from36. The objective function is evaluated 20,000 times for the 1-bay 8-story frame, 30,000 times for the 3-bay 15-story frame and 50,000 times for the 3-bay 24-story frame. For each problem, a population size of 50 is used and a total of 20 independent runs are performed. Apart from that, it is ensured that the reported designs contain no constraint violations, allowing a fair comparison among the algorithms. Here, a randomly generated initial population containing both feasible and infeasible solutions has been used to obtain statistically significant results.

Designing 1-bay 8-story frame

For this case, practical fabrication conditions are imposed from the outset by using the same beam section and the same column section for every two successive stories. The modulus of elasticity of the material is E = 200 GPa (29000 ksi), and the cross-sectional areas of all elements are chosen from 267 W-shaped sections. The only constraint is that the lateral drift must be less than 5.08 cm. The design is shown in Fig. 8.

Table 10 Optimization results for the 1-bay 8-story frame.

The comparison has been performed with respect to GA28, ACO28, DE28, ES-DE28, PSOACO82, HGAPSO82, PSOPC82 and SFLAIWO82. From the experimental results in Table 10, it is found that MHDE gives the minimum weight of 30.70 kN for the frame structure. The next best algorithms, ACO and SFLAIWO, with optimized weights of 31.05 kN and 31.08 kN respectively, rank second and third. Overall, MHDE provides more reliable results than most of the well-known algorithms reported in the literature.

Figure 8
figure 8

Design of 1-bay 8-story frame81.

Designing 3-bay 15-story frame

For the 3-bay 15-story frame design, the AISC combined strength constraint and a displacement constraint are included as optimization constraints. The material properties of the frame are: E = 200 GPa (29000 ksi), yield stress \(F_y=248.2\) MPa, and the sway at the top must be less than 23.5 cm. The effective length factor is calculated as \(k_x \ge 0\) for the sway-permitted frame, and the out-of-plane effective length factor is \(k_y = 1.0\). The length of each beam is 1/5 of the span length, and the design structure is given in Fig. 9.

Table 11 Optimization results for the 3-bay 15-story frame.

Here, nine improved algorithms are used for comparison including HPSACO82, HBB-BC82, ICA-ACO36, DE28, ES-DE28, AWEO36, EVPS36, FHO7, SDE36 and SFLAIWO82. From the optimization results in Table 11, it is evident that the minimum weight is obtained by MHDE and is equal to 360.22 kN. The second-best algorithm is SFLAIWO having an optimized weight of 379.21 kN whereas for the third best SDE it is 387.89 kN. In comparison to second best and third-best algorithm, MHDE has a reduced weight of 18.99 kN and 27.67 kN respectively. The optimized average of 20 runs for this frame using MHDE is 364.73 kN with a 2.16 kN std. This further proves the superiority of MHDE algorithm in comparison to others.

Figure 9
figure 9

Design of 3-bay 15-story frame81.

Designing 3-bay 24-story frame

This frame consists of 168 members28 and must be designed in accordance with the LRFD specifications. The frame has a displacement constraint, and its material properties are \(E=205\) GPa and \(F_y=230.3\) MPa. The effective length factor is \(K_x \ge 0\), and the out-of-plane factor is \(K_y =1.0\). It should be noted that all beams and columns are unbraced along their lengths, and the design structure is shown in Fig. 10.

Table 12 Optimization results for the 3-bay 24-story frame.
Figure 10
figure 10

Design of 3-bay 24-story frame81.

For fabrication, the first and third bays of each floor use the same beam section, except for the roof beam, and hence there are only 4 groups of beams. Starting from the foundation, the interior columns are grouped together over three consecutive stories. Overall, this frame consists of 4 groups of beams and 16 groups of columns, making the total number of design variables 20. The beam elements are chosen from 267 W-shapes, whereas the column sections are restricted to W14 sections (37 W-shapes).

The optimized weights for this example are presented in Table 12. Here MHDE is compared with HBB-BC82, HS37, ICO28, ICA84, HBBPSO82, ES-DE28, AWEO36, FHA7, EVPS36 and SFLAIWO82. It is found that, among all the algorithms, MHDE achieves the minimum weight of 904.91 kN. The second and third best are the EVPS and SFLAIWO algorithms, with optimized weights of 905.67 kN and 911.78 kN, respectively. The mean weight over 20 independent runs for MHDE is 910.23 kN with a deviation of 3.78 kN. These best values further prove the superiority of MHDE in contrast to the other algorithms. Also, the number of function evaluations used by MHDE is much lower than that of the other algorithms; for example, only 50,000 function evaluations are used for MHDE, in contrast to SFLAIWO, where 168,000 function evaluations are utilized for weight optimization. Overall, it can be said that in this case also MHDE has superior performance and is easily able to enter the neighbourhood of the global optimal solution.

Conclusion

This article presented a multi-hybrid algorithm combining the concept of iterative division with adaptive mutation for improved expl, adaptive parameters for balanced expl and expt, population size reduction, and Gaussian random sampling for mitigating the local optima stagnation problem. The new optimization strategy helps to carry out the global search more efficiently by using GWO based equations. All the above-discussed features ensure the good performance of MHDE.

MHDE was evaluated on the CEC 2005 classical benchmarks and the CEC 2014 and CEC 2017 benchmark datasets. The experimental and statistical results show that MHDE is superior to DE variants such as JADE, SaDE, SHADE and others. The algorithm was then applied to the weight minimization of three frame design problems with discrete variables. The optimization results confirm the superior performance and competitiveness of MHDE over other algorithms for frame design as well. To summarize, MHDE is a reliable and efficient algorithm for solving complex structural design problems.

Further studies should aim at providing a theoretical analysis of the sensitivity and performance of MHDE. More work can be done to find a suitable combination of adaptive parameters to make the algorithm suitable for a broader range of domain research problems. Another possibility is to use some of the more recent algorithms instead of GWO or CS for the equation modifications, in order to control the search operation for better accuracy. Apart from that, the combination of multiple strategies might lead to negative interference in the algorithm's behavior; for example, changing the F factor changes the expt/expl balance, yet even a good step size may be discarded if the crossover rate does not permit it. In this sense, a careful sensitivity analysis should be performed to verify possible interference among the proposed strategies when combined. Finally, work on convergence analysis can be performed to provide more insights into the working capabilities of the proposed algorithm.