In real-world applications, optimization problems are frequently non-differentiable, non-convex and discontinuous. Before metaheuristic optimization techniques became the most extensively used approach, gradient descent and the Gauss–Newton method were among the techniques commonly employed1,2. Gradient-based optimization is prone to becoming trapped in local optima, which reduces the precision of the optimization. Metaheuristic optimization algorithms, on the other hand, are able to find optimal or near-optimal solutions in a manageable time. Many researchers have studied these algorithms in order to deal with difficult optimization problems; examples include the Genetic Algorithm (GA)3, Particle Swarm Optimization (PSO)4, Hybrid Artificial Humming Bird-Simulated Annealing (HAHA-SA)5, Differential Evolution (DE)6, Hybrid Flow Direction Optimizer-dynamic oppositional based algorithm (HFDO-DOBL)7, Firefly Algorithm (FA)8, Artificial Electric Field Algorithm (AEFA)9,10, Artificial Bee Colony Optimization (ABC)11, Hybrid Heat Transfer Search and Passing Vehicle Search Optimizer (MOHHTS–PVS)12, Cuckoo Search (CS)13, Chaotic Marine Predators Algorithm (CMPA)14 and the Nelder Mead-infused INFO algorithm (HINFO-NM)15. Among these algorithms, AEFA has become a focus of research in recent years owing to its few parameters and simple principle.

AEFA is a stochastic optimization algorithm based on swarm intelligence. The charged particles interact via electrostatic forces of attraction or repulsion, and they travel along the electrostatic field with the most highly charged particle leading. When it was first introduced, researchers were intrigued by the efficiency of AEFA, and it has since been used extensively in numerous fields, such as machine learning16,17, assembly lines18, engineering problems19,20, feature selection21 and the economic load dispatch problem22. The leading charged particle guides each iteration of AEFA's search process.

In multimodal problems, the leading charged particle may occasionally enter a sub-optimal location23. When the leading charged particle becomes stranded in such a location, the population as a whole is subject to the same local optimum. Reduced population diversity becomes inevitable because of the strong convergence of the other particles toward the leading charged particle. Consequently, standard AEFA shares the problems of most metaheuristic algorithms: lack of population diversity and a tendency to become trapped in local optima.

The aforementioned problems are the main motivation of this study. Therefore, a hybridized version of the Artificial Electric Field Algorithm (AEFA) with Cuckoo Search (CS) and Refraction Learning (RL), called AEFA-CSR, is proposed. In this hybrid version, two search methods with different properties are introduced to provide new solutions in the population. The first uses the idea of light refraction to learn an opposite solution; it is proposed to enhance the lead particle's search capability and broaden its range so as to avoid sub-optimality. The second, the CS method, is used to increase population diversity. Weakening the leadership of the lead particle and allowing the particles to identify viable solutions on their own enables them to move away from other sub-optimal solutions. Used in combination, these strategies noticeably improve the performance of AEFA-CSR.

There are many AEFA variants in the literature. However, to the best of our knowledge, AEFA-CSR is the first variant of AEFA that employs CS and RL. To overcome the limitations of AEFA, CS is chosen specifically to enhance population diversity, and RL is applied specifically to improve the ability to escape from multiple local optima. These features enable AEFA-CSR to contribute to the field with enhanced solution quality, including for real-world engineering problems. Each step is specifically designed to strengthen these features and overcome the limitations of AEFA.

The rest of the paper is structured as follows. Section “Related work” reviews related metaheuristic optimization techniques, in particular AEFA variants and their contributions. Section “Methodology” discusses the methodology used to form AEFA-CSR; the constituent algorithms AEFA, CS and RL are elaborated individually together with the main motivation of this study. Section “Experimental results” presents the experimental results obtained by the proposed AEFA-CSR against counterpart algorithms, including well-studied and commonly used ones, recently developed ones and hybrid ones, on the benchmark functions. Additionally, specific tests are employed to observe the efficiency of AEFA-CSR and the results are discussed. Moreover, to measure the suitability of the algorithm for real-world engineering design problems, several such problems are studied and analyzed. Finally, Section “Conclusion and future work” gives the concluding statements and future work.

Related work

Metaheuristic optimization techniques are widely used to address optimization challenges and can be categorized into three groups: physics-inspired algorithms, evolution-inspired algorithms and swarm-inspired algorithms. Physics-inspired algorithms imitate the physical laws that govern how individuals interact with each other and their search space, including the laws of inertia, light refraction, gravitation and many others24. A few of the popular algorithms in this category are the Gravitational Search Algorithm (GSA)25, Colliding Bodies Optimization (CBO)26 and Henry Gas Solubility Optimization (HGSO)27. Evolution-inspired algorithms simulate the natural process of evolution by trying different combinations of individuals to find the best solution; the key advantage of this technique is that the best individuals are combined to form a new generation. A few of these algorithms are the Genetic Algorithm (GA)3, Differential Evolution (DE)6 and Evolutionary Programming (EP)28. Swarm-inspired algorithms aim to reproduce intelligent swarm behaviors such as animal grazing and bird flocking; promising areas of the search space are discovered by a population that collaborates and interacts. A few of the recent swarm-inspired algorithms are the Whale Optimization Algorithm (WOA)29, Seagull Optimization Algorithm (SOA)30, Particle Swarm Optimization (PSO)4 and Harris Hawks Optimization (HHO)31.

Each of the numerous metaheuristic algorithms has its own drawbacks. The balance between local and global search in the Grey Wolf Optimizer (GWO) is weak24. Rodríguez et al. investigated altering the initial control parameters to improve the exploration process in GWO32. GWO does not maintain an effectively diverse population, so Lou et al. switched from the typical real-valued encoding to a complex-valued one, which makes the population more diverse33. WOA easily enters local optima and suffers from premature convergence; using chaotic maps, Oliva et al. modified WOA to prevent the population from entering local optima34. Shi et al. proposed a chaos-based operator and a new neighbor selection strategy to speed up the convergence of the Artificial Bee Colony (ABC), both of which improved the standard ABC35.

AEFA is a physics-inspired algorithm that mimics how a group of charged particles interacts and moves through the search space. It is simple to implement and has few parameters. As a result, a variety of optimization problems have been successfully solved using this algorithm: controller design36, multi-objective optimization problems37, soil shear strength prediction38, pattern search39, vehicle routing40 and tumor detection17 are some examples. To improve performance and address the shortcomings of AEFA, numerous researchers have developed variants of the original algorithm in recent years. Malisetti and Pamula used the Moth Levy methodology to create the Moth Levy Artificial Electric Field Algorithm (ML-AEFA) to address the problem of entering sub-optimality41. An algorithm with new strategies for velocity update and population initialization, known as the improved Artificial Electric Field Algorithm (IAEFA), has been introduced to enhance the robustness of AEFA in handling complex problems42. Furthermore, because the algorithm focuses primarily on local search, it cannot effectively perform global exploration across the entire solution space; Cheng et al. used a log-sigmoid function to strike a balance between exploration and exploitation16. AEFA with inertia and repulsion, also named improved Artificial Electric Field Algorithm (IAEFA), was introduced by Bi et al. to avoid premature convergence and improve population diversity43. A modified Artificial Electric Field Algorithm (mAEFA) was proposed by Houssein et al., in which Levy flight, a local escaping operator and opposition learning are introduced to avert stagnation in regions of local optima23. Extensive experiments show improvements in the convergence rate and search ability of AEFA.
To attain a better exploitation-exploration balance, Anita, Yadav and Kumar introduced AEFA for solving constrained optimization problems (AEFA-C), constraining particle interaction to the search space's border using new velocity and location updates19. The improved version of AEFA proposed by Demirören et al., the Opposition-based Artificial Electric Field Algorithm (ObAEFA), makes use of the opposition-based learning strategy to improve the exploration capabilities of AEFA36; its improved performance was validated through several experiments. Petwal and Rani's experimental findings show that their proposed algorithm is highly competitive and achieves the desired level of population diversity37. An AEFA based on opposition learning has also been proposed to enhance global exploration and local development capabilities: the opposition learning strategy is used to increase population diversity and exploitation, while a chaos strategy is used to improve the quality of the initial population; experiments demonstrate the algorithm's superior performance44. Furthermore, a Levy flight mechanism that provides multiple distinct evolutionary strategies and enhances local search capability was introduced to AEFA by Sinthia and Malathi17. The elitism selection theory ensures that the fittest survive, and mutation operators increase population diversity. The performance of the multi-strategy Artificial Electric Field Algorithm (M-AEFA) is enhanced by the dynamic combination of these adaptive strategies.

Hybridizing AEFA with other algorithms, such as swarm-inspired or physics-inspired ones, is another area of research interest. The poor exploitation caused by the stochastic nature of AEFA is improved by hybridizing AEFA with the Nelder-Mead (NM) simplex: AEFA performs the global search while NM performs the local search, and tests on popular functions show improved performance20. The well-known DE algorithm has also been applied to create an effective hybrid combining the capabilities of AEFA and DE (AEFA-DE). The performance of this hybrid method was validated on the IEEE Congress on Evolutionary Computation 2019 (CEC-2019) test functions, and the experimental findings imply that AEFA-DE performs better than the compared algorithms45.

AEFA is able to conduct an in-depth search across the solution space using its local search mechanism18. It was discovered that the location of the charged particle and the mutual attraction of nearby particles determine how the algorithm updates its position. Despite its strong local search ability, AEFA has limited global search capacity, because the charged particles have a strong ability to exchange information. The Sine–Cosine Algorithm (SCA) balances local and global search better than AEFA; as a result, SCA's update mechanism was incorporated into AEFA (SC-AEFA) by changing the iterative process of the algorithm40.

Consistent with the numerous improvement methodologies described above, the major goal of AEFA derivatives is to increase search accuracy and convergence speed. Accordingly, this study presents AEFA-CSR, an improved Artificial Electric Field Algorithm based on Cuckoo Search (CS) with Refraction Learning (RL).


Artificial electrical field algorithm

Coulomb's law states that the electrostatic force of repulsion or attraction between two charged particles is directly proportional to the product of their charges and inversely proportional to the square of the distance between their positions. This idea serves as the basis of AEFA9. Here, the charged particles are referred to as agents and the charges of the particles are used to evaluate the agents' potentials. Every charged particle may experience a repelling or attracting electrostatic force as objects move through the search space. The charges use electrostatic forces to communicate directly, and their positions encode candidate solutions. As a result, charges are expressed as a function of the population's fitness, and each position is a potential solution. By the electrostatic force of attraction, particles with lower charge are drawn toward particles with higher charge, and the solution with the highest charge is considered the best46. The pseudocode of AEFA can be seen in Algorithm 1.

Assume \({Y}_{j}=\left({Y}_{j}^{1},{Y}_{j}^{2},\dots ,{{Y}_{j}^{{ D}_{N}}}\right)\forall j=1,2 ,\dots ,N\), where the jth particle has dimension \({D}_{N}\). By employing the location and personal best fitness value of a particular particle, that particle is able to offer the best global fitness value in AEFA. The formula to acquire the best position of any particle \(j\) over interval \(t\) is expressed in Eq. (1)43,

$${B}_{j}^{{D}_{N}}(t+1)=\left\{\begin{array}{ll}{B}_{j}^{{D}_{N}}(t)& \text{if fitness}\left({B}_{j}^{{D}_{N}}(t)\right)<\text{fitness}\left({Y}_{j}(t+1)\right)\\ {Y}_{j}(t+1)& \text{if fitness}\left({Y}_{j}(t+1)\right)\le \text{fitness}\left({B}_{j}(t)\right)\end{array}\right.$$

where a particle’s personal best position and current position are represented as \({B}_{j}\) and \({Y}_{j}\) respectively. Furthermore, the force exerted on the charge \(l\) by \(j\) over interval \(t\) is given in Eq. (2)9,

$${\text{Force }}_{jl}^{{D}_{N}}(t)=K(t)\frac{{q}_{j}(t)\times {q}_{l}(t)\left({B}_{l}^{{D}_{N}}(t)-{Y}_{j}^{{D}_{N}}(t)\right)}{{DIST}_{jl}(t)+\varepsilon }$$

where \({q}_{l}\) and \({q}_{j}\), the charges of particles \(l\) and \(j\), are expressed as,

$${q}_{l}\left(t\right)=\mathrm{exp}\left(\frac{\mathrm{Fitness}\left({B}_{l}(t)\right)-\mathrm{Worst}(t)}{\mathrm{Best}(t)-\mathrm{Worst}(t)}\right)$$
$$\left\{\begin{array}{c}\mathrm{Best}\left(t\right)=\mathrm{min}\left({\mathrm{Fitness}}_{l}\left(t\right)\right),\quad l=1,2,\dots ,N\\ \mathrm{Worst}\left(t\right)=\mathrm{max}\left({\mathrm{Fitness}}_{l}\left(t\right)\right),\quad l=1,2,\dots ,N\end{array}\right.$$
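As a minimal illustration, the charge computation of Eqs. (3), (4) and (8) can be sketched in a few lines. This is a hypothetical helper (not the authors' code), assuming a minimization problem and a NumPy array of personal-best fitness values:

```python
import numpy as np

def compute_charges(fitness):
    # Eq. (4): for minimization, Best is the smallest and Worst the largest
    # personal-best fitness value in the population.
    best, worst = fitness.min(), fitness.max()
    if best == worst:
        # Degenerate population (all fitness values equal): uniform charges.
        q = np.ones_like(fitness, dtype=float)
    else:
        # Eq. (3): the best particle receives charge exp(1), the worst exp(0) = 1.
        q = np.exp((fitness - worst) / (best - worst))
    # Eq. (8): normalize the charges so they sum to one.
    return q / q.sum()
```

Note that better (lower) fitness yields a larger charge, so the fittest particle exerts the strongest pull on the rest of the population.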

\(\mathrm{Worst}\left(t\right)\) and \(\mathrm{Best}(t)\) represent the worst and best fitness among all charges. \(K(t)\) and \(\varepsilon \) denote Coulomb’s constant and a small positive constant, respectively. The Euclidean distance between two independent particles at interval \(t\) is represented as \({DIST}_{jl}(t)\) and is calculated as follows.

$${DIST}_{jl}(t)= {\left\| {Y}_{j}(t),{Y}_{l}(t) \right\|}_{2}$$

In addition, Eq. (6) computes Coulomb’s constant from \({\text{max }}_{\text{iter}}\) and the current iteration. The parameters of Coulomb’s constant are \({K}_{0}\) and \(\gamma \), respectively.

$$K(t) ={K}_{0}\times \mathrm{exp}\left(-\gamma \frac{\text{ iter }}{{\text{ max }}_{\text{iter}}}\right)$$

where \({\text{max }}_{\text{iter}}\) refers to the total number of iterations preset at the beginning and iter is the current iteration number when computing Coulomb’s constant.

The total electric force on the \(jth\) particle with dimension \({D}_{N}\) is thus stated as follows,

$${\text{Total Force }}_{jl}^{{D}_{N}}\left(t\right)=\sum_{l=1,l\ne j}^{N} R\times \left[{\text{ Force }}_{jl}^{{D}_{N}}(t)\right]$$

where \(R\) denotes a random number in the range [0, 1]. The individual charge divided by the total charge of all particles is expressed as \({Q}_{l}\left(t\right)\) as follows.

$${Q}_{l}\left(t\right)=\frac{{q}_{l}(t)}{\sum_{l=1}^{N} {q}_{l}(t)}$$

Equations (9) and (10) describe the equations for the respective electric field \(E{F}_{j}^{{D}_{N}}(t)\) and acceleration \({\mathrm{Acc}}_{j}^{{D}_{N}}(t)\) of the jth particle having the dimension as \({D}_{N}\) over interval \(t\),

$$E{F}_{j}^{{D}_{N}}(t)=\frac{{\text{ Total \, Force }}_{jl}^{{D}_{N}}(t)}{{Q}_{j}\left(t\right)}$$
$${\mathrm{Acc}}_{j}^{{D}_{N}}(t)=\frac{{Q}_{j}\left(t\right)\times E{F}_{j}^{{D}_{N}}(t)}{M{a}_{j}(t)}$$

where \(M{a}_{j}\left(t\right)\) represents the mass of particle \(j\). Equations (11) and (12) provide the update equations for the velocity and location of the jth particle as follows,

$${V}_{j}^{{D}_{N}}(t+1)=R\times {V}_{j}^{{D}_{N}}(t)+{\mathrm{Acc}}_{j}^{{D}_{N}}(t)$$
$${Y}_{j}^{{D}_{N}}(t+1)={Y}_{j}^{{D}_{N}}(t)+{V}_{j}^{{D}_{N}}(t+1)$$
figure a
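For readers who prefer code to pseudocode, one iteration of AEFA following Eqs. (1)–(12) can be sketched as follows. This is an illustrative reimplementation, not the authors' reference code: a unit particle mass is assumed, so with \(E_j = F_j/Q_j\) and \(a_j = Q_j E_j/Ma_j\) the acceleration reduces to the total force.

```python
import numpy as np

def aefa_step(Y, V, pbest, pbest_fit, fit_fn, t, max_iter, rng,
              K0=500.0, gamma=30.0, eps=1e-12):
    """One AEFA iteration (sketch). Y: positions (N, D); V: velocities;
    pbest / pbest_fit: personal-best positions and their fitness values."""
    N, D = Y.shape
    K = K0 * np.exp(-gamma * t / max_iter)                    # Eq. (6)
    best, worst = pbest_fit.min(), pbest_fit.max()
    q = np.exp((pbest_fit - worst) / (best - worst + eps))    # Eq. (3)
    acc = np.zeros_like(Y)
    for j in range(N):
        F = np.zeros(D)
        for l in range(N):
            if l == j:
                continue
            dist = np.linalg.norm(Y[j] - Y[l])                # Eq. (5)
            # Eq. (2): force on j, directed toward the personal best of l
            F += rng.random() * K * q[j] * q[l] * (pbest[l] - Y[j]) / (dist + eps)
        # Eqs. (9)-(10): a_j = Q_j (F_j / Q_j) / Ma_j, i.e. F_j for unit mass
        acc[j] = F
    V = rng.random((N, D)) * V + acc                          # Eq. (11)
    Y = Y + V                                                 # Eq. (12)
    fit = np.array([fit_fn(y) for y in Y])
    better = fit <= pbest_fit                                 # Eq. (1)
    pbest[better], pbest_fit[better] = Y[better], fit[better]
    return Y, V, pbest, pbest_fit
```

Because personal bests are only overwritten on improvement, the best fitness in `pbest_fit` is non-increasing across iterations.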

Cuckoo search

The strong reproductive strategy of some cuckoo species inspired Cuckoo Search (CS)13,47, a metaheuristic algorithm based on swarm intelligence. Three rules regulate CS operations, and the last rule entails adding some fresh random solutions to the process48: a fraction \(Pa\) of the \(n\) host nests is abandoned and replaced with newly created nests. The basic steps of CS follow the cuckoo breeding behavior, which can be found in49 and was summarized in50. The optimization problem to be tackled is portrayed as \(f(Y)\), where a nest is represented as \(Y=\left\{{Y}_{1},{Y}_{2}, {Y}_{3},{\dots Y}_{D}\right\}\) with \(D\) dimensions. Within the designated search space there are \(N\) host nests \(\left\{{Y}_{i},i=1,\dots ,N\right\}\), and each nest \({Y}_{i}=\left\{{Y}_{i1},\dots ,{Y}_{iD}\right\}\) indicates a potential solution to the optimization task at hand. Finding the new population of nests \({Y}_{i}(t+1)\) is one of the algorithm’s crucial phases. The following equation shows the use of the Levy flight to obtain new nests at time \(t\),

$${Y}_{ij}(t+1)={Y}_{ij}(t)+\alpha \oplus \mathrm{Levy}(\lambda )$$

where \(\lambda \) is a Lévy flight parameter, \(\oplus \) is entry-wise multiplication, and \(\alpha \) is the step size. As mentioned above, the third rule imitates the notion that host birds abandon their nests when alien eggs are found. In this case, the following method is used to regenerate a new nest with probability \(Pa\),

$${Y}_{i}\left(t+1\right)={Y}_{i}\left(t\right)+R\times \left({Y}_{y}\left(t\right)-{Y}_{j}\left(t\right)\right)$$

where \({Y}_{y}\left(t\right)\) and \({Y}_{j}\left(t\right)\) are two randomly chosen nests from the host nests and \(R\) is a random value in [0, 1]. The step-by-step execution of CS can be seen in Algorithm 2.

figure b
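The two CS moves above can be sketched in code. This is an illustrative implementation rather than the exact Algorithm 2: the Lévy-distributed step is drawn with Mantegna's algorithm (a common choice), improvements are accepted greedily, and the parameter values (\(\alpha = 0.01\), \(Pa = 0.25\)) are typical defaults.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(D, beta=1.5, rng=np.random.default_rng()):
    """D-dimensional Levy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_generation(Y, fit, fit_fn, alpha=0.01, pa=0.25,
                      rng=np.random.default_rng()):
    """One CS generation: Levy-flight moves (Eq. 13), then nest abandonment
    with probability pa (Eq. 14); improvements are kept greedily."""
    N, D = Y.shape
    for i in range(N):
        trial = Y[i] + alpha * levy_step(D, rng=rng)        # Eq. (13)
        f = fit_fn(trial)
        if f <= fit[i]:
            Y[i], fit[i] = trial, f
    for i in range(N):
        if rng.random() < pa:                               # abandon nest i
            y, j = rng.integers(N), rng.integers(N)
            trial = Y[i] + rng.random() * (Y[y] - Y[j])     # Eq. (14)
            f = fit_fn(trial)
            if f <= fit[i]:
                Y[i], fit[i] = trial, f
    return Y, fit
```

The heavy-tailed Lévy steps produce occasional long jumps, which is what gives CS its strong exploration capability.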

Artificial electric field employing Cuckoo search algorithm with Refraction learning (AEFA-CSR)

The AEFA-CSR algorithm is proposed through proper use of the preceding techniques. The algorithm combines the benefits of the Artificial Electric Field Algorithm (AEFA) with those of two further algorithms, Cuckoo Search (CS) and Refraction Learning (RL), thereby incorporating a sub-optimality avoidance technique. This offers noticeably stronger global search capability as well as the ability to avoid being trapped in local optimum points. The detailed steps of AEFA-CSR are given in Algorithm 3.

figure c
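A compact, self-contained sketch of the hybrid loop is given below. It follows the structure described in this section (an AEFA motion update, a CS nest-replacement pass for diversity, and refraction learning applied to the lead particle), but the loop order, boundary handling, parameter defaults and the choice of the product \(k\eta\) are illustrative assumptions, not the exact steps of Algorithm 3.

```python
import numpy as np

def aefa_csr(fit_fn, lb, ub, N=20, D=5, max_iter=100,
             K0=500.0, gam=30.0, pa=0.25, seed=0):
    """Illustrative AEFA-CSR sketch (minimization within [lb, ub]^D)."""
    rng = np.random.default_rng(seed)
    eps = 1e-12
    Y = rng.uniform(lb, ub, (N, D))
    V = np.zeros((N, D))
    fit = np.array([fit_fn(y) for y in Y])
    pbest, pfit = Y.copy(), fit.copy()
    for t in range(max_iter):
        # AEFA motion update (Eqs. 2-12), unit mass assumed
        K = K0 * np.exp(-gam * t / max_iter)
        b, w = pfit.min(), pfit.max()
        q = np.exp((pfit - w) / (b - w + eps))
        for j in range(N):
            F = np.zeros(D)
            for l in range(N):
                if l != j:
                    d = np.linalg.norm(Y[j] - Y[l])
                    F += rng.random() * K * q[j] * q[l] * (pbest[l] - Y[j]) / (d + eps)
            V[j] = rng.random(D) * V[j] + F
            Y[j] = np.clip(Y[j] + V[j], lb, ub)
        # CS nest replacement (Eq. 15) to maintain population diversity
        for i in range(N):
            if rng.random() < pa:
                a, c = rng.integers(N), rng.integers(N)
                Y[i] = np.clip(Y[i] + rng.random() * (Y[a] - Y[c]), lb, ub)
        fit = np.array([fit_fn(y) for y in Y])
        imp = fit <= pfit
        pbest[imp], pfit[imp] = Y[imp], fit[imp]
        # Refraction learning (Eq. 16) on the lead particle, kept only if better
        lead = int(pfit.argmin())
        k_eta = rng.uniform(1.0, 2.0)          # illustrative choice of k * eta
        opp = np.clip((lb + ub) / 2 + (lb + ub) / (2 * k_eta)
                      - pbest[lead] / k_eta, lb, ub)
        f_opp = fit_fn(opp)
        if f_opp < pfit[lead]:
            pbest[lead], pfit[lead] = opp, f_opp
    i = int(pfit.argmin())
    return pbest[i], pfit[i]
```

Since the RL move is applied only to the lead particle and accepted only on improvement, it weakens a misleading leader without disturbing the rest of the population.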


An algorithm may become inefficient if exploration is excessive. Likewise, excessive exploitation may prematurely trap the algorithm in a sub-optimal region and produce unacceptable results. Consequently, the balance between exploration and exploitation is crucial for an algorithm’s efficiency51. The location of the charged particle and the mutual attraction of nearby particles drive the position update of the candidate solutions. AEFA has good exploitation ability but limited global search capacity, owing to the strong interaction between the particles40. AEFA relies solely on the electrostatic force; this style of population update quickly draws the other agents toward the lead agent and limits population variety. As a result, AEFA is susceptible to becoming trapped.

Figure 1 displays the migration of 20 agents on the Sphere function, with the dimension set to 2 for visualization. The upper and lower bounds are set to 10 and −10, and the agents are shown at various iterations. The lead agent leads the migration as the other agents begin to traverse the solution space at iteration one. At iteration 10, AEFA starts to collect the agents around the promising area of the problem space, as informed by the lead agent. The strong attraction force discourages exploration by the other particles, which reduces population diversity, as seen in Fig. 1. Should the lead agent become trapped in some local optimum, the other agents will also become trapped. In other words, the lead agent dominates the exploring capabilities of AEFA. The other particles need greater exploration and exploitation capability, obtained by weakening the leadership of the lead agent, and the lead agent itself requires a local optimum avoidance approach. This drawback serves as the foundation for this work's motivation.

Figure 1
figure 1

Distribution of 20 agents in AEFA with (a) the random distribution of agents, (b) the updated locations of agents after 10 iterations and (c) the gathered positions of agents.

Cuckoo search nest replacement strategy

The Cuckoo Search (CS) nest replacement operator replaces some nests at random with newly produced solutions to enhance the algorithm's exploration capability; with this operator, the exploration capability of CS is quite strong52,53,54. Owing to this efficiency, the operator is applied to AEFA as an aid to its poor exploration ability.

This process involves replacing a set of nests with new values based on a probabilistic selection. Any nest \({Y}_{i} \left(i\in \left[1,\dots ,N\right]\right)\) may be chosen with a probability of \({p}_{a}\in [\mathrm{0,1}]\). A uniform random number \(\mathrm{R}\) within [0, 1] is drawn to carry out this procedure. When \(\mathrm{R}\) falls below \({p}_{a}\), the nest \({Y}_{i}\) is chosen and adjusted as shown in Eq. (15); otherwise, \({Y}_{i}\) is unchanged. Eq. (15) is as follows,

$${Y}_{i}\left(t+1\right)=\left\{\begin{array}{ll}{Y}_{i}+rand\cdot \left({Y}_{y}-{Y}_{j}\right),& \text{with probability }{p}_{a},\\ {Y}_{i },& \text{with probability }\left(1-{p}_{a}\right),\end{array}\right.$$

where \(y\) and \(j\) are random integers from 1 to \(N\) and rand is a normally distributed random number.
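The operator of Eq. (15) can be sketched as follows. The helper name is hypothetical, and a uniform step factor is used here for simplicity, matching the \(R \in [0,1]\) form of Eq. (14) (the text above instead describes rand as normally distributed):

```python
import numpy as np

def nest_replacement(Y, pa=0.25, rng=np.random.default_rng()):
    """CS nest-replacement operator, Eq. (15): each nest Y_i is perturbed
    with probability pa by the scaled difference of two randomly chosen
    nests Y_y and Y_j; otherwise it is left unchanged."""
    N = len(Y)
    out = Y.copy()
    for i in range(N):
        if rng.random() < pa:
            y, j = rng.integers(N), rng.integers(N)
            out[i] = Y[i] + rng.random() * (Y[y] - Y[j])
    return out
```

Because the perturbation is built from differences of existing nests, its scale adapts to the current spread of the population.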

Refraction learning

Refraction Learning (RL) is based on the idea that light rays bend when they pass from air into water. As the medium of a ray changes, its velocity changes as well, and the ray bends relative to the normal of the boundary. This theory aims to help candidate solutions leave sub-optimal regions while retaining variety55. This kind of opposition-based learning can be considered a more advanced way to avoid sub-optimality. Refraction learning has been used in the Whale Optimization Algorithm (WOA) and the Equalized Grey Wolf Optimizer (EGWO)24,56; in both applications, the statistical results show that local optimality is avoided via the RL method. The RL equations are stated as follows,

$${x}^{{^{\prime}}*}=(\mathrm{LB}+\mathrm{UB})/2+(\mathrm{LB}+\mathrm{UB})/(2k\eta )-{x}^{*}/(k\eta )$$

where \({x}^{*}\) represents a variable in the potential solution and \(\eta \) is the specified refraction index which is expressed as follows,

$$\eta =\frac{{\sin}{\theta }_{1}}{{\sin}{\theta }_{2}}$$
$${\sin}{\theta }_{1}=\left((\mathrm{LB}+\mathrm{UB})/2-{x}^{*}\right)/h$$
$${\sin}{\theta }_{2}=\left({x}^{{^{\prime}}*}-(\mathrm{LB}+\mathrm{UB})/2\right)/{h}^{{^{\prime}}}$$

The refraction absorption index \(k\) is expressed as,

$$k=\frac{h}{{h}^{{^{\prime}}}}$$
where Fig. 2 depicts the refraction of light with all variables: \(x\) and \({x}^{^{\prime}}\) denote the incidence point and the refraction point, respectively, while \(\mathrm{LB}\), \(\mathrm{UB}\) and \(O\) denote the lower limit, the upper limit and the center. The parameters \(h\) and \({h}^{{^{\prime}}}\) define the distances from \(x\) to \(O\) and from \({x}^{{^{\prime}}}\) to \(O\). The refracted solution of \({x}^{*}\) is \({x}^{{^{\prime}}*}\).
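Eq. (16) is straightforward to implement once the product \(k\eta\) is fixed. The sketch below uses a hypothetical helper name; note that with \(k\eta = 1\) it reduces to the classical opposition-based learning point \(x' = \mathrm{LB} + \mathrm{UB} - x^{*}\), while larger \(k\eta\) pulls the refracted solution toward the center of the range.

```python
import numpy as np

def refraction_opposite(x, lb, ub, k, eta):
    """Refracted opposite of x per Eq. (16):
    x' = (LB + UB)/2 + (LB + UB)/(2*k*eta) - x/(k*eta).
    Works element-wise for NumPy arrays as well as scalars."""
    mid = (lb + ub) / 2.0
    return mid + mid / (k * eta) - x / (k * eta)
```

In an optimizer, the refracted point is typically evaluated and accepted only if it improves on the current solution, which is what lets a trapped lead particle jump to the opposite side of the search range.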

Figure 2
figure 2

Fundamentals of light refraction.

Experimental results

Experiments are carried out on 20 standard benchmark functions to confirm the efficiency of AEFA-CSR in solving global optimization functions. The algorithms Artificial Electric Field Algorithm (AEFA)9, Cuckoo Search (CS)47, Differential Evolution (DE)6, Firefly Algorithm (FA)8, Particle Swarm Optimization (PSO)4, Jaya Algorithm (JAYA)57, Hybrid-Flash Butterfly Optimization Algorithm (HFBOA)58, Sand Cat Swarm Optimization (SCSO)59, Salp Swarm Algorithm with Local Escaping Operator (SSALEO)60, Transient Search Optimization (TSO)61 and Chaotic Hybrid Butterfly Optimization Algorithm with Particle Swarm (HPSOBOA)62 were chosen for detailed comparison. The algorithms are chosen to give the reader a broad view: well-studied and commonly used ones, recently developed ones that gained the attention of researchers in a short period of time and, finally, hybrid algorithms made up of powerful optimizers. Each algorithm is individually tested on the functions over 30 independent trials to assess its problem-solving capability. The Wilcoxon Rank Sum test and the nonparametric Friedman test are used for statistical testing to capture the variations in the algorithms’ performances63,64. Several parameter combinations are set up to examine the influence of each control parameter on each algorithm. Additionally, convergence analysis, overall effectiveness with varying populations and dimensions, exploration and exploitation analyses, and computational complexity tests are conducted. Afterwards, the efficiency of AEFA-CSR is validated on real-world engineering problems: optimization of antenna S-parameters, welded-beam and compression designs.

Benchmark functions

Table 1 displays the fundamental properties of the 20 functions chosen for testing. F1 through F7 are unimodal functions with a single global optimum within the specified boundary; they are typically used to gauge an algorithm's ability to exploit regions of potential solutions. F8–F20 are multimodal functions: F8–F13 are high-dimensional multimodal functions and F14–F20 are fixed-dimension multimodal functions. These functions have multiple local extrema within each function's domain, so they are capable of probing global exploration and can cause premature convergence.

Table 1 Benchmark functions.


The parameter values for the corresponding algorithms, provided in Table 2, are those published in the original publications or commonly used in the literature.

Table 2 Parameter settings.

Overall effectiveness

In this study, the results from Tables 3, 4, 5, 6 and 7 were used to compare the Overall Effectiveness (OE) of AEFA-CSR with that of its counterparts. As Eq. (21) demonstrates, the number of test functions and the number of losses of each algorithm determine its OE60.

$$OE=\left(\frac{N-L}{N}\right)\times 100$$

where N is the total number of functions and L is the number of losses incurred by an algorithm. In the tables, W and T indicate the number of wins and the number of ties, respectively.
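As a worked example of Eq. (21), an algorithm that loses on 2 of 20 benchmark functions attains OE = (20 − 2)/20 × 100 = 90% (the helper name below is hypothetical):

```python
def overall_effectiveness(n_functions, n_losses):
    """Overall Effectiveness, Eq. (21): OE = (N - L) / N * 100."""
    return (n_functions - n_losses) / n_functions * 100.0
```

Wins and ties both count toward effectiveness, since only outright losses reduce the score.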

Table 3 F1–F20 comparison with dimension = 30 and population = 30.
Table 4 F1–F13 comparison with dimension = 50 and population = 30.
Table 5 F1–F13 comparison with dimension = 50 and population = 60.
Table 6 F1–F13 comparison with dimension = 50 and population = 90.
Table 7 F1–F13 comparison with dimension = 100 and population = 30.

Dimension analysis

Given that dimensionality has a substantial impact on optimization accuracy, F1 through F13 are expanded from 30 to 100 dimensions to test the algorithms' abilities in solving the problems. The outputs of each algorithm are then assessed. The mean value (Avg) and standard deviation (Std) are used as assessment indices to give the experiments greater credibility: Avg indicates the quality and accuracy of the solutions, while Std indicates the algorithm's stability. The population size is 30 and the maximum number of iterations for all algorithms is set to 1000.

The experimental findings for dimension 30 are displayed in Table 3. The analysis shows that on the unimodal functions (F1–F7), AEFA-CSR finds the best near-optimal solutions, outperforming the other algorithms by a large margin. This is because the included RL mechanism improves the algorithm's local search and exploitation ability. On the multimodal functions F8–F20, AEFA-CSR performs best on F8–F12 and F14–F20 and obtains the global optimum for F9 and F11. All of the outcomes produced by AEFA-CSR are superior to those produced by AEFA, which suggests that the population variety introduced by the CS approach has increased the algorithm's exploration ability relative to AEFA. A solid balance between exploitation and exploration is also successfully achieved, as observed in the results for the fixed-dimension functions.

Tables 3, 4 and 7 show the experimental study for varying dimensions with the population size kept constant at 30, and Tables 4, 5 and 6 show the study for varying population sizes with the dimension kept constant at 50. For a fixed population size, as the problem size grows, AEFA-CSR is superior to the other compared algorithms in all cases, with an Overall Effectiveness (OE) ranging from 76.93 to 90.0%. Similarly, for a fixed dimension, as the population size grows, AEFA-CSR produces higher OE values, ranging from 61.53 to 76.93%.

Convergence analysis

The convergence trajectories of the 12 algorithms are presented in Fig. 3 to further analyze how well the different algorithms converge while addressing the optimization functions. The dimension of functions F1–F13 is set to 30, while F14–F20 are fixed-dimension functions. The convergence precision of AEFA-CSR is clearly better on the unimodal functions F1–F6. On the multimodal functions F8–F20, AEFA-CSR maintains an extraordinary convergence rate and converges to the global optimum on F9, F11, F14, F16 and F20.

Figure 3
figure 3figure 3

Convergence trajectory with dimension = 30.

The convergence efficiency of AEFA-CSR is noticeably better than that of AEFA, demonstrating that the population diversification adjustments and the introduction of the RL technique are quite successful. The experimental findings show that AEFA-CSR has improved optimization capability and convergence performance. Notably, the algorithm's rapid convergence will be valuable in optimization problems where convergence is the essential consideration.

Statistical test

Garcia et al. made the point that it is insufficient to compare metaheuristic algorithm performance using only the mean and standard deviation65, since inescapable factors during the iterative process have an impact on the experimental outcomes66,67. The Wilcoxon Rank Sum test and the Friedman test are therefore used in this study to examine the effectiveness of the algorithms.

The Friedman test is used to evaluate the experiment's validity by comparing the proposed AEFA-CSR with the other algorithms. It is one of the most popular and commonly applied statistical tests for finding significant differences between the outputs of two or more algorithms68. Table 8 displays the results of the Friedman tests; the algorithm with the lowest rank is considered the most effective. According to the results in Table 8, the proposed AEFA-CSR is always ranked first in the various scenarios (30, 50 and 100 dimensions with the population set to 30), giving it a stronger competitive edge over the other algorithms.
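The mean-rank part of the Friedman procedure can be sketched as below. The per-function error values are illustrative, not results from this study, and ties are broken arbitrarily here, whereas the full test assigns average ranks and computes a chi-square p-value.

```python
# Minimal sketch of the Friedman mean-rank computation used to order the
# algorithms (lowest mean rank = best). Illustrative data only.
def friedman_mean_ranks(results):
    """results: dict name -> list of errors, one per benchmark (lower is better)."""
    names = list(results)
    n_funcs = len(results[names[0]])
    totals = {name: 0 for name in names}
    for f in range(n_funcs):
        ordered = sorted(names, key=lambda a: results[a][f])
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return {name: totals[name] / n_funcs for name in names}

ranks = friedman_mean_ranks({
    "AEFA-CSR": [1e-9, 0.0, 3e-4, 1e-6, 2e-2],
    "AEFA":     [1e-2, 4e-3, 9e-1, 5e-2, 8e-1],
    "PSO":      [3e-3, 1e-3, 6e-1, 2e-2, 4e-1],
})
# AEFA-CSR gets mean rank 1.0 here: first on every illustrative function.
```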

Table 8 Friedman's test for 30, 50 and 100 dimensions for 20 functions.

The significance threshold p for the Wilcoxon Rank Sum test is set at 0.05; a technique is considered statistically better when p < 0.05. Table 9 displays the results of the Wilcoxon Rank Sum test, where the symbols +/−/= denote that the suggested approach is better than, worse than or equal to the existing approach67. Table 9 demonstrates that AEFA-CSR consistently attains R+ values greater than its R− values. Additionally, the p values of the six algorithms in Table 9 are less than 0.05, implying that they are substantially different from AEFA-CSR, which confirms the superiority of AEFA-CSR. Table 9 further reveals that when the dimension is expanded from 30 to 100 with the population set to 30, the + count of AEFA-CSR increases. This indicates that the performance of AEFA-CSR does not decline like that of the other algorithms: it produces a substantial improvement over them as the dimension increases. The findings demonstrate that the suggested AEFA-CSR has a higher level of solution accuracy.
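The R+ and R− sums reported in such tables can be derived from paired per-function results as sketched below (this corresponds to the signed-rank style of R+/R− reporting; the sample values are illustrative only).

```python
# Hedged sketch of R+ / R- rank sums for a paired comparison of two algorithms.
def wilcoxon_r_sums(proposed, rival):
    """Errors per function for two algorithms (lower is better). Returns
    (R_plus, R_minus): rank sums favoring the proposed / the rival method."""
    diffs = [r - p for p, r in zip(proposed, rival) if r != p]   # drop zero diffs
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                       # assign average ranks to ties
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return r_plus, r_minus

r_plus, r_minus = wilcoxon_r_sums([0.1, 0.2, 0.05, 0.3], [0.4, 0.1, 0.5, 0.9])
# r_plus > r_minus means the proposed method won the more heavily ranked cases.
```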

Table 9 Wilcoxon Rank sum test 30, 50 and 100 dimensions for 20 functions.

Conclusively, AEFA-CSR is more competitive than traditional algorithms such as DE and PSO, recent algorithms such as FA, JAYA, CS and SCSO, and hybrid algorithms such as HPSOBOA. The newly introduced strategies are to be credited for the proposed algorithm's achievements: the RL solution strategy improves the local-optimum escape mechanism, while CS lessens the dominance of the lead agent. Combining the two approaches therefore significantly enhances the ability of AEFA to solve both unimodal and multimodal functions.

Sensitivity analysis to parameters

A sensitivity assessment of the parameters is also carried out in this part to investigate the impact of the various parameters of AEFA-CSR. The population size, iteration number and dimension are maintained at 30, 1000 and 30 throughout the experiment. The starting values of \(k\), \(\eta \) in Eqs. (15) and (16) are set to 1 and 0.25, and are then altered throughout the test. The parameter \(Pa\) is varied within the range [0, 1] (the author of CS suggests a value of 0.25), while the parameters \(k\) and \(\eta \) are varied within [1, 1000]47. As indicated in Table 10, eight variations of AEFA-CSR are developed, each representing a combination of the various parameters. Note that these settings can be changed to fit the particular problem. For \(k\), \(\eta \) and \(Pa\), the values 1000, 1000 and 0.1 are used in the earlier experiments, since the variation of AEFA-CSR with these parameters performed best based on the Friedman rank.

Table 10 Statistical Test for parameter combination with dimension = 30.

As seen in Table 10, AEFA-CSR with the parameters set to \(k\) = 1, \(\eta \) = 1 and \(Pa\) = 0.25 finds poorer results on all of the test functions than with \(k\) = 1000, \(\eta \) = 1000 and \(Pa\) = 0.1, with the exception of F1, F13 and F14–F20, where the results are comparable. This implies that for the high-dimensional functions (F1–F13), the setting \(k\) = 1000, \(\eta \) = 1000 and \(Pa\) = 0.1 handles them more robustly and with greater accuracy. Table 10 also shows that with \(k\) = 1000, \(\eta \) = 1000 and \(Pa\) = 0.25, the performance of AEFA-CSR is comparable to the settings \(k\) = 10, \(\eta \) = 10, \(Pa\) = 0.25 and \(k\) = 100, \(\eta \) = 100, \(Pa\) = 0.25, with the exception of F8, F9 and F12, where the \(k\) = 10, \(\eta \) = 10, \(Pa\) = 0.25 setting performs differently. Additionally, varying \(Pa\) from 0.1 to 0.3 while keeping \(k\) and \(\eta \) at 1000 makes no significant difference to the extraordinary performance of AEFA-CSR. From Table 11, it can be seen that there is no significant difference between the best parameter set, \(k\) = \(\eta \) = 1000 with \(Pa\) = 0.1, and the sets \(k\) = \(\eta \) = 100 with \(Pa\) = 0.25, \(k\) = \(\eta \) = 1000 with \(Pa\) = 0.25, \(k\) = \(\eta \) = 1000 with \(Pa\) = 0.2, and \(k\) = \(\eta \) = 1000 with \(Pa\) = 0.3, as depicted by the p-values.

Table 11 Wilcoxon Rank Sum test for parameter combination.

The average objective function values over 30 independent runs are shown in Fig. 4 for all test functions under various combinations of \(k\), \(\eta \) and \(Pa\). As shown in Fig. 4, the parameter combinations with \(k\) = \(\eta \) = 1000 show comparably exceptional outcomes for the majority of test functions, whereas with the pairings \(k\) = \(\eta \) = 1 and \(k\) = \(\eta \) = 10, AEFA-CSR tends to perform poorly in comparison. The illustrated convergence trajectories depict that, to attain the best performance for AEFA-CSR, \(k\) and \(\eta \) should be tuned to a somewhat large number, ideally 1000.

Figure 4

Convergence trajectory values for different parameter combinations.

Exploration and exploitation analysis

Exploration is the ability of an optimization algorithm to pursue diverse solutions in unexplored areas, while exploitation is its ability to pursue solutions around the optimum of a problem. Since F1 and F2 are unimodal functions, they are quite appropriate for observing the algorithm's exploitation ability. Similarly, F10 and F12 are multimodal functions with multiple local optima, which makes them quite suitable for measuring the exploration ability of the algorithm.

When the dimension increases, the number of local optimum points in the multimodal functions increases drastically. As indicated in Tables 3, 4 and 7, the proposed AEFA-CSR produces better results for the varying dimensions 30, 50 and 100. This is a good indication that the algorithm overcomes the multiple local optimum points and reaches the global optimum, which is achieved by balanced exploration and exploitation.

Additionally, Fig. 5 graphically shows the exploration and exploitation stages of AEFA-CSR. For the functions analyzed in this figure, the algorithm starts with broad exploration and narrow exploitation; as the optimization process goes on, a balance between the two is established.
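One common way to produce such exploration/exploitation percentages (not necessarily the one used for Fig. 5) is a dimension-wise population-diversity measure: exploration % is the current diversity relative to its maximum over the run, and exploitation % is its complement. A hypothetical sketch with a toy two-iteration history:

```python
# Hedged sketch of a diversity-based XPL% / XPT% profile over iterations.
import statistics

def xpl_xpt(history):
    """history: list of populations (one per iteration); each population is a
    list of agent position vectors. Returns per-iteration (XPL%, XPT%)."""
    divs = []
    for pop in history:
        n, dims = len(pop), len(pop[0])
        div = 0.0
        for j in range(dims):
            med = statistics.median(agent[j] for agent in pop)
            div += sum(abs(agent[j] - med) for agent in pop) / n
        divs.append(div / dims)
    d_max = max(divs)
    return [(100 * d / d_max, 100 * (d_max - d) / d_max) for d in divs]

# Spread-out population first (pure exploration), collapsed population second:
profile = xpl_xpt([[[0.0], [2.0]], [[1.0], [1.0]]])
```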

Figure 5

Exploration and exploitation stages through the optimization process.

Computational complexity

Big O computational complexity is one of the metrics used to evaluate the performance of metaheuristics. As shown in Algorithm 3, there is only one loop, which is considered O(N), where N is the number of agents in the population. The entire complexity, which includes moving the agents towards the optimum solution and calculating the fitness values, is considered \(O({\text{max}}_{\text{iteration}} \times N \times D)\), where \({\text{max}}_{\text{iteration}}\) is the number of iterations and D is the dimension.
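The origin of this bound can be seen in the loop structure of a generic population-based optimizer. The sketch below is a structural illustration only (the update rule is made up, not the actual AEFA-CSR update):

```python
# Structural sketch (not the actual AEFA-CSR code) showing where the
# O(max_iteration x N x D) bound comes from: each iteration visits all N
# agents, and each agent update plus its fitness evaluation walks D coordinates.
import random

def optimizer_skeleton(fitness, n_agents, dim, max_iteration, lo=-1.0, hi=1.0):
    random.seed(42)                                            # reproducibility
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best, best_f = None, float("inf")
    for _ in range(max_iteration):                             # O(max_iteration)
        for agent in pop:                                      #   x N agents
            f = fitness(agent)                                 #   fitness is O(D)
            if f < best_f:
                best, best_f = list(agent), f
            for j in range(dim):                               #   x D coordinates
                agent[j] += 0.5 * random.random() * (best[j] - agent[j])
    return best, best_f

best, best_f = optimizer_skeleton(lambda x: sum(v * v for v in x), 10, 5, 50)
```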

AEFA-CSR is compared with its competitors with respect to computational time in Table 12. Looking at AEFA and CS individually, it is obvious that these algorithms require higher CPU time than the others. Since the proposed AEFA-CSR is a combination of AEFA, CS and RL, each method needs to be carried out individually during the optimization process. Therefore, the CPU time of AEFA-CSR is not always better than that of the compared methods because of its more complex nature. Overall, it can be said that AEFA-CSR requires more computational time, but its efficiency is well ahead of these algorithms. Considering the significant contributions of AEFA-CSR, even in real engineering problems an equilibrium can be struck between high accuracy and the amount of time required to solve the problems.

Table 12 CPU time comparison for benchmarks at dimension = 30 and population = 30.

Engineering problems application

Optimization of antenna S-parameters

The suitability and efficiency of the algorithm in solving engineering problems are shown in this section. To demonstrate its suitability, a test suite made up of eight antenna design test problems is chosen, and the results are compared with those of the other algorithms. The test suite consists of several formulas, known as test functions, that analyze different antenna design problems69,70; these enable an effective assessment of an algorithm's performance.

It is not practical to evaluate optimization algorithms using electromagnetic simulation, since simulating an antenna often takes a long time. An effective test suite with analytical test functions is therefore desirable for pre-testing antenna optimization techniques. Several test suites have been proposed69,70,71; however, test suites intended for antenna parameter optimization are rarely researched. A suitable test suite for antenna design should represent the features of many antenna types using various kinds of formulas. Zhang et al.72 successfully researched and proposed a test suite whose objective functions cover the diverse characteristics of various types of antenna problems72,73, as displayed in Table 13.

Table 13 Test functions for antenna S-parameter optimization.

The dimensions of the test functions F21–F24 and F26–F28 are set to 8 as suggested by Zhang et al.72, with the exception of F25, which is a non-scalable test function. The parameters of each algorithm, as seen in Table 2, are maintained with a population size of 20 and 500 iterations.

The average (Avg) and standard deviation (Std) for each test function across 30 runs are shown in Table 14, and Fig. 6 displays the averaged convergence curves. In comparison to CS, PSO, DE and JAYA, AEFA-CSR converges faster for F21 and F23, which represent the single-antenna design problem. Obtaining the global optimal value for these functions also suggests that AEFA-CSR is effective in addressing the single-antenna design problem. Compared to the other algorithms, AEFA-CSR is highly efficient for F22, F23 and F24, which depict multi-antenna properties, and it is also able to escape sub-optimality in F25. Because a multi-antenna problem typically exhibits the features of F22, F23, F24 and F25 concurrently, we may assume that AEFA-CSR is well equipped to solve such problems. The performance of AEFA-CSR in solving F26, F27 and F28, which have the isolation characteristic of multi-antenna design, is comparable to its performance on F25. Refraction learning's ability to handle complicated landscapes with several local extrema and steep, long, narrow valleys, together with CS improving the variety of the population, are the primary factors in AEFA-CSR's success.

Table 14 Test function optimization results for F21-F28.
Figure 6

Convergence trajectory values for F21- F28.

Welded beam design problem

As a further validation of the performance of AEFA-CSR on real-world optimization problems, a well-known problem is chosen: the welded beam design (WBD) problem, which was formulated by Rao74 and used in the CEC 2020 test function suite75. The welded beam design problem has several design parameters, as outlined in55. There are four design variables to be determined: \({x}_{1},{x}_{2},{x}_{3}\) and \({x}_{4}\). The objective of WBD optimization is to minimize the overall cost under specific restrictions: the buckling critical load \(Pc\), the bending stress \(\sigma \), the shear stress \(\tau \), the beam deflection \(\delta \) and the tail of the beam. The objective function, which needs to be minimized, is given in Eq. (22):

$$f\left(\mathbf{x}\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}\left(14+{x}_{2}\right)$$
The objective function is subject to the constraint equations given below (23) to (29).

$${g}_{1}\left(\mathbf{x}\right)=\sqrt{{\left({\tau }^{{^{\prime}}}\right)}^{2}+2{\tau }^{{^{\prime}}}{\tau }^{{^{\prime}}{^{\prime}}}\frac{{x}_{2}}{2R}+{\left({\tau }^{{^{\prime}}{^{\prime}}}\right)}^{2}}-{\tau }_{\mathrm{max}}\le 0$$
$${g}_{2}(\mathbf{x})=\frac{6PL}{{x}_{3}^{2}{x}_{4}}-{\sigma }_{\mathrm{max}}\le 0$$
$${g}_{3}\left(\mathbf{x}\right)={x}_{1}-{x}_{4}\le 0$$
$${g}_{4}\left(\mathbf{x}\right)=0.10471{x}_{1}^{2}+0.04811{x}_{3}{x}_{4}\left(14+{x}_{2}\right)-5\le 0$$
$${g}_{5}\left(\mathbf{x}\right)=0.125-{x}_{1}\le 0$$
$${g}_{6}\left(\mathbf{x}\right)=\frac{4P{L}^{3}}{E{x}_{3}^{3}{x}_{4}}-{\delta }_{\mathrm{max}}\le 0$$
$${g}_{7}(\mathbf{x})=P-\frac{4.013E{x}_{3}{x}_{4}^{3}}{6{L}^{2}}\left(1-\frac{{x}_{3}}{2L}\sqrt{\frac{E}{4G}} \right)\le 0$$


$${\tau }^{{^{\prime}}}=\frac{P}{\sqrt{2}{x}_{1}{x}_{2}}$$
$${\tau }^{{^{\prime}}{^{\prime}}}=\frac{MR}{J}$$
$$M=P\left(L+\frac{{x}_{2}}{2}\right),\quad R=\sqrt{\frac{{x}_{2}^{2}}{4}+{\left(\frac{{x}_{1}+{x}_{3}}{2}\right)}^{2}},\quad J=2\left[\sqrt{2}{x}_{1}{x}_{2}\left(\frac{{x}_{2}^{2}}{12}+{\left(\frac{{x}_{1}+{x}_{3}}{2}\right)}^{2}\right)\right]$$

with the constants \(P=6000\ \mathrm{lb}\), \(L=14\ \mathrm{in}\), \(E=30\times {10}^{6}\ \mathrm{psi}\), \(G=12\times {10}^{6}\ \mathrm{psi}\), \({\tau }_{\mathrm{max}}=13{,}600\ \mathrm{psi}\), \({\sigma }_{\mathrm{max}}=30{,}000\ \mathrm{psi}\), \({\delta }_{\mathrm{max}}=0.25\ \mathrm{in}\).

The boundaries of the variables are given as;

$$0.1\le {x}_{1},{x}_{4}\le 2.0\text{ and }0.1\le {x}_{2},{x}_{3}\le 10.0$$
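As a concrete reference, the cost function and constraint checks can be sketched as below. The cost expression and the intermediate terms M, R and J follow the standard Rao/CEC formulation from the literature (they are not all displayed in this excerpt), and the evaluation point is a near-optimal design commonly reported for this problem, used here only as a hypothetical sanity check.

```python
# Hedged sketch of the welded beam cost and constraints (standard formulation).
import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(x):
    """Return (cost, [g1..g7]); a design is feasible when every g_i <= 0."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14 + x2)
    tau_p = P / (math.sqrt(2) * x1 * x2)                   # primary shear stress
    M = P * (L + x2 / 2)                                   # bending moment at weld
    R = math.sqrt(x2**2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * math.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2) ** 2)
    tau_pp = M * R / J                                     # secondary shear stress
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (x3**2 * x4)                       # bending stress
    delta = 4 * P * L**3 / (E * x3**3 * x4)                # beam deflection
    pc = (4.013 * E * x3 * x4**3 / (6 * L**2)
          * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))))   # buckling load
    g = [tau - TAU_MAX, sigma - SIGMA_MAX, x1 - x4,
         0.10471 * x1**2 + 0.04811 * x3 * x4 * (14 + x2) - 5,
         0.125 - x1, delta - DELTA_MAX, P - pc]
    return cost, g

# Hypothetical check at a near-optimal design reported in the literature:
cost, g = welded_beam([0.2057, 3.4705, 9.0366, 0.2057])
```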

The results are compared with all the algorithms used for the experiments: AEFA, CS, DE, FA, PSO, JAYA, HFBOA, SSALEO, TSO, HPSOBOA and SCSO. The population size is 30, and each algorithm is run independently 20 times with a maximum of 500 iterations.

The best optimization result of each of the twelve methods used to solve the welded beam design problem is shown in Table 15, which indicates that AEFA-CSR performs better than the other methods on this problem. It obtained the optimum value, i.e. the least cost, of 1.695258.

Table 15 Welded beam design problem.

Tension/compression spring optimization design problem

The optimization objective of the tension/compression spring design problem is to minimize the spring weight60. It is a continuous constrained problem whose variables are the wire diameter d, the average coil diameter D, and the effective coil number P. The constraints cover the minimum deflection (g1), the shear stress (g2), the surge frequency (g3), and the outside diameter limit (g4). The objective function and constraint equations are given below.

$$\mathbf{x}=\left[\begin{array}{lll}{x}_{1}& {x}_{2}& {x}_{3}\end{array}\right]=\left[\begin{array}{lll}d& D& P\end{array}\right]$$
$$f\left(\mathbf{x}\right)=\left({x}_{3}+2\right){x}_{2}{x}_{1}^{2}$$
$${g}_{1}(\mathbf{x})=1-\frac{{x}_{2}^{3}{x}_{3}}{71785{x}_{1}^{4}}\le 0$$
$${g}_{2}(\mathbf{x})=\frac{4{x}_{2}^{2}-{x}_{1}{x}_{2}}{\mathrm{12,566}\left({x}_{2}{x}_{1}^{3}-{x}_{1}^{4}\right)}+\frac{1}{5108{x}_{1}^{2}}-1\le 0$$
$${g}_{3}(\mathbf{x})=1-\frac{140.45{x}_{1}}{{x}_{2}^{2}{x}_{3}}\le 0$$
$${g}_{4}(\mathbf{x})=\frac{{x}_{1}+{x}_{2}}{1.5}-1\le 0$$

For decision variables, boundaries are given as,

$$0.05\le {x}_{1}\le 2.0,\; 0.25\le {x}_{2}\le 1.3\;\text{ and }\;2.0\le {x}_{3}\le 15.0$$
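The problem can be evaluated as sketched below. The weight expression f(x) = (x3 + 2) x2 x1² and the trailing "−1" in g2 follow the standard formulation from the literature, and the evaluation point is a near-optimal design commonly reported for this problem, used only as a hypothetical sanity check.

```python
# Hedged sketch of the tension/compression spring weight and constraints.
def spring(x):
    """Return (weight, [g1..g4]); feasible when every g_i <= 0.
    x = (d, D, P): wire diameter, mean coil diameter, number of active coils."""
    d, D, P = x
    weight = (P + 2) * D * d**2
    g = [
        1 - D**3 * P / (71785 * d**4),                       # minimum deflection
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
        + 1 / (5108 * d**2) - 1,                             # shear stress
        1 - 140.45 * d / (D**2 * P),                         # surge frequency
        (d + D) / 1.5 - 1,                                   # outside diameter
    ]
    return weight, g

# Hypothetical check at a near-optimal design reported in the literature:
weight, g = spring([0.0517, 0.3567, 11.29])
```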

The results are compared with all the algorithms used for the experiments: AEFA, CS, DE, FA, PSO, JAYA, HFBOA, SSALEO, TSO, HPSOBOA and SCSO. The population size is 30, and each algorithm is run independently 20 times with a maximum of 500 iterations.

The findings in Table 16 show that each algorithm achieves a relatively low weight, which puts the algorithms' engineering problem-solving precision to the test. Among the algorithms considered, AEFA-CSR produced the lowest weight of 0.012663.

Table 16 Tension/compression spring optimization design.

To measure the effect of the hybridization applied to AEFA, the following tests are carried out: overall effectiveness under changing dimension and population, convergence analysis, the Wilcoxon rank-sum and Friedman statistical tests, sensitivity, exploration and exploitation analyses, and computational complexity. Additionally, its performance is validated on a set of real engineering design problems. All analyses show that CS significantly increases the population diversity while RL updates the lead agent. The algorithm therefore gets closer to the global optimum each time, as a result of a successfully built balance between exploration and exploitation.

Conclusion and future work

This article proposes a solid optimizer, AEFA-CSR, that solves engineering optimization problems with satisfactory performance. Comprehensive experimental analyses are conducted, including commonly used, recently developed and hybrid algorithms, on a benchmark test suite of 20 problems and three engineering design problems. It is well observed that the proposed AEFA-CSR is superior to the compared algorithms in terms of overall effectiveness. For increasing population size, the algorithm's overall effectiveness is measured between 61.53 and 76.93%; when the dimension grows, it is measured between 76.93 and 90.0%. In the Wilcoxon Rank Sum statistical test for different control parameter combinations, AEFA-CSR attained better performance than the other algorithms. However, in terms of computational time, since the algorithm is a combination of three separate methods, the results are not as desirable as expected. Although the running time of AEFA-CSR is slightly greater than that of the others, it is still acceptable, and the algorithm produces more accurate and efficient results than the others for all the functions analyzed. As future work, the computational time can be analyzed for further improvement; considering the important contributions of AEFA-CSR, a balance might be built between high accuracy and computational time. Apart from this, it can be said with confidence that AEFA-CSR is a quite promising optimization algorithm and is quite applicable to solving real-world engineering problems.