Chaotic opposition learning with mirror reflection and worst individual disturbance grey wolf optimizer for continuous global numerical optimization

The grey wolf optimizer (GWO) is an effective meta-heuristic technique that has demonstrated its proficiency on a wide range of problems. However, because it relies on the alpha wolf to guide the position updates of the search agents, the risk of being trapped in a local optimum is notable. Furthermore, during stagnation, the convergence of the other search wolves towards the alpha wolf results in a lack of diversity within the population. Hence, this research introduces an enhanced version of the GWO algorithm, called CMWGWO, designed to tackle numerical optimization challenges. CMWGWO incorporates three strategies: Chaotic Opposition Learning (COL), the Mirror Reflection Strategy (MRS), and Worst Individual Disturbance (WID). MRS empowers certain wolves to extend their exploration range, enhancing the global search capability. COL intensifies diversification, reducing solution stagnation and improving search precision and overall accuracy. WID fosters more effective information exchange between the least and most successful wolves, facilitating escape from local optima and significantly enhancing exploration potential. To validate the superiority of CMWGWO, a comprehensive evaluation is conducted on 23 benchmark functions spanning dimensions from 30 to 500, ten CEC 2019 functions, and three engineering problems. The empirical findings demonstrate that CMWGWO surpasses the original GWO in convergence accuracy and robust optimization capability.

Optimization is the process of determining the most suitable values for the parameters of a problem in order to obtain the ideal solution 1. Optimization algorithms have gained recognition as effective instruments for solving various types of single-objective, multi-objective, and many-objective problems 2. The effectiveness of these algorithms has led to the creation of a large number of swarm intelligence algorithms and their extensive use in applications across numerous fields 3. Swarm intelligence algorithms, a subset of Metaheuristic Algorithms (MAs), are developed by studying the interactions of self-organized living beings in nature 4. Examples of recent MAs include the Gannet Optimization Algorithm (GOA) 5, African Vultures Optimization Algorithm (AVOA) 6, Material Generation Algorithm (MGA) 7, Beluga Whale Optimization (BWO) 8, Archimedes Optimization Algorithm (AOA) 9, Artificial Gorilla Troops Optimizer (GTO) 10, Dandelion Optimizer (DO) 11, Golden Eagle Optimizer (GEO) 12, Chaos Game Optimization (CGO) 13, Fire Hawk Optimizer (FHO) 14, and Honey Badger Algorithm (HBA) 15. It is also worthwhile to explore certain modified algorithms that exhibit exceptional performance, such as Modified Social Group Optimization (MSGO) 16, Chaotic Vortex Search Algorithm (VSA) 17, Modified Marine Predators Algorithm (MMPA) 18, and Hybrid Binary Dwarf Mongoose Optimization Algorithm (BDMSAO) 19. They have found practical applications in various domains, including parameter identification 20, feature selection 21,22, antenna optimization 23, image segmentation 24,25, demand prediction 26, reliability-based design 27,28, and constrained optimization problems 21,22. These algorithms, however, share several challenges, such as a propensity to get trapped in local optimal solutions, a sluggish convergence rate, and limited precision in identifying the optimal solution 29.
The Grey Wolf Optimizer (GWO) is a swarm intelligence metaheuristic algorithm developed by Mirjalili et al. that emulates the leadership structure and hunting behaviour of grey wolves in the wild 30. The GWO algorithm has been successfully used to address different optimization problems, including numerical optimization 31, feature subset selection 32, engineering design 33, image analysis 34, and other real-world applications 35. Researchers have attempted to improve the original GWO by creating various variants, which can be categorized into two groups. The first group focuses on implementing distinct optimization strategies to overcome GWO's limitations. The second group includes variants that combine GWO with other algorithms to enhance its optimization capabilities by leveraging the advantages of the combined algorithms.
In the first group, Nadimi-Shahraki et al. 36 introduced the Improved Grey Wolf Optimizer (I-GWO), which incorporates a novel movement strategy called the dimension learning-based hunting (DLH) search strategy, modelled after the solitary hunting tactics used by wolves in the wild. DLH establishes wolf neighbourhoods in a way that facilitates the exchange of neighbouring information among them. The incorporation of dimension learning in the DLH search strategy improves the balance between local and global search and preserves diversity during the optimization process. The I-GWO algorithm's efficacy was assessed using the CEC 2018 test set and four real-world problems, and it was contrasted with six other algorithms across many tests. Friedman and Mean Absolute Error (MAE) statistical tests were also used to assess the results. In comparison to the algorithms employed in the studies, I-GWO was highly efficient and frequently outstanding. Mirjalili et al. proposed a Multi-Objective Grey Wolf Optimizer (MOGWO) to address the optimization of multi-objective problems 37. For that purpose, a fixed-size external archive was incorporated into GWO, serving as a repository to store and retrieve the best solutions. The incorporated archive influences the definition of social ranking and the emulation of the grey wolves' hunting patterns in multi-objective search spaces. To assess its performance, MOGWO was evaluated on ten multi-objective standard problems and benchmarked against two other popular MAs; the results indicate that MOGWO surpassed the other MAs under consideration. Bansal and Singh suggested an improved grey wolf optimizer (IGWO) to enhance the exploration and exploitation capabilities of the traditional GWO 38. Opposition-based learning (OBL) and an explorative equation were used to make this improvement: the explorative equation improved GWO's capacity for exploration, while OBL sped up convergence and prevented GWO from stagnating. Twenty-three popular standard functions were used to evaluate the suggested IGWO, and the results were contrasted against some recent GWO versions along with additional well-known MAs. The results confirmed that IGWO has better exploration capabilities while retaining an excellent speed of convergence. Meidani et al. presented another variant called Adaptive GWO (AGWO) that tackles the non-automated variable adjustment and the absence of precise stopping conditions that frequently result in wasteful consumption of computing resources 39. The optimization process incorporates an adaptive calibration of the intensification/diversification variables depending on the fitness records of the potential solutions. AGWO can reach a satisfactory optimal solution within a brief period by regulating the stopping criteria depending on the importance of the fitness increase during optimization. Through a comprehensive comparative study, they demonstrated that AGWO is significantly more efficient than the original GWO and a number of existing GWO variations, achieving this by lowering the number of iterations necessary to arrive at solutions similar to those of GWO. Lei et al. introduced Levy flight into GWO (LFGWO) to tackle the challenges of premature convergence and inadequate results 40. The overall performance of LFGWO was assessed in experiments with eight common algorithms and 23 common benchmark functions from CEC 2005; the findings showed that LFGWO performs better than the competing algorithms. Gupta and Deep introduced a revised GWO employing a random walk (RWGWO) in an effort to enhance the grey wolf's search capabilities 41. The algorithm's performance was demonstrated by comparing it with GWO and other advanced algorithms on the IEEE CEC 2014 benchmark problems. To gauge the effect of enhancing the leaders in the proposed algorithm, the non-parametric Wilcoxon test and a Performance Index Analysis were used to analyze the outcomes. The findings show that the suggested algorithm offers grey wolves greater leadership when searching for prey. Nasrabadi et al. introduced parallelism and opposition-based learning methods in an attempt to enhance the basic GWO's outcomes 42. The setup and execution of the revised method on renowned benchmark functions yielded results that showed improvements in convergence and accuracy.
In the second group as well, noteworthy outcomes were achieved. By integrating the Elephant Herding Optimization (EHO) algorithm with the Grey Wolf Optimizer (GWO), Hoseini et al. significantly enhanced the exploitation and exploration performance, as well as the convergence speed, of GWO 43. To confirm the effectiveness of the proposed Grey Wolf Optimizer Elephant Herding Optimization (GWOEHO), a set of twenty-three benchmark functions and six engineering problems were employed for testing. The performance of GWOEHO was compared to that of GWO and EHO, along with several other popular MAs. Statistical analysis using Wilcoxon's rank-sum test demonstrates that GWOEHO consistently performed better than the other algorithms in the majority of function minimization tasks. By merging Particle Swarm Optimization and the Grey Wolf Optimizer, Singh and Singh formed a Hybrid Particle Swarm Optimization and Grey Wolf Optimizer (HPSOGWO) 44. The major goal was to combine the exploration and exploitation capacities of the two algorithms to boost their strengths, evaluated on a few unimodal, multimodal, and fixed-dimension multimodal testing functions.
(2) By incorporating Chaotic Opposition Learning into GWO, the algorithm mitigates stagnation and enhances diversification, leading to improved solution accuracy. (3) The integration of the Mirror Reflection Strategy into the GWO updating process amplifies population exploration and expands the search space, enabling the algorithm to explore a wider range of potential solutions. (4) The proposed Worst Individual Disturbance strategy reduces the probability of the algorithm getting stuck in local optima; by exchanging information between the best and worst wolves, it enhances population diversity and improves the algorithm's ability to trap prey. (5) The performance of the proposed algorithm is thoroughly evaluated by comparing it to nine other algorithms across twenty-three test functions, providing insights into its effectiveness and efficiency. (6) In addition to numerical optimization problems, the proposed algorithm is evaluated on three engineering design problems, demonstrating its applicability and effectiveness in solving practical problems.
The subsequent sections of this paper are organized as follows: "Grey wolf optimizer (GWO)" section provides an introduction to the background of GWO. In "Proposed CMWGWO" section, the proposed algorithm's mechanism is explained and the proposed CMWGWO is presented. The complexity of the new CMWGWO is discussed in "Computational complexity of CMWGWO" section. The experimental results are discussed and displayed in "Experiments and result analysis" section. Lastly, "Conclusion" section concludes the paper and outlines future research directions.

Grey wolf optimizer (GWO)
GWO, an optimization algorithm inspired by the hierarchical structure and hunting dynamics of grey wolves 30, divides the population into four levels denoted α, β, δ, and ω. The uppermost level comprises the α wolf, followed by the β wolf in the second tier and the δ wolf in the third tier. The remaining wolves, situated in the lowermost layer, are known as ω wolves or search wolves, as seen in Fig. 1. The α, β, and δ wolves serve as leaders, each with a count of one. In GWO, the objective is for the ω wolves, representing the search wolves, to update their positions and attain the optimal solution. Meanwhile, the α, β, and δ wolves represent the best, second-best, and third-best solutions, respectively. The hunting behavior of grey wolves is primarily directed by the leading wolves (α, β, and δ), which guide the iterative position updates of the search wolves (ω) based on the leaders' locations. The movement of a grey wolf in pursuit of its prey is described by

D = |C * X_p(t) − X(t)|   (1)
X(t + 1) = X_p(t) − A * D   (2)

where t represents the current iteration count, * denotes element-wise multiplication, X_p represents the position vector of the prey, and X represents the position vector of a grey wolf. The random vectors A and C are computed as

A = 2a * r_1 − a   (3)
C = 2 * r_2   (4)

where r_1 and r_2 are random vectors in [0, 1] and the components of a decrease linearly from 2 to 0 over the course of the iterations; the use of these random vectors and linearly decreasing values to steer the position updates is discussed below. Figure 2 illustrates the potential areas that an ω wolf can occupy around the prey by adjusting A and C. The distances between the lead wolves and a search wolf are represented by D_α, D_β, and D_δ, and the locations of the lead wolves by X_α, X_β, and X_δ:

D_α = |C_1 * X_α − X|,  D_β = |C_2 * X_β − X|,  D_δ = |C_3 * X_δ − X|   (5)
X_1 = X_α − A_1 * D_α,  X_2 = X_β − A_2 * D_β,  X_3 = X_δ − A_3 * D_δ

Here X_1, X_2, and X_3 represent the step size and direction of the ω wolf towards each lead wolf, and C_1, C_2, and C_3 are random vectors. Equation (6) determines the wolf's ultimate location as the average of the three guided positions:

X(t + 1) = (X_1 + X_2 + X_3) / 3   (6)

Algorithm 1 shows the iterative process of GWO.
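To make the update concrete, the standard GWO step described above can be sketched as follows; the function name and NumPy formulation are illustrative rather than taken from the paper, and minimization is assumed:

```python
import numpy as np

def gwo_step(X, fitness, a):
    """One sketched GWO iteration: every wolf moves toward the alpha,
    beta, and delta leaders. `fitness` maps a position to a scalar
    (lower is better); `a` is the linearly decaying control parameter."""
    n, d = X.shape
    order = np.argsort([fitness(x) for x in X])
    alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
    X_new = np.empty_like(X)
    for i in range(n):
        steps = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(d), np.random.rand(d)
            A = 2 * a * r1 - a               # A shrinks as a decays from 2 to 0
            C = 2 * r2                       # random emphasis vector
            D = np.abs(C * leader - X[i])    # distance to this leader
            steps.append(leader - A * D)     # X1, X2, X3
        X_new[i] = np.mean(steps, axis=0)    # final position: average of the three
    return X_new
```

With a = 0 every wolf lands exactly on the mean of the three leaders, which illustrates how the decay of a shifts the algorithm from exploration to exploitation.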

Chaotic opposition learning (COL)
Opposition-based learning (OBL) is a robust optimizer-improvement methodology in the domain of computational intelligence, initially introduced by Tizhoosh 51. Generally, MAs begin with random initial solutions and iteratively strive to move closer to the global best solution; the search terminates when specific predetermined requirements are met. In the absence of pertinent advance information about the solution, convergence may require a considerable amount of time. To address this, OBL incorporates a novel approach, depicted in Fig. 3, which assesses the fitness values of the current solution and the matching opposite solution at the same time. The superior individual is then retained for the next iteration, thereby promoting population diversity effectively. Notably, the opposite candidate solution has nearly a 50% higher chance of being closer to the global optimum compared to the current solution 52. Consequently, OBL has gained widespread adoption, as it significantly enhances the optimization performance of various MAs 53,54. The mathematical representation of OBL is

X̄ = lb + ub − X   (7)

where X̄ denotes the opposite solution, X the current solution, and lb and ub the lower and upper limits of the search area. As evidenced by Eq. (7), OBL has the limitation of producing the opposite solution at a fixed position 55. This approach proves effective during the initial optimization phases. However, as the search advances, the opposite solution may end up close to a local optimum; other individuals in the population might then rapidly gravitate towards this area, leading to premature convergence and reduced solution accuracy. In response to this issue, this work adopts the random opposition-based learning (ROBL) strategy, which incorporates a random perturbation into Eq. (7):

X̄ = lb + ub − r * X

where r is a random vector in [0, 1].
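The two opposition operators discussed above can be sketched minimally as follows; the function names are illustrative, and in practice the fitter of X and its opposite is retained for the next iteration:

```python
import numpy as np

def opposition(X, lb, ub):
    """Classic OBL: reflect a solution through the centre of [lb, ub]."""
    return lb + ub - X

def random_opposition(X, lb, ub):
    """ROBL variant described above: a random factor perturbs the
    reflection so opposites no longer land at one fixed position."""
    return lb + ub - np.random.rand(*X.shape) * X
```

After generating an opposite point it is typically clipped back into [lb, ub] before its fitness is evaluated.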

Mirror reflection strategy (MRS)
The mirror reflection principle describes the phenomena that occur when light comes into contact with the boundary between two different media 56. This principle comes into play when a portion of the incident light returns to the original medium. Two basic rules govern mirror reflection. Firstly, the angle at which the light is reflected (angle of reflection) is equal to the angle at which it strikes the surface (angle of incidence). Secondly, the reflected ray and the incident ray lie on opposite sides of an imaginary line, called the "normal", that is perpendicular to the surface at the point of reflection. Drawing inspiration from these well-established principles, the proposed CMWGWO includes a Mirror Reflection Learning (MRL) strategy. In the MRL strategy, we represent the incident-angle direction on the x-axis as the location of a potential solution, while the reflected-angle direction on the x-axis represents the mirrored version of that solution. The MRL method evaluates both the potential solutions and their mirror reflections and selects the better one, thereby expanding the search area. Figure 5 gives a visual demonstration of the concept of mirror-reflection learning. The potential solutions are chosen within the [lb, ub] interval. The midpoint between lb and ub is denoted by O = (X_0, 0), X = (a, 0) denotes an arbitrary solution inside the same interval, and (b, 0) is the location of X_m, the mirror reflection of X. Equations (11) to (14), based on the first law of mirror reflection stated above (the angle of reflection equals the angle of incidence), define the relationship between the incident and reflected angles and provide a method for determining the mirror-reflected solution. Equations (11) and (12) establish the relationship between the incident angle (α) and the reflection angle (β) using the tangent function; by setting α equal to β, Eq. (13) can be derived from the first rule of reflection.
Here, µ and Q are the elasticity coefficient and neighborhood radius, both lying in the interval [0, 1], and r_1 and r_2 are random values between 0 and 1. The updated equation for the inverse solution is given in Eq. (17). In this work, we have uniquely incorporated the Levy mechanism into Eq. (17). This incorporation is motivated by its potential to contribute significantly to the exploration-exploitation balance, a crucial aspect of improving the performance of CMWGWO. Levy flight, inspired by the Levy distribution, possesses unique characteristics that facilitate long-range exploration of the search space 57. By leveraging this feature, MRS can effectively escape local optima, promoting the exploration of promising regions that may lead to superior solutions. Moreover, the Levy flight mechanism enhances the algorithm's capability to diversify the search process, which helps maintain population diversity and mitigate premature convergence.
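The exact update of Eqs. (11) to (17) is not reproduced here; the sketch below only illustrates the core idea, reflecting a solution through the midpoint of [lb, ub] and adding a Levy-flight perturbation generated with Mantegna's algorithm. The function names and the `scale` constant are assumptions, not the paper's formulation:

```python
import math
import numpy as np

def levy(d, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(d) * sigma
    v = np.random.randn(d)
    return u / np.abs(v) ** (1 / beta)

def mirror_reflect(X, lb, ub, scale=0.01):
    """Sketched MRS step: mirror X about the midpoint of [lb, ub]
    (first law of reflection) and perturb it with a Levy flight.
    `scale` is an assumed tuning constant."""
    X_m = (lb + ub) - X                       # mirror image about the midpoint
    X_m = X_m + scale * levy(X.size) * X_m    # occasional long jumps for exploration
    return np.clip(X_m, lb, ub)               # respect the boundary constraints
```

The heavy-tailed Levy steps occasionally throw the mirrored point far from its deterministic location, which is what lets the strategy escape local optima.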

Worst individual disturbance (WID)
The majority of improved GWO variants focus on increasing the chances of population individuals converging towards the best wolf. For example, the Grey Wolf Optimizer based on a new Weighted Distance (GWO-WD) introduced by Yan et al. eliminates and repositions several of the worst individuals 58. However, it is important to reflect on the natural laws that grey wolves must adhere to while hunting. While surrounding their prey, grey wolves face both the chance of successfully encircling it and the risk of the prey evading capture. This phenomenon is accurately modelled in the HHO algorithm, which mimics the hunting behaviour of Harris hawks when they catch rabbits 59: in HHO, there is a probability that the rabbit being chased may escape. Analogously, while the global optimal individual guides the entire population towards the best solution, there is a risk of getting stuck in a local optimum, leading to stagnation and failure to escape the local optimal region. Based on this idea, the proposed CMWGWO incorporates a worst individual disturbance strategy to escape local optima in case of unsuccessful encircling, leading to a greater and more dynamic exploration of the search area, as illustrated in Fig. 6, and thus increasing the chances of finding better solutions. Equation (18) represents the encirclement phase, taking the global worst wolf into account. In the equation, X_w^t represents the global worst wolf, and rand is a randomly generated number from the interval [0, 1]; the weights rand and (1 − rand) are assigned to X_α^t and X_w^t. Due to the uncertainty introduced by rand and its random variation between 0 and 1, the search process is influenced not only by the global optimal individual but also by X_w^t. A higher value of rand implies a more pronounced impact of the optimal individual, bringing the wolves closer to the target and effectively simulating a successful prey-encirclement scenario. In contrast, if rand is small, the impact of the worst individual becomes prominent, replicating the situation where the wolves fail to encircle their prey effectively.
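A hedged sketch of the rand-weighted guidance just described follows; the precise form of Eq. (18) in the paper may combine these terms with the distance operators of the base GWO update, so this is only an illustration of the weighting idea:

```python
import numpy as np

def wid_update(X_alpha, X_worst):
    """Sketched WID encirclement: a random weight splits guidance between
    the best (alpha) and worst wolves. A large weight on alpha models a
    successful encirclement; a small one models the prey escaping."""
    r = np.random.rand()
    return r * X_alpha + (1.0 - r) * X_worst
```

Because the result always lies on the segment between the best and worst positions, the worst wolf's information can pull the population away from a local optimum that the alpha wolf is stuck in.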
CMWGWO is an improved variant of the GWO algorithm, incorporating three novel techniques (WID, COL, and MRS) to enhance its performance. The algorithm starts by initializing a population of grey wolves as candidate solutions to an optimization problem. Each wolf's fitness is evaluated, and the best-performing wolves (α, β, δ) and the worst wolf are identified. The main loop iteratively updates wolf positions using the calculated parameters A, a, and C. The WID technique is applied to some wolves when a random number is less than p_1 and |A| < 1. Since |A| < 1 implies the exploitation phase, if the best wolf gets trapped in a suboptimal region or the prey evades capture during this phase, the population can weaken the leadership of the best wolf through the information exchange between the best and worst wolves, avoiding convergence towards a local optimum while still tracking the prey effectively. This is followed by COL, applied with probability p_3 to improve diversity, and MRS, applied with probability p_2 to amplify population exploration by expanding the search space; all these updates are subject to boundary constraints. The process continues until a termination condition is met. These newly introduced techniques aim to improve the exploration and exploitation abilities of the original GWO, potentially leading to improved optimization results. The step-by-step procedure of CMWGWO is expressed in Algorithm 2, and a graphical illustration is given in Fig. 7.
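The control flow just described can be illustrated schematically; the probabilities p_1, p_2, p_3, their default values, and the function name below are placeholders, not the paper's settings:

```python
import numpy as np

def cmwgwo_schedule(t, max_it, p1=0.5, p2=0.5, p3=0.5):
    """Illustrative per-step dispatch for CMWGWO's main loop.
    Returns the list of strategies that would be applied at iteration t."""
    a = 2.0 * (1.0 - t / max_it)           # a decays linearly from 2 to 0
    A = 2.0 * a * np.random.rand() - a     # scalar sample of the A vector
    applied = []
    if np.random.rand() < p1 and abs(A) < 1:   # |A| < 1: exploitation phase
        applied.append("WID")                  # worst-individual disturbance
    if np.random.rand() < p3:
        applied.append("COL")                  # chaotic opposition learning
    if np.random.rand() < p2:
        applied.append("MRS")                  # mirror reflection strategy
    return applied
```

In the full algorithm, each applied strategy produces a candidate position that is boundary-clipped and kept only if it improves fitness.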

Computational complexity of CMWGWO
To analyze the computational complexity of the CMWGWO algorithm, we assess the complexity of each individual step and the number of iterations performed in the while loop. The breakdown is as follows:

1. Random initialization: Initializing the grey wolf population X_i (i = 1, 2, 3, ..., n) involves generating random values for each wolf's position in the search space. The complexity of this step is O(n), where n is the population size and big O denotes asymptotic complexity 60,61.
2. Fitness evaluation: Evaluating the fitness of each grey wolf requires evaluating the objective function for each individual. The cost depends on the objective function and how it scales with the problem size; we denote the complexity of a single objective-function evaluation as O_fitness.
3. Finding α, β, δ, and the worst wolf: This step identifies the best, second-best, third-best, and worst grey wolves based on their fitness values, with complexity O(n).
4. The main loop: The main optimization loop iterates until the termination condition is met (t < Maxit). The number of iterations is bounded by Maxit, so the while loop contributes a factor of O(Maxit).
5. Calculations within the loop: Within each iteration of the while loop, the three techniques added to the traditional GWO are applied. Each can be treated as O(1) per wolf, since they involve only basic arithmetic operations and comparisons.
6. Boundary checks: Once each wolf's new position has been determined, boundary checks ensure that it remains inside the search area. Their cost depends on the dimensionality of the search space and the efficiency of the boundary-checking method, and is expressed as O(d), where d is the dimensionality of the search area.

Combining these steps, the computational complexity of the CMWGWO algorithm can be approximated as in Eq. (19), roughly O(n) + O(Maxit · n · (O_fitness + d)); due to the introduction of the new techniques, the complexity of CMWGWO is higher than that of the original GWO.

Experiments and result analysis
In this section, we carry out tests to verify CMWGWO's efficacy while highlighting the improvements it offers.
To confirm the effectiveness of each mechanism, the improvement techniques are first analyzed individually. To support the validity of CMWGWO's superiority, further studies evaluate the optimization performance of CMWGWO against various improved versions of GWO. The enhanced GWO is also pitted against the original algorithms, further demonstrating its optimization value. Benchmarking the performance of several algorithms on a variety of complex tasks is an important step 62. Therefore, we put CMWGWO through 23 benchmark functions, 10 CEC 2019 functions, and 3 real-world engineering problems to show its supremacy. The 23 benchmark functions are described in Table 1, together with their mathematical formulations, dimensions, and theoretical optimal values. Researchers have carefully chosen these test functions from a list of frequently used CEC functions 63. Table 1 displays a set of 7 unimodal functions (F1-F7), each containing a single minimum. These functions are ideal for evaluating the algorithm's exploitation performance, as they test its ability to converge to the global minimum. Additionally, Table 1 includes 6 multimodal functions (F8-F13), which differ from F1-F7 by having numerous local optima; these functions assess the algorithm's exploration capability 64, as they require it to search among multiple optima. Their expressions are provided in Table 1. Moreover, F14-F23 are multimodal functions as well, but with fixed dimensionality. In addition to the 23 functions, the CEC 2019 functions (C1-C10) are employed. The intricacy of this test suite has been increased by shifting and rotating the functions relative to the usual benchmarks; Table 2 gives the details of the test suite. Throughout this work, we carry out 500 iterations with a population size of 50. To preserve the validity of the studies, 30 repeated runs are carried out to lessen the effects of randomness on the population, and the average value (AVG), standard deviation (STD), and best value (Best) report the outcome of each algorithm's optimization.

Statistical and non-parametric analysis of each improvement technique contribution
The CMWGWO algorithm uses three strategies, WID, COL, and MRS, to improve optimization performance. Three GWO variations were evaluated on the 23 functions to show the impact of each technique on GWO. Each variant employs a single strategy: WIDGWO applies only the WID strategy, COLGWO only the COL strategy, and MRSGWO only the MRS approach; CMWGWO combines all three. By contrasting the AVG, STD, and Best values attained by each method across the functions, as shown in Table 3, the impact of these techniques on GWO's search capability can be investigated. The average and best values obtained by COLGWO, WIDGWO, and MRSGWO are typically better than those of the conventional GWO, demonstrating that these three optimization techniques significantly enhance the algorithm's optimization accuracy in both exploration and exploitation. Additionally, CMWGWO surpasses COLGWO, WIDGWO, and MRSGWO in the majority of functions when considering average values, best values, and standard deviations, outperforming each single-strategy variant. This shows that using the three techniques together enhances GWO's optimization speed and guarantees stable optimization capability. The non-parametric Wilcoxon signed-rank test was applied across the 23 functions to compare the differences between the four GWO variants and CMWGWO in Table 3 at a significance threshold of 5%, recorded as a P value in Table 3.
Table 3 also shows the contrast between CMWGWO and the various GWOs. The symbols "+", "−", and "=" denote that CMWGWO is superior to, inferior to, and statistically equivalent to the comparison algorithm, respectively. According to the results, CMWGWO performs better than the original GWO in 17 out of 23 functions and is inferior to GWO in just 2 of them. Using the three strategies, CMWGWO exceeds COLGWO, WIDGWO, and MRSGWO in 16, 17, and 14 functions, respectively. This indicates that the three strategies employed in CMWGWO complement each other, compensating for the shortcomings of GWO and significantly enhancing its performance across test functions that probe both the diversification and intensification capacities of CMWGWO. Notably, when comparing CMWGWO to the other GWO variants, including the traditional GWO in Table 3, the P value is less than 0.05, implying significant improvements in performance. The exception is MRSGWO, where CMWGWO shows no significant difference because it achieves similar results to MRSGWO on some functions; this also shows that MRS, as a component of CMWGWO, contributes to its exceptional performance. The Friedman average (FRD-AVG) of CMWGWO is 1.80, ranking first among the five algorithms, and the FRD-AVGs of the single-strategy GWOs are also smaller than that of the original GWO. This highlights that CMWGWO's overall performance surpasses the other GWO variants and the traditional GWO in the comprehensive Friedman ranking.
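The FRD-AVG ranking used above can be reproduced with a simple average-rank computation; the data and function name below are illustrative, not the paper's results:

```python
import numpy as np

def friedman_avg_rank(results):
    """Compute each algorithm's Friedman average rank (FRD-AVG): rank the
    algorithms on every benchmark function (1 = best, i.e. lowest error),
    then average the ranks over all functions. `results` has shape
    (n_functions, n_algorithms). Ties are ignored here for simplicity; a
    full implementation would assign tied entries their average rank."""
    n_funcs, n_algs = results.shape
    ranks = np.empty_like(results, dtype=float)
    for f in range(n_funcs):
        order = results[f].argsort()           # indices from best to worst
        ranks[f, order] = np.arange(1, n_algs + 1)
    return ranks.mean(axis=0)
```

The algorithm with the smallest average rank (here, the first column would correspond to the best performer) is ranked first overall, which is how a FRD-AVG of 1.80 places CMWGWO ahead of the other four variants.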
Figure 8 presents the convergence paths of CMWGWO and the variants based on the individual techniques, with the goal of evaluating the distinct convergence performance of CMWGWO on the optimization functions. The study compares CMWGWO with the single-technique variants and the traditional GWO. The outcomes clearly demonstrate that CMWGWO outperforms the traditional GWO and the other variants in terms of convergence precision, particularly on all unimodal functions except F5. Remarkably, CMWGWO showcases exceptional convergence rates and successfully reaches the best optimal solution for F10, F11, F14-F19, F21, and F23, showcasing its proficiency in handling multimodal functions. Comparatively, CMWGWO exhibits better convergence efficiency than GWO and its other counterparts. These findings provide compelling evidence that the population diversification adjustments and the introduction of enhanced exploration techniques have significantly contributed to the success of CMWGWO. The experimental data strongly support the conclusion that CMWGWO has greatly improved optimization capability and convergence performance.

Dimension impact statistical analysis and non-parametric test of 23 test functions
In this section, CMWGWO is compared with several GWO variants and original algorithms, namely GWO 30, Adaptive GWO (AdGWO) 39, GWO based on Aquila exploration (AGWO) 65, Augmented GWO & Cuckoo Search (AGWOCS) 66, Random Walk GWO (RWGWO) 41, Hybrid-Flash Butterfly Optimization Algorithm (HFBOA) 67, Chimp Optimization Algorithm (CHOA) 68, Particle Swarm Optimization (PSO) 69, and the Sine Cosine Algorithm (SCA) 70, on the 23 functions while varying the dimension of each function. The parameters of these algorithms can be found in Table 4. The population size, number of iterations, and number of runs are set to 50, 500, and 30, respectively.
For functions F1-F7, in Table 5 (functions F1-F6), Table 6 (functions F1-F3, F5-F7), Table 7 (functions F1-F3, F5-F7) and Table 8 (functions F1-F7), CMWGWO obtained the best solution as the complexity of the problem increased with the dimension. This suggests that CMWGWO has the ability to converge to the global optimal value. This observation demonstrates that CMWGWO has a high exploitative ability when solving unimodal functions compared to the original GWO. In addition, GWO variants such as AdGWO and AGWOCS produced competitive results. Moving on to F8-F13 in Tables 5, 6, 7 and 8, CMWGWO consistently outperforms other competitors and GWO variants in functions F8 and F11-F13. Furthermore, for the fixed-dimension functions in Table 5, CMWGWO maintains superior performance in F14-F19, F21, and F23. The superior performance of CMWGWO can be attributed to the improvement strategies: COL maintains high diversity during optimization, MRS improves population exploration capacity, and WID enhances the population's ability to approach the optimal solution while reducing the dominance of the best wolf in order to escape local optima in multi-peaked problems (F8-F23). The P value results from the Wilcoxon signed-rank test on the 23-function benchmark suite at Dim = 30, 100, 200, and 500, shown in Tables 5, 6, 7 and 8, confirm that CMWGWO is significantly superior to the other competitors. The statistical analysis further verifies that CMWGWO effectively enhances optimization performance in the search process.

Statistical and non-parametric analysis of CEC 2019 functions
To evaluate the proposed optimizer's performance on intricate objective functions, the AVG, STD, and Best were used as assessment metrics to gauge the precision as well as the reliability of CMWGWO and the other optimizers. It is evident from the statistics in Table 9 that CMWGWO obtains the best solution for five out of the ten functions. It is crucial to highlight that the effectiveness of CMWGWO is a substantial advancement beyond the traditional GWO as well as the other methods in C1, C4, C6, C7, C8, and C9 in terms of AVG. This significant enhancement is attributed to the improvement strategies, which strengthen CMWGWO's local and global search while preserving variety. As a consequence, CMWGWO's overall performance has significantly improved. Based on the Wilcoxon signed-rank test P values in Table 9, CMWGWO shows statistically significant improvement compared to AdGWO, AGWO, CHOA, HFBOA, GWO, AGWOCS, RWGWO, PSO, and SCA (P < 0.05). The Friedman test ranks CMWGWO as the best-performing algorithm among the ten, indicating its overall superiority in terms of these metrics. This shows that, with MRS, COL, and WID, CMWGWO is able to maintain stability in overcoming local optima and keeping population diversity consistent throughout the iteration process in challenging problems.

Convergence and box plot analysis on 23 functions and CEC 2019 functions
Figures 9, 10, 11 and 12 compare the CMWGWO method with nine different cutting-edge algorithms using convergence curves and box plots on the 23 functions (30 dimensions) and CEC 2019, respectively. Figures 9 and 10 show how each algorithm's average accuracy changes as the number of iterations rises. The box plots show the distribution of the final optimal solutions attained by each method; the minimum, maximum, lower quartile (Q1), median, upper quartile (Q3), and any outliers can all be viewed clearly in the box plots in Figs. 11 and 12. Each box plot displays the best set of solutions from each of the 30 independent runs, while the orange line inside the box denotes the median. Notably, an outlier is a data point that deviates significantly from the norm and is identified by a red "+" sign. The goal of this comparison is to illustrate and assess the variations in optimization performance between CMWGWO and the other cutting-edge algorithms. The convergence curves provide information on how the best solution values change as the search process of each approach proceeds; a low best-solution value indicates that the approach is more capable of optimization.
The box plots, on the other hand, give details about how the best results from each approach are distributed. A technique is more stable, and hence more resistant to changes in the search space, if its boxes in the box plots are smaller. To put it another way, the box plots illustrate how consistently each approach finds the ideal answer, while the convergence curves show how effectively each method achieves that goal.
The CMWGWO technique displays quick convergence in its early phases, as seen in Fig. 9. It is interesting to note that the CMWGWO approach continues to explore high-quality regions while other algorithms tend to have a flattened curve, meaning they can easily be stuck in local optima. Furthermore, according to the findings, CMWGWO demonstrates quicker convergence for all uni-modal functions (F1-F7) other than F7. The suggested technique also performs better than current approaches for multimodal functions, with better results for functions F8, F11, F12, and F13. Additionally, the suggested method exhibits admirable and exceptional convergence for functions F14-F19, F21, and F23, categorized as fixed-dimension functions. Notably, CMWGWO outperforms AGWO, AdGWO, and AGWOCS in establishing a balance between convergence and divergence. The comparison in Fig. 9 further demonstrates that CMWGWO maintains higher convergence accuracy than the other techniques. These findings confirm that, in comparison to the traditional GWO approach, the modifications made in this work not only improve the trade-off between exploration and exploitation but also demonstrate the method's capacity to avoid local optima and get close to the overall best outcome. Three crucial strategies, WID, COL, and MRS, were incorporated into the CMWGWO technique to increase its effectiveness in this area. While the COL technique increases population variation throughout the search process, the MRS strategy enables the wolf agents to keep investigating the optimum solution, and the WID tactic effectively encircles the prey; all of these add to the efficiency of CMWGWO. Furthermore, as shown in Fig. 10, the combination of these tactics enables the CMWGWO approach to find probable solutions inside problem domains characterized by the shifted, rotated, and hybrid CEC 2019 functions, which finally results in improved diversity and more accurate solutions in functions C1, C4, C6, C7, C8, and C9. The boxplot analysis of each function also makes it quite evident that CMWGWO has strong stability, as seen in Figs. 11 and 12. This suggests that CMWGWO's exploration and exploitation capabilities are well balanced.

Exploration and exploitation analysis
Exploration and exploitation are two essential phases in optimization algorithms. The algorithm prioritizes exploration in the first stage, with the goal of identifying areas of the feasible domain that have promising prospects for improved candidate solutions. The algorithm then progressively moves from exploration to exploitation, putting more effort into looking for better candidate solutions close to the existing best solution. An algorithm's optimization efficiency is largely influenced by how well its exploration and exploitation capabilities are balanced. Greater exploration capability may increase the chances of discovering improved candidate solutions, but it may slow convergence; on the other hand, increasing the exploitation capability might hasten convergence but increase the chance of being stuck in local optima. To establish a delicate balance between the two phases, we enhanced both the exploitation and the exploration of CMWGWO. This balance is essential since it affects the effectiveness of the optimization as a whole: ideally, the algorithm must balance exploration and exploitation in order to locate high-quality solutions quickly while avoiding premature convergence to local optima, thereby enhancing its efficacy and resilience in tackling optimization problems. In this part, the exploration and exploitation stages of CMWGWO are numerically investigated and compared to those of the traditional GWO. We use Eqs. (20) to (23) to determine the proportion of these two phases in order to more accurately characterize the algorithm's exploration and exploitation process while it is running.
The percentages of the algorithm's exploration and exploitation stages are denoted by %EPR and %EPL, respectively. Div denotes the diversity of all population members, and Div_max denotes the highest diversity value observed so far among the population members. Furthermore, Div_j stands for the diversity of the jth dimension across the whole population. The parameters n and dim correspond to the population size and the problem dimension, respectively. median(x_j) designates the median value of the jth dimension across all population members, while x_i^j specifies the jth dimension of the ith member of the population.
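The quantities defined above, following Eqs. (20)-(23), can be sketched in a few lines. This is a minimal illustration of the common diversity-based exploration/exploitation measure; the function names are ours.

```python
from statistics import median

def dimension_diversity(population):
    """population: list of n candidate solutions, each a list of dim values.

    Returns Div: the mean over dimensions of the average absolute
    deviation of each member from the per-dimension median
    (Div_j averaged over j, cf. Eqs. (20)-(21)).
    """
    n = len(population)
    dim = len(population[0])
    div = 0.0
    for j in range(dim):
        med = median(x[j] for x in population)          # median(x_j)
        div += sum(abs(med - x[j]) for x in population) / n  # Div_j
    return div / dim

def exploration_exploitation(div, div_max):
    """%EPR and %EPL from the current diversity Div and the maximum
    diversity Div_max observed so far (cf. Eqs. (22)-(23))."""
    epr = 100.0 * div / div_max
    epl = 100.0 * abs(div - div_max) / div_max
    return epr, epl
```

During a run, `dimension_diversity` is evaluated at every iteration, `div_max` tracks its running maximum, and the two percentages are averaged over the run to produce figures such as the %EPR/%EPL values reported below.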
Specific unimodal and multimodal functions selected from those used in the previous experiment are employed to analyze the algorithms' exploration and exploitation levels during the search process, as shown in Fig. 13. The first column compares the convergence curves of CMWGWO and GWO, while the second and third columns show the exploration and exploitation phases of CMWGWO and GWO, respectively. F1, F3, and F5 are categorized as unimodal functions, whereas F10 and F23 are categorized as multimodal. The balance of the proposed CMWGWO and GWO for unimodal and multimodal functions is shown in Fig. 13 by the convergence and diversity patterns. It is clear that, compared to the original GWO approach, the CMWGWO method shows enhanced exploration of optimum solutions. Additionally, CMWGWO outperforms GWO in striking a balance between the algorithm's exploitation and exploration stages.
Looking at the second column of Fig. 13, the percentage of exploration (%EPR) attained by the CMWGWO approach is 1.1164% for F1, 1.5338% for F3, 1.2949% for F5, 3.4933% for F10, and 32.4377% for F23. Furthermore, the %EPL is 98.8836% for F1, 98.4662% for F3, 98.7051% for F5, 96.5067% for F10, and 67.5623% for F23. Compared to the %EPR attained by GWO, the proposed CMWGWO approach exhibits an increase of around 2.1% in the exploration phase on the unimodal functions F1, F3, and F5, and an increase of around 19% on the multimodal functions F10 and F23. Based on the convergence curves of CMWGWO and GWO on F1, F3, F5, F10, and F23, it can be concluded that the proposed CMWGWO divides the execution time between the exploitation and exploration phases more efficiently. In other words, it shows a greater balance between the two stages, which enhances performance.

Computation time analysis
Tables 10 and 11 present a comparison of the average computation time of CMWGWO and its competitors. A detailed analysis highlights that CMWGWO generally requires more CPU time than the other methods. This can be attributed to CMWGWO's incorporation of MRS, COL, and WID, each of which is executed independently in the course of the optimization process. Consequently, the CPU time of CMWGWO does not consistently outperform the compared methods, owing to its inherent complexity, as elucidated in Eq. (19). In Figs. 14 and 15, it becomes evident that CMWGWO requires greater computational time than the original GWO and other GWO variants such as AdGWO, AGWO, AGWOCS, and RWGWO. Nonetheless, despite its increased computational demands, CMWGWO exhibits remarkable efficiency, surpassing these algorithms in terms of solution quality. Taking its substantial contributions into consideration, a harmonious balance can be achieved between attaining high accuracy and managing the time required to solve problems.

Engineering problem application
Based on the constraints and particular needs of the optimization method they are employing, researchers must make thorough and well-founded assessments. To do this, they need efficient tools that give them the ability to make wise decisions within a logical framework 71,72 . In this context, the performance of CMWGWO is carefully assessed by using it to solve three traditional constrained engineering problems. The purpose of this inquiry is to confirm the useful and practical applications of the CMWGWO approach. The three problems under consideration are as follows: the Welded Beam Design Problem (WBDP) 73 , the Three Truss Bar (TTB) 74,75 , and the I-Beam Design Problem (IBDP) 76,77 .

Welded beam design (WBDP)
In the welded beam problem, a stiff support member needs to be welded to a beam. The optimal cost problem, depicted in Fig. 16, is used to estimate the beam's ideal dimensions in order to reduce costs 78 . Four main factors, namely the weld seam thickness h (x1), steel bar length l (x2), steel bar height t (x3), and steel bar thickness b (x4), have an impact on the production cost. Additionally, the model is subject to four constraints: buckling load (Pc), shear stress (τ), beam internal bending stress (σ), and end deflection rate (δ). The mathematical formulation of this problem is stated below.

Objective function and constraints: see Eq. (24).
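Since the equation block of Eq. (24) is not reproduced in this extraction, the sketch below encodes the standard WBDP formulation from the literature: the fabrication cost objective and the usual seven inequality constraints (shear stress, bending stress, geometry, cost bound, minimum weld thickness, deflection, and buckling). The constants and function names are assumptions based on the common formulation, not taken verbatim from this paper.

```python
import math

# Standard WBDP constants from the literature (assumed)
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbdp_cost(x1, x2, x3, x4):
    """Fabrication cost of the welded beam (objective to minimize)."""
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def wbdp_constraints(x1, x2, x3, x4):
    """Constraint values g_i; the design is feasible when all g_i <= 0."""
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                    # primary shear stress
    M = P * (L + x2 / 2.0)                                    # bending moment
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2
               * (x2**2 / 12.0 + ((x1 + x3) / 2.0) ** 2))     # polar moment
    tau_pp = M * R / J                                        # secondary shear stress
    tau = math.sqrt(tau_p**2
                    + 2.0 * tau_p * tau_pp * x2 / (2.0 * R)
                    + tau_pp**2)                              # total shear stress
    sigma = 6.0 * P * L / (x4 * x3**2)                        # bending stress
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)                 # end deflection
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
           * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))  # buckling load
    return [tau - TAU_MAX,
            sigma - SIGMA_MAX,
            x1 - x4,
            0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
            0.125 - x1,
            delta - DELTA_MAX,
            P - p_c]
```

An optimizer such as CMWGWO would minimize `wbdp_cost` subject to `wbdp_constraints`, typically via a penalty on positive constraint values.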

Based on the data shown in Table 12, the results reveal that the CMWGWO method attains the smallest cost for the WBDP, measuring 1.670217726. This outcome highlights a significant advantage over the GWO, RWGWO, and AGWOCS algorithms. Clearly, CMWGWO effectively meets the requirements of the design problem at the lowest cost, leading to reduced engineering consumption. These findings demonstrate the practical superiority of CMWGWO in achieving optimal solutions, resulting in cost-effective designs and resource savings in engineering applications.

Three truss bar (TTB)
First introduced by Ray and Saini, the three bar truss design optimization problem is a classic engineering optimization problem in structural mechanics 79 . The problem consists of two variables and three constraints. It involves finding the optimal dimensions of a truss made of three bars to achieve certain design objectives while respecting constraints such as buckling, stress, and bending, as presented in Fig. 17.
Objective function, subject to the constraints given below, where l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². The information in Table 13 makes it readily apparent that the CMWGWO approach earns the top spot in terms of best cost. This result shows that CMWGWO works remarkably well for this particular problem. It verifies the proposed algorithm's superiority over competing approaches and shows that it can produce cost-optimization solutions that are both highly competitive and superior.
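As a minimal sketch of the TTB formulation described above, the code below uses the standard volume objective (2√2·x1 + x2)·l and the three stress constraints with l = 100, P = 2, and σ = 2 as stated; the function names are ours.

```python
import math

# Problem constants as stated in the text
L_TTB, P, SIGMA = 100.0, 2.0, 2.0

def ttb_volume(x1, x2):
    """Structural volume of the three-bar truss (objective to minimize).
    x1, x2 are the cross-sectional areas of the outer and middle bars."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_TTB

def ttb_constraints(x1, x2):
    """Standard stress constraints; feasible when every g_i <= 0."""
    d = math.sqrt(2.0) * x1**2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / d * P - SIGMA
    g2 = x2 / d * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return [g1, g2, g3]
```

Near the best-known design (x1 ≈ 0.7887, x2 ≈ 0.4082), the volume is about 263.9 and the first stress constraint is active, which is the behavior an optimizer must balance.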

I-beam design problem (IBDP)
The I-beam design problem, as shown in Fig. 18, involves a beam subjected to two pressures 80 . The goal is to design an I-beam with minimal vertical deflection. The structural parameters of the problem consist of the height, length, and two thicknesses. The mathematical representation of this problem is given by the objective function and constraints, where 10 ≤ z1 ≤ 50, 10 ≤ z2 ≤ 80, and 0.9 ≤ z3, z4 ≤ 5.
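A sketch of the common literature formulation of the IBDP: the vertical-deflection objective and the cross-sectional area constraint. The mapping of the design variables (flange width b, section height h, web thickness tw, flange thickness tf) to the paper's z1-z4 is an assumption, as are the constants.

```python
def ibdp_deflection(b, h, tw, tf):
    """Vertical deflection of the I-beam under the applied load
    (objective to minimize), common literature formulation:
    5000 divided by the section's moment-of-inertia expression."""
    moment_term = (tw * (h - 2.0 * tf) ** 3 / 12.0
                   + b * tf**3 / 6.0
                   + 2.0 * b * tf * ((h - tf) / 2.0) ** 2)
    return 5000.0 / moment_term

def ibdp_constraint(b, h, tw, tf):
    """Cross-sectional area limit (commonly 300 cm^2);
    the design is feasible when the returned value is <= 0."""
    return 2.0 * b * tf + tw * (h - 2.0 * tf) - 300.0
```

Under this formulation, a design near b = 50, h = 80, tw = 0.9, tf ≈ 2.32 sits on the area constraint and yields a deflection of about 0.01307, consistent with the best value reported for CMWGWO below.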

Conclusion
This paper introduces CMWGWO with the primary objective of addressing the limitations of the original GWO. These limitations include premature convergence, insufficient diversity within the population, subpar global search capability, and susceptibility to being trapped in local optima due to convergence towards the best wolf. CMWGWO employs three strategies to overcome these limitations. Firstly, the WID strategy is employed to enhance population diversity by facilitating better information exchange between the best and worst wolves. This improvement enables the algorithm to escape stagnation and explore a more extensive range of solutions. Secondly, the algorithm incorporates the embedded COL search mechanism to increase the likelihood of individuals approaching the global optimum; by doing so, it elevates the optimization accuracy and alleviates stagnation issues. Lastly, the integration of MRS amplifies population exploration and significantly expands the search space. As a result, CMWGWO is able to effectively explore a wider range of potential solutions, enhancing its overall performance in optimization tasks. The experiments in this study involve the testing of 23 functions and 10 CEC 2019 functions with distinct characteristics. The initial comparison includes WID_GWO, COL_GWO, MRS_GWO, GWO, and CMWGWO to confirm the effectiveness of the optimization mechanisms introduced in this paper. Furthermore, CMWGWO is pitted against well-known GWO variants, namely RWGWO, AGWO, AdGWO, and AGWOCS. The results clearly demonstrate that CMWGWO outperforms these competitive algorithms significantly, a fact that becomes evident when examining the convergence curves of these algorithms. In contrast to original algorithms such as CHOA, SCA, HFBOA, and PSO, CMWGWO exhibits a robust exploration ability and improves solution accuracy substantially. Extensive testing on high-dimensional problems, coupled with exploitation and diversity analysis, further confirms its capability to achieve
higher-quality solutions. Lastly, the application of CMWGWO to the WBDP, TTB, and IBDP problems demonstrates its effectiveness in solving these typical constrained engineering problems, thereby highlighting its potential for practical applications.
Although CMWGWO can surpass the original GWO and other rival algorithms, its optimization performance can still be improved. Tables 5, 6, 7 and 9 display such results, e.g., for functions F7 and F9. This is consistent with the No Free Lunch theorem, which states that no single optimizer is efficient for all problems. To attain greater solution accuracy, we intend to improve CMWGWO's exploration and exploitation capabilities going forward. This will involve combining further modification approaches, such as applying novel population initialization strategies, hybridizing with other algorithms, and adaptively lowering some parameters in a nonlinear way. Additionally, CMWGWO has difficulties when tackling large-scale and complicated problems; therefore, future work will entail extensive tests on complex problems and comparisons with more state-of-the-art algorithms. CMWGWO requires more time than the original GWO, making it necessary to consider parallel computing in the next research stage to speed up the procedure. A fascinating research path also involves merging CMWGWO with machine learning. Furthermore, the applicability of CMWGWO can be extended to various real-world optimization problems across different fields; for instance, it can be effectively utilized in optimal power flow problems 81 , classification of neuroimaging 82 , heat removal systems 83 , and water distribution systems 84 . Expanding CMWGWO's potential, it would be reasonable to explore the development of a multi-objective version of the algorithm, catering to complex multi-objective challenges that require simultaneous optimization of multiple criteria.

Figure 2. Illustration of search wolf during exploration and exploitation.

Figure 4. Graphical illustration of chaotic opposition learning.

Figure 5. Illustration of incident and reflected light on a mirror surface.

Figure 6. Information exchange between the alpha wolf and the worst wolf.

Figure 8. Convergence plot of different improvement techniques.

Figure 9. Convergence trajectory of CMWGWO and nine compared optimizers on 23 functions.

Figure 14. Comparison of optimizer average computation time on the 23 functions.

Figure 15. Comparison of optimizer average computation time on the CEC 2019 functions.

Table 3. Statistical and non-parametric test comparison of GWO outcomes using different techniques. Significant values are in [bold].

Table 5. Statistical comparison of CMWGWO with GWO variants and original algorithms with Dim = 30. Significant values are in [bold].

Table 6. Comparison of CMWGWO with GWO variants and original algorithms with Dim = 100. Significant values are in [bold].

Table 7. Comparison of CMWGWO with GWO variants and original algorithms with Dim = 200. Significant values are in [bold].

Table 9. Statistical comparison of CMWGWO with GWO variants and original algorithms on CEC 2019. Significant values are in [bold].

Table 8. Comparison of CMWGWO with GWO variants and original algorithms with Dim = 500. Significant values are in [bold].
For the IBDP, Table 14 shows that CMWGWO attains the smallest vertical deflection, measuring 0.013074119. This outstanding outcome demonstrates that, compared to the other optimization techniques, CMWGWO provides the best solution for this particular design problem.

Table 10. Computation time comparison of CMWGWO with GWO variants and original algorithms on the 23 functions. Significant values are in [bold].

Table 11. Computation time comparison of CMWGWO with GWO variants and original algorithms on CEC 2019. Significant values are in [bold].

Table 12. Results of CMWGWO and other algorithms on WBDP. Significant values are in [bold].

Table 13. Results of CMWGWO and other algorithms on TTB. Significant values are in [bold].

Table 14. Results of CMWGWO and other algorithms on IBDP. Significant values are in [bold].