Enhancing grasshopper optimization algorithm (GOA) with levy flight for engineering applications

The grasshopper optimization algorithm (GOA) is a meta-heuristic algorithm proposed in 2017 that mimics the biological behavior of grasshopper swarms seeking food sources in nature to solve optimization problems. Nonetheless, the original GOA has some shortcomings: its global search ability is somewhat insufficient, and its precision needs further improvement. Although many GOA variants exist in the literature, the problems of inefficiency and rough precision still appear in these variants. Aiming at these deficiencies, this paper develops an improved version of GOA with a Levy flight mechanism, called LFGOA, to alleviate the shortcomings of the original GOA. The LFGOA algorithm achieves a more suitable balance between exploitation and exploration while searching for the most promising region. The performance of LFGOA is tested on 23 mathematical benchmark functions in comparison with eight well-known meta-heuristic algorithms, and on seven real-world engineering problems. The statistical analysis and experimental results show the efficiency of LFGOA. According to the obtained results, the LFGOA algorithm can be a potential alternative for solving optimization problems, as it has strong exploration and exploitation capabilities.

Nature-inspired meta-heuristic algorithms with levy flight. Wang, Shuang et al. 22 (2022) proposed an improved version of ROA called Enhanced ROA (EROA) using three different techniques: adaptive dynamic probability, SFO with Levy flight, and a restart strategy; they successfully overcame the slow convergence and local-optimum stagnation of the original ROA. Since the Levy flight trajectory-based WOA (LWOA) algorithm was proposed by Zhou, Y., Ling, Y. and Luo, Q. 23 (2018), it has attracted researchers and practitioners, who have applied the LWOA algorithm in many domains because of its effective adaptability, few control parameters, and simple structure. Xuan Chen et al. 24 (2021) employed opposition-based learning and a genetic algorithm with Levy flight to improve the Wolf Pack Algorithm, maintaining the diversity of the initial population during the global search. Their experimental results show that their proposed algorithm has better global and local search capability, especially on multi-peak and high-dimensional functions.
The above cases are only a few typical models, but they show that a nature-inspired meta-heuristic algorithm's ability to reach the best global value depends, to a large extent, on combining it with Levy flight. In other words, these studies affirm that Levy flight can considerably enhance the performance of meta-heuristic optimizers.
Our main contribution is to apply the grasshopper optimization algorithm with a Levy flight distribution strategy (LFGOA) to seven real-world problems, which cover hybrid (continuous, discrete, and integer variables) nonlinear constrained optimization: Himmelblau's nonlinear optimization problem, cantilever beam design, car side impact design, gear train design, pressure vessel design, speed reducer design, and tubular column design.
Another contribution is that the Levy flight strategy is properly embedded in GOA to help explore the search space. The comprehensive effect of the Levy flight mechanism strengthens the exploration-exploitation balance during the search process.
The third contribution is that the performance of the LFGOA algorithm was validated on 23 mathematical benchmark functions in comparison with eight well-known meta-heuristic algorithms (AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO), and the comprehensive performance of the LFGOA algorithm is superior to those eight algorithms and to the original GOA algorithm.
The fourth contribution is a scalability test with dimensions of 50, 100, 300, and 500, undertaken by comparing LFGOA with the original GOA to assess the influence of dimension on solution consistency and optimization quality. The comparisons show that the proposed LFGOA algorithm retains a simple and efficient structure while significantly improving the performance of the original GOA algorithm.
In the rest of this paper, Section 2 provides the key idea and structure of the Grasshopper Optimization Algorithm (GOA), Section 3 presents the Grasshopper Optimization Algorithm with Levy Flight (LFGOA), and Section 4 reports the experimental results and analysis.

The grasshopper optimization algorithm (GOA). The position of the ith grasshopper is modeled as

$$X_i = S_i + G_i + A_i, \quad i \neq j, \tag{1}$$

where $X_i$ indicates the ith grasshopper's position, $S_i$ denotes the social interaction between the solution and the other grasshoppers in the swarm, $G_i$ is the gravity force on the ith solution, and $A_i$ represents the wind advection. These components are given by

$$S_i = \sum_{j=1,\, j\neq i}^{N} s(d_{ij})\,\hat{d}_{ij}, \tag{2}$$

$$s(r) = f e^{-r/l} - e^{-r}, \tag{3}$$

$$G_i = -g\,\hat{e}_g, \tag{4}$$

$$A_i = u\,\hat{e}_w, \tag{5}$$

where N denotes the number of grasshoppers, $d_{ij} = |x_j - x_i|$ defines the Euclidean distance between the ith and the jth grasshoppers, $\hat{d}_{ij} = (x_j - x_i)/d_{ij}$ represents the unit vector from the ith to the jth grasshopper, s is the social force function with intensity of attraction f and attractive length scale l, g is the gravitational constant with unit vector $\hat{e}_g$, and u is the wind drift constant with unit vector $\hat{e}_w$. Substituting Eqs. (2)-(5) into Eq. (1) gives

$$X_i = \sum_{j=1,\, j\neq i}^{N} s(|x_j - x_i|)\,\frac{x_j - x_i}{d_{ij}} - g\,\hat{e}_g + u\,\hat{e}_w. \tag{6}$$

However, the mathematical model of Eq. (6) cannot be used directly to solve optimization problems, mainly because the grasshoppers quickly reach their comfort zone and the swarm fails to converge to the target location (global optimum). To solve optimization problems and prevent the swarm from quickly reaching its comfort zone, the equation actually applied in practice was proposed by the authors of GOA as follows:

$$X_i^d = c\left(\sum_{j=1,\, j\neq i}^{N} c\,\frac{UB_d - LB_d}{2}\, s\!\left(|x_j^d - x_i^d|\right)\frac{x_j - x_i}{d_{ij}}\right) + \hat{T}_d, \tag{7}$$

where $UB_d$ and $LB_d$ are the upper and lower bounds in the dth dimension respectively, and $\hat{T}_d$ denotes the best solution found so far in the dth dimension. In Eq. (7) the gravity force is not considered, that is, there is no $G_i$ component, and the wind direction (the $A_i$ component) is assumed to always point towards the target $\hat{T}_d$. The second term, $\hat{T}_d$, simulates the tendency of grasshoppers to move towards the food source.
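To make the update of Eq. (7) concrete, the following Python sketch applies it coordinate by coordinate for one grasshopper, using the GOA social force s(r) = f·e^{-r/l} − e^{-r}. The parameter values f = 0.5 and l = 1.5 are the usual GOA defaults and are assumptions here, since this paper does not quote them.

```python
import math

def s(r, f=0.5, l=1.5):
    """Social force: attraction f*exp(-r/l) minus repulsion exp(-r)."""
    return f * math.exp(-r / l) - math.exp(-r)

def goa_update(positions, i, target, lb, ub, c):
    """Return the new position of agent i following Eq. (7).

    positions: list of agent positions (lists of floats); target: best-so-far
    solution T_d; lb/ub: scalar bounds; c: the decreasing coefficient.
    """
    dim = len(positions[i])
    new = []
    for d in range(dim):
        total = 0.0
        for j, xj in enumerate(positions):
            if j == i:
                continue
            dist = abs(xj[d] - positions[i][d])
            if dist == 0.0:
                continue  # coincident agents contribute nothing
            unit = (xj[d] - positions[i][d]) / dist
            total += c * (ub - lb) / 2.0 * s(dist) * unit
        new.append(c * total + target[d])  # drift towards the best-so-far T_d
    return new
```

This is only a sketch under the stated parameter assumptions, not the authors' implementation; it shows how the social term is summed over all other agents before the target term is added.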
The key parameter c in the mathematical model. In the grasshopper swarm algorithm, the parameter c in Eq. (7) is very important for local and global search. The inner c in Eq. (7) reduces the repulsion, attraction, and comfort zones between grasshoppers in proportion to the number of iterations. The outer c in Eq. (7) reduces the grasshoppers' movements around the target (food), shrinking the search coverage around the target as the iterations go on. The coefficient c is computed as

$$c = c_{max} - t\,\frac{c_{max} - c_{min}}{t_{max}}, \tag{8}$$

where $c_{max}$ and $c_{min}$ are the maximum and minimum values of c respectively (they can be set to 1 and 0.00001), t is the current iteration, and $t_{max}$ is the maximum number of iterations. The position of a grasshopper is updated based on its current position, the global best position, and the positions of the other grasshoppers within the swarm.
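The linearly decreasing schedule of the coefficient c described above can be sketched in a few lines of Python; the values 1 and 0.00001 are the settings quoted in the text.

```python
def decreasing_coefficient(t, t_max, c_max=1.0, c_min=0.00001):
    """c shrinks linearly from c_max towards c_min over the run."""
    return c_max - t * (c_max - c_min) / t_max

# At the start c = c_max, so exploration dominates; at the final
# iteration c = c_min, so the swarm exploits tightly around the target.
```

Because the same c appears twice in the position update, its decrease has a compound effect: the comfort zone and the search radius around the target shrink together as iterations proceed.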

The grasshopper optimization algorithm with levy flight
Mantegna's algorithm for Levy flight random walks. Studies show that the probability density function of the Levy flight step length can be approximated by a power law,

$$L(s) \sim |s|^{-1-\theta},$$

where s is the random step length of the Levy flight and the power-law index θ, bounded in (0, 2], is set to 1.5; it controls the peak sharpness of the Levy distribution. Different values of the parameter θ yield different distributions: smaller values produce longer jumps, whereas bigger values produce shorter jumps. The true Levy distribution is hard to implement in computer code, so an approximate form is used. Mantegna's algorithm is a fast and accurate method for generating a stochastic variable with this distribution. The factor value f (f = 0.01), derived from L/100, scales the Levy walks and depends on the dimension of the desired problem, where L is the width scale; it prevents the Levy flights from becoming too aggressive while still helping new solutions move away from their current region of the search space. The process of Levy flight is exhibited in Algorithm 1.
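A minimal Python sketch of Mantegna's step generator follows. The helper name levy_step is ours (the paper calls this routine LFGOA_Levy), and θ = 1.5 as above; steps are drawn as u/|v|^(1/θ) with u and v Gaussian.

```python
import math
import random

def levy_step(dim, theta=1.5):
    """Mantegna's method: s = u / |v|^(1/theta), u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + theta) * math.sin(math.pi * theta / 2)
               / (math.gamma((1 + theta) / 2) * theta * 2 ** ((theta - 1) / 2))) ** (1 / theta)
    return [random.gauss(0.0, sigma_u) / abs(random.gauss(0.0, 1.0)) ** (1 / theta)
            for _ in range(dim)]

# The scale factor f = 0.01 mentioned above would multiply these raw steps:
# scaled = [0.01 * step for step in levy_step(dim)]
```

Most draws are small, but the heavy tail occasionally produces very large steps, which is exactly the long-jump behavior the text relies on.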
The step size value is added to the update equations of the LFGOA algorithm for finding the best position. From a theoretical perspective, this random walk is based on a long-tailed distribution, which can help an algorithm escape from a local optimum [25][26][27]. In other words, the Levy flight distribution is an effective mathematical operator for producing varied solutions in the search space and increasing the exploration capability of the LFGOA algorithm.
From Algorithm 1, it is worth noting the formula NewPosition = currentPosition * LFGOA_Levy(dim). Firstly, LFGOA_Levy(dim) represents the Levy flight function, and dim is the dimension size of the problem; the Levy flight strategy is integrated into the GOA through this formula. The Levy flight has a relatively high probability of large strides during its random walk, which effectively improves the randomness of the GOA algorithm. In this way, the risk that the algorithm gets stuck in a local optimum is drastically reduced, while it is still possible to perform sufficient local refinement. In other words, the algorithm presents a natural balance between exploration and exploitation. Secondly, in the case of stagnation, Levy-triggered searching (hunting) patterns can help LFGOA jump out of it toward new, better positions. By this mechanism, the LFGOA algorithm can overcome the low diversity of the original GOA algorithm and greatly increase the probability of obtaining the best position (solution), which is the highlight and unique feature of the LFGOA algorithm.
Despite being a simple change, this new distribution induces drastic changes in the optimization process: Levy-based jumps can redistribute grasshoppers around the fitness landscape, preventing the population from losing diversity and putting more emphasis on the global searching tendency.

Enhancing grasshopper optimization algorithm (GOA) with levy flight. How and where Levy flight is placed in the GOA algorithm directly produces totally different results, and in some cases even worse results. Based on these facts, through an in-depth comprehensive study and trial-and-error experiments, we successfully embedded Levy flight into the GOA algorithm by the following simple but effective mechanisms.
Firstly, except for the first grasshopper, which is initialized with random values (since the first iteration is dedicated to calculating the grasshoppers' fitness), the other grasshoppers are assigned Levy flight distribution values rather than random values, which directly produces a better start for most of the grasshoppers, with wide diversity, in the initialization stage. Secondly, the Levy flight mechanism is applied during the iterations, which overcomes the deficiencies of GOA: the search can escape from a local optimum and restart in a different region of the search space. The flow chart of the Levy flight mechanism embedded in the GOA is shown in Fig. 2, and the pseudo-code of the LFGOA algorithm is presented in Algorithm 2. In sharp contrast, although existing methods have greatly improved GOA, there is still a large probability of falling into a local optimum because of premature convergence, whose true cause is the underdeveloped diversity of the GOA algorithm. LFGOA, on the other hand, initializes the positions of the agents in the search space by Levy flight, as in the formula NewPosition = currentPosition * LFGOA_Levy(dim), where LFGOA_Levy(dim) represents the Levy flight function and dim is the dimension size of the problem. This provides a large-scale deployment scheme for the LFGOA algorithm: at the initialization stage, grasshoppers are assigned Levy flight values rather than random numbers drawn uniformly from [0, 1], which directly increases the diversity of the LFGOA algorithm. Secondly, the randomization is more efficient because the step length follows a heavy-tailed distribution, so any large step is possible, which effectively improves LFGOA's global search ability and precision.
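One plausible reading of this initialization scheme is sketched below: the first grasshopper is drawn uniformly in [lb, ub], while the remaining ones are seeded directly from Levy-flight values. Both the helper levy_step (a Mantegna-style generator standing in for LFGOA_Levy) and the function init_population are our illustrative names, not the authors' code.

```python
import math
import random

def levy_step(dim, theta=1.5):
    """Assumed Mantegna-style generator standing in for LFGOA_Levy(dim)."""
    sigma_u = (math.gamma(1 + theta) * math.sin(math.pi * theta / 2)
               / (math.gamma((1 + theta) / 2) * theta * 2 ** ((theta - 1) / 2))) ** (1 / theta)
    return [random.gauss(0.0, sigma_u) / abs(random.gauss(0.0, 1.0)) ** (1 / theta)
            for _ in range(dim)]

def init_population(n_agents, dim, lb, ub):
    """First agent uniform-random in [lb, ub]; the rest get Levy values."""
    pop = [[lb + random.random() * (ub - lb) for _ in range(dim)]]
    pop += [levy_step(dim) for _ in range(n_agents - 1)]
    return pop
```

Because the heavy-tailed draws occasionally land far from the origin, the Levy-seeded agents start with much wider spread than a uniform initialization, which is the diversity effect described above.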
From Fig. 2, it is worth noting the following three formulas. Tp is assigned logical '0' when the value of a grasshopper's position is less than the upper boundary, and logical '1' otherwise. Tm is assigned logical '0' when the value of a grasshopper's position is more than the lower boundary, and logical '1' otherwise. The term (∼(Tp + Tm)) takes value 1 when the grasshopper's position is within the boundaries, and 0 otherwise.
When a grasshopper goes outside the search space, it is drawn back by the formula

GrassHopperPositions(i, :) = (GrassHopperPositions(i, :) .* (∼(Tp + Tm))) + ub' .* Tp + lb' .* Tm.

After that, the position of the grasshopper is directly replaced (similar to a restart) 28 by the formula

GrassHopperPositions(i, :) = GrassHopperPositions(i, :) .* LFGOA_Levy(dim).

Based on this formula, the positions of all the grasshoppers are randomly redistributed around the fitness landscape, preventing the population from losing diversity and putting more emphasis on the global searching tendency. The balance between exploration and exploitation is achieved by the Levy-flight-based jumps, which allow grasshoppers to escape from local minima and explore different search areas. However, this cannot guarantee that the newly updated position is better than the current one.
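A small Python sketch of the two MATLAB formulas above, assuming scalar bounds for readability: out-of-bounds coordinates are snapped to the violated bound (the role of the Tp/Tm masks), and the agent is then redistributed by an elementwise Levy multiplication.

```python
def clamp_to_bounds(position, lb, ub):
    """Tp/Tm mask: coordinates above ub snap to ub, below lb snap to lb."""
    return [ub if x > ub else lb if x < lb else x for x in position]

def levy_restart(position, levy):
    """GrassHopperPositions(i,:) = GrassHopperPositions(i,:) .* LFGOA_Levy(dim)."""
    return [x * step for x, step in zip(position, levy)]
```

In the paper's flow the clamp is applied first, and the Levy restart then scatters the clamped position; as noted above, the restarted position is not guaranteed to be better, only more diverse.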
The proposed approach. As a relatively new algorithm, GOA has achieved good results on some test functions. However, experimental results show that it still suffers from insufficient global exploration and local-optimum stagnation. The lack of global exploration capacity can be attributed to deficient searching in its two stages. Thus, in this work GOA is properly integrated with Levy flight to improve its global search ability. Meanwhile, a Levy-flight restart strategy is added to GOA to help the algorithm escape from local optima.
To the best of our knowledge, the main reason behind the effectiveness of LFGOA is that Levy-flight-based jumps can effectively redistribute the search agents, enhancing their diversity and emphasizing more explorative steps in case of premature convergence to local optima. LFGOA is a successful GOA variant that combines GOA with Levy flight, and it obtains better results when applied to seven real-world engineering problems. The statistical analysis and experimental results show the efficiency of LFGOA.
In Section 4, rigorous experiments will show that LFGOA is superior to the GOA algorithm in most performance metrics, especially in correctly obtaining the best solutions with fast convergence speed. In fact, LFGOA still retains the advantages of a simple structure and few parameter tunings even with the extra Levy flight mechanism.

Experimental results and analysis
In this section, all experiments were carried out under Windows 10 (x64) using MATLAB R2019a, on a hardware platform configured with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 8 GB RAM.
The performance of the suggested LFGOA is assessed in this section using five experiments. The first evaluates AHA, AO, DA, DMOA, GBO, HGS, HHO, LFGOA, and MVO in terms of the average value, the standard deviation, and the best value on the twenty-three mathematical benchmark functions presented in Table 1. These benchmark functions are categorized into three groups: unimodal, multi-modal, and composite.
Here, the LFGOA performance is tested using twenty-three benchmark functions: seven unimodal, six multimodal, and ten fixed-dimension multimodal functions. The mathematical description of each type is given in Table 1, where N denotes the number of grasshoppers, T refers to the maximum iteration value, dim refers to the number of dimensions, Range shows the interval of the search space, and F_min refers to the optimal value that the corresponding function can achieve.
The second experiment strictly tests the convergence performance of LFGOA against AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO. The third experiment tests LFGOA with non-parametric Wilcoxon, Friedman, and Nemenyi statistical tests. The fourth tests the scalability of LFGOA compared with GOA comprehensively and thoroughly at 50, 100, 300, and 500 dimensions. The fifth presents some quantitative metrics of LFGOA.
Comparing LFGOA with AHA, AO, DA, DMOA, GBO, HGS, HHO, and MVO. To compare and evaluate the performance of LFGOA on the well-known 23 mathematical benchmark functions, we selected the following eight advanced, well-known, and recent meta-heuristic algorithms: 1) Artificial hummingbird algorithm (AHA) 29, 2) Aquila Optimizer (AO) 30, 3) Dragonfly algorithm (DA), 4) Dwarf mongoose optimization algorithm (DMOA), 5) Gradient-based optimizer (GBO), 6) Hunger games search (HGS), 7) Harris hawks optimization (HHO), and 8) Multi-verse optimizer (MVO). In order to provide a fair comparison, each algorithm was run 30 times on each benchmark function, and the number of search agents and the maximum number of iterations were both set to 100. The key parameters of these nine algorithms are set up as shown in Table 2.
In the following tables, the best results are marked in bold. In addition, to check the differences and rankings between the nine algorithms, a non-parametric multiple comparison method, the Friedman test, is used to calculate the average ranking values. In the Friedman test, the best algorithm is the one that receives the lowest rank, while the worst algorithm receives the highest rank. In order to assess the statistical performance of LFGOA against each other method on the 23-function test suite, the average (mean) and standard deviation values of the rank of each method were taken into account. The average and Std rankings of LFGOA and the other methods under the Friedman test are summarized in Tables 3 and 4, respectively.
In Table 3, 17 out of the 23 average values obtained by the LFGOA algorithm are lower than those obtained by the other eight algorithms. From Table 3, it can be seen that the average searching quality of LFGOA is better than that of the other methods.
From Table 3, it is also clear that LFGOA, with its complete improvement strategies, performs best, with a Friedman test ranking value of 2.4783. All in all, LFGOA ranks first on 18 out of the 23 functions, more than any of the other eight optimization algorithms. However, LFGOA gives unsatisfactory results on F14, F15, F17, and F18, and achieves only the third average ranking on F12. Overall, LFGOA performs the best among the nine algorithms, proving that the utilization of Levy flight can effectively enhance the performance of the GOA algorithm.
In Table 4, only on the composite functions F14-F18 are the standard deviation values obtained by the LFGOA algorithm (5.90E+01, 4.91E−03, 7.49E−03, 8.58E−01, and 6.66E+00) not lower than those of the other eight algorithms. All in all, 18 out of the 23 standard deviation values obtained by the LFGOA algorithm are lower than those obtained by the other eight algorithms. The better standard deviation values prove that the LFGOA algorithm performs more stably than the other eight algorithms. As shown in Table 4, we also rank the algorithms according to their Std values using the Friedman test. LFGOA ranks first on all unimodal functions (F1-F7) and all multi-modal functions (F8-F13) and achieves a Std ranking value of 2.2609. However, LFGOA gives unsatisfactory results on F14, F17, and F18; it achieves the third Std ranking on F16 and the fourth on F15. The statistical results show that LFGOA has the best performance compared to the eight algorithms mentioned above in solving the 23 classical test functions.
In Table 5, for the unimodal and multimodal functions, the best values obtained by the LFGOA algorithm are not as good as desired in comparison with the other eight algorithms. For the composite functions, only on F15 does the LFGOA algorithm obtain a merely approximate value; on the other composite functions, F14 and F16-F23, the LFGOA algorithm obtains more accurate approximation values.
To further analyze the differences between the algorithms, a post-hoc Nemenyi test was employed; if the null hypothesis is rejected by the Friedman test, we can proceed with such a post-hoc test. The Nemenyi test (Nemenyi, 1963) is similar to the Tukey test for ANOVA and is used when all classifiers are compared to each other. The performance of two classifiers is significantly different if their average ranks differ by at least the critical difference (CD).
The critical difference is computed as

$$CD = q_\alpha \sqrt{\frac{k(k+1)}{6N}},$$

where N is the number of datasets (23) and k is the number of algorithms being compared (9). At α = 0.05, the critical value (Table 6) $q_\alpha$ for 9 classifiers (algorithms) is 3.102, and the corresponding CD is $3.102 \times \sqrt{(9 \times 10)/(6 \times 23)} \approx 2.5051$. At α = 0.10, $q_\alpha = 2.855$, and the corresponding CD is $2.855 \times \sqrt{(9 \times 10)/(6 \times 23)} \approx 2.3056$. A post-hoc test concludes that if the difference in Friedman ranking values between two algorithms is less than the CD value, there is no significant difference between the two algorithms; conversely, a larger difference indicates a significant difference between them.
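The two CD values can be reproduced directly from the formula CD = q_α·sqrt(k(k+1)/(6N)); this snippet recomputes both significance levels.

```python
import math

def critical_difference(q_alpha, k, n):
    """Nemenyi critical difference: CD = q_alpha * sqrt(k*(k+1) / (6*n))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

cd_005 = critical_difference(3.102, 9, 23)  # alpha = 0.05 -> about 2.5051
cd_010 = critical_difference(2.855, 9, 23)  # alpha = 0.10 -> about 2.3056
```

Any pair of algorithms whose Friedman average ranks differ by more than the returned CD is declared significantly different at the chosen level.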
In Table 7, the "Diff with LFGOA" in the third column indicates the difference in average rank between LFGOA and each of the other eight algorithms, and the "Diff with LFGOA" in the fifth column indicates the corresponding difference in Std rank.
The critical difference (CD) diagrams in Fig. 3 are simple and intuitive visualizations of the results of the Nemenyi post-hoc test, designed to check the statistical significance of the differences in average rank of the nine algorithms on the 23 benchmark test functions.
Fig. 3 shows the analysis results of the data from Table 7. In each line segment, we plot the average ranks for the mean (left side of Fig. 3) and Std (right side of Fig. 3) of the nine algorithms. The length of a line segment indicates the CD value, and the circle mark at the center of each line segment represents the average rank position for the mean (left) and Std (right) of the respective algorithm across all 23 benchmark test functions. If the distance between the centers of two line segments is greater than the CD, the two intervals do not overlap, which indicates a statistically significant difference between the two algorithms.

The unimodal test functions F1-F7. Since there is only one extreme point in the F1-F7 unimodal benchmark functions, they are suitable for assessing the convergence rate and benchmarking the exploitation behavior of an algorithm. In the second column of Fig. 4, the LFGOA algorithm shows the best results in 6 out of 7 cases, especially on F1-F4; for F5, however, the result is unsatisfactory for the LFGOA algorithm. On the unimodal functions F6-F7, the GBO algorithm shows better results that nearly reach zero.
The multimodal test functions F8-F13. The F8-F13 multimodal benchmark functions are used to assess the exploration capability of the LFGOA algorithm in finding global optima when the number of local optima increases exponentially with the problem dimension. In the second column of Fig. 5, for F8, the GBO algorithm presents a wrong positive value (see Table 5) against the negative values obtained by the AHA, AO, DA, DMOA, HGS, HHO, LFGOA, and MVO algorithms; the figure therefore shows only the best convergence progress of those eight algorithms without GBO, because values of such different signs cannot be appropriately plotted in the same figure. For F9-F13, the convergence progress of the LFGOA algorithm is satisfactory, especially for F9-F11, where the convergence rate of the LFGOA algorithm is rapid. Since the multimodal functions have an exponential number of local solutions, the results show that the LFGOA algorithm can explore the search space extensively and find promising regions of it.
In the third column of Fig. 5, the convergence progress of the LFGOA algorithm on each of the F8-F13 benchmark functions exhibits an excellent convergence rate. It can also be seen there that the LFGOA algorithm does not provide uniform convergence behavior across all the benchmark functions, which shows that the LFGOA algorithm is good at handling different problems.
The composite test functions F14-F23. In the second column of Fig. 6, for F14-F20, all of the algorithms reach a satisfactory convergence rate. For F21 and F23 (see Table 5), only the final convergence results of the GBO and LFGOA algorithms are satisfactory; those of the other seven algorithms are not. For F22 (see Table 5), only the final convergence results of the HHO and LFGOA algorithms are satisfactory; those of the other seven algorithms are not. All in all, for the composite benchmark functions F14-F23, the comprehensive convergence progress of the LFGOA algorithm is superior to that of the other algorithms, which closely mirrors the situation in Table 5. In the third column of Fig. 6, the convergence progress of LFGOA on each of the F14-F23 benchmark functions exhibits a better convergence rate. In the fourth column of Fig. 6, the average fitness of all grasshoppers on F20-F23 shows high fluctuation during the exploration phase (roughly the early iterations) and low changes in the exploitation phase (the final iterations). This proves that the LFGOA algorithm is able to eventually improve the fitness of the initial random solutions for a given optimization problem. In the fifth column of Fig. 6, the best fitness of all grasshoppers on F14 and F20-F23 likewise fluctuates highly during the exploration phase and changes little in the exploitation phase, which guarantees that the LFGOA algorithm explores extensively in the initial stage, exploits locally at the end of the optimization, and eventually converges to optimal points. As recommended in 37, statistical tests should be done to evaluate the performance of algorithms.
The non-parametric Wilcoxon statistical test is conducted, and p-values less than 0.05 can be considered strong evidence against the null hypothesis. To assess the overall performance of the LFGOA algorithm, and to confirm the significance and robustness of the results, we apply Wilcoxon's statistical test with a 5% significance level to the obtained average accuracy results. In Table 8, p-values greater than 0.05 appear in the following cases: • LFGOA/AO on F2, F9, and F14 in the third column of Table 8.
• LFGOA/DA on F4, F14, and F17 in the fourth column of Table 8; as depicted above, both the DA and LFGOA algorithms embed a Levy flight mechanism, which means the two algorithms share similar properties to some extent. • LFGOA/DMOA on F6 and F11 in the fifth column of Table 8.
• LFGOA/GBO on F2 in the sixth column of Table 8.
• LFGOA/HGS on F2 and F9 in the seventh column of Table 8.
• LFGOA/MVO on F4, F11, and F17; as depicted above, both the exploration and exploitation swarming behaviors of MVO are very similar to those of LFGOA.
The p-values in Table 8 show that the superiority of the LFGOA algorithm is statistically significant.
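The pairwise comparison behind Table 8 can be sketched with the standard library. The helper below implements a two-sided Wilcoxon rank-sum p-value via the large-sample normal approximation (assuming no tied values, as with real-valued fitness); the two samples are synthetic stand-ins for 30-run results of two algorithms, not the paper's data.

```python
import math
import random

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value (normal approximation, no ties)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    rank_sum = sum(pooled.index(v) + 1 for v in a)     # ranks held by sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                       # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # its standard deviation
    z = (rank_sum - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(0)
runs_a = [random.gauss(0.0, 1.0) for _ in range(30)]    # e.g. one optimizer's errors
runs_b = [random.gauss(2.0, 1.0) for _ in range(30)]    # e.g. a clearly worse one
p = ranksum_p(runs_a, runs_b)                           # well below 0.05 here
```

With clearly separated samples the p-value is tiny, so the null hypothesis of equal performance is rejected at the 5% level, mirroring the reasoning applied to Table 8.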
The scalability test for LFGOA. To compare the properties of the LFGOA algorithm with the GOA algorithm comprehensively and thoroughly, we conducted a scalability test. As is known, a scalability test helps us understand, to some extent, the impact of the dimension on solution capability and the effectiveness of the LFGOA algorithm. It is an in-depth exploration of how the features of the LFGOA and GOA algorithms behave as the dimension of the functions grows. Therefore, four dimensions of the functions F1-F23 are used here: 50, 100, 300, and 500. All other circumstances remain consistent: each algorithm uses 100 search agents and runs 30 times. The mean values, standard deviation values, and best optimal values obtained by the LFGOA and GOA algorithms at 50, 100, 300, and 500 dimensions are shown in the following tables.
In Table 9 (D = 50), 15 out of the 23 average values obtained by the LFGOA algorithm are lower than those obtained by the GOA algorithm.
In Table 9 (D = 100), 14 out of the 23 average values obtained by the LFGOA algorithm are lower than those obtained by the GOA algorithm. Table 9 also tells us that the LFGOA algorithm consumed a little more time than the GOA algorithm at dimensions 50 and 100.
In Table 10 (D = 300), 15 out of the 23 average values obtained by the LFGOA algorithm are lower than those obtained by the GOA algorithm.
In Table 10 (D = 500), 14 out of the 23 average values obtained by the LFGOA algorithm are lower than those obtained by the GOA algorithm. Table 10 also tells us that the LFGOA algorithm consumed a little more time than GOA at dimensions 300 and 500.
In Table 13 (D = 50, D = 100), the LFGOA algorithm obtains 17 out of the 23 best values, a number that far exceeds that of the GOA algorithm.
In Table 14 (D = 300, D = 500), the LFGOA algorithm obtains 17 out of the 23 best values, a number that far exceeds that of the GOA algorithm.
Some quantitative metrics of the LFGOA algorithm. The first, second, and third columns of Figs. 7, 8, and 9 show quantitative metrics on the dynamic change of the grasshoppers' positions (search history). From the first column in Fig. 7, we can see that the search history of the grasshoppers is mostly concentrated in one region, indicating that the LFGOA algorithm can quickly locate promising regions. In order to see the changes of the grasshoppers' positions during the search, the trajectories of eight grasshoppers are plotted in the second and third columns of Figs. 7, 8, and 9 as well. In the fourth and fifth columns of Fig. 7, box plots are used to confirm the stability of the LFGOA algorithm: the fourth column depicts the fitness status in five groups (each group covering 20 iterations) at every stage, and the fifth column depicts the position changes in five groups (each group covering 20 iterations) at every stage.
The unimodal test functions F1-F7. In the first column of Fig. 7, for the unimodal test functions F1-F7, it can be clearly seen that the agents tend to explore promising regions of the search space and then exploit very accurately around the global optima over the course of iterations, with dozens of agents roughly clustered together.
In the second and third columns of Fig. 7, the trajectory graphs of eight grasshoppers (as representatives of all grasshoppers) show the grasshoppers' dynamic position changes during optimization. From these columns we can see that the third grasshopper in F2 and F7, the fifth and seventh in F1, and the fifth in F5 undergo only slight fluctuations during the search. We can also see that the third grasshopper in F1, the fourth and sixth in F3, the third in F4, and the first, third and eighth in F6 make abrupt, large fluctuations in the initial stage of optimization. This exploration of the search space takes place due to the high repulsion rate of the LFGOA algorithm. It is also seen that, as the optimization proceeds, the fluctuations decrease gradually over the course of iterations. This is due to the attraction forces as well as the comfort zone between grasshoppers. According to Berg et al. 38, this behaviour can guarantee that an algorithm eventually converges to a point and searches locally in the search space.
There are mild correlations between the trajectory graphs of the grasshoppers in the second and third columns of Fig. 7 and the search history in the first column: small fluctuations of the grasshoppers correspond to small, tightly clustered scatter regions, while large fluctuations correspond to large scattered regions. From the trajectory graphs and the search history it can therefore be inferred, to some extent, that the LFGOA algorithm converges effectively while avoiding most local optima. To analyse the randomness of LFGOA, box plots are used to compare the fitness values of the LFGOA algorithm. Since each box contains 50% of the data, the height of the box directly reflects the level of fluctuation of the fitness. The box plot for the unimodal benchmark function F5 in the fourth column of Fig. 7 is relatively short, reflecting a slight fluctuation of the fitness, which corresponds to the small promising region of the search space of F5 in the first column of Fig. 7. There are a few outliers among all the unimodal benchmark functions F1-F7, corresponding to separate scattered clusters outside the large promising regions of the search space. The box plot for the unimodal benchmark function F4 in the fifth column of Fig. 7 is relatively tall, reflecting large fluctuations of the position changes at every search stage, which corresponds to the abrupt, large fluctuations of the grasshoppers in the initial stage of optimization shown in the second and third columns of Fig. 7.
The multimodal test functions F8-F13. From the first column in Fig. 8, for the multimodal benchmark functions F8-F13, it can be clearly seen that the agents tend to explore promising regions of the search space and then exploit very accurately around the global optima over the course of iterations, with dozens of agents roughly clustered together.
From the second and third columns of Fig. 8, we can see that the grasshoppers in F9, the sixth and eighth in F10, and the third in F12 undergo only slight fluctuations during the search. In contrast, the first, fifth, sixth, seventh and eighth grasshoppers in F8; the first, third, fourth, sixth and eighth in F11; the fifth, sixth and eighth in F12; and the third in F13 all make abrupt, large fluctuations in the initial stage of optimization.
There is no outlier in the box plot of the multimodal benchmark function F10 in the fourth column of Fig. 8, reflecting that the fluctuation of the fitness is not large and that the grasshoppers cluster around a relatively small promising region of the search space.
There are a few outliers among the multimodal benchmark functions F8-F13 in the fifth column of Fig. 8, corresponding to separate scattered clusters outside the large promising regions of the search space.
The composite test functions F14-F23. From the first column in Fig. 9, for the composite benchmark functions F14 and F15, it can be clearly seen that the agents tend to explore promising regions of the search space and then exploit very accurately around the global optima over the course of iterations, with dozens of agents roughly clustered together. For the composite benchmark functions F21, F22 and F23, the search history shows that the agents extensively explore promising regions of the search space and exploit the best target, with the scatter forming a thin stripe shape.
From the second and third columns of Fig. 9, we can see that in F21, F22 and F23 the grasshoppers make abrupt, large fluctuations from positive values towards zero in one direction in the initial stage of optimization, during the extensive search. There are a few outliers among the composite benchmark functions in the fourth and fifth columns of Fig. 9, corresponding to separate scattered clusters outside the large promising regions of the search space.

Results and discussion
As shown in Section 4, the LFGOA algorithm significantly outperforms the others in terms of numerical optimization. There are several reasons why the LFGOA algorithm performed well in most of the test cases. First, the Levy-flight strategy: Levy flight increases the diversity of the population and helps the algorithm jump out of local optima more effectively, which makes LFGOA faster and more robust than GOA. Second, in GOA, the fittest grasshopper (the one with the best objective value) during optimization is assumed to be the target. This allows GOA to save the most promising target in the search space in each iteration and requires the grasshoppers to move towards it, with the hope of finding a better and more accurate target as the best approximation of the real global optimum in the search space.
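As an illustration of the first point, Levy-flight steps are commonly generated with Mantegna's algorithm. The sketch below is a minimal, hypothetical implementation: the `scale` factor and the way the step perturbs a position relative to the best agent are illustrative assumptions, not the exact LFGOA update rule.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    # Standard deviation of the numerator Gaussian (Mantegna, 1994)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    # Heavy-tailed step: occasional large jumps help escape local optima
    return u / abs(v) ** (1 / beta)

def levy_perturb(position, best, beta=1.5, scale=0.01):
    """Perturb a grasshopper position around the best agent with a Levy step (illustrative)."""
    return [x + scale * levy_step(beta) * (x - b) for x, b in zip(position, best)]
```

The heavy tail of the Levy distribution produces mostly small moves with occasional long jumps, which is exactly the diversity-preserving behaviour the paragraph describes.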
Therefore, this approach promotes the exploration of promising feasible regions and is the main reason for the superiority of the LFGOA algorithm. Third, the LFGOA algorithm has an explicit restart mechanism. These are the reasons why LFGOA performs better than the other algorithms in the results section. Most of the eight comparison algorithms have no restart mechanism for significant abrupt movements in the search space, and this is likely why their performance is not good enough. In summary, the discussion and findings of this work clearly demonstrate the quality of the exploration, exploitation, local optima avoidance, and convergence rate of the LFGOA algorithm.
Real application of LFGOA in constrained engineering problems. Engineering constrained optimization problems are complex; sometimes the optimal solutions of interest do not even exist 39. Such problems have been used by many researchers to evaluate the performance of different algorithms 40. Although the results discussed above prove and verify the high performance of the LFGOA algorithm, its performance must also be confirmed on real-life constrained engineering optimization problems. In this section, the effectiveness of the LFGOA algorithm is verified in terms of its ability to solve constrained engineering optimization problems in practical applications; seven well-studied constrained engineering design examples are selected to verify the proposed LFGOA algorithm, including Himmelblau's nonlinear optimization problem, cantilever beam design, car side impact design, gear train design, pressure vessel design, speed reducer design, and tubular column design. Different real-world problems often have different constraints, so a suitable approach is needed to deal with them 41. The main idea is to transform the actual optimization problem into a mathematical model, and then use the LFGOA algorithm to find the optimal solution. Normally, f(x) is the fitness function, x represents a point in the search space, x1, x2, ..., xn represent the different dimensions, and there are several equality and inequality constraints in engineering constrained optimization problems. Since the search agents of the proposed LFGOA algorithm do not rely solely on the fitness function to update their locations, the simplest constraint-handling method (penalty functions) can be used effectively to deal with constraints 42. That is, if a search agent violates any constraint, it is assigned a large objective function value.
In this way, it is automatically replaced by a new search agent in the next iteration. So, we use penalty functions in which any solution that violates one of the constraints receives a heavily penalized objective value, while the LFGOA algorithm retains good values for feasible solutions.
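A minimal sketch of this static-penalty scheme (the function names and the penalty coefficient below are illustrative assumptions, not taken from the paper):

```python
def penalized_fitness(objective, constraints, x, penalty=1e10):
    """Add a large penalty for every violated inequality constraint g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation

# Toy example: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
objective = lambda x: x[0] ** 2
constraints = [lambda x: 1.0 - x[0]]
```

A feasible point keeps its true objective value, while an infeasible one is pushed to a huge value and is therefore discarded by any selection step in the next iteration.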
Himmelblau's nonlinear optimization problem. Before solving the engineering constrained problems, the LFGOA was benchmarked on a well-known problem, namely Himmelblau's problem, a relatively complex constrained minimization problem with five positive design variables, six nonlinear inequality constraints, and ten boundary conditions. This problem was originally proposed by Himmelblau 43 and has been widely used as a benchmark nonlinear constrained optimization problem in many fields. The results are compared with several methods from the literature, including the method of ref. 47 and the Differential gradient evolution plus algorithm 48. It can be clearly seen that the LFGOA algorithm performed better without any constraint violation and is feasible on this problem. The convergence curve in Fig. 10 shows the function values versus the iteration numbers for this constrained problem.
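The paper's own equations for this problem were lost in extraction; the sketch below follows the version of Himmelblau's problem commonly reproduced in the benchmark literature (three double-sided constraints written as six inequalities g(x) <= 0), and should be checked against the original formulation.

```python
def himmelblau_objective(x):
    """Himmelblau's nonlinear objective, as usually stated in the literature."""
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3**2 + 0.8356891 * x1 * x5
            + 37.293239 * x1 - 40792.141)

def himmelblau_constraints(x):
    """Six inequalities g(x) <= 0 from the three double-sided constraints."""
    x1, x2, x3, x4, x5 = x
    g1 = 85.334407 + 0.0056858*x2*x5 + 0.0006262*x1*x4 - 0.0022053*x3*x5
    g2 = 80.51249 + 0.0071317*x2*x5 + 0.0029955*x1*x2 + 0.0021813*x3**2
    g3 = 9.300961 + 0.0047026*x3*x5 + 0.0012547*x1*x3 + 0.0019085*x3*x4
    # 0 <= g1 <= 92, 90 <= g2 <= 110, 20 <= g3 <= 25
    return [g1 - 92, -g1, g2 - 110, 90 - g2, g3 - 25, 20 - g3]

# Variable bounds commonly used for this benchmark
BOUNDS = [(78, 102), (33, 45), (27, 45), (27, 45), (27, 45)]
```

Any penalty-based optimizer (such as the one sketched earlier in this section) can be run directly on these two functions together with the bounds.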
Cantilever beam design. Cantilever beam design is a classical structural engineering problem. The task is to minimize the total weight of a cantilever beam by optimizing the parameters of its hollow square cross-sections. There are five square blocks, of which the first block is fixed and the fifth carries a vertical load.
For this well-known case, Fig. 11 shows the shape of the cantilever beam: the beam is rigidly supported at the rightmost block, a vertical force acts on the free node at the left side, and the other blocks are left free. The beam consists of five hollow square blocks with constant thickness, whose heights (equal to their widths) are the decision variables of the optimization. To evaluate the performance of the proposed LFGOA in solving this problem, the algorithms chosen for comparison are the Artificial hummingbird algorithm 29 and the Gradient-Based Optimizer 33 from the literature. The results obtained by LFGOA and their comparison with these state-of-the-art metaheuristics are reported in Tables 17 and 18, while the statistical results for each considered strategy are detailed in Table 18.
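The cantilever weight formulation is not reproduced in this extraction; a sketch of the standard version commonly stated in the literature is given below (the coefficients are assumptions taken from that standard version, not from the paper's own equations):

```python
def cantilever_weight(x):
    """Total weight, proportional to the sum of the five section heights."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Vertical-displacement constraint in the form g(x) <= 0."""
    return (61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3
            + 7 / x[3]**3 + 1 / x[4]**3) - 1

# Each height is typically bounded, e.g. 0.01 <= x_i <= 100
```

The single displacement constraint couples all five blocks: shrinking any block reduces weight but increases its 1/x^3 contribution, which is what makes the trade-off nontrivial.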
It is evident from Tables 19 and 20 that the proposed LFGOA algorithm performed better without any constraint violation. The convergence curve showing the function values versus the iteration numbers for this constrained problem is given in Fig. 13.
Discrete engineering problem: gear train design. The drive-wheel transmission system of high-speed trains mostly adopts a gear transmission structure. Due to the limited size of the structure, the pinion gear and the motor drive shaft are connected by an interference fit, and vibration caused by an unreasonable design can lead to system failure. The objective of the gear train design problem, a classic mechanical engineering problem, is to minimize the cost of the gear ratio of the gear train. The gear ratio is defined as the ratio of the angular velocity of the output shaft to the angular velocity of the input shaft. The parameters of this problem are discrete with an increment size of 1, since they define the numbers of teeth of the gears (Ta, Tb, Tc, Td); the constraints only limit the variable ranges. Gear train design is a mixed problem in which various types of design variables (continuous, discrete, and integer) must be determined. Simply stated: given a fixed input drive and a number of fixed output drive spindles, how can the spindles be driven by the input using the minimum number of connecting gears in the train? To handle the discrete parameters, each search agent is rounded to the nearest integer before the fitness evaluation. The numbers of teeth Ta (= x1), Tb (= x2), Tc (= x3), and Td (= x4) are considered as the design variables, as illustrated in Fig. 14.
The mathematical formulation is as follows: minimize the deviation of the gear ratio, defined as the angular velocity of the output shaft divided by the angular velocity of the input shaft, from its target value. The design constraint is that the number of teeth on any gear should be in the range [12, 60]; in other words, the constraints only limit the variable ranges. This section uses the proposed LFGOA algorithm to solve the gear train design problem and compares the results with other optimization algorithms, including Social Network Search 49, an enhanced hybrid arithmetic optimization algorithm 51, the Ant Lion Optimizer 52, and the Multi-Verse Optimizer 36 from the literature. Table 21 compares the minimum cost and design variables obtained using the LFGOA algorithm and the other optimization algorithms, while the statistical results for each considered strategy are detailed in Table 22.
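One common statement of this objective uses the target ratio 1/6.931; the sketch below is that standard variant (the pairing of teeth in the numerator and denominator varies across papers, so treat this as an assumption rather than the paper's exact formula):

```python
def gear_ratio_error(x):
    """Squared error between the achieved gear ratio and the target 1/6.931.

    Teeth counts are integers in [12, 60]; continuous candidates are rounded
    to the nearest integer before evaluation, as described in the text.
    """
    t_a, t_b, t_c, t_d = (round(v) for v in x)
    return (1.0 / 6.931 - (t_b * t_c) / (t_a * t_d)) ** 2
```

Because many integer combinations approximate the target ratio almost equally well, several distinct designs can reach near-identical objective values, which is exactly the "similar optimal gear ratio" phenomenon discussed next.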
However, the optimal values obtained for the variables differ. It is worth pointing out that any feasible solution reaching the target ratio is an optimal solution; the values in Table 21 obtained by the five algorithms only roughly agree with each other. Therefore, each design can be considered a new design with a similar optimal gear ratio. Table 21 shows that the LFGOA algorithm gives competitive results for the number of function evaluations and is suitable for solving discrete constrained problems. Once more, these results prove that the proposed LFGOA algorithm can solve discrete real problems efficiently. As shown in Fig. 15, the convergence curve converges quickly, and solutions satisfying all constraints are obtained almost instantly.
Pressure vessel design. The pressure vessel design optimization task has also been popular among researchers and has been optimized in various studies. Pressure vessel design is a mixed discrete-continuous constrained optimization problem. Using rolled steel plate, the shell is made in two halves that are joined by two longitudinal welds to form a cylinder. The objective of this problem is to minimize the total cost, consisting of material, forming, and welding, of a cylindrical vessel as in Fig. 16. Both ends of the vessel are capped, and the head has a hemispherical shape. There are four variables in this problem: • Thickness of the shell (Ts), • Thickness of the head (Th), • Inner radius (R), • Length of the cylindrical section without considering the head (L).
In the pressure vessel, the thickness of the shell (Ts), the thickness of the head (Th), the internal radius (R), and the length of the cylindrical section without the head (L) are the variables to be optimized. The problem is subject to four constraints: Ts and Th are restricted to the available thicknesses of rolled steel plates, which are integer multiples of 0.0625 inch, while R and L are continuous variables. Many meta-heuristic methods have been adopted to optimize this problem, including Social Network Search 49, Composite Differential Evolution with a Modified Oracle Penalty Method 53, the Artificial hummingbird algorithm 29, Manta ray foraging optimization 54, a Hybrid Co-evolutionary Particle Swarm Optimization Algorithm 55, the Automatic Dynamic Penalisation method (ADP) for handling constraints with genetic algorithms 56, and a Hybrid Generalized Reduced Gradient-Based Particle Swarm Optimizer 57 from the literature. (Fig. 17 shows the convergence curve of the pressure vessel problem.) From Tables 23 and 24, it is evident that LFGOA obtains the best solution among the compared approaches. The statistical results in Table 24 also demonstrate that the proposed LFGOA method can solve this constrained optimization problem with discrete-continuous variables effectively and provide competitive statistical results. It should be noted that, given the comparable accuracy of the solutions, these results do not necessarily mean LFGOA always finds strictly better solutions.
As shown in Fig. 17, the convergence curve quickly converges towards the global optimum, and solutions satisfying all constraints are obtained almost instantly.
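The equations of this problem were lost in extraction; the sketch below is the widely used literature formulation (cost objective plus four inequality constraints g(x) <= 0), offered as an assumption to be checked against the paper's own version:

```python
import math

def vessel_cost(x):
    """Material, forming and welding cost of the cylindrical vessel."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

def vessel_constraints(x):
    """Four inequalities g(x) <= 0."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,                                    # shell thickness vs radius
        -th + 0.00954 * r,                                   # head thickness vs radius
        -math.pi * r**2 * l - (4/3) * math.pi * r**3 + 1296000,  # minimum volume
        l - 240,                                             # length limit
    ]
```

In a full solver, Ts and Th would additionally be snapped to integer multiples of 0.0625 inch before evaluation, mirroring how the gear-train teeth are rounded.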

Speed reducer design.
In mechanical systems, the speed reducer is one of the essential parts of the gearbox; it is a challenging benchmark engineering problem that can be employed for several applications. In this optimization problem, the weight of the speed reducer is to be minimized subject to 11 constraints, as shown in Fig. 18. The goal of the speed reducer design problem is to minimize the total weight of the reducer by optimizing seven variables, described as follows: • the width of the gear face (cm) (x1 = b), • the module of the teeth (cm) (x2 = m), • the number of teeth on the pinion (x3 = p), • the length of the first shaft between bearings (cm) (x4 = l1), • the length of the second shaft between bearings (cm) (x5 = l2), • the diameter of the first shaft (cm) (x6 = d1), • the diameter of the second shaft (cm) (x7 = d2).
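Using the seven variables listed above, the weight objective in its standard literature form can be sketched as follows (the 11 constraints are omitted for brevity, and the coefficients are assumptions from that standard form rather than from the paper):

```python
def reducer_weight(x):
    """Total weight of the speed reducer (standard benchmark objective)."""
    b, m, p, l1, l2, d1, d2 = x
    return (0.7854 * b * m**2 * (3.3333 * p**2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)        # shaft bores removed from the gear
            + 7.4777 * (d1**3 + d2**3)           # shaft material
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))  # bearing-span material
```

The negative bore term is why naively shrinking the shaft diameters does not monotonically reduce the weight, which keeps the problem nontrivial even before the 11 constraints are added.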
The statistical results of LFGOA and nine other optimization methods are compared in Tables 25 and 26. Among the compared algorithms, LFGOA ranks first, superior to the other approaches in optimizing the reducer design; our method finds better geometric variables for this case. Hence, our result is feasible and verifies the effectiveness of the proposed LFGOA algorithm. The results demonstrate that the proposed LFGOA can provide reliable and very promising solutions compared with the other algorithms.
As shown in Fig. 19, the convergence curve quickly converges towards the global optimum, and solutions satisfying all constraints are obtained almost instantly. Tubular column design. Tubular column design is the problem of designing a uniform column of tubular section to carry a compressive load at minimum cost, as described in Fig. 20. There are two design variables in this problem: • the mean diameter of the column d (= x1) (cm), • the thickness of the tube t (= x2) (cm).
The five characteristic parameters of the constituent material of the column are set as: • P, the compressive load (= 2500 kgf), • σy, the yield stress (= 500 kgf/cm²), • E, the modulus of elasticity (= 0.85 × 10⁶ kgf/cm²), • ρ, the density (= 0.0025 kgf/cm³), • L, the length of the designed column (= 250 cm). The cost to be minimized is f(x) = 9.8x1x2 + 2x1. The stress induced in the column should be less than the buckling stress (constraint g1) and the yield stress (constraint g2). The mean diameter of the column is restricted between 2 and 14 cm (constraints g3 and g4), and columns with thickness outside the range 0.2-0.8 cm are not commercially available (constraints g5 and g6). Thus the mean diameter d (x1) and the thickness t (x2) vary in the ranges [2, 14] and [0.2, 0.8], i.e. 2 ≤ x1 ≤ 14 and 0.2 ≤ x2 ≤ 0.8.
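Using the parameter values listed above, the cost and the six constraints can be sketched as follows. The cost expression is taken from the text; the exact stress and buckling expressions follow the common literature version of this problem and are assumptions here:

```python
import math

def column_cost(x):
    """Cost f(x) = 9.8*x1*x2 + 2*x1 from the problem statement."""
    d, t = x  # mean diameter (cm), tube thickness (cm)
    return 9.8 * d * t + 2 * d

def column_constraints(x, P=2500, sigma_y=500, E=0.85e6, L=250):
    """Six inequalities g(x) <= 0: yield stress, buckling, and variable bounds."""
    d, t = x
    g1 = P / (math.pi * d * t * sigma_y) - 1                           # yield stress
    g2 = 8 * P * L**2 / (math.pi**3 * E * d * t * (d**2 + t**2)) - 1   # buckling
    return [g1, g2, 2 - d, d - 14, 0.2 - t, t - 0.8]
```

Writing the bound constraints g3-g6 in the same g(x) <= 0 form lets the penalty scheme described earlier handle all six constraints uniformly.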
This case has previously been tackled by many scholars using various heuristic methods, including Social Network Search 49, the Cuckoo search algorithm 46, the krill herd algorithm 60, the Cooperation search algorithm 61, and a Hybrid Generalized Reduced Gradient-Based Particle Swarm Optimizer 57 from the literature.
The statistical results of LFGOA and the other optimization methods are compared in Tables 27 and 28. Among the compared algorithms, LFGOA ranks first, superior to the other approaches in optimizing the tubular column design; our method finds better geometric variables for this case. Hence, our result is feasible and verifies the effectiveness of the LFGOA algorithm. The results demonstrate that the LFGOA algorithm can provide reliable and very promising solutions compared with the other algorithms. As shown in Fig. 21, the convergence curve quickly converges towards the global optimum, and solutions satisfying all constraints are obtained almost instantly.

Results and discussion
As shown in Section 5, seven real-world constrained engineering design examples, including Himmelblau's nonlinear optimization problem, cantilever beam design, car side impact design, gear train design, pressure vessel design, speed reducer design, and tubular column design, are selected to verify the proposed LFGOA algorithm. LFGOA has been demonstrated to perform better than, or be highly competitive with, the other algorithms in the literature on these seven constrained engineering optimization problems, and it can solve different real-world constrained engineering optimization problems. The advantages of LFGOA include its simplicity and the few parameters it requires. The work here shows LFGOA to be robust, powerful, and effective compared with the other algorithms in the literature. Constrained engineering optimization evaluation is a good way to test the performance of meta-heuristic algorithms, but it also has some limitations. For example, different tuning parameter values in the optimization methods might lead to significant differences in their performance. Also, constrained engineering optimization tests may arrive at entirely different conclusions if the termination criterion changes; if we change the population size or the number of iterations, we might draw a different conclusion.

Conclusion
This paper presented a novel enhanced Grasshopper Optimization Algorithm with Levy flight, called the LFGOA algorithm. Five metrics (i.e., search history, average fitness function, the best fitness history, the trajectory of the first dimension, and the convergence curve) are used to investigate LFGOA qualitatively. Next, 23 benchmark test functions are used to investigate the exploration, exploitation, local optima escape, and convergence performance of LFGOA. The results demonstrated the effectiveness of LFGOA in achieving optimal global solutions with more reliable convergence compared to eight other well-known optimization algorithms published in the literature. The Friedman ranking test is applied to evaluate the efficacy of LFGOA rigorously. The statistical results demonstrated that LFGOA can guarantee effective exploration while producing excellent exploitation, hence maintaining an equilibrium between exploitation and exploration strategies, which reveals the superior performance of LFGOA in a statistical sense against the comparative algorithms. Moreover, seven real-world engineering problems are used to further investigate the effectiveness of LFGOA. The results of the engineering design problems proved that LFGOA achieved markedly better results than the other well-known optimization algorithms, and it can handle various constrained problems.
Of course, many applications of the LFGOA algorithm are still worthy of further study because of its tremendous potential. Moreover, the LFGOA algorithm can be used to solve constrained engineering optimization problems in industry and engineering applications, as well as in other application domains. There are several possible future directions and ideas worth investigating regarding new variants of the LFGOA algorithm and its widespread applications; for example, feature selection, job scheduling, and parameter optimization still need to be addressed and can be suggested as future work.

Data availability
The datasets generated during or analysed during the current study are available from the corresponding author on reasonable request.