A modified shuffled frog leaping algorithm with inertia weight

The shuffled frog leaping algorithm (SFLA) is a promising bionic metaheuristic that combines ideas from shuffled complex evolution and the particle swarm optimization (PSO) framework. However, it is easily trapped in local optima and suffers from low optimization accuracy when applied to complex engineering problems. To overcome these shortcomings, a novel modified shuffled frog leaping algorithm (MSFLA) with inertia weight is proposed in this paper. To extend the range of directions and lengths available to the updated worst frog (vector) of the original SFLA, an inertia weight α is introduced, and the meaning and admissible range of the new parameter are fully explained. The convergence of the MSFLA is then analyzed and proved theoretically through a dynamic equation obtained by the z-transform. Finally, we compare the solutions of seven benchmark functions against the original SFLA, other improved SFLAs, the genetic algorithm, PSO, the artificial bee colony algorithm, and the grasshopper optimization algorithm with invasive weed optimization. The test results show that the modified algorithms effectively improve solution accuracy and convergence, and exhibit an excellent ability for global optimization on high-dimensional and complex function problems.


Original shuffled frog leaping algorithm
A frog population lives in a swamp or pond containing many discrete stones onto which the frogs can jump while looking for food. Individual frogs are allowed to communicate with each other, so as to learn from the experience of other individuals, improve their own jumping direction and step size, and achieve the purpose of information sharing. In order to find food quickly and accurately, the frog population is divided into several memeplexes of equal size but different abilities, each forming a small group within a local range. The local elite individuals guide the other individuals to search for food independently in different directions. After each memeplex has searched a certain number of times, the memeplexes exchange information through shuffling, which lets the frogs learn the new ideas of different memeplexes and realizes the social sharing of information, so that the whole population can quickly and reliably find the food source in the right direction. The basic concept of the SFLA is shown in Fig. 1.
The original SFLA is a combination of random and deterministic approaches. The deterministic strategy, comprising the local and global explorations, effectively guides the evolution of the algorithm toward the global optimum using heuristic information (the fitness function). The random elements improve the flexibility and robustness of the search pattern. The main steps of the algorithm are outlined below 10,11.
Step 1 A virtual population of F different frogs is generated randomly in the feasible D-dimensional space. Each frog represents a candidate solution of the optimization problem, and D is the number of decision variables. The i-th frog is thus expressed by a vector U_i = (U_i1, U_i2, ..., U_iD). Each frog has an associated fitness value f_i that measures its performance.
Step 2 All frogs are sorted in descending order of fitness, and the entire population is partitioned into m memeplexes (communities) Y_1, Y_2, ..., Y_m, each containing n frogs (i.e., F = m × n), such that the first frog goes to the first memeplex, the second to the second, the m-th to the m-th, the (m+1)-th back to the first, and so on. The frog with the best fitness value in the entire population is recorded as U_g.
Step 3 The memetic evolution of the SFLA starts. First, q distinct frogs are selected randomly from the n frogs within each memeplex Y_m to construct a submemeplex. The selection strategy gives frogs with higher performance values a higher probability of being selected. The frogs within the submemeplex are re-sorted in order of decreasing performance. For each submemeplex, the frogs with the worst and the best performance are identified as U_w and U_b, respectively. Then, the worst frog U_w in each submemeplex is updated as follows:

S = r (U_b − U_w), with |S| ≤ S_max (2)

where S is the updated step size and is a D-dimensional vector; r is a random number between 0 and 1; and S_max is the maximum step size allowed to be adopted by a frog after being infected. The new frog is then computed by

U'_w = U_w + S (3)

The evolution rule presented above is shown in Fig. 2a.
If the performance of the new U'_w is better than that of the old U_w, it replaces the worst U_w. Otherwise, the calculations in Eqs. (2) and (3) are repeated with respect to the global best frog, i.e., U_b is replaced by U_g. If no improvement is possible in this case either, a new frog (solution) is randomly generated to replace U_w. This operation is repeated for the required number of iterations L_max. The search process above is called the local exploration of the SFLA.
Step 4 Once the local exploration is completed for the m memeplexes, the algorithm returns to the global exploration for shuffling. For global information exchange, the frog population is rearranged according to the new fitness values, and the global best frog U_g is updated. Then the entire population is partitioned into m memeplexes again and a new local search starts. The local exploration and global shuffling process are carried out alternately until the maximum number of iterations G_max or the convergence criterion is satisfied. The final U_g is the optimal solution of the optimization problem.
The main parameters of the SFLA are: the number of frogs F, the number of memeplexes m, the number of frogs in each memeplex n, the number of frogs in each submemeplex q, and the maximum number of local evolutionary iterations L_max before shuffling. The final element is the stopping criterion of the algorithm, which can be either the maximum number of global shuffling iterations G_max or the solution accuracy ɛ.
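As a concrete illustration, the local update of Step 3 — a random step S = r(U_b − U_w), bounded componentwise by S_max, applied to the worst frog — can be sketched as follows. This is a minimal Python sketch (the paper's implementation is in MATLAB), and the example vectors are illustrative:

```python
import random

def sfla_leap(U_w, U_b, S_max, r=None):
    """One local SFLA leap (Eqs. (2)-(3)): S = r*(U_b - U_w), each component
    clipped to [-S_max, S_max], then U_w' = U_w + S."""
    if r is None:
        r = random.random()                        # r in [0, 1)
    step = [max(-S_max, min(S_max, r * (b - w))) for w, b in zip(U_w, U_b)]
    return [w + s for w, s in zip(U_w, step)]

# Toy usage: with r = 0.5 the worst frog lands at the midpoint of the segment
U_w_new = sfla_leap([4.0, -3.0], [1.0, 1.0], S_max=10.0, r=0.5)
```

Because r lies in [0, 1), the new position always stays on the segment between the old worst frog and the best frog, which is exactly the limitation the modification below addresses.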

Modified shuffled frog leaping algorithm (MSFLA)
The original frog leaping rule is inspired by this natural memetics (see Fig. 2a). As can be seen from the figure, the possible position of the updated new frog U'_w is restricted to the narrow area between its old position and the best frog's position U_b (or U_g), and its leap length and direction never surpass those of the best one. This indicates that the performance of U'_w cannot exceed the performance of U_g in the process of evolution 26. Clearly, this frog leaping rule limits the local search space in the memetic evolution process and may fall into local optima. To overcome this limitation, a modified frog leaping rule is introduced in this study. From the perspective of social cooperation, the second part of Eq. (3) represents the frog's social ability to learn from others, while the first part represents its self-cognitive ability in the evolution step. In the SFLA, the evolution process only updates the frog with the worst performance (i.e., not all frogs) within each submemeplex, which is obviously different from other swarm intelligence algorithms. Therefore, in the ideal case, the updated new frog should inherit and increase the advantages of the better one while reducing the influence of the old worst frog as far as possible. On the basis of the analysis above, a new parameter called the inertia weight α is introduced to improve the original frog leaping rule by controlling the share inherited from the worst frog. The new frog leaping rule is expressed as:

U'_w = α U_w + S (4)

where S is the step size of Eq. (2). The new parameter α plays a role in balancing the self-cognitive ability and the team learning capability of the worst frog. Besides, α not only lets the worst frog maintain its leaping inertia but also greatly increases the diversity of the solutions. If α = 0, the worst frog has no self-cognitive ability and the algorithm falls into a completely random state. If α > 1, the newly updated frog keeps too many of the genes of the worse performer and the convergence speed slows down greatly. When α = 1, Eq. (4) reduces to Eq. (3). So the reasonable range of the inertia weight α is 0-1. The two-dimensional vector synthesis of the modified frog leaping rule is demonstrated in Fig. 2b. It can be seen that the new rule extends the direction and the length of each frog's jump. By widening the local search space, the MSFLA helps prevent premature convergence and effectively improves the solution performance.
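Assuming Eq. (4) takes the form U'_w = αU_w + r(U_b − U_w) — consistent with the limiting behaviors described above (α = 1 recovers the original rule, α = 0 leaves only the random social term) — the modified leap differs from the original only in the inertia-weighted first term. A Python sketch with illustrative values:

```python
import random

def msfla_leap(U_w, U_b, alpha, S_max, r=None):
    """Modified leap (Eq. (4)): U_w' = alpha*U_w + S, where S = r*(U_b - U_w)
    is clipped to [-S_max, S_max]; alpha = 1 recovers the original rule."""
    if r is None:
        r = random.random()
    step = [max(-S_max, min(S_max, r * (b - w))) for w, b in zip(U_w, U_b)]
    return [alpha * w + s for w, s in zip(U_w, step)]

# alpha = 1 reproduces the original SFLA leap; alpha = 0 keeps only the social term
original = msfla_leap([4.0, -3.0], [1.0, 1.0], alpha=1.0, S_max=10.0, r=0.5)
social   = msfla_leap([4.0, -3.0], [1.0, 1.0], alpha=0.0, S_max=10.0, r=0.5)
```

Varying α between 0 and 1 sweeps the new position through a wider region than the original segment, which is the geometric widening shown in Fig. 2b.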
Theoretically, the inertia weight can be a positive constant or even a positive linear or nonlinear function of time. If α is a constant, especially one set to an unreasonable value, the diversity of the MSFLA may decrease, which is contrary to the original intention of the improvement. Therefore, in this research, three time-varying strategies for determining the value of the inertia weight are proposed, forming different modified SFLA models; they are inspired by the inertia weight strategies of PSO.
For ease of analysis, assume the number of frog memeplexes is m = 1, so that U_b = U_g. At the same time, assume the maximum frog-leaping step size S_max is unbounded, provided the leap does not exceed the domain of definition. Then the MSFLA updating formula for the worst frog (Eq. (4)) and Eq. (2) can be combined and simplified to the following form:

U_w(k + 1) = α U_w(k) + r (U_b(k) − U_w(k)) (5)

where k is the iteration number of the global search. We can obtain Eq. (6) by simplifying Eq. (5):

U_w(k + 1) = (α − r) U_w(k) + r U_b(k) (6)

Suppose the MSFLA is convergent; then, as the iteration number k increases, U_w(k) and U_b(k) form two globally convergent discrete time sequences. Their z-transforms therefore exist and can be denoted U_w(z) and U_b(z). Performing the z-transform on both sides of Eq. (6) under zero initial conditions gives

z U_w(z) = (α − r) U_w(z) + r U_b(z) (7)

Therefore, the system (MSFLA) described by Eq. (7) can be considered a discrete-time dynamic causal system whose reference input is U_b(z) and whose output is U_w(z). The system transfer function is shown below:

H(z) = U_w(z) / U_b(z) = r / (z − α + r) (8)

The precondition for system convergence is that the system must be stable, and the necessary and sufficient condition for stability is that the poles of H(z) all lie within the unit circle. This is satisfied under the following condition:

|α − r| < 1 (9)

For 0 < α < 1, inequality (9) clearly holds: since 0 < r < 1, the inequality |α − r| < 1 is satisfied. So the original hypothesis is established; that is, the MSFLA must be convergent.
For 1 ≤ α < 2, inequality (9) may still hold, so the MSFLA may still converge, but this adds a number of unstable factors to the stability of the MSFLA. For α ≥ 2, however, |α − r| > 1 always holds and the system H(z) is unstable, meaning that the MSFLA no longer converges.
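The stability condition can be checked numerically. The sketch below iterates the combined update read as the scalar recurrence u(k+1) = (α − r)u(k) + r·u_b, with r held fixed for illustration (in the algorithm r is redrawn at each leap):

```python
def iterate(alpha, r, u_b=1.0, u0=10.0, k_max=200):
    """Iterate u(k+1) = (alpha - r)*u(k) + r*u_b for k_max steps."""
    u = u0
    for _ in range(k_max):
        u = (alpha - r) * u + r * u_b
    return u

# |alpha - r| = 0.2 < 1: converges to the fixed point r*u_b/(1 - alpha + r) = 0.625
stable = iterate(alpha=0.7, r=0.5)

# |alpha - r| = 2.0 > 1: the pole lies outside the unit circle and u diverges
unstable = iterate(alpha=2.5, r=0.5)
```

Note that for α < 1 the worst frog converges to a point between its start and u_b rather than to u_b itself; in the full algorithm U_b is itself updated each generation, so this only demonstrates stability of the update, not the final optimum.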

Random inertia weight
In practical problems, the required value of the inertia weight may differ in each memetic generation. Usually, α can be drawn from a certain distribution, such as a uniform or normal distribution. A random value of the inertia weight is used here to enable the MSFLA to track the global optimum. The formula is as follows 27:

α = 0.5 + r/2

where r is a random number in [0, 1], the same as in Eq. (2); α is then a uniform random variable in the range [0.5, 1]. The modified SFLA model with the random inertia weight strategy is denoted MSFLA-R.
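The common random inertia weight formula α = 0.5 + r/2, which is consistent with the stated range [0.5, 1], can be sketched as:

```python
import random

rng = random.Random(2)  # fixed seed so the demo is reproducible

def random_inertia():
    """Random inertia weight: alpha = 0.5 + r/2, uniformly distributed on [0.5, 1)."""
    return 0.5 + rng.random() / 2.0

samples = [random_inertia() for _ in range(10000)]
```

Drawing a fresh α for each worst-frog update keeps the self-cognition term at least half-weighted while still varying between generations.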

Linear time-varying inertia weight
In most PSO variants, the inertia weight value is determined by the iteration number, which is called the time-varying inertia weight strategy. A linearly decreasing time-varying inertia weight was first introduced in Shi's and Eberhart's studies 28, and experimental results show that the strategy is effective. In view of this, the same inertia weight strategy is applied to the MSFLA model according to the following equation:

α = α_max − (α_max − α_min) × iter / L_max

where iter is the current iteration of local exploration within each memeplex, and α_max and α_min are the maximum and minimum values of the inertia weight α. In this method, the inertia weight is linearly decreased from the initial value (α_max) to the final value (α_min) according to the local iteration number within each memeplex. The modified SFLA model with the linear time-varying inertia weight strategy is denoted MSFLA-L.
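The linear schedule is straightforward to compute; the defaults below use the experimental settings reported later in the paper (α_max = 0.9, α_min = 0.4, L_max = 50):

```python
def linear_inertia(it, L_max, alpha_max=0.9, alpha_min=0.4):
    """Linearly decreasing inertia weight over the local iterations of a memeplex."""
    return alpha_max - (alpha_max - alpha_min) * it / L_max

start = linear_inertia(0, 50)   # alpha_max at the first local iteration
end = linear_inertia(50, 50)    # alpha_min at the last local iteration
```

Early local iterations thus favor inertia (exploration), later ones favor the social term (exploitation).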

Nonlinear time-varying inertia weight
The memetic evolution (or search) process is complex and nonlinear in most intelligent algorithms, and some researchers have proposed nonlinear adjustment strategies for the inertia weight in PSO variants. A typical nonlinear strategy, a quadratic function of the local iteration number 29, is used in the MSFLA model, where α_1 and α_2 are the initial and final values of the inertia weight. In each local exploration process, the inertia weight starts from α_1 and ends at α_2. The modified SFLA model with the quadratic weight strategy is denoted MSFLA-Q.
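The exact quadratic form of ref. 29 is not reproduced here; as a hedged illustration, one common quadratic decreasing schedule that starts at α_1 and ends at α_2 (with the paper's reported settings α_1 = 0.9, α_2 = 0.2 as defaults) is:

```python
def quadratic_inertia(it, L_max, alpha1=0.9, alpha2=0.2):
    """One plausible quadratic schedule: equals alpha1 at it = 0 and decreases
    to alpha2 at it = L_max. The exact function used in ref. 29 may differ."""
    return (alpha1 - alpha2) * (1.0 - it / L_max) ** 2 + alpha2
```

Compared with the linear schedule, this curve decays faster early on, shifting the balance toward the social term sooner in each local exploration.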
Based on the formulas and theory above, flow charts of the global exploration and local exploration (memetic evolution) of the three MSFLAs are shown in Fig. 3.

Time complexity analysis
For the SFLA, the number of individuals in each iteration is unchanged. Assume the population size is F, the number of global iterations is G_max, the time required for a single one-dimensional update of an individual is T, and the spatial dimension of an individual is D; the time complexity of the SFLA is then O(F × G_max × T × D). For the MSFLAs, the inertia weight α is a fixed value within one iteration and requires no repeated calculation, so its effect on the individual update time T is small and can be ignored. Therefore, the time complexity of the MSFLAs is also O(F × G_max × T × D). In summary, the SFLA and the three MSFLAs have the same time complexity, but the MSFLAs obtain better optimization performance thanks to the improved update strategy.

Experiment and discussions
To evaluate the performance of the MSFLA models, seven well-known benchmark functions are used for testing to ensure a reliable comparison 30. Functions f1-f3 and f7 are unimodal functions used to evaluate the exploitation capability of the MSFLAs, while f4-f6 are multimodal functions used to test their exploration performance. Table 1 shows the basic information of the benchmark functions.
All experiments are performed on a machine with a Core i7 1065G7 CPU, 8 GB of memory, and the 64-bit Windows 10 operating system. Each algorithm is repeated for 30 independent runs to eliminate random discrepancy. The algorithms are implemented in MATLAB 2019b. For a fair comparison, the base parameters of the SFLA and MSFLAs are set identically, as follows: the number of memeplexes m = 25, the number of frogs in each memeplex n = 25, the number of evolved individuals selected from each memeplex q = 20, and the local iteration number within each memeplex L_max = 50. The solution accuracy ɛ, one of the two stopping criteria above, is the same for each problem, namely 1.00E−6 (except for f7, where it is 30), and the other criterion G_max is equal to 3000 (G_max = 100D). These parameters are set to trade off computation time against accuracy. The parameters α_max and α_min are set to 0.9 and 0.4 in MSFLA-L, and α_1 and α_2 are set to 0.9 and 0.2 in MSFLA-Q, respectively. The internal parameters of each algorithm are set as shown in Table 2.
Table 1. The benchmark functions.

Figure 4 shows the mean convergence curves (over 30 independent runs) of the four algorithms on the seven benchmark functions. As can be seen from Fig. 4g, the precision and convergence speed of the solutions based on the three MSFLAs are much better than those of the original SFLA. When solving the benchmark functions other than f6 and f7, the fitness values of the three MSFLAs converge completely to the global optimum when the global iteration number is far less than G_max, whereas the solution errors of the original SFLA remain relatively large under the same conditions. Among the three MSFLAs, the performance of MSFLA-Q and MSFLA-L is similar, and both are better than MSFLA-R. There is no notable difference in the coordinate values of the turning points B and C. In the early phase of solving f6, the convergence curve of MSFLA-L coincides with that of MSFLA-Q, while in the later phase it coincides with that of MSFLA-R.
To make a comprehensive comparison of the three MSFLAs' performance, the results of 30 independent runs are summarized in Tables 3 and 4. In the two tables, the abbreviation "Std Dev" stands for standard deviation, which can be used to measure the stability of the algorithms.
Table 3 shows the calculation speed of the four algorithms on the benchmark functions under the same solution precision ɛ. Even for the simple unimodal benchmark functions, the original SFLA needs at least hundreds of operations (global shufflings, or iterations) to achieve the required precision. For example, when solving f1, its fastest run (the least global iteration number) takes 342 iterations, the slowest takes 436, and the mean is 369.4. For some complex or multimodal benchmark functions, it needs even more global iterations, in most cases more than 3000. However, when the modified SFLAs are used to solve these benchmark functions, the actual global iteration number is often no more than 10. At the same time, the stability of the three MSFLAs is far better than that of the original SFLA.
Table 4 shows the experimental results of the four SFLA algorithms in dimensions D = 30, 50, and 100. It can be seen from Table 4 that for f1-f5, the MSFLAs reach the theoretical optimal value on all three evaluation indexes and in all dimensions, while the SFLA's convergence accuracy decreases as the dimension increases, which indicates the effectiveness of the inertia weight strategy and the suitability of the MSFLAs for high-dimensional unimodal functions. However, for f6 and f7, finding the global optimal solution is quite challenging. The Ackley function f6 is a classical continuous, rotated, non-separable multimodal function. Its topological feature is that it is almost flat everywhere in the outer region but has a non-smooth hole or peak in the middle. f6 has many local optima, which can easily cause an algorithm to stall; the best value obtained is generally 8.88E−16. As the dimension increases, the best value and standard deviation of MSFLA-Q remain unchanged, so its performance is stable; MSFLA-L and MSFLA-R are more prone to falling into local optima, and the SFLA performs worst. The Rosenbrock valley function f7 is a typical ill-conditioned, nonconvex, unimodal function that is difficult to minimize, with obvious correlation between variables. It is a classic optimization problem, also known as the banana function. Because this function provides little information to guide the search, many algorithms have difficulty identifying the search direction when solving it, and there is little chance of finding the global optimum; therefore, this function is also commonly used to evaluate the execution efficiency of optimization algorithms. For f7, when D = 30 the MSFLAs are better than the SFLA, when D = 50 the result is the opposite, and when D = 100 the results of the four algorithms become similar, which shows that the improved algorithms are less effective on this kind of function. In general, for the other types of test functions, the MSFLAs' convergence curves lie lowest for most of the iterations. The results show that the algorithms have high convergence efficiency, which verifies the effectiveness of the optimization strategy.
Table 5 shows a comparison of the accuracy achieved on the seven benchmark functions by other optimization algorithms. The experimental data for these algorithms are taken from the references, and the reported values may vary slightly with computer configuration. The comparison in Table 5 shows that the three MSFLAs proposed in this paper have better robustness and generalization ability. Even for the function f6, the solution precision of the three modified SFLAs is 4-6 orders of magnitude higher than that of the original SFLA and 14-15 orders higher than that of the three algorithms ASFLA, FSFLA, and DSFLA, which is basically equivalent to the accuracy of the BFCEA algorithm. For the ill-conditioned, nonconvex unimodal function f7, the accuracy of the improved algorithms is basically the same as that of the original SFLA, with only the BFCEA and LSHADE algorithms doing better, which shows that all of them encounter difficulties on this problem. Although the LSHADE algorithm adopts a more complex construction to improve its results, there is still a considerable gap relative to the theoretical value. For nonconvex multimodal and even ill-conditioned functions such as f7, although the convergence accuracy of the four algorithms is almost the same, the number of global iterations needed to meet the specified accuracy is significantly reduced and the convergence is accelerated. Compared with other intelligent optimization algorithms such as MPA and WOA, the three improved SFLAs have obvious advantages in solution accuracy for both unimodal and multimodal functions; indeed, their actual results exactly match the theoretical values (except for f6 and f7). This should be attributed to the good local search and global exploration abilities, and the inherent parallelism, of the SFLA family. The Wilcoxon Signed-Rank test is the most popular non-parametric test in statistics, and it can be applied to determine whether two sets of solutions (populations) differ in a statistically significant way 35. Each pair of values from the two populations is compared, and their numerical differences are ranked and analyzed. In short, the Wilcoxon Signed-Rank test returns a numerical result called the p-value, which determines the significance level of the difference between two algorithms; a difference is considered statistically significant if and only if the p-value is less than 5%. The p-values in Table 6 show that the observed superiority is statistically significant, since all p-values are much less than 5%, which further reflects the robustness of the proposed MSFLA algorithms.
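As an illustration of the test, the sketch below computes a two-sided Wilcoxon signed-rank p-value with only the standard library, using the normal approximation to the rank-sum distribution (libraries such as scipy.stats.wilcoxon provide production implementations, including exact small-sample p-values):

```python
from statistics import NormalDist

def wilcoxon_signed_rank_p(x, y):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation.
    Zero differences are dropped; tied |d| values receive average ranks."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                  # assign average ranks over ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)  # positive-rank sum
    mu = n * (n + 1) / 4
    sigma = (n * (n + 1) * (2 * n + 1) / 24) ** 0.5
    z = (w_plus - mu) / sigma
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# One algorithm consistently better by 5 over 30 paired runs: a tiny p-value
p = wilcoxon_signed_rank_p(list(range(30)), [v + 5 for v in range(30)])
```

When the paired differences are symmetric around zero, the positive-rank sum equals its expectation and the p-value approaches 1, reflecting no significant difference.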
In general, the foregoing simulation results reveal that the three proposed algorithms with different inertia weight strategies are superior to the original SFLA in terms of adaptability, stability, and rapid global search ability.

Engineering design problems
To verify the feasibility of the MSFLAs for constrained optimization problems in engineering design, the three MSFLAs and the SFLA are applied to the optimal design of a tension/compression spring and of a cantilever beam. Both are multi-constrained, single-objective problems. The algorithm parameters and population size are kept constant, the maximum number of iterations is 1000, and each algorithm is run 30 times independently.

Tension/compression spring design problem
The goal of tension/compression spring optimization is to minimize the weight of the spring in Fig. 5. The variables are the mean diameter of the spring coil d (x1/cm), the diameter of the spring wire D (x2/cm), and the effective number of coils of the spring N (x3). The constraints are the minimum deflection (g1), the shear stress (g2), the surge frequency (g3), and the outer diameter limit (g4) 36.
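For reference, the standard tension/compression spring formulation from the optimization literature can be evaluated as below. Note that this textbook form takes x1 as the wire diameter and x2 as the mean coil diameter, whereas the variable ordering stated above differs, so this block is a sketch of the standard model, not necessarily the paper's exact equations:

```python
def spring(x):
    """Standard tension/compression spring model (textbook form):
    x = (wire diameter, mean coil diameter, number of active coils).
    Returns (weight, [g1, g2, g3, g4]); the design is feasible iff all g_i <= 0."""
    x1, x2, x3 = x
    f = (x3 + 2.0) * x2 * x1 ** 2                       # spring weight
    g1 = 1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4)       # minimum deflection
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)             # shear stress
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)             # surge frequency
    g4 = (x1 + x2) / 1.5 - 1.0                          # outer diameter limit
    return f, [g1, g2, g3, g4]

# A well-known near-optimal design with weight ~0.012665
f, g = spring((0.051689, 0.356718, 11.288966))
```

A penalty on max(0, g_i) turns this into the single unconstrained objective the frog-leaping updates can minimize directly.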

Cantilever beam design problem
The cantilever beam design problem is shown in Fig. 6; the beam consists of five hollow members, and the objective is to minimize the weight of the beam. The variables are the cross-sectional widths x_i (i = 1, 2, ..., 5; in cm), and the constraint is the deflection of the cantilever beam 37. The mathematical model is as follows:

min f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5)

subject to: g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0

It can be seen from Table 8 that the MSFLAs provide the best value for the cantilever beam design problem, and their variable solutions decrease sequentially, while the gaps between the SFLA's variable solutions are too small to be practical for the design. The results show that the search performance of the MSFLAs is more powerful than that of the original algorithm.
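Assuming the standard benchmark formulation of this problem (objective 0.0624·Σx_i with the usual deflection constraint), a minimal evaluation sketch is:

```python
def cantilever(x):
    """Standard cantilever beam benchmark: minimize f = 0.0624 * sum(x)
    subject to g = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0."""
    f = 0.0624 * sum(x)
    g = (61.0 / x[0] ** 3 + 37.0 / x[1] ** 3 + 19.0 / x[2] ** 3
         + 7.0 / x[3] ** 3 + 1.0 / x[4] ** 3 - 1.0)
    return f, g

# A well-known near-optimal design: weight ~1.340, with the constraint ~active
f, g = cantilever((6.016, 5.309, 4.494, 3.501, 2.153))
```

The constraint is active at the optimum (g ≈ 0), which is why the widths of the five members decrease monotonically from the fixed end.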

Conclusions
In this paper, a modified shuffled frog leaping algorithm (MSFLA) has been developed by introducing an inertia weight, and three improved SFLAs have been formed according to different inertia weight strategies. The global convergence of the original SFLA, provided the global iteration (shuffling) number is large enough, has been proved in the literature 38 by establishing a Markov chain model. In the proposed MSFLAs, the update strategies

Figure 4. Convergence curves of algorithms on benchmark functions.

Table 3. Speed comparison of the solutions to the 7 benchmark functions based on the four algorithms. The symbol '-' indicates that the specified accuracy ɛ was not reached within 3000 global iterations.

Table 7 records the comparison experiments on the optimal values of the tension/compression spring design problem. The data in the table are average values. The results of all three MSFLAs are better than those of the basic SFLA, which indicates that the MSFLAs have better optimization accuracy on this problem (Supplementary information).

Table 4. Results of the algorithms on the test functions in different dimensions.

Table 5. Precision comparison of the solutions to the 7 benchmark functions based on other optimization algorithms.

Table 6. P-values of the Wilcoxon Signed-Rank test over thirty runs.

Table 7. Comparison of results for the tension/compression spring design problem.

Table 8. Comparison of results for the cantilever beam design problem.