Hybrid leader based optimization: a new stochastic optimization algorithm for solving optimization applications

In this paper, a new optimization algorithm called hybrid leader-based optimization (HLBO) is introduced that is applicable to optimization challenges. The main idea of HLBO is to update the algorithm population under the guidance of a hybrid leader. The stages of HLBO are modeled mathematically in two phases, exploration and exploitation. The efficiency of HLBO in optimization is tested by finding solutions to twenty-three standard benchmark functions of unimodal and multimodal types. The optimization results on unimodal functions indicate the high exploitation ability of HLBO in local search, giving good convergence to the global optimum, while the optimization results on multimodal functions show the high exploration ability of HLBO in global search, allowing it to accurately scan different areas of the search space. In addition, the performance of HLBO in solving the IEEE CEC 2017 benchmark suite, comprising thirty objective functions, is evaluated. The optimization results show the efficiency of HLBO in handling complex objective functions. The quality of the results obtained from HLBO is compared with the results of ten well-known algorithms. The simulation results show the superiority of HLBO in converging to the global solution, as well as in escaping locally optimal areas of the search space, compared to the ten competing algorithms. In addition, the implementation of HLBO on four engineering design problems demonstrates the applicability of HLBO to real-world problem solving.

The need to achieve better and globally optimal solutions has paved the way for the design and development of numerous optimization algorithms.
Exploration and exploitation are capabilities that enable optimization algorithms to be efficient in finding solutions. Exploration is the ability to search globally in different areas of the search space, while exploitation is the ability to search locally near the solutions already obtained, because better solutions may exist near them. Balancing exploration and exploitation plays a key role in the success of optimization algorithms in achieving optimal solutions 5. The main research question in the study of optimization algorithms is whether there is still a need to introduce new optimization algorithms, given that countless algorithms have been introduced so far. The No Free Lunch (NFL) theorem 6 answers this question. The NFL theorem states that there is no guarantee that an algorithm with optimal performance in solving one set of objective functions and problems will perform equally well in all optimization applications; it is not possible to ensure that a particular algorithm is the best optimizer for all optimization topics. The NFL theorem encourages researchers to develop new algorithms to find better solutions to optimization problems, and it has motivated the authors of this paper to develop a new optimization algorithm for optimization applications.
The innovation of this study is in introducing and designing a new optimization algorithm called Hybrid Leader-Based Optimization (HLBO). The main contributions of this paper are as follows:
• A new stochastic optimization algorithm is presented, whose fundamental idea is to guide the algorithm population based on a hybrid leader generated from three different members.
• The stages of HLBO are described in two phases, exploration and exploitation, and are mathematically modeled.
• The efficiency of HLBO is benchmarked by optimizing twenty-three objective functions of unimodal and multimodal types.
• To evaluate the capability of HLBO, its performance is compared with ten well-known algorithms.
The rest of this paper is organized as follows. Related works are presented in the next section. The Hybrid Leader-Based Optimization (HLBO) algorithm is introduced and modeled in the section "Hybrid leader-based optimization". Simulation studies are included in the section "Simulation studies and results". The discussion of the HLBO results is provided in the section "Results and discussion". The HLBO performance test on IEEE CEC 2017 is presented in the section "Evaluation of the effectiveness of HLBO in handling complex IEEE CEC 2017 objective functions". Conclusions and several subjects for further study are provided in the last section.

Related works
Optimization algorithms are stochastic techniques for solving optimization applications that are based on stochastic mechanisms, such as random trial-and-error methods, the modeling of natural processes, animal behavior, the physical and biological sciences, the rules of games, and other evolutionary processes 7. Based on the main idea applied in their design, optimization algorithms can be categorized into five groups: evolutionary-based, swarm-based, physics-based, game-based, and human-based optimization algorithms.
Evolutionary-based algorithms have been developed using the concept of natural selection, concepts from the biological and genetic sciences, and random operators such as selection, crossover, and mutation. The Genetic Algorithm (GA) 8 and Differential Evolution (DE) 9 are the most significant evolutionary algorithms, whose main inspiration is the modeling of the reproductive process. Simulation of the human immune system's response to diseases has paved the way for the design of the Artificial Immune System (AIS) algorithm 10.
Swarm-based algorithms are inspired by the behaviors and strategies of animals, insects, birds, and other swarming activities in nature. The most widely used and well-known techniques of this group are Particle Swarm Optimization (PSO) 11, Ant Colony Optimization (ACO) 12, Artificial Bee Colony (ABC) 13, and the Firefly Algorithm (FA) 14. The strategy of birds and fish in finding food sources using individual and collective information has been the basic inspiration in designing PSO. The main idea of ACO has been the ability of ant colonies to find the shortest path between the nest and a food source by taking advantage of pheromone deposition and accumulation. Utilizing the collective intelligence and smart behavior of the bee colony to search for and find food has been the fundamental inspiration in the design of ABC. The light emitted by fireflies serves a variety of purposes, such as attracting prey, attracting other members of the group (attracting the opposite sex), and communication; this fascinating phenomenon has inspired the development of the FA.
Searching strategies and behaviors of animals, birds, and insects in finding food sources or hunting prey have been the main ideas in the design of various techniques such as the Grey Wolf Optimizer (GWO) 15, Pelican Optimization Algorithm (POA) 16, Marine Predators Algorithm (MPA) 17, Orca Predation Algorithm (OPA) 18, Whale Optimization Algorithm (WOA) 19, for which numerous improvement efforts have led to enhanced WOA versions 20,21, Reptile Search Algorithm (RSA) 22, and Tunicate Swarm Algorithm (TSA) 23.
Physics-based algorithms have been developed on the basis of physical processes and the modeling of physical forces and laws. Simulated Annealing (SA) is the most familiar physics-based algorithm, based on simulating the cooling of molten metal during the annealing process 33. The use of the force of gravity along with Newton's laws of motion has been the basic principle employed in the design of the Gravitational Search Algorithm (GSA) 34,35. Mathematical modeling of the nuclear reaction process in two stages, nuclear fusion and nuclear fission, is employed in the design of Nuclear Reaction Optimization (NRO) 36. The application of three concepts from cosmology, namely wormholes, black holes, and white holes, has been the basis of the Multi-Verse Optimizer (MVO) design 37.
Game-based algorithms are inspired by player behaviors and the rules governing individual and group games. The strategy used by different players to put the pieces of a puzzle together and solve it has been the idea behind the design of the Puzzle Optimization Algorithm (POA) 38. Simulation of the coaching process, the holding of competitions, and the interaction of teams during a competitive volleyball season has led to the design of the Volleyball Premier League (VPL) optimization method 39. Mathematical modeling of the competition between teams playing a tug-of-war game and trying to win has been the main idea in the development of the Tug of War Optimization (TWO) approach 40.
Human-based algorithms are developed based on the simulation of human activities and behaviors in performing various tasks. Approaches in this group include Teaching-Learning-Based Optimization (TLBO), based on modeling the interactions of a teacher and learners in a classroom 41, Poor and Rich Optimization (PRO), based on modeling the efforts of rich and poor groups to improve their economic situation 42, and Human Behavior-Based Optimization (HBBO), based on modeling human thoughts and behaviors 43.

Hybrid leader-based optimization
In this section, the concepts of the proposed Hybrid Leader-Based Optimization (HLBO) approach are stated and the HLBO mathematical formulation is presented.

Inspiration and main idea of HLBO.
In population-based algorithms, each member of the population is a searcher in the problem-solving space and therefore a candidate solution. Based on the algorithm steps and information transfer, the population members are able to improve their positions to provide better solutions. Making the population update process dependent on specific members (such as the best or the worst member of the population) may prevent the algorithm from searching globally in the problem-solving space. This can lead to rapid convergence of the algorithm towards a local optimum, so that the algorithm fails to identify the main optimal area in the search space. Overreliance of the population update process on certain members therefore reduces the exploration ability of the algorithm. In the proposed HLBO method, a distinct hybrid leader is employed to update and guide each member of the algorithm population in the search space. This hybrid leader is generated from three different members: the best member, one random member, and the corresponding member.
Mathematical model of HLBO.
The HLBO population is similar to that of other population-based algorithms and can be mathematically modeled using a matrix according to Eq. (1).
where X is the HLBO population, X_i is the ith candidate solution, x_{i,j} is the value of the jth variable determined by the ith candidate solution, N is the size of the HLBO population, and m is the number of problem variables.
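Eq. (1) is not reproduced in this extract; a plausible reconstruction of the population matrix, consistent with the definitions above, is:

```latex
X =
\begin{bmatrix}
X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\
\vdots  &        & \vdots  &        & \vdots  \\
x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\
\vdots  &        & \vdots  &        & \vdots  \\
x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m}
\end{bmatrix}_{N \times m}
```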
The position of each member X_i, i = 1, 2, ..., N, of the population X is initialized randomly, considering the constraints of the problem variables, based on Eq. (2).
where r is a random real number from the interval [0, 1], and lb_j and ub_j are the lower and upper bounds of the jth problem variable, respectively.
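Eq. (2) is likewise not reproduced; based on these definitions, a plausible reconstruction of the random initialization is:

```latex
x_{i,j} = lb_j + r \cdot (ub_j - lb_j), \qquad i = 1, \dots, N, \; j = 1, \dots, m
```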
The objective function of the problem is evaluated for each of the candidate solutions determined by the members of the population X; the resulting values are arranged in a vector as specified in Eq. (3).
where F represents the vector of objective function values and F_i denotes the objective function value obtained from the ith candidate solution. The values obtained for the objective function are a measure of the quality of the candidate solutions. The member that provides the best value of the objective function is known as the best member (X_best), and the member that provides the worst value of the objective function is known as the worst member (X_worst). These members are updated in each iteration of the algorithm. What distinguishes optimization algorithms from each other is the process used to update the algorithm population. Two important and influential indicators of the performance of optimization algorithms that should be considered in the process of updating positions in the search space are exploration (global search) and exploitation (local search).

Phase 1: Exploration (global search).
Exploration is a feature that enables members of the algorithm population to accurately scan different areas of the search space in order to find the main optimal area. Excessive reliance on specific members of the population (such as the best member) in the position update process prevents global search and reduces the algorithm's exploration ability. Such dependence can lead to early convergence to a local optimum, so that the algorithm fails to identify the main optimal area in the search space. However, some population members, such as the best member, carry useful information that should not be overlooked. HLBO therefore uses a hybrid leader to update the members of the population. This hybrid leader is produced for each member of the population at each iteration. Three members of the population are influential in constructing the hybrid leader: (i) the corresponding member (the member to be guided by this hybrid leader), (ii) the best member, and (iii) a random member of the population.
The participation coefficient of each of these three members in producing the hybrid leader is based on the quality of that member in providing a better value of the objective function. The quality of each member of the population in presenting a candidate solution is calculated using Eq. (4).
Then, using the results of Eq. (4), the participation coefficients for each member are calculated using Eq. (5).
where i, k ∈ {1, 2, ..., N}, k ≠ i, q_i is the quality of the ith candidate solution, F_worst is the objective function value of the worst candidate solution, and PC_i, PC_best, and PC_k are the participation coefficients of the ith member, the best member, and the kth member (k is an integer selected randomly from the set {1, 2, ..., N}), respectively, in producing the hybrid leader.
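Eqs. (4) and (5) are not reproduced in this extract. A plausible reconstruction, consistent with the description above (a member's quality grows as its objective value moves away from the worst value, and the three participation coefficients are normalized so that they sum to one), is the following; the exact form is an assumption, not the authors' published formula:

```latex
% Eq. (4): quality of the i-th candidate solution (minimization assumed)
q_i = F_{worst} - F_i

% Eq. (5): participation coefficients of the corresponding, best, and k-th members
PC_i = \frac{q_i}{q_i + q_{best} + q_k}, \qquad
PC_{best} = \frac{q_{best}}{q_i + q_{best} + q_k}, \qquad
PC_k = \frac{q_k}{q_i + q_{best} + q_k}
```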
After determining the participation coefficients, the hybrid leader is generated for each member of the population using Eq. (6).
where HL_i is the hybrid leader for the ith member and X_k is a randomly selected population member whose index k is the row number of this member in the population matrix. The new position of each member of the population in the search space, under the guidance of the hybrid leader, is calculated using Eq. (7). This new position is accepted by the corresponding member if it improves the value of the objective function compared to the previous position; otherwise, the member remains in its previous position. These update conditions are modeled in Eq. (8).
where X_i^P1 is the new position calculated for the ith member based on the first phase of HLBO, F_i^P1 is its objective function value, r is a random real number from the interval [0, 1], I is an integer selected randomly from the set {1, 2}, and F_HLi is the value of the objective function obtained for the hybrid leader of the ith member.
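Eqs. (6)-(8) are not reproduced here. A plausible reconstruction, consistent with the definitions above, is shown below; the exact update rule in Eq. (7) is an assumption based on the stated roles of HL_i, r, and I:

```latex
% Eq. (6): hybrid leader for the i-th member
HL_i = PC_i \cdot X_i + PC_{best} \cdot X_{best} + PC_k \cdot X_k

% Eq. (7): candidate new position in the exploration phase
x_{i,j}^{P1} =
\begin{cases}
x_{i,j} + r \cdot \left( HL_{i,j} - I \cdot x_{i,j} \right), & F_{HL_i} < F_i \\[4pt]
x_{i,j} + r \cdot \left( x_{i,j} - HL_{i,j} \right),          & \text{otherwise}
\end{cases}

% Eq. (8): greedy acceptance of the new position
X_i =
\begin{cases}
X_i^{P1}, & F_i^{P1} < F_i \\[4pt]
X_i,      & \text{otherwise}
\end{cases}
```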

Phase 2: Exploitation (local search).
Exploitation is the ability of the members of the algorithm population to search locally for better solutions near the solutions already obtained. Therefore, in HLBO a neighborhood around each member of the population is considered, which allows that member to change position by searching locally in that area and finding a position with a better value of the objective function. This local search is modeled using Eq. (9) to improve and increase the exploitation ability of HLBO. In this phase, the newly calculated position is also accepted only if it improves the value of the objective function, which is modeled in Eq. (10), where X_i^P2 is the new position calculated for the ith member based on the second phase of HLBO, F_i^P2 is its objective function value, R is a constant equal to 0.2, t is the iteration counter, and T is the maximum number of iterations.
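Eqs. (9) and (10) are not reproduced here either. A plausible reconstruction of the shrinking local search and its acceptance rule, under the same assumptions as before, is:

```latex
% Eq. (9): local search around the current position; the neighbourhood
% radius R * (1 - t/T) shrinks as the iteration counter t approaches T
x_{i,j}^{P2} = x_{i,j} + R \cdot \left( 1 - \frac{t}{T} \right) \cdot (2r - 1) \cdot x_{i,j}

% Eq. (10): greedy acceptance of the locally searched position
X_i =
\begin{cases}
X_i^{P2}, & F_i^{P2} < F_i \\[4pt]
X_i,      & \text{otherwise}
\end{cases}
```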
Repetition process, pseudo-code, and flowchart of HLBO.
By implementing the first and second phases, all HLBO members are updated and one iteration of the algorithm is completed. The algorithm then enters the next iteration, and the HLBO population update process continues based on the exploration and exploitation phases according to Eqs. (4)-(10). This process continues until the last iteration of the algorithm, and finally the best candidate solution found during the iterations is reported as the solution to the problem. The HLBO pseudo-code is presented in Algorithm 1 and its flowchart is presented in Fig. 1.
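Algorithm 1 and Fig. 1 are not reproduced in this extract. The following is a minimal Python sketch of the iteration loop described above, built on the equation reconstructions given earlier; the specific update formulas are assumptions, not the authors' reference implementation.

```python
import numpy as np

def hlbo(objective, lb, ub, n_pop=30, n_iter=1000, R=0.2, seed=None):
    """Minimal sketch of Hybrid Leader-Based Optimization (minimization)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    m = lb.size

    # Eq. (2): random initialization within the variable bounds
    X = lb + rng.random((n_pop, m)) * (ub - lb)
    F = np.array([objective(x) for x in X])

    for t in range(1, n_iter + 1):
        best, worst = np.argmin(F), np.argmax(F)

        for i in range(n_pop):
            # Eqs. (4)-(5): quality-based participation coefficients (assumed form)
            k = rng.choice([j for j in range(n_pop) if j != i])
            q = F[worst] - F[[i, best, k]]
            pc = q / q.sum() if q.sum() > 0 else np.full(3, 1 / 3)

            # Eq. (6): hybrid leader built from the member itself, the best member,
            # and a randomly selected member
            HL = pc[0] * X[i] + pc[1] * X[best] + pc[2] * X[k]
            F_HL = objective(np.clip(HL, lb, ub))

            # Phase 1 (exploration), Eqs. (7)-(8): move relative to the hybrid leader,
            # then accept the new position only if it improves the objective value
            r, I = rng.random(m), rng.integers(1, 3)
            step = (HL - I * X[i]) if F_HL < F[i] else (X[i] - HL)
            X1 = np.clip(X[i] + r * step, lb, ub)
            F1 = objective(X1)
            if F1 < F[i]:
                X[i], F[i] = X1, F1

            # Phase 2 (exploitation), Eqs. (9)-(10): shrinking local search
            r2 = rng.random(m)
            X2 = np.clip(X[i] + R * (1 - t / n_iter) * (2 * r2 - 1) * X[i], lb, ub)
            F2 = objective(X2)
            if F2 < F[i]:
                X[i], F[i] = X2, F2

    best = np.argmin(F)
    return X[best], F[best]


if __name__ == "__main__":
    # Example: sphere function (an F1-style unimodal benchmark) in 10 dimensions
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = hlbo(sphere, lb=-100 * np.ones(10), ub=100 * np.ones(10),
                          n_pop=30, n_iter=200, seed=0)
    print("best objective value:", f_best)
```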

Computational complexity of HLBO
The HLBO initialization and preparation process has a computational complexity of O(Nm), where N is the number of population members and m is the number of problem variables. In each iteration, a hybrid leader must be generated for each member, so the computational complexity of generating the hybrid leaders is O(NmT), where T is the maximum number of iterations of the algorithm. The HLBO update process has two phases, exploration and exploitation, in both of which the objective function is evaluated; as a result, the computational complexity of the HLBO update process is O(2NmT). Thus, the total computational complexity of HLBO is O(Nm(1 + 3T)).

Simulation studies and results
This section presents the simulation studies and the analysis of the proposed HLBO performance in optimization. HLBO has been implemented to provide optimal solutions for twenty-three standard benchmark functions of three main types (complete definitions, domains, and suitable parameter values of functions F1 to F23 can be found in the paper 54): unimodal functions (F1 to F7), high-dimensional multimodal functions (F8 to F13), and fixed-dimensional multimodal functions (F14 to F23). The optimization results obtained from HLBO are compared with the performance of ten well-known algorithms, including PSO, MPA, HGS, SMA, GA, WOA, TLBO, TSA, GSA, and GWO. HLBO and the ten mentioned algorithms are employed in twenty independent runs to optimize the benchmark functions, where each run contains 1000 iterations. The optimization results are reported using four statistical indicators: mean, best, standard deviation, and median. Moreover, the rank of each algorithm in providing a better solution for each benchmark function, as well as for each group of objective functions, is specified. Table 1 lists the adjusted values of the control parameters of the ten competitor algorithms.

Evaluation of unimodal benchmark functions.
The results of optimizing the F1 to F7 benchmark functions using HLBO and the competitor algorithms are reported in Table 2. The experimental results show that HLBO provides the global optimum for F1 and F6. HLBO is the best optimizer compared with the competitor algorithms in optimizing F2, F4, and F7. HLBO ranks second in optimizing F3 and third in optimizing F5. What can be deduced from the analysis of the reported results is that HLBO is highly efficient in addressing unimodal optimization problems compared to the ten competitor algorithms.
Evaluation of high-dimensional multimodal benchmark functions.
The results of employing HLBO and the ten competitor algorithms in optimizing the F8 to F13 high-dimensional multimodal benchmark functions are reported in Table 3. HLBO has managed to find the global optimum in optimizing the functions F9 and F11. HLBO is the first best optimizer for handling the function F10. In the case of the functions F12 and F13, the algorithm HGS is the first best optimizer, while HLBO is the fourth best optimizer for these functions. Analysis of the simulation results shows the capability of HLBO in solving high-dimensional multimodal optimization problems.
Evaluation of fixed-dimensional multimodal benchmark functions.
The results of implementing HLBO and the competitor algorithms on the F14 to F23 benchmark functions are presented in Table 4. What is evident from the simulation results is that HLBO is the first best optimizer in solving the F14 to F23 benchmark functions compared to the competitor algorithms. The presented experimental results show that HLBO has superior performance over similar algorithms in dealing with multimodal optimization problems. The convergence curves of HLBO and the competitor algorithms in achieving solutions for the objective functions F1 to F23 are presented in Fig. 2.
Statistical analysis.
In this subsection, statistical analysis of the obtained optimization results is used to examine whether the superiority of HLBO over the competitor algorithms is statistically significant. The Wilcoxon rank-sum test 55 is employed to address this goal. In this test, an index called the p-value determines the superiority of the target algorithm over the competitor algorithm. The Wilcoxon test results are reported in Table 5. What can be deduced from these findings is that HLBO has a statistically significant superiority over a competitor algorithm in cases where the p-value is less than 0.05.
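The paper does not show how these p-values are computed. As an illustration only (not the authors' code), a pairwise Wilcoxon rank-sum comparison of two algorithms over repeated runs could be computed with SciPy as follows; the result arrays below are hypothetical.

```python
from scipy.stats import ranksums

# Hypothetical best-objective values from 20 independent runs of two algorithms
# on the same benchmark function (smaller is better for minimization).
hlbo_runs = [1.2e-9, 3.4e-9, 8.1e-10, 2.2e-9, 5.6e-9] * 4
pso_runs  = [4.3e-2, 1.1e-1, 7.8e-2, 9.5e-2, 6.0e-2] * 4

# Wilcoxon rank-sum test between the two samples of results.
stat, p_value = ranksums(hlbo_runs, pso_runs)

# A p-value below 0.05 indicates a statistically significant difference
# between the two sets of results.
print(f"statistic = {stat:.3f}, p-value = {p_value:.3e}")
```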

Evaluation of the effectiveness of HLBO in handling complex IEEE CEC 2017 objective functions
In the previous section, the performance of HLBO in handling unimodal and multimodal objective functions was examined, indicating the satisfactory results of the proposed approach. In this section, the effectiveness of HLBO in solving the complex IEEE CEC 2017 benchmark functions 56 is evaluated. The implementation results of HLBO as well as the ten competitor algorithms on the objective functions C1 to C30 are presented in Tables 6 and 7. What emerges from the simulation results is that HLBO ranks first in optimizing the C1, C2, C4, C5, C11 to C21, C24, C26, C27, C29, and C30 functions by providing the best performance compared to the competitor algorithms. The overall analysis of the simulation results for the C1 to C30 functions shows that HLBO has an acceptable efficiency in handling the IEEE CEC 2017 objective functions.

Results and discussion
Optimization algorithms, by utilizing exploration for global search and exploitation for local search, have the ability to handle optimization problems. To analyze the exploitation ability of HLBO in local search, unimodal objective functions, which have only one main peak, are suitable. In this type of optimization problem, the main challenge is convergence towards the global optimum. The optimization results of the unimodal functions using HLBO indicate the exploitation ability of the proposed method in converging to the globally optimal solution. In particular, HLBO has demonstrated its high local search ability by converging to the global optimum in handling the functions F1 and F6. High-dimensional multimodal functions, due to the existence of multiple locally optimal solutions, are a suitable option for measuring the exploration ability of optimization algorithms in global search and in finding the main optimal area. The main challenge in solving these problems is to accurately scan the search space and prevent the algorithm from getting stuck in locally optimal areas. The results of implementing HLBO on high-dimensional multimodal functions show that the proposed approach has an acceptable exploration ability in scanning the search space and finding the optimal area. The exploratory power of HLBO in identifying the optimal region is especially evident in the F9 and F11 functions, for which it has been able to reach the global optimum. In addition to having adequate exploration and exploitation abilities, striking the right balance between these two indicators is the key to the success of optimization algorithms. Fixed-dimensional multimodal functions have been selected to evaluate the ability of HLBO to strike a balance between exploration and exploitation. In this type of problem, it is important to simultaneously find the main optimal area based on global search and to converge as much as possible towards the global optimum based on local search. The optimization results of this type of function using the proposed approach show the high capability of HLBO in balancing exploration and exploitation to discover the optimal area and converge towards the global optimum.

Conclusion and future works
In this paper, a new optimization algorithm called Hybrid Leader-Based Optimization (HLBO) was introduced. The use of a hybrid leader generated from three different members was the main idea of HLBO for updating the algorithm population in the search space. The HLBO implementation process was mathematically modeled in two phases of exploration and exploitation. Twenty-three objective functions were employed to evaluate the performance of HLBO in achieving optimal solutions for optimization problems. The results of the unimodal functions indicated the high exploitation ability of HLBO to search locally and converge towards the global optimum. The results of optimizing the multimodal functions showed the high exploration ability of HLBO to search globally and discover the optimal area without getting trapped in local optima. For further analysis of HLBO, its efficiency in handling the complex IEEE CEC 2017 objective functions was studied. The results showed that HLBO is capable of solving such optimization problems. Comparing the results of HLBO with the performance of ten well-known algorithms showed that HLBO has superior performance, providing appropriate solutions in most cases due to an appropriate balance between exploration and exploitation. The proposed HLBO opens up several research directions for future work. Specific research potentials are the development of binary and multi-objective versions of HLBO. The employment of HLBO on optimization topics in various sciences as well as real-world applications is another suggestion for future studies. As with any stochastic optimization algorithm, there are concerns and limitations regarding the use of the proposed HLBO approach. We do not claim that HLBO is in general the best optimizer, because according to the NFL theorem there is no guarantee that an algorithm will perform effectively on all optimization problems. It is also possible that other algorithms exist, or that researchers will develop new algorithms in the future, that work better in some specific applications.