Introduction

Optimization is the technique of finding the best feasible solution among all existing ones. It is used in designing and maintaining many engineering, economic, and even social systems to minimize costs or maximize profits. Due to its wide application in different sciences, the topic has developed considerably and is studied in management, mathematics, industry, and many other branches of science1. To solve a real optimization problem, the corresponding mathematical model must first be built. Setting up a model means creating a complete description of the problem with variables and mathematical relationships so that all details of the optimization problem are captured2.

Deterministic optimization methods can be divided into gradient-based and non-gradient methods; they effectively solve linear, convex, and differentiable optimization problems with continuous search spaces. On the other hand, many real-world optimization problems feature nonlinear, non-convex objective functions, discrete search spaces, non-differentiability, high dimensionality, and high complexity3.

The inability of deterministic methods to address such optimization challenges has led to the emergence of effective stochastic approaches. Metaheuristic algorithms, the most prominent class of stochastic methods, are capable of tackling optimization problems based on random search, random operators, and trial-and-error processes4. The simplicity of their concepts, easy implementation, efficiency in nonlinear and non-convex environments, and independence from the type of problem are the features that have led to the widespread use and popularity of metaheuristic algorithms5.

The primary source of inspiration in the design of metaheuristic algorithms is various natural phenomena, swarm intelligence, animal life, biological sciences, physical laws, rules of games, and so on. Among the most famous metaheuristic algorithms are the genetic algorithm (GA)6, inspired by biology, and the particle swarm optimization (PSO)7, ant colony optimization (ACO)8, artificial bee colony (ABC)9, and northern goshawk optimization8 algorithms, inspired by animal life.

The critical issue with metaheuristic algorithms is that they do not guarantee finding the global optimal solution; the solutions they obtain are only close to the global optimum. The desire to achieve better solutions has led to the development of numerous metaheuristic algorithms.

Given the development of numerous metaheuristic algorithms, the main research question is: is there still a need to design new algorithms? In answer to this question, the No Free Lunch (NFL) theorem10 states that the success of an algorithm in handling one set of optimization problems is no guarantee of its successful performance on other optimization problems; nothing can be presumed about the success or failure of a method on a given problem. The NFL theorem thus implies that no particular algorithm can be introduced as the best optimizer for all optimization applications. The NFL theorem is also a source of motivation for researchers to come up with better solutions to optimization problems by designing new metaheuristic algorithms.

The innovation and novelty of the proposed chef-based optimization algorithm (CBOA) are:

  • This paper introduces a new metaheuristic algorithm based on the description of a training process.

  • Every educational process, in any type of school, has certain common properties, forms, and stages. In this paper, we were concretely motivated by the specifics of the process by which a new chef learns cooking skills.

  • The paper provides a two-phase mathematical model of the preparation of a new chef, according to the principles of a real cooking school.

  • Both phases are typical of all art schools (including cooking courses): every student wants to learn from the best chef, but, on the other hand, the greatest chefs do not want to train weak students. So, in the first phase, the chefs compete with each other so that a ranking of their quality can be created. Similarly, in the second phase, the students compete with each other so that a qualitative ranking of their cooking abilities can be created.

  • In the mathematical modeling of the first phase, we implemented two master chef strategies. These strategies model the fact that even chefs learn new cooking recipes by observing the teaching of other chefs (Strategy 1), and then they try to improve these observed recipes even more through their autonomous experimentation (Strategy 2).

  • In the mathematical modeling of the second phase, we implemented three student strategies. The first strategy of each student is to choose a chef and learn all of his/her skills. The second strategy of each student is to choose another chef and learn a single skill (one concrete recipe) from him/her. In the third strategy, students try to improve all their skills through self-experimentation.

  • The ability of CBOA to handle optimization problems is tested on fifty-two standard benchmark functions and compared with twelve well-known metaheuristic algorithms. In doing so, CBOA achieves much better results than these competing algorithms.

The rest of the paper is structured as follows: the literature review is presented in the “Literature review’’ section. The proposed CBOA is introduced and modeled in the “Chef-based optimization algorithm’’ section. The simulation studies and results are presented in the “Simulation studies and results’’ section. A discussion of the results and performance of the proposed CBOA is presented in the “Discussion’’ section. CBOA implementation on the CEC 2017 test suite is presented in the “Evaluation CEC 2017 test suite’’ section. The efficiency of CBOA in handling real-world applications is evaluated in the “CBOA for real world applications’’ section. Conclusions and several suggestions for future research are provided in the “Conclusions and future works’’ section.

Literature review

Metaheuristic algorithms, according to the primary source of design inspiration, are classified into five groups: (i) swarm-based, (ii) evolutionary-based, (iii) physics-based, (iv) game-based, and (v) human-based methods.

Theorizing on swarming activities and behaviors in the lives of birds, animals, aquatic creatures, insects, and other living things in nature has been the main source of inspiration in the development of swarm-based algorithms. PSO, ACO, and ABC are among the most widely used and popular swarm-based algorithms. The natural behavior of flocks of birds or schools of fish searching for food was the main idea of PSO. Discovering the shortest path between the nest and a food source based on the collective intelligence of ants was the main idea of ACO. The hierarchical efforts and activities of bee colonies in search of food were the main idea of the ABC. The ability of living organisms to find food sources in nature has also led to the design of several other swarm-based metaheuristic algorithms, such as the tunicate swarm algorithm (TSA)11, the African vultures optimization algorithm (AVOA)12, and the snake optimizer (SO)13. The strategies of living things when hunting and trapping prey have been the main idea in designing algorithms such as the grey wolf optimizer (GWO)14, the golden jackal optimization (GJO)15, the whale optimization algorithm (WOA)16, the reptile search algorithm (RSA)17, and the marine predator algorithm (MPA)18.

The concepts of natural selection, Darwin’s theory of evolution, and stochastic operators such as selection, crossover, and mutation have been used in the design of evolutionary algorithms. GA and differential evolution (DE)19 are among the most famous evolutionary algorithms whose main design idea is the reproduction process and its concepts.

The laws, concepts, and phenomena of physics have been a source of inspiration in the design of numerous methods that fall into the category of physics-based algorithms. Simulated annealing (SA) is the most significant physics-based algorithm, produced based on the physical phenomenon of metal annealing20. Physical forces and Newton’s laws of motion have been the main idea behind methods such as the gravitational search algorithm (GSA), based on the gravitational force21, and the spring search algorithm (SSA), based on the spring force22. Mathematical modeling of the natural water cycle has led to the design of the water cycle algorithm (WCA)23. Cosmological studies and space holes have been the inspiration for the multi-verse optimizer (MVO)24. The concepts of the Archimedes principle have been the main idea in the design of the Archimedes optimization algorithm (AOA)24.

The rules of games and the behavior of players, coaches, and referees have been a source of inspiration for designing game-based algorithms. Football game based optimization (FGBO)24 and the volleyball premier league (VPL)25 are two game-based approaches designed based on the modeling of football and volleyball leagues, respectively. The strategy of players in putting puzzle pieces together has been the design idea of the puzzle optimization algorithm (POA)26.

Human activities and behaviors in individual and social life have inspired the design of approaches that fall into the category of human-based algorithms. Teaching–learning-based optimization (TLBO) is one of the most famous human-based algorithms; it was developed by simulating the interactions between a teacher and students in a classroom27. The treatment process a doctor performs to treat patients has been the main idea in the design of doctor and patient optimization (DPO)28. The cooperation of team members to achieve the team’s common goal has been the main idea in the design of the teamwork optimization algorithm (TOA)29. The city councils evolution (CCE) is a human-based approach produced by modeling the evolution of city councils30. The strategic movement of army troops during war has been the idea employed in the design of the war strategy optimization (WSO)31.

To the best of our knowledge, based on the literature review, no metaheuristic algorithm inspired by the culinary education process has yet been designed. However, teaching cooking to people who attend training courses is an intelligent process that can motivate the design of a new metaheuristic algorithm. Consequently, in this study, a new optimization approach has been developed by mathematically modeling the cooking education process, which is discussed in the next section.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was not required as no humans or animals were involved.

Chef-based optimization algorithm

This part is devoted to the introduction and mathematical modeling of the proposed algorithm called the Chef-based optimization algorithm (CBOA).

Inspiration of CBOA

Cooking students and young cooks participate in training courses to improve their cooking skills and become chefs. This concept is analogous to metaheuristic algorithms, where several candidate solutions are initialized and then improved through an iterative process to determine the best candidate solution as the solution to the problem at the end of the algorithm implementation. Thus, the process of transforming a cooking student into a chef in a culinary school is a source of inspiration for the design of the proposed CBOA.

It is assumed that a certain number of chef instructors are present in a culinary school, each responsible for teaching a class. Each cooking student can choose which of these classes to attend. The chef instructor teaches the students cooking skills and techniques. However, chef instructors also try to improve their own skills, based on the instructions of the best chef instructor in the school and on individual exercises. Cooking students try to learn and imitate the skills of their chef instructor; in addition, they try to improve the skills they have learned through practice. At the end of the course, cooking students become skilled chefs as a result of the training they have received.

Mathematical modeling of the above concepts is used in designing the CBOA, which is discussed in the following subsections.

Algorithm initialization

The proposed CBOA approach is a population-based algorithm whose members consist of two groups of people, namely cooking students and chef instructors. Each CBOA member is a candidate solution that contains information about the problem variables. From a mathematical point of view, each member of the CBOA is a vector, and the set of CBOA members can be modeled using a matrix according to Eq. (1).

$$ X = \left[ {\begin{array}{*{20}c} {X_{1} } \\ \vdots \\ {X_{i} } \\ \vdots \\ {X_{N} } \\ \end{array} } \right]_{N \times m} = \left[ {\begin{array}{*{20}c} {x_{1,1} } & \cdots & {x_{1,j} } & \cdots & {x_{1,m} } \\ \vdots & \ddots & \vdots & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu \raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & \vdots \\ {x_{i,1} } & \cdots & {x_{i,j} } & \cdots & {x_{i,m} } \\ \vdots & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu \raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & \vdots & \ddots & \vdots \\ {x_{N,1} } & \cdots & {x_{N,j} } & \cdots & {x_{N,m} } \\ \end{array} } \right]_{N \times m} , $$
(1)

where \(X\) is the CBOA population matrix, \({X}_{i}=\left({x}_{i,1},{x}_{i,2},\dots ,{x}_{i,m}\right)\) is the \(i\)th CBOA member (candidate solution), \({x}_{i,j}\) is its \(j\)th coordinate (i.e., the value of the \(j\)th problem variable for the \(i\)th CBOA member), \(N\) is the population size, and \(m\) is the number of problem variables of the objective function (dimension of the problem).

The position of the CBOA members at the beginning of the algorithm implementation is randomly initialized for \(i=1,2,\dots,N\) and \(j=1,2,\dots,m\) using Eq. (2).

$${x}_{i,j}=l{b}_{j}+r\cdot \left(u{b}_{j}-l{b}_{j}\right),$$
(2)

where \(r\) is a random number in the interval \(\left[\text{0,1}\right]\), \(l{b}_{j}\) and \(u{b}_{j}\) are the lower and the upper bounds of the \(j\)th problem variable, respectively.
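As an illustrative sketch (the function name and example bounds below are our own, not from the paper), the initialization of Eq. (2) can be written as:

```python
import numpy as np

def initialize_population(N, m, lb, ub, seed=None):
    # Eq. (2): x_{i,j} = lb_j + r * (ub_j - lb_j), with r ~ U[0, 1]
    # drawn independently for every coordinate of every member.
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)  # lower bounds, length m
    ub = np.asarray(ub, dtype=float)  # upper bounds, length m
    return lb + rng.random((N, m)) * (ub - lb)

# Example: 5 members, 3 problem variables, all bounded by [-10, 10].
X = initialize_population(5, 3, [-10] * 3, [10] * 3, seed=0)
```

Every row of the resulting matrix is one candidate solution \({X}_{i}\), guaranteed to lie within the problem bounds.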

By inserting the values suggested by each CBOA member into the problem variables, a corresponding objective function value is evaluated. As a result, the objective function is evaluated \(N\) times (where \(N\) is the number of CBOA members), and \(N\) values are calculated for the objective function. These values can be represented using a vector corresponding to Eq. (3).

$$F={\left[\begin{array}{*{20}l}{F}_{1}\\ \vdots \\ {F}_{i}\\ \vdots \\ {F}_{N}\end{array}\right]}_{N\times 1}={\left[\begin{array}{*{20}l}F({X}_{1})\\ \vdots \\ F({X}_{i})\\ \vdots \\ F({X}_{N})\end{array}\right]}_{N\times 1},$$
(3)

where \(F\) is the vector of values of the objective function and \({F}_{i}\) is the value of the objective function obtained for the \(i\)th member of CBOA, where \(i=\text{1,2}, \dots , N.\)

The values of the objective functions provide essential information about the quality of the candidate solutions. The value of the objective function is the decision criterion for selecting the best candidate solution. Among CBOA members, the member with the best value for the objective function is recognized as the best member of the population and the best candidate solution. During the running of the algorithm, in each iteration, the members of the CBOA are updated, and the corresponding values of the objective function are calculated. It is, therefore, necessary to update the best member in each iteration based on comparing the values of the objective function.

Mathematical modeling of CBOA

After the algorithm is initialized, the CBOA steps are applied to the candidate solutions to gradually improve them. CBOA members consist of a group of chef instructors and a group of cooking students, and the update process differs between the two groups. Based on a comparison of objective function values, the CBOA members with better objective function values are selected as chef instructors. Therefore, if the rows of the CBOA population matrix are sorted in ascending order of the objective function value (so that the member in the first row is the best member), the first \({N}_{C}\) members are selected as the group of chef instructors and the remaining \({N-N}_{C}\) members form the group of cooking students. The sorted CBOA population matrix and the sorted objective function vector are specified in Eqs. (4) and (5).

$$ XS = \left[ {\begin{array}{*{20}c} {XS_{1} } \\ \vdots \\ {XS_{{N_{C} }} } \\ {XS_{{N_{C} + 1}} } \\ \vdots \\ {XS_{N} } \\ \end{array} } \right]_{N \times m} = \left[ {\begin{array}{*{20}c} {xs_{1,1} } & \cdots & {xs_{1,j} } & \cdots & {xs_{1,m} } \\ \vdots & \ddots & \vdots & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu \raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & \vdots \\ {xs_{{N_{C} ,1}} } & \cdots & {xs_{{N_{C} ,j}} } & \cdots & {xs_{{N_{C} ,m}} } \\ {xs_{{N_{C} + 1,1}} } & \cdots & {xs_{{N_{C} + 1,j}} } & \cdots & {xs_{{N_{C} + 1,m}} } \\ \vdots & {\mathinner{\mkern2mu\raise1pt\hbox{.}\mkern2mu \raise4pt\hbox{.}\mkern2mu\raise7pt\hbox{.}\mkern1mu}} & \vdots & \ddots & \vdots \\ {xs_{N,1} } & \cdots & {xs_{N,j} } & \cdots & {xs_{N,m} } \\ \end{array} } \right]_{N \times m} , $$
(4)
$$FS={\left[\begin{array}{*{20}c}{FS}_{1}\\ \vdots \\ F{S}_{{N}_{C}}\\ F{S}_{{N}_{C}+1}\\ \vdots \\ {FS}_{N}\end{array}\right]}_{N\times 1},$$
(5)

where \({N}_{C}\) is the number of chef instructors, \(XS\) is the sorted population matrix of CBOA, and \(FS\) is the vector of objective function values in ascending order. In the matrix \(XS\), members \({XS}_{1}\) to \(X{S}_{{N}_{C}}\) represent the group of chef instructors, and members \(X{S}_{{N}_{C}+1}\) to \(X{S}_{N}\) represent the group of cooking students. The vector \(FS\) includes, successively, the objective function values corresponding to \({XS}_{1}\) to \(X{S}_{N}\).
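The evaluation and sorting of Eqs. (3)–(5) can be sketched as follows (the helper name and the sphere test objective are our own illustration, not part of the paper):

```python
import numpy as np

def sort_population(X, objective, N_C):
    # Eq. (3): evaluate the objective for every row of X.
    F = np.array([objective(x) for x in X])
    # Eqs. (4)-(5): sort rows in ascending order of the objective value.
    order = np.argsort(F)
    XS, FS = X[order], F[order]
    # First N_C sorted rows act as chef instructors, the rest as students.
    return XS, FS, XS[:N_C], XS[N_C:]

sphere = lambda x: float(np.sum(x ** 2))  # sample objective to minimize
X = np.array([[3.0, 0.0], [1.0, 1.0], [0.0, 0.5], [2.0, 2.0]])
XS, FS, chefs, students = sort_population(X, sphere, N_C=2)
```

After sorting, `XS[0]` is the best member of the population and the first candidate for the role of best chef instructor.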

Phase 1: the updating process for group of chef instructors (update of \({XS}_{1}\) to \(X{S}_{{N}_{C}}\))

In a culinary school, several chef instructors are assumed to be responsible for teaching cooking skills to students. Chef instructors follow two strategies to improve their own cooking skills. In the first strategy, they emulate the best chef instructor and try to learn that instructor’s techniques. This strategy provides the global search and exploration capability of the CBOA.

The advantage of updating the chef instructors based on this strategy is that the top chefs (top population members) improve their skills based on the best chef (best population member) before they start teaching students. Hence, in the CBOA design, the position updates do not depend solely on the best member of the population. Furthermore, this approach prevents the algorithm from getting stuck in local optima and causes different areas of the search space to be scanned more accurately and effectively. Based on this strategy, a new position for each chef instructor is first calculated for \(i=1,2,\dots,{N}_{C}\) and \(j=1,2,\dots,m\) using the following equation

$${xs}_{i,j}^{C/S1}={xs}_{i,j}+r\cdot \left({BC}_{j}-I\cdot {xs}_{i,j}\right),$$
(6)

where \({XS}_{i}^{C/S1}\) is the new calculated status for the \(i\)th sorted member of CBOA (that is \({XS}_{i}\)) based on the first strategy (\(C/S1\)) of updating the chef instructor, \({xs}_{i,j}^{C/S1}\) is its \(j\)th coordinate, \(BC\) is the best chef instructor (denoted as \({XS}_{1}\) in the matrix \(XS\)), \({BC}_{j}\) is the \(j\)th coordinate of the best chef instructor, \(r\) is a random number from the interval \(\left[\text{0,1}\right]\), and \(I\) is a number that is selected randomly during execution from the set \(\left\{\text{1,2}\right\}\). This new position is acceptable to the CBOA if it improves the value of the objective function. This condition is modeled using Eq. (7).

$${XS}_{i}=\left\{\begin{array}{*{20}l}{XS}_{i}^{C/S1}, & {FS}_{i}^{C/S1}<{FS}_{i};\\ {XS}_{i}, & else,\end{array}\right.$$
(7)

where \({FS}_{i}^{C/S1}\) is the value of the objective function of the member \({XS}_{i}^{C/S1}.\)
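A minimal sketch of this first chef strategy with its greedy acceptance (Eqs. (6)–(7)); the function name and the sphere test objective are our own and not part of the paper:

```python
import numpy as np

def chef_strategy_1(XS, FS, objective, N_C, seed=None):
    # Phase 1, strategy 1: each chef instructor moves toward the best
    # chef BC = XS[0] (Eq. (6)); the move is kept only if it improves
    # the objective value (greedy acceptance, Eq. (7)).
    rng = np.random.default_rng(seed)
    BC = XS[0].copy()
    for i in range(N_C):
        r = rng.random(XS.shape[1])      # r ~ U[0, 1], per coordinate
        I = rng.integers(1, 3)           # I drawn from {1, 2}
        candidate = XS[i] + r * (BC - I * XS[i])
        f_new = objective(candidate)
        if f_new < FS[i]:
            XS[i], FS[i] = candidate, f_new
    return XS, FS

sphere = lambda x: float(np.sum(x ** 2))
XS = np.array([[0.5, -0.5], [2.0, 1.0], [3.0, 3.0]])
FS = np.array([sphere(x) for x in XS])
XS2, FS2 = chef_strategy_1(XS.copy(), FS.copy(), sphere, N_C=2, seed=1)
```

Because of the greedy acceptance rule, no member's objective value can ever worsen under this update.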

In the second strategy, each chef instructor tries to improve his cooking skills based on individual activities and exercises. This strategy represents the local search and the CBOA’s exploitation ability. If each problem variable is considered a cooking skill, a chef instructor will try to improve all of those skills to achieve a better objective function value.

The advantage of updating based on individual activities and exercises is that each member, regardless of the positions of the other population members, seeks to discover better solutions near its own position. Better solutions may thus be obtained through local search and exploitation, with minor changes in the positions of population members in the search space. According to this concept, around each chef instructor in the search space, a random position is generated for \(j=1,2,\dots,m\) using Eqs. (8) to (10). If this random position improves the value of the objective function, it is accepted for updating; this condition is modeled using Eq. (11).

$$l{b}_{j}^{local}=\frac{l{b}_{j}}{t} ,$$
(8)
$$u{b}_{j}^{local}=\frac{u{b}_{j}}{t} ,$$
(9)

where \(l{b}_{j}^{local}\) and \(u{b}_{j}^{local}\) are the lower and upper local bound of the \(j\)th problem variable, respectively, and the variable \(t\) represents the iteration counter.

$${xs}_{i,j}^{C/S2}={xs}_{i,j}+l{b}_{j}^{local}+r\cdot \left(u{b}_{j}^{local}-l{b}_{j}^{local}\right), i=\text{1,2}, \dots , {N}_{C}, j=\text{1,2}, \dots ,m,$$
(10)
$${XS}_{i}=\left\{\begin{array}{*{20}l}{XS}_{i}^{C/S2}, & {FS}_{i}^{C/S2}<{FS}_{i};\\ {XS}_{i}, & else,\end{array}\right.$$
(11)

where \({XS}_{i}^{C/S2}\) is the new calculated status for the \(i\)th CBOA sorted member (i.e., \({XS}_{i}\)) based on the second strategy (\(C/S2\)) of chef instructors updating, \({xs}_{i,j}^{C/S2}\) is its \(j\)th coordinate, and \({FS}_{i}^{C/S2}\) is its value of the objective function.
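The local search of Eqs. (8)–(11) can be sketched as follows (an illustrative implementation under our own naming; the sphere objective and the chosen bounds are assumptions for the demo):

```python
import numpy as np

def chef_strategy_2(XS, FS, objective, N_C, lb, ub, t, seed=None):
    # Phase 1, strategy 2: local search around each chef instructor.
    # The local bounds of Eqs. (8)-(9) shrink as the iteration counter
    # t grows, so the steps become smaller and more precise over time.
    rng = np.random.default_rng(seed)
    lb_local = np.asarray(lb, dtype=float) / t   # Eq. (8)
    ub_local = np.asarray(ub, dtype=float) / t   # Eq. (9)
    for i in range(N_C):
        r = rng.random(XS.shape[1])
        candidate = XS[i] + lb_local + r * (ub_local - lb_local)  # Eq. (10)
        f_new = objective(candidate)
        if f_new < FS[i]:                        # Eq. (11): keep improvements
            XS[i], FS[i] = candidate, f_new
    return XS, FS

sphere = lambda x: float(np.sum(x ** 2))
XS = np.array([[0.5, -0.5], [2.0, 1.0], [3.0, 3.0]])
FS = np.array([sphere(x) for x in XS])
XS2, FS2 = chef_strategy_2(XS.copy(), FS.copy(), sphere, 2,
                           [-10, -10], [10, 10], t=5, seed=1)
```

Note how dividing the bounds by `t` turns the same sampling rule into an ever finer local probe as the run progresses.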

Phase 2: the updating process for the group of cooking students (update of \({XS}_{{N}_{C}+1}\) to \(X{S}_{N}\))

Cooking students attend culinary school to learn cooking skills and become chefs. In the design of CBOA, it is assumed that cooking students follow three strategies to learn cooking skills. According to the first strategy, each cooking student randomly chooses a class taught by one of the chefs and is then taught cooking skills by this chef instructor. The advantage of updating cooking students based on this strategy is that different chef instructors are available to guide them, so cooking students learn different skills (i.e., population members move to different areas of the search space) depending on the chosen chef instructor. If, on the other hand, all cooking students learned only from the best chef instructor (i.e., all members of the population moved toward the best member), an efficient global search of the problem-solving space would not be possible. This strategy is simulated in the CBOA in such a way that, for each cooking student, a new position is first calculated based on the training and guidance of the chef instructor, for \(i={N}_{C}+1, {N}_{C}+2, \dots , N\) and \(j=1,2,\dots,m,\) using Eq. (12).

$${xs}_{i,j}^{S/S1}={xs}_{i,j}+r\cdot \left({CI}_{{k}_{i},j}-I\cdot {xs}_{i,j}\right),$$
(12)

where \({XS}_{i}^{S/S1}\) is the new calculated status for the \(i\)th sorted member of CBOA (i.e., \({XS}_{i}\)) based on the first strategy (\(S/S1\)) of updating cooking students, \({xs}_{i,j}^{S/S1}\) is its \(j\)th coordinate, and \({CI}_{{k}_{i},j}\) is the \(j\)th coordinate of the chef instructor selected by the \(i\)th cooking student, where \({k}_{i}\) is randomly selected from the set \(\left\{1,2, \dots , {N}_{C}\right\}\) (so that \({CI}_{{k}_{i},j}\) denotes the value \({xs}_{{k}_{i},j}\)).

This new position replaces the previous position of a CBOA member if it improves the value of the objective function. This concept is modeled for \(i={N}_{C}+1, {N}_{C}+2, \dots , N\) by Eq. (13).

$${XS}_{i}=\left\{\begin{array}{*{20}l}{XS}_{i}^{S/S1}, & {FS}_{i}^{S/S1}<{FS}_{i};\\ {XS}_{i}, & else,\end{array}\right.$$
(13)

where \({FS}_{i}^{S/S1}\) is the value of the objective function of \({XS}_{i}^{S/S1}.\)
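A sketch of this first student strategy (Eqs. (12)–(13)); as before, the function name and the sphere objective are our own illustration:

```python
import numpy as np

def student_strategy_1(XS, FS, objective, N_C, seed=None):
    # Phase 2, strategy 1: each cooking student (rows N_C..N-1) picks a
    # chef instructor at random and moves toward that chef (Eq. (12));
    # the move is kept only on improvement (Eq. (13)).
    rng = np.random.default_rng(seed)
    N, m = XS.shape
    for i in range(N_C, N):
        k = rng.integers(0, N_C)         # random chef instructor index k_i
        r = rng.random(m)
        I = rng.integers(1, 3)           # I drawn from {1, 2}
        candidate = XS[i] + r * (XS[k] - I * XS[i])
        f_new = objective(candidate)
        if f_new < FS[i]:
            XS[i], FS[i] = candidate, f_new
    return XS, FS

sphere = lambda x: float(np.sum(x ** 2))
XS = np.array([[0.5, -0.5], [1.0, 1.0], [3.0, 3.0], [4.0, -2.0]])
FS = np.array([sphere(x) for x in XS])
XS2, FS2 = student_strategy_1(XS.copy(), FS.copy(), sphere, N_C=2, seed=3)
```

Only the student rows are updated; the chef rows are untouched by this phase.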

In the second strategy, since each problem variable in the CBOA is assumed to be a cooking skill, each cooking student tries to learn one of the skills of a chef instructor completely and fully imitate the chef instructor (by “skill’’, we therefore mean a recipe for one great meal). This strategy enhances the global search and exploration capability of the CBOA. Its advantage is that instead of updating all candidate solution variables (i.e., all cooking student skills), only one variable (one skill, i.e., one recipe) changes; it may not be necessary to update all coordinates of a member’s position to achieve better solutions.

In the design of CBOA, this “skill’’ represents a certain component of the vector of cooking skills of a randomly selected chef instructor \({CI}_{k}\) (\(k\in \left\{1,2,\dots ,{N}_{C}\right\}\)). Hence, the second strategy is mathematically simulated in such a way that for each cooking student \({XS}_{i}\) (members of CBOA with \(i={N}_{C}+1, {N}_{C}+2, \dots , N\)), first one chef instructor, represented by the vector \({CI}_{{k}_{i}}=\left({CI}_{{k}_{i},1}, \dots , {CI}_{{k}_{i},m}\right),\) is randomly selected (a member of CBOA with the index \({k}_{i}\), randomly drawn from the set \(\{1,\dots, {N}_{C}\}\)); then its \(\mathcal{l}\)th coordinate is randomly selected (a number \(\mathcal{l}\) from the set \(\left\{1,\dots, m\right\},\) representing one “skill’’ of the selected chef instructor), and this value \({CI}_{{k}_{i},\mathcal{l}}\) replaces the \(\mathcal{l}\)th coordinate of the vector of the \(i\)th cooking student \({XS}_{i}\) (thus, \({xs}_{i,\mathcal{l}}\)).

According to this concept, a new position is calculated for each CBOA cooking student member using Eq. (14).

$${xs}_{i,j}^{S/S2}=\left\{\begin{array}{*{20}l}{CI}_{{k}_{i},j}, & j=\mathcal{l};\\ {xs}_{i,j}, & else,\end{array}\right.$$
(14)

where \(\mathcal{l}\) is a randomly selected number from the set \(\left\{1,2, \dots ,m\right\},\) \(i={N}_{C}+1, {N}_{C}+2, \dots , N,\) and \(j=1,2, \dots ,m.\) The new position then replaces the previous one, based on Eq. (15), if it improves the value of the objective function.

$${XS}_{i}=\left\{\begin{array}{*{20}l}{XS}_{i}^{S/S2}, & {FS}_{i}^{S/S2}<{FS}_{i};\\ {XS}_{i}, & else,\end{array}\right.$$
(15)

where \({XS}_{i}^{S/S2}\) is the new calculated status for the \(i\)th sorted member of CBOA (i.e., \({XS}_{i}\)) based on the second strategy (\(S/S2\)) of updating cooking students, \({xs}_{i,j}^{S/S2}\) is its \(j\)th coordinate, \({FS}_{i}^{S/S2}\) is its objective function value.
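The single-skill imitation of Eqs. (14)–(15) can be sketched as follows (illustrative naming, our own sphere test objective):

```python
import numpy as np

def student_strategy_2(XS, FS, objective, N_C, seed=None):
    # Phase 2, strategy 2: each student copies exactly one randomly chosen
    # coordinate ("skill" l) from a randomly chosen chef instructor k_i
    # (Eq. (14)); the change is kept only on improvement (Eq. (15)).
    rng = np.random.default_rng(seed)
    N, m = XS.shape
    for i in range(N_C, N):
        k = rng.integers(0, N_C)         # random chef instructor
        l = rng.integers(0, m)           # random skill index
        candidate = XS[i].copy()
        candidate[l] = XS[k, l]          # only the l-th skill changes
        f_new = objective(candidate)
        if f_new < FS[i]:
            XS[i], FS[i] = candidate, f_new
    return XS, FS

sphere = lambda x: float(np.sum(x ** 2))
XS = np.array([[0.5, -0.5], [1.0, 1.0], [3.0, 3.0], [4.0, -2.0]])
FS = np.array([sphere(x) for x in XS])
XS2, FS2 = student_strategy_2(XS.copy(), FS.copy(), sphere, N_C=2, seed=3)
```

Each accepted candidate differs from the previous position in at most one coordinate, which is precisely what distinguishes this strategy from the full-vector moves of strategy 1.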

In the third strategy, each cooking student tries to improve his cooking skills through individual activities and exercises. This strategy represents the local search and exploitation ability of the CBOA. The advantage of updating cooking students based on individual activities and exercises is that it strengthens the algorithm’s local search and exploitation near the solutions already discovered. In this strategy, similar to the local search strategy of the chef instructors, cooking students try to converge to better solutions with small and precise steps. If each problem variable is considered a cooking skill, a cooking student will try to improve these skills to achieve a better objective function value.

According to this concept, a random position around each cooking student in the search space is generated by Eqs. (8) and (9), and a new position is calculated using Eq. (16).

$${xs}_{i,j}^{S/S3}=\left\{\begin{array}{*{20}l}{xs}_{i,j}+l{b}_{j}^{local}+r\cdot \left(u{b}_{j}^{local}-l{b}_{j}^{local}\right), & j=q; \\ {xs}_{i,j}, & j\ne q,\end{array}\right.$$
(16)

where \({XS}_{i}^{S/S3}\) is the new calculated status for the \(i\)th sorted member of CBOA (i.e., \({XS}_{i}\)) based on the third strategy (\(S/S3\)) of updating cooking students, \({xs}_{i,j}^{S/S3}\) is its \(j\)th coordinate, and \(q\) is a randomly selected number from the set \(\left\{1,2, \dots ,m\right\}\), \(i={N}_{C}+1, {N}_{C}+2, \dots , N\), \(j=1,2, \dots ,m.\) If this new random position improves the value of the objective function, it is accepted for updating \({XS}_{i}\), as modeled by Eq. (17).

$${XS}_{i}=\left\{\begin{array}{*{20}l}{XS}_{i}^{S/S3}, & {FS}_{i}^{S/S3}<{FS}_{i};\\ {XS}_{i}, & else,\end{array}\right.$$
(17)

where \({FS}_{i}^{S/S3}\) is the value of the objective function of \({XS}_{i}^{S/S3}.\)
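The single-coordinate local move of Eqs. (16)–(17) can be sketched as follows (illustrative naming and demo values, not from the paper):

```python
import numpy as np

def student_strategy_3(XS, FS, objective, N_C, lb, ub, t, seed=None):
    # Phase 2, strategy 3: local search in which only one randomly chosen
    # coordinate q of each student is perturbed (Eq. (16)), using the
    # shrinking local bounds of Eqs. (8)-(9); kept only on improvement.
    rng = np.random.default_rng(seed)
    N, m = XS.shape
    lb_local = np.asarray(lb, dtype=float) / t   # Eq. (8)
    ub_local = np.asarray(ub, dtype=float) / t   # Eq. (9)
    for i in range(N_C, N):
        q = rng.integers(0, m)           # the single coordinate to perturb
        r = rng.random()
        candidate = XS[i].copy()
        candidate[q] += lb_local[q] + r * (ub_local[q] - lb_local[q])
        f_new = objective(candidate)
        if f_new < FS[i]:                # Eq. (17)
            XS[i], FS[i] = candidate, f_new
    return XS, FS

sphere = lambda x: float(np.sum(x ** 2))
XS = np.array([[0.5, -0.5], [1.0, 1.0], [3.0, 3.0], [4.0, -2.0]])
FS = np.array([sphere(x) for x in XS])
XS2, FS2 = student_strategy_3(XS.copy(), FS.copy(), sphere, 2,
                              [-10, -10], [10, 10], t=4, seed=2)
```
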

Repetition process, pseudocode, and flowchart of CBOA

A CBOA iteration is completed once all members of the population have been updated. The CBOA then enters the next iteration with these new statuses, and the groups of chef instructors and cooking students are re-specified. The population members are updated according to Eqs. (4) to (17) until the last iteration of the algorithm. After the maximum number of iterations is reached, the best candidate solution obtained during the run is presented as the solution to the problem. The steps of the CBOA are presented as a flowchart in Fig. 1 and as pseudocode in Algorithm 1.

Figure 1

Flowchart of CBOA.

Algorithm 1. Pseudocode of CBOA.
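The repetition process described above can be sketched end-to-end as follows. This is a compact, illustrative Python implementation under our own naming; the default parameter values (N=20, N_C=5, T=100) are assumptions for the demo, not values prescribed by the paper:

```python
import numpy as np

def cboa(objective, lb, ub, N=20, N_C=5, T=100, seed=0):
    # Initialize per Eq. (2), then per iteration: sort (Eqs. (4)-(5)),
    # apply the two chef strategies and three student strategies with
    # greedy acceptance, and finally return the best solution found.
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    m = lb.size
    X = lb + rng.random((N, m)) * (ub - lb)            # Eq. (2)
    F = np.array([objective(x) for x in X])

    def accept(i, cand):                               # greedy acceptance
        f = objective(cand)
        if f < F[i]:
            X[i], F[i] = cand, f

    for t in range(1, T + 1):
        order = np.argsort(F)                          # ascending quality
        X[:], F[:] = X[order], F[order]
        lbl, ubl = lb / t, ub / t                      # Eqs. (8)-(9)
        BC = X[0].copy()                               # best chef instructor
        for i in range(N_C):                           # phase 1: chefs
            r, I = rng.random(m), rng.integers(1, 3)
            accept(i, X[i] + r * (BC - I * X[i]))      # Eq. (6)
            accept(i, X[i] + lbl + rng.random(m) * (ubl - lbl))  # Eq. (10)
        for i in range(N_C, N):                        # phase 2: students
            k = rng.integers(0, N_C)
            r, I = rng.random(m), rng.integers(1, 3)
            accept(i, X[i] + r * (X[k] - I * X[i]))    # Eq. (12)
            k, l = rng.integers(0, N_C), rng.integers(0, m)
            cand = X[i].copy()
            cand[l] = X[k, l]                          # Eq. (14)
            accept(i, cand)
            q = rng.integers(0, m)
            cand = X[i].copy()
            cand[q] += lbl[q] + rng.random() * (ubl[q] - lbl[q])  # Eq. (16)
            accept(i, cand)
    b = int(np.argmin(F))
    return X[b], F[b]

# Example: minimize the 2-D sphere function over [-10, 10]^2.
best_x, best_f = cboa(lambda x: float(np.sum(x ** 2)), [-10, -10], [10, 10])
```

Since every update uses greedy acceptance, the best objective value found is non-increasing over the run.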

Computational complexity of CBOA

In this subsection, the computational complexity of the CBOA is analyzed. Initializing the CBOA for an optimization problem with \(m\) decision variables has a computational complexity of \(O(Nm)\), where \(N\) is the number of CBOA members. Updating the group of chef instructors with its two strategies has a computational complexity of \(O(2{N}_{C}mT)\), where \(T\) is the maximum number of CBOA iterations and \({N}_{C}\) is the number of chef instructors. Updating the group of cooking students with its three strategies has a computational complexity of \(O(3(N-{N}_{C})mT)\). Thus, the total computational complexity of CBOA is \(O(m(N + 2{N}_{C}T + 3\left(N-{N}_{C}\right)T))\).

Simulation studies and results

This section presents simulation studies and an evaluation of the ability of CBOA to solve optimization problems and real-world tasks. For this purpose, a set of 23 standard benchmark objective functions has been employed, chosen for the following reasons. Seven unimodal functions, \({F}_{1}\) to \({F}_{7}\), which have only one main extremum and no locally optimal solutions, have been selected; unimodal functions thus challenge the exploitation and local search ability of the proposed CBOA in converging to the global optimum. Six functions, \({F}_{8}\) to \({F}_{13}\), are of the high-dimensional multimodal type, which, in addition to the main extremum, have several local extrema and locally optimal solutions; high-dimensional multimodal functions are therefore employed to test the CBOA’s exploration and global search capability in accurately scanning the search space, escaping locally optimal areas, and discovering the region of the main optimum. Ten functions, \({F}_{14}\) to \({F}_{23}\), are of the fixed-dimensional multimodal type, whose dimensions and numbers of local extrema are smaller than those of the high-dimensional multimodal functions; these functions are employed to analyze the ability of the proposed CBOA to strike a balance between exploration and exploitation. The information on this set of benchmark functions is specified in Tables 1, 2 and 3.

Table 1 Information about unimodal objective functions.
Table 2 Information about high-dimensional multimodal objective functions.
Table 3 Information about fixed-dimensional multimodal objective functions.

The performance of the proposed CBOA approach in optimization is compared with the results of 12 well-known metaheuristic algorithms. The criterion for selecting these 12 competitor algorithms is as follows. PSO, GA, and DE are three prevalent algorithms that have been employed in many optimization applications. CMA, GSA, TLBO, GWO, MVO, and WOA are the six most cited algorithms that always have interested researchers. Finally, the three algorithms, MPA, TSA, and HBO, are the algorithms that have been released recently and have received a lot of attention and application in this short period. The values adopted for the control parameters of the competitor algorithms are specified in Table 4.

Table 4 Adopted values for control parameters of competitor metaheuristic algorithms.

The CBOA and each of the competing algorithms are tested on the benchmark functions in twenty independent runs, each with 1000 iterations. Optimization results are reported using six indicators: mean, best, standard deviation (std), median, execution time (ET), and rank.
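The four value-based indicators can be computed directly from the best objective values of the twenty independent runs; a minimal sketch is shown below (ET and rank come from timing and cross-algorithm comparison, not from this function):

```python
import statistics

def summarize_runs(best_values):
    """Summarize the best objective values from independent runs with the
    four value-based indicators reported in the result tables."""
    return {
        "mean": statistics.mean(best_values),
        "best": min(best_values),  # minimization: smaller is better
        "std": statistics.stdev(best_values),
        "median": statistics.median(best_values),
    }
```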

Evaluation of unimodal objective functions

The optimization results for the unimodal functions \({F}_{1}\) to \({F}_{7}\) using CBOA and the competitor algorithms are given in Table 5. The results show that the CBOA has performed very well in optimizing \({F}_{1}\), \({F}_{2}\), \({F}_{3}\), \({F}_{4}\), and \({F}_{6}\) and has converged to the global optimum of these functions. In optimizing \({F}_{5}\) and \({F}_{7}\), the CBOA has delivered good results and ranks as the best optimizer among the compared algorithms. The simulation results show that the CBOA is clearly superior to the competitor algorithms and, with its high exploitation ability, has converged to very suitable solutions.

Table 5 Results of optimization of CBOA and competitor metaheuristics on the unimodal function.

Evaluation of high-dimensional multimodal objective functions

The results of CBOA and all competitor algorithms on the high-dimensional multimodal functions \({F}_{8}\) to \({F}_{13}\) are reported in Table 6. CBOA has achieved exactly the global optimal solution for \({F}_{9}\) and \({F}_{11}\), which demonstrates its high exploration power. In optimizing \({F}_{10}\), the proposed CBOA has performed well and ranks as the best optimizer in competition with the compared algorithms. The simulation results indicate the high exploration power of CBOA in identifying the best optimal region and its superiority over the competitor algorithms.

Table 6 Results of optimization of CBOA and competitor metaheuristics on the high-dimensional multimodal function.

Evaluation of fixed-dimensional multimodal objective functions

The results of the CBOA and competitor algorithms for the fixed-dimensional multimodal functions \({F}_{14}\) to \({F}_{23}\) are presented in Table 7. The optimization results show that, based on the “mean’’ index, the CBOA alone is the best optimizer for the functions \({F}_{14}\), \({F}_{18}\), and \({F}_{20}\).

Table 7 Results of optimization of the CBOA and competitor metaheuristics on fixed-dimensional multimodal function.

In the other cases, where the CBOA ties with the alternative algorithms on the “mean’’ index, it performs more efficiently due to better values of the “std’’ index. Analysis of the simulation results shows that the CBOA performs better than the competitor algorithms and has a remarkable ability to strike a balance between exploration and exploitation.

The performance of CBOA and competitor algorithms in evaluating the benchmark functions \({F}_{1}\) to \({F}_{23}\) is shown in Fig. 2 using the box plot diagrams.

Figure 2
figure 2

The boxplot diagrams of the performance of CBOA and the competitor algorithms on \({F}_{1}\) to \({F}_{23}\).

Statistical analysis

In this subsection, a statistical analysis of the performance of the CBOA compared to competitor algorithms is provided to determine whether the superiority of the CBOA is statistically significant. For this analysis, the Wilcoxon rank-sum test32 with a significance level of \(5\%\) has been used. In this test, the \(p\)-value indicates whether there is a significant difference between two data samples: if the \(p\)-value is less than 0.05, the difference between the two samples is statistically significant. The results of the Wilcoxon rank-sum test for the CBOA and competitor algorithms are reported in Table 8. Since all obtained \(p\)-values are less than 0.05, the CBOA has a statistically significant superiority over all twelve compared algorithms.
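For reference, the rank-sum statistic and its two-sided p-value can be sketched in a few lines. This is a simplified version using the normal approximation without the tie variance correction (adequate for samples of about twenty runs); in practice a library routine such as `scipy.stats.ranksums` would be used instead.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Tied values get average ranks; the tie correction to the variance is omitted."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # Assign 1-based ranks, averaging over runs of tied values.
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)  # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))
```

Two well-separated samples yield a p-value below 0.05, while identical samples yield a p-value of 1.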

Table 8 Results of Wilcoxon test of rank sums.

Sensitivity analysis

The proposed CBOA is a stochastic optimizer that can achieve the optimal solution by using its members’ search power in an iteration-based process. Therefore, the values of the parameters \(N\) and \(T\), which represent the number of CBOA members and the total number of iterations of the algorithm, respectively, affect the performance of the CBOA. To study this effect, we analyze the sensitivity of CBOA to changes in values of the \(N\) and \(T\) parameters in this subsection.

In the first study, to analyze the sensitivity of CBOA to the parameter \(N\), the proposed algorithm is run independently with \(N\) set to 20, 30, 50, and 100 to optimize the functions \({F}_{1}\) to \({F}_{23}\). The results of this analysis are presented in Table 9, and the CBOA convergence curves under changes of the parameter \(N\) are shown in Fig. 3. Based on these results, it is clear that the CBOA produces similar results on most objective functions when \(N\) is changed, indicating that the CBOA is only weakly affected by this parameter. For the remaining objective functions, increasing \(N\) decreases the obtained objective values.

Table 9 Results of CBOA sensitivity analysis to parameter \(N\).
Figure 3
figure 3

CBOA convergence curves in the study of sensitivity analysis to parameter \(N\).

In the second study, to analyze the sensitivity of CBOA to the parameter \(T\), the proposed method is run independently with \(T\) set to 200, 500, 800, and 1000 on the objective functions \({F}_{1}\) to \({F}_{23}\). The results of this analysis are reported in Table 10, and the corresponding CBOA convergence curves are plotted in Fig. 4. The results of this sensitivity analysis show that increasing \(T\) improves the performance of CBOA and, as a result, decreases the obtained objective values.
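Both sensitivity studies share the same experimental shape. The sketch below uses plain random search as a hypothetical stand-in for CBOA to show only that shape; the test function, bounds, and fixed budgets are illustrative assumptions, not the settings of the paper.

```python
import random

def random_search(objective, m, N, T, seed=0):
    """Hypothetical stand-in optimizer (plain random search); CBOA itself
    would be called here with the same population size N and iteration count T."""
    rng = random.Random(seed)
    # N*T candidate evaluations, matching a population of N over T iterations.
    return min(objective([rng.uniform(-5.0, 5.0) for _ in range(m)])
               for _ in range(N * T))

def sphere(x):
    return sum(xi * xi for xi in x)

# First study: sensitivity to the population size N at a fixed T.
results_N = {N: random_search(sphere, m=5, N=N, T=50) for N in (20, 30, 50, 100)}
# Second study: sensitivity to the iteration count T at a fixed N.
results_T = {T: random_search(sphere, m=5, N=20, T=T) for T in (200, 500, 800, 1000)}
```

With a shared seed, a larger budget evaluates a superset of candidates, so the best value found can only stay the same or improve, mirroring the trend reported in Tables 9 and 10.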

Table 10 Results of the CBOA sensitivity analysis to parameter \(T\).
Figure 4
figure 4

CBOA convergence curves in the study of sensitivity analysis to parameter \(T\).

Discussion

Metaheuristic algorithms are stochastic approaches whose main idea in the optimization process is random search in the problem-solving space. This random search at both the local and global levels is the key to the success of metaheuristic algorithms. In optimization studies, local search power, which indicates the exploitation potential, causes the algorithm to look for better solutions around promising candidate solutions and move closer to the global optimum. The capability of the “exploitation phase’’ in metaheuristic algorithms is best tested on functions that have only one main solution. Unimodal functions, with this feature, are good options for evaluating exploitation ability. The optimization results of the unimodal functions \({F}_{1}\) to \({F}_{7}\) indicate the high exploitation capability of CBOA, especially in handling \({F}_{1}\) to \({F}_{4}\) and \({F}_{6}\). Therefore, the finding from the unimodal function results is that the CBOA has high efficiency in local search and a high potential for exploitation.

The power of global search, which demonstrates the exploration potential of metaheuristic algorithms, allows the algorithm to scan different areas of the search space to discover the optimal global area. The capability of the “exploration phase’’ in metaheuristic algorithms designed for optimization can best be evaluated using optimization problems with several local optimal solutions. Therefore, high-dimensional multimodal functions are a good choice for evaluating exploration ability. The implementation results of CBOA and competitor algorithms on the functions \({F}_{8}\) to \({F}_{13}\) show the high exploration ability of CBOA in the global search of various areas of the problem-solving space. This CBOA capability is especially evident in the optimization results of the functions \({F}_{9}\) and \({F}_{11}\). The finding from the simulations of the CBOA and competitor algorithms on the high-dimensional multimodal functions \({F}_{8}\) to \({F}_{13}\) is that the CBOA, with its high power in global search and exploration, can avoid getting stuck in locally optimal solutions and identify the main optimal region.

The critical point in the capability of metaheuristic algorithms is that, in addition to a desirable ability in exploitation and exploration, there must be a balance between these two capabilities so that the algorithm can find the main optimal region and converge toward the global optimum. Fixed-dimensional multimodal functions are good options for testing the ability of metaheuristic algorithms to strike a balance between exploitation and exploration. Optimizing the \({F}_{14}\) to \({F}_{23}\) functions shows that the CBOA has a high potential to strike this balance. Based on the fixed-dimensional multimodal function optimization results, CBOA, with its ability to balance exploitation and exploration, can first discover the main optimal region by global search without getting entangled in locally optimal solutions, and then converge to the global optimum by local search. The execution times of CBOA and the competing algorithms in optimizing each objective function show that CBOA is faster than some competing algorithms, while the others, although faster, did not converge to desirable results. Therefore, CBOA has an acceptable execution time when optimizing the objective functions.

The simulation findings show that CBOA has a high quality in exploitation, exploration, and balance between them, which has led to its superior performance compared to similar competing algorithms.

Evaluation on the CEC 2017 test suite

To analyze the capability of the proposed CBOA approach on complex optimization problems, the proposed algorithm is implemented on the CEC 2017 test suite. This set includes three unimodal objective functions \({C}_{1}\) to \({C}_{3}\), seven multimodal objective functions \({C}_{4}\) to \({C}_{10}\), ten hybrid objective functions \({C}_{11}\) to \({C}_{20}\), and ten composition objective functions \({C}_{21}\) to \({C}_{30}\). Complete information and details of the CEC 2017 test suite are described in Ref.33. The \({C}_{2}\) function has been removed from the CEC 2017 set due to its unstable behavior. The implementation results of CBOA and the competitor algorithms on the CEC 2017 test suite are reported in Table 11. Based on the analysis of the simulation results, it is clear that the proposed CBOA approach is the best optimizer for the \({C}_{1}\), \({C}_{3}\), \({C}_{4}\), \({C}_{6}\) to \({C}_{8}\), \({C}_{10}\) to \({C}_{20}\), \({C}_{22}\), \({C}_{24}\), \({C}_{25}\), \({C}_{27}\), and \({C}_{28}\) functions compared to the competitor algorithms.

Table 11 Assessment results of the IEEE CEC 2017 objective functions.

CBOA for real world applications

In this section, we show the effectiveness of CBOA in solving real-world problems. To this end, CBOA and the competing algorithms are applied to four engineering applications: (i) pressure vessel design (PVD), (ii) speed reducer design (SRD), (iii) welded beam design (WBD), and (iv) tension/compression spring design (TCSD). Mathematical models, details, and information about these technical challenges are given for PVD in Ref.34, for SRD in Refs.35,36, and for WBD and TCSD in Ref.16. The optimization results for these four engineering problems are reported in Table 12. Based on the analysis of the results, it is clear that the CBOA approach is the best optimizer for all four studied problems compared to the competing algorithms.
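To show how such a problem is posed for a metaheuristic, the sketch below encodes the widely used four-variable PVD formulation with a static penalty for constraint violations. The cost and constraint expressions are the common ones from the literature; the penalty weight is an illustrative assumption, and the exact model used in this paper is the one in Ref.34.

```python
import math

def pvd_cost(x):
    """Pressure vessel design cost (common literature formulation):
    x = [Ts, Th, R, L] = shell thickness, head thickness, inner radius, length."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pvd_penalized(x, penalty=1e6):
    """Add a static penalty for each violated inequality constraint g_i(x) <= 0,
    turning the constrained problem into an unconstrained one a metaheuristic
    can minimize directly (penalty weight is an illustrative choice)."""
    x1, x2, x3, x4 = x
    g = [
        -x1 + 0.0193 * x3,                                                # shell thickness limit
        -x2 + 0.00954 * x3,                                               # head thickness limit
        -math.pi * x3 ** 2 * x4 - (4 / 3) * math.pi * x3 ** 3 + 1296000,  # working volume
        x4 - 240,                                                         # length limit
    ]
    return pvd_cost(x) + penalty * sum(max(0.0, gi) for gi in g)
```

A feasible design incurs no penalty, so `pvd_penalized` reduces to `pvd_cost` there, while any constraint violation inflates the value and steers the search back into the feasible region.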

Table 12 Assessment results of engineering optimization applications.

Conclusions and future works

This paper introduced a new human-based metaheuristic algorithm called the chef-based optimization algorithm (CBOA), designed to address optimization problems. The process of learning cooking skills by people who attend cooking training courses inspired the design of the proposed CBOA. Different phases of the cooking training process were mathematically modeled to design the CBOA implementation. The CBOA’s performance was evaluated on fifty-two benchmark functions, including seven unimodal functions, six high-dimensional multimodal functions, ten fixed-dimensional multimodal functions, and 29 functions of the CEC 2017 test suite. The optimization results showed that CBOA can be used effectively in solving optimization problems due to its ability to maintain a balance between exploration and exploitation. Moreover, the simulation results showed that CBOA is more efficient and competitive than the twelve compared algorithms because it usually provides better solutions.

In addition, the employment of the CBOA on four engineering optimization issues demonstrated the high ability of the proposed approach to address real-world applications.

The proposed CBOA algorithm is a stochastic approach and therefore has some shortcomings and limitations. As with all metaheuristic algorithms, there is no guarantee that the solutions obtained by the CBOA for optimization problems are equal to the global optima of those problems. Although the CBOA provided reasonable solutions to most of the objective functions studied in this paper, according to the NFL theorem, there is no guarantee of its successful implementation in all optimization applications. A limitation of the proposed CBOA is therefore that it may fail in some optimization problems. Also, it is always possible that researchers will design newer metaheuristic algorithms that provide better solutions to real optimization problems than existing algorithms, including the proposed CBOA method.

The introduction of the CBOA opens research directions and tasks for future work. The most immediate research potential for the CBOA is the development of binary and multi-objective versions of the proposed approach. The employment of CBOA in optimization applications across various sciences and real-world challenges is another proposal of this paper.