Introduction

In the realm of science, problems that have multiple feasible solutions are referred to as optimization problems, and finding the best feasible solution among all available solutions is called the optimization process1. Mathematically, any optimization problem can be represented using three key components: decision variables, constraints, and objective functions2. Problem-solving methods for addressing optimization problems can be categorized into two main groups: deterministic and stochastic techniques3. Deterministic methods effectively solve simple, linear, convex, continuous, differentiable, and low-dimensional optimization problems. However, they become inefficient when dealing with complex optimization problems and may get stuck in local optima instead of finding the global optimum4. Optimization problems in science, engineering, and real-world applications often have nonlinear, nonconvex, discontinuous, nondifferentiable, and high-dimensional characteristics. The limitations and challenges of deterministic approaches have prompted researchers to develop stochastic methods, which offer a more flexible and robust framework that can better handle the complexity and uncertainty of such problems5. These stochastic approaches employ a random search of the problem-solving space and use random operators to provide appropriate solutions for optimization problems.

Metaheuristic algorithms, in particular, have many advantages, including simple concepts, easy implementation, and the ability to efficiently solve nonlinear, nonconvex, discontinuous, nondifferentiable, high-dimensional, and NP-hard problems, as well as problems with nonlinear and unknown search spaces. These advantages have made metaheuristic methods popular among researchers6. In metaheuristic algorithms, the optimization process starts by randomly generating a set of candidate solutions. These solutions are then improved iteratively through specific update steps, and the best solution found is finally returned as the answer to the optimization problem7. One important point about metaheuristic algorithms is that, unlike deterministic approaches, there is no guarantee that they will find the globally optimal solution; the reason is the stochastic nature of these algorithms, which rely on a random search to explore the problem space. However, even if the global optimum is not found, the solutions obtained from metaheuristic algorithms are usually still acceptable as quasi-optimal, because they tend to be close to the global optimum.

Metaheuristic techniques solve optimization problems by searching the problem-solving space both globally and locally8. Global search, or exploration, involves comprehensively scanning the search space to discover the main optimal region and to prevent getting stuck in local optima. Local search, or exploitation, involves finding better solutions in the neighborhood of the solutions already obtained. Metaheuristic algorithms must balance exploration and exploitation during the search process in order to produce usable solutions; this balance is the key to their success in achieving suitable solutions for optimization problems9.

Differences in update steps and search processes can lead to varying results when metaheuristic algorithms are applied to the same optimization problem. Hence, when the performance of multiple metaheuristic algorithms is compared on a given problem, the one that performs the search process more effectively and provides a better solution outperforms the others. Researchers have developed numerous metaheuristic algorithms to solve optimization problems more effectively. These methods have found applications in various fields such as dynamic scheduling10, construction of multi-classifier systems11, 12, clustering approaches13,14,15, IoT-based complex problems16, 17, parameter estimation18,19,20, modeling of nonlinear processes21, 22, energy carriers and electrical engineering23,24,25,26,27, wave solutions28,29,30,31, and higher-order nonlinear dynamical equations32.

The central question in the study of metaheuristic algorithms is whether the existing multitude of algorithms designed thus far is sufficient, or whether there is a continued need to develop newer algorithms. The No Free Lunch (NFL) theorem33 addresses this open issue by stating that the superior performance of a particular metaheuristic algorithm on a specific set of optimization problems does not ensure that the same algorithm will perform similarly well on other optimization problems. A metaheuristic algorithm may converge to the global optimum for one optimization problem yet fail to do so for another. Therefore, it cannot be assumed that a given metaheuristic algorithm will successfully solve any optimization problem. The NFL theorem states that no single metaheuristic algorithm is the best optimizer for all optimization problems, and it motivates researchers to develop new metaheuristic algorithms that effectively solve specific optimization problems. The authors of this paper were likewise motivated by the NFL theorem to design a new metaheuristic algorithm for optimization problems in various scientific and real-world applications.

The innovation and novelty of this paper lie in introducing a new metaheuristic algorithm, called the mother optimization algorithm (MOA), to solve optimization problems in different sciences. This paper’s principal contributions are:

  • MOA is designed by simulating the interactions between a mother and her children in three phases: education, advice, and upbringing.

  • The MOA's performance is assessed by testing it on 52 standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, as well as the CEC 2017 test suite.

  • MOA has demonstrated significantly better performance when solving various optimization problems from the CEC 2017 test suite compared to twelve commonly used metaheuristic algorithms.

  • MOA’s effectiveness in solving real-world optimization problems was tested by applying it to four engineering design problems.

The structure of the remaining sections of the paper is as follows: a literature review is presented in the “Literature review” section, followed by the introduction and modeling of the proposed MOA approach in the “Mother optimization algorithm” section. Simulation studies and results are summarized in the “Simulation analysis and results” section, and the discussion, advantages, and limitations of MOA are provided in the “Discussion” section. The efficiency of MOA in handling real-world applications is evaluated in the “MOA for real-world applications” section. Finally, conclusions are drawn, and suggestions for future work are provided in the “Conclusion and future works” section.

Literature review

Metaheuristic algorithms are designed and developed with inspiration from various natural phenomena, the behavior of living organisms, biological sciences, physical laws, rules of games, human interactions, and other evolutionary phenomena. Based on the main design idea, metaheuristic algorithms can be classified into five groups: swarm-based, evolutionary-based, physics-based, game-based, and human-based approaches.

Swarm-based metaheuristic techniques draw inspiration from the collective behavior of social animals, plants, insects, and other organisms to develop powerful optimization methods. Particle swarm optimization (PSO)34, ant colony optimization (ACO)35, artificial bee colony (ABC)36, and firefly algorithm (FA)37 are among the most widely recognized swarm-based metaheuristic algorithms.

PSO was inspired by the swarm movement of birds and fish in search of food, while ACO was inspired by the ability of ants to identify the shortest path between the nest and food sources. ABC algorithm is inspired by the foraging behavior of honey bees in the colony. In contrast, the flashing behavior of fireflies and their optical communication have served as a basis for creating the FA algorithm. Among the natural behaviors of living organisms, searching for food, foraging, hunting strategy, and migration are intelligent processes that inspired models of many swarm-based metaheuristic algorithms such as grey wolf optimization (GWO)38, emperor penguin optimizer (EPO)39, pelican optimization algorithm (POA)40, rat swarm optimization (RSO)41, marine predators algorithm (MPA)42, African vultures optimization algorithm (AVOA)43, mutated leader algorithm (MLA)44, coati optimization algorithm (COA)45, tunicate swarm algorithm (TSA)46, termite life cycle optimizer (TLCO)47, two stage optimization (TSO)48, artificial hummingbird algorithm (AHA)49, fennec fox optimization (FFA)50, white shark optimizer (WSO)51, and reptile search algorithm (RSA)52.

Metaheuristic algorithms based on evolutionary principles have drawn inspiration from biological sciences, genetics, and the idea of natural selection. Genetic algorithm (GA)53 and differential evolution (DE)54 are the most famous evolutionary-based metaheuristic methods and have been used to solve many optimization problems. GA and DE are developed based on modeling the reproduction process, Darwin’s evolutionary theory, survival of the fittest, concepts of genetics and biology, and the application of random selection, crossover, and mutation operators. Some other evolutionary-based metaheuristic algorithms are artificial immune system (AIS)55, biogeography-based optimizer (BBO)56, cultural algorithm (CA)57, evolution strategy (ES)58, and genetic programming (GP)59.

Metaheuristic algorithms based on physics have been designed by drawing inspiration from concepts, phenomena, laws, and forces in physics. Simulated Annealing (SA), for example, is a well-known physics-based metaheuristic algorithm that was inspired by the annealing phenomenon of metals in which the metal is melted under heat and then slowly cooled to form an ideal crystal60. Algorithms such as gravitational search algorithm (GSA)61 have been designed based on inspiration from physical forces, particularly the gravitational force. The concept of abnormal oscillations in water turbulent flow was the basis for the turbulent flow of water-based optimization (TFWO)62. Concepts from cosmology have inspired algorithms such as multi-verse optimizer (MVO)63, black hole (BH)64, and galaxy-based search algorithm (GbSA)65. Some other physics-based algorithms are magnetic optimization algorithm (MOA)66, artificial chemical reaction optimization algorithm (ACROA)67, ray optimization (RO) algorithm68, and small world optimization algorithm (SWOA)69.

Metaheuristic algorithms inspired by the rules and behaviors of players, coaches, and referees in individual and group games have been proposed as game-based metaheuristic algorithms. League championship algorithm (LCA)70, football game based optimizer (FGBO)71, and volleyball premier league (VPL)72 are examples of game-based metaheuristic algorithms that simulate, respectively, the rules and behavior of sports league championships, football league matches, and volleyball league matches.

The main inspiration behind the puzzle optimization algorithm (POA)73 design has been the skill and accuracy required to assemble puzzle pieces. The strategy used by players to throw darts and score points has been the primary source of inspiration for designing the Darts Game Optimizer (DGO)74.

Inspiration from human interactions, communication, thoughts, and relationships in personal and social life has led to the development of human-based metaheuristic algorithms. One such algorithm is teaching–learning based optimization (TLBO), which simulates educational interactions between teachers and students in the classroom75. Teaching–learning-studying-based optimizer (TLSBO)76 enhances TLBO by adding a new “studying” strategy, in which each member uses information from another randomly selected individual to improve its position. Dynamic group strategy TLBO (DGSTLBO)77 is an improved TLBO algorithm that enables each learner to learn from the mean of its corresponding group. Distance-fitness learning TLBO (DFL-TLBO)78 is a variant that employs a new distance-fitness learning (DFL) strategy to enhance search capability. Learning cooking skills in training courses has inspired the design of the chef-based optimization algorithm (CBOA)79. The election-based optimization algorithm (EBOA) has been inspired by the concept of elections and voting, with the aim of mimicking the voting process to find optimal solutions80. Driving training-based optimization (DTBO)81, coronavirus herd immunity optimizer (CHIO)82, political optimizer (PO)83, brain storm optimization (BSO)84, and war strategy optimization (WSO)85 are among the other human-based metaheuristic algorithms proposed, inspired by various aspects of human behavior and social interactions.

As far as the literature review suggests, no metaheuristic algorithm has been developed so far that models the interactions among humans in the context of mothers’ care for children. The high level of intelligence involved in a mother's care of her children presents a promising opportunity for the design of a novel metaheuristic algorithm. This paper aims to fill the research gap by proposing a novel metaheuristic algorithm that models human interactions between mothers and their children. The details of this new algorithm will be presented in the following section.

Mother optimization algorithm

This section introduces the mother optimization algorithm (MOA) and presents its mathematical model. By delving into the algorithm’s details and mathematical representation, readers will gain insight into MOA’s inner workings and principles.

Introducing the mother optimization algorithm (MOA)

The first place of education in society is undoubtedly the family, and the mother is the essential educational element in raising children86. A mother passes her meaningful life experiences and skills to her children, who develop their abilities based on her advice87.

Among the most significant types of interactions between a mother and her children are the three processes of (i) education, (ii) advice, and (iii) upbringing. Therefore, the proposed MOA uses mathematical modeling of caring and educational behaviors.

Mathematical model of MOA

The proposed MOA is a population-based metaheuristic algorithm that solves optimization problems through an iterative process. The algorithm’s population consists of candidate solutions represented as vectors in the problem space. The population is modeled as a matrix by Eq. (1) and initialized using Eq. (2) at the start of the optimization process. Each member of the population determines the values of decision variables based on its position in the problem search space, and the search power of the population is used to find the optimal solution.

$${\varvec{X}} = \left[ {\begin{array}{*{20}c} {X_{1} } \\ \vdots \\ {X_{i} } \\ \vdots \\ {X_{N} } \\ \end{array} } \right]_{N \times m} = \left[ {\begin{array}{*{20}c} {x_{1,1} } & \cdots & {x_{1,j} } & \cdots & {x_{1,m} } \\ \vdots & \ddots & \vdots & \iddots & \vdots \\ {x_{i,1} } & \cdots & {x_{i,j} } & \cdots & {x_{i,m} } \\ \vdots & \iddots & \vdots & \ddots & \vdots \\ {x_{N,1} } & \cdots & {x_{N,j} } & \cdots & {x_{N,m} } \\ \end{array} } \right]_{N \times m} ,$$
(1)
$${x}_{i,j}=l{b}_{j}+\mathrm{rand}\left(\mathrm{0,1}\right)\cdot \left(u{b}_{j}-l{b}_{j}\right), i=\mathrm{1,2}, \dots , N, j=\mathrm{1,2}, \dots , m,$$
(2)

where \({\varvec{X}}\) is the population matrix of the proposed MOA, \(N\) is the number of population members, \(m\) is the number of decision variables, \({X}_{i}=\left({x}_{i,1},\dots ,{x}_{i,j},\dots ,{x}_{i,m}\right)\) is the \(i\)th candidate solution, \({x}_{i,j}\) is its \(j\)th variable, and the function \(\mathrm{rand}(\mathrm{0,1})\) generates a uniform random number from the interval \(\left[0, 1\right]\). The lower and upper bounds of the \(j\)th decision variable are represented by \(l{b}_{j}\) and \(u{b}_{j}\), respectively.
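To make the initialization concrete, the following is a minimal Python/NumPy sketch of Eq. (2); it is an illustrative assumption rather than the authors' implementation (the paper's experiments were run in MATLAB), and the names `N`, `m`, `lb`, and `ub` mirror the notation above.

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng=None):
    """Build the N x m population matrix X of Eq. (1) using Eq. (2)."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)   # lower bounds lb_j, length m
    ub = np.asarray(ub, dtype=float)   # upper bounds ub_j, length m
    # x_{i,j} = lb_j + rand(0,1) * (ub_j - lb_j)
    return lb + rng.random((N, m)) * (ub - lb)
```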

Each member of the population in MOA is a potential solution to the problem being optimized, and the objective function of the problem can be computed based on the values proposed by each population member for the decision variables. In mathematical terms, the values of the objective function can be represented as a vector using Eq. (3).

$$F={\left[\begin{array}{c}{F}_{1}\\ \vdots \\ {F}_{i}\\ \vdots \\ {F}_{N}\end{array}\right]}_{N\times 1}={\left[\begin{array}{c}F({X}_{1})\\ \vdots \\ F({X}_{i})\\ \vdots \\ F({X}_{N})\end{array}\right]}_{N\times 1},$$
(3)

where \(F\) is the vector of values of the objective function and \({F}_{i}\) is the value of the objective function for the \(i\)th candidate solution.

The objective function values provide a measure of the quality of the solutions generated by the population members. The best and worst population members can be identified based on the best and worst values of the objective function, respectively. As the positions of the population members are updated in each iteration, the best population member must also be updated accordingly. Finally, at the end of the algorithm's iterations, the best population member is returned as the solution to the problem.
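As a small illustration of this bookkeeping, the sketch below evaluates the objective vector of Eq. (3) and extracts the best member; `objective` is an assumed user-supplied function mapping a candidate vector to a scalar, and minimization is assumed.

```python
import numpy as np

def evaluate_population(X, objective):
    """Compute F of Eq. (3) and identify the current best member (minimization)."""
    F = np.array([objective(x) for x in X])   # F_i = F(X_i)
    best_idx = int(np.argmin(F))
    return F, X[best_idx].copy(), F[best_idx]
```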

In the design of MOA, the algorithm's population is updated in three phases based on mathematical modeling of the mother's interactions in raising her children, which are discussed below.

Phase 1: education (exploration phase)

The first phase of the population update in the proposed MOA, called “education,” is inspired by the education of children by their mother. It aims to increase the global search and exploration capability by making significant changes in the positions of the population members. In the MOA design, the mother is considered the best member of the population, and her behavior in educating her children is modeled to simulate this phase. In this phase, a new position is created for each member using Eq. (4). If the objective function value improves at the new position, it is accepted as that member's position, as shown in Eq. (5).

$${x}_{i,j}^{P1}={x}_{i,j}+\mathrm{rand}(\mathrm{0,1}) \cdot ({M}_{j}-\mathrm{rand}(2) \cdot {x}_{i,j}),$$
(4)
$${X}_{i}=\left\{\begin{array}{ll}{X}_{i}^{P1}, &\quad {F}_{i}^{P1}\le {F}_{i},\\ {X}_{i}, & \quad else ,\end{array}\right.$$
(5)

where \({M}_{j}\) is the \(j\)th dimension of the mother's position, \({x}_{i,j}\) is the \(j\)th dimension of the position of the \(i\)th population member \({X}_{i}\), \({X}_{i}^{P1}\) is the new position calculated for the \(i\)th population member based on the first phase of MOA, \({x}_{i,j}^{P1}\) is its \(j\)th dimension, \({F}_{i}^{P1}\) is its objective function value, the function \(\mathrm{rand}(\mathrm{0,1})\) generates a uniform random number in the interval \(\left[0, 1\right]\), and \(\mathrm{rand}(2)\) uniformly generates a random number from the set \(\left\{1, 2\right\}\).
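A hedged Python/NumPy sketch of the education phase (Eqs. (4)–(5)) follows; drawing the random factors per dimension is one interpretation of the element-wise notation, and `M` denotes the mother's (best member's) position.

```python
import numpy as np

def education_phase(X, F, M, objective, rng):
    """Phase 1 (exploration): pull members toward the mother M, Eqs. (4)-(5)."""
    N, m = X.shape
    for i in range(N):
        I = rng.integers(1, 3, size=m)                 # rand(2): values from {1, 2}
        x_new = X[i] + rng.random(m) * (M - I * X[i])  # Eq. (4)
        f_new = objective(x_new)
        if f_new <= F[i]:                              # Eq. (5): keep only if not worse
            X[i], F[i] = x_new, f_new
    return X, F
```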

Phase 2: advice (exploration phase)

One of a mother's primary duties in raising her children is to advise them and keep them from misbehaving. This advising behavior is employed in the design of the second phase of the population update in MOA. The advice phase increases MOA's capability in global search and exploration by making significant changes in the positions of the population members. In the MOA design, for each population member, the positions of other members with worse (greater) objective function values are regarded as deviant behaviors to be avoided. The set of bad behaviors \({BB}_{i}\) for each member is determined by comparing objective function values using Eq. (6). For each \({X}_{i}\), a member is selected uniformly at random from the constructed set of bad behaviors \({BB}_{i}\). A new position is then created for each member using Eq. (7) to simulate keeping the child away from bad behavior. If it improves the objective function value, this new position replaces the member's previous position, according to Eq. (8).

$${BB}_{i}=\left\{{X}_{k}, {F}_{k}>{F}_{i} \wedge k \in \left\{\mathrm{1,2},\dots ,N\right\} \right\} , \quad \mathrm{where} \; i=\mathrm{1,2},\dots ,N,$$
(6)
$${x}_{i,j}^{P2}={x}_{i,j}+\mathrm{rand}(\mathrm{0,1}) \cdot ({x}_{i,j}-\mathrm{rand}(2) \cdot SB{B}_{i,j}),$$
(7)
$${X}_{i}=\left\{\begin{array}{ll}{X}_{i}^{P2},&\quad {F}_{i}^{P2}\le {F}_{i} ;\\ {X}_{i},&\quad else ,\end{array}\right.$$
(8)

where \({BB}_{i}\) is the set of bad behaviors for the \(i\)th population member, \(SB{B}_{i}\) is the selected bad behavior for the \(i\)th population member, \(SB{B}_{i,j}\) is its \(j\)th dimension, \({X}_{i}^{P2}\) is the new position calculated for the \(i\)th population member based on the second phase of the proposed MOA, \({x}_{i,j}^{P2}\) is its \(j\)th dimension, \({F}_{i}^{P2}\) is its objective function value, the function \(\mathrm{rand}(\mathrm{0,1})\) generates a uniform random number in the interval \(\left[0, 1\right]\), and \(\mathrm{rand}(2)\) uniformly generates a random number from the set \(\left\{1, 2\right\}\).
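The advice phase (Eqs. (6)–(8)) can be sketched in the same style; the construction of the bad-behavior set and the greedy acceptance follow the equations above, while the per-dimension random draws are again an illustrative interpretation.

```python
import numpy as np

def advice_phase(X, F, objective, rng):
    """Phase 2 (exploration): move members away from a selected bad behavior, Eqs. (6)-(8)."""
    N, m = X.shape
    for i in range(N):
        bad = np.where(F > F[i])[0]                      # BB_i, Eq. (6)
        if bad.size == 0:                                # the best member has no bad behaviors
            continue
        sbb = X[rng.choice(bad)]                         # selected bad behavior SBB_i
        I = rng.integers(1, 3, size=m)                   # rand(2): values from {1, 2}
        x_new = X[i] + rng.random(m) * (X[i] - I * sbb)  # Eq. (7)
        f_new = objective(x_new)
        if f_new <= F[i]:                                # Eq. (8)
            X[i], F[i] = x_new, f_new
    return X, F
```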

Phase 3: upbringing (exploitation phase)

Mothers use various forms of encouragement to help their children improve their skills during upbringing. The upbringing phase increases MOA's ability in local search and exploitation by making small changes in the positions of the population members. To simulate this phase, a new position is first created for each population member based on modeling the children's personality development, using Eq. (9). If the objective function value improves at the new position, the member's previous position is replaced with the new one, as specified in Eq. (10).

$${x}_{i,j}^{P3}={x}_{i,j}+\left(1-2\cdot \mathrm{rand}(\mathrm{0,1})\right)\cdot \frac{u{b}_{j}-l{b}_{j}}{t} ,$$
(9)
$${X}_{i}=\left\{\begin{array}{ll}{X}_{i}^{P3}, &\quad {F}_{i}^{P3}\le {F}_{i} ;\\ {X}_{i}, &\quad else,\end{array}\right.$$
(10)

where \({X}_{i}^{P3}\) is the new position calculated for the \(i\)th population member based on the third phase of the proposed MOA, \({x}_{i,j}^{P3}\) is its \(j\)th dimension, \({F}_{i}^{P3}\) is its objective function value, the function \(\mathrm{rand}(\mathrm{0,1})\) generates a uniform random number in the interval \(\left[0, 1\right]\), and \(t\) is the current value of the iteration counter.
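A corresponding sketch of the upbringing phase (Eqs. (9)–(10)) is given below; note how the step size shrinks as the iteration counter `t` grows, which is what confines the search to the neighborhood of the current solutions.

```python
import numpy as np

def upbringing_phase(X, F, lb, ub, t, objective, rng):
    """Phase 3 (exploitation): small, iteration-shrinking perturbations, Eqs. (9)-(10)."""
    N, m = X.shape
    for i in range(N):
        step = (1 - 2 * rng.random(m)) * (ub - lb) / t   # Eq. (9); magnitude decays with t
        x_new = X[i] + step
        f_new = objective(x_new)
        if f_new <= F[i]:                                # Eq. (10)
            X[i], F[i] = x_new, f_new
    return X, F
```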

Description of the repetition process, pseudo-code, and flowchart of MOA

In each iteration of the MOA algorithm, all population members are updated through Phases 1 to 3. This process of updating the population according to Eqs. (4) to (10) continues until the final iteration. Throughout the algorithm, the best candidate solution is continuously updated and saved. Once the algorithm has run to completion, MOA returns the best candidate solution found as the solution to the problem. The steps of the proposed MOA are depicted as a flowchart in Fig. 1 and as pseudocode in Algorithm 1.
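Putting the pieces together, a minimal main loop consistent with this description might look as follows; it reuses the illustrative helper functions sketched in the preceding subsections, assumes minimization, and omits details such as bound handling that Algorithm 1 may specify.

```python
import numpy as np

def moa(objective, lb, ub, N=30, T=1000, seed=None):
    """A minimal sketch of the MOA loop: initialization followed by phases 1-3 per iteration."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = initialize_population(N, len(lb), lb, ub, rng)
    F, best_x, best_f = evaluate_population(X, objective)
    for t in range(1, T + 1):
        M = X[np.argmin(F)].copy()                                # the mother: current best member
        X, F = education_phase(X, F, M, objective, rng)           # phase 1: education
        X, F = advice_phase(X, F, objective, rng)                 # phase 2: advice
        X, F = upbringing_phase(X, F, lb, ub, t, objective, rng)  # phase 3: upbringing
        if F.min() < best_f:                                      # save the best candidate so far
            best_f, best_x = F.min(), X[np.argmin(F)].copy()
    return best_x, best_f
```

For example, `moa(lambda x: np.sum(x**2), lb=[-100]*10, ub=[100]*10)` would minimize a 10-dimensional sphere function with this sketch.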

Figure 1. Flowchart of MOA.

Algorithm 1. Pseudocode of MOA.

Computational complexity of MOA

In this subsection, the computational complexity of MOA is analyzed. Initializing MOA for an optimization problem has a complexity of \(O(Nm)\), where \(N\) is the number of population members and \(m\) is the number of decision variables of the problem. In each iteration, the MOA population members are updated in three phases, so the update process has a complexity of \(O(3NmT)\), where \(T\) is the maximum number of iterations of the algorithm. Therefore, the total computational complexity of MOA is \(O(Nm(1+3T))\), which simplifies to \(O(NmT)\).

Simulation analysis and results

In this section, the performance of the proposed MOA in solving optimization problems is evaluated by testing its efficiency on fifty-two standard benchmark functions, including unimodal (F1 to F7), high-dimensional multimodal (F8 to F13), and fixed-dimensional multimodal (F14 to F23) types88, as well as the CEC 2017 test suite (C17–F1 and C17–F3 to C17–F30)89. The quality of the results obtained from MOA is compared with twelve well-known metaheuristic algorithms, including GA, PSO, GSA, GWO, MVO, WOA, TSA, MPA, AVOA, WSO, and RSA. The control parameters are adjusted as specified in Table 1. To optimize functions F1 to F23, MOA and each competitor algorithm are run in twenty independent runs with 50,000 function evaluations each (i.e., \(\mathrm{FEs}=50{,}000\)). For the CEC 2017 test set, the proposed MOA and the competitor algorithms are run in fifty-one independent runs, each with \(10{,}000\cdot m\) function evaluations (i.e., \(\mathrm{FEs}=10{,}000\cdot m\)), where \(m\) is the number of problem variables, set to 10. The population size of MOA is set to 30 members. Six statistical indicators, including the mean, best, worst, standard deviation, median, and rank, are used to report the optimization results, and the mean is used as the ranking criterion for the metaheuristic algorithms on each benchmark function. The experiments were implemented in MATLAB R2022a on a 64-bit Core i7 processor running at 3.20 GHz with 16 GB of main memory.
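Although the experiments in the paper were run in MATLAB, the bookkeeping of these indicators can be illustrated with a short Python/NumPy sketch; the input is assumed to be the list of best objective values returned by the independent runs, and the rank index is omitted since it is computed across algorithms rather than across runs.

```python
import numpy as np

def summarize_runs(best_values):
    """Mean, best, worst, standard deviation, and median over independent runs."""
    v = np.asarray(best_values, dtype=float)
    return {"mean": v.mean(), "best": v.min(), "worst": v.max(),
            "std": v.std(ddof=1), "median": float(np.median(v))}
```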

Table 1 Assigned values to the control parameters of competitor algorithms.

Evaluation of unimodal benchmark functions

Table 2 presents the results of MOA and twelve competitor algorithms on seven unimodal functions F1 to F7, which are selected to evaluate the ability of metaheuristic algorithms in local search and exploitation. This evaluation aims to determine the algorithm’s ability to find the global optimum. The results show that MOA has achieved convergence to the global optimum for functions F1 to F6 with high exploitation ability. Additionally, MOA has performed the best among the competitor algorithms in solving F7. The analysis of the optimization results indicates that MOA has demonstrated superior performance in solving unimodal functions F1 to F7 due to its high ability in exploitation.

Table 2 Evaluation results of unimodal functions.

Evaluation of high dimensional multimodal benchmark functions

Table 3 reports the optimization results of MOA and the other competitor algorithms on six high-dimensional multimodal functions (F8 to F13). These functions were selected to evaluate the ability of metaheuristic algorithms in global search and exploration. The results show that MOA has outperformed the other algorithms and has been able to provide the global optimum for functions F9 and F11. Additionally, MOA is the best optimizer for benchmark functions F8, F10, F12, and F13. It is observed that the proposed MOA approach, with its high exploration power, has provided better results and superior performance in solving high-dimensional multimodal functions compared to the competitor algorithms.

Table 3 Evaluation results of high-dimensional multimodal functions.

Evaluation of fixed-dimensional multimodal benchmark functions

The authors evaluated the performance of the proposed MOA and other metaheuristic algorithms on ten fixed-dimension multimodal functions (F14 to F23). The goal was to investigate the algorithms’ ability to balance exploration and exploitation during the search process. The optimization results obtained using MOA and the competitor algorithms are reported in Table 4. Based on the simulation results, MOA is the best optimizer for F14, F15, F21, F22, and F23 functions. For functions F16 to F20, MOA has a similar mean performance compared to some competing algorithms. However, MOA has more favorable values for the std index, indicating a more effective performance in solving these functions. Overall, the analysis of the simulation results indicates that MOA, with its high ability to balance exploration and exploitation, performs better in solving fixed-dimension multimodal functions compared to the competitor algorithms.

Table 4 Evaluation results of fixed-dimensional multimodal functions.

Figure 2 shows boxplots of the performance of MOA and the competing algorithms on functions F1 to F23. The boxplots can be interpreted as follows. For functions F1 to F6, F9, and F11, MOA has converged to the global optimum with a standard deviation of zero across different runs, indicating that the proposed algorithm is robust in handling these functions. MOA also performed more effectively on other benchmark functions such as F7, F8, F10, F12, and F23: in addition to providing better values for the statistical indicators, the boxplots of these functions have a smaller area, less dispersion of results across runs, and a better mean value compared to the competitor algorithms. Figure 3 shows the convergence curves of MOA and the competitor algorithms in solving functions F1 to F23. The curves show that MOA converges at a suitable speed over successive iterations, performing an effective local search on functions F1 to F7 that prioritizes convergence to the optimal solution, and continuing the search of the problem-solving space without stalling at local optima on the multimodal functions F8 to F23.

Figure 2. Boxplot of performance of MOA and competitor algorithms in solving F1 to F23.

Figure 3. Convergence curves of performance of MOA and competitor algorithms in solving F1 to F23.

CEC 2017 test suite evaluation

This subsection evaluates MOA’s efficiency in handling the CEC 2017 test suite, which consists of 29 standard benchmark functions (C17–F1 and C17–F3 to C17–F30; C17–F2 is excluded because of its unstable behavior). Results of MOA and the competitor algorithms on this suite are reported in Table 5. The boxplot diagrams are shown in Fig. 4, and the convergence curves of the metaheuristic algorithms are drawn in Fig. 5. MOA is the top-performing optimizer for C17–F1, C17–F3 to C17–F6, C17–F8 to C17–F21, and C17–F23 to C17–F30. Overall, the analysis of the optimization results shows that MOA provides better outcomes for most of the benchmark functions and has superior performance compared to the competitor algorithms in handling the CEC 2017 test suite. The boxplot diagrams, especially for functions C17–F1, C17–F3, C17–F4, C17–F6, C17–F9, C17–F11 to C17–F23, C17–F27, C17–F28, and C17–F30, show that MOA, with a very low standard deviation and a smaller box area across different runs, provides more effective and robust performance on these functions. The analysis of the boxplots intuitively shows that MOA delivers superior performance compared to the competitor algorithms by providing better values for statistical indicators such as the mean and standard deviation. The convergence curves show that, on the unimodal functions C17–F1 and C17–F3, MOA converges toward the global optimum at a suitable speed with high exploitation and local search ability. On functions C17–F4 to C17–F30, it is evident that MOA moves toward better solutions based on an appropriate exploration ability during successive iterations, and this process continues until the final iterations.

Table 5 Evaluation results of CEC 2017 test suite.
Figure 4. Boxplot of performance of MOA and competitor algorithms in solving CEC 2017 test suite.

Figure 5. Convergence curves of performance of MOA and competitor algorithms in solving CEC 2017 test suite.

Statistical analysis

This subsection presents a statistical analysis comparing the performance of MOA with competitor algorithms to determine the significance of MOA’s superiority. The Wilcoxon signed-rank test90, a non-parametric statistical analysis used to detect significant differences between the means of two data samples, is employed to achieve this. The test uses a “\(p\)-value” index to determine whether there is a significant difference between the two data samples or not.

Table 6 presents the results of the Wilcoxon signed-rank test conducted on the performance of MOA and its competitor algorithms. A \(p\)-value less than 0.05 indicates that MOA's superiority over the corresponding algorithm is statistically significant.
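As an illustration of how such a pairwise comparison can be carried out, the snippet below uses SciPy's implementation of the Wilcoxon signed-rank test on placeholder data; the two vectors stand in for paired results (e.g., per-function means) of MOA and one competitor and are not values from the paper.

```python
from scipy.stats import wilcoxon

# placeholder paired samples; real data would come from Tables 2-5
moa_results   = [0.0, 1.2e-8, 3.4e-5, 0.98, 2996.3]
other_results = [4.1e-3, 7.7e-2, 1.9e-1, 1.05, 3019.8]

stat, p_value = wilcoxon(moa_results, other_results)
print(f"statistic={stat:.3f}, p-value={p_value:.4f}")
# a p-value below 0.05 is read as a statistically significant difference
```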

Table 6 Wilcoxon signed-rank test results.

Discussion

This section discusses the results, performance, advantages, disadvantages, and other aspects of the proposed MOA approach. MOA is a population-based metaheuristic algorithm that provides suitable solutions for optimization problems through a random search of the problem-solving space. This random search must be managed at both the global and local levels, and balanced between them during the search, so that the algorithm can, first, thoroughly scan all regions of the problem-solving space through global search to avoid getting stuck in local optima and, second, converge toward better solutions through local search by carefully scanning around promising solutions.

Because they have no local optima, the unimodal functions F1 to F7, as well as C17–F1 and C17–F3 from the CEC 2017 test suite, are suitable choices for evaluating the local search and exploitation ability of metaheuristic algorithms. These functions have only one extremum, and the primary goal of optimizing them is to challenge the ability of metaheuristic algorithms to converge to the global optimum. The optimization results show that MOA, with its high exploitation ability, has converged to the global optimum on functions F1 to F6 and to solutions very close to the global optimum on F7, C17–F1, and C17–F3. The high-dimensional multimodal functions F8 to F13 have many local extrema in addition to the main optimum; for this reason, they are suitable options for measuring the global search and exploration ability of metaheuristic algorithms. The optimization results show that MOA can identify the main optimal region of these functions, which is especially evident in its delivery of the global optimum for F9 and F11. The fixed-dimensional multimodal functions F14 to F23 and functions C17–F4 to C17–F30 from the CEC 2017 test suite challenge the ability of metaheuristic algorithms to balance exploration and exploitation. The results for these functions show that MOA, with a high ability to balance exploration and exploitation, has achieved suitable solutions for these benchmarks. The analysis of the simulation results indicates MOA's high ability in exploration, exploitation, and the balance between them during the search process. The statistically significant superiority of MOA's performance over the competing algorithms in handling the benchmark functions has been confirmed by the Wilcoxon signed-rank test.

The proposed MOA approach has several advantages for global optimization problems. The first advantage is that MOA has no control parameters in its design, and therefore no parameter tuning is required. The second advantage is its high effectiveness in dealing with various optimization problems across different sciences, as well as complex, high-dimensional problems. The third advantage is its strong ability to balance exploration and exploitation during the search process, which allows MOA to converge quickly to suitable values for the decision variables, especially in complex problems. The fourth advantage is its powerful performance in handling real-world optimization applications. Alongside these advantages, the proposed MOA approach also has limitations. The first limitation, shared by all metaheuristic algorithms, is that there is no guarantee of reaching the global optimum, owing to the random nature of the search. The second limitation is that, according to the NFL theorem, there is always the possibility that newer metaheuristic algorithms will be designed that perform better than MOA. The third limitation is that MOA cannot be claimed to be the best optimizer for all optimization tasks.

MOA for real-world applications

This section evaluates the performance of MOA in solving real-world optimization problems. Specifically, the proposed MOA approach is implemented on four engineering design optimization problems: tension/compression spring (TCS) design, welded beam (WB) design, speed reducer (SR) design, and pressure vessel (PV) design. The mathematical model and full description of these real-world applications are provided for TCS and WB in Ref.91, for SR in Ref.92, 93, and for PV in Ref.94.

The TCS problem is a design challenge in real-world applications to minimize the weight of the tension/compression spring. The schematic of this design is shown in Fig. 6. Its mathematical model is as follows:

Figure 6. Schematic of the TCS design.

$$Consider: X=\left[{x}_{1}, {x}_{2}, {x}_{3} \right]=\left[d, D, P\right],$$
$$Minimize: f\left(x\right)=\left({x}_{3}+2\right){x}_{2}{x}_{1}^{2}.$$

Subject to:

$${g}_{1}\left(x\right)= 1-\frac{{x}_{2}^{3}{x}_{3}}{71,785{x}_{1}^{4}} \le 0, \; {g}_{2}\left(x\right)=\frac{4{x}_{2}^{2}-{x}_{1}{x}_{2}}{12,566({x}_{2}{x}_{1}^{3})}+\frac{1}{5108{x}_{1}^{2}}-1\le 0,$$
$${g}_{3}\left(x\right)= 1-\frac{140.45{x}_{1}}{{x}_{2}^{2}{x}_{3}}\le 0, \; {g}_{4}\left(x\right)=\frac{{x}_{1}+{x}_{2}}{1.5}-1 \le 0.$$

With

$$0.05\le {x}_{1}\le 2,\quad 0.25\le {x}_{2}\le 1.3, \text{ and } 2\le {x}_{3}\le 15.$$
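To show how such a constrained design problem can be handed to a metaheuristic such as MOA, the sketch below encodes the TCS model above with a simple static penalty for constraint violations; the penalty coefficient and the penalty scheme itself are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def tcs_objective(x, penalty=1e6):
    """Tension/compression spring weight plus a static penalty on violated constraints."""
    x1, x2, x3 = x                                   # x = [d, D, P]
    f = (x3 + 2.0) * x2 * x1**2                      # objective: spring weight
    g = [                                            # constraints g_k(x) <= 0 as stated above
        1 - (x2**3 * x3) / (71785.0 * x1**4),
        (4 * x2**2 - x1 * x2) / (12566.0 * x2 * x1**3) + 1.0 / (5108.0 * x1**2) - 1,
        1 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]
    violation = sum(max(0.0, gk) for gk in g)        # only violated constraints contribute
    return f + penalty * violation

# decision-variable bounds: 0.05 <= x1 <= 2, 0.25 <= x2 <= 1.3, 2 <= x3 <= 15
lb, ub = np.array([0.05, 0.25, 2.0]), np.array([2.0, 1.3, 15.0])
```

The same penalty pattern can be applied, with the corresponding objectives and constraints, to the WB, SR, and PV models below.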

The WB problem is a real-world application in engineering to minimize the welded beam’s fabrication cost. The schematic of this design is shown in Fig. 7. Its mathematical model is as follows:

Figure 7. Schematic of the WB design.

$$Consider: \,X=\left[{x}_{1}, {x}_{2}, {x}_{3}, {x}_{4}\right]=\left[h, l, t, b\right].$$
$$Minimize: \,f\left(x\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4} \left(14.0+{x}_{2}\right).$$

Subject to:

$${g}_{1}\left(x\right)= \tau \left(x\right)-13,600 \le 0, \; {g}_{2}\left(x\right)= \sigma \left(x\right)-30,000 \le 0,$$
$${g}_{3}\left(x\right)= {x}_{1}-{x}_{4}\le 0, \; {g}_{4}(x) = 0.10471{x}_{1}^{2}+0.04811{x}_{3}{x}_{4} (14+{x}_{2})-5.0 \le 0,$$
$${g}_{5}\left(x\right)= 0.125 - {x}_{1}\le 0, \; {g}_{6}\left(x\right)= \delta \left(x\right)- 0.25 \le 0,$$
$${g}_{7}\left(x\right)= 6000 - {p}_{c} \left(x\right)\le 0,$$

where

$$\tau \left(x\right)=\sqrt{\left({\tau }^{\prime}\right)^{2}+2{\tau }^{\prime}{\tau }^{\prime\prime}\frac{{x}_{2}}{2R}+\left({\tau }^{\prime\prime}\right)^{2}}, \quad {\tau }^{\prime}=\frac{6000}{\sqrt{2}{x}_{1}{x}_{2}}, \quad {\tau }^{\prime\prime}=\frac{MR}{J},$$
$$M=6000\left(14+\frac{{x}_{2}}{2}\right), R=\sqrt{\frac{{x}_{2}^{2}}{4}+{\left(\frac{{x}_{1}+{x}_{3}}{2}\right)}^{2}},$$
$$J=2\sqrt{2}{x}_{1}{x}_{2}\left[\frac{{x}_{2}^{2}}{12}+{\left(\frac{{x}_{1}+{x}_{3}}{2}\right)}^{2}\right] , \sigma \left(x\right)=\frac{504000}{{x}_{4}{x}_{3}^{2}},$$
$$\delta \left(x\right)=\frac{65856000}{\left(30\times 1{0}^{6}\right){x}_{4}{x}_{3}^{3}} , {p}_{c} \left(x\right)=\frac{4.013\left(30\times 1{0}^{6}\right){x}_{3}{x}_{4}^{3}}{6\times 196}\left(1-\frac{{x}_{3}}{28}\sqrt{\frac{30\times 1{0}^{6}}{4(12\times 1{0}^{6})}}\right) .$$

With

$$0.1\le {x}_{1}, {x}_{4}\le 2 \text{ and }0.1\le {x}_{2}, {x}_{3}\le 10.$$

The SR problem is an engineering subject whose design goal is to minimize the weight of the speed reducer. The schematic of this design is shown in Fig. 8. Its mathematical model is as follows:

Figure 8. Schematic of the SR design.

$$Consider: X=\left[{x}_{1,} {x}_{2}, {x}_{3}, {x}_{4}, {x}_{5}{ ,x}_{6} ,{x}_{7}\right]=\left[b, m, p, {l}_{1}, {l}_{2}, {d}_{1}, {d}_{2}\right].$$
$$Minimize: f\left(x\right)=0.7854{x}_{1}{x}_{2}^{2}\left(3.3333{x}_{3}^{2}+14.9334{x}_{3}-43.0934\right)-1.508{x}_{1}\left({x}_{6}^{2}+{x}_{7}^{2}\right)+7.4777\left({x}_{6}^{3}+{x}_{7}^{3}\right)+0.7854\left({x}_{4}{x}_{6}^{2}+{x}_{5}{x}_{7}^{2}\right).$$

Subject to:

$${g}_{1}\left(x\right)=\frac{27}{{x}_{1}{x}_{2}^{2}{x}_{3}}-1 \le 0, \; {g}_{2}\left(x\right)=\frac{397.5}{{x}_{1}{x}_{2}^{2}{x}_{3}}-1\le 0,$$
$${g}_{3}\left(x\right)=\frac{1.93{x}_{4}^{3}}{{x}_{2}{x}_{3}{x}_{6}^{4}}-1\le 0, \; {g}_{4}\left(x\right)=\frac{1.93{x}_{5}^{3}}{{x}_{2}{x}_{3}{x}_{7}^{4}}-1 \le 0,$$
$${g}_{5}\left(x\right)=\frac{1}{110{x}_{6}^{3}}\sqrt{{\left(\frac{745{x}_{4}}{{x}_{2}{x}_{3}}\right)}^{2}+16.9\times {10}^{6}}-1\le 0,$$
$${g}_{6}(x) = \frac{1}{85{x}_{7}^{3}}\sqrt{{\left(\frac{745{x}_{5}}{{x}_{2}{x}_{3}}\right)}^{2}+157.5\times {10}^{6}}-1 \le 0,$$
$${g}_{7}\left(x\right)=\frac{{x}_{2}{x}_{3}}{40}-1 \le 0, \; {g}_{8}\left(x\right)=\frac{{5x}_{2}}{{x}_{1}}-1 \le 0,$$
$${g}_{9}\left(x\right)=\frac{{x}_{1}}{12{x}_{2}}-1 \le 0, \; {g}_{10}\left(x\right)=\frac{{1.5x}_{6}+1.9}{{x}_{4}}-1 \le 0,$$
$${g}_{11}\left(x\right)=\frac{{1.1x}_{7}+1.9}{{x}_{5}}-1 \le 0.$$

With

$$2.6\le {x}_{1}\le 3.6, 0.7\le {x}_{2}\le 0.8, 17\le {x}_{3}\le 28, 7.3\le {x}_{4}\le 8.3, 7.8\le {x}_{5}\le 8.3, 2.9\le {x}_{6}\le 3.9, \text{ and } 5\le {x}_{7}\le 5.5 .$$

The PV problem is a real-world application to minimize the total cost of the design. This design is shown in Fig. 9. Its mathematical model is as follows:

Figure 9. Schematic of the PV design.

$$Consider: X=\left[{x}_{1}, {x}_{2}, {x}_{3}, {x}_{4}\right]=\left[{T}_{s}, {T}_{h}, R, L\right],$$
$$Minimize: f\left(x\right)=0.6224{x}_{1}{x}_{3}{x}_{4}+1.778{x}_{2}{x}_{3}^{2}+3.1661{x}_{1}^{2}{x}_{4}+19.84{x}_{1}^{2}{x}_{3}.$$

Subject to:

$${g}_{1}\left(x\right)= -{x}_{1}+0.0193{x}_{3} \le 0, \; {g}_{2}\left(x\right)=-{x}_{2}+0.00954{x}_{3}\le 0,$$
$${g}_{3}\left(x\right)=-\pi {x}_{3}^{2}{x}_{4}-\frac{4}{3}\pi {x}_{3}^{3}+1,296,000\le 0, \; {g}_{4}\left(x\right)={x}_{4}-240 \le 0.$$

With

$$0\le {x}_{1},{x}_{2}\le 100 \text{ and } 10\le {x}_{3},{x}_{4}\le 200.$$

Table 7 presents the optimization results for the four engineering design problems, namely tension/compression spring (TCS), welded beam (WB), speed reducer (SR), and pressure vessel (PV), using MOA and the competitor algorithms. Figure 10 shows the boxplot diagrams resulting from the performance of MOA and the competitor algorithms in solving these four problems. The simulation results show that MOA achieved the best objective function values for all four problems: \(0.012665\) for TCS, \(1.724852\) for WB, \(2996.348\) for SR, and \(5882.901\) for PV. The statistical indicators also support MOA’s superiority over the competing algorithms. Thus, it can be concluded that the proposed MOA approach is an effective optimizer for real-world optimization problems.

Table 7 Evaluation results of real-world applications.
Figure 10. Boxplots of the performance of MOA and competitor algorithms on the real-world applications.

Conclusion and future works

The novelty and innovation of this article lie in introducing a new metaheuristic algorithm called the Mother Optimization Algorithm (MOA), inspired by the interactions between a mother and her children in three phases: education, advice, and upbringing. First, the implementation of MOA is explained, and its steps are mathematically modeled. The proposed approach is then evaluated on 52 benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions as well as the CEC 2017 test suite. The results on the unimodal functions showed that MOA has high exploitation and local search ability in converging toward the global optimum. The results on the high-dimensional multimodal functions showed that MOA, with high exploration and global search ability, can discover the main optimal region of the problem-solving space while avoiding getting stuck in local optima. The results on the fixed-dimensional multimodal functions and the CEC 2017 test set demonstrate MOA's high efficiency in solving optimization problems by maintaining a balance between exploration and exploitation. Furthermore, the performance of MOA was compared to twelve well-known metaheuristic algorithms, and it was shown to outperform most of them in terms of providing more appropriate solutions. Finally, MOA was tested on four engineering design problems, and the results indicate its effectiveness in handling real-world applications. The statistical analysis based on the Wilcoxon signed-rank test showed that MOA has a statistically significant superiority over the twelve compared metaheuristic algorithms in handling the optimization problems studied in this paper.

The proposed MOA approach opens up several research possibilities for further studies. One of the most promising research areas is the development of binary and multi-objective versions of the proposed approach. Another potential direction for future work is the application of MOA to optimization problems in various fields and real-world scenarios.