Introduction

Optimization is the process of finding a suitable set of variable values that minimizes (or maximizes) some objective under certain constraints. Optimization algorithms are widely used in engineering design1,2,3, engineering practice4,5,6, motion control7,8,9, and real-world task scheduling10,11,12. At this stage, algorithms for solving optimization problems fall into two main categories13. One is the traditional optimization method based on mathematical formulas14,15,16, and the other is the metaheuristic algorithm based on stochastic processes17,18,19.

Traditional optimization methods generally have rigorous mathematical proofs and a fixed set of computational formulas, and they can solve low-dimensional problems effectively. However, for high-dimensional problems, traditional methods have high computational complexity and easily fall into local optimal solutions. Compared with conventional methods, metaheuristic algorithms do not have strict proofs, but through the design of stochastic processes, they can effectively solve complex real-world problems, so they have been widely researched and applied in various fields.

Currently, metaheuristic algorithms can be broadly classified into four categories: metaheuristic algorithms based on biogenetic information (BBAs), metaheuristic algorithms based on the behavior or organization of natural organisms (NBAs), metaheuristic algorithms based on physical or chemical phenomena (PCBAs), and metaheuristic algorithms based on mathematical methods (MBAs).

Metaheuristic algorithms based on biological genetic information use the changes in the genetic information of organisms during reproduction as inspiration for algorithm construction. The most typical of these algorithms are genetic algorithms (GA)20 and differential evolution algorithms (DE)21. These algorithms are mainly based on the evolutionary process of organisms in the natural world and adopt the "survival of the fittest" principle to search the solution space. Cooperative co-evolutionary algorithms (CCEA)22, evolutionary mating algorithms (EMA)23, evolutionary field optimization algorithms (EFO)24, and the quantum-based avian navigation optimizer algorithm (QANA)25 belong to this type of algorithm. The main search mechanisms of such algorithms are crossover and mutation: crossover recombines elements of different variables, while mutation randomly resets an element of a variable to another value. Benefiting from these powerful search mechanisms, the GA algorithm, one of the originators of metaheuristic algorithms, is still widely used in various fields due to its strong scalability and fast convergence26,27,28.

Metaheuristic algorithms based on the behavior or organization of natural organisms are the largest class of algorithms. The main idea is to use the behaviors of various types of organisms as the inspiration for building algorithms, such as animal predation behavior, plant reproduction processes, and human social organization. Depending on the type of organism, they can be further classified into categories such as animal-based, plant-based, and human behavior-based metaheuristics. Among them, animal-based metaheuristic algorithms were the first to be developed. For example, particle swarm optimization (PSO)29 performs optimization by simulating birds' feeding behavior, and ant colony optimization (ACO)30 mimics the foraging behavior of ants. This category has been heavily studied in recent years. The starling murmuration optimizer (SMO)31, the evolutionary crow search algorithm (ECSA)32, moth-flame optimization (MFO)33,34, and the whale optimization algorithm (WOA)35 are very competitive algorithms in this category. Algorithms such as the dandelion optimizer (DO)36, the forest optimization algorithm (FOA)37, and the invasive weed optimization algorithm (IWO)38 are representatives of plant-based algorithms. As the preeminent representatives of intelligent creatures, humans and their group behavior have been heavily studied and applied in human-based algorithms. The human urbanization search algorithm (HUS)39, the human evolutionary optimization algorithm (HEOA)40, the human behavioral optimization algorithm (HBBO)41, the focus group algorithm (FG)42, the human learning optimization algorithm (HLO)43, and the brainstorming optimization algorithm (BSO)44 belong to this category. The main search mechanism of the NBAs is the linear combination, i.e., the formation of new variables as linear combinations of multiple variables, with the specific combinations and coefficients depending on the algorithm.

Metaheuristic algorithms based on physical or chemical phenomena use the laws of physics or chemical phenomena as the main inspiration for constructing the algorithm. One of the most representative algorithms is the simulated annealing algorithm (SA)45, which performs the search by simulating the property changes during the annealing of metals. Other competitive algorithms include the simultaneous heat transfer search (SHTS)46, the special relativity search algorithm (SRS)47, Young's double-slit experiment optimization (YDSE)48, the Fick's law algorithm (FLA)49, and the Franklin's law algorithm (FLIA)50. The main search mechanisms of these algorithms are the weight-based combination and the domain search. The weight-based combination computes weights from the current variables' function values and recombines the variables based on those weights; the calculation of the weights is usually related to the laws of physics. The domain search is a random search in a small area around the current variable.

Metaheuristics based on mathematical methods are a relatively new class of metaheuristics. The main feature of this class of algorithms is the use of specific mathematical methods as inspiration for building the algorithm. Some of these algorithms are quite competitive, but their search mechanisms are still essentially linear combinations or weight-based combinations, such as the gradient-based optimizer (GBO)51, generalized normal distribution optimization (GNDO)52, the geometric mean optimizer (GMO)53, the arithmetic optimization algorithm (AOA)54, and the subtraction-average-based optimizer (SABO)55. Other algorithms contribute more unique search mechanisms. The quadratic interpolation optimization algorithm (QIO)56, for example, proposes to use interpolation to generate new variables, and the triangulation topology aggregation optimizer (TTAO)57 creates new variables by rotation.

The specific algorithm classifications and the main search mechanisms are summarized in Table 1.

Table 1 Comparison of the main search mechanisms.

Table 1 compares the various types of search mechanisms regarding global search capability, local search capability, robustness, convergence speed, use of known information, and computational complexity. As seen from the table, each search mechanism has advantages and limitations. Therefore, when designing metaheuristic algorithms, multiple search mechanisms are usually combined to improve the algorithm's ability. One particularly telling indicator is the use of known information, mainly the values of the fitness function. In most search mechanisms, fitness values are used only as an indicator of the merit of the variables and are not involved in generating new variables. Weight-based combination and interpolation are the two search mechanisms that make full use of known information. The results show that the full use of known information helps enhance the search capability and convergence speed, but the computational complexity of both search mechanisms is high.

Therefore, this paper studies the search mechanism in depth and proposes one that can fully use the known information with low computational complexity. Fourier series are introduced into metaheuristic algorithms, the properties of the Fourier series are analysed, and a search mechanism using three symmetric points to find the optimal position within a specific projection plane is proposed. Based on this search mechanism, a symmetric projection optimizer (SPO) is constructed with strong search capability, fast convergence speed, and high robustness. The proposed search mechanism does not rely on a complex search process and realizes both global and local search modes through the same computational formulas, making the SPO algorithm's search process concise and efficient.

Two sets of experiments are designed to validate the SPO algorithm and its search mechanism. One set of experiments selects eight powerful mathematics-based algorithms for comparative validation; the other selects nine powerful algorithms from other classes. These two groups of algorithms are compared on seven test suites: 30-, 50-, and 100-dimensional CEC201758, CEC201959, CEC202060, and 10- and 20-dimensional CEC202261. The effectiveness of the SPO algorithm is also verified on four engineering problems and a spacecraft trajectory optimization problem. The results show that the SPO algorithm finds results closer to the optimum than the other algorithms under the same conditions.

The main contributions of this paper can be summarized as follows:

  1. Introducing fitting into the search process enhances the purposefulness and efficiency of the search. Using the fitness function value as the output and the distance within the projected plane as the input, a fast fitting of the projected plane using the Fourier function is realized by two symmetric points, which enables the metaheuristic algorithm to find the extreme points that may exist within the projected plane based on the fitting results.

  2. A new optimizer called the symmetric projection optimizer is constructed. Two search strategies based on the SP search mechanism are presented: the exploration and exploitation strategies. The SPO algorithm's overall performance is improved by combining the two strategies. In the exploration strategy, two individuals far apart are used to perform the SP search, thus realizing a global search of the entire projective surface. In the exploitation strategy, two closely spaced individuals implement the local search using the SP mechanism.

  3. The effectiveness of the SPO algorithm is confirmed by seven test suites drawn from CEC2017, CEC2019, CEC2020, and CEC2022. The results were evaluated using the Wilcoxon test, the Friedman test, and three metrics and compared with two groups of recent competitive algorithms (eight MBAs and nine OBAs).

  4. The practicality of the SPO algorithm is verified by four classical engineering cases and a real-world spacecraft trajectory optimization problem.

The remainder of the paper is structured as follows: in “Symmetric projection optimizer” section, the Fourier series and the symmetric projection search method are analysed and derived in detail, and the specific procedure of the SPO algorithm is given. “Performance tests” section explains the two groups of comparison algorithms and the test parameters, and the experimental results of the two groups on the seven test suites are presented and analysed. “Engineering problems tests” section validates the SPO algorithm through four practical engineering problems. A real spacecraft trajectory optimization problem is solved in “Spacecraft trajectory optimization using SPO” section and compared with 11 recent competitive algorithms. Finally, in “Conclusion and outlook” section, the research is summarized, the specific advantages of the proposed algorithm are analysed, and future research directions are given.

Symmetric projection optimizer

The Fourier series

The French mathematician Fourier proposed that any periodic function satisfying the Dirichlet conditions can be constructed by superimposing a sequence of sine and cosine functions of different frequencies. These infinite series composed of sine and cosine functions are called Fourier series. At the same time, nonperiodic functions defined on finite intervals can also be decomposed into Fourier series through periodic extension62. The Fourier series is widely used as an essential mathematical tool in signal processing and mathematical analysis63. In data analysis, Fourier series are used to fit and predict trends and cyclical variations in data to support decision-making and forecasting. The fitting equation is

$$ f(x) = \frac{{a_{0} }}{2} + \sum\limits_{n = 1}^{\infty } {\left[ {a_{n} \cos (n\omega x) + b_{n} \sin (n\omega x)} \right]} $$
(1)

In Eq. (1), the waveform produced by the sine and cosine functions when n = 1 is called the fundamental wave or first harmonic, and the waveforms for n > 1 are called the nth harmonics. As can be seen from the formula, fitting a function using the Fourier series means adjusting the amplitude, frequency, and phase of the fundamental wave and the harmonics so that their superposition is as close as possible to the original function. The process of fitting is shown in Fig. 1.
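As a quick numerical illustration of Eq. (1) (ours, not part of the original paper), the following Python sketch superimposes partial Fourier sums of a square wave; the test curve and the error metric are our choices, and the orders mirror Fig. 1b-e:

```python
# Partial Fourier sums of a square wave: the fundamental already tracks the
# broad trend, and higher harmonics only refine the details.
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
target = np.sign(np.sin(x))  # square wave over one period

def partial_sum(n_max):
    # Known series for the square wave: (4/pi) * sum over odd n of sin(n x)/n
    return sum(4.0 / (np.pi * n) * np.sin(n * x) for n in range(1, n_max + 1, 2))

for n_max in (1, 3, 9, 27):  # mirrors the orders shown in Fig. 1b-e
    rms = np.sqrt(np.mean((partial_sum(n_max) - target) ** 2))
    print(f"harmonics up to order {n_max}: RMS error {rms:.3f}")
```

The printed RMS error shrinks as harmonics are added, while the n = 1 term alone already follows the overall shape of the wave.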

Figure 1. The process of fitting a curve using Fourier series. (a) Observed curves from different dimensions. (b) Fitting a curve using the fundamental wave. (c) Fitting a curve using the 1st-3rd harmonics. (d) Fitting a curve using the 1st-9th harmonics. (e) Fitting a curve using the 1st-27th harmonics.

From Fig. 1a, it is clear that the Fourier series decomposes the function in the frequency domain. Figure 1b-e demonstrates the fitting effect using harmonics of different orders, where f1-f27 represent the fitting results after superposing harmonics up to the given order, and s1-s27 denote the individual nth harmonics. As can be seen from the figure, the higher the order of the harmonics used, the better the fit to the curve. It is worth noting that the fundamental wave already captures the function's broad trend, and the higher orders of the Fourier series serve only to fine-tune the details of the fitted curve, which has minimal bearing on the trend. Therefore, this paper proposes to estimate the curve's trend using only the fundamental wave of the Fourier series. The formula then becomes

$$ f(x) = p_{0} + p_{1} \sin (\omega x) + p_{2} \cos (\omega x) $$
(2)

Meanwhile, using the relationship between the trigonometric functions, Eq. (2) can also be rewritten as

$$ f(x) = p_{0} + \sqrt {p_{1}^{2} + p_{2}^{2} } \sin (\omega x + \varphi ),\quad \tan \varphi = \frac{{p_{2} }}{{p_{1} }} $$
(3)

According to Eq. (3), the final fit can be rewritten as a sine function. Furthermore, its extreme points can easily be found for a sine function limited to one period. The extreme points of the sin function are shown in Fig. 2.

Figure 2. The extreme points of the sine function.

For Eq. (3), the extreme points are

$$ \left\{ {\begin{array}{*{20}c} {x_{min} = \frac{1}{\omega }\left[ { - \arctan \left( {\frac{{p_{2} }}{{p_{1} }}} \right) - \frac{\pi }{2}} \right]} \\ {x_{max} = \frac{1}{\omega }\left[ { - \arctan \left( {\frac{{p_{2} }}{{p_{1} }}} \right) + \frac{\pi }{2}} \right]} \\ \end{array} ,} \right.\quad T = \frac{2\pi }{\omega } $$
(4)

More excitingly, for a given angular frequency ω, there are only three unknowns in Eq. (2), meaning we need only three points to estimate the overall trend of the function. The coefficients p0, p1, and p2 can then be obtained from the three known points by

$$ \left\{ {\begin{array}{*{20}c} {p_{0} + p_{1} \sin (\omega x_{0} ) + p_{2} \cos (\omega x_{0} ) = f_{0} } \\ {p_{0} + p_{1} \sin (\omega x_{1} ) + p_{2} \cos (\omega x_{1} ) = f_{1} } \\ {p_{0} + p_{1} \sin (\omega x_{2} ) + p_{2} \cos (\omega x_{2} ) = f_{2} } \\ \end{array} } \right. $$
(5)

Solving the above system of equations yields

$$ \left[ {\begin{array}{*{20}c} {p_{0} } \\ {p_{1} } \\ {p_{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 1 & {\sin (\omega x_{0} )} & {\cos (\omega x_{0} )} \\ 1 & {\sin (\omega x_{1} )} & {\cos (\omega x_{1} )} \\ 1 & {\sin (\omega x_{2} )} & {\cos (\omega x_{2} )} \\ \end{array} } \right]^{ - 1} \left[ {\begin{array}{*{20}c} {f_{0} } \\ {f_{1} } \\ {f_{2} } \\ \end{array} } \right] $$
(6)

With the above formulas, whenever three points are available, we can predict the trend of the curve and obtain its possible extreme points.
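A minimal sketch of this three-point procedure (Eqs. (4)-(6)) follows; the probe function, the sample locations, and the value of ω below are our illustrative assumptions, and arctan2 is used instead of arctan to avoid branch issues when p1 < 0:

```python
import numpy as np

def fit_fundamental(xs, fs, w):
    """Solve the linear system of Eqs. (5)/(6) for [p0, p1, p2]."""
    A = np.array([[1.0, np.sin(w * x), np.cos(w * x)] for x in xs])
    return np.linalg.solve(A, np.asarray(fs, dtype=float))

def predicted_minimum(p, w):
    """Eq. (4): location of the minimum of p0 + sqrt(p1^2 + p2^2) * sin(w*x + phi)."""
    phi = np.arctan2(p[2], p[1])  # branch-safe version of arctan(p2/p1)
    return (-phi - np.pi / 2.0) / w

f = lambda x: (x - 1.2) ** 2      # illustrative curve whose minimum is at x = 1.2
xs = [-2.0, 0.0, 2.0]             # three sample points
w = np.pi / 4.0                   # assumed angular frequency
p = fit_fundamental(xs, [f(x) for x in xs], w)
x_star = predicted_minimum(p, w)
print(f"predicted minimum at x = {x_star:.3f}, f(x) = {f(x_star):.4f}")
```

On this example, the fundamental-wave fit predicts a minimum near x ≈ 1.12, close to the true minimizer 1.2, using only three function evaluations.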

Symmetric projection search method

In the previous subsection, the Fourier series and its fundamental wave were analysed, and a method for predicting the curve's trend and finding the extreme points using three points was derived. However, there are two problems if one wants to use it in a metaheuristic algorithm: (1) In real optimization problems, the number of independent variables is usually in the tens or hundreds. If each dimension of the independent variables is dealt with separately, the computational complexity increases dramatically. (2) Each prediction requires inverting a third-order matrix; a single inversion is cheap, but this operation is used heavily during the search, so the total cost adds up. Therefore, this paper proposes a concise and easy-to-compute method, namely the symmetric projection search method.

First, select any two points of the optimization function and use the direction from the first point to the second point as the base direction, denoted as

$$ \left\{ \begin{aligned} & X_{0} = [x_{0}^{0} ,x_{0}^{1} ,...,x_{0}^{n} ] \hfill \\ & X_{1} = [x_{1}^{0} ,x_{1}^{1} ,...,x_{1}^{n} ] \hfill \\ & R = X_{1} - X_{0} \hfill \\ \end{aligned} \right. $$
(7)

Then, the point of symmetry of X1 about X0 is

$$ X_{2} = X_{0} - R $$
(8)

The signed distances of these two points from the first point satisfy

$$ d_{10} = \sqrt {\sum\limits_{i = 1}^{n} {(x_{1}^{i} - x_{0}^{i} )^{2} } } = - d_{20} = - \sqrt {\sum\limits_{i = 1}^{n} {(x_{2}^{i} - x_{0}^{i} )^{2} } } $$
(9)

Then, regarding the first point as the origin of the coordinates, the curve is fitted with a fundamental wave of angular frequency ω to obtain

$$ \left\{ {\begin{array}{*{20}l} {p_{0} + p_{1} \sin (0) + p_{2} \cos (0) = f_{0} } \hfill \\ {p_{0} + p_{1} \sin (\omega d_{10} ) + p_{2} \cos (\omega d_{10} ) = f_{1} } \hfill \\ {p_{0} + p_{1} \sin (\omega d_{20} ) + p_{2} \cos (\omega d_{20} ) = f_{2} } \hfill \\ \end{array} } \right. $$
(10)

Considering the relationship between the trigonometric functions, the above equation can be written as

$$ \left[ {\begin{array}{*{20}c} 1 & 0 & 1 \\ 1 & a & b \\ 1 & { - a} & b \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {p_{0} } \\ {p_{1} } \\ {p_{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {f_{0} } \\ {f_{1} } \\ {f_{2} } \\ \end{array} } \right] $$
(11)

where

$$ \left\{ \begin{gathered} a = \sin (\omega d_{10} ) \hfill \\ b = \cos (\omega d_{10} ) \hfill \\ \end{gathered} \right. $$
(12)

Using the Gaussian elimination method, one can obtain

$$ \left[ {\begin{array}{*{20}l} {p_{0} } \\ {p_{1} } \\ {p_{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}l} {f_{0} - p_{2} } \\ {\frac{{p_{0} + bp_{2} - f_{2} }}{a}} \\ {\frac{{(f_{1} + f_{2} )/2 - f_{0} }}{b - 1}} \\ \end{array} } \right] $$
(13)

Then, the coordinate of the optimal position can be obtained from the extreme-point location t_min given by Eq. (4):

$$ X_{new} = t_{min} \cdot R/d_{10} + X_{0} $$
(14)

The above case requires that X2, the point of symmetry of X1 about X0, is within the valid range. If X2 is outside the valid range, the midpoint of X0 and X1 can be taken as X2:

$$ X_{2} = \frac{{X_{0} + X_{1} }}{2} $$
(15)

In this case, X0 and X1 are symmetric about X2. The coordinate of the optimal position is then obtained by

$$ X_{new} = t_{min} \cdot R/d_{10} + X_{2} $$
(16)
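The following Python sketch assembles Eqs. (7)-(16) into a single SP step; it is our reading of the method, with arctan2 substituted for arctan and simple bound checks standing in for the paper's validity test:

```python
import numpy as np

def sp_step(f, X0, X1, w, lb, ub):
    """One symmetric projection search step along the X0 -> X1 direction."""
    R = X1 - X0                                   # base direction, Eq. (7)
    X2 = X0 - R                                   # symmetric point, Eq. (8)
    if np.any(X2 < lb) or np.any(X2 > ub):        # fallback of Eq. (15)
        center, plus, minus = (X0 + X1) / 2.0, X1, X0
    else:
        center, plus, minus = X0, X1, X2
    d = np.linalg.norm(plus - center)             # Eq. (9)
    f0, f1, f2 = f(center), f(plus), f(minus)
    a, b = np.sin(w * d), np.cos(w * d)           # Eq. (12)
    p2 = ((f1 + f2) / 2.0 - f0) / (b - 1.0)       # Eq. (13), Gaussian elimination
    p0 = f0 - p2
    p1 = (p0 + b * p2 - f2) / a
    t_min = (-np.arctan2(p2, p1) - np.pi / 2.0) / w   # Eq. (4)
    return center + t_min * R / np.linalg.norm(R)     # Eq. (14)/(16)

# Usage on the saddle function of Eq. (17), f(x, y) = x^2 - y^2:
f = lambda X: X[0] ** 2 - X[1] ** 2
lb, ub = np.full(2, -100.0), np.full(2, 100.0)
w = np.pi / np.linalg.norm(ub - lb)               # Eq. (20)
X_new = sp_step(f, np.array([30.0, 10.0]), np.array([60.0, 10.0]), w, lb, ub)
print("predicted point:", X_new, "f:", f(X_new))  # lands near (0, 10) on this line
```

Note that the whole step costs three function evaluations and a handful of scalar operations, with no matrix inversion.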

To further illustrate the effectiveness of the symmetric projection search method, the search results of a standard two-dimensional optimization function are shown. The function equation is shown below.

$$ f(x,y) = x^{2} - y^{2} ,x \in [ - 100,100],y \in [ - 100,100] $$
(17)

Some points were randomly selected, and the objective function was searched using the symmetric projection search method. The specific information on these points and the search situation is shown in Table 2. Meanwhile, the fitting and search results are shown in Fig. 3.

Table 2 Search result of the symmetric projection search method.
Figure 3. Search behaviors of the symmetric projection search method. (a) Case 1: search behaviors on f(x). (b) Case 1: search behaviors on the projection direction. (c) Case 2: search behaviors on f(x). (d) Case 2: search behaviors on the projection direction. (e) Case 3: search behaviors on f(x). (f) Case 3: search behaviors on the projection direction. (g) Case 4: search behaviors on f(x). (h) Case 4: search behaviors on the projection direction. (i) Case 5: search behaviors on f(x). (j) Case 5: search behaviors on the projection direction. (k) Case 6: search behaviors on f(x). (l) Case 6: search behaviors on the projection direction.

Cases 1 and 2 show the result of the symmetric projection search method when the function along the projection direction is convex. From Fig. 3b and d, it can be seen that the symmetric projection search method fits the function well along the projection direction and finds its minima. From Fig. 3a,c and Table 2, it can be seen that the points found are indeed the minima of the optimization function in the current projection direction.

Cases 3 and 4 show the result of the symmetric projection search method when the function along the projection direction is concave. From Fig. 3f and h, it can be seen that the fundamental wave gives a good estimate of the variation of the function and finds its minima. Figure 3e,g and Table 2 show that the points found are also the minima of the optimized function in the current projection direction.

Cases 5 and 6 show the search results of the symmetric projection search method when the function along the projection direction is monotone. Figure 3i-l shows that the symmetric projection method enables an efficient search in such situations.

It is particularly noteworthy that in cases 4-6, the extreme points found in the projection direction are also the optimal points of the optimized function over the whole domain of definition. The above results show that when the search direction is correct, only one search is needed for the symmetric projection method to find the optimal point of the function.

Search strategy under the symmetric projection search method

During the search process, many algorithms categorize the search into multiple types and use different update formulas to update the position of each individual. For example, a typical animal-based algorithm may contain various update procedures such as hunting, moving, exploring, and attacking64. Some algorithms divide the search process into two phases, exploration and exploitation, but still use different formulas for the updates65. The exploration phase searches the global scope, thus preventing the algorithm from converging to a local optimal solution. The exploitation phase performs a local search around the already found solutions, thus obtaining the local optimal solution. A large number of algorithms have shown that dividing the search into these two phases can effectively enhance the efficiency of the search. Therefore, in the symmetric projection optimizer, we also split the search into two phases, but unlike the conventional method, we use the same set of formulas for both and differentiate them only in the selection of points. The specific update formulas are:

$$ \left\{ \begin{aligned} & X_{3} = SP(\omega ,X_{i}^{loop} ,X_{1} ,X_{2} ,fit(X_{i} ),fit(X_{1} ),fit(X_{2} )) \\ & fit_{i}^{loop + 1} = \min (fit(X_{i} ),fit(X_{2} ),fit(X_{3} )) \\ & X_{i}^{loop + 1} = \begin{cases} X_{i} , & fit_{i}^{loop + 1} = fit(X_{i} ) \\ X_{2} , & fit_{i}^{loop + 1} = fit(X_{2} ) \\ X_{3} , & fit_{i}^{loop + 1} = fit(X_{3} ) \end{cases} \end{aligned} \right. $$
(18)

where

$$ X_{1} { = }\left\{ {\begin{array}{*{20}l} {X_{rand} + r \cdot (rand - 0.5) \cdot (ub - lb)} & {rand \le ep} \\ {X_{i} + r \cdot (rand - 0.5) \cdot (ub - lb)} & {rand > ep} \\ \end{array} } \right. $$
(19)

and

$$ \omega = \frac{\pi }{{\sqrt {\sum\limits_{i = 1}^{n} {(ub_{i} - lb_{i} )^{2} } } }} $$
(20)
$$ r = \frac{1.6}{{loop}} \cdot \frac{{1 + \sqrt {Dim} }}{{1 + e^{{10 \cdot (\frac{loop}{{Maxloop}} - \frac{1}{4})}} }} $$
(21)
$$ ep = \frac{0.92}{{1 + e^{{1.6 \cdot (loop - \frac{1.4}{{Maxloop}}) \cdot Maxloop}} }} $$
(22)

For the individual Xi whose coordinates are to be updated, an arbitrary known point is chosen in the exploration phase, and a random point is then selected around it by Eq. (19). In the exploitation phase, the random point is instead selected around Xi itself. The parameter ep controls the selection between the two modes. As shown in Fig. 4a, as the number of iterations increases, the percentage of points in the exploration phase gradually decreases, while the percentage of points in the exploitation phase gradually increases. In this way, search efficiency and the avoidance of local optima can be balanced well. Since many regions or projection surfaces have not yet been explored early in the iteration, more individuals need to be devoted to exploring unknown regions or projection surfaces to find more promising regions. As the number of iterations increases, the promising regions that have been identified need to be exploited to find the optimal result. Therefore, as the number of iterations increases, the proportion of individuals using the exploration strategy gradually decreases, and the proportion of individuals using the exploitation strategy gradually increases.

Figure 4. Convergence curve graphs with increasing number of iterations. (a) ep value. (b) r value.

The parameter r controls the range of the selected points. From Fig. 4b, it can be seen that the selection range shrinks rapidly with the number of iterations and then continues to shrink at a slower rate. After determining the second point X1, the third point can be calculated using Eqs. (8) and (15). Then, the fundamental wave with angular frequency ω fits the function. Extensive practice has shown that choosing this parameter through Eq. (20) is appropriate, as it can effectively estimate the global and local variations.
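For concreteness, a sketch of these control parameters follows. Eqs. (20) and (21) are transcribed directly; the exponent of Eq. (22) as printed saturates after the first iteration, so we read it as depending on the progress ratio loop/Maxloop, in the same way as Eq. (21). That normalization is our assumption:

```python
import numpy as np

def omega_param(lb, ub):
    """Eq. (20): angular frequency from the search-space diagonal."""
    return np.pi / np.linalg.norm(np.asarray(ub) - np.asarray(lb))

def r_param(loop, Maxloop, Dim):
    """Eq. (21): range of the randomly selected second point."""
    return 1.6 / loop * (1 + np.sqrt(Dim)) / (1 + np.exp(10 * (loop / Maxloop - 0.25)))

def ep_param(loop, Maxloop):
    """Eq. (22), exponent read as a function of loop/Maxloop (our assumption)."""
    return 0.92 / (1 + np.exp(1.6 * (loop / Maxloop - 1.4)))
```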

The pseudocode of SPO is provided in Algorithm 1.

Algorithm 1. Symmetric projection optimizer.
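Since the algorithm listing itself is an image in the original, the following is a hedged sketch of the main loop as we read Eqs. (18) and (19), reusing sp_step, omega_param, r_param, and ep_param from the sketches above. The greedy replacement here compares only the sampled point and the SP prediction, whereas Eq. (18) also keeps X2 as a candidate; such details are our assumptions, not the authors' code:

```python
import numpy as np

def spo(f, lb, ub, Np=30, Maxloop=200, seed=0):
    """A simplified SPO-style loop (a sketch, not the authors' implementation)."""
    rng = np.random.default_rng(seed)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(Np, dim))            # random initialization
    fit = np.array([f(x) for x in X])
    w = omega_param(lb, ub)                            # Eq. (20)
    for loop in range(1, Maxloop + 1):
        r = r_param(loop, Maxloop, dim)                # Eq. (21)
        ep = ep_param(loop, Maxloop)                   # Eq. (22)
        for i in range(Np):
            # Eq. (19): explore around a random individual, or exploit around X_i
            base = X[rng.integers(Np)] if rng.random() <= ep else X[i]
            X1 = np.clip(base + r * (rng.random(dim) - 0.5) * (ub - lb), lb, ub)
            X3 = np.clip(sp_step(f, X[i], X1, w, lb, ub), lb, ub)  # SP search
            for cand in (X1, X3):                      # greedy replacement, cf. Eq. (18)
                fc = f(cand)
                if fc < fit[i]:
                    X[i], fit[i] = cand.copy(), fc
    best = int(np.argmin(fit))
    return X[best], fit[best]

# Usage on a 5-dimensional sphere function:
lb, ub = np.full(5, -100.0), np.full(5, 100.0)
x_best, f_best = spo(lambda x: float(np.sum(x ** 2)), lb, ub)
print(f_best)
```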

Computational complexity analysis

For heuristic algorithms, the complexity is mainly related to the size of the population Np, the dimensions of the independent variable dim, the number of iterations maxloop, and the number of its parameters. The computational complexity of the SPO method is significantly reduced compared to other algorithms since the optimal position is updated only by a 1-dimensional projection at each update.

  1. Time complexity

    The time complexity of the SPO method consists of three main components: the initialization of the population, the fitness calculation, and the position update. The time complexity of initialization is mainly related to the population size, i.e., O(Np). The fitness calculation's time complexity is O(3·Np·Maxloop) because three new points are evaluated for each individual in each iteration. Position updating only compares the fitness values of the three positions, so its time complexity is O(Np·Maxloop). In summary, the time complexity of the SPO method is

    $$ \begin{aligned} O(SPO) & = O(initialization) + O(function\;evaluation) + O(position\;updating) \\ & = O\left( {Np} \right) + O\left( {3 \cdot Np \cdot Maxloop} \right) + O\left( {Np \cdot Maxloop} \right) \\ & \approx O\left( {4 \cdot Np \cdot Maxloop} \right) \\ \end{aligned} $$
    (23)
  2. Space complexity

    The SPO method must use O(Np·dim) space to save the current population's positions. At the same time, it needs O(dim·Maxloop) space to save the optimal position throughout the iteration process. Therefore, the space complexity of the SPO method is

    $$ \begin{aligned} O(SPO) & = O(current\;population) + O(optimal\;position) \\ & = O\left( {Np \cdot dim} \right) + O\left( {dim \cdot Maxloop} \right) \\ & = O\left( {\left( {Np + Maxloop} \right) \cdot dim} \right) \\ \end{aligned} $$
    (24)

Performance tests

In this section, the optimization performance of SPO is verified and evaluated using extensive test suites and compared with the same and different classes of algorithms, respectively. The test suites used in the experimentation and the evaluation criteria are given first. Then, the selected algorithms in each set are introduced separately. Finally, the experiments' results are given, and the convergence performance and stability of the SPO algorithm are quantitatively and qualitatively analysed from several perspectives.

Experimental design

To verify the performance of the SPO method more comprehensively, the CEC2017, CEC2019, CEC2020, and CEC2022 test suites are selected for testing in this section; the CEC2017 suite is tested with 30, 50, and 100 dimensions, and CEC2022 is tested with 10 and 20 dimensions.

The CEC2017 test suite includes 30 single-objective optimization functions: three unimodal functions, seven simple multimodal functions, ten hybrid functions, and ten composition functions. Compared with standard test functions, the CEC2017 test functions are more complex and test the algorithms' optimization capabilities well. At the same time, the difficulty of solving these functions rises gradually as the dimensionality increases. The CEC2019 test suite contains ten single-objective test functions, and the dimensionality of the variables in each function is fixed. The CEC2020 and CEC2021 test suites use the same test functions, and CEC2020 is chosen for testing in this paper. CEC2020 also contains ten functions, and 20 dimensions have been selected for the testing. CEC2022 has 12 functions: one unimodal function, four basic functions, three hybrid functions, and four composition functions. Compared to the other test suites, the CEC2022 test functions are more traditional.

As seen from the above presentation, the seven types of test suites chosen encompass the vast majority of cases, thus allowing for a comprehensive evaluation of the algorithms from various perspectives. In order to minimize the random factor in the tests, 50 rounds of tests were conducted in the experiment.

The mathematics-based algorithms selected for comparison (MBAs)

Firstly, some mathematics-based algorithms with excellent results in recent years were selected and run on the above test suites. The specific parameter settings are shown in Table 3. The selected algorithms include:

  • Sine Cosine Algorithm (SCA)66: The sine and cosine functions are introduced into the metaheuristic algorithm, and the search for the fitness function is realized by fluctuating outward through the mathematical models of the sine and cosine functions. The experiment proved that the search effect of SCA on 19 basic functions outperforms the other six algorithms.

  • RUN beyond the metaphor (RUN)67: The Runge-Kutta method for integral operations is introduced into the metaheuristic algorithm, which searches for the fitness function by changing the slope. RUN has been experimentally proven to be more efficient than the other eight algorithms on the CEC2017 test suite.

  • Arithmetic Optimization Algorithm (AOA)54: Four basic operations are introduced into the metaheuristic algorithm, and the search for fitness functions is realized by four operations: addition, subtraction, multiplication, and division. Experiments prove that the search effect of AOA on 29 test functions outperforms the other 11 algorithms.

  • Weighted mean of vectors (INFO)68: The weighted mean of vectors is introduced into the metaheuristic algorithm, which searches for the fitness function through weighted combinations of vectors. Experiments demonstrate that INFO outperforms the other six algorithms on the CEC2017 test suite.

  • Sinh Cosh Optimizer (SCHO)69: The hyperbolic sine and hyperbolic cosine functions are introduced into the metaheuristic algorithm, and the properties of hyperbolic sine and hyperbolic cosine functions realize the search for the fitness function. Experiments prove that the search effect of SCHO on the CEC2014 test suite is better than the other eight algorithms.

  • Exponential distribution optimizer (EDO)70: The exponential probability distribution model is introduced into the metaheuristic algorithm, and the exponential distribution model simulation searches the optimization strategy. Experiments prove EDO has some advantages over the other ten algorithms in CEC2014, CEC2017, CEC2020, and CEC2022.

  • Triangulation topology aggregation optimizer (TTAO)71: A similar-triangle topology from mathematics is introduced into the metaheuristic algorithm. It constructs multiple topological units through generalized aggregation and local aggregation to enable the search of the fitness function. Experiments demonstrate that TTAO has the best results on average on the CEC2017 test suite compared to ten other algorithms.

  • Quadratic Interpolation Optimization (QIO)56: The quadratic interpolation to find the minimum value method is introduced into the metaheuristic algorithm, and the search for the optimal position is realized by interpolating three points in each direction separately. Experiments prove that the search effect of QIO is better than the other 12 algorithms on the CEC2014 test suite.

Table 3 Parameter settings of MBAs.

The other-based algorithms selected for comparison (OBAs)

Two widely used metaheuristic algorithms and seven recently introduced algorithms were chosen for testing in the second set of experiments. The specific parameter settings are shown in Table 4. The chosen algorithms include:

  • Particle Swarm Optimization (PSO)29: The animal-based metaheuristic algorithm. The algorithm has been widely used in practical engineering, which proves its reliability and practicality. The test results of the PSO algorithm can be used as a benchmark for comparison.

  • Artificial gorilla troops optimizer (GTO)72: The animal-based metaheuristic algorithm. The optimization space is searched by simulating collective life among gorillas. Experiments prove that the performance of GTO outperforms eight other algorithms on 52 test functions, such as CEC2017. It has been widely used in practical engineering in recent years.

  • Dandelion Optimizer (DO)36: The plant-based metaheuristic algorithm. The search of the optimization space is realized by simulating the flight process of dandelion seeds. Experiments demonstrate that the DO outperforms the other nine algorithms on the CEC2017 test suite.

  • Snake Optimizer (SO)73: The animal-based algorithm. The search is achieved by simulating the behaviors of snakes, such as predation and mating. Experiments demonstrate that SO is superior to the other nine algorithms on the CEC2017 test suite.

  • Fick's Law Algorithm (FLA)49: The physics-based metaheuristic algorithm. The search of the optimization space is implemented using Fick's diffusion law. Experiments demonstrate that the search performance of FLA outperforms the other 12 algorithms on the CEC2017 test suite.

  • Human Evolutionary Optimization Algorithm (HEOA)40: The human-based metaheuristic algorithm. The optimization space is searched by simulating human behavior during global search. Experiments demonstrate that the search efficiency of HEOA outperforms the other ten algorithms on 23 test functions.

  • Kepler Optimization Algorithm (KOA)74: The physics-based metaheuristic algorithm. The optimization space is searched by updating the candidate solutions using Kepler motion laws. Experiments show that KOA is more efficient than the other 12 algorithms on four test suites, including CEC2014, CEC2017, CEC2020, and CEC2022.

  • Young's double-slit experiment optimizer (YDSE)48: The physics-based metaheuristic algorithm. The search of the search space is achieved by simulating the behavior of light in Young's double-slit experiment. Experiments prove that the optimization performance of YDSE outperforms the other 12 algorithms on CEC2014, CEC2017, and CEC2022.

  • Genghis Khan shark optimizer (GKSO)75: The animal-based metaheuristic algorithm. The search for the optimal position is achieved by simulating the predation process of Genghis Khan sharks. Experiments demonstrate that GKSO is stronger than eight other fish-inspired algorithms and nine further algorithms on two test suites, CEC2019 and CEC2022.

Table 4 Parameter settings of OBAs.

Performance indices

  1. Mean_i refers to the mean best fitness of the algorithm over 50 tests. Since heuristic algorithms are mostly randomly initialized, exceptional cases can make the algorithm appear better or worse than its actual performance. Therefore, the mean fitness is usually used to evaluate the algorithm's capability. Its calculation formula is

    $$ Mean_{i} = \frac{1}{50}\sum\limits_{n = 1}^{50} {fitness_{i}^{n} } $$
    (25)
  2. Std_i refers to the standard deviation of all best results of the algorithm over 50 tests. The smaller the standard deviation, the better the stability of the algorithm. Its calculation formula is

    $$ Std_{i} = \sqrt {\frac{1}{49}\sum\limits_{n = 1}^{50} {\left( {fitness_{i}^{n} - Mean_{i} } \right)^{2} } } $$
    (26)
  3. MeanRank_i refers to the mean ranking of the algorithm over the test functions of the current test suite. The mean rank measures the algorithm's overall performance on the test suite. Its calculation formula is

    $$ MeanRank_{i} = \frac{1}{K}\sum\limits_{k = 1}^{K} {rank_{i}^{k} } $$
    (27)
  4. Wilcoxon's rank sum test76 is a non-parametric hypothesis test mainly used to check whether two data sets follow the same distribution. In this paper, the 50 experimental results of the SPO algorithm are rank-sum tested against the 50 test results of each other algorithm. If the test rejects the hypothesis that the two fitness sets share the same distribution, the SPO algorithm has a significant difference from the comparison algorithm.

  5. The Friedman test is a multiple comparison test that compares the performance of several algorithms simultaneously across various functions. Its formula is shown in Eq. (28).

    $$ FT = \frac{12N}{{K(K + 1)}}\left( {\sum\limits_{k = 1}^{K} {R_{k}^{2} } - \frac{{K(K + 1)^{2} }}{4}} \right) $$
    (28)

    where N is the number of test functions, K is the number of algorithms, and Rk is the average ranking of the kth algorithm. In the Friedman ranking, the smaller the final rank value obtained, the better the algorithm's performance.
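All five indices can be computed with standard tools; a minimal sketch using SciPy follows (the data here are synthetic stand-ins for the 50 recorded best-fitness values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
results = {"SPO": rng.normal(1.0, 0.10, 50),   # 50 best-fitness values per algorithm
           "ALG2": rng.normal(1.2, 0.15, 50)}

mean_i = {a: v.mean() for a, v in results.items()}        # Eq. (25)
std_i = {a: v.std(ddof=1) for a, v in results.items()}    # Eq. (26), 1/(50-1) normalization

# Wilcoxon rank-sum test at the 5% level between SPO and one competitor
p = stats.ranksums(results["SPO"], results["ALG2"]).pvalue
print("significant" if p < 0.05 else "not significant", f"(p = {p:.4f})")

# Friedman test across K = 3 algorithms on N = 10 functions (synthetic scores)
per_function_scores = rng.normal(size=(10, 3))            # rows: functions, cols: algorithms
print(stats.friedmanchisquare(*per_function_scores.T))
```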

General performance analysis

Table 5 and Fig. 5 show the cumulative ranks of the two groups of algorithms on each test suite. It can be seen that the SPO algorithm has the best cumulative ranking in all tests. On the CEC2017 test suite, the cumulative rank of SPO decreases as the dimensionality of the variables increases. The cumulative rank was 82 when the variable dimension was 30 and 69 when the variable dimension was 100, a 15.8% decrease. The same occurred on the CEC2022 test suite, where the cumulative rank decreased by 23.7%. The results indicate that as the dimensionality increases, the performance of the SPO algorithm improves significantly relative to the other algorithms.

Table 5 Cumulative rank across all tests.
Figure 5. Comparison of the cumulative rank sum of all algorithms on all tests.

From the overall ranks, the SPO algorithm is ranked first in all test suites with a mean rank value of 2.68, meaning that in most cases, the SPO algorithm ranks in the top three in search performance across all test functions. The algorithm with the second-best mean rank is QIO, with a value of 4.72. The mean rank value of the SPO algorithm is 43.2% lower than that of QIO, which can be considered a significant advantage over the second place.

Quantitative analysis

Comparative analysis of the SPO algorithm and the other MBAs

Tables 6, 7, 8, 9, 10, 11 and 12 show the results of the SPO algorithm and the other MBAs on the different test suites. The best result is marked in bold.

Table 6 Comparison of results between MBAs and SPO on CEC2017 with 30 dimensional.
Table 7 Comparison of results between MBAs and SPO on CEC2017 with 50 dimensional.
Table 8 Comparison of results between MBAs and SPO on CEC2017 with 100 dimensional.
Table 9 Comparison of results between MBAs and SPO on CEC2019.
Table 10 Comparison of results between MBAs and SPO on CEC2020.
Table 11 Comparison of results between MBAs and SPO on CEC2022 10 dimensional.
Table 12 Comparison of results between MBAs and SPO on CEC2022 20 dimensional.

As seen in Tables 6, 7 and 8, on the 30-dimensional CEC2017 test, the SPO algorithm obtained a mean rank of 1.77 and ranked first overall. The SPO algorithm achieved the best results on 63.3% of all functions. On the 50-dimensional CEC2017 test, the SPO algorithm ranked first with a mean rank of 1.47. Furthermore, it achieved the best results on 73.3% of all functions. On the 100-dimensional CEC2017 test, the SPO algorithm also ranked first with a mean rank of 1.53 and achieved the best results on 73.3% of all functions. The above results show that the SPO algorithm has a dominant performance in all dimensions of CEC2017 compared to the other eight MBAs and is even more dominant in high dimensions.

As seen in Tables 9 and 10, on CEC2019 and CEC2020, the SPO algorithm obtained mean rankings of 2.1 and 1.8, respectively, and was ranked first on both test suites. Furthermore, SPO had the best search results on 70% of the CEC2019 functions and 60% of the CEC2020 functions.

As evidenced in Tables 11 and 12, the SPO algorithm ranked first on the 10-dimensional and 20-dimensional CEC2022 tests with mean ranks of 1.83 and 1.75, respectively. The SPO algorithm achieved the best results on 58.3% and 41.7% of the CEC2022 functions. Similar to the tests on CEC2017, SPO had a better mean rank in the higher dimension.

Overall, the SPO algorithm achieves the best results in 65.7% of the functions tested compared to the other MBAs and performs better in higher dimensional tests.

Comparative analysis of the SPO algorithm and the OBAs

Tables 13, 14, 15, 16, 17, 18 and 19 show the results of the SPO algorithm and the other OBAs in the seven types of tests. The best result is marked in bold.

Table 13 Comparison of results between OBAs and SPO on CEC2017 with 30 dimensional.
Table 14 Comparison of results between OBAs and SPO on CEC2017 with 50 dimensional.
Table 15 Comparison of results between OBAs and SPO on CEC2017 with 100 dimensional.
Table 16 Comparison of results between OBAs and SPO on CEC2019.
Table 17 Comparison of results between OBAs and SPO on CEC2020.
Table 18 Comparison of results between OBAs and SPO on CEC2022 with 10 dimensional.
Table 19 Comparison of results between OBAs and SPO on CEC2022 with 20 dimensional.

Tables 13, 14 and 15 show that the SPO algorithm achieved first rank on all three dimensions of the CEC2017 test. Moreover, as the dimensionality increased from 30 to 100 dimensions, the mean rank of SPO decreased from 1.8 to 1.63, and the share of functions on which it achieved the best position rose from 46.6% to 63.3%. From the test results on CEC2022 in Tables 18 and 19, again, as the dimensionality increases from 10 to 20, the mean rank of the SPO algorithm improves from 2.25 to 1.58, and the share of functions on which it achieves the optimal position improves from 41.7% to 58.3%. Furthermore, the SPO algorithm is also ranked first in both sets of tests. These results demonstrate that the SPO algorithm has excellent advantages in high-dimensional testing.

Naturally, the SPO algorithm also achieved first place in the tests on CEC2019 and CEC2020. As shown in Tables 16 and 17, the mean rank of SPO on the two sets of tests is 1.8 and 2.1, respectively, and it achieves the best results on 70% of the CEC2019 and 50% of the CEC2020 tests.

Overall, the SPO algorithm achieves the best results in 60% of the functions tested compared to the other OBAs.

Convergence analysis

The convergence and distribution of the SPO algorithm in different test suites are demonstrated in Figs. 6, 7, 8, 9, 10, 11, 12, 13.

Figure 6. Boxplots of MBAs and SPO for solving 30-dimensional CEC2017 (portion).

Figure 7. Convergence graphs of MBAs and SPO for solving 100-dimensional CEC2017 (portion).

Figure 8. Convergence graphs of MBAs and SPO for solving 20-dimensional CEC2020 (portion).

Figure 9. Boxplots of MBAs and SPO for solving 20-dimensional CEC2022.

Figure 10. Convergence graphs of OBAs and SPO for solving 50-dimensional CEC2017 (portion).

Figure 11. Boxplots of OBAs and SPO for solving 100-dimensional CEC2017 (portion).

Figure 12. Boxplots of OBAs and SPO for solving 20-dimensional CEC2020.

Figure 13. Convergence graphs of OBAs and SPO for solving 20-dimensional CEC2022 (portion).

From Figs. 7, 8, 10, and 13, it can be seen that the SPO algorithm converges quickly on most of the test functions, including unimodal, multimodal, hybrid, and composition functions. The convergence curves in Figs. 8 and 13 show that on low-dimensional tests such as CEC2020 and CEC2022, SPO converges faster, essentially completing the global search within 200 rounds and refining the results locally in subsequent iterations. The convergence curves for the high-dimensional tests in Figs. 8 and 11 show that the SPO algorithm has a clear advantage on high-dimensional complex problems. F7, F13, F21, F25, and F26 in Fig. 8 and F8, F14, F16 and F22 in Fig. 11 show that SPO not only converges faster but also finds better results than the other algorithms.

Figures 6, 9, 11 and 12 show the distribution of the results of each algorithm over the 50 rounds of tests. As can be seen from the figures, the SPO algorithm has a significantly narrower distribution than the other algorithms. This result indicates that SPO has higher stability and can better exclude the influence of random factors.

Statistical analysis

Wilcoxon's rank sum test

Tables 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33 demonstrate Wilcoxon's rank sum test results between the SPO algorithm and the other algorithms at the 5% significance level. A p value less than 0.05 indicates a significant difference between the two algorithms. Cases where the differences are not significant are marked in bold.

Table 20 Wilcoxon's rank sum test results between MBAs and SPO on 30-dimensional CEC2017.
Table 21 Wilcoxon's rank sum test results between MBAs and SPO on 50-dimensional CEC2017.
Table 22 Wilcoxon's rank sum test results between MBAs and SPO on CEC2017 with 100 dimensional.
Table 23 Wilcoxon's rank sum test results between MBAs and SPO on CEC2019.
Table 24 Wilcoxon's rank sum test results between MBAs and SPO on CEC2020.
Table 25 Wilcoxon's rank sum test results between MBAs and SPO on CEC2022 with 10 dimensional.
Table 26 Wilcoxon's rank sum test results between MBAs and SPO on CEC2022 with 20 dimensional.
Table 27 Wilcoxon's rank sum test results between OBAs and SPO on CEC2017 with 30 dimensional.
Table 28 Wilcoxon's rank sum test results between OBAs and SPO on CEC2017 with 50 dimensional.
Table 29 Wilcoxon's rank sum test results between OBAs and SPO on CEC2017 with 100 dimensional.
Table 30 Wilcoxon's rank sum test results between OBAs and SPO on CEC2019.
Table 31 Wilcoxon's rank sum test results between OBAs and SPO on CEC2020.
Table 32 Wilcoxon's rank sum test results between OBAs and SPO on CEC2022 with 10 dimensional.
Table 33 Wilcoxon's rank sum test results between OBAs and SPO on CEC2022 with 20 dimensional.

As can be seen from Tables 20, 21, 22, 23, 24, 25, 26, the numbers of cases in which the SPO algorithm has a significant advantage over the other MBAs in the 30-dimensional, 50-dimensional, and 100-dimensional CEC2017 tests are 228/240, 229/240, and 223/240, respectively, meaning that the SPO algorithm has a significant advantage in 94.4% of these cases. In CEC2019, CEC2020, 10-dimensional CEC2022, and 20-dimensional CEC2022, the numbers with significant advantages are 77/80, 76/80, 89/96, and 93/96, respectively, which means the SPO algorithm has significant advantages in 95.2% of these cases.

From Tables 27, 28, 29, 30, 31, 32, 33, it can be seen that the numbers of cases in which the SPO algorithm has a significant advantage over the other OBAs in the 30-dimensional, 50-dimensional, and 100-dimensional CEC2017 tests are 253/270, 253/270, and 258/270, meaning that the SPO algorithm has a significant advantage in 94.3% of these cases. In CEC2019, CEC2020, 10-dimensional CEC2022, and 20-dimensional CEC2022, the numbers are 86/90, 82/90, 98/108, and 99/108, respectively, which means the SPO algorithm has significant advantages in 92.2% of these cases.

In general, the SPO algorithm has a significant advantage over the compared algorithms in 94.1% of all cases.

The Friedman test

Figure 14 shows the Friedman test results of the SPO algorithm and the 8 MBAs on all 131 test functions. As can be seen from the figure, the SPO algorithm won first place by an absolute margin, with a rank value of 1.6947. The final ranking is SPO > QIO > INFO > RUN > TTAO > EDO > SCHO > SCA > AOA. Table 34 shows the specific Friedman test results on each test set. As the table shows, SPO has a clear advantage on all the test sets and is ranked first.

Figure 14. The overall Friedman rank of SPO and MBAs.

Table 34 Friedman test results with MBAs and SPO.

Figure 15 illustrates the Friedman test results for the SPO algorithm and the nine OBAs on all 131 test functions. Similar to the results for the MBAs, the SPO algorithm again takes first place by a wide margin, the GKSO algorithm takes second place, and the SO, GTO, FLA, and PSO algorithms are close to each other in performance. The final ranking is SPO > GKSO > SO > GTO > FLA > PSO > DO > YDSE > HEOA > KOA. Table 35 shows the results of the Friedman test for the SPO algorithm and the OBAs on the CEC test functions. The table shows that the SPO algorithm has a significant advantage over the other OBAs, with a much smaller rank value on each test set.

Figure 15. The overall Friedman rank of SPO and OBAs.

Table 35 Friedman test results with OBAs and SPO.

Engineering problems tests

Several common engineering problems are selected in this paper to verify the performance of the SPO algorithm and its effectiveness in real engineering. Among the comparison algorithms, some are selected from the two groups of algorithms in the previous section, and some algorithms that have been validated in real engineering over a long period are added. In the testing process, 50 rounds of the same test were performed for each algorithm with the same parameters as in the previous section.

Tension/compression spring design problem

The tension/compression spring design problem minimizes the spring mass under four constraints. The problem schematic is shown in Fig. 16, and its optimization variables include the wire diameter d, the mean coil diameter D, and the number of active coils N. The mathematical model can be found in the paper77. The experiment results are shown in Table 36. The best result is marked in bold. The convergence curve of the SPO algorithm in the experiment is shown in Fig. 17.

Figure 16. Schematic of the tension/compression spring design problem.

Table 36 Results of the comparative algorithms for solving the tension/compression spring design problem.
Figure 17. Convergence curves of the SPO algorithm for the tension/compression spring design problem.

The optimization results in Table 36 show that although most algorithms achieve the optimal value in their best run, the SPO algorithm is more stable than the others, with the minimum mean best fitness. It can also be seen from the convergence curve in Fig. 17 that the SPO algorithm converges quickly on the spring design problem while continuing to perform small-range searches around the optimal position in subsequent iterations to keep improving it.

Gear train design problem

The gear train design problem is a common problem in mechanical engineering. As shown in Fig. 18, its optimization variables are the numbers of teeth of the four gears. Its mathematical model can be found in the paper81. The experiment results are shown in Table 37. The best result is marked in bold. The convergence curve of the SPO algorithm in the experiment is shown in Fig. 19.

Figure 18. Schematic of the gear train design problem.

Table 37 Results of the comparative algorithms for solving the gear train design problem.
Figure 19. Convergence curves of the SPO algorithm for the gear train design problem.

From the results in Table 37, all the algorithms find the optimal location for this problem in the best case, but the SPO algorithm is far better than the other algorithms in terms of the mean best fitness over the 50 rounds of tests, with a mean best fitness more than two orders of magnitude lower than the other algorithms. From the convergence curve shown in Fig. 19, the SPO algorithm found the optimal solution in only a small number of iterations.

Pressure vessel design problem

Pressure vessel design problems are common in the actual manufacturing process. The goal is to withstand a given pressure at minimum cost. The schematic diagram of the problem is shown in Fig. 20, and there are four main optimization variables: the shell thickness (Ts), the head thickness (Th), the inner radius (R), and the cylindrical length (L). Its mathematical model can be found in the paper82. The experiment results are shown in Table 38. The best result is marked in bold. The convergence curve of the SPO algorithm in the experiment is shown in Fig. 21.

Figure 20. Schematic of the pressure vessel design problem.

Table 38 Results of the comparative algorithms for solving the pressure vessel design problem.
Figure 21. Convergence curves of the SPO algorithm for the pressure vessel design problem.

From the results in Table 38, the SPO algorithm is ranked first and much better than the other algorithms in the mean best fitness over the 50 rounds of tests, although it does not perform as well as the PSO and GTO algorithms in the best case. Meanwhile, from the convergence curve in Fig. 21, it can be seen that the SPO algorithm searches near the optimal position very quickly and keeps refining the optimal position in subsequent iterations, proving the SPO algorithm's effectiveness on this problem.

Planetary-gear-train design optimization problem

The planetary-gear-train design optimization problem is a common problem in the automotive design process. The main objective is to reduce the maximum error of the transmission ratio during automobile use. The schematic diagram of the problem is shown in Fig. 22, and there are nine main optimization variables, six of which are numbers of gear teeth required to be integers. Its mathematical model can be found in the paper83. The experiment results are shown in Table 39. The best result is marked in bold. The convergence curve of the SPO algorithm in the experiment is shown in Fig. 23.

Figure 22. Schematic of the planetary-gear-train design optimization problem.

Table 39 Results of the comparative algorithms for solving the planetary-gear-train design optimization problem.
Figure 23. Convergence curves of the SPO algorithm for the planetary-gear-train design optimization problem.

From the results in Table 39, the SPO algorithm is ranked first in both best fitness and mean best fitness. It can be seen that both the PSO and SSA algorithms have enormous mean fitness values, which indicates that these two algorithms fail to solve the problem effectively in some cases. The convergence curves in Fig. 23 show that the SPO algorithm still converges quickly despite the increased number of variables compared with the other three engineering problems. This result shows that the SPO algorithm also performs well when facing complex problems.

Spacecraft trajectory optimization using SPO

With the continuous development of aerospace technology, spacecraft have become essential to both production and daily life. Research on spacecraft trajectories is increasingly active, making spacecraft trajectory optimization necessary84. The spacecraft trajectory optimization problem considered in this section is the trajectory optimization of a single spacecraft that must fly by multiple spacecraft in the same orbital plane; its schematic diagram is shown in Fig. 24.

Figure 24 Schematic of the spacecraft trajectory optimization problem.

As can be seen from the figure, the chaser needs to fly by the targets sequentially. The main difficulty is that all spacecraft are in motion, so the position of each target differs at every moment. At the same time, the main variables dt are continuous over a wide range, and the number of variables is large, which makes the spacecraft trajectory optimization problem highly complex.

The objective function of the spacecraft trajectory optimization problem can be expressed as

$$ \text{Minimize} \quad f(x) = \sum\limits_{i = 0}^{n - 1} dv_{2i} $$
(29)

where dv2i denotes the change in the velocity of the chaser spacecraft at the moment t2i, which can be calculated by solving the Lambert problem defined by the time interval dt2i+1 and the positions of the chaser spacecraft at the moments t2i and t2i+1.
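A minimal sketch of how Eq. (29) could be evaluated for one candidate solution is given below. The helpers solve_lambert, propagate, and target_position are hypothetical placeholders for a Lambert solver, a two-body propagator, and a target ephemeris; they are not functions from the paper, and the loop structure is only our reading of the problem description.

```python
import numpy as np

def total_delta_v(dt, r0, v0, target_position, solve_lambert, propagate):
    """Illustrative evaluation of Eq. (29), not the authors' implementation.

    dt              : 2n time intervals [dt_0, ..., dt_{2n-1}] (the variables)
    r0, v0          : chaser position and velocity at the initial epoch
    target_position : hypothetical helper, (i, t) -> position of target i at t
    solve_lambert   : hypothetical helper, (r1, r2, tof) -> (v1, v2)
    propagate       : hypothetical Kepler propagator, (r, v, tof) -> (r, v)
    """
    t, total = 0.0, 0.0
    r, v = np.asarray(r0, float), np.asarray(v0, float)
    for i in range(len(dt) // 2):
        # Coast for dt_{2i}; the burn dv_{2i} is applied at the moment t_{2i}.
        r, v = propagate(r, v, dt[2 * i])
        t += dt[2 * i]
        # Lambert arc of duration dt_{2i+1} ending at target i's future position.
        tof = dt[2 * i + 1]
        r_tgt = target_position(i, t + tof)
        v1, v2 = solve_lambert(r, r_tgt, tof)
        total += np.linalg.norm(np.asarray(v1) - v)   # dv_{2i}
        # After the flyby the chaser keeps the arrival velocity of the arc.
        r, v = np.asarray(r_tgt, float), np.asarray(v2, float)
        t += tof
    return total
```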

The constraints of the problem consist of three main categories. The first category is the time constraint: each time interval must lie between specified minimum and maximum values.

$$ \Delta t_{min} \le \Delta t_{i} \le \Delta t_{max} $$
(30)

The second type of constraint is the position constraint: the distance between the two spacecraft must be within the tolerance εr when the chaser flies by the target.

$$ \left\| {R_{chaser} - R_{target} } \right\| \le \varepsilon_{r} $$
(31)

The third type of constraint is the velocity constraint, where each velocity increment should be less than the maximum velocity increment the chaser can apply.

$$ \left| {dv_{i} } \right| \le dv_{max} $$
(32)
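One common way to hand constraints (30)-(32) to an unconstrained metaheuristic is a static penalty added to the objective. The paper does not state its exact constraint-handling scheme, so the weight and structure below are assumptions.

```python
import numpy as np

def constrained_fitness(dv_total, dt, miss, dv_list,
                        dt_min, dt_max, eps_r, dv_max, penalty=1e6):
    """Fold Eqs. (30)-(32) into the objective via a static penalty term.

    dv_total : objective value of Eq. (29) for this candidate
    dt       : the time intervals; miss : flyby miss distances;
    dv_list  : magnitudes of the applied velocity increments.
    """
    dt, miss, dv_list = map(np.asarray, (dt, miss, dv_list))
    viol = (np.maximum(0.0, dt_min - dt).sum()                    # Eq. (30), lower
            + np.maximum(0.0, dt - dt_max).sum()                  # Eq. (30), upper
            + np.maximum(0.0, miss - eps_r).sum()                 # Eq. (31)
            + np.maximum(0.0, np.abs(dv_list) - dv_max).sum())    # Eq. (32)
    return float(dv_total + penalty * viol)
```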

The initial values for all spacecraft are shown in Table 40. To further validate the performance of SPO, 11 recently published competitive algorithms were additionally selected for comparison in this section; all were published after 2023, and 4 were published in 2024. The parameters of all algorithms were set according to their original papers; the population size was set to 60, and the number of iterations to 300. To verify the robustness of the algorithms, each algorithm was run 50 times with random initialization. The specific test results are shown in Table 41.
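This test protocol (50 randomized runs, population size 60, 300 iterations, reporting mean, standard deviation, best, and worst) can be reproduced with a small harness like the sketch below; the call signature assumed for each algorithm is hypothetical.

```python
import numpy as np

def benchmark(algorithms, objective, bounds,
              runs=50, pop_size=60, max_iter=300, seed=0):
    """Repeat each algorithm `runs` times and collect the statistics
    reported in Table 41. Each `algo` is assumed to return the final
    best fitness of one run; that signature is an assumption."""
    rng = np.random.default_rng(seed)
    report = {}
    for name, algo in algorithms.items():
        finals = np.array([
            algo(objective, bounds, pop_size=pop_size,
                 max_iter=max_iter, seed=int(rng.integers(2**31)))
            for _ in range(runs)
        ])
        report[name] = {"mean": finals.mean(), "std": finals.std(),
                        "best": finals.min(), "worst": finals.max()}
    return report
```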

Table 40 Initial values of all spacecraft for the spacecraft trajectory optimization problem.
Table 41 Results of the comparative algorithms for solving the spacecraft trajectory optimization problem.

As can be seen in Table 41, the SPO algorithm outperforms the other algorithms in all four metrics: mean, standard deviation, best result, and worst result. In terms of mean value, the results of the SPO algorithm are much better than those of the other algorithms, and only the GO algorithm comes close. The small standard deviation indicates that the SPO algorithm is highly robust. In terms of best and worst results, the worst result of the SPO algorithm is even better than the best results of some algorithms. These results demonstrate that the SPO algorithm has strong search ability and robustness.

Figure 25 shows the variation of the mean best fitness of each algorithm. As can be seen from the figure, the SPO algorithm descends and converges fastest among all the algorithms. For a more detailed analysis of convergence, Table 42 reports the results every 50 iterations. The table shows that, compared with the other algorithms, the SPO algorithm converges to better results faster and continues to improve them. All of this demonstrates the strong search capability of the SPO algorithm.

Figure 25 Mean convergence curves of each algorithm for the spacecraft trajectory optimization problem.

Table 42 Mean best fitness of the comparative algorithms every 50 iterations for the spacecraft trajectory optimization problem.

Conclusion and outlook

In this paper, the powerful mathematical tool of the Fourier series is successfully applied to the search process of metaheuristic algorithms. The fundamental wave of the Fourier series is used to search for the optimal position on a specific projection plane in space, and this process is completed quickly using three symmetric points. This search process is called the symmetric projection search method, and on its basis a symmetric projection optimizer (SPO) is constructed. In SPO, both the global and local search modes are accomplished with a single set of update procedures, which is achieved by controlling the distance between the three points.
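The conclusion summarizes the mechanism without reproducing its update equations, so the following one-dimensional sketch only illustrates our reading of the idea: fit the fundamental wave through three symmetric samples and move to the minimizer of that wave. The function name, the choice ωd = π/3, and the wrapping rule are assumptions, not the authors' exact procedure.

```python
import math

def fundamental_wave_step(x0, d, f_minus, f0, f_plus, omega_d=math.pi / 3):
    """Fit g(x) = a0 + a1*cos(w*(x - x0)) + b1*sin(w*(x - x0)) through the
    three symmetric samples (x0 - d, x0, x0 + d) and return the x that
    minimizes g within one period around x0."""
    w = omega_d / d
    c, s = math.cos(omega_d), math.sin(omega_d)
    # Coefficients from the 3x3 linear system given by the three samples.
    b1 = (f_plus - f_minus) / (2.0 * s)
    a1 = (0.5 * (f_plus + f_minus) - f0) / (c - 1.0)
    # (the offset a0 = f0 - a1 does not affect the minimizer's location)
    # a1*cos(t) + b1*sin(t) = A*cos(t - phi) with A >= 0, minimized at t = phi + pi.
    phi = math.atan2(b1, a1)
    t_min = phi + math.pi
    # Wrap into (-pi, pi] so the candidate stays within one period of x0.
    t_min = (t_min + math.pi) % (2.0 * math.pi) - math.pi
    return x0 + t_min / w
```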

The SPO algorithm has been tested on seven CEC test sets, namely three dimensions of CEC2017, CEC2019, CEC2020, and two dimensions of CEC2022. It has also been tested on four real-world engineering problems and a spacecraft trajectory optimization problem. Powerful MBAs and NBAs proposed in recent years were chosen for the comparison experiments. The experiments show that the SPO algorithm ranks first across all tests against all compared algorithms and performs even better on high-dimensional problems. Meanwhile, the Wilcoxon rank-sum test results show that the SPO algorithm holds a significant advantage over the compared algorithms in 94.6% of all tests.

Based on the experimental results, the main findings of this article are summarized as follows:

1. Successful application of the fundamental wave of the Fourier series to the search process provides a new search mechanism for metaheuristic algorithms.

2. Using symmetric points in the same projection plane simplifies the computational process and improves computational efficiency.

3. The search process is simplified by using the same formula to complete both the global exploration and the local exploitation processes.

4. The SPO algorithm has few control parameters.

Although the SPO algorithm achieves excellent results across all tests, it still has certain limitations, mainly in the local search. Although the SP search mechanism handles the local search effectively, the optimal values found indicate that the SPO algorithm still has room for improvement. Therefore, future work will focus on two aspects: one is to enhance the local search capability of the SPO algorithm by introducing additional mechanisms, and the other is to apply the SPO algorithm to a wider range of spacecraft trajectory optimization problems.