Introduction

Optimization theory is a significant branch of computing that focuses on how to select the optimum solution from a pool of candidate solutions. It offers a framework for formulating and solving complex optimization problems, particularly models constrained by significant restrictions, having many objectives, or involving complex multivariable systems. Applications of optimization theory can be found in many disciplines: computer science1, engineering design2,3,4,5, filter design6,7,8, offshore drilling9, semi-submersible platform design10, and control parameter optimization11. Practice has demonstrated that optimization technologies can increase system effectiveness, allocate resources appropriately, and lower energy usage. A few of the established optimization algorithms are Alpine Skiing Optimization (APS)12, the Coronavirus Mask Protection Algorithm (CMPA)13, and the Arithmetic Optimization Algorithm14. The superiority of optimization technologies becomes even more noticeable as the complexity of the optimization problem rises, and such problems have grown significantly more difficult over the last decades. The focus of many scholars has therefore shifted to optimization algorithms. New advancements in optimization algorithms fall into two primary categories: deterministic and meta-heuristic algorithms. Deterministic approaches exploit the problem's analytic characteristics to reach an exact or approximate global solution15. Deterministic optimization approaches exist for both convex problems (with only one optimal solution) and non-convex problems. Techniques for solving convex problems include Linear Programming (LP) and Non-linear Programming (NLP) models. Techniques for solving non-convex problems include Integer Programming (IP), Non-convex Non-linear Programming (NNLP), Mixed-Integer Non-linear Programming (MINLP), and Mixed-Integer Linear Programming (MILP)16. Figure 1 presents the grouping of deterministic optimization algorithms.

Figure 1

Taxonomy of deterministic optimization algorithms.

Deterministic optimization approaches are effective for a variety of problems, but they may struggle to provide accurate solutions to problems that are sophisticated, highly nonlinear, or involve a large number of variables17. In answer to these limitations, meta-heuristic algorithms (MAs) were introduced. MAs are particularly suitable for complex optimization problems. Since they are non-deterministic, they do not rely on a predetermined set of guidelines or steps to address a particular issue. Instead, they search the solution space and identify the best answers using randomization and probabilistic methods. Due to their adaptability, they can deal with ambiguities in problem formulation and complex challenges. MAs can be grouped into Swarm Intelligence Algorithms (SI), Evolutionary-Based Algorithms (EB), and Physics-Based Algorithms (PB)18. Physics- and mathematics-based algorithms are created from mathematical and physical natural principles. One of these is the Gravitational Search Algorithm (GSA)19, influenced by Newton's second law and the law of universal gravitation. This algorithm searches for the best possible answer to a problem by repeatedly shifting the population's particle positions within the search space using their mutual gravitational attraction. The ideal solution is discovered when a particle reaches the ideal spot. EB algorithms are based on the natural biological evolution of species. One example is the Differential Evolution (DE) algorithm introduced by Storn and Price20. DE has drawn considerable interest and has been applied successfully in a number of contexts. Although DE produced better results than traditional approaches, it showed premature convergence to a local minimum in complex search spaces21. SI draws its inspiration mostly from biological systems. It mimics the cooperative behavior of sociable animal groups in their attempts to survive. The Ant Colony Optimization algorithm is a popular SI algorithm that mimics how ant colonies behave22. It is based on the ants' ability to find short, direct routes between their colony and food sources by using chemical pheromone trails as a form of indirect communication. Such algorithms are typically capable of handling complex problems. Major applications of SI algorithms include the creation of smart strategies for the streamlined transportation of large products, determining the shortest path between two locations23, and impulse response filter design24,25,26. Another example is the SCSO introduced by Seyyedabbasi and Kiani, which mimics the lifestyle and unique abilities of the sand cat27. Although SCSO has been employed to address engineering optimization challenges and several test functions, it still suffers from entrapment in local optima, premature convergence, and delayed convergence because sound frequency guides each sand cat toward the prey. Based on the frequency intensity, the sand cat either searches for or attacks the prey; this prey-following mechanism may trap the sand cat in a local solution, causing poor convergence.

In response, this study proposes a Sand Cat Swarm Optimization based on Dynamic Pinhole Imaging (DPI) and the Golden Sine Algorithm (Gold-SA), called DGS-SCSO. While there is a plethora of existing metaheuristic algorithms and improved SCSO variants, the novelty of DGS-SCSO lies in its unique combination of DPI and Gold-SA with the existing Sand Cat Swarm Optimization (SCSO), a combination not found in previously modified versions of SCSO. Furthermore, the No Free Lunch theorem states that no single algorithm can be suitable for every problem28; hence, DGS-SCSO strategically utilizes DPI, a more precise version of opposition-based learning, to initialize a population with diverse solutions, improving the chances of locating the global optimum. Meanwhile, Gold-SA is used to move the best sand cat closer to the optimal solution, encouraging rapid convergence and exploitation. Gold-SA facilitates a continual shrinking of the problem space, allowing the algorithm to concentrate on areas more likely to yield globally optimal solutions. The integration of DPI and Gold-SA into SCSO not only distinguishes DGS-SCSO from existing algorithms but also addresses the algorithmic deficiencies observed in the original SCSO, making DGS-SCSO a valuable addition to the metaheuristic algorithms in the literature. This research makes several contributions, including the proposal of a new optimization algorithm named DGS-SCSO. The effectiveness of DGS-SCSO is assessed on 20 classic benchmark functions, 10 CEC 2019 competition benchmark functions, and two engineering problems, and the algorithm is contrasted against seven recent metaheuristic algorithms. The results of DGS-SCSO are analysed and interpreted using several methods, ensuring a detailed assessment of the new optimizer's effectiveness.

The paper is structured as follows: Sect. “Related work” provides an overview of relevant research, while the original SCSO algorithm is presented in Sect. “Original SCSO”. Sect. “Proposed DGS-SCSO” explains the improvement strategies and introduces the proposed DGS-SCSO algorithm. Sections “Analysis of complexity”, “Experiments and discussion”, and “Application of engineering problem” describe the complexity analysis, the experiments, and the engineering applications, respectively, before the conclusion.

Related work

Although the method was introduced only recently, some studies have been done on improving SCSO and tackling the previously mentioned limitations. Arasteh et al. introduced a novel variant of the SCSO algorithm for software module clustering29. Their goal was to provide optimal clusters for source codes' dependency graphs. SCSO was revised to maximize its position-updating stage to obtain better results. Another major change was to add a controlled mutation technique, such as that seen in the Genetic Algorithm (GA), to boost heterogeneity and efficiency. Ten common functions were used to rate how well the suggested method performed. In terms of overall success, convergence time, and modularization quality, the proposed algorithm outperformed the algorithms it was compared against. Li et al. introduced a Stochastic and Elite-based SCSO (SE-SCSO)30. In the proposed SE-SCSO, Li et al. improved the convergence speed, local exploitation, and exploration of the traditional SCSO with a periodic non-linear adjustment process. Overall efficiency and convergence capacity were enhanced by using opposition and reflection learning processes. The validity of the suggested improvements was supported by the experimental results. Iraji et al. suggested a hybridized strategy based on chaotic SCSO and pattern search, named CSCPS31. The chaotic sequence was used to increase the SCSO approach's exploring capability while also preventing untimely convergence. Mathematical test functions were used to assess the efficiency of the new CSCPS optimizer, and CSCPS had overall better performance. Wu et al. introduced a modified version of SCSO (MSCSO). In the MSCSO algorithm, sand cats' positions are updated by wandering techniques32: the triangular walking (TW) technique for searching and the Levy flight walking (LFW) technique for attacking prey. Sand cats employ a Roulette Wheel selection algorithm to calculate their distance from their prey in order to determine the best trajectory before updating their position in accordance with the trigonometric function calculation theory. The MSCSO was evaluated using the CEC2014 functions and 23 additional functions, and it showed superior exploration capacity. The technical applicability of the suggested strategy was finally proven by applying it successfully to seven engineering problems. Jovanovic et al. suggested a novel SCSO technique to improve the efficacy of the extreme learning machine (ELM) classifier33. The concept of “exhausted solutions”, derived from the popular Artificial Bee Colony Algorithm, is included in the algorithm. The suggested approach was verified on two distinguished datasets, and the improvements in performance are shown by contrasting the outcomes with those of other optimizers that operate in a comparable manner. Lu et al. developed an Improved SCSO (ISCSO)34. They employed logistic mapping to initialize the population and obtained a more evenly distributed population, which enhanced the algorithm's convergence and optimization precision. In order to solve the SCSO algorithm's constraint and poor accuracy when addressing complex multivariate functions with numerous peaks, a water wave dynamic evolution component was incorporated. The utilization of water wave dynamics lessened the blindness of individuals trailing one another. Finally, a weighted adaptive mechanism was adopted to smoothen the switch between global search and local exploitation. The ISCSO performed better overall when compared to other traditional algorithms in tests, and it required only a few iterations to converge to a comparable precision.

Original SCSO

The SCSO, introduced in 2022 by Seyyedabbasi and Kiani, is a MA that takes inspiration from the hunting patterns and biological characteristics of sand cats27. These felines require 10% more food than domestic cats and have developed unique hunting mechanisms to satisfy their needs. With their exceptional hearing capabilities, they can perceive low-frequency sounds and detect prey movements underground. Additionally, they possess remarkable endurance, allowing them to cover long distances without rest. Drawing from these traits, SCSO imitates the two distinct phases of sand cats' hunting process: foraging and catching the prey. SCSO represents the problem variables by the attributes of sand cats, which are structured as vectors. In the problem space, a single sand cat is modeled as a \(1 \times \text{dim}\) array that encodes the search space, where \(\text{dim}\) denotes the dimension. Notably, each variable value \((x_{1}, x_{2}, \dots, x_{\text{dim}})\) is denoted by a floating-point number that falls within the specified lower and upper bounds. To initialize the SCSO algorithm, a candidate matrix of size \(N \times \text{dim}\) is constructed by assembling a population of sand cats, where \(N\) denotes the number of cats, in proportion to the dimensions of the problem.
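As a concrete illustration, the minimal Python sketch below (our own rendering, not the authors' implementation; the function name and the uniform sampling are assumptions) builds the \(N \times \text{dim}\) candidate matrix:

```python
import numpy as np

def initialize_population(N, dim, lb, ub, rng=None):
    """Build the N x dim candidate matrix: each row is one sand cat whose
    variables are sampled uniformly within the bounds [lb, ub]."""
    rng = rng or np.random.default_rng()
    return lb + rng.random((N, dim)) * (ub - lb)

# Example: 30 sand cats in a 10-dimensional search space bounded by [-100, 100].
population = initialize_population(30, 10, -100.0, 100.0)
```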

Furthermore, the SCSO algorithm assesses the fitness cost of every sand cat using a designated fitness function that corresponds to the problem characteristics. The optimization process aims to identify the optimal values of the parameters (variables) via this function, which returns a corresponding solution for every single sand cat. Finally, the sand cat having the best fitness cost up to that point is selected as the best solution, and the remaining sand cats adjust their positions accordingly in the following iteration. This mechanism imitates the behavior of sand cats, who tend to follow the most successful hunter in their group. Notably, the SCSO algorithm avoids excessive memory usage by only storing the best solution of every iteration, which can be thought of as the sand cat nearest to the prey. This iterative process is repeated until the desired level of optimization is achieved. The search technique of the SCSO algorithm was modeled after the hunting method of sand cats, based on low-frequency noise emission. The algorithm expresses a single sand cat's solution as \(X_{i} = (x_{i1}, x_{i2}, \dots, x_{i\text{dim}})\) and leverages sand cats' low-frequency hearing ability to set every cat's sensitivity range. To aid the cats in approaching their goal without losing or passing it, the value of the general sensitivity range (denoted \(\overrightarrow{r_{G}}\)) declines linearly from 2 to 0 kHz as the iterations progress, according to Eq. (1). The \(S_{M}\) value, which represents the hearing qualities of sand cats, is initially set to 2, but it can be adapted to the problem being solved to decide how quickly the agents will act. This demonstrates the algorithm's adaptability and flexibility. The vector \(\overrightarrow{R}\), derived in Eq. (2), is another important parameter for regulating the switch from exploration to exploitation. This adaptive approach optimizes the algorithm's performance by ensuring a smooth transition between the two stages.

$$\overrightarrow{r_{G}}=S_{M}-\left(\frac{2\times S_{M}\times \text{it}_{c}}{\text{it}_{\text{Max}}+\text{it}_{\text{Max}}}\right)$$
(1)
$$\overrightarrow{R}=2\times \overrightarrow{r_{G}}\times rand(0,1)-\overrightarrow{r_{G}}$$
(2)

where \(\text{it}_{c}\) and \(\text{it}_{\text{Max}}\) denote the current and the maximum iterations, respectively. The search space is initialized at random within the specified borders. A unique sensitivity range (\(\overrightarrow{r}\)) is assigned to every sand cat in order to escape the local optimum, as seen in Eq. (3).

$$\overrightarrow{r}=\overrightarrow{r_{G}}\times rand(0,1)$$
(3)

The position of each sand cat is updated based on its present position (\(\overrightarrow{P_{c}}\)), its sensitivity range (\(\overrightarrow{r}\)), and the best candidate position (\(\overrightarrow{P_{bc}}\)), as shown in Eq. (4).

$$\overrightarrow{P}\left(t+1\right)=\overrightarrow{r}\times \left(\overrightarrow{P_{bc}}\left(t\right)-rand\left(0,1\right)\times \overrightarrow{P_{c}}\left(t\right)\right)$$
(4)

The distance (\(\overrightarrow{P_{rnd}}\)) from the current location \(\overrightarrow{P_{c}}\) to the best candidate position \(\overrightarrow{P_{bc}}\) of each sand cat is determined by applying Eq. (5). An arbitrary angle \(\alpha\) is chosen using the Roulette Wheel selection algorithm to determine the trajectory's orientation. The angle ranges from 0° to 360°, so its cosine takes a value between −1 and 1. This allows every individual to move across the search space in a distinct circular pattern. \(\alpha\) is then utilized in Eq. (6) to change the position of each sand cat, guiding it toward the prey, which is thereby caught.

$$\overrightarrow{P_{rnd}}=\left|rand\left(0,1\right)\times \overrightarrow{P_{bc}}\left(t\right)-\overrightarrow{P_{c}}\left(t\right)\right|$$
(5)
$$\overrightarrow{P}\left(t+1\right)=\overrightarrow{P_{bc}}\left(t\right)-\overrightarrow{r}\times \overrightarrow{P_{rnd}}\times \cos(\alpha)$$
(6)

The SCSO algorithm uses the adaptive parameters \(\overrightarrow{r_{G}}\) and \(\overrightarrow{R}\) to manage the tradeoff between local and global search. To achieve this, \(\overrightarrow{r_{G}}\) declines linearly and progressively from 2 to 0 over the iterations. Meanwhile, the parameter \(\overrightarrow{R}\) is generated arbitrarily from the interval [−4, 4]. If \(|R|\) is less than or equal to 1, the individual cat can catch the prey; if not, the search continues, as given in Eq. (7).

$$\overrightarrow{P}\left(t+1\right)=\begin{cases}\overrightarrow{P_{bc}}(t)-\overrightarrow{P_{rnd}}\times \cos(\alpha)\times \overrightarrow{r} & |R|\le 1;\ \text{exploitation}\\ \overrightarrow{r}\cdot \left(\overrightarrow{P_{bc}}(t)-rand(0,1)\times \overrightarrow{P_{c}}(t)\right) & |R|>1;\ \text{exploration}\end{cases}$$
(7)
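To make the two-phase mechanism concrete, the following minimal Python sketch gives our reading of Eqs. (1)–(7) for one iteration; it is not the authors' code, and for simplicity \(\alpha\) is drawn uniformly rather than by Roulette Wheel selection:

```python
import numpy as np

def scso_step(positions, best, it_c, it_max, lb, ub, S_M=2.0, rng=None):
    """One SCSO iteration over an N x dim population; `best` is the best cat so far."""
    rng = rng or np.random.default_rng()
    N, dim = positions.shape
    r_G = S_M - (2.0 * S_M * it_c) / (it_max + it_max)         # Eq. (1): decays from 2 to 0
    for i in range(N):
        R = 2.0 * r_G * rng.random() - r_G                     # Eq. (2)
        r = r_G * rng.random()                                 # Eq. (3): per-cat sensitivity
        if abs(R) <= 1:                                        # exploitation: attack the prey
            alpha = np.deg2rad(rng.uniform(0.0, 360.0))        # simplification of Roulette Wheel
            P_rnd = np.abs(rng.random(dim) * best - positions[i])       # Eq. (5)
            positions[i] = best - r * P_rnd * np.cos(alpha)             # Eq. (6)
        else:                                                  # exploration: search for prey
            positions[i] = r * (best - rng.random(dim) * positions[i])  # Eq. (4)
        positions[i] = np.clip(positions[i], lb, ub)           # keep cats inside the bounds
    return positions
```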

Proposed DGS-SCSO

Dynamic Pinhole Imaging strategy (DPI)

The overall performance of population-based metaheuristic algorithms is significantly influenced by their initialization phase35. A poor initialization may cause the algorithm to explore unpromising regions, subjecting it to local solutions. On the other hand, efficient population initialization can considerably increase precision and algorithm convergence speed. When the starting collection of solutions is located close to the best solution, there is a greater chance of locating the global optimum with a smaller search effort. Opposition-based Learning (OBL) is a technique that draws its inspiration from the opposite relationship between real-world entities36,37. The concept was first introduced in 2005, and it has attracted significant research interest. OBL has been successfully applied to enhance algorithms' population initialization. The fundamental idea behind OBL is to jointly explore an arbitrary direction and its mirror image while seeking an unknown global optimum. The likelihood of either of two individuals being closer to the optimal solution is 50% if they are positioned at opposite locations from each other. As a result, only a few operations are needed to create a population of greater quality. This technique is analogous to the pinhole imaging theory in optics. Pinhole imaging is more precise than standard opposition-based learning and can generate a wider range of opposing points38. A theoretical representation of pinhole imaging is shown in Fig. 2. Applying the model in Fig. 2 to the population's search space yields Eq. (8):

$$\frac{\text{Xbest}_{i,j}-\left(Ub_{i,j}+Lb_{i,j}\right)/2}{\left(Ub_{i,j}+Lb_{i,j}\right)/2-X_{i,j}}=\frac{L_{p}}{L_{-p}}$$
(8)

where the location of the best search agent is denoted as \(\text{Xbest}_{i,j}\), while the opposite point is represented by \(X_{i,j}\). The i-th agent in the j-th dimension has lower and upper bounds denoted as \(Lb_{i,j}\) and \(Ub_{i,j}\), respectively. Furthermore, \(L_{p}\) stands for the size of the candle at the best location and \(L_{-p}\) for the size of the one at the opposite location. Although the candle's location matches that of the search agent, the search agent is a point and has no effective length. As a result, \(K\) can be assigned as a variable to represent the ratio of the two candles, which leads to Eq. (9).

Figure 2

Dynamic pinhole imaging strategy.

$$X_{i,j}=\frac{(K+1)\left(Ub_{i,j}+Lb_{i,j}\right)-2\,\text{Xbest}_{i,j}}{2K}$$
(9)

By analyzing Eq. (9), it is apparent that when both candles have equal lengths (\(K=1\)), the strategy reduces to a simple opposition-based learning approach. Effectively modifying the value of \(K\) alters the location of the opposing point, which leads to greater search opportunities for the individuals.
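A minimal Python sketch of the opposite-point generation of Eq. (9) might look as follows (our own rendering; the dynamic range chosen for \(K\) is a hypothetical placeholder, since the excerpt does not fix its schedule):

```python
import numpy as np

def dpi_opposite(x_best, lb, ub, K=None, rng=None):
    """Generate a point opposite x_best via dynamic pinhole imaging (Eq. 9).
    With K = 1 the formula reduces to plain opposition-based learning."""
    rng = rng or np.random.default_rng()
    if K is None:
        K = 1.0 + 9.0 * rng.random()   # hypothetical dynamic ratio in (1, 10]
    return ((K + 1.0) * (ub + lb) - 2.0 * x_best) / (2.0 * K)

# Check the OBL special case: with K = 1 the opposite of x is (lb + ub) - x.
print(dpi_opposite(np.array([3.0]), lb=0.0, ub=10.0, K=1.0))   # -> [7.]
```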

Golden Sine algorithm (Gold-SA)

The Gold-SA is based on the sine function in mathematics, and it also utilizes the golden ratio to seek a superior answer in the problem space. The sine function's range is within −1 to 1, and it has a period of 2π. As \(x_{1}\) changes, its associated variable \(y_{1}\) changes too. Through the golden ratio, the problem domain can be continually reduced, and the algorithm can focus on areas where the likelihood of producing the globally acceptable answer is higher, resulting in faster convergence, as formalized in Eq. (10).

$$X_{i,j}(t+1)=X_{i,j}(t)\times \left|\sin\left(p_{1}\right)\right|-p_{2}\times \sin\left(p_{1}\right)\times \left|d_{1}\times X_{\text{best},j}(t)-d_{2}\times X_{i,j}(t)\right|$$
(10)

The formula involves two arbitrary values \(p_{1}\in[0, 2\pi]\) and \(p_{2}\in[0, \pi]\); \(X_{i,j}\) denotes the current individual, \(X_{\text{best},j}\) denotes the best individual, and the two coefficient factors \(d_{1}\) and \(d_{2}\) are determined by Eqs. (11) and (12):

$${d}_{1}=a\times \tau +b\times \left(1-\tau \right)$$
(11)
$${d}_{2}=a\times (1-\tau )+b\times \tau $$
(12)

where \(a\) and \(b\) are initialized to −π and π, respectively. The golden ratio is \(\tau =(\sqrt{5}-1)/2\).
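A minimal Python sketch of the golden sine update (Eqs. (10)–(12)) under these initial values of \(a\) and \(b\) might be (our rendering; in the full Gold-SA, \(a\) and \(b\) are further narrowed by golden-section search over the iterations, which this sketch omits):

```python
import numpy as np

def gold_sa_update(x, x_best, rng=None):
    """Move one individual x toward x_best using the golden sine operator (Eq. 10)."""
    rng = rng or np.random.default_rng()
    tau = (np.sqrt(5.0) - 1.0) / 2.0            # golden ratio, ~0.618
    a, b = -np.pi, np.pi                        # initial interval
    d1 = a * tau + b * (1.0 - tau)              # Eq. (11)
    d2 = a * (1.0 - tau) + b * tau              # Eq. (12)
    p1 = rng.uniform(0.0, 2.0 * np.pi)          # random angle in [0, 2*pi]
    p2 = rng.uniform(0.0, np.pi)                # random scale in [0, pi]
    return (x * np.abs(np.sin(p1))
            - p2 * np.sin(p1) * np.abs(d1 * x_best - d2 * x))
```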

Implementation of proposed DGS-SCSO

The original SCSO algorithm tends to converge too quickly to local optima, which can limit its capacity to locate the global optimum. Additionally, the algorithm's convergence speed may be slow, which could also hinder its effectiveness. To address these issues, two modifications have been proposed: DPI and Gold-SA. DPI is intended to expand the optimizer's global search capacity so that it can escape the trap of local optima. Gold-SA, on the other hand, is designed to enhance the algorithm's local search ability, enabling it to quickly find optimal solutions in the search area. By incorporating these modifications into the original SCSO algorithm, the algorithm's performance is expected to improve significantly. Specifically, the modifications should help to strike a better transition from exploration to exploitation, increasing the population's diversity and making it more likely that the algorithm will converge to the global optimum. A simplified sketch of how the components fit together is given below; the pseudo-code for DGS-SCSO is provided in Algorithm 1, and the flow chart of DGS-SCSO is given in Fig. 3.
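The sketch below, reusing the helper sketches defined earlier, is our hedged reading of this flow; the order in which DPI, the SCSO step, and Gold-SA are applied here is an assumption, and the authoritative sequence is the one in Algorithm 1:

```python
import numpy as np

def dgs_scso(fitness, N, dim, lb, ub, T, rng=None):
    """Hedged sketch of the DGS-SCSO loop: DPI diversifies around the best cat,
    SCSO performs the core search, and Gold-SA refines the best solution."""
    rng = rng or np.random.default_rng()
    pop = initialize_population(N, dim, lb, ub, rng)
    fit = np.array([fitness(p) for p in pop])
    best, best_fit = pop[np.argmin(fit)].copy(), fit.min()
    for t in range(T):
        # DPI: propose an opposite point of the best cat; keep it if it beats the worst cat.
        opp = np.clip(dpi_opposite(best, lb, ub, rng=rng), lb, ub)
        worst = np.argmax(fit)
        if fitness(opp) < fit[worst]:
            pop[worst], fit[worst] = opp, fitness(opp)
        # Core SCSO exploration/exploitation step.
        pop = scso_step(pop, best, t, T, lb, ub, rng=rng)
        fit = np.array([fitness(p) for p in pop])
        if fit.min() < best_fit:
            best, best_fit = pop[np.argmin(fit)].copy(), fit.min()
        # Gold-SA: try to refine the best sand cat toward the optimum.
        cand = np.clip(gold_sa_update(best, best, rng), lb, ub)
        if fitness(cand) < best_fit:
            best, best_fit = cand, fitness(cand)
    return best, best_fit

# Usage: minimize the sphere function in 10 dimensions.
sol, val = dgs_scso(lambda x: np.sum(x**2), N=30, dim=10, lb=-100.0, ub=100.0, T=200)
```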

Algorithm 1

DGS-SCSO Algorithm Pseudo-Code.

Figure 3

Flowchart of DGS-SCSO.

Analysis of complexity

The initialization phase has a computational cost of \({\text{O}}(N\times D)\), where \(N\) denotes the population size and \(D\) is the dimension size. During this phase, SCSO generates the sand cats at random throughout the problem space. Following that, DGS-SCSO assesses each individual's fitness over the course of the entire run with a complexity of \({\text{O}}(T\times N\times D)\), with \(T\) denoting the number of iterations. Finally, to reach the best solution, we employed Gold-SA and DPI. Therefore, these phases' combined computational complexity is \({\text{O}}(3\times T\times N\times D)\), and after dropping the constant factor, the DGS-SCSO's overall computational complexity is \({\text{O}}(T\times N\times D)\).

Experiments and discussion

We assess the effectiveness of the suggested DGS-SCSO method by subjecting it to 20 commonly used benchmark functions and the 10-function CEC 2019 competition test suite. Additionally, the effectiveness of the method is assessed by using it to solve two engineering problems. The experimental setup and benchmark function properties are elucidated in detail in the following section, followed by a comprehensive analysis and commentary on the statistical findings of the 30 benchmark functions. Finally, the benefits of utilizing DGS-SCSO are demonstrated through its application to the aforementioned engineering design problems.

Function definition

The study employed a total of 30 test functions, including 10 CEC 2019 test functions and 20 widely used benchmark functions. Based on their properties, the 20 classical functions were separated into three categories. The functions F1 through F7 are useful for assessing the exploitation ability of algorithms because they are unimodal: each possesses a single global optimum and lacks any local optima. The functions F8 through F13 are beneficial for assessing algorithms' exploration and local minimum avoidance capabilities. The fixed-dimension multimodal functions F14 through F20 have various low-dimensional local optima, and they are used to assess the stability of algorithms and their capability to avoid local optima.

The study employed 10 functions (F21–F30) from the CEC 2019 benchmark suite in addition to the traditional functions. These functions have been shifted and rotated, adding complexity beyond the conventional functions. The specifics of each function are supplied in Tables 1 and 2, and the optimal fitness of each function is marked by fmin. The primary goal of this section is to assess the DGS-SCSO algorithm's search capability on a variety of complicated functions with various properties.

Table 1 Specifics of the 20 Classic Functions.
Table 2 Specifics of the 10 CEC 2019 functions.

Experimental setup

Thirty different test functions were used to assess how well the DGS-SCSO optimization algorithm performed. To confirm the accuracy of the outcomes, the proposed algorithm was contrasted against several other algorithms, including SCSO27, the Artificial Electric Field Algorithm (AEFA)39, the Honey Badger Algorithm (HBA)40, the Hybrid Butterfly Optimization Algorithm with Particle Swarm Optimization (HPSOBOA)41, the Quadratic interpolation Salp Swarm-Based local escape operator (QSSALEO)42, the Time-Based Leadership Salp-Based Algorithm with Competitive Learning (TBLSBCL)43, and Transient Search Optimization (TSO)44. We set the maximum iteration count to 1000, the population size to 30, and the dimension size as mentioned in Tables 1 and 2. Additionally, we conducted 30 independent runs for the experimental setup. The best results are indicated in bold. Table 3 presents the specific parameter settings for the algorithms used in the experiment.

Table 3 Parameter settings.

Statistical result analysis

The DGS-SCSO algorithm exhibits noteworthy results when compared to other metaheuristic algorithms across Tables 4, 5, and 6. In Table 4, the dimension of each function remained as detailed in Tables 1 and 2; in Tables 5 and 6, dimensions are set to 50 and 100, which increases the complexity of the test suite functions. In Table 4, DGS-SCSO achieves superior average values (AVG) and remarkable stability with smaller standard deviations (STD) on various functions, indicating consistent and robust performance. On F1, F3, and F5 of the unimodal functions, DGS-SCSO obtained the theoretically optimal solution, in contrast to SCSO, QSSALEO, and TSO, which obtained near-ideal solutions. On F2, F4, and F7, DGS-SCSO outperformed HBA and TBLSBCL, obtaining the best solution. For the multimodal functions, DGS-SCSO is shown to outperform AEFA, HBA, and TSO on F8, F9, and F11, indicating its superior ability to handle complex and challenging optimization problems with multiple local optima. Additionally, it performs better than SCSO on F15, F17, and F19. The results for the CEC 2019 functions show that DGS-SCSO produces better results than the compared optimizers on six of the functions (F23, F24, F25, F26, F28, and F29), suggesting its effectiveness in handling a diverse set of optimization problems.

Table 4 Result of different algorithms on 30 functions.
Table 5 Result of different algorithms on F1-F13 with dimension 50.
Table 6 Result of different algorithms on F1-F13 with dimension 100.

Furthermore, Table 4 displays the outcomes of various algorithms in tackling the 30-function test suite. It is clear that the improved DGS-SCSO algorithm has the best performance, achieving an overall efficiency (OE) of 79.66%, a metric computed from the number of losses (L) and the total number of functions (NF): L is subtracted from NF, and the result of the subtraction is divided by NF42,45. The table presents the OE of all the optimizers, denoting the number of wins, losses, and ties as W, L, and T, respectively. In contrast to the traditional SCSO algorithm, which has an overall efficiency of 20% across all functions, DGS-SCSO improved the OE by a margin of 59.66% over SCSO. The integration of both methods into the SCSO algorithm has significantly improved the solution precision, resulting in better exploitation for unimodal functions, better exploration for multimodal functions, and a better tradeoff between the two in the complex CEC 2019 functions. Additionally, in Tables 5 and 6, with increased dimension, DGS-SCSO maintains competitive AVG, low STD, and high overall efficiency (OE) across functions F1–F13 at dimensions 50 and 100, respectively, outperforming or performing comparably to other algorithms such as HBA, HPSOBOA, QSSALEO, TBLSBCL, and TSO on functions F1–F5, F7, and F8. The algorithm's ability to consistently achieve low AVG, low STD, and strong OE underscores its effectiveness, scalability, and reliability in addressing optimization challenges across diverse scenarios and dimensionalities. The performance of each of the compared algorithms and DGS-SCSO on the scaled functions (F1–F13) from Tables 4, 5, and 6 is illustrated in Fig. 4 to visualize the consistency of each optimizer as complexity increases. As seen from the illustration, DGS-SCSO, QSSALEO, HBA, and TBLSBCL show relative consistency in their performance as the dimension increases, which demonstrates the robustness of DGS-SCSO.
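Written out as a formula (the counts below are purely illustrative, not the actual tallies from Table 4):

$$\text{OE}=\frac{NF-L}{NF}\times 100\%,\qquad \text{e.g.}\ NF=30,\ L=6\ \Rightarrow\ \text{OE}=\frac{30-6}{30}\times 100\%=80\%$$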

Figure 4

Result of different algorithms on 30 functions.

Nonparametric test analysis

The Wilcoxon Rank Test (WRT) is useful for analyzing data with complex distributions. Tables 4, 5, and 6 offer statistics on the average value and standard deviation of all the optimizers, but they do not allow for comparison between multiple algorithms. To verify and test the results, it is necessary to use the WRT. In Table 7, the outcomes of the DGS-SCSO algorithm and seven other algorithms are presented. These algorithms were run thirty times on thirty different benchmark functions with varying dimensions. A significance level (P-value) of 5% is used, and outcomes below this value indicate a significant difference between the two algorithms. Table 7 shows that most test results are below 5%, but a few are above, meaning no significant difference. The QSSALEO and TSO algorithms have a few results that are better than DGS-SCSO, as indicated by the “−” column. This suggests that these algorithms have good convergence on certain functions, which confirms the No Free Lunch theorem, stating that no single optimization algorithm can be applied to solve all types of optimization problems. However, it is worth noting that in the “+” column, which denotes the better performance of DGS-SCSO in comparison to the other algorithms, DGS-SCSO consistently outperforms them. The “=” column indicates equal performance. Table 8 presents another nonparametric test, the Friedman Test, which ranks the compared methods. DGS-SCSO ranked first in all test scenarios of the Friedman rank.

Table 7 Wilcoxon rank test.
Table 8 Friedman test.

Convergence curve analysis

Figure 5 depicts the average convergence profiles of various optimization algorithms across 30 independent runs using the dimensions of Tables 1 and 2. The efficacy and efficiency of an optimization algorithm can be assessed by the speed and accuracy of its convergence towards the optimal solution, as reflected in its convergence trajectory. In this regard, the DGS-SCSO algorithm performs better than the original SCSO algorithm, achieving faster convergence rates, particularly in the initial search phases. The proposed algorithm demonstrates notable improvement in convergence performance for most functions, indicating its effectiveness in enhancing the optimization process. Specifically, on the unimodal functions F1–F5 and F7, the DGS-SCSO algorithm converges far more rapidly than other algorithms in the initial iterations, achieving the best convergence precision compared to other algorithms. On multimodal functions, the DGS-SCSO algorithm maintains superior convergence speed and accuracy across most functions. Notably, on F8–F10, F15, and F20, the algorithm performs exceptionally well, reaching the proximity of the global optimum and surpassing other optimizers. The incorporation of Gold-SA enables the algorithm to rapidly track the best solution and speed up convergence in the initial search stages for unimodal functions. The DPI method facilitates the algorithm's breakout from local optima in multimodal functions, contributing to its outstanding performance. In terms of convergence accuracy on complex functions, the DGS-SCSO algorithm outperforms other algorithms. Specifically, on F23–F29, DGS-SCSO demonstrates superior performance compared to other novel optimization algorithms.

Figure 5

Convergence curve for functions F1 to F30.

Box plot analysis

Boxplot analysis shows the distributional properties of the data. The data distribution is presented as quartiles in the boxplot: the algorithm's lowest and highest values are found at the lowest and highest points of the boxplot, and the ends of the rectangle mark the lower and upper quartiles. In this section of the study, the boxplot behaviour was used to demonstrate the distribution of the values obtained by each algorithm. The benchmark functions were run independently 30 times for each sample using the dimensions in Tables 1 and 2. From Fig. 6, it can be concluded that DGS-SCSO demonstrated better stability for most benchmark functions and outperformed the other algorithms. This indicates that DGS-SCSO is a more reliable and consistent algorithm for finding the global optimum. The boxplot for the proposed DGS-SCSO method was narrow in most cases for F1 to F20 and comparable to other algorithms. This indicates that the DGS-SCSO method performs well for less complicated functions and maintains performance where the global optimum is easier to find. DGS-SCSO had lower values than all other algorithms on much more complex functions like F23, F24, F26, F28, and F29. This suggests that DGS-SCSO is also able to maintain stability and handle more complex functions well, where finding the global optimum is more challenging. Overall, DGS-SCSO shows an advantage in stability and robustness when taking into account the length of the box and its median, the thin line inside the box. The addition of the two enhancement techniques led to greater harmony between the exploitation and exploration capacities, making the algorithm more efficient as a whole.

Figure 6

Boxplot plots on benchmark functions F1 to F30.

Exploration and exploitation analysis

Too much exploration can lead to inefficient search and slow convergence, while too much exploitation can result in early convergence to local optima and a failure to discover better solutions. In this subsection, we observe the exploitation and exploration capability of the proposed method as proposed by Kashif et al46.

$${Div}_{j}=\frac{1}{n}\sum_{i=1}^{n}\left|{\text{median}}\left({x}^{j}\right)-{x}_{i}^{j}\right|$$
(13)
$$Div=\frac{1}{D}\sum_{j=1}^{D} {Div}_{j}$$
(14)

Equation (13) measures the population's diversity in dimension \(j\). To compute the diversity of a single dimension \(j\), we first find the median value, denoted as \({\text{median}}\left({x}^{j}\right)\), of that dimension across all \(n\) individuals in the swarm. Subsequently, we compute the distance of every individual \(i\)'s value for that dimension from the median value, and take the average of these distances across all individuals, giving the diversity \({{\text{Div}}}_{j}\) for that dimension. To compute the overall diversity \(Div\) of the swarm, we repeat this process for each dimension \(j\) and then take the average of the diversities \({{\text{Div}}}_{j}\) across all dimensions, as in Eq. (14). The purpose of this calculation is to measure how diverse the individuals in the swarm are in terms of their dimensional values. If all individuals have very similar values for all dimensions, the diversity will be low. If there is a lot of variation in the values across dimensions and individuals, the diversity will be high. Equations (15) and (16) determine the exploration and exploitation percentages in an iteration:

$$\mathrm{Exploration\%}=\frac{Div}{Di{v}_{max}}\times 100$$
(15)
$$\mathrm{Exploitation\%}=\frac{\left|Div-Di{v}_{max}\right|}{Di{v}_{max}}\times 100$$
(16)

where, \(Div\) is the diversity of the swarm in the current iteration, \(Di{v}_{max}\) is the maximum diversity among all iterations, \(\mathrm{Exploration\%}\) is the percentage of exploration in the current iteration, and \(\mathrm{Exploitation\%}\) is the percentage of exploitation in the current iteration.
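This measurement translates directly into code; a minimal Python sketch of Eqs. (13)–(16) follows (our rendering; the function name and the population-snapshot input format are assumptions):

```python
import numpy as np

def exploration_exploitation(history):
    """history: list of N x D population snapshots, one per iteration.
    Returns per-iteration Exploration% and Exploitation% (Eqs. 13-16)."""
    divs = []
    for pop in history:
        med = np.median(pop, axis=0)                    # median of each dimension j
        div_j = np.mean(np.abs(med - pop), axis=0)      # Eq. (13): mean distance to median
        divs.append(div_j.mean())                       # Eq. (14): average over dimensions
    divs = np.asarray(divs)
    div_max = divs.max()                                # maximum diversity over all iterations
    exploration = divs / div_max * 100.0                # Eq. (15)
    exploitation = np.abs(divs - div_max) / div_max * 100.0   # Eq. (16)
    return exploration, exploitation
```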

In Fig. 7, the unimodal functions F1, F4, and F5 are used to depict how well the optimizer exploits, while the multimodal functions F10, F11, and F12 depict how well the optimizer explores the search area. It can be observed that the method begins with wide exploration and narrow exploitation on the functions examined. An appropriate balance between the two optimization processes emerges as the iteration process progresses.

Figure 7

Exploitation and exploration plot of DGS-SCSO.

Application of engineering problem

In this section, DGS-SCSO is compared to seven other algorithms on popular engineering problems; the experimental settings are the same as in the previous experiments.

Tension/compression spring design problem (TCSD)

The TCSD problem evaluated in this subsection is a continuous constrained problem that minimizes the weight of a tension/compression spring, as illustrated in Fig. 8. It includes three parameters, the number of active coils (N), the mean coil diameter (D), and the wire diameter (d), and three constraining factors: minimum deflection, shear stress, and surge frequency. We applied the DGS-SCSO algorithm and other metaheuristic algorithms to solve the TCSD problem. The results provided in Table 9 show that the DGS-SCSO algorithm outperformed the other algorithms in determining the optimum cost47.

Figure 8

Tension/compression spring design problem parameters.

Table 9 Results of tension/compression spring design problem.

Considering the vector \(\overrightarrow{x}=\left[{x}_{1}\ {x}_{2}\ {x}_{3}\right]=\left[d\ D\ N\right]\),

We aim to minimize

$$f(\overrightarrow{x})=\left({x}_{3}+2\right){x}_{2}{x}_{1}^{2}$$
(17)

Constrained by

$${g}_{1}(\overrightarrow{x})=1-\frac{{x}_{2}^{3}{x}_{3}}{71785{x}_{1}^{4}}\le 0$$
(18)
$${g}_{2}(\overrightarrow{x})=\frac{4{x}_{2}^{2}-{x}_{1}{x}_{2}}{12566\left({x}_{2}{x}_{1}^{3}-{x}_{1}^{4}\right)}+\frac{1}{5108{x}_{1}^{2}}-1\le 0$$
(19)
$${g}_{3}(\overrightarrow{x})=1-\frac{140.45{x}_{1}}{{x}_{2}^{2}{x}_{3}}\le 0$$
(20)
$${g}_{4}(\overrightarrow{x})=\frac{{x}_{1}+{x}_{2}}{1.5}-1\le 0$$
(21)

Possible boundaries of vector \(\overrightarrow{x}\):

$$0.05\le {x}_{1}\le 2.00$$
$$0.25\le {x}_{2}\le 1.30$$
$$2.00\le {x}_{3}\le 15.0$$
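For reference, a minimal Python encoding of this problem follows (our sketch; the static penalty used to fold the constraints into the objective is our assumption, not part of the paper):

```python
import numpy as np

def tcsd_fitness(x):
    """TCSD weight (Eq. 17) plus a static penalty over constraints g1-g4 (Eqs. 18-21)."""
    d, D, N = x                                   # wire diameter, coil diameter, active coils
    weight = (N + 2.0) * D * d**2
    g = [
        1.0 - (D**3 * N) / (71785.0 * d**4),
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4)) + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ]
    return weight + 1e6 * sum(max(0.0, gi)**2 for gi in g)   # penalize violations

# A comfortably feasible (non-optimal) design: weight = 0.0216.
print(tcsd_fitness(np.array([0.06, 0.5, 10.0])))
```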

Three-bar truss design

The objective of the three-bar truss design optimization problem is to minimize the weight of the design illustrated in Fig. 9. The problem involves two optimization parameters (\({x}_{1}\), \({x}_{2}\)) and three constraining factors: buckling, deflection, and stress. The mathematical expression of the three-bar truss design problem is presented below48,49:

Figure 9

Three-bar truss design parameters.

$$Minimize:f({x}_{1},{x}_{2})=l\times \left(2\sqrt{2}{x}_{1}+{x}_{2}\right)$$
(22)

Constraining factors:

$${G}_{1}=\frac{\sqrt{2}{x}_{1}+{x}_{2}}{\sqrt{2}{x}_{1}^{2}+2{x}_{1}{x}_{2}}P-\sigma \le 0$$
(23)
$${G}_{2}=\frac{{x}_{2}}{\sqrt{2}{x}_{1}^{2}+2{x}_{1}{x}_{2}}P-\sigma \le 0$$
(24)
$${G}_{3}=\frac{1}{\sqrt{2}{x}_{2}+{x}_{1}}P-\sigma \le 0$$
(25)

where \(l=100\ \text{cm}\); \(P=2\ \text{kN/cm}^{2}\); \(\sigma =2\ \text{kN/cm}^{2}\).

Interval: \(0\le {x}_{1},{x}_{2}\le 1\)
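As with the TCSD problem, this formulation maps directly to a penalized objective; a minimal Python sketch (the penalty scheme again being our assumption) is:

```python
import numpy as np

def truss_fitness(x, l=100.0, P=2.0, sigma=2.0):
    """Three-bar truss weight (Eq. 22) plus a static penalty over G1-G3 (Eqs. 23-25)."""
    x1, x2 = x
    weight = l * (2.0 * np.sqrt(2.0) * x1 + x2)
    area = np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2
    g = [
        (np.sqrt(2.0) * x1 + x2) / area * P - sigma,    # G1: stress constraint, bar 1
        x2 / area * P - sigma,                          # G2: stress constraint, bar 2
        1.0 / (np.sqrt(2.0) * x2 + x1) * P - sigma,     # G3: stress constraint, bar 3
    ]
    return weight + 1e6 * sum(max(0.0, gi)**2 for gi in g)

# The known near-optimal design x1 ~ 0.7887, x2 ~ 0.4082 gives weight ~ 263.9.
print(truss_fitness(np.array([0.7887, 0.4082])))
```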

As seen in Table 10, DGS-SCSO obtained the best outcome for the optimal cost.

Table 10 Results of three bar truss design.

Conclusion

In conclusion, this paper introduces DGS-SCSO, a novel optimization algorithm that builds upon Sand Cat Swarm Optimization (SCSO) with the incorporation of Dynamic Pinhole Imaging (DPI) and the Golden Sine Algorithm (Gold-SA). DPI improves global search capabilities and helps to avoid local optima, while Gold-SA addresses the drawbacks of SCSO, including early convergence and stagnation, thereby enhancing exploitation. The effectiveness of DGS-SCSO was assessed using 20 classic test functions and 10 CEC 2019 competition test functions, and the algorithm demonstrated superior optimization accuracy, convergence speed, robustness, and statistical significance when compared to other competitors. Furthermore, DGS-SCSO was evaluated on two real-world engineering design problems and significantly outperformed its peers. However, DGS-SCSO's time consumption is a potential concern due to its use of DPI and fitness evaluation to detect the best solutions, followed by the application of Gold-SA to improve the best solution. Future research will concentrate on reducing the computational time of DGS-SCSO while maintaining its performance, as well as exploring its applications to combinatorial optimization problems and coupling it with other optimizers to enhance its performance further. In addition to the aforementioned future directions, an online web server and an importable library will be developed to enhance the accessibility and usability of DGS-SCSO. Furthermore, our future efforts will focus on improving and advancing a constrained version of the DGS-SCSO algorithm, equipping it with enhanced techniques tailored for handling both equality and inequality constraints. These endeavours aim to strengthen the algorithm's applicability and performance across a broader range of real-world optimization problems.