Introduction

The rapid development of industry and the evolving technological landscape have given rise to a wide assortment of engineering applications, and the objective of optimization is to solve these technical challenges efficiently within a designated period. Each engineering application calls for its own solution, and no single optimization technique can effectively tackle all optimization challenges. Hence, this proliferation of applications has introduced additional challenges to the field of optimization. Historically, optimization methods have been widely employed across a diverse range of applications, such as path planning1,2, location problems3, production shop scheduling4,5, power generation prediction6 and many other issues. In the present era, as engineering problems become more intricate and challenging to model, it is crucial to prioritize the development of superior optimization algorithms. Hence, the development of optimization algorithms is far from complete.

Scholars have developed a range of optimization algorithms, drawing inspiration from biology, nature, and society. These optimization techniques have been refined and extensively employed to address diverse and intricate engineering challenges. Optimization algorithms can be divided into four categories: swarm-based, evolutionary, physics-based, and human-based. Swarm-based optimization algorithms aim to replicate the social behavior observed in populations. Examples that replicate the predatory behavior of creatures include the whale optimization algorithm (WOA)7, the Harris hawks optimization algorithm (HHO)8, and the chimpanzee optimization algorithm (COA)9. Additionally, the sooty tern optimization algorithm (STOA)10 emulates the migratory and aggressive life patterns observed in sooty tern groups. Evolutionary optimization techniques are inspired by Darwinian evolution; algorithms in this category include the backtracking search optimization algorithm (BSA)11 and the differential evolution algorithm (DE)12. Physics-based optimization techniques are derived from fundamental principles of physics. The simulated annealing algorithm (SA)13 is based on the principle of annealing, the gravitational search algorithm (GSA)14 draws inspiration from the law of gravitation, and the black hole algorithm (BHBO)15 is derived from the inherent characteristics of black holes. Human-based optimization algorithms are derived from the study of human behavior. One instance is the brainstorming optimization method (MSA)16, which models human brainstorming behavior to address optimization problems; another, Human Learning Optimization (HLO)17, originated from a simplified model of human learning. Such optimization algorithms possess a notable capacity to effectively address engineering problems.

As the variety of proposed optimization methods has grown, numerous researchers have enhanced these algorithms. An improved self-adaptive symbiotic organisms search (SaISOS) with adaptive beneficial factors was proposed by Nama et al.18, who incorporated a three-way mutualism phase into the model, along with a random-weighted reflection coefficient and a novel control operator. In a sequence of tests, the enhanced algorithm demonstrates significant superiority over its competitors. Nama et al.19 introduced a refined backtracking search technique known as GQR-BSA, which updates the coordinate structure of BSA through quasi-reflection, quantum Gaussian mutation, adaptive parameter execution, and quasi-reflection-based jumping. This permits the algorithm to escape local optima and move toward the global optimum. Nama et al.20 presented an enhanced symbiotic organisms search algorithm known as I-SOS, in which a balance between exploration and exploitation is achieved by employing adjusted benefit factors, a modified parasitism phase, and random-weight-based search strategies. The benchmark function tests demonstrate that I-SOS delivers a boost in search performance. Nama et al.21 blended the SMA algorithm with a quasi-reflection-based learning mechanism (QRBL), promoting early population diversity, improving convergence, and avoiding local optima. Luo et al.22 proposed a multi-objective equilibrium optimizer slime mould algorithm (MOEOSMA), which uses dynamic coefficients, an elite archiving mechanism, a crowding distance method and an equilibrium pool strategy to enhance the algorithm's capability; the test findings illustrate MOEOSMA's strong competitiveness. Yin et al.23 introduced a multi-objective EOSMA (MOEOSMA) that applies the equilibrium optimizer's concentration update operator and, after greedy selection of the best value, adds a random difference mutation operator. According to the corresponding results, MOEOSMA achieves a lower solution time and improved convergence accuracy.

Scholarly improvements to algorithms render them suitable for complex engineering optimization problems, not merely for simple examples such as trusses. Zhang et al.24 put forward a bald eagle search algorithm based on bionic polar coordinates (PBES), modifying the initialization and introducing new parameters and polar coordinates; tests show that the enhanced algorithm performs favorably in curve approximation. To ascertain the active earth pressure of retaining walls supporting C-backfill, Nama et al.25 developed a novel improved backtracking search optimization algorithm (IBSA) based on adaptive control parameters, and the findings demonstrated a positive impact. Chakraborty et al.26 introduced an enhanced symbiotic organisms search method called NMSOS and showed that the algorithm improved on all outcomes and could use a multistage thresholding method with varying thresholds to segment COVID-19 chest X-ray images. The original algorithm's search capability is altered by integrating the development potential of SOS with the searching potential of SQI. This method establishes the shallow strip foundation's seismic bearing capacity under pseudodynamic conditions and prepares it for numerical analysis.

In the current study, a novel swarm intelligence optimization algorithm is chosen. In 2022, Seyyedabbasi et al.27 presented the sand cat swarm optimization (SCSO) technique, whose primary idea is to imitate the sand cat's hunting habits. Because food is hard to find in the desert, sand cats spend the day underground and hunt at night. The way the sand cat hunts is also fascinating: highly sensitive to low sound frequencies, they can detect prey moving underground. SCSO exhibits notable benefits in terms of exploitation capacity and reliable performance on real-world problems. Nevertheless, its weaknesses are obvious: during the later phase of the algorithm, an individual sand cat tends to become trapped in a local optimum and fails to identify a more favorable position.

Numerous researchers have worked to strengthen the overall capability of the SCSO algorithm. Wu et al.28 provided a modified approach for tackling constrained engineering optimization problems by introducing an improved sand cat swarm optimization algorithm. To boost the mobility of sand cats and improve their global exploration ability, they implemented the modified sand cat swarm optimization algorithm (MSCSO) with a wandering strategy, and a lens-imaging-based reverse learning method is incorporated to improve the overall performance of the algorithm and expedite convergence. Lu et al.29 proposed a power transformer defect diagnosis approach that combines an improved sand cat swarm optimization algorithm with a bidirectional gated recurrent unit. The conventional sand cat swarm algorithm was enhanced through logistic chaotic mapping, a water wave dynamic factor, adaptive weighting, and a golden sine strategy; the superiority of ISCSO in terms of optimization precision and convergence speed has been demonstrated, and a fault diagnosis technique utilizing L-Isomap and ISSO-BigRU has been created. Wang et al.30 proposed the adaptive sand cat swarm optimization algorithm (COSCSO), based on nonlinear adaptive parameters, the Cauchy mutation, and an optimal neighborhood perturbation strategy. The enhanced algorithm reduces the incidence of local optima within the population, expedites convergence, improves search efficiency, and promotes population diversity. Jia et al.31 introduced a sand cat swarm optimization algorithm that incorporates quasi-opposition learning strategies and effectively converts the three-dimensional path planning problem into an objective function derived from a mathematical model. Consequently, the algorithm identifies the optimal path while respecting the security constraint, and empirical evidence demonstrates that it can effectively identify the most advantageous route in various obstacle-laden environments. Shang et al.32 suggested a reactive power optimization approach for distribution networks with energy storage and wind power, incorporating the multi-objective MOSCSO algorithm; the control variable in this research is the energy storage facility, and the simulation experiment runs using MOSCSO. Jiang et al.33 introduced the Cauchy mutation mechanism and the Gaussian mutation mechanism in the exploration and exploitation stages, respectively; they additionally created the ISCSO algorithm and used it to enhance the SDAE-based engine failure detection technique. The usefulness of the proposed strategy in improving average diagnostic accuracy while minimizing average time has been verified. The investigators mentioned above primarily concentrate on developing the sand cat swarm optimization algorithm through adaptive weights and local mutation; these tactics not only strengthen the algorithm's efficiency in various ways but also have potential applications in engineering.

Given the limitations of conventional SCSO, along with the enhanced approaches put forward by other researchers, this study introduces a novel technique called CWXSCSO, which combines elite decentralization and the crossbar strategy with sand cat swarm optimization. The paper's fundamental framework is laid out as follows: the initial portion presents the fundamental optimization process of SCSO. The subsequent section introduces an original dynamic exponential factor, implements elite decentralization during the exploitation phase, and incorporates the crossbar approach to perturb the previous optimal solution. In the third portion, a set of 15 benchmark test functions is employed to evaluate and compare the different improvement strategies. In Sect. “Comparison between CWXSCSO and different swarm intelligence algorithms”, the efficacy of the CWXSCSO approach is assessed on the CEC2019 and CEC2021 test functions, using metrics such as the optimal value, median value, standard deviation, convergence curves, and the Wilcoxon rank-sum test. Section “Engineering application” employs six classical engineering cases to evaluate the viability of the new algorithm in real-world engineering scenarios. To summarize, compared with the conventional SCSO algorithm, CWXSCSO is more effective at extricating the search from local optima and improves the convergence speed and solution accuracy of SCSO.

Sand cat swarm optimization algorithm

The sand cat swarm optimization algorithm draws inspiration from the sand cat's capacity to detect low-frequency noise. Sand cats inhabit challenging habitats characterized by sandy and stony deserts, such as the deserts of Central Asia, the Sahara in Africa, and the Arabian Peninsula. They rest during the day and hunt at night, detecting prey by sensing low-frequency sounds, whether above or below ground. When prey is subterranean, they promptly locate and excavate it. Based on these behavioral patterns, the foraging process can be delineated into two distinct phases: searching for prey and attacking prey. The SCSO algorithm models these two phases, namely exploration and exploitation, with a particular focus on maintaining a balance between them.

Initialize

In a D-dimensional optimization problem, each sand cat is a one-dimensional array that represents a candidate solution to the problem. Every variable value (\({x}_{1}\), \({x}_{2}\), …, \({x}_{n}\)) represents a decision variable. Additionally, every \(X\) must be positioned within the upper and lower bounds.

Initially, an initialization matrix of size \(\left({N}_{pop}\times {N}_{d}\right),\left(pop=1,...,n\right)\) is generated based on the problem size. In every iteration, a new candidate solution is generated; if the new output value is better, it replaces the current solution, and otherwise the candidate is not stored for the next iteration. Each sand cat's fitness value is established via the fitness function.

$${SandCat}_{i}=\left\{{SC}_{1},{SC}_{2},\dots ,{SC}_{n}\right\};1<i\le n$$
$${Act}_{i}^{j}=\left\{{X}_{11},{X}_{12},\dots ,{X}_{nm}\right\};1<i\le n;1<j\le d$$
$$Fitness=f\left(SandCat\right)=f\left({SC}_{1},{SC}_{2},\dots ,{SC}_{n}\right);\forall {x}_{i}$$
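As a concrete illustration, the initialization step above can be sketched in Python. This is a minimal sketch: the array layout and the helper names `init_population` and `evaluate` are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def init_population(n_pop, dim, lb, ub, rng=None):
    """Hypothetical SCSO initialization: each row is one sand cat,
    a candidate solution (x_1, ..., x_d) drawn uniformly within the
    lower bound lb and upper bound ub."""
    rng = np.random.default_rng(rng)
    return lb + rng.random((n_pop, dim)) * (ub - lb)

def evaluate(pop, fitness):
    """Fitness of every sand cat via the user-supplied objective."""
    return np.array([fitness(x) for x in pop])
```

For example, `init_population(50, 30, -100.0, 100.0)` produces the \(\left({N}_{pop}\times {N}_{d}\right)\) matrix with \(N_{pop}=50\) and \(N_d=30\).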

Hunt for prey

The vector \(R\) is derived from Eq. (2). The adaptive parameter \(R\) balances the transition between the exploration and exploitation phases.

The parameter \(\overrightarrow{{r}_{G}}\) defines a general sensitivity that exhibits a linear reduction from 2 to 0. Furthermore, the variable \(r\) symbolizes the sensitivity span exhibited by each cat.

The expression \({\text{ite}}{\text{r}}_{c}\) denotes the current iteration count, and \({\text{ite}}{\text{r}}_{M{\text{ax}}}\) denotes the upper limit of iterations. The value of \({S}_{M}\) is derived from the acoustic attributes of the sand cat, hence its assumed value is 2.

$$\begin{array}{c}{r}_{G}={S}_{M}-\left(\frac{{S}_{M}*{\text{ite}}{\text{r}}_{c}}{{\text{ite}}{\text{r}}_{M{\text{ax}}}}\right)\end{array}$$
(1)
$$\begin{array}{c}R=2*{r}_{G}*{\text{r}}{\text{a}}{\text{n}}{\text{d}}\left(\mathrm{0,1}\right)-{r}_{G}\end{array}$$
(2)
$$\begin{array}{c}r={r}_{G}*rand\left(\mathrm{0,1}\right)\end{array}$$
(3)
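A minimal sketch of Eqs. (1)–(3) in Python, assuming \(S_M=2\) and uniform random draws in [0, 1] (function and variable names are illustrative, not from the authors' implementation):

```python
import numpy as np

def adaptive_params(iter_c, iter_max, s_m=2.0, rng=None):
    """Compute the SCSO adaptive parameters for the current iteration."""
    rng = np.random.default_rng(rng)
    r_g = s_m - (s_m * iter_c / iter_max)       # Eq. (1): decreases linearly 2 -> 0
    big_r = 2.0 * r_g * rng.random() - r_g      # Eq. (2)
    r = r_g * rng.random()                      # Eq. (3): each cat's sensitivity
    return r_g, big_r, r
```

Per Eq. (2), the draw keeps \(|R| \le r_G\) at every iteration, which the test below checks at the two endpoints.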

Every individual sand cat adjusts its location based on its most ideal position \(\left({X}_{b}\right)\), present position \(\left({X}_{c}\right)\), and its sensitivity range \((r)\). Hence, the sand cat possesses the capability to identify alternative ideal prey areas, as determined by Eq. (4).

$$\begin{array}{c}X\left(t+1\right)=r*\left({X}_{b}\left(t\right)-{\text{rand}}\left(\mathrm{0,1}\right)*{X}_{c}\left(t\right)\right)\end{array}$$
(4)

Attack prey

Formula (5) denotes the gap \({X}_{rnd}\) between the sand cat and the prey, simulating the sand cat's attack on the target. Assuming the sand cat's sensitivity range is circular, a position is produced randomly from the best position \(\left({X}_{b}\right)\) and the present position \(\left({X}_{c}\right)\). Subsequently, a random angle is picked using the roulette method, and the attack is executed using formula (6). The use of randomly generated angles effectively mitigates the risk of the algorithm succumbing to local optima.

$$\begin{array}{c}{X}_{rnd}=\left|{\text{rand}}\left(\mathrm{0,1}\right)*{X}_{b}\left(t\right)-{X}_{c}\left(t\right)\right|\end{array}$$
(5)
$$\begin{array}{c}X\left(t+1\right)={X}_{b}\left(t\right)-r*{X}_{rnd}*\mathit{cos}\left(\theta \right)\end{array}$$
(6)

Exploration and development

The adaptive values of the \(\overrightarrow{{r}_{G}}\) and \(R\) parameters facilitate exploration and exploitation, enabling the SCSO to smoothly transition between the two stages. The values of the \(R\) parameter are well balanced when the values of the \(\overrightarrow{{r}_{G}}\) parameter are distributed evenly. In a nutshell, \(R\) is a stochastic value within the range [− 2 \(\overrightarrow{{r}_{G}}\), 2 \(\overrightarrow{{r}_{G}}\)], where \(\overrightarrow{{r}_{G}}\) is reduced from 2 to 0 over the iterations. Consequently, the parameter \(R\) is a randomized value within the bounds of [− 4, 4].

If the value of |R| is less than or equal to 1, the sand cats are directed to attack the prey. Conversely, if |R| is greater than 1, the cats are assigned the responsibility of searching for a novel potential solution over the entire region.

$$\begin{array}{l}X\left(t+1\right)=\left\{\begin{array}{c}r*\left({X}_{b}\left(t\right)-rand\left(\mathrm{0,1}\right)*{X}_{c}\left(t\right)\right);\left|R\right|>1;\,exploration\\ {X}_{b}\left(t\right)-r*{X}_{rnd}*{\text{cos}}\left(\theta \right);\left|R\right|\le 1;\,exploitation\end{array}\right.\end{array}$$
(7)
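The two-branch update of Eq. (7) can be sketched as follows. This is a hypothetical sketch: the roulette-wheel angle selection is simplified to a uniform draw in [0, 2π], and the names are illustrative.

```python
import math
import numpy as np

def scso_update(x_c, x_b, r, big_r, rng=None):
    """One SCSO position update: explore when |R| > 1, else attack prey."""
    rng = np.random.default_rng(rng)
    if abs(big_r) > 1:                           # exploration, Eq. (4)
        return r * (x_b - rng.random(x_c.shape) * x_c)
    # exploitation, Eqs. (5)-(6): random distance to prey, random angle
    x_rnd = np.abs(rng.random(x_c.shape) * x_b - x_c)
    theta = rng.uniform(0.0, 2.0 * math.pi)      # simplified angle selection
    return x_b - r * x_rnd * math.cos(theta)
```

Note that with sensitivity \(r=0\) the exploitation branch collapses onto the best position \(X_b\), which matches Eq. (6).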

Improved sand cat swarm optimization algorithm

This study presents novel approaches to address the issues of slow convergence and susceptibility to local optima in the SCSO algorithm. Specifically, it introduces a dynamic exponential factor, an elite decentralization technique, and a crossbar strategy as potential enhancements to the SCSO algorithm. The subsequent part contains a detailed introduction to three improvement strategies.

Dynamic exponential factor

The weight factor is a parameter of significant importance. With a relatively high weight factor, the algorithm exhibits robust global search capability, enhancing population diversity and covering a vast area. When the weight factor is small, the algorithm has strong local search capability, efficiently exploring for the ideal solution and expediting convergence. As described by formula (4), the sand cat participates in local search during local optimization. According to formula (6), as the sand cat swarm approaches a local solution, it is restricted to approaching the locally optimal solution and lacks the ability to achieve superior local refinement. A novel dynamic exponential factor is suggested as a solution to this challenge, drawing inspiration from existing work. This factor can compensate for the limited capacity for local exploitation during the initial phase and improve the overall search capability during the latter phase, thus preventing the population from prematurely settling into a local optimum. Equation (8) gives the exponential factor.

$$\begin{array}{c}\omega ={\left({e}^{\left(1-{\left(\frac{t}{T}\right)}^{2}\right)}\right)}^{kt}\end{array}$$
(8)

In the given context, \(k\) represents an optimization factor that adheres to an exponential distribution. The variable \(t\) denotes the current iteration number, while \(T\) indicates the maximum number of iterations.
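Equation (8) can be sketched directly; the value of \(k\) used below (0.01) is an assumption for illustration, since the text only states that \(k\) follows an exponential distribution.

```python
import numpy as np

def dynamic_factor(t, t_max, k=0.01):
    """Dynamic exponential factor of Eq. (8): omega = (e^(1-(t/T)^2))^(k*t).
    k = 0.01 is an assumed illustrative value."""
    return (np.exp(1.0 - (t / t_max) ** 2)) ** (k * t)
```

At \(t=0\) the exponent \(kt\) is zero, so \(\omega=1\); the factor then varies with the iteration count before returning toward 1 at \(t=T\).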

Elite decentralization strategy

In the desert, the sand cat's ability to update its individual location relies solely on the guidance of random individuals within the community, resulting in a weak global search capability. This study introduces a novel elite decentralization approach, aiming to improve the proximity of individual sand cats to elite individuals and bolster the local exploitation capability of the sand cat population. The ultimate goal is to enable the sand cat population to identify optimal solutions with greater speed and accuracy.

The fitness of each individual sand cat is calculated, and the individual with the highest fitness is replicated into n copies, building the elite matrix. When |R| is less than or equal to 1, a random probability h is introduced to select the position update. Specifically, when h is equal to or less than 1/3, the population searches globally; this addresses the limitation of the original algorithm, which is vulnerable to local optima, and prevents the sand cat population from becoming "precocious". When the value of h exceeds 1/3, the elite matrix is included, ensuring that each dimension of the sand cat individual closely aligns with the elite individual, which allows rapid identification of the best value.

Formula (6) is replaced by the following position-update formulas:

When \(h\le 1/3\):

$$\begin{array}{c}X\left(t+1\right)={\omega *X}_{b}\left(t\right)-{C}_{1}*{\text{cos}}\left({C}_{2}\right)\left|r*{X}_{rnd}*\mathit{cos}\left(\theta \right)\right|+{X}_{E}\end{array}$$
(9)

When \(h>1/3\):

$$\begin{array}{c}X\left(t+1\right)=\omega *\left|{X}_{b}\left(t\right)-X\left(t\right)\right|+{X}_{E}\end{array}$$
(10)

where:

$$\begin{array}{c}{C}_{1}=2*\pi *rand\left(\mathrm{0,1}\right)\end{array}$$
(11)
$$\begin{array}{c}{C}_{2}=2*\left(1-\frac{t}{T}\right)*{\text{cos}}{\left(\frac{3*\pi *t}{2}\right)}^{3}\end{array}$$
(12)
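A hedged sketch of the elite-decentralization update of Eqs. (9)–(12). Names are illustrative; the random angle and the draw of \(h\) are assumed uniform, and `x_e` stands for one row of the elite matrix (a copy of the fittest cat).

```python
import math
import numpy as np

def elite_update(x, x_b, x_e, omega, r, t, t_max, rng=None):
    """Position update with elite decentralization: Eq. (9) when h <= 1/3
    (global search), Eq. (10) otherwise (move toward the elite)."""
    rng = np.random.default_rng(rng)
    h = rng.random()
    theta = rng.uniform(0.0, 2.0 * math.pi)
    x_rnd = np.abs(rng.random(x.shape) * x_b - x)
    if h <= 1.0 / 3.0:
        c1 = 2.0 * math.pi * rng.random()                                   # Eq. (11)
        c2 = 2.0 * (1 - t / t_max) * math.cos(3.0 * math.pi * t / 2.0) ** 3  # Eq. (12)
        # Eq. (9): perturbed global search anchored on the elite individual
        return omega * x_b - c1 * math.cos(c2) * np.abs(r * x_rnd * math.cos(theta)) + x_e
    # Eq. (10): contract toward the elite individual
    return omega * np.abs(x_b - x) + x_e
```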

Crossbar strategy

As the number of iterations increases, the sand cats within the population tend to cluster around the optimal individuals. This may cause population diversity to decline and hinder the algorithm's ability to find the global optimal solution. This paper incorporates the horizontal crossover strategy into the sand cat swarm optimization algorithm to mitigate the occurrence of local optima. The horizontal crossover is utilized to cross-search the population, minimizing search blind spots and addressing the global optimization problem. The vertical crossover operation is executed on the optimal solution to address premature convergence, enabling the algorithm to transcend local optima and enhance population diversity. The crossbar technique34 can improve the global search capability for intricate optimization problems, resulting in better solution precision and accelerated convergence.

Transverse crossing

Horizontal crossover conducts crossover operations across all dimensions of the population in order to facilitate reciprocal learning among distinct individuals. Prior to the horizontal crossing, all members of the sand cat population are randomly paired without repetition. Subsequently, an arithmetic crossover is conducted with probability \({P}_{h}\), typically set to 1. The offspring are formed by crossing the parent generation, as indicated by formulas (13) and (14):

$$\begin{array}{c}{M}_{i,d}^{hc}={q}_{1}*X\left(i,d\right)+\left(1-{q}_{1}\right)*X\left(j,d\right)+{c}_{1}*\left(X\left(i,d\right)-X\left(j,d\right)\right)\end{array}$$
(13)
$$\begin{array}{c}{M}_{j,d}^{hc}={q}_{2}*X\left(j,d\right)+\left(1-{q}_{2}\right)*X\left(i,d\right)+{c}_{2}*\left(X\left(j,d\right)-X\left(i,d\right)\right)\end{array}$$
(14)

where \({q}_{1}\) and \({q}_{2}\) are random numbers in [0, 1]; \({c}_{1}\) and \({c}_{2}\) are random numbers in [− 1, 1]; \(X\left(i,d\right)\) and \(X\left(j,d\right)\) are the d-th dimensions of parents \(X\left(i\right)\) and \(X\left(j\right)\), respectively; \({M}_{i,d}^{hc}\) and \({M}_{j,d}^{hc}\) represent the d-th-dimension offspring of \(X\left(i,d\right)\) and \(X\left(j,d\right)\) produced by horizontal crossing. The generated offspring compete with their parents, and the individual with the better fitness is retained.
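The pairing, crossover, and greedy selection described above can be sketched as follows (a minimal sketch; names are illustrative, and minimization is assumed so that a lower fitness value is better):

```python
import numpy as np

def horizontal_crossover(pop, fitness, rng=None):
    """Eqs. (13)-(14): randomly paired parents produce offspring over all
    dimensions; each offspring replaces its parent only if fitter."""
    rng = np.random.default_rng(rng)
    n, d = pop.shape
    idx = rng.permutation(n)                 # non-repeating random pairing
    new = pop.copy()
    for a, b in zip(idx[0::2], idx[1::2]):
        q1, q2 = rng.random(d), rng.random(d)
        c1 = rng.uniform(-1.0, 1.0, d)
        c2 = rng.uniform(-1.0, 1.0, d)
        m_a = q1 * pop[a] + (1 - q1) * pop[b] + c1 * (pop[a] - pop[b])  # Eq. (13)
        m_b = q2 * pop[b] + (1 - q2) * pop[a] + c2 * (pop[b] - pop[a])  # Eq. (14)
        if fitness(m_a) < fitness(pop[a]):   # greedy competition with parent
            new[a] = m_a
        if fitness(m_b) < fitness(pop[b]):
            new[b] = m_b
    return new
```

Because of the greedy selection, no individual's fitness can worsen after this operation.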

Longitudinal crossing

The SCSO algorithm tends to encounter local optima in later iterations, a phenomenon frequently attributed to particular dimensions falling into local optima during the updating procedure. The vertical crossover is an arithmetic crossover applied to all individuals between two distinct dimensions. It updates only a single dimension, facilitating the escape of a dimension that is trapped in a local optimum. The vertical crossover of the \({d}_{1}\) and \({d}_{2}\) dimensions of individual \(i\) yields the offspring, as calculated via Eq. (15).

$$\begin{array}{c}{M}_{i,{d}_{1}}^{vc}=q*X\left(i,{d}_{1}\right)+\left(1-q\right)*X\left(i,{d}_{2}\right)\end{array}$$
(15)

where \(q\) is a random number in [0, 1]; \({M}_{i,{d}_{1}}^{vc}\) is the child of parent \(X\left(i\right)\) generated by vertical crossing in the \({d}_{1}\) and \({d}_{2}\) dimensions. The offspring produced by vertical crossing compete with their parents, and the individuals with better fitness are retained.
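Eq. (15) with its greedy selection can be sketched as follows (names are illustrative; minimization is assumed, and the two dimensions are drawn uniformly without replacement):

```python
import numpy as np

def vertical_crossover(pop, fitness, rng=None):
    """Eq. (15): for each individual, blend two random dimensions d1, d2
    and update only d1; keep the child only if it is fitter."""
    rng = np.random.default_rng(rng)
    n, d = pop.shape
    new = pop.copy()
    for i in range(n):
        d1, d2 = rng.choice(d, size=2, replace=False)
        q = rng.random()
        child = pop[i].copy()
        child[d1] = q * pop[i][d1] + (1 - q) * pop[i][d2]   # Eq. (15)
        if fitness(child) < fitness(pop[i]):                # greedy competition
            new[i] = child
    return new
```

Updating a single dimension leaves the rest of the solution intact, which is what lets a dimension trapped in a local optimum escape without disturbing the others.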

Implementation of CWXSCSO algorithm

Based on the aforementioned enhancement methods, the basic flow of the modified sand cat swarm optimization technique suggested in this research is as follows.

Step 1 Establish the initial position of the population and ascertain the parameters;

Step 2 Calculate the fitness of the sand cat colony, and determine the current ideal location and target value.

Step 3 When the value of |R| is greater than 1, the position update formula (4) of the basic SCSO algorithm is utilized to update the current position of the sand cat.

When the value of \(\left|R\right|\) is less than or equal to 1, the hunting approach is selected based on a random probability \(h\). When \(h\) is less than or equal to 1/3, the position of the sand cat is updated using the improved position formula (9); conversely, when \(h\) is greater than 1/3, the position of the sand cat is updated using formula (10).

Step 4 The horizontal cross operation is performed to cross all dimensions, and the position is updated based on Eqs. (13) and (14).

Step 5 Based on formula (15), the longitudinal crossover operation is executed, followed by a comparison of fitness values, and the better individuals are retained.

Step 6 Assess whether the algorithm satisfies the stop condition. If so, exit the main loop and output the optimal location and value; otherwise, return to Step 3.

Step 7 Terminate the program and display the optimal outcome.
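The seven steps above can be condensed into one self-contained sketch. Assumptions made here for illustration: \(S_M=2\), \(k=0.01\) in Eq. (8), uniform random angles in place of roulette selection, and simple bound clipping; the function and variable names are illustrative, not from the authors' implementation.

```python
import math
import numpy as np

def cwxscso(fobj, dim, lb, ub, n_pop=30, t_max=200, seed=0):
    """Condensed CWXSCSO sketch (minimization)."""
    rng = np.random.default_rng(seed)
    pop = lb + rng.random((n_pop, dim)) * (ub - lb)             # Step 1
    fit = np.array([fobj(x) for x in pop])                      # Step 2
    best = pop[fit.argmin()].copy()
    for t in range(1, t_max + 1):
        r_g = 2.0 - 2.0 * t / t_max                             # Eq. (1)
        omega = math.exp(1.0 - (t / t_max) ** 2) ** (0.01 * t)  # Eq. (8)
        elite = np.tile(best, (n_pop, 1))                       # elite matrix
        for i in range(n_pop):
            r = r_g * rng.random()
            big_r = 2.0 * r_g * rng.random() - r_g              # Eq. (2)
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x_rnd = np.abs(rng.random(dim) * best - pop[i])
            if abs(big_r) > 1:                                  # Step 3: Eq. (4)
                pop[i] = r * (best - rng.random(dim) * pop[i])
            elif rng.random() <= 1.0 / 3.0:                     # Eq. (9)
                c1 = 2.0 * math.pi * rng.random()               # Eq. (11)
                c2 = 2.0 * (1 - t / t_max) * math.cos(3.0 * math.pi * t / 2.0) ** 3
                pop[i] = (omega * best
                          - c1 * math.cos(c2) * np.abs(r * x_rnd * math.cos(theta))
                          + elite[i])
            else:                                               # Eq. (10)
                pop[i] = omega * np.abs(best - pop[i]) + elite[i]
        np.clip(pop, lb, ub, out=pop)
        idx = rng.permutation(n_pop)                            # Step 4: Eq. (13)
        for a, b in zip(idx[0::2], idx[1::2]):
            q = rng.random(dim)
            c = rng.uniform(-1.0, 1.0, dim)
            child = q * pop[a] + (1 - q) * pop[b] + c * (pop[a] - pop[b])
            if fobj(child) < fobj(pop[a]):
                pop[a] = child
        for i in range(n_pop):                                  # Step 5: Eq. (15)
            d1, d2 = rng.choice(dim, size=2, replace=False)
            q = rng.random()
            child = pop[i].copy()
            child[d1] = q * pop[i][d1] + (1 - q) * pop[i][d2]
            if fobj(child) < fobj(pop[i]):
                pop[i] = child
        fit = np.array([fobj(x) for x in pop])                  # Step 6
        if fit.min() < fobj(best):
            best = pop[fit.argmin()].copy()
    return best, fobj(best)                                     # Step 7
```

Because the best position is only ever replaced by a fitter one, the returned value never exceeds the best fitness of the initial population.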

Improved pseudocode of sand cat group optimization algorithm

Algorithm 1
figure a

Pseudocode of the improved sand cat swarm optimization algorithm.

CWXSCSO flow chart

Figure 1 describes the flow chart of the improved algorithm in detail, as shown below:

Figure 1
figure 1

CWXSCSO flow chart.

Computational complexity

The time complexity of an algorithm is commonly denoted as \(O\). The CWXSCSO algorithm primarily comprises the dynamic exponential factor, the elite decentralization approach, and the crossbar strategy. With a search space dimension of \(m\), a population size of \(N\), and a maximum of \(T\) iterations, the time complexity of the algorithm in this paper is analyzed as follows. The time complexity of initialization is \(O(m)\). The time complexity of calculating and sorting the fitness values is \(O(N+N\times logN)\). The dynamic exponential factor is an improvement on the original linear weight, so its time complexity remains \(O\left(N\times m\right)\). The elite decentralization approach has a time complexity of \(O\left(N\times m\right)\). The time complexity of the crossbar technique is \(O\left(N\times m/2+m/2\right)=O(N\times m+m)\), and the other calculations are small and negligible. Thus, the CWXSCSO method exhibits an overall computational complexity of \(O\left(m+T\times N\left(1+m+{\text{log}}N\right)\right)\), aligning with the computational complexity of the conventional sand cat swarm optimization algorithm.

Simulation experiment and result analysis

Benchmark function

To assess the efficacy and enhancement of CWXSCSO, a set of 15 benchmark functions was chosen, as outlined in Appendix A1. Functions F1–F7 are unimodal, having a single global optimum and no local optima; this characteristic allows a more accurate evaluation of the algorithm's convergence speed and optimization accuracy. Functions F8–F13 are high-dimensional and multimodal; their multiple local extrema are frequently employed to assess the algorithm's performance in avoiding local optima and searching globally. F14 and F15 are fixed-dimension multimodal functions, which likewise exhibit numerous local extrema.

Experimental results and analysis of reference function

To verify the efficacy of each enhanced method in CWXSCSO, the three strategies are examined individually: the dynamic factor (JSCSO), the elite decentralization strategy (ZSCSO), and the crossbar strategy (XSCSO). The dimension of the search space for F1–F15 is fixed at 30, with a population size of \(N=50\). Additionally, the upper limit for the number of iterations, denoted \({T}_{max}\), is set to 1000. The optimal value, average value, and standard deviation are obtained by executing each function 30 times.

The statistics shown in Tables 1 and 2 demonstrate that the optimal value, median value, and standard deviation of CWXSCSO outperform those of SCSO at a dimensionality of 30. For the simple unimodal functions F1–F4, the CWXSCSO, JSCSO, and ZSCSO algorithms achieve the theoretical best value in every run. For F5, the optimal value and mean value of CWXSCSO are inferior to those of XSCSO, and its standard deviation is inferior to that of JSCSO; nevertheless, CWXSCSO still holds certain advantages over the basic algorithm. For the complex unimodal function F6, convergence to the theoretical best value is frequently challenging; however, the inclusion of the crossbar method has improved the algorithm's optimization accuracy, and the modified algorithm exhibits a 15-fold increase in accuracy compared to its original counterpart. For F7, the CWXSCSO strategy has superior performance in terms of optimal value, average value, and variance compared to SCSO and the other methodologies. CWXSCSO exhibits the highest optimization accuracy and the lowest standard deviation for the multimodal functions F8, F12, and F13. Alongside CWXSCSO, XSCSO exhibits superior optimization efficacy, indicating the advantage of the crossbar technique in helping the algorithm depart from local optima. All strategies attain the theoretical optimal value on F9–F11, suggesting that the incorporated strategies exhibit favorable stability. Table 2 demonstrates that the optimization accuracy of CWXSCSO is preferable to that of SCSO and the other variants for the fixed-dimension multimodal functions F14–F15. In simple terms, the revised algorithm incorporates dynamic factors and elite decentralization strategies to enhance its exploitation performance, while the crossbar technique helps the algorithm transcend local optima.

Table 1 Comparison of optimization results on the unimodal test functions (F1-F6) under the different improvement strategies.
Table 2 Comparison of optimization results on the multimodal and fixed-dimension test functions (F8-F15) under the different improvement strategies.

The convergence behavior across the functions is broadly similar, as depicted in Fig. 2. On F1-F7, the convergence curve of CWXSCSO is nearly linear, indicating that the enhanced approach outperforms the original technique in both convergence speed and optimization precision. The F8 curve clearly shows that CWXSCSO reached the optimal value after about 110 iterations, whereas the original algorithm failed to do so within 1000 iterations. Moreover, the curve has multiple inflection points, indicating that the improved algorithm not only attains a high fitness value but also readily escapes local optima. The curves for F11-F13 follow a linear pattern, suggesting that although all variants can locate the optimum, the enhanced method reaches it faster. On F9, F10, F14, and F15, convergence toward the initial optimum is rapid, and in several instances the enhanced algorithm escapes local optima while still converging quickly.

Figure 2
figure 2

Convergence curve for comparison between strategies.

The numerical results in Tables 1 and 2, together with the convergence curves in Fig. 2, demonstrate that the CWXSCSO algorithm achieves higher optimization accuracy, faster convergence, and better robustness than the other variants, providing empirical evidence of its effectiveness.

Comparison between CWXSCSO and different swarm intelligence algorithms

Test on CEC2019 benchmark function

The effectiveness analysis of the improved algorithm

This section compares the optimization performance of CWXSCSO against six recently improved algorithms: the salp swarm algorithm based on craziness and adaptivity (CASSA)35, the subtraction-average-based optimizer (GSABO)36, the grey wolf optimization algorithm based on elite learning for nonlinear parameters (IGWO)37, the whale optimization algorithm based on a chaotic search strategy (CWOA)38, the whale optimization algorithm based on elite opposition-based learning and crisscross optimization (ECWOA)34, and the multi-strategy chimp optimization algorithm (EOSMICOA)39. The benchmark functions are listed in Appendix A2. The experiments used a population size of \(N=50\) and 1000 iterations, and each experiment was repeated 30 times independently.
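The protocol above (population 50, 1000 iterations, 30 independent runs, reporting best value, mean, and standard deviation) can be sketched as follows. Here `random_search` and `sphere` are hypothetical stand-ins used purely for illustration, not implementations of CWXSCSO or the benchmark suite, and the budget is reduced so the sketch runs quickly:

```python
import numpy as np

def run_trials(optimizer, objective, n_trials=30, n_pop=50, max_iter=1000):
    """Repeat an optimizer independently and summarize best/mean/std of final fitness."""
    finals = np.array([optimizer(objective, n_pop, max_iter, seed=t)
                       for t in range(n_trials)])
    return finals.min(), finals.mean(), finals.std()

def random_search(objective, n_pop, max_iter, dim=10, seed=0):
    """Hypothetical stand-in optimizer: uniform random sampling in [-100, 100]^dim."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(max_iter):
        pop = rng.uniform(-100.0, 100.0, size=(n_pop, dim))
        fitness = np.array([objective(ind) for ind in pop])
        best = min(best, float(fitness.min()))
    return best

def sphere(x):
    """Unimodal sphere benchmark; the theoretical optimum is 0 at the origin."""
    return float(np.sum(x**2))

# Reduced budget for illustration (the paper's setting is n_trials=30, max_iter=1000):
best, mean, std = run_trials(random_search, sphere, n_trials=5, n_pop=50, max_iter=50)
```

Any of the compared metaheuristics can be dropped in for `random_search` as long as it accepts the same objective/budget arguments.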

Analysis of Table 3 reveals that the enhanced CWXSCSO algorithm achieves higher optimization accuracy than the other six intelligent optimization algorithms and performs more effectively on the chosen test functions. On F1 and F2, CWXSCSO attains the theoretical optimum of 1, and its mean and standard deviation are better than those of the comparison algorithms. Its results on F3, F5, F6, F8, and F10 likewise surpass the comparison algorithms. On F4, the standard deviation of CWXSCSO is slightly worse than that of ECWOA; on F7, its best value and mean are inferior to ECWOA's, although its standard deviation is more accurate. On F9, the precision of CWXSCSO is comparable to the other algorithms, though its median and standard deviation are somewhat weaker. Hence, although some values produced by the enhanced algorithm do not reach the theoretical optimum, it is generally superior to the other algorithms and holds an advantage on function optimization problems, further underscoring the efficacy of the proposed method.

Table 3 Comparison of optimization results with other intelligent algorithms.

The convergence properties of CWXSCSO relative to the other methods are evident in Fig. 3. On F1 and F2, the curve converges rapidly and quickly approaches the global optimum. On F3, the population swiftly attains an optimal value with higher accuracy than the other algorithms. Although the solution accuracy on F4 and F5 is relatively low, CWXSCSO still holds an advantage over the original approach and the other techniques. The curves for F6-F10 show several subtle inflection points, indicating that the algorithm has an exceptional ability to escape local optima and achieves stronger convergence accuracy than the other techniques. In short, on the CEC2019 test, CWXSCSO outperforms the other algorithms.

Figure 3
figure 3

The fitness curves of each algorithm are compared.

Wilcoxon rank sum test

The Wilcoxon rank sum test is a non-parametric statistical test that can be applied regardless of the underlying distribution of the data. It is therefore frequently used to compare two independent samples that do not follow a normal distribution. The test replaces each sample value with its rank, which mitigates the influence of any single value on the overall comparison, and thus characterizes an algorithm's optimization performance more rigorously than the median and standard deviation alone. This section compares and analyzes the results obtained on the CEC2019 test: taking CWXSCSO as the baseline, the optimization outcomes of CASSA, GSABO, IGWO, CWOA, ECWOA, and EOSMICOA were compared using the Wilcoxon rank sum test. The hypothesis test uses a significance level of \(p=5\%\) as the criterion for judgment; when the p-value falls below this level, the two groups of samples are judged to differ significantly.
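As a sketch of the procedure, the two-sided rank-sum p-value can be computed from the large-sample normal approximation; the fitness samples below are synthetic stand-ins, not the paper's data, and the helper assumes no tied values (reasonable for continuous fitness results):

```python
import numpy as np
from math import erf, sqrt

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the large-sample normal
    approximation (assumes distinct values, as with continuous fitness data)."""
    n1, n2 = len(x), len(y)
    both = np.concatenate([x, y])
    ranks = both.argsort().argsort() + 1.0        # rank 1 = smallest value
    w = ranks[:n1].sum()                          # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # std of W under H0
    z = (w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

# Hypothetical final-fitness samples from 30 independent runs of two algorithms:
rng = np.random.default_rng(42)
a = rng.normal(1.0, 0.1, 30)   # stand-in for the baseline algorithm's results
b = rng.normal(1.5, 0.3, 30)   # stand-in for a comparison algorithm's results
p = ranksum_p(a, b)
significant = p < 0.05         # reject H0: the two result sets differ
```

When both algorithms hit the theoretical optimum in every run, the samples are identical and no meaningful p-value exists, which corresponds to the \(NaN\) entries discussed below.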

In Table 4, \(NaN\) indicates that there is no statistically significant difference between the two groups of data. The symbols " + ", " = ", and " − " indicate that CWXSCSO outperforms, is equivalent to, or falls short of the comparison algorithm, respectively. The findings indicate that CWXSCSO holds a notable advantage over the comparison algorithms, both the recent methods and the existing enhanced optimization techniques.

Table 4 Results of Wilcoxon rank sum test for CEC2019 functions.

Test on CEC2021 benchmark function

Validity analysis on CEC2021 function

To comprehensively evaluate the optimization capability of the upgraded method, ten CEC2021 test functions with distinct optimization characteristics were selected (as listed in Appendix A3). CWXSCSO was run and compared against the six improved algorithms above. The parameters were uniformly set to a population size of \(N=50\), an upper limit of \({T}_{max}=1000\) iterations, and a dimension of \(d=10\). Each function was simulated 30 times to obtain the best value, mean, and standard deviation of the results, and the convergence curves of the ten functions were plotted.

Analysis of the data in Table 5 shows that CWXSCSO outperforms the other algorithms in both best value and mean across the ten functions. On C3, its minimum and mean are better than those of GSABO and IGWO, while its standard deviation is better only than those of IGWO and EOSMICOA, indicating that the enhanced algorithm is not robustly adapted to C3. On C7 and C9, the minimum and mean are optimal but the standard deviation is suboptimal: the algorithm fluctuates over a wide range, although it is still able to reach the theoretical optimum. On C1, C2, C4-C6, C8, and C10, CWXSCSO outperforms the other algorithms, with superior values, a narrow range of fluctuation, and excellent precision.

Table 5 Comparison of optimization results of different intelligent algorithms.

Figure 4 clearly shows that the curves for C1-C10 all exhibit steep slopes, indicating that the enhanced technique converges faster than the other optimization strategies. The CWXSCSO curves for C1, C4-C6, and C9 contain several bends, suggesting that the enhanced CWXSCSO can escape local optima. Some comparison algorithms, such as EOSMICOA, also exit local optima over many iterations, but their convergence speed and precision are not as high. On C3, C7, C8, and C10, CWXSCSO converges rapidly to the global optimum. On C3 and C8, the best fitness value reached by CWXSCSO is comparable to those of the original method and the CASSA algorithm, but it converges fastest.

Figure 4
figure 4

Convergence curves of different algorithms.

In summary, the CEC2021 function test demonstrates that the CWXSCSO algorithm outperforms the competing algorithms, confirming its effectiveness.

Wilcoxon rank sum test of CEC2021 test function

This section compares and analyzes the results obtained on the CEC2021 test to further evaluate the algorithm's optimization performance. Taking CWXSCSO as the baseline, the optimization outcomes of CASSA, GSABO, IGWO, CWOA, ECWOA, and EOSMICOA were compared using the Wilcoxon rank sum test. The analysis in Table 6 indicates that CWXSCSO shows no significant disparity with the comparison algorithms on several functions; however, it is inferior on none of them and superior on almost all. Hence, CWXSCSO delivers excellent results compared with both the recent methods and the existing enhanced optimization techniques.

Table 6 Results of Wilcoxon rank sum test for CEC2021 test.

Engineering application

Six engineering problems were selected for this part to assess how well CWXSCSO performs on engineering optimization tasks. For the first three problems, the sine cosine algorithm (SCA)40, the RIME optimization algorithm (RIME)41, the butterfly optimization algorithm (BOA)42, the Harris hawks optimization algorithm (HHO)8, and the osprey optimization algorithm (OOA)43 were chosen for comparison. For the last three problems, the whale optimization algorithm (WOA)7, the grasshopper optimization algorithm (GOA)44, the grey wolf optimizer (GWO)45, the marine predators algorithm (MPA)46, and RIME were used. Every algorithm in the experiments used a population of 30 and an upper limit of 1000 iterations.

Pressure vessel design problems

The performance of the modified algorithm is first assessed on the pressure vessel design problem, whose objective is to minimize the production cost of the vessel. The problem involves four optimization variables: shell thickness (\({T}_{S}\)), head thickness (\({T}_{h}\)), inner radius (\(R\)), and length of the cylindrical section without the head (\(L\)). The mathematical formulation of the pressure vessel design problem is as follows:

Variable:

$$\overrightarrow{x}=\left[{x}_{1} {x}_{2} {x}_{3} {x}_{4}\right]=\left[{T}_{S} {T}_{h} R L\right]$$

Function:

$$f\left(\overrightarrow{x}\right)=0.6224{x}_{1}{x}_{3}{x}_{4}+1.7781{x}_{2}{x}_{3}^{2}+3.1661{x}_{1}^{2}{x}_{4}+19.84{x}_{1}^{2}{x}_{3}$$

Constraint condition:

$${g}_{1}\left(\overrightarrow{x}\right)=-{x}_{1}+0.0193{x}_{3}\le 0$$
$${g}_{2}\left(\overrightarrow{x}\right)=-{x}_{2}+0.00954{x}_{3}\le 0$$
$${g}_{3}\left(\overrightarrow{x}\right)=-\pi {x}_{3}^{2}{x}_{4}-\frac{4}{3}\pi {x}_{3}^{3}+1296000\le 0$$
$${g}_{4}\left(\overrightarrow{x}\right)={x}_{4}-240\le 0$$

Variable interval:

$$0\le {x}_{1},{x}_{2}\le 99, 10\le {x}_{3},{x}_{4}\le 200$$
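For reference, the objective and constraints above can be wrapped for a metaheuristic with a static penalty, a common constraint-handling choice (assumed here, not necessarily the paper's). The constraints follow the standard formulation of this benchmark, and the design point is illustrative, not the reported optimum:

```python
import numpy as np

def pressure_vessel(x):
    """Cost and constraint values for x = [Ts, Th, R, L]."""
    ts, th, r, l = x
    cost = 0.6224*ts*r*l + 1.7781*th*r**2 + 3.1661*ts**2*l + 19.84*ts**2*r
    g = np.array([
        -ts + 0.0193 * r,                                    # shell thickness vs. radius
        -th + 0.00954 * r,                                   # head thickness vs. radius
        -np.pi*r**2*l - (4.0/3.0)*np.pi*r**3 + 1296000.0,    # minimum enclosed volume
        l - 240.0,                                           # maximum shell length
    ])
    return cost, g           # feasible iff every g_i <= 0

def fitness(x, penalty=1e6):
    """Static-penalty fitness: feasible points keep their raw cost."""
    cost, g = pressure_vessel(x)
    return cost + penalty * np.sum(np.maximum(g, 0.0)**2)

x = np.array([0.85, 0.45, 42.0, 190.0])   # an illustrative feasible design
cost, g = pressure_vessel(x)
```

CWXSCSO or any comparison algorithm would then minimize `fitness` over the variable intervals above; at a feasible point the penalized fitness equals the raw cost.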

The experimental results of CWXSCSO and the comparison algorithms are presented in Table 7. CWXSCSO attains a cost of 5886.05. Compared with the alternatives, it exhibits a clear competitive advantage, maintaining the proper functioning of the pressure vessel while minimizing cost. As depicted in Fig. 5, the updated method converges rapidly to the optimum with the best convergence accuracy. CWXSCSO thus exhibits excellent engineering optimization capability.

Table 7 Optimization results of pressure vessel design problems.
Figure 5
figure 5

Optimization convergence diagram of pressure vessel design problem.

Welding beam design problems

The welded beam design (WBD) problem seeks to minimize the fabrication cost of the design. The optimization reduces to identifying four design variables, weld thickness \((h)\), beam length \((l)\), height \((t)\), and thickness \((b)\), that satisfy constraints on shear stress \((\tau )\), bending stress \((\sigma )\), buckling load on the beam \(\left({P}_{c}\right)\), end deflection \((\delta )\), and the boundary conditions. The welded beam problem is a classic nonlinear programming problem. Its mathematical formulation is as follows:

Variable:

$$\overrightarrow{x}=\left[{x}_{1} {x}_{2} {x}_{3} {x}_{4}\right]=\left[h l t b\right]$$

Function:

$$f\left(\overrightarrow{x}\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}\left(14.0+{x}_{2}\right)$$

Constraint condition:

$${g}_{1}\left(\overrightarrow{x}\right)=\tau \left(\overrightarrow{x}\right)-{\tau }_{max}\le 0$$
$${g}_{2}\left(\overrightarrow{x}\right)=\sigma \left(\overrightarrow{x}\right)-{\sigma }_{max}\le 0$$
$${g}_{3}\left(\overrightarrow{x}\right)=\delta \left(\overrightarrow{x}\right)-{\delta }_{max}\le 0$$
$${g}_{4}\left(\overrightarrow{x}\right)={x}_{1}-{x}_{4}\le 0$$
$${g}_{5}\left(\overrightarrow{x}\right)=P-{P}_{c}\left(\overrightarrow{x}\right)\le 0$$
$${g}_{6}\left(\overrightarrow{x}\right)=0.125-{x}_{1}\le 0$$
$${g}_{7}\left(\overrightarrow{x}\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}\left(14.0+{x}_{2}\right)-5.0\le 0$$

Variable interval:

$$0.1\le {x}_{1}\le 2,{ 0.1\le x}_{2}\le 10, 0.1\le {x}_{3}\le 10 ,{0.1\le x}_{4}\le 2$$
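The cost function above is straightforward to evaluate directly; a minimal sketch follows. Only the constraints stated in closed form (g4, g6, g7) are included, since the shear stress \(\tau (x)\), bending stress \(\sigma (x)\), deflection \(\delta (x)\), and buckling load \({P}_{c}(x)\) depend on material constants not repeated here. The design point is a near-optimal design frequently reported in the literature (values approximate), not necessarily the paper's solution:

```python
def welded_beam_cost(x):
    """Fabrication cost for x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def closed_form_constraints(x):
    """Only g4, g6 and g7 from the formulation above; all must be <= 0."""
    h, l, t, b = x
    return [h - b,                        # g4: weld no thicker than the bar
            0.125 - h,                    # g6: minimum weld thickness
            welded_beam_cost(x) - 5.0]    # g7: cost budget

# A near-optimal design from the literature (approximate, for illustration):
x = (0.2057, 3.4705, 9.0366, 0.2057)
cost = welded_beam_cost(x)
```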

As Table 8 shows, the final result of CWXSCSO is 1.6935. As seen in Fig. 6, the initial fitness value of the improved algorithm is already very good, and several subtle turns appear later in the curve, indicating an ability to escape local optima. The improved algorithm thus reduces the manufacturing cost, and its welded beam cost is the lowest among the compared algorithms.

Table 8 Optimization results of welding beam design problems.
Figure 6
figure 6

Optimization convergence diagram of welding beam design problem.

Reducer design problem

The reducer is a crucial gearbox component in mechanical systems, serving a wide range of applications. The aim of this problem is to minimize the total weight of the reducer by optimizing seven variables: the face width \(b(={x}_{1})\), the gear module \(m(={x}_{2})\), the number of teeth on the pinion \(z(={x}_{3})\), the length of the first shaft between bearings \({l}_{1}(={x}_{4})\), the length of the second shaft between bearings \({l}_{2}(={x}_{5})\), the diameter of the first shaft \({d}_{1}(={x}_{6})\), and the diameter of the second shaft \({d}_{2}(={x}_{7})\). The mathematical formulation of the speed reducer design problem is as follows:

Variable:

$$\overrightarrow{x}=\left[{x}_{1} {x}_{2} {x}_{3} {x}_{4} {x}_{5} {x}_{6} {x}_{7}\right]=\left[b m z {l}_{1} {l}_{2} {d}_{1} {d}_{2}\right]$$

Function:

$$f\left(\overrightarrow{x}\right)=0.7854{x}_{1}{x}_{2}^{2}\left(3.3333{x}_{3}^{2}+14.9334{x}_{3}-43.0934\right)-1.508{x}_{1}\left({x}_{6}^{2}+{x}_{7}^{2}\right)+7.4777\left({x}_{6}^{3}+{x}_{7}^{3}\right)+0.7854({x}_{4}{x}_{6}^{2}+{{x}_{5}x}_{7}^{2})$$

Constraint condition:

$${g}_{1}\left(\overrightarrow{x}\right)=\frac{27}{{x}_{1}{x}_{2}^{2}{x}_{3}}-1\le 0$$
$${g}_{2}\left(\overrightarrow{x}\right)=\frac{397.5}{{x}_{1}{x}_{2}^{2}{x}_{3}^{2}}-1\le 0$$
$${g}_{3}\left(\overrightarrow{x}\right)=\frac{1.93{x}_{4}^{3}}{{x}_{2}{x}_{3}{x}_{6}^{4}}-1\le 0$$
$${g}_{4}\left(\overrightarrow{x}\right)=\frac{1.93{x}_{5}^{3}}{{x}_{2}{x}_{3}{x}_{7}^{4}}-1\le 0$$
$${g}_{5}\left(\overrightarrow{x}\right)=\frac{\sqrt{{\left(\frac{745{x}_{4}}{{x}_{2}{x}_{3}}\right)}^{2}+16.9\times {10}^{6}}}{110.0{x}_{6}^{3}}-1\le 0$$
$${g}_{6}\left(\overrightarrow{x}\right)=\frac{\sqrt{{\left(\frac{745{x}_{5}}{{x}_{2}{x}_{3}}\right)}^{2}+157.5\times {10}^{6}}}{85.0{x}_{7}^{3}}-1\le 0$$
$${g}_{7}\left(\overrightarrow{x}\right)=\frac{{x}_{2}{x}_{3}}{40}-1\le 0$$
$${g}_{8}\left(\overrightarrow{x}\right)=\frac{{5x}_{2}}{{x}_{1}}-1\le 0$$
$${g}_{9}\left(\overrightarrow{x}\right)=\frac{{x}_{1}}{{12x}_{2}}-1\le 0$$
$${g}_{10}\left(\overrightarrow{x}\right)=\frac{{1.5x}_{6}+1.9}{{x}_{4}}-1\le 0$$
$${g}_{11}\left(\overrightarrow{x}\right)=\frac{{1.1x}_{7}+1.9}{{x}_{5}}-1\le 0$$

Variable interval:

$$2.6\le {x}_{1}\le 3.6 ,0.7\le {x}_{2}\le 0.8 ,17\le {x}_{3}\le 28 , 7.3\le {x}_{4}\le 8.3 ,7.8\le {x}_{5}\le 8.3,$$
$$2.9\le {x}_{6}\le 3.9, 5.0\le {x}_{7}\le 5.5$$
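The full problem can be evaluated as below, following the standard benchmark formulation (in particular, g6 applies to the second shaft, \({x}_{5}\) and \({x}_{7}\)). The trial design is feasible and close to, but not, the best-known solution:

```python
import numpy as np

def speed_reducer(x):
    """Weight objective and the 11 constraints; feasible iff every g_i <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = np.array([
        27.0 / (x1 * x2**2 * x3) - 1,              # g1: tooth bending stress
        397.5 / (x1 * x2**2 * x3**2) - 1,          # g2: surface stress
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1,      # g3: deflection of shaft 1
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1,      # g4: deflection of shaft 2
        np.sqrt((745 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1,   # g5
        np.sqrt((745 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1,   # g6
        x2 * x3 / 40.0 - 1,                        # g7
        5.0 * x2 / x1 - 1,                         # g8
        x1 / (12.0 * x2) - 1,                      # g9
        (1.5 * x6 + 1.9) / x4 - 1,                 # g10
        (1.1 * x7 + 1.9) / x5 - 1,                 # g11
    ])
    return f, g

x = np.array([3.5, 0.7, 17.0, 7.3, 7.8, 3.36, 5.3])  # feasible trial design
f, g = speed_reducer(x)
```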

Table 9 and Fig. 7 demonstrate that the modified method minimizes the weight of the reducer under the eleven constraints, indicating that the enhancement is effective and can be applied productively in mechanical systems.

Table 9 Optimization results of reducer design problems.
Figure 7
figure 7

Reducer design optimization convergence curve.

Step cone pulley problem

This problem seeks to design a 4-step cone pulley of minimal weight using five design variables: the diameters of the individual pulley steps, \({d}_{i}(i=1,2,3,4)\), and the pulley width, \(w\) (denoted \(\omega \) in the formulas below). The problem has 8 nonlinear constraints and 3 linear constraints, which require the belt length \({C}_{i}\), tension ratio \({R}_{i}\), and transmitted belt power \({P}_{i}\) to be consistent across all steps. The mathematical formulation of the step cone pulley problem is as follows:

Function:

$$f\left(x\right)=\rho \omega \left[{d}_{1}^{2}\left\{1+{\left(\frac{{N}_{1}}{N}\right)}^{2}\right\}+{d}_{2}^{2}\left\{1+{\left(\frac{{N}_{2}}{N}\right)}^{2}\right\}+{d}_{3}^{2}\left\{1+{\left(\frac{{N}_{3}}{N}\right)}^{2}\right\}+{d}_{4}^{2}\left\{1+{\left(\frac{{N}_{4}}{N}\right)}^{2}\right\}\right]$$

Constraint condition:

$${h}_{1}\left(x\right)={C}_{1}-{C}_{2}=0, {h}_{2}\left(x\right)={C}_{1}-{C}_{3}=0 , {h}_{3}\left(x\right)={C}_{1}-{C}_{4}=0$$
$${g}_{1,2,3,4}\left(x\right)={R}_{i}\ge 2, \quad {g}_{5,6,7,8}\left(x\right)={P}_{i}\ge 0.75\times 745.6998$$

where:

$${C}_{i}=\frac{\pi {d}_{i}}{2}\left(1+\frac{{N}_{i}}{N}\right)+\frac{{\left(\frac{{N}_{i}}{N}-1\right)}^{2}{d}_{i}^{2}}{4a}+2a, \quad i=1,2,3,4$$
$${R}_{i}=exp\left[\mu \left\{\pi -2{{\text{sin}}}^{-1}\left\{\left(\frac{{N}_{i}}{N}-1\right)\frac{{d}_{i}}{2a}\right\}\right\}\right], \quad i=1,2,3,4$$
$${P}_{i}=stw\left[1-exp\left[-\mu \left\{\pi -2{{\text{sin}}}^{-1}\left\{\left(\frac{{N}_{i}}{N}-1\right)\frac{{d}_{i}}{2a}\right\}\right\}\right]\right]\frac{\pi {d}_{i}{N}_{i}}{60}, \quad i=1,2,3,4$$
$$\rho =7200\,kg/{m}^{3}, \quad a=3\,m, \quad \mu =0.35, \quad s=1.75\,MPa, \quad t=8\,mm$$

Variable interval:

$$0\le {d}_{1}, {d}_{2}\le 60, 0\le {d}_{3}, \omega \le 90$$
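The objective and belt-length terms can be sketched as below. Note the drive speeds (input \(N=350\) rpm, step outputs 750, 450, 250, 150 rpm) are assumed from the usual form of this benchmark and are not stated in the text above; the belt length uses the standard expression, and the design point is illustrative only:

```python
import numpy as np

# Assumed drive data (standard benchmark values, not given in the text above):
N = 350.0                                  # input speed, rpm
NI = np.array([750.0, 450.0, 250.0, 150.0])  # output speed of each step, rpm
RHO, A = 7200.0, 3.0                       # density rho and center distance a

def pulley_weight(d, w):
    """Objective: weight proportional to w * sum(d_i^2 * (1 + (N_i/N)^2))."""
    return RHO * w * np.sum(d**2 * (1.0 + (NI / N)**2))

def belt_lengths(d):
    """Belt length C_i for each step; h1-h3 require all four lengths to coincide."""
    r = NI / N
    return np.pi * d / 2.0 * (1.0 + r) + (r - 1.0)**2 * d**2 / (4.0 * A) + 2.0 * A

d = np.array([40.0, 55.0, 73.0, 88.0])   # illustrative step diameters
w = 85.0                                  # illustrative pulley width
weight = pulley_weight(d, w)
residuals = belt_lengths(d)[1:] - belt_lengths(d)[0]   # h-constraint violations
```

An optimizer would drive the three `residuals` to zero (the equality constraints) while minimizing `weight` subject to the tension-ratio and power constraints.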

Table 10 shows that the MPA method outperforms CWXSCSO on this problem, although CWXSCSO still holds an advantage over the remaining algorithms. Figure 8 illustrates that while the convergence precision of CWXSCSO is lower than that of MPA, its convergence speed is higher. Thus, even though CWXSCSO trails MPA on the step cone pulley problem, it retains the benefit of rapid convergence.

Table 10 Optimization results of step cone pulley problem.
Figure 8
figure 8

Optimization convergence diagram of step cone pulley problem.

Planetary gear train design optimization problem

The design of a planetary gear train is a constrained optimization problem in power machinery. It involves three groups of optimization variables: the gear tooth counts \(\left({N}_{1},{N}_{2},{N}_{3},{N}_{4},{N}_{5},{N}_{6}\right)\), the gear modules \(\left({m}_{1},{m}_{2}\right)\), and the number of planets \(\left(p\right)\). The objective is to minimize the maximum error of the transmission ratios used in automotive production. The problem comprises six integer variables, three discrete variables, and eleven geometric and assembly constraints. The mathematical formulation of the planetary gear train design optimization problem is as follows:

Variable:

$$x=\left({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6},{x}_{7},{x}_{8},{x}_{9}\right)=\left({N}_{1},{N}_{2},{N}_{3},{N}_{4},{N}_{5},{N}_{6},{m}_{1},{m}_{2},p\right)$$

Function:

$$f\left(x\right)=max\left|{i}_{k}-{i}_{ok}\right|, k=\left\{\mathrm{1,2},\dots ,R\right\}$$

where:

$${i}_{1}=\frac{{N}_{6}}{{N}_{4}}, {i}_{01}=3.11, {i}_{2}=\frac{{N}_{6}\left({N}_{1}{N}_{3}+{N}_{2}{N}_{4}\right)}{{N}_{1}{N}_{3}\left({N}_{6}+{N}_{4}\right)}, {i}_{02}=1.84, {i}_{R}=-\frac{{N}_{2}{N}_{6}}{{N}_{1}{N}_{3}}, {i}_{0R}=-3.11$$

Constraint condition:

$${g}_{1}\left(x\right)={m}_{2}\left({N}_{6}+2.5\right)-{D}_{max}\le 0$$
$${g}_{2}\left(x\right)={m}_{1}\left({N}_{1}+{N}_{2}\right)+{m}_{1}\left({N}_{2}+2\right)-{D}_{max}\le 0$$
$${g}_{3}\left(x\right)={m}_{2}\left({N}_{4}+{N}_{5}\right)+{m}_{2}\left({N}_{5}+2\right)-{D}_{max}\le 0$$
$${g}_{4}\left(x\right)=\left|{m}_{1}\left({N}_{1}+{N}_{2}\right)-{m}_{1}\left({N}_{6}+{N}_{3}\right)\right|-{m}_{1}-{m}_{2}\le 0$$
$${g}_{5}\left(x\right)=-\left({N}_{1}+{N}_{2}\right){\text{sin}}\left(\frac{\pi }{p}\right)+{N}_{2}+2+{\delta }_{22}\le 0$$
$${g}_{6}\left(x\right)=-\left({N}_{6}-{N}_{3}\right){\text{sin}}\left(\frac{\pi }{p}\right)+{N}_{3}+2+{\delta }_{33}\le 0$$
$${g}_{7}\left(x\right)=-\left({N}_{4}+{N}_{5}\right){\text{sin}}\left(\frac{\pi }{p}\right)+{N}_{5}+2+{\delta }_{55}\le 0$$
$${g}_{8}\left(x\right)={\left({N}_{3}+{N}_{5}+2+{\delta }_{35}\right)}^{2}-{\left({N}_{6}-{N}_{3}\right)}^{2}-{\left({N}_{4}+{N}_{5}\right)}^{2}+2\left({N}_{6}-{N}_{3}\right)\left({N}_{4}+{N}_{5}\right){\text{cos}}\left(\frac{2\pi }{p}-\beta \right)\le 0$$
$${g}_{9}\left(x\right)={N}_{4}-{N}_{6}+{2N}_{5}+2{\delta }_{56}+4\le 0$$
$${g}_{10}\left(x\right)={2N}_{3}-{N}_{6}+{N}_{4}+2{\delta }_{34}+4\le 0$$
$${h}_{1}\left(x\right)=\frac{{N}_{6}-{N}_{4}}{p}=integer$$

where:

$${\delta }_{22}={\delta }_{33}={\delta }_{55}={\delta }_{35}={\delta }_{56}=0.5$$
$$\beta ={cos}^{-1}\left(\frac{{\left({N}_{4}+{N}_{5}\right)}^{2}+{\left({N}_{6}-{N}_{3}\right)}^{2}-{\left({N}_{3}+{N}_{5}\right)}^{2}}{2\left({N}_{6}-{N}_{3}\right)\left({N}_{4}+{N}_{5}\right)}\right)$$
$${D}_{max}=220$$

Variable interval:

$$p=\left(3,4,5\right), {m}_{1},{m}_{2}=\left(1.75, 2.0, 2.25, 2.5, 2.75, 3.0\right), 17\le {N}_{1}\le 96,$$
$$14\le {N}_{2}\le 54, 14\le {N}_{3}\le 51, 17\le {N}_{4}\le 46, 14\le {N}_{5}\le 51, 48\le {N}_{6}\le 124$$
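The objective is simply the largest deviation of the three realized ratios from their targets, and can be sketched as below. The tooth counts in the example are chosen inside the stated bounds for illustration and are not the paper's solution:

```python
def ratio_error(n1, n2, n3, n4, n5, n6):
    """Objective f(x) = max_k |i_k - i_ok| for the three transmission ratios."""
    i1 = n6 / n4                                           # first gear ratio
    i2 = n6 * (n1 * n3 + n2 * n4) / (n1 * n3 * (n6 + n4))  # second gear ratio
    ir = -n2 * n6 / (n1 * n3)                              # reverse gear ratio
    targets = (3.11, 1.84, -3.11)                          # i01, i02, i0R
    return max(abs(i1 - targets[0]),
               abs(i2 - targets[1]),
               abs(ir - targets[2]))

# Illustrative tooth counts inside the stated bounds (not an optimized design):
err = ratio_error(40, 21, 14, 19, 16, 69)
```

A full solver would additionally enforce the geometric constraints g1-g10, the integrality condition h1, and the discrete sets for \(p\), \({m}_{1}\), \({m}_{2}\).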

The data in Fig. 9 and Table 11 show that CWXSCSO continues to outperform the other methods in both convergence accuracy and convergence speed, illustrating the potential for broad application of the upgraded algorithm in power machinery.

Figure 9
figure 9

Convergence curve of planetary gear train design optimization problem.

Table 11 Results of planetary gear train design optimization problem.

Robot clamping optimization problem

The robot gripper problem is a complex challenge in mechanical structure engineering. The goal of the optimization is to minimize the difference between the maximum and minimum gripping forces. The problem involves seven continuous design variables: the three link lengths \((a,b,c)\), the vertical offset of the links \((d)\), the vertical distance from the first node of the robotic arm to the actuator end \((e)\), the horizontal displacement between the actuator end and the link node \((f)\), and the geometric angle between the second and third links \((p)\). There are seven constraints in total. The mathematical formulation of the robot gripper optimization problem is as follows:

Variable:

$$x=\left({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6},{x}_{7}\right)=\left(a,b,c,d,e,f,p\right)$$

Function:

$$f\left(x\right)=-\underset{z}{{\text{min}}}{F}_{k}\left(x,z\right)+\underset{z}{{\text{max}}}{F}_{k}\left(x,z\right)$$

Constraint condition:

$${g}_{1}\left(x\right)=-{Y}_{min}+y\left(\left(x\right),{Z}_{max}\right)\le 0$$
$${g}_{2}\left(x\right)=-y\left(\left(x\right),{Z}_{max}\right)\le 0$$
$${g}_{3}\left(x\right)={Y}_{max}-y\left(\left(x\right),0\right)\le 0$$
$${g}_{4}\left(x\right)=y\left(\left(x\right),0\right)-{Y}_{G}\le 0$$
$${g}_{5}\left(x\right)={l}^{2}+{d}^{2}-{\left(a+b\right)}^{2}\le 0$$
$${g}_{6}\left(x\right)={b}^{2}-{\left(a-d\right)}^{2}-{\left(l-{Z}_{max}\right)}^{2}\le 0$$
$${g}_{7}\left(x\right)={Z}_{max}-f\le 0$$

where:

$$\alpha ={cos}^{-1}\left(\frac{{a}^{2}+{g}^{2}-{b}^{2}}{2ag}\right)+\Phi , g=\sqrt{{f}^{2}+{\left(z-f\right)}^{2}}$$
$$\beta ={cos}^{-1}\left(\frac{{b}^{2}+{g}^{2}-{a}^{2}}{2bg}\right)-\Phi ,\Phi ={tan}^{-1}\left(\frac{d}{f-z}\right)$$
$$y\left(x,z\right)=2\left(l+d+c{\text{sin}}\left(\beta +p\right)\right)$$
$${F}_{k}=\frac{Pb{\text{sin}}\left(\alpha +\beta \right)}{2c{\text{cos}}\left(\alpha \right)}, {Y}_{min}=50, {Y}_{max}=100, { Y}_{G}=150, P=100$$

Variable interval:

$$0\le d\le 50, 100\le c\le 200, 10\le e,a,b\le 150, 1\le p\le 3.14, 100\le f\le 300$$
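A minimal sketch of evaluating the force-spread objective over a grid of actuator displacements \(z\) follows. It uses the geometric expressions as transcribed above (with the law-of-cosines form for \(\beta \)); \({Z}_{max}\) is assumed to be 100, the arccos arguments are clipped defensively, and the design point is illustrative only:

```python
import numpy as np

P, ZMAX = 100.0, 100.0   # actuating force P (given); ZMAX assumed, not stated above

def grip_force(x, z):
    """Gripping force F_k at actuator displacement z for x = (a, b, c, d, e, f, p)."""
    a, b, c, d, e, f, p = x
    g = np.sqrt(f**2 + (z - f)**2)
    phi = np.arctan(d / (f - z))
    # Law-of-cosines angles of the link triangle (arguments clipped for safety):
    alpha = np.arccos(np.clip((a**2 + g**2 - b**2) / (2 * a * g), -1, 1)) + phi
    beta = np.arccos(np.clip((b**2 + g**2 - a**2) / (2 * b * g), -1, 1)) - phi
    return P * b * np.sin(alpha + beta) / (2.0 * c * np.cos(alpha))

def force_spread(x, n_grid=50):
    """Objective: max_z F_k - min_z F_k over a discretized range of z."""
    zs = np.linspace(0.0, ZMAX, n_grid)
    forces = np.array([grip_force(x, z) for z in zs])
    return float(forces.max() - forces.min())

x = (150.0, 150.0, 200.0, 10.0, 20.0, 150.0, 1.5)  # illustrative design point
spread = force_spread(x)
```

A full solver would minimize `spread` subject to the displacement constraints g1-g7 above.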

The data in Table 12 indicate that CWXSCSO achieves the smallest disparity between the maximum and minimum forces, and Fig. 10 shows that its convergence accuracy is the highest. Thus, the modified algorithm exhibits commendable competitiveness in the field of mechanical engineering.

Table 12 Results of robot clamping optimization problem.
Figure 10
figure 10

Convergence diagram of robot clamping optimization problem.

Conclusion

This article introduces an enhanced sand cat swarm optimization algorithm that strengthens optimization power through elite decentralization and a crisscross strategy. The enhanced algorithm introduces a novel dynamic exponential factor; the position-update formula is modified with the elite decentralization technique, and the crisscross strategy is then applied to accelerate convergence and improve search precision. All strategies were tested on a suite of 15 benchmark functions. Compared with SCSO, CWXSCSO demonstrates superior search efficiency and stability, along with improved local search power and optimization accuracy. Among the strategies, elite decentralization and the crisscross strategy prove particularly advantageous in carrying the algorithm beyond local optima. Simulation experiments were conducted on 10 CEC2019 test functions and 10 CEC2021 test functions with CWXSCSO and 6 other optimization algorithms; the results show that CWXSCSO outperforms the other 6 algorithms and can produce globally optimal solutions for certain functions. The feasibility of the modified algorithm in practical engineering is further proven on the pressure vessel, welded beam, reducer, step cone pulley, planetary gear train, and robot gripper problems. Consequently, the next stage of this work is to apply the algorithm to extensive, intricate multi-objective optimization problems and real engineering implementations.