Improved marine predators algorithm for engineering design optimization problems

The Marine Predators Algorithm (MPA) is a population-based optimization method inspired by optimal foraging theory and the encounter-rate policy between predator and prey in marine ecosystems. However, the MPA suffers from premature convergence, entrapment in local optima, and a lack of diversity, particularly on real-world niche problems across different industrial engineering design domains. To overcome these limitations, this paper presents an Improved Marine Predators Algorithm (IMPA) that deploys a self-adaptive weight and a dynamic social learning mechanism. The IMPA is evaluated on challenging multimodal benchmark functions and the CEC 2021 benchmark suite, and compared with state-of-the-art hybrid optimization algorithms and recently modified MPA variants. The experimental results show that the IMPA attains better precision and robustness than the other methods, owing to its balanced exploration and exploitation. To provide a promising solution for industrial engineering design problems and highlight the potential of the IMPA as a practical tool for real-world optimization, this study applies it to four highly representative engineering design problems: welded beam design, tension/compression spring design, pressure vessel design, and three-bar truss design. The experimental results also confirm its efficiency in solving complex industrial engineering design problems.

The initial population is scattered uniformly over the search space, as in Eq. (1):

X_0 = X_min + rand ⊗ (X_max − X_min)   (1)

where rand is a random vector in [0, 1], and X_min and X_max denote the lower and upper bounds of the parameters being optimized. This initial dispersion of solutions sets the stage for the subsequent optimization steps. In the MPA, the best solution discovered so far is designated the top predator, and it plays a pivotal role in the creation of a matrix known as Elite, as depicted in Eq. (2). This matrix is instrumental in tracking and locating prey, relying on the positional information of the prey itself.
Elite = [ X^I_{1,1} ... X^I_{1,d} ; X^I_{2,1} ... X^I_{2,d} ; ... ; X^I_{n,1} ... X^I_{n,d} ]   (2)

In the Elite matrix, X^I represents the best predator vector, replicated n times, where n corresponds to the number of predators and d is the number of dimensions. The Elite matrix is updated whenever a superior agent replaces the current best predator. The Prey matrix, on the other hand, shares the same dimensionality as the Elite matrix, and the agents in pursuit of prey adjust their positions based on the information it contains. The initialization phase is therefore pivotal in generating the initial Prey matrix, with the best predator being responsible for creating the Elite matrix. Eq. (3) provides an expression for the Prey matrix:

Prey = [ X_{1,1} ... X_{1,d} ; X_{2,1} ... X_{2,d} ; ... ; X_{n,1} ... X_{n,d} ]   (3)

where X_{i,j} denotes the j-th dimension of the i-th prey.
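The initialization of the Prey and Elite matrices (Eqs. (2)-(3)) can be sketched as follows. This is an illustrative reading of the scheme, not the authors' code; the sphere objective and the bounds used in the demo call are assumptions for demonstration only.

```python
import numpy as np

def initialize(n, d, x_min, x_max, objective, rng):
    # Eq. (1): scatter n agents uniformly between the lower and upper bounds.
    prey = x_min + rng.random((n, d)) * (x_max - x_min)
    fitness = np.array([objective(p) for p in prey])
    # Eq. (2): the Elite matrix replicates the current top predator n times.
    elite = np.tile(prey[np.argmin(fitness)], (n, 1))
    return prey, elite, fitness

# Illustrative run on a sphere objective (assumed, not from the paper).
rng = np.random.default_rng(0)
prey, elite, fitness = initialize(50, 10, -100.0, 100.0,
                                  lambda x: float(np.sum(x ** 2)), rng)
```

The Elite matrix is simply the best row of the Prey matrix tiled n times, so every agent can reference the top predator's position during the update phases.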
In essence, the entire optimization process in the MPA hinges on the interactions and dynamics governed by the Prey and Elite matrices, which encapsulate the algorithm's unique approach to problem-solving. The MPA's optimization process comprises three phases. Phase 1 - High Velocity Ratio (HVR): this initial phase applies while the prey moves faster than the predators, emphasizing exploration in the early stages of optimization. Its mathematical representation is given in Eq. (4):

While Iteration < 1/3 × max(Iteration):
stepsize_i = R_B ⊗ (Elite_i − R_B ⊗ Prey_i), i = 1, ..., n
Prey_i = Prey_i + P · R ⊗ stepsize_i   (4)

where R_B is a vector of normally distributed random numbers representing Brownian motion.
The symbol ⊗ denotes entry-wise multiplication; multiplying by R_B simulates the Brownian motion of the prey. This phase introduces two quantities: a constant P (set to 0.5 in the original MPA) and a vector R of uniformly distributed random values in [0, 1].
Phase 2 - Uniform Velocity Ratio (UVR): in this phase, prey and predators move at comparable speeds, necessitating a balance between exploration and exploitation. Half of the agents are assigned to exploration and the other half to exploitation, with predators and prey sharing these responsibilities. The mathematical representation during this phase is as follows:

While 1/3 × max(Iteration) < Iteration < 2/3 × max(Iteration):

For the first half of the search agents (Eq. (5)):
stepsize_i = R_L ⊗ (Elite_i − R_L ⊗ Prey_i), i = 1, ..., n/2
Prey_i = Prey_i + P · R ⊗ stepsize_i   (5)

where R_L is a vector of random numbers generated from a Lévy-flight (LF) distribution; it simulates prey movement by adding the step size to the prey location. The MPA posits that the other 50% of the population follows Eq. (6):

stepsize_i = R_B ⊗ (R_B ⊗ Elite_i − Prey_i), i = n/2 + 1, ..., n
Prey_i = Elite_i + P · CF ⊗ stepsize_i   (6)

where CF = (1 − Iteration/max(Iteration))^(2·Iteration/max(Iteration)) is an adaptive factor controlling the predator step size.

Phase 3 - Low Velocity Ratio (LVR): this stage occurs when the prey moves more slowly than the predators and is defined in Eq. (7):

While Iteration > 2/3 × max(Iteration):
stepsize_i = R_L ⊗ (R_L ⊗ Elite_i − Prey_i), i = 1, ..., n
Prey_i = Elite_i + P · CF ⊗ stepsize_i   (7)

Environmental effects such as eddy formation and Fish Aggregating Devices (FADs) can also impact the algorithm's behavior. These factors are treated as local-optima avoidance operators, and the longer jumps they induce during the simulation help minimize stagnation in local optima. The impact of FADs is modelled in Eq. (8):

Prey_i = Prey_i + CF · [X_min + R ⊗ (X_max − X_min)] ⊗ U,   if r ≤ FADs
Prey_i = Prey_i + [FADs(1 − r) + r] · (Prey_{r1} − Prey_{r2}),   if r > FADs   (8)

where FADs = 0.2 is the probability of the FADs effect, U is a binary vector, r is a uniform random number in [0, 1], and r1 and r2 are random prey indices. These phases and adaptive strategies define the dynamics of the MPA, presented in Fig. 1, enabling it to navigate the optimization landscape efficiently while addressing different scenarios and challenges.
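One iteration of the three phases plus the FADs operator (Eqs. (4)-(8)) can be sketched in NumPy as below. This is a simplified reading of the standard MPA, not the authors' implementation; the Lévy step uses Mantegna's algorithm with β = 1.5, P = 0.5 and FADs = 0.2 as in the original MPA, and the bounds and population in the demo call are illustrative assumptions.

```python
import math
import numpy as np

def mpa_step(prey, elite, it, max_it, x_min, x_max, P=0.5, fads=0.2, rng=None):
    """One MPA iteration sketching Eqs. (4)-(8)."""
    rng = rng if rng is not None else np.random.default_rng()
    n, d = prey.shape
    CF = (1.0 - it / max_it) ** (2.0 * it / max_it)   # adaptive step factor
    RB = rng.normal(size=(n, d))                      # Brownian motion vector
    beta = 1.5                                        # Levy flight (Mantegna)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    RL = 0.05 * rng.normal(0, sigma, (n, d)) / np.abs(rng.normal(size=(n, d))) ** (1 / beta)
    R = rng.random((n, d))
    new = prey.copy()
    if it < max_it / 3:                               # Phase 1 (HVR), Eq. (4)
        new = prey + P * R * (RB * (elite - RB * prey))
    elif it < 2 * max_it / 3:                         # Phase 2 (UVR), Eqs. (5)-(6)
        h = n // 2
        new[:h] = prey[:h] + P * R[:h] * (RL[:h] * (elite[:h] - RL[:h] * prey[:h]))
        new[h:] = elite[h:] + P * CF * (RB[h:] * (RB[h:] * elite[h:] - prey[h:]))
    else:                                             # Phase 3 (LVR), Eq. (7)
        new = elite + P * CF * (RL * (RL * elite - prey))
    if rng.random() < fads:                           # FADs effect, Eq. (8)
        U = (rng.random((n, d)) < fads).astype(float)
        new = new + CF * (x_min + rng.random((n, d)) * (x_max - x_min)) * U
    else:
        r = rng.random()
        new = new + (fads * (1 - r) + r) * (prey[rng.permutation(n)] - prey[rng.permutation(n)])
    return np.clip(new, x_min, x_max)

# Illustrative call with assumed bounds and population size.
rng = np.random.default_rng(1)
prey0 = -10.0 + 20.0 * rng.random((20, 5))
elite0 = np.tile(prey0[0], (20, 1))
next_prey = mpa_step(prey0, elite0, it=0, max_it=100, x_min=-10.0, x_max=10.0, rng=rng)
```

Clipping to the bounds at the end mirrors the boundary handling commonly used with the FADs jump, which can otherwise move agents outside the feasible region.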

2. Background and related work
Since the traditional MPA was first presented in 2020, a number of MPA variants have been proposed for a wide range of optimization problems. These variants can be classified as binary, discrete, modified, hybridized, chaotic, quantum, and multi-objective versions. Abdel-Basset et al. [7] created an IMPA to solve multi-threshold medical image segmentation. This approach improves the particles that fail to find a workable solution after a certain number of iterations by adding a ranking-based diversity reduction (RDR) mechanism to the traditional MPA. To detect COVID-19, the model operates on chest X-ray images. The experimental findings demonstrate that this IMPA produces higher-quality segmented images than a wide range of other metaheuristics.
Shaheen et al. [8] suggest the IMPOA as a solution to the combined heat and power (CHP) problem. By improving predator techniques that take environmental and climatic variation into account, the IMPOA maintains a balance between the exploitation and exploration phases. Several constraints are taken into consideration when solving the CHP problem with the suggested technique. Test systems with 5, 48, 84, and 96 units were used to assess the IMPOA's efficacy, and the simulation findings demonstrate its superiority in terms of stability and convergence speed.
In Yu et al. [9], an adaptive MPA (AMPA) is presented to minimize load demands in Hoxtolgay by optimizing the configuration of a hybrid power system integrating batteries, PV, and a diesel generator. Three objective functions are minimized in this multi-objective optimization problem: the annualized cost, the CO2 emissions, and the loss-of-load probability of the hybrid power generation system. The AMPA is compared with the LOA, FOA, COA, and conventional MPA and validated on multiple benchmark functions. The simulation findings confirm the AMPA's best accuracy and fastest convergence.
Shaheen, El-Sehiemy, et al. [10] offer an enhanced MPA (EMPA) to address simultaneous distribution reconfiguration with distributed generation. The suggested method accounts for variations in temperature and environment. In the EMPA, the prey's new positions are updated using a random probability. The voltage stability index (VSI) is improved and power losses are minimized under different loading scenarios. The EMPA's performance was evaluated on IEEE 33-, 83-, and 137-bus distribution networks. Compared with other competing algorithms, the EMPA produces better results by efficiently minimizing the problem under consideration.
In Yang et al. [11], a multi-strategy marine predators algorithm with a joint-regularized semi-supervised extreme learning machine (MSMPA) is built as a semi-supervised classification model. The MSMPA incorporates supervised-information regularization and is based on Hessian regularization. It uses a number of techniques to enhance its performance: a chaotic opposition learning strategy is applied during initialization to create a high-quality initial population, and adaptive inertia weights and adaptive step-control factors are used in each of the three stages to improve local-optimum avoidance, convergence speed, and the exploration and exploitation capabilities. The simulation studies show that the MSMPA exhibits better classification accuracy and stability than other competitive classification approaches.
In Houssein, Hassaballah, et al. [12], a novel nonlinear step-factor control method is utilized to balance the exploration and exploitation stages, increase the convergence speed, and improve the MPA's global-search capacity. Convolutional neural networks (CNNs) are combined with the suggested IMPA for the classification of electrocardiograms (ECGs). The resulting model, dubbed IMPA-CNN, is tested on the European ST-T database, the St. Petersburg INCART database, and the MIT-BIH arrhythmia database. The experimental results indicate the superiority of the IMPA-CNN model over the MPA, GSA-NN, EO-CNN, HHO-CNN, SCA-CNN, PSO-CNN, and WOA-CNN models on multiple assessment metrics.
In Liu et al. [13], the MPA is combined with a novel predator encoding mechanism based on the internet protocol (IP) to create the IPMPA algorithm. A deep convolutional neural network (DCNN) is merged with the suggested IPMPA to create a new model, DCNN-IPMPA, which is used for COVID-19 diagnosis. The DCNN-IPMPA's performance is evaluated on the COVID-CT and SARS-CoV-2 datasets, and the simulation outcomes demonstrate that the DCNN-IPMPA model outperforms the other models.
Aydemir, S. B. [14] proposed a dynamic selection strategy, named FOCLMPA, that adapts during the evolutionary process. This dynamic approach assigns higher selection probabilities to parents with superior fitness, resulting in accelerated convergence and heightened exploration capabilities. The author also innovatively suggested a dynamic-dimension and greedy-by-dimension strategy, which evaluates solutions in each dimension, mitigating the risk of local optima and thereby enhancing the overall performance of the algorithm.
Du, P. & Guo, J. [15] ventured into hybridization with the EMPA, combining the Marine Predators Algorithm with opposition-based learning. This hybrid approach combines opposing initial numbers with a self-adaptive component strategy, effectively liberating the algorithm from local-optima traps and facilitating superior convergence. Its components include opposition learning to expand the search range, adaptive evolution for heightened global exploration, neighborhood search to diversify the population, and greedy selection to ensure solution quality.
Han, M. & Du, Z. [16] presented a modification to the MPA that adjusts the conversion parameters, transitioning from a linear decline to a nonlinear one and optimizing the balance between global and local exploration. They put forward a hybrid algorithm that seamlessly integrates the exploitation capability of crossover with individual solutions' personal best states, self-learning mechanisms, and global search mechanisms. This algorithm updates solutions using sine or cosine strategies, implementing a novel approach to the mutualism phase.
Kumar, S. et al. [17] demonstrated the remarkable prowess of the MPA in addressing multi-objective problems related to real-time task scheduling within multiprocessor systems, underscoring its relevance to computational optimization.
The existing body of literature includes a comprehensive array of variants, iterations, and applications of the MPA, and the field dedicated to optimizing the MPA is replete with innovative strategies aimed at elevating its performance. These contributions represent a diverse tapestry of ideas tailored to enhancing the algorithm's efficacy and efficiency. Nonetheless, like many other metaheuristics, the MPA exhibits disadvantages, consistent with the no-free-lunch theorem, which affirms that no algorithm is capable of efficiently handling all optimization problems. Finding solutions that minimize or maximize the objective function remains extremely difficult for most improved Marine Predators Algorithms because of numerous decision variables, dense local optima, and high computational effort. The MPA tends to converge to a local optimum in the search space and has made less progress in handling high-dimensional optimization problems than most of the aforementioned research. For this purpose, an Improved Marine Predators Algorithm (IMPA) with a self-adaptive weight and a dynamic social strategy has been developed. The motivations for this article are listed below.
1. A self-adaptive weight parameter-tuning scheme, driven by the proportion of the fitness value, is adopted in the IMPA;
2. A dynamic social mechanism is implemented to balance exploration and exploitation, preventing premature convergence to a local optimum;
3. The proposed algorithm is tested on 23 benchmark test functions and the IEEE CEC 2021 benchmark suite, and compared with state-of-the-art algorithms and different MPA variants.
4. The proposed algorithm is tested on four different real-world engineering problems and compared with algorithms from the literature.
The remainder of this paper is organized as follows. Section 3 describes the Improved Marine Predators Algorithm in detail. Section 4 gives the experimental results and analysis on the benchmark functions. Section 5 examines the IMPA on engineering problems. Section 6 concludes the paper and indicates future research.

A. Self-adaptive Weight
In this study, we introduce an innovative approach that incorporates an adaptive weight parameter into the optimization process. By harnessing the adaptive nature of this parameter, our approach enhances the algorithm's global exploration capability and its ability to escape local optima, while maintaining strong performance in local refinement once optimization conditions stabilize. Specifically, during the initial stages of optimization, the approach accelerates global exploration of the solution space; as the optimization process stabilizes, it shifts focus towards local solution development, achieving a balance between global and local exploration. To achieve this, we leverage the positional information of the destination point through the design of a self-adaptive weight position-update mechanism, denoted by Eqs. (9)-(12) and applied while Iteration < 1/3 × max(Iteration). It is difficult to tune the model and find a suitable weight parameter ω; we want to minimize the model's objective by changing the weight parameter. Bayesian optimization helps to find the most suitable value of the weighting parameter in the fewest steps. It uses an acquisition function [18], which directs the sampling to regions that are likely to be better than the current best observation; the acquisition function selects the best-performing parameters. The tuning is cross-validated with grid search (GridSearchCV) [19], which iterates through all permutations of the candidate parameters and returns cross-validated evaluation scores for every parameter combination.
Our enhanced approach mitigates premature convergence, improving the subpar search capability observed in the MPA. The weight parameter ω is a self-adaptive balancing factor that guides dynamic correction within the update formula. When the fitness value of the current position surpasses the global best fitness value, the inertia weight assumes a higher value, which enhances global exploration and expands the search space of feasible solutions. Conversely, the self-adaptive weight generates a smaller value, promoting faster convergence and facilitating local refinement. The self-adaptive weight thus plays a critical role in enabling agents within the MPA to autonomously select between global and local phases, enhancing accuracy and convergence speed while reducing the likelihood of falling into local optima. Our improved algorithm also introduces a random selection mechanism for self-adaptive weight crossover, ensuring a harmonious balance between predator and prey position updates. This synergy between global exploration and local development gradually narrows the search space, allowing the algorithm to hover near the target prey. This adjustment mitigates premature convergence, addressing the suboptimal search capability observed in the MPA and ultimately enhancing algorithm performance.
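Since Eqs. (9)-(12) are not reproduced above, the sketch below is only a hypothetical illustration of the behaviour described: a weight that grows when the current agent's fitness is worse than the global best (favouring global exploration) and shrinks as the agent approaches the best solution (favouring local refinement). The bounds w_min and w_max and the exponential shaping are illustrative assumptions, not the paper's formula.

```python
import math

def adaptive_weight(f_current, f_best, w_min=0.4, w_max=0.9):
    # Relative fitness gap: 0 when the agent matches the global best.
    gap = abs(f_current - f_best) / (abs(f_best) + 1e-12)
    # Large gap -> weight near w_max (global exploration);
    # small gap -> weight near w_min (local refinement).
    return w_min + (w_max - w_min) * (1.0 - math.exp(-gap))
```

The key property is monotonicity: agents far from the best solution receive a larger weight and therefore take larger, more exploratory steps.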

B. Dynamic Social Strategy
In this section, we introduce another critical element of the Improved Algorithm, which balances the potential neighborhood information of Elite Predators with that of other individuals and employs it to enhance the MPA.This provides more effective updates for the optimal individuals, reducing the probability of falling into local optima and attempting to overcome premature convergence.This significantly increases the probability of the population reaching the global optimum, thereby enhancing the development capacity.
This dynamic social strategy optimizes the search space and accelerates the search process in the IMPA. The modified search mechanism introduced in the IMPA applies while Iteration < 1/3 × max(Iteration). This comprehensive approach, combining dynamic weight adaptation with the social strategy, significantly enhances the optimization capabilities of the IMPA, improving its performance across a range of optimization problems. The opposition-based weight and self-adaptive strategy phase provides enhanced global and local search, which increases diversity appropriately and avoids skipping over true solutions. The flow of the search process of the proposed IMPA is presented in Fig. 2. A detailed analysis of the enhanced diversity of solutions and exploitation of the search space is given in the experimental section.
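The exact update equation of the dynamic social strategy is not shown above, so the following only illustrates the stated idea: blending the elite predator's guidance with neighbourhood information from a randomly chosen peer, so that updates do not track the elite alone. The blend coefficient c and the uniform random scaling are illustrative assumptions.

```python
import random

def social_update(position, elite, population, c=0.5, rng=random):
    # Pull each dimension toward a blend of the elite predator and a
    # randomly chosen peer, sharing neighbourhood information.
    peer = rng.choice(population)
    return [x + c * rng.random() * (e - x) + c * rng.random() * (p - x)
            for x, e, p in zip(position, elite, peer)]
```

When an agent already coincides with both the elite and its peer, the update leaves it in place, so the mechanism perturbs only agents that still disagree with their neighbourhood.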

A. Benchmark Problem Experiments
In the realm of optimization research, a fundamental aspect of assessing the effectiveness of newly proposed algorithms lies in the rigor of the evaluation process. In this context, our study undertakes a comprehensive examination of the IMPA across a diverse range of benchmark problems. These benchmark functions, 23 in total, fall into three distinct classes: unimodal, multimodal, and fixed-dimension problems [20,21].
For the evaluation of these classical problems, a population size of 50 solutions was employed, with a uniform termination criterion of 1000 iterations applied to all algorithms. To ensure the robustness and reliability of the findings, a thorough comparative analysis was conducted on two evaluation criteria, the mean and the standard deviation, computed over 20 independent runs on each benchmark function. Experimental computations were executed in MATLAB 2020a on a personal computer equipped with a 3.2 GHz CPU and 16 GB of RAM.
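The mean-and-standard-deviation criterion over independent runs can be computed as follows; the best-of-run values in the example are assumed data for illustration, not results from the paper.

```python
import statistics

def summarize(best_per_run):
    # Mean and (sample) standard deviation of the best objective value
    # across the independent runs -- the two criteria used in the tables.
    return statistics.mean(best_per_run), statistics.stdev(best_per_run)

# Illustrative best-of-run values for one benchmark function (assumed data).
mean_val, std_val = summarize([0.0, 1e-8, 2e-8, 0.0, 1e-8])
```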
The proposed method must show competitive performance against truly state-of-the-art methods. The results of our study reveal that the IMPA exhibits superior optimization performance compared with the MPA and several recently proposed state-of-the-art algorithms, including the Differential Evolution algorithm (DE) [22], the African Vultures Optimization Algorithm (AVOA) [23], the Artificial Gorilla Troops Optimizer (GTO) [24], Covariance Matrix Adaptation Evolution Strategies (CMAES) [25], and the Improved Sine Cosine Algorithm (IWSCA) [26]. Furthermore, this section includes a comparative analysis with improved versions of the MPA, specifically the Enhanced MPA (EMPA) [27] and the Gradient-Descent-based MPA (GDMPA) [28]. The detailed results of this comparative assessment are presented in Table 1.
In our comparative analysis, we examined the experimental results of the various algorithms on functions F1-F23. The IMPA consistently attained the theoretical global optimum on 21 of the 23 functions (F2-F4, F6-F23). Even on F1 and F5, where the IMPA did not reach the theoretical global optimum, it obtained competitive solutions; these two functions were the only instances in which competing algorithms outperformed the IMPA, and those algorithms performed worse on all the remaining functions.
The comparative performance of DE, GTO, CMAES, and IWSCA fell short of the exceptional performance of the IMPA across all 23 benchmark functions. Notably, the IMPA also surpassed the improved versions of the MPA, namely the EMPA and GDMPA. These findings underscore the IMPA's superior global exploration capability and its ability to evade local optima in comparison with the other algorithms. This study additionally presents the convergence curves of the nine optimization algorithms in Fig. 3. The results clearly indicate that the IMPA excels at reaching the theoretical global minimum of the unimodal functions (F2-F4), achieving this within approximately 400 to 500 iterations. For the multimodal functions (F9-F15), the IMPA also exhibits faster convergence than the other algorithms, reaching the theoretical global minimum within 100 iterations. Even on the fixed-dimension problems, the IMPA demonstrates superior convergence speed and accuracy relative to the other algorithms, highlighting its distinctive characteristics.
Conversely, the other algorithms consistently experience premature convergence across the benchmark functions, yielding solutions consistently inferior to those of the IMPA. Although AVOA and CMAES perform better than the IMPA on two functions (F8 and F16), they lag behind in all other scenarios. The convergence curves clearly demonstrate that the IMPA consistently reaches the theoretical global minimum on most benchmark functions, underscoring its superior optimization performance. The outstanding convergence speed of the IMPA significantly reduces the computational cost of the optimization problem.

Fig. 3. Convergence curves for the benchmark functions
To determine whether there are significant differences between the IMPA and the comparison algorithms, the Wilcoxon rank-sum test [29] is also introduced to analyze the results; the outcomes are presented in Table 2. The significance level is set to 0.05: when the p-value of the Wilcoxon rank-sum test is below 0.05, there is a significant difference between the algorithm proposed in this article and the comparison algorithm; otherwise, there is no significant difference. In the results table, '+' denotes that the IMPA is better than its counterpart, '-' means the other algorithm is better, and 'NAN' means there is no significant difference between the two algorithms. The results show that the p-value between the IMPA and the GTO exceeds 0.05 on some functions, indicating no significant difference between the two algorithms; in all other cases the p-values of the IMPA are below 0.05, indicating a significant difference between the IMPA and the other comparison algorithms. It is therefore evident that the IMPA performs better than the other algorithms and that the improvement strategies introduced in the IMPA are effective. In Fig. 4, we present a box-plot analysis of the global optimal solutions obtained by the comparison algorithms across the 20 independent runs on the various test functions. This visualization underscores that the IMPA consistently converges to the theoretical extreme values of the different test functions, exhibiting a stable distribution of optimal values and robust algorithm performance. Conversely, the results of the other algorithms deviate significantly from the theoretical extreme values and exhibit substantial volatility, indicating poor algorithmic robustness.
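The p < 0.05 decision rule can be reproduced with a self-contained sketch of the two-sided Wilcoxon rank-sum test using the normal approximation (for very small samples an exact table would normally be used; the sample data in the tests are illustrative):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    data = a + b
    order = sorted(range(len(data)), key=lambda i: data[i])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(order):
        j = i
        # Tied values share the average of their rank positions.
        while j + 1 < len(order) and data[order[j + 1]] == data[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])                     # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2              # mean of the rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

Two samples from the same distribution give a p-value near 1 (no significant difference, marked 'NAN' in the tables), while clearly separated samples give a p-value well below 0.05.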

B. IEEE CEC2021 experiments
To comprehensively assess and compare the performance of the IMPA on Large Scale Global Optimization (LSGO) challenges, we leveraged the IEEE CEC2021 benchmark suite in this research endeavor. This benchmark suite has been meticulously curated to pose intricate optimization problems, as elucidated in Table 3, illuminating the multifaceted complexities characterizing the optimization landscapes of these challenges [30,31]. For methodological consistency, we conducted 20 independent runs on each function, employing the same population sizes and iteration counts as in the previous experiments, which ensured a robust evaluation of the algorithms under scrutiny. The function values are summarized in Table 4, which provides an in-depth analysis of the performance of the IMPA in comparison with the other algorithms. The IMPA approached the minimum values more closely than any other algorithm except on CEC09, demonstrating substantial advancements in accuracy and convergence speed across the majority of the CEC2021 functions and a significant improvement in solving LSGO problems. The convergence curves in Fig. 5 provide a visual narrative of the algorithmic behavior. The IMPA emerges as a frontrunner, consistently yielding smaller function values and thereby expediting convergence: it achieved theoretical global minima within a mere handful of iterations on several CEC2021 functions and reached the theoretical global minimum within 200 iterations on others. Even on the composition functions, the IMPA demonstrates superior convergence speed and accuracy relative to the other algorithms, highlighting its distinctive characteristics.
Conversely, the other algorithms consistently experience premature convergence across the benchmark functions, yielding solutions consistently inferior to those of the IMPA. Although GTO performs better than the IMPA on CEC06-CEC10, and EMPA performs better on CEC03-CEC04, they lag behind in the other scenarios. The convergence curves clearly demonstrate that the IMPA consistently reaches the theoretical global minimum on most benchmark functions, underscoring its superior optimization performance. The outstanding convergence speed of the IMPA significantly reduces the computational cost of the optimization problem.

Fig. 5. Convergence curves for the CEC 2021 functions
The Wilcoxon rank-sum test is also applied to analyze the results on the IEEE CEC2021 suite. The test results are shown in Table 5: the p-values of the IMPA are below 0.05, indicating a significant difference between the IMPA and every comparison algorithm except the GTO. It is evident that the IMPA performs better than the other algorithms and that the improvement strategies introduced in the IMPA are effective. A box-plot analysis of the global optimal solutions is shown in Fig. 6. The findings elucidate that the IMPA consistently converges to the theoretical extreme values of these functions, maintaining stable distributions and exemplifying algorithmic robustness. In contrast, the results of the other algorithms deviate markedly from the expected extreme values, indicating instability and reduced robustness in their algorithmic compositions.

5. Engineering design optimization
Mechanical optimization problems are inherently entwined with mathematical modeling. The key to constructing an optimal-design mathematical model lies in identifying the design variables, objective functions, and constraints. To assess the efficacy of the newly proposed IMPA, we turn our focus to a selection of real-world industrial engineering design challenges, enriched with constraints that reflect practical engineering scenarios. We present these challenges below, as they form the basis of the experimentation phase.

A. Welded Beam Design
The objective of this particular engineering challenge is to minimize the weight of a welded beam [32]. The core task is the optimization of four primary parameters: the thickness of the weld (h), the length of the welded joint (l), the height of the bar (t), and the thickness of the bar (b), whose interplay is illustrated in Fig. 7. The results stemming from the application of the proposed IMPA are compared with those obtained through other established methods. The outcomes are detailed in Table 6, which provides the optimal parameter values and objective function values for all the algorithms under comparison. Remarkably, the IMPA stands out for its reliability, outperforming the other state-of-the-art methodologies. It yields the optimal variables h = 2.42253E-01, l = 6.15443E+00, t = 8.54323E+00, b = 2.64534E-01, accompanied by the best objective function value f(x) = 1.69524E+00.
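For reference, the widely used textbook form of the welded-beam objective (cf. [32]) can be written down directly. This is the standard formulation rather than the paper's exact code, and the constraint functions (shear stress, bending stress, buckling load, end deflection) are omitted for brevity:

```python
def welded_beam_cost(h, l, t, b):
    # Fabrication cost of the welded beam: weld material term plus
    # bar material term (standard coefficients from the literature).
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```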
For visual insight into the effectiveness of the IMPA, the objective function curve is presented in Fig. 8. This graphical representation aptly showcases the IMPA's ability to converge to the optimal solution within 15 iterations, and the parameter-value curves further substantiate the algorithm's efficacy. It is abundantly clear that the recommended IMPA delivers outstanding results, yielding optimal parameters and objective function values that contribute significantly to the problem's resolution.

B. Tension/Compression Spring Design
The primary objective of this problem is the reduction of the total weight of a specific spring design [33]. The essence of the tension/compression spring design problem lies in minimizing the overall weight of the designated spring by modulating three key variables, namely the wire diameter (d), the mean coil diameter (D), and the number of active coils (N), as visually portrayed in Fig. 9. The design vector of the tension/compression spring problem is x = (x1, x2, x3) = (d, D, N). The IMPA's performance is assessed in comparison with other established methods. Table 7 presents the results and optimal parameter values, encompassing the best outcomes of all comparative techniques, including the proposed IMPA. Remarkably, the IMPA showcases superior performance, furnishing a more robust solution with optimal variable values d = 5.177500E-02, D = 3.587919E-01, N = 1.116839E+01, and the best objective value f(x) = 3.63877E+00.
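The textbook objective for this problem is the volume of spring wire; a minimal sketch of that standard formulation follows (cf. [33]). The paper's scaling of f(x) may differ from this form, and the constraints on deflection, shear stress, and surge frequency are omitted:

```python
def spring_weight(d, D, N):
    # Weight ~ wire volume: (active coils + 2) * mean coil diameter * d^2.
    return (N + 2.0) * D * d ** 2
```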
The objective cost curves of the proposed IMPA are presented in Fig. 10. These graphical representations offer a compelling illustration of the IMPA's ability to converge to the optimal solution within the first 20 iterations, and the parameter-value curves further underscore the efficacy of the IMPA.

C. Pressure Vessel Design
The primary aim of this engineering challenge is to minimize the weight of a cylindrical pressure vessel [34]. The study examines four key design parameters: the shell thickness (x1), the head thickness (x2), the inner radius (x3), and the length of the cylindrical section excluding the head (x4), as illustrated in Fig. 11. The pressure vessel design follows the standard benchmark formulation. To substantiate the algorithm's efficacy, the objective cost curves are presented in Fig. 12. These visual representations demonstrate that the IMPA converged to the optimal solution before reaching the 30th iteration. The figures displaying the parameter values and the curve further reinforce the algorithm's effectiveness in addressing the pressure vessel design problem.
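The standard benchmark cost function for this problem is f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3. A minimal sketch under that assumed formulation (the thickness and volume constraints are omitted):

```python
def vessel_cost(x1, x2, x3, x4):
    """Total cost of the pressure vessel (material, forming, and welding).

    x1: shell thickness, x2: head thickness,
    x3: inner radius, x4: length of the cylindrical section.
    """
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4
            + 19.84 * x1**2 * x3)

# Evaluate at an arbitrary illustrative point:
print(vessel_cost(1.0, 1.0, 10.0, 100.0))
```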

D. Three Bar Design
This problem aims to minimize the weight of a three-bar truss [35]. The main design parameters are the cross-sectional areas of components 1 and 3 (x1 = x3) and of component 2 (x2), as shown in Fig. 13. The proposed IMPA is applied to obtain the optimal solution, and the results are compared with other established methods under the standard mathematical model of the three-bar truss design problem. Table 9 summarizes the results of the competing approaches in solving the three-bar truss design problem, including the proposed IMPA, and lists the optimal parameter values obtained by all algorithms. The proposed IMPA produces more reliable results than the other state-of-the-art methods: the optimal solution obtained by the IMPA is x1 = 7.886654E-01 and x2 = 4.082758E-01, with the optimal objective function value f(x) = 1.863895E+02.
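In the common formulation of this benchmark, the truss weight is f(x) = (2*sqrt(2) x1 + x2) * L, where L is the member length (L = 100 is an assumed value from the usual benchmark setup). A minimal sketch under that assumption (the stress constraints are omitted):

```python
import math

def truss_weight(x1, x2, L=100.0):
    """Weight of the symmetric three-bar truss; x1 is shared by bars 1 and 3."""
    return (2 * math.sqrt(2) * x1 + x2) * L

# Evaluate at an arbitrary illustrative point:
print(truss_weight(0.8, 0.4))
```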
The convergence curve of the objective function of the proposed IMPA is shown in Fig. 14. The IMPA finds the optimal solution quickly, within only 15 iterations, as the convergence diagram demonstrates. The parameter curves further confirm the effectiveness of the proposed IMPA: the convergence curve indicates that the algorithm yields excellent results and rapidly reaches an ideal solution, and the diversity of candidate solutions is preserved.
Fig. 14 Convergence curves for Three Bar Truss

6. Conclusion
In summary, the IMPA is a novel optimization algorithm for solving industrial engineering design problems. The IMPA extends the Marine Predators Algorithm with a dynamic social strategy and a self-adaptive weight, which allow it to find near-optimal solutions relatively quickly, avoid local optima, and improve overall performance. The dynamic social strategy and the self-adaptive weight together balance exploration and exploitation. The superiority of the IMPA on the CEC 2021 functions has been clearly demonstrated, and the results of this study show its effectiveness in finding high-quality solutions for industrial engineering design problems. Compared with other optimization methods and other MPA variants, including EMPA and GDMPA as well as DE, AVOA, GTO, CMAES, and IWSCA, the IMPA converges faster to the optimal solution and is less likely to fall into local minima. Box-plot analyses show that on most functions the IMPA converges with low standard deviation and the fewest outliers, and Wilcoxon rank-sum tests show that the IMPA differs significantly from every other algorithm. The IMPA has also proven more effective at finding high-quality solutions in a shorter time. To provide a promising solution for industrial engineering design problems and to highlight the potential of the IMPA as a useful tool for real-world problems, this study has implemented four highly representative engineering design problems: Welded Beam Design, Tension/Compression Spring Design, Pressure Vessel Design, and Three Bar Design. The experimental results prove its efficiency in solving these complex industrial engineering design problems.
Every method has advantages and disadvantages, and the IMPA is no exception. The improved algorithm may perform worse than GTO on some IEEE CEC 2021 functions. In further investigation, we may therefore combine several chaotic maps within the same algorithm and use adaptive strategies to decide which parameters are activated. Future research should also focus on extending the IMPA to multi-objective optimization. This would allow more extensive testing of the algorithm's capabilities and provide a better understanding of its potential in real-world applications.
the fitness value of the current iteration for the position variable, and Elite_i signifies the global optimal fitness value. Its function is to dynamically adjust the weight between the destination point and the current individual's position.
Regarding the parameters of the IMPA, N is the population size, D is the dimension, and Max_iter is the maximum number of iterations. The time complexity of the self-adaptive weight and the Dynamic Social Strategy is O(N·D). The time complexity of the IMPA is O(N·D·Max_iter), the same as that of the original MPA, indicating that the two improvement strategies do not increase the computational burden of the Marine Predators Algorithm.
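The O(N·D·Max_iter) bound follows directly from the loop structure: each iteration touches every dimension of every individual once per update pass. A schematic skeleton illustrating this (not the paper's exact update rules, which involve the Elite matrix and phase-dependent step sizes):

```python
def mpa_skeleton(N, D, max_iter):
    """Count elementary position-component updates, illustrating O(N*D*max_iter)."""
    ops = 0
    for _ in range(max_iter):        # Max_iter outer iterations
        for i in range(N):           # N predators/prey in the population
            for d in range(D):       # D decision variables each
                ops += 1             # one position-component update
    return ops

print(mpa_skeleton(30, 4, 100))  # 30 * 4 * 100 = 12000 updates
```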

Fig 13 Three Bar Truss Design
Table 9 Comparison results for Three Bar Truss Design

Pseudocode. The Improved Marine Predators Algorithm
locally while still maintaining the ability to escape from local optima. When the search area provided by the coefficient is very large, the updated solution may diverge from the current state to avoid falling into local optima. This dynamic social component is combined with the self-adaptive weight to jointly guide the current solution. By combining the directional influence of the best solution state and the best population state, the newly developed algorithm proceeds through the following steps:
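The joint guidance described above can be sketched as follows. This is an illustrative, hypothetical form only; the function name `guided_step`, the weight `w`, and the use of the population mean as the social attractor are assumptions for the sketch, not the paper's exact update equation:

```python
import random

def guided_step(x, elite, pop_mean, w):
    """Illustrative social-learning move (hypothetical form): pull the current
    solution toward the elite with self-adaptive weight w, plus a random social
    pull toward the population mean."""
    r1, r2 = random.random(), random.random()
    return [xi + w * r1 * (ei - xi) + r2 * (mi - xi)
            for xi, ei, mi in zip(x, elite, pop_mean)]
```

With a large attraction coefficient the step can overshoot the elite, which is the divergence behaviour used to escape local optima; with a small one the solution refines locally.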

Table 1 Comparison results for benchmark functions
* represents the optimal solution of the function