A modified particle swarm optimization algorithm for a vehicle scheduling problem with soft time windows

This article constructs a vehicle scheduling problem (VSP) with soft time windows for an ore company. The VSP is a typical NP-hard problem whose optimal solution cannot be obtained in polynomial time, and the basic particle swarm optimization (PSO) algorithm suffers from premature convergence and stagnation caused by falling into local optima. Thus, a modified particle swarm optimization (MPSO) algorithm is proposed in this paper to handle the characteristics of the optimization problem, namely its multiple constraints and NP-hardness. The algorithm introduces the "elite reverse" strategy into population initialization, adjusts the inertia weight with an improved adaptive strategy that combines a subtraction function and a "ladder" strategy, and adds a "jump out" mechanism to escape local optima. Together, these strategies allow the proposed algorithm to reach the global optimum accurately and rapidly. Finally, experiments on typical benchmark functions and a vehicle scheduling simulation verify the algorithm's performance. The benchmark results show that the search accuracy and performance of the MPSO algorithm are superior to those of the other algorithms considered: the basic PSO, the improved particle swarm optimization (IPSO), and the chaotic PSO (CPSO). Moreover, in the vehicle scheduling simulation, the MPSO algorithm improves the ore company's profit by 48.5-71.8% compared with the basic PSO.

in application areas such as engineering, finance, and computer science. For example, Mukhopadhyay and Banerjee 9 proposed a chaotic multi-swarm particle swarm optimization algorithm to optimize the parameters of an autonomous chaotic laser system. Jena et al. 10 combined the PSO algorithm with an improved Q-learning algorithm to solve load balancing problems in cloud computing environments. Mariangela 11 proposed an artificial neural network (ANN) together with a PSO algorithm to select the optimal process parameters for the micro electrical discharge machining process. Feng et al. 12 proposed an improved PSO algorithm to obtain the best proportional-integral-derivative (PID) controller coefficients by solving the trajectory control problem of an electro-hydraulic position servo system. Xing et al. 13 proposed an improved PSO algorithm to develop an energy consumption optimization model of tramway operation for reducing the traction energy consumption of the tramway. Du et al. 14 proposed an improved particle swarm optimization (PSO) algorithm to model an orderly charging strategy for new energy vehicles (EVs). Olmez et al. 15 proposed the particle swarm with visit table strategy (PS-VTS) meta-heuristic technique to improve the effectiveness of electroencephalogram (EEG)-based human emotion identification.
Similar to other swarm intelligence algorithms, the basic PSO algorithm, which is a non-globally convergent optimization algorithm, has poor diversity in the later stages and is easily prone to stagnation during the iteration process 16. In practical applications, PSO algorithms often suffer from premature convergence and stagnation by falling into local optima. Therefore, many researchers have proposed corresponding improvement strategies to enhance the optimization ability of the algorithm. For example, Yue et al. 17 proposed a modified PSO algorithm with a circular topology that can form stable niches and locate multiple potential optimal solutions when solving multimodal multi-objective optimization problems. Gao et al. 18 proposed a star-structured particle swarm optimization algorithm with a uniform calculation method for solving multimodal multi-objective problems; it achieves a closeness of over 95% to the real Pareto frontiers. Solomon et al. 19 designed a collaborative multi-swarm PSO algorithm for distributed computing environments; simulation results showed that the algorithm has high parallelism and achieved a maximum speedup of 37 times. Duan et al. 20 designed an improved particle swarm optimization (IPSO) algorithm with a nonlinear attenuation law and varying inertia weights to improve the coupling accuracy in laser-fiber coupling. Sun et al. 21 proposed an improved particle swarm optimization algorithm combining a non-Gaussian random distribution to optimize the design of wind turbine blades. Liu et al. 22 introduced the differential evolution (DE) algorithm into PSO and proposed a hybrid algorithm called PSO-DE. Peng et al. 23 proposed the symbiotic particle swarm optimization (SPSO) algorithm by adopting a multi-population strategy.
In recent years, some researchers have applied PSO algorithms in the VSP field. For instance, Rui et al. 24 constructed an appropriate mathematical model for the typical vehicle scheduling problem and proposed an improved immune particle swarm optimization with adaptive search (AS-ICPSO) strategy; experimental results show that the proposed strategy handles the vehicle scheduling problem excellently. Hannan et al. 25 proposed a modified particle swarm optimization (PSO) algorithm to solve a capacitated vehicle routing problem. Sun et al. 26 proposed a hybrid cooperative co-evolution algorithm (hccEA), in which a modified PSO is embedded into the cooperative co-evolution framework, to solve the vehicle scheduling problem with uncertain processing time. Xu et al. 27 proposed a hybrid genetic algorithm and particle swarm optimization (PSO) for the vehicle routing problem with time windows, which decodes the path by a particle real-number coding method and can thereby avoid falling into local optima.
In general, the basic PSO has been improved and developed by many researchers, and the improvement methods can be classified into four categories: adjusting the distribution of algorithm parameters, changing the updating formula of the particle swarm position, modifying the initialization process of the swarm, and combining PSO with other intelligent algorithms. To improve the overall performance of the particle swarm algorithm, a modified particle swarm optimization (MPSO) is proposed for solving the multi-constraint, NP-hard vehicle scheduling problem. The MPSO algorithm combines the following hybrid strategies: modifying the initialization process with the "elite reverse" strategy, changing the updating formula with an improved adaptive strategy, and adding the local optimal "jump out" mechanism. Compared with other PSO algorithms, MPSO can avoid the resource waste caused by population degradation and has good convergence accuracy and global search performance, especially when dealing with complex problems. This paper is organized as follows: "Formulation of VSP" presents the formulation of the vehicle scheduling optimization problem for a certain ore company. The detailed improvement strategies of MPSO are described in "Modified particle swarm optimization algorithm". In "Simulations and discussion", the benchmark and VSP simulations are given to verify the validity of the algorithm. "Conclusions" summarizes this paper.

Formulation of VSP

Definitions and declarations
1. Define the set J = {1, 2, ..., n} as the arrival order of vehicles, where n is the total number of vehicles;
2. Define the set R = {1, 2, ..., m} as the vehicle types, where m is the total number of vehicle types; for instance, 1 means heavy vehicle, 2 means medium vehicle, and 3 means light vehicle;
3. Define the variable y_ij = 1 if vehicle i and vehicle j are adjacent with vehicle i in front, and y_ij = 0 otherwise;
4. Define E_j, L_j as the earliest and latest arrival times of vehicle j, where j = 1, 2, ..., n;
5. Define x_j as the actual arrival time of vehicle j, where j = 1, 2, ..., n;
6. Define T_j as the expected arrival time of vehicle j, where j = 1, 2, ..., n;
7. Define s_ij^rk as the safety time interval between vehicle i and vehicle j, where vehicle i is of type r, vehicle j is of type k, r, k ∈ R, i, j ∈ J, and vehicle i is in front;
8. Define z_ij as indicating whether vehicle i of type r and vehicle j of type k are adjacent, where i, j ∈ J and r, k ∈ R;
9. Define γ_ik = 1 if vehicle i is of vehicle type k, where i ∈ J, k ∈ R, and γ_ik = 0 otherwise;
10. Define g_j, h_j as the unit-time costs of early and late arrival of vehicle j;
11. Define α_j = max(0, T_j − x_j) and β_j = max(0, x_j − T_j) as the earliness and tardiness of arrival of vehicle j.
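As a concrete illustration of the definitions above, the sketch below computes the earliness/tardiness penalty for a single vehicle. The per-vehicle penalty form g_j·α_j + h_j·β_j is an assumption consistent with definitions 10 and 11 and the penalty objective described later; all numeric values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    E: float  # earliest arrival time (definition 4)
    L: float  # latest arrival time (definition 4)
    T: float  # expected (target) arrival time (definition 6)
    g: float  # unit-time cost of early arrival (definition 10)
    h: float  # unit-time cost of late arrival (definition 10)

def deviation_penalty(v: Vehicle, x: float) -> float:
    """Earliness/tardiness penalty of an actual arrival time x.

    alpha_j = max(0, T_j - x_j) is the earliness and beta_j = max(0, x_j - T_j)
    the tardiness (definition 11); the penalty g_j*alpha_j + h_j*beta_j is an
    assumed but standard soft-time-window form.
    """
    assert v.E <= x <= v.L, "soft time window violated"
    alpha = max(0.0, v.T - x)
    beta = max(0.0, x - v.T)
    return v.g * alpha + v.h * beta

v = Vehicle(E=0.0, L=60.0, T=30.0, g=2.0, h=3.0)  # hypothetical values
print(deviation_penalty(v, 25.0))  # 5 units early -> 2 * 5 = 10.0
print(deviation_penalty(v, 34.0))  # 4 units late  -> 3 * 4 = 12.0
```

Arriving exactly at T_j incurs no penalty, so the scheduler only pays for deviations inside the window.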

Modeling of VSP
The vehicle scheduling problem (VSP) for the ore company can be described as follows: n vehicles need to enter the ore company for loading within a certain period, and each vehicle has a corresponding soft time window defined by its earliest and latest arrival times. Within this soft time window, the company must meet both the production and quality requirements of ore production and the total number of vehicles entering the site, and finally choose an optimal time for each vehicle as its arrival time. This paper mainly studies the VSP in the terminal area of the ore company, which means all vehicles enter the company by a pairing approach. To ensure the safety of the loading process, a certain safety separation must be maintained between vehicles. Because different types of vehicles may take different amounts of time to load ore and to enter and leave the yard, the safety interval between two adjacent vehicles also differs. In the vehicle scheduling model, most parameters are measured in terms of time, so we convert the safety separation between vehicles into a time interval to ensure the accuracy of the model calculation, as shown in Table 1.
To ensure the safety of the vehicles and meet the basic production requirements, a scheduling sequence should be searched and optimized. Finally, our goal is to assign an optimal arrival time to each vehicle such that the following objective function is minimized.
Here, Eq. (1) minimizes the total penalty of deviations from the target arrival times. Eq. (2) enforces the soft time window of each vehicle:

E_j ≤ x_j ≤ L_j, ∀j ∈ J; (2)

Eq. (3) links the decision variable x_j and the parameter T_j to the decision variables α_j and β_j:

x_j = β_j − α_j + T_j, ∀j ∈ J; (3)

Eq. (4) represents the safety-interval constraint of consecutively arriving vehicles. Given a pair of vehicles, Eq. (5) ensures that one arrives before the other:

y_ij + y_ji = 1, ∀i, j ∈ J, j ≠ i; (5)

Eq. (6) links the decision variables z_ij and γ_ir and ensures that vehicle i of type r and vehicle j of type k are adjacent; Eq. (7) ensures the uniqueness constraint of the arrival sequence.

Modified particle swarm optimization algorithm

The "elite reverse" learning strategy

In the basic PSO algorithm, the population is initialized by a purely random strategy. However, the optimization accuracy and convergence speed are often limited by this random strategy. In this paper, the "elite reverse" learning strategy 30 is introduced into the initialization to accelerate the algorithm's solution speed and maintain the population diversity. The specific operation is as follows. First, the initial population position matrix of the particle swarm is generated by the random strategy, giving the elite solution vector of each single particle. Second, formula (11) is applied to obtain the elite reverse solution:

x_ij(k*) = k_r (u_ij + l_ij) − x_ij(k), (11)

where u_ij and l_ij represent the maximum and minimum values in dimension j, x_ij(k*) represents the new particle position, and k_r is a random value in the interval (0, 1). Finally, the fitness values of the elite solutions and the elite reverse solutions are ranked, and the top n high-quality solutions are selected to form the new population position matrix.
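A minimal sketch of this initialization step follows. It assumes the standard elite opposition formula x'_ij = k_r(u_j + l_j) − x_ij with per-dimension bounds u_j, l_j taken from the random population; the sphere fitness and all parameter values are illustrative, not the paper's.

```python
import random

def elite_reverse_init(n, dim, lo, hi, fitness):
    """Population initialization with the "elite reverse" learning strategy.

    Assumes the usual elite opposition-based formula
    x'_ij = k_r * (u_j + l_j) - x_ij, where u_j / l_j are the per-dimension
    maximum/minimum over the random population and k_r ~ U(0, 1).
    """
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    u = [max(p[j] for p in pop) for j in range(dim)]
    l = [min(p[j] for p in pop) for j in range(dim)]
    k_r = random.random()
    # reverse solutions, clipped back into the search bounds
    reverse = [[min(max(k_r * (u[j] + l[j]) - p[j], lo), hi) for j in range(dim)]
               for p in pop]
    # rank the 2n candidates by fitness and keep the best n as the new swarm
    return sorted(pop + reverse, key=fitness)[:n]

random.seed(42)
sphere = lambda p: sum(x * x for x in p)
swarm = elite_reverse_init(20, 5, -10.0, 10.0, sphere)
print(len(swarm))  # 20
```

Because the best n of 2n candidates are kept, the initial swarm is never worse than a purely random one of the same size.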

The mutation strategy from Genetic algorithm
In the iteration process of the basic PSO algorithm, the overall diversity of the particle swarm is gradually reduced. To overcome this difficulty, the mutation strategy from the genetic algorithm 31 is introduced to increase the diversity of individual extreme values and reduce the probability of the particle swarm falling into a local optimum. The core of this strategy is to screen particles after each iteration; the selected particles are then updated by the position mutation formula (12), where x* denotes the position of the particle after mutation.
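The screen-and-mutate step can be sketched as follows. The paper's exact mutation formula (12) is not reproduced here, so a Gaussian perturbation is used as an illustrative stand-in, and the mutation probability pm and scale sigma are hypothetical parameters.

```python
import random

def mutate(position, lo, hi, pm=0.1, sigma=0.5):
    """Illustrative position mutation (a stand-in for the paper's Eq. (12)).

    Each coordinate mutates with probability pm by adding Gaussian noise,
    and the result is clipped to the search bounds [lo, hi].
    """
    x_star = []
    for x in position:
        if random.random() < pm:
            x = x + random.gauss(0.0, sigma)
        x_star.append(min(max(x, lo), hi))
    return x_star

random.seed(0)
print(mutate([1.0, -2.0, 3.0], -10.0, 10.0))
```

Only a fraction of coordinates change per call, so the swarm gains diversity without destroying good solutions.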

The adaptive weighting strategy
The inertia weight w is directly related to the convergence speed. A larger inertia weight w gives the particle a stronger global search ability, and a smaller w gives the particle a better local search ability 32. To improve the flexibility of particle flight speed changes, an improved strategy combining a decreasing function and the "ladder" method is proposed to adjust the weight value. In the traditional "ladder" method, a constant value is chosen for each "ladder", which loses a certain degree of flexibility. This paper proposes a "three-level ladder" adaptive strategy, in which the subtraction function method is applied within each "ladder" to realize adaptive changes in each stage. The switching formula (13) is defined over the inertia weight ranges [w_si, w_ei], i = 1, 2, 3; f(g) is the fitness function value corresponding to the global optimal solution; Fit1 and Fit2 are user-set threshold values. They are not fixed but are determined by comprehensively balancing the complexity of the optimized problem and the required optimization accuracy, and they need to be adjusted according to the condition of the objective function in different application contexts. They are selected to balance local and global exploration and thus ensure the optimal solution can be found within a small number of iterations 33. Thus, in the early stage, the particle swarm optimization algorithm should have a larger w value, so that the particles have a strong global optimization ability. The value of w gradually decreases in the later stage of the algorithm, so that the algorithm has a better local search ability and improved solution accuracy.
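The three-level ladder idea can be sketched as below. The thresholds Fit1, Fit2 and the ladder ranges are illustrative values rather than the paper's; the key point is that the global-best fitness selects a ladder, and within the ladder w decreases linearly with the iteration count (the "subtraction function").

```python
def adaptive_weight(f_gbest, t, t_max,
                    fit1=1e2, fit2=1e-2,
                    ladders=((0.9, 0.7), (0.7, 0.5), (0.5, 0.4))):
    """Three-level "ladder" adaptive inertia weight (a sketch of Eq. (13)).

    f_gbest selects a ladder [w_s, w_e] via thresholds Fit1 and Fit2;
    within the ladder, w decreases linearly from w_s to w_e over t_max
    iterations. All numeric values here are illustrative assumptions.
    """
    if f_gbest > fit1:      # far from the optimum: large w, global search
        w_s, w_e = ladders[0]
    elif f_gbest > fit2:    # intermediate stage
        w_s, w_e = ladders[1]
    else:                   # near the optimum: small w, local search
        w_s, w_e = ladders[2]
    return w_s - (w_s - w_e) * t / t_max

print(adaptive_weight(1e4, 0, 100))     # 0.9: early stage, strong global search
print(adaptive_weight(1e-3, 100, 100))  # 0.4: late stage, fine local search
```

Compared with a single decreasing schedule, the fitness-dependent switch lets a badly converged swarm keep a large w longer.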

The local optimal "jump out" mechanism
To avoid the PSO algorithm easily falling into a local optimum during the search process, the "jump out" mechanism is added. The criterion for falling into a local optimum is: when the slope of the global optimal fitness curve is less than the specified value ε for m consecutive iterations, the swarm is regarded as having fallen into a local optimum. The basic idea of the "jump out" mechanism is to move close to the global worst position and away from the global optimal position. The specific calculation formula is given in Eq. (14), where bad represents the information of the global worst position.
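The stagnation criterion and the jump-out move can be sketched as follows. The criterion follows the text directly (slope below ε for m consecutive iterations); the jump-out step is an illustrative stand-in for Eq. (14), with a hypothetical step coefficient c, since the exact formula is not reproduced here.

```python
def is_stuck(history, m=10, eps=1e-6):
    """Local-optimum criterion: the global-best fitness curve has a slope
    smaller than eps for m consecutive iterations."""
    if len(history) < m + 1:
        return False
    recent = history[-(m + 1):]
    return all(abs(recent[i + 1] - recent[i]) < eps for i in range(m))

def jump_out(x, gbest, gworst, c=0.5):
    """Illustrative "jump out" move (a stand-in for the paper's Eq. (14)):
    step toward the global worst position bad and away from the global best,
    with a hypothetical step-size coefficient c."""
    return [xi + c * (bi - xi) - c * (gi - xi)
            for xi, bi, gi in zip(x, gworst, gbest)]

print(is_stuck([5.0] * 12))            # True: flat fitness curve for > m iterations
print(jump_out([0.0], [1.0], [-1.0]))  # [-1.0]: pushed toward the worst point
```

Moving toward the worst region sounds counter-intuitive, but it forces the particle far from the basin it is trapped in, after which the normal velocity update pulls it back toward promising areas.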

The details of the algorithm process
The pseudo-code of the MPSO algorithm is demonstrated in Table 2.
The overall flowchart of the MPSO algorithm is shown in Fig. 1. The specific steps and execution process are as follows:

(1) Initialize the basic parameters: particle swarm size, maximum number of iterations, inertia weight value, learning factors, particle swarm dimension, etc.;
(2) Generate the initialized particle swarm positions according to the "elite reverse" learning strategy;
(3) Calculate the fitness value of each particle according to the fitness function and determine whether the termination conditions are met; if yes, go to step (8), otherwise go to step (4);
(4) Update the parameters p_ij, g_j, and bad, and determine whether the swarm has fallen into a local optimum according to the criterion; if yes, go to step (7), otherwise go to step (5);
(5) Screen particles for the mutation operation and calculate the inertia weight value by Eq. (13);
(6) Update the velocity and position of the particles according to Eqs. (9) and (10) and jump to step (3);
(7) Execute the position "jump out" strategy by Eq. (14) and jump to step (6);
(8) End.
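The steps above can be condensed into a compact, runnable skeleton. Each sub-strategy here (random-kick jump-out, linearly decreasing weight, classic velocity/position updates) is a simplified stand-in for the paper's Eqs. (9)-(14), and all parameter values are illustrative.

```python
import random

def mpso(fitness, n, dim, lo, hi, max_iter, eps=1e-6, m=10):
    """Skeleton of the MPSO main loop following steps (1)-(8) above;
    the sub-strategies are simplified stand-ins, not the paper's formulas."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]  # step (2), simplified
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pop]
    gbest = min(pop, key=fitness)[:]
    history = []
    for t in range(max_iter):
        w = 0.9 - 0.5 * t / max_iter                 # step (5): decreasing weight (simplified)
        stuck = len(history) > m and max(history[-m:]) - min(history[-m:]) < eps  # step (4)
        for i in range(n):
            if stuck:                                # step (7): random kick as a jump-out stand-in
                pop[i] = [min(max(x + random.uniform(-1, 1) * 0.1 * (hi - lo), lo), hi)
                          for x in pop[i]]
            for j in range(dim):                     # step (6): velocity and position updates
                vel[i][j] = (w * vel[i][j]
                             + 2.0 * random.random() * (pbest[i][j] - pop[i][j])
                             + 2.0 * random.random() * (gbest[j] - pop[i][j]))
                pop[i][j] = min(max(pop[i][j] + vel[i][j], lo), hi)
            if fitness(pop[i]) < fitness(pbest[i]):  # update personal bests
                pbest[i] = pop[i][:]
        gbest = min(pbest, key=fitness)[:]           # update global best
        history.append(fitness(gbest))               # step (3): terminate after max_iter
    return gbest, fitness(gbest)

random.seed(7)
best, val = mpso(lambda p: sum(x * x for x in p), n=20, dim=3, lo=-5.0, hi=5.0, max_iter=200)
print(0.0 <= val <= 75.0)  # True: val is the best sphere fitness found inside the box
```

Because pbest and gbest only ever improve, the recorded global-best curve is non-increasing, which is what the stagnation criterion in step (4) inspects.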

Time complexity analysis
The time complexity of an algorithm is an important aspect to consider 34,35. The computational complexity of the PSO algorithm is difficult to calculate precisely; it is mainly determined by the swarm size, the maximum number of iterations, and the complexity of the problem to be solved 36.

According to Algorithm 1 (Table 2), the proposed MPSO algorithm can be divided into two main phases. First, the "elite reverse" learning strategy is used for the initialization of particle positions and velocities. The elites are first determined (Line 2); the time complexity of this step is of order O(D^2). Second, the particle positions and velocities are updated and the fitness of the solutions is evaluated. The main loop of MPSO is executed for G iterations. Here, D dimensions are mutated per particle (Line 10), and calculating the mutation probability per particle is of order O(1), so the time complexity of the mutation operator is O(N * D). Updating p_ij and g_j (Line 12) is of order O(D), and updating the parameter w (Line 13) is of order O(1). The velocity and position vectors of the particles are updated in Line 15: based on Eq. (9), the velocity update over all particles requires time of order O(N * D), and based on Eq. (10), the position update per particle is of order O(D); thus, updating the velocity and position vectors of all particles is of order O(N * D). The dominant steps in each iteration are the mutation operator and the velocity update of the swarm, both with time complexity O(N * D). The time complexity of the other steps is relatively small and can be ignored. Therefore, the total time complexity of the main loop of MPSO is of order O(N * G * D).

Simulations and discussion
To verify the effectiveness of the proposed MPSO algorithm, a benchmark function verification experiment and an ore vehicle scheduling optimization simulation are designed. Here, MPSO is compared and analyzed against other particle swarm optimization algorithms (PSO 37, IPSO 38, CPSO 39). All simulations are implemented on a computer with an Intel i5-5800H CPU, 1.80 GHz, and 16 GB RAM. The codes are programmed in MATLAB R2018b.
In this experiment, the maximum, median, minimum, and mean values and the standard deviation (SD) are used as performance indicators to judge the optimization ability of the algorithms. The simulation results are shown in Table 5 and Figs. 2-13. The best values in Table 5 are shown in bold. The standard deviation reflects the stability of the algorithm, and the MPSO algorithm has obvious advantages on most of the functions F1-F12. For the UM benchmark functions (F1-F7), the results of MPSO are better than those of the other selected algorithms. For the MM functions (F8-F12), as shown in Table 5, high-quality solutions can also be obtained by the MPSO algorithm.
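The five indicators can be computed directly from the per-run best fitness values, as in the sketch below; the run values are hypothetical.

```python
import statistics

def performance_indicators(results):
    """The five indicators used in the benchmark comparison: maximum, median,
    minimum, mean and standard deviation over repeated runs."""
    return {
        "max": max(results),
        "median": statistics.median(results),
        "min": min(results),
        "mean": statistics.mean(results),
        "sd": statistics.stdev(results),  # sample standard deviation
    }

runs = [3.2, 1.1, 2.4, 0.9, 1.8]  # hypothetical best fitness values from 5 runs
ind = performance_indicators(runs)
print(ind["min"], ind["median"], ind["max"])  # 0.9 1.8 3.2
```

The SD entry is what the text uses as the stability indicator: a smaller SD over repeated runs means a more stable algorithm.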
The average fitness values of the optimal solutions of each algorithm are plotted to compare the performance of each algorithm more clearly and intuitively, as shown in Figs. 2-13. Taking function F2 as an example, the changes in the fitness value log(J) of the four algorithms in Fig. 3 are analyzed in detail. The CPSO algorithm declines rapidly in the first 200 generations but cannot jump out after falling into a local optimum, resulting in the largest log(J) value and the worst convergence accuracy among the four algorithms. The PSO algorithm reaches its optimal solution around generation 4000, and the particles of the IPSO algorithm reach their optimal solution around generation 2500. The particles of the MPSO algorithm fall into a local optimum around generations 700 to 1200, but the "jump out" strategy increases the possibility of searching in other directions toward the optimal solution, and the optimal solution is finally obtained around generation 1300. The results show that the search speed and search accuracy of the MPSO algorithm are improved and better than those of the other three algorithms.
Based on Figs. 2-8, the convergence speed of the MPSO algorithm is significantly faster than that of the other algorithms when the unimodal test functions are solved. Besides, the log(J) value of the MPSO algorithm is the lowest, which indicates that its optimization accuracy is higher than that of the other algorithms. Figs. 9-13 show that when the MPSO algorithm solves the multimodal test functions, it quickly converges to a small optimal range after about 300 iterations; its convergence speed is much faster than that of the other algorithms, and its optimal solution value is significantly lower. Thus, the MPSO algorithm has a fast convergence speed and a high global search capability for multimodal functions. Overall, the MPSO algorithm converges better than the other three PSO algorithms on the different test functions.
To evaluate the performance of different PSO algorithms, statistical tests should be conducted 43. In general, the results of an optimization algorithm cannot be assumed to be normally distributed. Due to the stochastic nature of meta-heuristics, it is not enough to compare algorithms based only on the mean and standard deviation values 44,45. When the optimization results cannot be assumed to obey the normal distribution, a non-parametric test is necessary to judge whether the results of the algorithms differ from each other in a statistically significant way. Thus, the Wilcoxon non-parametric statistical test 46 is used to obtain the p-value, which verifies whether two sets of solutions differ to a statistically significant extent. Generally, p ≤ 0.05 is considered to indicate a statistically significant difference.
The p-values of Wilcoxon's rank-sum test comparing MPSO with the other PSO algorithms are listed in Table 6 for all benchmark functions. The p-values in Table 6 further confirm the superiority of MPSO, because all of them are much smaller than 0.05. Besides, Fig. 14 shows a set of box plots comparing the performance of all algorithms on the benchmark functions F1 to F12. From Table 6 and Fig. 14, it is obvious that MPSO has superior performance in solving both unimodal and multimodal functions.
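For illustration, the rank-sum p-value can be computed as below using the normal approximation; this is a minimal pure-Python sketch (a library routine such as scipy.stats.ranksums is preferable in practice, especially for small samples), and the two result samples are hypothetical.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):                     # average ranks over ties
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1         # ranks are 1-based
        i = j + 1
    r1 = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2                # mean of the rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value

mpso = [0.010, 0.020, 0.015, 0.012, 0.018, 0.011, 0.013, 0.016]  # hypothetical
pso  = [0.200, 0.250, 0.180, 0.220, 0.300, 0.210, 0.190, 0.260]  # hypothetical
print(rank_sum_p(mpso, pso) < 0.05)  # True: statistically significant difference
```

Because the test uses only ranks, it makes no normality assumption about the underlying fitness distributions, which is exactly why it suits meta-heuristic comparisons.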

Vehicle scheduling optimization simulation
A total of 10 ore vehicles (vehicles 1 and 2 are light vehicles, and vehicles 3-10 are medium-sized vehicles) are considered in this experiment. The earliest and latest arrival times of the vehicles are shown in Table 7, and the interval constraints on the arrival times of adjacent vehicles are given in Table 1. The MPSO algorithm is compared not only with other PSO algorithms (PSO 37, IPSO 38, CPSO 39) but also with four state-of-the-art meta-heuristic methods (WOA 47, IA 48, DE 49, ABC 50) on the vehicle scheduling problem.

It can be seen from Fig. 15 and Table 8 that the simulation time of the MPSO algorithm is significantly better than that of the PSO, CPSO, and IPSO algorithms. The maximum, minimum, and average values over the 30 runs of the MPSO algorithm are better than those of the other three algorithms, and the standard deviation of its computation time is only smaller than that of the CPSO algorithm. The maximum simulation time of MPSO is 10.94 s, an improvement of 42.9% in computation time compared with the basic PSO; the minimum simulation time of MPSO is 8.82 s, an improvement of 37.4% compared with the basic PSO. Thus, MPSO improves the computation time by 37.4-42.9% compared with the basic PSO.

From Fig. 16 and Table 8, the final objective function value (J) of the MPSO algorithm is significantly lower than that of the PSO, CPSO, and IPSO algorithms, and the standard deviation (SD) of its final objective function value shows the best stability. The maximum objective function value (J) of MPSO is 2020.47, which improves the ore company's profit by 48.5% compared with the basic PSO; the minimum value of J by MPSO is 702.03, raising the ore company's profit by 71.8% compared with the basic PSO. In summary, MPSO can improve the ore company's profit by 48.5-71.8% compared with the basic PSO. Thus, the MPSO algorithm can obtain the best scheduling results, save resource consumption for the enterprise, and effectively reduce the workload of vehicle scheduling.
In Fig. 17, the MPSO algorithm converges well in the early stage, and its distribution shows that the MPSO algorithm can quickly escape local optima. This also verifies that the improvement strategies proposed in the MPSO algorithm effectively avoid "premature" convergence.
In general, MPSO outperforms the other PSO algorithms on the VSP optimization problem. The reason for this behavior is likely that MPSO is able to choose the most suitable strategy for different search stages. The adaptive weighting strategy with dynamic weights improves the global search speed. Besides, the criterion for detecting a local optimum and the "jump out" strategy interact to overcome the "premature" convergence problem.

Comparison of MPSO and other meta-heuristic algorithms
In order to determine the standing of the proposed MPSO method, it is compared with four state-of-the-art meta-heuristic methods (WOA 47, IA 48, DE 49, ABC 50) on the vehicle scheduling problem. The parameters of these algorithms are listed in Table 9. Each algorithm was run 30 times independently to reduce statistical errors.
The comparison of simulation time and final cost value between MPSO and the other meta-heuristic methods is shown in Table 10, in which the mean, maximum, minimum, and standard deviation of the simulation results are recorded. The best results are shown in bold. As one can see in Table 10, the proposed MPSO strategy obtains the lowest final cost value. The simulation time of WOA is the lowest; by contrast, MPSO spends some computational cost on evaluating the local-optimum criterion and executing the "jump out" strategy. The convergence graph of each algorithm is shown in Fig. 18. In Fig. 18, the MPSO algorithm is more successful than all of the other optimization approaches, and the algorithm determines the global optimal solution after approximately 30 generations.

Conclusions
In this paper, the MPSO algorithm was proposed to solve a vehicle scheduling optimization problem with soft time window constraints for a certain ore company. The hybrid strategy scheme, which combines the "elite reverse" strategy, an improved adaptive strategy, and the local optimal "jump out" mechanism, was introduced into the MPSO algorithm. The validity and feasibility of MPSO were verified by 12 classical benchmark functions and an ore vehicle scheduling optimization simulation. The following conclusions are drawn.
• The benchmark results indicate that the MPSO algorithm has superior performance compared with the other PSO algorithms (PSO, IPSO, CPSO).
• The MPSO algorithm can improve the ore company's profit by 48.5-71.8% compared with the basic PSO. It can obtain the best scheduling results, save resource consumption for the enterprise, and effectively reduce the workload of vehicle scheduling.
Consequently, this paper verifies the feasibility of the MPSO algorithm and its success in solving a vehicle scheduling optimization problem for a certain ore company, and provides a theoretical basis for subsequent research. Next, the following three issues will be studied. Firstly, task and load balancing should be considered during the modeling process. Secondly, the performance of the proposed MPSO strategy can be improved by introducing other intelligent algorithms, such as the differential evolution algorithm. Finally, the proposed algorithm will be applied in a real ore company environment.

Algorithm 1: The pseudo-code of the MPSO algorithm.

Figure 1. The overall flowchart of the MPSO algorithm.

Figure 2. Comparison of performances of function F1 by four PSO algorithms.
Figure 3. Comparison of performances of function F2 by four PSO algorithms.

Figure 4. Comparison of performances of function F3 by four PSO algorithms.

Figure 5. Comparison of performances of function F4 by four PSO algorithms.

Figure 6. Comparison of performances of function F5 by four PSO algorithms.
Figure 7. Comparison of performances of function F6 by four PSO algorithms.

Figure 8. Comparison of performances of function F7 by four PSO algorithms.
Figure 9. Comparison of performances of function F8 by four PSO algorithms.

Figure 13. Comparison of performances of function F12 by four PSO algorithms.

Figure 14. Boxplot comparison of the cost function by four different PSO algorithms.

Figure 18. Comparison of MPSO with other optimization algorithms.

Table 1. Time interval matrix between different types of vehicles.

Table 2. The pseudo-code of the MPSO algorithm.

Table 3. Description of unimodal and multimodal benchmark functions.

Table 4. Parameters of other PSO algorithms.

Table 5. Results of benchmark functions.

Table 6. Results of the p-value for the Wilcoxon rank-sum test on benchmark functions.

Table 7. The earliest and latest arrival times of each vehicle.

Table 8. The calculation results of each algorithm for the vehicle scheduling simulation. The best values are shown in bold.

Table 9. Parameters of other optimization algorithms.

Table 10. The calculation results of each algorithm for the vehicle scheduling simulation. The best values are shown in bold.

However, the final cost function value of WOA is the highest, which means it trades accuracy for speed and is therefore less reliable. Table 10 shows that MPSO obtains the lowest cost function value, and its simulation time is also lower than that of the other three meta-heuristic methods. By comprehensive comparison, the solution of the MPSO algorithm gives the best value.