Introduction

In project management, optimization is a highly useful tool for satisfying desired objectives under specific constraints, and it can increase the productivity of the different components of a project. The importance of optimization in construction projects has been emphasized for decades, as it is used to find the ideal plan and schedule for completing a project. Cost optimization, time optimization, and the Pareto front are three common forms of time–cost trade-off problems. The objective of the cost optimization problem is to minimize the total cost under specific conditions, including the project implementation time and penalty costs for delays. The time optimization problem, meanwhile, aims to choose alternative solutions that shorten the project implementation time while ensuring that the project cost does not exceed the revenue from the early operation of the project. The Pareto front is a multi-objective optimization problem that simultaneously optimizes both project cost and time1.

Mirjalili et al.2 proposed the multi-verse optimizer (MVO) algorithm, inspired by the Big Bang theory, to satisfy the need for solving single- and multi-objective optimization problems. For assessment of its results, MVO was compared with other metaheuristic algorithms, such as particle swarm optimization (PSO), the genetic algorithm (GA), and ant colony optimization (ACO). The results show that the MVO algorithm provides results that are competitive with, and in most tested optimization problems superior to, those of the other algorithms. However, MVO has difficulty balancing its exploration and exploitation of the search area and exploits the search area poorly during fast convergence, which can trap it in local optima3.

The sine cosine algorithm (SCA)4 was developed with a focus on the exploration and exploitation of the search space during optimization. The results of the test problems show that SCA can explore different regions of the search space, avoid local optima, converge towards the global optimum, and effectively exploit the promising regions of the search space during optimization. In addition, the study shows that SCA converges significantly faster than PSO, GA, and ACO. SCA has been utilized to address optimization challenges in diverse domains since 20165. Like MVO, SCA has limitations: its search area exploitation mechanism is not clearly expressed, so it is prone to premature convergence6, which results in entrapment in local optima.

The complementary advantages and disadvantages of these two algorithms motivated us to develop a hybrid of MVO and SCA that explores and exploits the search area based on the strengths of each algorithm and achieves a balance between the two mechanisms. The hDMVO algorithm preserves MVO's white and black hole mechanisms, which ensure good exploration of the search area. Concurrently, good exploitation of the search area is guaranteed by SCA, in which the value closest to the global optimum found so far is stored as the destination and is never lost during optimization. hDMVO therefore achieves a reasonable balance between the exploration and exploitation phases, which helps the algorithm reach the global optimum and makes it an appropriate metaheuristic method for solving the DTCTP.

The resolution of large-scale DTCTPs is a crucial aspect in the management of any construction project. Despite the availability of several existing methods, they are not fully equipped to solve large-scale DTCTPs. Therefore, a hybrid multi-verse optimizer model (hDMVO) was developed by combining the MVO and the SCA to provide efficient solutions for medium- and large-scale DTCTPs and other optimization problems that can be applied in actual construction projects. This model also significantly enhances the decision-making ability of decision-makers.

The rest of this paper is organized as follows. Section "Literature review" summarizes the literature on the time–cost trade-off problems. Section "Model development" outlines the development of our hybrid multi-verse optimizer model. Section "Computational experiments" presents the results from the validation and application of our model. Finally, Sections "Conclusion" and "Recommendations for future work" conclude the study and outline future research directions.

Literature review

Stochastic optimization methods are widely used in many fields of study7,8 and have given rise to meta-heuristic techniques. Some popular meta-heuristic methods are inspired by animals in nature. For example, the ant lion optimization (ALO) algorithm is modeled after the hunting behavior of antlions in nature. The dragonfly algorithm (DA) is based on the static and dynamic swarming behaviors observed in dragonflies9. The African Wild Dog Optimization (AWDO) algorithm originates from the hunting mechanism of African wild dogs in nature10. Meanwhile, the genetic algorithm (GA) is inspired by evolutionary principles such as heredity, mutation, natural selection, and crossover11.

The development of new algorithms and the improvement of current ones have recently attracted immense interest from researchers, which is related to the No Free Lunch (NFL) theorem12. Evidently, the NFL theorem has led researchers to improve and adapt current algorithms for different problems or to propose new algorithms that provide competitive results against current ones. A significant number of hybrid metaheuristic algorithms have been developed, including the ant colony system-based decision support system (ACS-SGPU)13, the dragonfly algorithm–particle swarm optimization model14, the quantum-based sine cosine algorithm15, the improved sine–cosine algorithm based on orthogonal parallel information16, and the hybrid sine cosine algorithm with a multi-orthogonal search strategy17.

The time–cost trade-off problem has been extended to a discrete version incorporating various realistic assumptions and has been solved by exact, heuristic, and metaheuristic methods. PSO and GA are the metaheuristic methods most commonly used for the DTCTP. Bettemir18 found that, among eight metaheuristic methods including a sole genetic algorithm, four hybrid genetic algorithms, PSO, ant colony optimization, and electromagnetic scatter search, PSO was one of the leading algorithms for large-scale cost optimization, together with the hybrid genetic algorithm with quantum annealing. Zhang and Xing19 proposed an algorithm combining PSO and fuzzy sets theory to solve the fuzzy time–cost–quality trade-off problem. Aminbakhsh and Sonmez20 developed a discrete particle swarm optimization method to solve the large-size time–cost trade-off problem. Aminbakhsh and Sonmez21 used a Pareto front particle swarm optimizer (PFPSO) to simultaneously optimize the time and cost of large-scale projects. Sonmez and Bettemir22 presented a hybrid strategy based on GAs, simulated annealing, and quantum simulated annealing techniques for the cost optimization problem. Zhang et al.23 proposed a GA for the DTCTP in repetitive projects. Naseri and Ghasbeh24 used a GA for time–cost trade-off analysis to compensate for project delays. Network analysis algorithms have also been used to solve the DTCTP25. Son and Khoi26 presented a slime mold algorithm model to solve the time–cost–quality trade-off problem.

Despite their wide application in solving the DTCTP, metaheuristic techniques have several limitations. Therefore, hybrid metaheuristic methods are being developed and widely used for the DTCTP. An adaptive-hybrid genetic algorithm was proposed by Zheng27 for time–cost–quality trade-off problems. Said and Haouari28 developed a model wherein a simulation–optimization strategy and a mixed-integer programming formulation were used to solve the DTCTP. Tran et al.29 presented an opposition multiple objective symbiotic organisms search (OMOSOS) model for the time, cost, quality, and work continuity trade-off in repetitive projects. Eirgash et al.30 proposed a multi-objective teaching–learning-based optimization algorithm integrated with a nondominated sorting concept (NDS–TLBO), which was successfully applied to optimize medium- to large-scale projects. A hybrid GALP algorithm combining GA and linear programming (LP) was proposed by Alavipour and Arditi31 for time–cost trade-off analysis. Albayrak32 developed an algorithm combining PSO and GA to solve the time–cost trade-off problem for resource-constrained construction projects. A population-based metaheuristic, the nondominated sorting genetic algorithm III (NSGA III), was developed by Sharma and Trivedi33 to ensure quality and safety in time–cost trade-off optimization. Li et al.34 presented an epsilon-constraint method-based genetic algorithm for the uncertain multimode time–cost–robustness trade-off problem.

This paper presents a hybrid multi-verse optimizer (hDMVO) model based on MVO and SCA, which can provide high-quality solutions for large-scale discrete time–cost trade-off optimization problems.

Model development

Discrete time–cost trade-off problem

The common objective of the discrete time–cost trade-off problem (DTCTP) is to minimize the total of the direct and indirect costs, which can be formulated as follows35:

$$C = min\mathop \sum \limits_{j = 1}^{S} \mathop \sum \limits_{k = 1}^{m\left( j \right)} \left( {dc_{jk} x_{jk} } \right) + D \times ic$$
(1)

subject to:

$$\mathop \sum \limits_{k = 1}^{m\left( j \right)} x_{jk} = 1, \forall j = \left\{ {1, \ldots ,S} \right\}$$
(2)
$$\mathop \sum \limits_{k = 1}^{m\left( j \right)} d_{jk} x_{jk} + St_{j} \le St_{l} , \quad \forall l \in Sc_{j} \;{\text{and}}\; \forall j = \left\{ {1, \ldots ,S} \right\}$$
(3)
$$D \ge St_{S + 1}$$
(4)

where C is the project cost; \(dc_{jk}\) is the direct cost of mode k for activity j; \(x_{jk}\) is a 0–1 variable equal to 1 if mode k is selected for executing activity j and 0 otherwise; ic is the daily indirect cost; D is the project duration; \(d_{jk}\) is the duration of mode k for activity j; \(St_{j}\) is the start time of activity j; and \(Sc_{j}\) is the set of immediate successors of activity j.
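As an illustration of Eqs. (1)–(4), the following Python sketch evaluates the project cost and duration for a candidate mode selection on a small hypothetical network; the three activities, their time–cost modes, and the indirect cost are invented for the example and are not taken from the paper's test instances.

```python
# Hypothetical 3-activity network: modes[j] lists the (duration, direct cost)
# execution modes of activity j; successors[j] is the set Sc_j.
modes = {
    1: [(3, 100), (2, 180)],
    2: [(4, 120), (3, 200)],
    3: [(2, 90), (1, 160)],
}
successors = {1: [2, 3], 2: [], 3: []}
INDIRECT_COST = 50  # ic, daily indirect cost (illustrative)


def evaluate(selection):
    """Return (project cost C of Eq. (1), project duration D) for a mode
    selection {activity: mode index}, which satisfies Eq. (2) by design."""
    # Early-start schedule honouring the precedence constraints of Eq. (3).
    start = {j: 0 for j in modes}
    for j in sorted(modes):  # activity IDs are already in topological order here
        d_j = modes[j][selection[j]][0]
        for l in successors[j]:
            start[l] = max(start[l], start[j] + d_j)
    # D is bounded below by every activity's finish time (Eq. (4)).
    duration = max(start[j] + modes[j][selection[j]][0] for j in modes)
    direct = sum(modes[j][selection[j]][1] for j in modes)
    return direct + duration * INDIRECT_COST, duration


# All-normal modes: longest durations, lowest direct costs.
cost, duration = evaluate({1: 0, 2: 0, 3: 0})
```

Crashing every activity (mode index 1) shortens the schedule but raises the direct cost, which is exactly the trade-off the DTCTP searches over.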

Hybrid multi-verse optimizer model for DTCTP

Multi-verse optimizer—MVO

The MVO algorithm is inspired by three concepts that theoretically exist in astronomy: white holes, black holes, and wormholes. White holes are elements thought to form universes, and they have never been observed. Black holes, in contrast, are observable and are characterized by an enormous gravitational force that attracts all surrounding objects. Wormholes, the third element, can exchange objects between different universes or different parts of one universe. In the MVO, these three elements are mathematically modeled to develop an optimization method that simulates the teleportation and exchange of objects between universes through white/black hole and wormhole tunnels. In addition, the idea of cosmic inflation is applied to the MVO through the inflation rate.

The model of the MVO algorithm is shown in Fig. 1. In this figure, a universe with a higher inflation rate has a white hole, while a universe with a lower inflation rate has a black hole. Objects are then transferred from the white holes of the source universe to the black holes of the target universe. To improve the overall inflation rate of the individual universes, it is assumed that universes with a high inflation rate are more likely to have white holes, whereas universes with a low inflation rate are more likely to have black holes. In Fig. 1, the white points represent celestial bodies travelling through wormholes. It can be observed that wormholes stochastically transport celestial bodies regardless of the universes' inflation rates.

Figure 1
figure 1

Conceptual model of the proposed MVO algorithm.

The roulette wheel mechanism (Eq. 6) is used to mathematically model white/black hole tunnels and exchange celestial objects between universes. When an optimization problem is solved with a maximized objective function, −NI is changed into NI. In each iteration, the universes are sorted by their inflation rate (fitness value), and the roulette wheel mechanism selects one of them to contain a white hole. Assume that:

$$U = \left[ {\begin{array}{*{20}c} {x_{1}^{1} } & {x_{1}^{2} } & \cdots & {x_{1}^{d} } \\ {x_{2}^{1} } & {x_{2}^{2} } & \cdots & {x_{2}^{d} } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{n}^{1} } & {x_{n}^{2} } & \cdots & {x_{n}^{d} } \\ \end{array} } \right]$$
(5)

where d is the number of parameters (variables) and n is the number of universes (candidate solutions):

$$x_{i}^{j} = \left\{ {\begin{array}{*{20}l} {x_{k}^{j} ,} & {r_{1} < NI\left( {U_{i} } \right)} \\ {x_{i}^{j} ,} & {r_{1} \ge NI\left( {U_{i} } \right)} \\ \end{array} } \right.$$
(6)

where \(x_{i}^{j}\) indicates the jth parameter of ith universe, Ui shows the ith universe, NI(Ui) is normalized inflation rate of the ith universe, r1 is a random number in [0, 1], and \({x}_{k}^{j}\) indicates the jth parameter of kth universe selected by a roulette wheel selection mechanism.
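The white/black hole exchange of Eq. (6) can be sketched as follows; the universe matrix and normalized inflation rates are illustrative values, and, for brevity, the roulette wheel here weights universes directly by their normalized inflation rates.

```python
import random


def roulette_wheel(weights):
    """Select an index with probability proportional to its weight."""
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1


def white_black_hole_update(universes, norm_inflation):
    """Eq. (6): parameter x_i^j is replaced by x_k^j from a roulette-selected
    universe k whenever r1 < NI(U_i); otherwise it is kept unchanged."""
    new = [row[:] for row in universes]
    for i, universe in enumerate(universes):
        for j in range(len(universe)):
            if random.random() < norm_inflation[i]:  # r1 < NI(U_i)
                k = roulette_wheel(norm_inflation)
                new[i][j] = universes[k][j]
    return new
```

Every parameter of the updated population is therefore either the universe's own value or a value drawn from another universe in the same column, which is how objects travel between universes without leaving the search space.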

With the above mechanism, universes exchange objects without disturbance. To maintain the diversity of the universes and support exploitation, each universe is assumed to have wormholes that stochastically transport its objects through space. To provide local changes for each universe and improve its inflation rate, wormhole tunnels are assumed to always be established between a universe and the best universe formed so far. This mechanism is expressed as follows:

$$x_{i}^{j} = \left\{ {\begin{array}{*{20}l} {X_{j} + TDR \times \left( {\left( {ub_{j} - lb_{j} } \right) \times r_{4} + lb_{j} } \right),} & {r_{2} < WEP\;{\text{and}}\;r_{3} < 0.5} \\ {X_{j} - TDR \times \left( {\left( {ub_{j} - lb_{j} } \right) \times r_{4} + lb_{j} } \right),} & {r_{2} < WEP\;{\text{and}}\;r_{3} \ge 0.5} \\ {x_{i}^{j} ,} & {r_{2} \ge WEP} \\ \end{array} } \right.$$
(7)

where \(X_{j}\) indicates the jth parameter of the best universe formed so far; TDR and WEP are the coefficients described below; \(lb_{j}\) is the lower bound of the jth variable; \(ub_{j}\) is the upper bound of the jth variable; \(x_{i}^{j}\) indicates the jth parameter of the ith universe; and r2, r3 and r4 are random numbers in [0, 1].
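A minimal sketch of the wormhole update of Eq. (7), with WEP and TDR passed in as scalars (their schedules are given in Eqs. (8) and (9)); the bounds and universes are illustrative:

```python
import random


def wormhole_update(universe, best, lb, ub, wep, tdr):
    """Eq. (7): with probability WEP, parameter j jumps to a point around the
    best universe found so far; TDR scales the jump and r3 picks its sign."""
    new = universe[:]
    for j in range(len(universe)):
        r2, r3, r4 = random.random(), random.random(), random.random()
        if r2 < wep:
            step = tdr * ((ub[j] - lb[j]) * r4 + lb[j])
            new[j] = best[j] + step if r3 < 0.5 else best[j] - step
    return new
```

Setting WEP to 0 leaves the universe untouched, while setting TDR to 0 collapses every wormhole jump onto the best universe, which matches the roles of the two coefficients described below.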

Two main coefficients, namely the wormhole existence probability (WEP) and the travelling distance rate (TDR), appear in Eq. (7). WEP determines the probability that a wormhole exists in a universe, and it increases linearly over the iterations (Eq. 8).

$$WEP = min + l \times \left( { \frac{max - min}{L}} \right)$$
(8)

where min is the minimum value of WEP, max is the maximum value of WEP, l is the current iteration, and L is the termination criterion (the maximum number of iterations).

TDR is a factor that determines the distance rate (variation) by which an object can be displaced by a wormhole around the best universe formed so far (Eq. 9). In contrast to WEP, TDR decreases over the iterations to allow more precise exploitation, or local search, around the best universe formed so far (Fig. 2).

$$TDR = 1 - \frac{{l^{1/p} }}{{L^{1/p} }}$$
(9)

where p defines the exploitation rate over the iterations: the larger p is, the earlier and more precise the exploitation/local search.
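The two schedules can be sketched directly from Eqs. (8) and (9); the default bounds and the value of p below are illustrative assumptions for the example, not the paper's settings (those appear in Table 1 and in the description of the hDMVO update).

```python
def wep(l, L, wep_min=0.2, wep_max=1.0):
    """Eq. (8): WEP grows linearly from wep_min to wep_max over L iterations."""
    return wep_min + l * (wep_max - wep_min) / L


def tdr(l, L, p=6):
    """Eq. (9): TDR shrinks towards 0 over the iterations; a larger p makes
    the exploitation (local search) start earlier and become more precise."""
    return 1 - (l ** (1 / p)) / (L ** (1 / p))
```

The opposing trends (WEP rising, TDR falling) are what shift the algorithm from exploration in early iterations to exploitation in late ones.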

Figure 2
figure 2

Wormhole existence probability (WEP) versus travelling distance rate (TDR).

In the MVO algorithm, the optimization process starts by generating a set of random universes. At each iteration, objects in universes with higher inflation rates tend to travel to universes with lower inflation rates through white/black hole tunnels. Meanwhile, every universe undergoes random teleportation of celestial bodies through wormholes towards the best universe. This process is repeated until the termination criterion is satisfied (e.g., a predetermined maximum number of iterations).

Sine cosine algorithm-SCA

A feature that stochastic population-based techniques have in common is the division of the optimization process into two phases: exploration and exploitation36. In the exploration phase, the optimization algorithm combines solutions abruptly and with a high degree of randomness to find the promising regions of the search space. In the exploitation phase, in contrast, the stochastic solutions change gradually, and the random variations are significantly smaller than those in the exploration phase.

In SCA, the mathematical equations for updating positions are given for both phases, see Eqs. (10) and (11):

$$X_{j}^{t + 1} = X_{j}^{t} + \alpha_{1} \times \sin \left( {\alpha_{2} } \right) \times \left| {\alpha_{3} D_{j}^{t} - X_{j}^{t} } \right|$$
(10)
$$X_{j}^{t + 1} = X_{j}^{t} + \alpha_{1} \times \cos \left( {\alpha_{2} } \right) \times \left| {\alpha_{3} D_{j}^{t} - X_{j}^{t} } \right|$$
(11)

where \(X_{j}^{t}\) is the position of the current solution in the jth dimension at the tth iteration, α1, α2 and α3 are random numbers, \(D_{j}^{t}\) is the position of the destination point in the jth dimension, and |·| indicates the absolute value.

The above two formulas are combined into a general formula as follows:

$$X_{j}^{t + 1} = \left\{ {\begin{array}{*{20}l} {X_{j}^{t} + \alpha_{1} \times \sin \left( {\alpha_{2} } \right) \times \left| {\alpha_{3} D_{j}^{t} - X_{j}^{t} } \right|,} & {\alpha_{4} < 0.5} \\ {X_{j}^{t} + \alpha_{1} \times \cos \left( {\alpha_{2} } \right) \times \left| {\alpha_{3} D_{j}^{t} - X_{j}^{t} } \right|,} & {\alpha_{4} \ge 0.5} \\ \end{array} } \right.$$
(12)

where α4 is a random number in [0,1].

Equation (12) shows that SCA has four main parameters: α1, α2, α3, and α4. α1 defines the movement direction, α2 determines how far the movement should be towards or away from the destination, α3 assigns a random weight to the destination, and α4 switches between the sine and cosine components in Eq. (12).

The general model in Fig. 3 shows the effect of the sine and cosine functions in the range [−2, 2]. The figure shows how the range of the sine and cosine changes in order to update the position of a solution. Randomness is also introduced by drawing α2 from [0, 2π] (Eq. (12)). This mechanism therefore ensures the exploration of the search space.

Figure 3
figure 3

Sine and cosine with the range in [− 2, 2] allow a solution to go around (inside the space between them) or beyond (outside the space between them) the destination.

In each iteration, the range of the sine and cosine functions in Eqs. (10)–(12) is changed to balance the exploitation and exploration phases, find the promising regions of the search space, and finally reach the global optimum, using Eq. (13):

$$\alpha_{1} = v - t\frac{v}{T}$$
(13)

where v is a constant, t is the current iteration and T is the maximum number of iterations. Figure 4 shows the reduction in the range of the sine and cosine functions over the course of iterations.
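Equations (12) and (13) together give one SCA position update, which can be sketched as follows; the constant v = 2 is an assumption for the example, and the destination vector stands for the best solution found so far.

```python
import math
import random


def sca_step(x, dest, t, T, v=2.0):
    """One SCA position update: Eq. (12) with the linearly decreasing
    amplitude a1 of Eq. (13)."""
    a1 = v - t * v / T  # Eq. (13): shrinks the sine/cosine range over time
    new = x[:]
    for j in range(len(x)):
        a2 = random.uniform(0, 2 * math.pi)  # how far towards/away from dest
        a3 = random.uniform(0, 2)            # random weight on the destination
        a4 = random.random()                 # switch between sine and cosine
        trig = math.sin(a2) if a4 < 0.5 else math.cos(a2)
        new[j] = x[j] + a1 * trig * abs(a3 * dest[j] - x[j])
    return new
```

Because a1 reaches 0 at the final iteration, the steps shrink to nothing and the search settles around the destination, which is the decreasing pattern shown in Fig. 4.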

Figure 4
figure 4

Decreasing pattern for range of sine and cosine.

Hybrid multi-verse optimizer model for DTCTP

By taking advantage of both SCA and MVO, hDMVO replaces the MVO's exploitation mechanism with the SCA's exploitation mechanism while preserving the MVO's roulette wheel selection mechanism (Eq. 14), thereby improving hDMVO's exploration and exploitation of the search area.

$$x_{i}^{j} = \left\{ {\begin{array}{*{20}l} {x_{k}^{j} ,} & {\delta_{1} < NI\left( {U_{i} } \right)} \\ {x_{i}^{j} ,} & {\delta_{1} \ge NI\left( {U_{i} } \right)} \\ \end{array} } \right.$$
(14)

where \({x}_{i}^{j}\) indicates the jth parameter of ith universe, Ui shows the ith universe, NI(Ui) is normalized inflation rate of the ith universe, δ1 is a random number in [0, 1], and \({x}_{k}^{j}\) indicates the jth parameter of kth universe selected by a roulette wheel selection mechanism.

A new formula which combines two algorithms MVO and SCA will be developed from Eq. (9) and Eq. (12) as follows:

$$x_{i}^{j} = \left\{ {\begin{array}{*{20}l} {X_{j} + TDR \times \sin \left( {\delta_{5} } \right) \times \left( {\left( {ub_{j} - lb_{j} } \right) \times \delta_{4} + lb_{j} } \right),} & {\delta_{2} < WEP\;{\text{and}}\;\delta_{3} < 0.5} \\ {X_{j} - TDR \times \cos \left( {\delta_{5} } \right) \times \left( {\left( {ub_{j} - lb_{j} } \right) \times \delta_{4} + lb_{j} } \right),} & {\delta_{2} < WEP\;{\text{and}}\;\delta_{3} \ge 0.5} \\ {x_{i}^{j} ,} & {\delta_{2} \ge WEP} \\ \end{array} } \right.$$
(15)

where \(X_{j}\) indicates the jth parameter of the best universe formed so far. WEP is the wormhole existence probability, calculated by Eq. (8) with min = 0.2 and max = 3, and TDR is the travelling distance rate, calculated by Eq. (9) with p = 10. \(lb_{j}\) is the lower bound of the jth variable, \(ub_{j}\) is the upper bound of the jth variable, and \(x_{i}^{j}\) indicates the jth parameter of the ith universe. δ2, δ3 and δ4 are random numbers in [0, 1], and δ5 is a random number in [0, 2π] that determines how far the movement should be towards or away from the destination. The pseudo-code and flowchart of the hDMVO method are given in Figs. 5 and 6, and the parameter settings summarized in Table 1 provided an adequate configuration for hDMVO, MVO and SCA.
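Under the stated parameter roles, the hybrid update of Eq. (15) can be sketched as follows; WEP and TDR are passed in as scalars computed from their schedules, and the bounds and universes are illustrative:

```python
import math
import random


def hdmvo_update(universe, best, lb, ub, wep, tdr):
    """Eq. (15): the MVO wormhole move of Eq. (7) with the step modulated by
    the SCA sine/cosine factor sin(d5) or cos(d5)."""
    new = universe[:]
    for j in range(len(universe)):
        d2, d3, d4 = random.random(), random.random(), random.random()
        d5 = random.uniform(0, 2 * math.pi)
        if d2 < wep:
            step = tdr * ((ub[j] - lb[j]) * d4 + lb[j])
            if d3 < 0.5:
                new[j] = best[j] + math.sin(d5) * step
            else:
                new[j] = best[j] - math.cos(d5) * step
    return new
```

The only change relative to the plain MVO wormhole move is the sine/cosine factor, which lets a parameter land on either side of, or exactly on, the best universe, giving the finer-grained exploitation inherited from SCA.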

Figure 5
figure 5

Pseudo-code of the proposed hDMVO algorithm.

Figure 6
figure 6

Flowchart of the proposed hDMVO algorithm.

Table 1 Parameter settings of the hDMVO, SCA and MVO.

The complexity of the hDMVO algorithm is influenced by several factors, including the number of activities, the number of schedules, the number of iterations, the roulette wheel selection mechanism, and the sorting mechanism. The roulette wheel selection, which is applied to every activity in each solution over the course of the iterations, has a complexity of O(log N). The solutions are sorted at each iteration using the Quicksort algorithm, which has a worst-case complexity of O(N^2). As a result, the overall computational complexity is:

$$O\left( {hDMVO} \right) = O\left( {T\left( {O\left( {Quicksort\, algorithm} \right) + N \times n \times \left( {O\left( {roulette\, wheel\, selection} \right)} \right)} \right)} \right)$$
(16)
$$O\left( {hDMVO} \right) = O\left( {T\left( {N^{2} + N \times n \times \log N} \right)} \right)$$
(17)

where n is the number of activities, N is the number of schedules, and T is the maximum number of iterations.

In the optimization process, the solution a is evaluated to be better than solution b if:

$$a > b\, if\, C_{a} < C_{b}$$
(18)

When the project costs are equal (\(C_{a} = C_{b}\)), the option with the shorter completion time is considered the better one:

$$a > b\, if \left\{ {\begin{array}{*{20}c} {C_{a} = C_{b} } \\ {D_{a} < D_{b} } \\ \end{array} } \right.$$
(19)

In case both options a and b have the same project cost and project duration, the better solution is selected stochastically.
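The comparison rules of Eqs. (18) and (19) can be sketched as a small Python predicate; representing each solution as a (cost, duration) pair is an assumption for the example.

```python
import random


def better(a, b):
    """Eqs. (18)-(19): solution a beats solution b on lower cost; ties on
    cost fall back to lower duration; full ties are broken stochastically."""
    cost_a, dur_a = a
    cost_b, dur_b = b
    if cost_a != cost_b:
        return cost_a < cost_b    # Eq. (18)
    if dur_a != dur_b:
        return dur_a < dur_b      # Eq. (19)
    return random.random() < 0.5  # same cost and duration: random pick
```

This lexicographic ordering makes cost the primary objective, with duration acting only as a tie-breaker, which matches the cost optimization form of the DTCTP used in the experiments.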

The hDMVO algorithm significantly improves the exploration and exploitation of the search space. The hybrid algorithm not only searches for the optimal solution among the sets of solutions stochastically generated in the initial phase, but also exploits the space between solutions at each iteration to find new promising regions of the search space.

Computational experiments

Convergence behaviours

To assess the optimization capabilities of hDMVO, a comprehensive analysis was conducted using twenty-three well-known benchmark test functions. Comparisons were made with the results obtained from four other optimization algorithms: MVO, SCA, DA, and ALO. These benchmark functions are grouped into three categories, namely unimodal, multimodal, and fixed-dimension multi-modal functions, as detailed in Tables 2, 3 and 4.

Table 2 Uni-modal test functions.
Table 3 Multi-modal test functions.
Table 4 Fixed-dimension multi-modal test functions.

In order to ensure a fair comparison, all of the algorithms were executed 30 times for each benchmark function. Statistical results, including the mean value (ave) and standard deviation (std), were collected from 30 runs of the algorithm. It is important to note that 30 search agents and a maximum of 500 iterations were utilized in the analysis. The statistical results of the hDMVO algorithm, as well as those of other comparative algorithms (DA, ALO, SCA, and MVO), can be found in Tables 5, 6 and 7.

Table 5 Results of unimodal benchmark functions.
Table 6 Results of multi-modal benchmark functions.
Table 7 Results of composite benchmark functions.

It should be noted that unimodal functions possess a single global optimum, making them an appropriate choice for evaluating the exploitation mechanism. The results in Table 5 reveal that hDMVO exhibited superior exploitation compared with the other swarm-based optimization algorithms on the unimodal test functions, outperforming MVO and DA on 7 of the 7 functions, ALO on 5 of 7, and SCA on 4 of 7.

Unlike unimodal functions, multimodal benchmark functions possess many local optima in addition to the global optimum. Multimodal test functions are therefore well suited for evaluating the exploration capabilities of hDMVO. The results for the multimodal test functions (Table 6) show that hDMVO performs better than MVO, DA and ALO, and is comparable to SCA (6 out of 6 against MVO, ALO and DA, and 5 out of 6 against SCA). The ability of hDMVO to effectively avoid local optima and explore the search space is thus demonstrated by its performance.

The composite test functions, as the name implies, combine various unimodal and multimodal test functions with variations such as rotation, shifting, and bias. They resemble real search spaces with multiple local optima, which makes them useful for testing the balance between exploration and exploitation of the search space. The results of the hDMVO algorithm on the composite test functions (F14–F23) are presented in Table 7 and indicate that hDMVO outperforms the other population-based optimization algorithms in terms of average values, demonstrating its ability to effectively balance search space exploration and exploitation.

The performance of the hDMVO algorithm in terms of convergence, in comparison to other state-of-the-art algorithms (DA, ALO, SCA, and MVO), is depicted in Figs. 7, 8 and 9. Through the use of 10 agents and 150 iterations, the study generated convergence curves which demonstrate that hDMVO has a higher likelihood of reaching optimal convergence on a majority of the benchmark test functions.

Figure 7
figure 7

Convergence curves of MVO, SCA, ALO, DA, and hDMVO variants for unimodal functions.

Figure 8
figure 8

Convergence curves of MVO, SCA, ALO, DA, and hDMVO variants for multimodal functions.

Figure 9
figure 9

Convergence curves of MVO, SCA, ALO, DA, and hDMVO variants for composite functions.

Medium-scale instances of DTCTP

The medium-scale instances comprise two 63-activity problems in which each activity has a maximum of five modes22. The network diagram of this problem is shown in Fig. 10, and the time–cost alternatives for these instances are listed in Table 8. The medium-scale instances, with \(1.37 \times 10^{42}\) possible solutions, were tested at two different levels of indirect cost: 2300 USD/day in the first problem (63a) and 3500 USD/day in the second problem (63b). The optimal solutions for these two problems are 5,421,120 USD and 6,176,170 USD, respectively. The hDMVO algorithm was implemented in Python in Visual Studio Code, and all DTCTP instances were tested on a personal computer with an Intel Core i7-8750H 2.20 GHz CPU and 8.0 GB of RAM.

Figure 10
figure 10

Activity on the node (AoN) representation of the 63-activity network (Aminbakhsh and Sonmez20).

Table 8 Data for the 63 activity time–cost trade-off problem.

hDMVO obtained exceptional results on the medium-scale experimental problems: the best values among the ten runs are 5,444,670 USD for problem 63a (Table 9) and 6,211,720 USD for problem 63b (Table 10). The distributions of the percentage deviations of hDMVO, MVO, and SCA over ten trials of problems 63a and 63b are illustrated in Figs. 11 and 12. The figures show that hDMVO has a smaller percentage deviation than MVO and SCA, indicating its ability to effectively balance exploration and exploitation when solving medium-scale DTCTP problems.

Table 9 Analysis results of problem 63a.
Table 10 Analysis results of problem 63b.
Figure 11
figure 11

Percentage deviations of hDMVO, MVO and SCA in problem 63a.

Figure 12
figure 12

Percentage deviations of hDMVO, MVO and SCA in problem 63b.

The average percent deviation (APD) of hDMVO from the global optimum for problems 63a and 63b is summarized in Table 11. The results show that hDMVO outperforms ACO, GA, and the electromagnetism mechanism (EMS)18, the sole genetic algorithm (GA) and hybrid genetic algorithm (HA)22, and the modified adaptive weight approach with genetic algorithms (MAWA-GA), with particle swarm optimization (MAWA-PSO), and with teaching–learning-based optimization (MAWA-TLBO)37. hDMVO also outperforms the two original algorithms, MVO and SCA, with APDs of 0.61% and 0.71% for problems 63a and 63b, respectively, within 50,000 schedules. As evident from Table 11, hDMVO outperforms both native algorithms (MVO and SCA) in the medium-scale instances. By searching only 50,000 of the \(1.37 \times 10^{42}\) potential solutions, hDMVO identified high-quality solutions that are very close to the optimal value.

Table 11 Average percent deviation from the optimal for problem 63a and 63b.

Large-scale instances of DTCTP

The large-scale instances comprise two 630-activity problems in which each activity has a maximum of five modes, giving \(2.38 \times 10^{421}\) possible solutions22. These cases represent the size of an actual construction project. The indirect costs for the large-scale instances (630a and 630b) are the same as those of the medium-scale instances (2300 USD/day and 3500 USD/day, as for 63a and 63b, respectively).

The hDMVO algorithm also shows its effectiveness in the large-scale experimental problems; the best values over ten runs for problems 630a and 630b are 54,816,950 USD (Table 12) and 62,505,580 USD (Table 13), respectively. Figures 13 and 14 show boxplots of the percentage deviations of hDMVO, MVO and SCA over ten runs of problems 630a and 630b. As the figures show, the percentage deviations of hDMVO are much smaller than those of MVO and SCA, demonstrating the stability of hDMVO when solving large-scale DTCTP problems.

Table 12 Analysis results of problem 630a.
Table 13 Analysis results of problem 630b.
Figure 13
figure 13

Percentage deviations of hDMVO, MVO and SCA in problem 630a.

Figure 14
figure 14

Percentage deviations of hDMVO, MVO and SCA in problem 630b.

For the large-scale instances, hDMVO provides superior results (Table 14) against GA, the genetic algorithm with simulated annealing (GASA), the hybrid genetic algorithm with quantum simulated annealing (HGAQSA), the genetic memetic algorithm with simulated annealing (GMASA), the genetic algorithm with simulated annealing and variable neighborhood search (GASAVNS), PSO, electromagnetic scatter search (ESS)18, and GA and HA22. hDMVO completely outperforms SCA and performs slightly better than MVO. hDMVO also yields better results than NDS–TLBO30 for problem 630b; for problem 630a, hDMVO achieves an APD of 1.27% when searching 50,000 solutions, while NDS–TLBO achieves an APD of 1.1% when searching 250,000 solutions. hDMVO's APD values for problems 630a and 630b are 1.27% and 1.28%, respectively (Table 14), which are significantly better than those of the two original algorithms, MVO and SCA. By searching only 50,000 of the \(2.38 \times 10^{421}\) potential solutions, hDMVO achieves high-quality solutions for the large-scale instances. These results indicate that hDMVO overcomes the disadvantages of MVO and SCA in search space exploration and exploitation to approach the optimal value.

Table 14 Average percent deviation from the optimal for problem 630a and 630b.

Conclusion

This study presents a combined model of MVO and SCA for global optimization. The objective of the combination is to exploit the exploration capability of MVO and the search space exploitation of SCA to achieve an effective balance between the two phases during optimization. hDMVO adopts the search space exploitation mechanism of SCA while preserving MVO's roulette wheel selection mechanism, thereby improving its search space exploration and exploitation. hDMVO was comprehensively evaluated on twenty-three benchmark optimization problems, and the results indicate that it is more likely to achieve global optimization than SCA and MVO. In this study, hDMVO was applied to the discrete time–cost trade-off problem in construction projects. The results of the computational experiments reveal that hDMVO achieves high-quality solutions for medium- and large-scale DTCTPs and can be used to optimize the cost–time problems of actual projects. With the obtained results, hDMVO is an appropriate metaheuristic method for solving the DTCTP as well as other optimization problems.

Recommendations for future work

In this study, the application of hDMVO is limited to DTCTP problems with finish-to-start relationships. However, actual construction projects often also include start-to-start, finish-to-finish, and start-to-finish relationships. Therefore, in future studies, hDMVO will be used to solve large-scale DTCTP problems with these more complicated relationships simultaneously to obtain more comprehensive solutions for project management. The hDMVO model has been shown to balance exploration and exploitation effectively when compared with other state-of-the-art swarm-based optimization algorithms (DA, ALO, SCA, and MVO), and it demonstrates competitive performance on medium- and large-scale DTCTPs. However, hDMVO also shows clear limitations in avoiding local optima when applied to large-scale problems. To overcome these limitations, future research will develop a combined model that incorporates hDMVO with other techniques, such as the modified adaptive weight approach and opposition-based learning, to enhance its performance on optimization problems in the construction industry and other technical fields.