Single-machine scheduling with periodic maintenance and learning effect

This paper discusses a single-machine scheduling problem with periodic maintenance activities and a position-based learning effect, with the objective of minimizing the makespan. To obtain exact solutions for small-scale problems, a new two-stage binary integer programming (BIP) model is formulated, and a branch-and-bound (B&B) algorithm combining a bounding method with pruning rules is proposed. Based on a property of the optimal solution, a special search neighborhood is constructed, and a hybrid genetic-tabu search algorithm (HGTSA), which embeds the tabu technique as an operator within the genetic mechanism, is proposed to solve medium-scale and large-scale problems. Moreover, to improve the efficiency of the genetic algorithm (GA) and the HGTSA, the Taguchi method is used for parameter tuning. Finally, computational experiments are carried out to compare the efficiency and performance of these algorithms.


Problem description
In this study, we consider a single-machine scheduling problem with periodic maintenance activities and a learning effect simultaneously. The objective is to minimize the makespan. The details of the problem are as follows. There are n nonresumable jobs J = {J_1, J_2, ..., J_n} to be processed on a single machine. A nonresumable job is one whose processing, if not completed before the next maintenance activity, must be restarted from scratch after that maintenance activity. All jobs are independent of each other and available at time zero. Each job J_i has a normal processing time p_i. However, due to the learning effect, the actual processing time p_ir of J_i is less than or equal to p_i and depends on its position r in the schedule (see Ref. 19). The actual processing time p_ir is given by the following formula.
The machine must be shut down for preventive maintenance after each period of length T. The time required to perform each maintenance is t. The machine cannot process any job during maintenance.
In a schedule, jobs processed consecutively form a batch, denoted by B. Therefore, a feasible schedule can be represented as a series of batches. The total actual processing time of the jobs in each batch cannot exceed T, and a periodic maintenance follows each fixed period T. The Gantt chart of the problem is shown in Fig. 1, where M_l is the l-th maintenance and B_l is the l-th batch. L represents the number of batches required to process the n jobs. I_l denotes the machine idle time between the completion of the last job in B_l and the beginning of M_l. We suppose that there are n_l jobs in batch B_l. For convenience, let J_[i] be the job at the i-th position in the schedule.
p_ir = p_i r^a, i, r = 1, 2, ..., n, (1)

where a ≤ 0 is a constant learning index.

Proof (by contradiction). Suppose π is an optimal schedule in which two adjacent jobs J_i and J_j of the same batch B_h satisfy p_i > p_j, with J_i sequenced immediately before J_j. Consider the feasible schedule π′ derived from π by exchanging the positions of J_i and J_j. Obviously, except for J_i and J_j in B_h, the actual processing times of the other jobs in π are not affected. For convenience, assume that the position of J_i in π is r; then the position of J_j in π is r + 1. In π, the sum of the actual processing times of J_i and J_j equals p_i r^a + p_j (r+1)^a; in π′, it equals p_j r^a + p_i (r+1)^a. If π were optimal, then p_i r^a + p_j (r+1)^a ≤ p_j r^a + p_i (r+1)^a. Therefore (p_i − p_j) r^a ≤ (p_i − p_j)(r+1)^a, and since p_i > p_j, it follows that r^a ≤ (r+1)^a. This contradicts a < 0, for which r^a > (r+1)^a. Therefore π is not an optimal schedule.
Repeating this operation until the jobs in the same batch are in non-decreasing order of their normal processing times confirms the theorem. □ Note that when the single machine is continuously available, the problem 1|pm, nr−le|C_max degenerates into the general makespan problem with a learning effect, and the property still holds.
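As a small illustration, the position-based actual processing times of Eq. (1) and the non-decreasing (SPT) ordering of Property 2 can be sketched as follows; the job set and the learning index value below are hypothetical.

```python
# Position-based learning effect (Eq. (1)): the actual processing time of a
# job with normal time p at position r is p * r**a, with learning index a <= 0.
def actual_time(p, r, a):
    return p * r ** a

# Hypothetical instance: three normal processing times, learning index -0.3.
p = [4.0, 2.0, 3.0]
a = -0.3

# Property 2: within a batch, jobs should appear in non-decreasing order of
# their normal processing times; here the whole sequence is sorted that way.
seq = sorted(p)
times = [actual_time(pj, r, a) for r, pj in enumerate(seq, start=1)]
```

Since a ≤ 0, each actual processing time is at most the corresponding normal processing time, and later positions receive larger discounts.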
Although Property 2 gives the optimal sequence within each batch, idle times should be inserted in the optimal sequence to obtain the optimal schedule.

Property 3
In the problem 1|pm, nr−le|C_max, there exists an optimal schedule in which no job of a later batch fits into an earlier idle interval, i.e., I_l < p_i (n_1 + n_2 + ⋯ + n_l)^a for every job J_i ∈ B_h with h > l.

Proof (by contradiction). Suppose π is an optimal schedule that does not satisfy this conclusion, that is, there exists a job J_i ∈ B_h such that I_l ≥ p_i (n_1 + n_2 + ⋯ + n_l)^a and h > l.
Consider the feasible schedule π′ derived from π by deleting job J_i from its original position and inserting it at the end of B_l. Obviously, except for job J_i and the jobs sequenced before J_i in B_s (s = l + 1, ..., h), the actual processing times of the other jobs in π are not affected. The completion time of job J_i is reduced by at least t in π′. The actual processing times of the jobs scheduled earlier than J_i in B_s (s = l + 1, ..., h) are reduced due to the learning effect, and their completion times are reduced accordingly. The completion times of the jobs scheduled later than J_i in B_h are reduced both because J_i is removed from its original position and because the actual processing times of the jobs scheduled before J_i in the same batch are reduced.
Therefore the completion time of each job in schedule π′ is less than or equal to the completion time of the corresponding job in schedule π. Repeating this operation yields the theorem. □

Property 4 For the problem 1|pm, nr−le|C_max, let π_1 and π_2 be two partial schedules composed of the same jobs. If the makespan of π_1 is smaller than that of π_2, then π_2 is dominated by π_1.
Proof Let π_3 be a partial schedule composed of the remaining jobs. Since the partial schedule π_1 has a smaller makespan than π_2, the makespan of the full schedule (π_1, π_3) is not greater than that of (π_2, π_3). Thus, π_2 is dominated by π_1. □ The dominance rules given in Properties 2-4 can eliminate some partial sequences and reduce unnecessary searches.

Two-stage BIP model
To obtain the optimal schedule, the actual processing times, the periodic maintenance times and the idle times must all be considered. The sum of the periodic maintenance times is (L − 1)t, while the sum of the idle times is ∑_{l=1}^{L−1} I_l. The makespan of 1|pm, nr−le|C_max can therefore be expressed as

C_max = ∑_{i=1}^{n} p_{[i]i} + (L − 1)t + ∑_{l=1}^{L−1} I_l. (2)

Therefore, the sum of the total actual processing time and the total idle time, as well as the number of batches, must be minimized in the optimal schedule. We develop a two-stage BIP model to derive the optimal schedule. The BIP model in the first stage ascertains the minimum number of batches needed to process the n jobs. The second stage minimizes the sum of the total machine idle time and the total actual processing time. Once this minimum sum is determined in the second stage, the optimal solution is obtained correspondingly.
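The decomposition of the makespan into processing, maintenance and idle terms can be checked numerically; the batching, T and t values below are hypothetical.

```python
# Makespan of a batched schedule (Eq. (2)): total actual processing time of
# all jobs, plus (L - 1) maintenance periods of length t, plus the idle time
# left before each maintenance (jobs are nonresumable, so a job that does not
# fit into the remaining window waits until after the maintenance).
def makespan(batches, T, t, a):
    r = 0                 # global position index; learning is position-based
    total_proc, idle = 0.0, 0.0
    for l, batch in enumerate(batches):
        load = sum(p * (r + k) ** a for k, p in enumerate(batch, start=1))
        r += len(batch)
        assert load <= T + 1e-9, "batch exceeds the maintenance window T"
        total_proc += load
        if l < len(batches) - 1:          # idle before maintenance l + 1
            idle += T - load
    return total_proc + (len(batches) - 1) * t + idle

# Hypothetical instance: two batches, window T = 6, maintenance time t = 1.
cmax = makespan([[2.0, 3.0], [4.0]], T=6.0, t=1.0, a=-0.3)
```

For two batches the result collapses to T + t plus the load of the last batch, which is exactly why minimizing the idle and processing terms drives the optimum.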
Since the jobs in each batch cannot be predetermined, the maximum possible number of batches is taken as the initial condition: if n jobs are to be processed, there are at most n batches. However, after the jobs are sequenced according to the objective function, some batches will be assigned multiple jobs, which leaves other batches without assigned jobs. These empty batches are omitted in our programming model. Recognizing this relation, we define two binary decision variables x_ijl and y_l in our model.
The first-stage BIP model is as follows. Equation (3) describes the objective of the first-stage BIP model, that is, to minimize the number of batches needed to process the n jobs. Constraints (4) guarantee that each position holds exactly one job. Constraints (5) guarantee that each job is assigned to exactly one position. Constraints (6) ensure that the total actual processing time of the jobs in each batch does not exceed the allowable maximum time T. Constraints (7) calculate the number of jobs in each batch. In constraints (8), M is a very large positive number; these constraints guarantee that if there are no jobs in batch B_l, then there are no jobs in batch B_{l+1}. Constraints (9) and (10) restrict the decision variables y_l and x_ijl to be binary.
The second-stage BIP model is as follows, with the decision variable

x_ijl = 1 if J_i is sequenced at position j of batch B_l, and 0 otherwise.

Equation (11) describes the objective of the second-stage BIP model, that is, to minimize the makespan. Constraints (12)-(16) have the same meaning as constraints (4)-(8). Constraints (17) calculate the completion time of J_[r]. Constraints (18) and (19) set up the restrictions for C_[0] and x_ijl.
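Since the constraint matrices themselves are not reproduced in this excerpt, an exhaustive enumeration over a tiny hypothetical instance can play the role of the exact reference method and illustrate the two-stage ordering (minimize the batch count first, then the makespan); this is a sketch, not the BIP model itself.

```python
from itertools import permutations

# Exhaustive reference for a tiny hypothetical instance, mirroring the
# two-stage structure: tuples (number of batches, makespan) are minimized
# lexicographically, i.e. the batch count first and then the makespan.
# Jobs are packed greedily into windows of length T (nonresumable jobs).
def schedule(order, T, t, a):
    r, load, L, finish = 0, 0.0, 1, 0.0
    for p in order:
        r += 1
        ap = p * r ** a                  # actual time at global position r
        if load + ap > T:                # does not fit: idle, then maintenance
            finish += (T - load) + t
            L += 1
            load = 0.0
        load += ap
        finish += ap
    return L, finish

jobs = [5.0, 3.0, 4.0, 2.0]
best = min(schedule(order, T=8.0, t=2.0, a=-0.3) for order in permutations(jobs))
```

For a fixed job order, packing without unnecessary idle time is optimal because actual times depend only on global positions, so greedy packing per permutation loses no generality.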

Proposed algorithms
Property 1 shows that the problem 1|pm, nr−le|C_max is NP-hard in the strong sense. In view of the complexity of the problem, three algorithms, namely a branch-and-bound (B&B) algorithm, a genetic algorithm (GA) and a hybrid genetic-tabu search algorithm (HGTSA), are proposed in this section.

B&B algorithm.
Since the problem 1|pm, nr−le|C_max is NP-hard in the strong sense, implicit enumeration techniques can be used to obtain optimal solutions for small-scale problems. In this subsection, we present a B&B algorithm incorporating a bounding method and several pruning rules. In the search tree, each node represents a partial schedule, and each branch represents the addition of a new job to the partial schedule.
Upper bound. In the B&B algorithm, enumeration reduction is accomplished by calculating and comparing upper and lower bounds. The better the initial upper bound, the more nodes (i.e., partial schedules) can be eliminated in the initial stage of the B&B algorithm, and hence the shorter the search time.
Index the jobs in shortest normal processing time (SPT) order. The procedure for computing the initial upper bound is as follows.
Step 1: Arrange the jobs in non-decreasing order of their normal processing times.

Step 2: Create the first batch and put the first job of the sequence into it.
Step 3: Construct a candidate batch set: a job can be placed in a batch only if the cumulative actual processing time of the jobs assigned to that batch so far, including the current job, does not exceed T.
Step 4: (1) If the candidate set is empty, create a new batch and assign the current job to it; otherwise, (2) select from the candidate set the batch for which the difference between T and the cumulative actual processing time of its assigned jobs, including the current job, is smallest.
Step 5: Repeat Step 3 and Step 4 until all jobs are sequenced.
Step 6: Calculate the makespan of the obtained sequence.
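The steps above can be sketched as follows; the instance is hypothetical, and the candidate-set best fit of Steps 3-4 is simplified here to a greedy fill of the current window (so global positions stay exact), making this an illustrative upper bound rather than the paper's exact procedure.

```python
# Simplified sketch of the initial upper bound: jobs in SPT order (Step 1)
# are packed greedily into maintenance windows of length T; the makespan of
# the resulting schedule is the initial upper bound (Step 6). The best-fit
# candidate set of Steps 3-4 is omitted for brevity.
def initial_upper_bound(p, T, t, a):
    jobs = sorted(p)                 # Step 1: non-decreasing normal times
    load, finish = 0.0, 0.0
    for r, pj in enumerate(jobs, start=1):
        ap = pj * r ** a             # actual time at global position r
        if load + ap > T:            # job does not fit: idle + maintenance
            finish += (T - load) + t
            load = 0.0
        load += ap
        finish += ap
    return finish

# Hypothetical instance: four jobs, window T = 8, maintenance time t = 2.
ub = initial_upper_bound([5.0, 3.0, 4.0, 2.0], T=8.0, t=2.0, a=-0.3)
```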
Lower bound. At any given node D, the jobs {J_1, J_2, ..., J_n} are divided into two categories: scheduled jobs and unscheduled jobs. At node D, assume that n_D jobs have been scheduled and assigned to positions 1 to n_D. Let S_D = (J_[1], J_[2], ..., J_[n_D]) be the partial schedule composed of the n_D scheduled jobs, and let US_D be the set of the n − n_D unscheduled jobs. Let tt denote the total actual processing time of the jobs in the last batch of S_D. Let z_L(D) be the lower bound of node D, z_1(D) the makespan of the n_D scheduled jobs, and z_2(D) the makespan of the n − n_D unscheduled jobs. z_1(D) can be obtained directly; z_2(D) must be estimated. Suppose π′ is a partial schedule corresponding to the set US_D; then π′ starts at position n_D + 1, and batch B′_i is indexed from the beginning of π′. To obtain a lower bound of node D, it suffices to ascertain a lower bound of π′. z_2(D) is the sum of the total actual processing time, the total maintenance time and the total idle time of the n − n_D unscheduled jobs. Mosheiov proved that, when there is no preventive maintenance, the minimum makespan of single-machine scheduling with a learning effect is obtained by the shortest processing time rule 32. Accordingly, a lower bound on the total actual processing time is

∑_{j=1}^{n−n_D} p′_j (n_D + j)^a,

where p′_1, p′_2, ..., p′_{n−n_D} are indexed in non-decreasing order of the normal processing times of the jobs in US_D. In addition, the makespan of the n − n_D unscheduled jobs is minimal when there is no idle time, which bounds the needed maintenance time from below. Combining these bounds gives the lower bound z_L(D) of node D in formula (22).

Pruning rules. Pruning rules eliminate unnecessary searches, which greatly improves the search speed and the computational efficiency. In this subsection, we propose three pruning rules.
Rule 1: If the lower bound corresponding to node D is greater than the current upper bound, then D should be deleted.
Any job J_i ∈ US_D has normal processing time p_i. A new node D_i can be obtained from node D by attaching J_i at the end of S_D, i.e., at position n_D + 1.
Rule 2: If job J_i fits in the current last batch, i.e. tt + p_i (n_D + 1)^a ≤ T, but p_i is smaller than the normal processing time of the last job already in that batch, then D_i should be eliminated. Rule 2 is based on Property 2 of the optimal schedule: the jobs in the same batch should be arranged in non-decreasing order of their normal processing times.
Rule 3: If job J_i does not fit in the current last batch, i.e. tt + p_i (n_D + 1)^a > T, so that J_i opens a new batch, and the resulting idle time violates the condition of Property 3, then D_i should be eliminated.
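The lower bound described above (used by Rule 1) can be sketched as follows; since formula (22) is not reproduced in this excerpt, the maintenance term is a simple no-idle packing relaxation and the code is an illustrative stand-in.

```python
import math

# Sketch of the lower bound at a node D: the unscheduled jobs are sequenced
# by SPT from position n_D + 1 and their actual times summed (a valid lower
# bound on the remaining processing time); additional maintenances are
# bounded from below by packing the remaining work with no idle time. This
# is a simplified stand-in for formula (22), which is not reproduced here.
def lower_bound(z1, tt, unscheduled, n_D, T, t, a):
    ps = sorted(unscheduled)                         # SPT order
    total = sum(p * (n_D + j) ** a for j, p in enumerate(ps, start=1))
    extra = max(0, math.ceil((tt + total - T) / T))  # extra maintenances
    return z1 + total + extra * t

# Hypothetical root-node call: no jobs scheduled yet, two jobs remaining.
lb = lower_bound(0.0, 0.0, [6.0, 6.0], 0, 10.0, 1.0, 0.0)
```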
Algorithm steps. There are many nodes in the search tree to be examined. We use a depth-first strategy, which avoids saving too many nodes in computer memory. The detailed steps of the B&B algorithm are as follows.
Step 1: Use the makespan obtained by the method proposed in "Upper bound" as the initial upper bound of the B&B algorithm.

Step 2: Initialize the search tree with the root node, i.e., the empty partial schedule.
Step 3: If the search tree is empty, stop. Otherwise, select an unsearched node D.
Step 4: If D is a leaf node, that is, US_D = ∅, then a complete schedule has been obtained. Calculate its makespan according to formula (2). If it is less than the current upper bound, use it to update the current upper bound; otherwise, eliminate the leaf node. Go to Step 3.
Step 5: For any job J_i ∈ US_D, a new node D_i can be obtained from node D by attaching J_i at the end of S_D.
Step 6: In the case of tt + p_i · (n_D + 1)^a ≤ T, if the new node D_i satisfies Rule 2, eliminate it. Otherwise, calculate the lower bound corresponding to D_i according to formula (22); if the lower bound satisfies Rule 1, eliminate it. Otherwise, put the new node D_i in the search tree and let tt = tt + p_i · (n_D + 1)^a, n_D = n_D + 1. Go to Step 3.

Step 7: In the case of tt + p_i · (n_D + 1)^a > T, if the new node D_i satisfies Rule 3, eliminate it. Go to Step 3.

Step 8: In the case of tt + p_i · (n_D + 1)^a > T for a job J_i ∈ US_D, assign job J_i to the next new batch to obtain the new node D_i. Calculate the lower bound corresponding to D_i according to formula (22); if the lower bound satisfies Rule 1, eliminate it. Otherwise, put the new node D_i in the search tree and let tt = p_i · (n_D + 1)^a, n_D = n_D + 1. Go to Step 3.
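A compact depth-first sketch of the whole procedure is given below. It is illustrative only: Rule 3 is omitted, the lower bound is a simple relaxation rather than formula (22), and it assumes every normal processing time fits into one window (p_i ≤ T).

```python
import math

# Greedy SPT schedule: supplies the initial upper bound (see "Upper bound").
def greedy_spt(p, T, t, a):
    load, finish = 0.0, 0.0
    for r, pj in enumerate(sorted(p), start=1):
        ap = pj * r ** a
        if load + ap > T:
            finish += (T - load) + t
            load = 0.0
        load += ap
        finish += ap
    return finish

# Compact depth-first B&B sketch: each node is a partial sequence; Rule 2
# (Property 2: SPT order inside a batch) and Rule 1 (bound pruning, here
# with a simple relaxation instead of formula (22)) cut the search tree.
def branch_and_bound(p, T, t, a):
    best = [greedy_spt(p, T, t, a)]          # incumbent upper bound

    def lb(finish, load, r, remaining):      # relaxation lower bound
        total = sum(pj * (r + j) ** a
                    for j, pj in enumerate(sorted(remaining), start=1))
        extra = max(0, math.ceil((load + total - T) / T))
        return finish + total + extra * t

    def dfs(remaining, finish, load, r, last):
        if not remaining:
            best[0] = min(best[0], finish)
            return
        for i, pj in enumerate(remaining):
            ap = pj * (r + 1) ** a
            rest = remaining[:i] + remaining[i + 1:]
            if load + ap <= T:               # job stays in the current batch
                if last is not None and pj < last:
                    continue                 # Rule 2: keep SPT within a batch
                nf, nl = finish + ap, load + ap
            else:                            # job opens a new batch
                nf, nl = finish + (T - load) + t + ap, ap
            if lb(nf, nl, r + 1, rest) < best[0]:   # Rule 1
                dfs(rest, nf, nl, r + 1, pj)

    dfs(list(p), 0.0, 0.0, 0, None)
    return best[0]

opt = branch_and_bound([5.0, 3.0, 4.0, 2.0], T=8.0, t=2.0, a=-0.3)
```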

GA
GA, first proposed by Holland 33, is an artificial evolutionary algorithm. One of the main differences between GA and other meta-heuristic algorithms is that it works with a set of solutions rather than a single one. Because of the efficiency of GA in solving discrete optimization problems, researchers have adapted it to scheduling problems. The mechanisms of the GA are briefly described as follows: (1) Chromosome structure. In the considered single-machine scheduling problem, each feasible schedule is encoded as a chromosome containing n integers, each of which stands for a job. Integer i at the j-th position of the chromosome indicates that J_i is at the j-th position of the scheduling sequence. The chromosome structure thus encodes the sequence of jobs and their normal processing times.

A tabu list that is too short will lead to frequent repeated searches, while one that is too long will rule out search paths that might generate good solutions. The larger the problem, the longer the tabu list should be. In the current problem, we set the length of the tabu list according to the size of the problem, as follows,
where TL and n denote the length of the tabu list and the size of the problem, respectively. The tabu list is updated using the First-In-First-Out rule.
(4) Aspiration criterion. The aspiration criterion is a key step for the algorithm to realize global optimization. If there is a neighborhood solution superior to the current solution, we select it as the current solution whether or not the corresponding swap is in the tabu list, and then update the tabu list and the current best solution. (5) Stopping criterion. In the current problem, the termination condition is set according to the objective-control principle: if the current best solution is not improved within MNTS consecutive steps, the algorithm terminates. Here MNTS is a given number, and its value is tuned for each test instance by trading off solution quality against CPU time.
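The FIFO tabu list, the aspiration criterion and the MNTS stopping rule described above can be sketched together in a minimal tabu search over pairwise swaps; as a common variant, the aspiration here compares a tabu move against the best value found so far, and the toy objective is hypothetical.

```python
import itertools
from collections import deque

# Minimal TS sketch over pairwise swaps, illustrating the FIFO tabu list,
# the aspiration criterion and the MNTS stopping rule.
def tabu_search(seq, objective, TL=7, MNTS=20):
    current, best = list(seq), list(seq)
    best_val = objective(best)
    tabu = deque(maxlen=TL)          # FIFO: oldest swap is evicted first
    stall = 0                        # consecutive non-improving steps
    while stall < MNTS:
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            neigh = current[:]
            neigh[i], neigh[j] = neigh[j], neigh[i]
            val = objective(neigh)
            # Aspiration: a tabu swap is kept if it beats the best value.
            if (i, j) not in tabu or val < best_val:
                candidates.append((val, (i, j), neigh))
        if not candidates:           # every swap is tabu and none aspires
            break
        val, move, neigh = min(candidates, key=lambda c: c[0])
        current = neigh
        tabu.append(move)            # FIFO update of the tabu list
        if val < best_val:
            best, best_val, stall = neigh, val, 0
        else:
            stall += 1
    return best, best_val

# Toy objective (hypothetical): total displacement from the sorted order.
obj = lambda s: sum(abs(x - i) for i, x in enumerate(s))
sol, val = tabu_search([3, 0, 2, 1], obj)
```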

Hybrid algorithm. TS has a strong local search capability, while GA has a strong global search capability; HGTSA combines the two search methods together. Therefore, HGTSA has a powerful searching ability that ensures global optimization and a fast convergence rate. The flowchart of the HGTSA framework is shown in Fig. 3. The main steps of the HGTSA are as follows:
Step 1: Set the parameters of the HGTSA; choose the n_pop best solutions out of 2000 randomly generated solutions as the initial population.
Step 2: Evaluate the fitness function value of each chromosome.
Step 3: Select parent chromosomes using the roulette wheel method to execute the crossover and mutation operations. Select the best solution from the selected parent chromosomes to execute TS.
Step 3.1: Set the best solution from the selected parent chromosomes as the current solution π_c of TS.

Step 3.2: If the termination criterion of TS is satisfied, output the best solution π* of TS and go to Step 4; otherwise, go to Step 3.3.
Step 3.3: Generate candidate solutions according to the pairwise swap rule and construct the neighborhood N(π_c) by applying the dominance Property 4. If N(π_c) is non-empty, evaluate the solutions in N(π_c) and go to Step 3.4; otherwise, select the best non-tabu solution from the set UN(π_c) as the current solution π_c and go to Step 3.7.
Step 3.4: If the aspiration criterion of TS is satisfied, go to Step 3.5; otherwise, go to Step 3.6.


Computational experiments

The accuracy of the heuristics is measured by the percentage error

error% = (best solution − optimal solution) / optimal solution × 100%, (27)

where best solution is the solution obtained by GA or HGTSA, and optimal solution is the solution obtained by B&B or the two-stage BIP model. The results for the test instances are shown in Tables 4 and 5. Columns 1, 2, 3 and 4 describe the job size, the maintenance period, the maintenance duration and the learning index, respectively. In Table 4, columns 5-12 and 14-15 give the mean execution time (in seconds) for each problem and the best objective values obtained by the two-stage BIP model, B&B, GA and HGTSA, respectively; columns 13 and 16 give the percentage errors of GA and HGTSA, respectively. In Table 5, columns 5-8 and 10-11 give the mean execution time (in seconds) and the best objective values obtained by B&B, GA and HGTSA, respectively; columns 9 and 12 give the percentage errors of GA and HGTSA, respectively. The symbol "-" indicates that the corresponding algorithm cannot solve the problem. To evaluate the accuracy and efficiency of the proposed methods more intuitively, Figs. 5, 6, 7 and 8 are presented based on Tables 4 and 5. Figure 5 shows the mean execution times of the proposed methods. The mean execution times of GA and HGTSA for all test instances are depicted in Fig.
7. Although the mean execution time of HGTSA is longer than that of GA, both algorithms can solve large-scale instances of up to 1000 jobs within the maximum running time restriction. Figure 8 shows the optimization ability of GA and HGTSA on all test instances: the best solution obtained by HGTSA is superior to that obtained by GA for the instances with more than 100 jobs. The computational results also demonstrate that HGTSA, combining the advantages of GA and TS, has a strong global search ability. Figure 9 presents the convergence of GA and HGTSA for 1000 jobs with learning index 0.3 and shows that the TS with the special neighborhood makes HGTSA search much more deeply and efficiently than GA in each step of the iteration. Figures 8 and 9 indicate that the optimization ability of HGTSA is better than that of GA for medium-size and large-size problems, which reflects that GA is prone to falling into local optima, while HGTSA, based on GA and TS, has both a strong local search ability and a powerful global search ability.
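The percentage error of formula (27) is simple to compute; the values in the snippet below are hypothetical, not taken from Tables 4 and 5.

```python
# Percentage error of a heuristic solution relative to the exact optimum
# (formula (27)); the inputs below are hypothetical illustration values.
def error_pct(best, optimal):
    return (best - optimal) / optimal * 100.0

e = error_pct(103.0, 100.0)   # a heuristic value 3% above the optimum
```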

Conclusions
In this paper, we investigated a single-machine scheduling problem with periodic maintenance and a learning effect in which the makespan is minimized. We presented a new two-stage BIP model for the problem. Additionally, we derived some properties of the optimal schedule and applied them in the B&B algorithm. Considering that the problem is NP-hard in the strong sense, we introduced the GA and the HGTSA for medium- and large-scale problems. Extensive computational experiments demonstrated the effectiveness of the proposed methods. The proposed two-stage BIP model is powerful enough to solve instances with up to 20 or 25 jobs, and the B&B algorithm can find optimal solutions for problems with up to 30 jobs within the maximum running time restriction. Meanwhile, the Taguchi method was used to specify the optimal parameter levels to improve the performance of GA and HGTSA. Computational results showed that the proposed HGTSA, combining the advantages of TS and GA, is a promising and effective algorithm: it performed better than GA, with more accurate solutions and a faster convergence rate, on medium- and large-scale problems. Interested researchers may extend the problem to multiple machines or to uncertain environments in the future. In addition, HGTSA can be combined with other methods to solve multi-objective problems in the production field.

Figure 1. Gantt chart of the problem with periodic maintenance.

Figure 2. The basic workflow of TS.

Figure 4. Main effects plot of each parameter.

Figure 7. Comparison of the mean execution times of HGTSA and GA for all test instances (5-1000 jobs).
Step 3.5: Select the best solution as the current solution π_c. Update the best known solution π* of TS, then go to Step 3.7.

Step 3.6: Select the best solution from the non-tabu solutions as the current solution π_c, then go to Step 3.7.

Step 3.7: Update the tabu list, then go to Step 3.2.

Step 4: Merge the current population and the offspring chromosomes generated by the crossover operator, the mutation operator and TS.

Step 5: Evaluate the chromosomes according to their fitness values. Choose the n_pop best solutions as the next generation and update the best known solution.

Step 6: If the termination criterion of GA is satisfied, output the best solution and stop; otherwise, go to Step 3.
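A skeleton of the HGTSA main loop (Steps 1-6) is sketched below. The operators (order crossover, swap mutation) and the best-improvement swap pass standing in for the full TS of Step 3 are generic placeholders, not the paper's exact implementation, and the toy objective used in the test is hypothetical.

```python
import random

# Skeleton of the HGTSA main loop: selection, crossover, mutation, a local
# improvement pass on the elite (stand-in for TS), and elitist replacement.
def hgtsa(jobs, objective, n_pop=10, n_gen=30, seed=0):
    rng = random.Random(seed)

    def crossover(a, b):                      # order crossover (OX) sketch
        i, j = sorted(rng.sample(range(len(a)), 2))
        mid = a[i:j]
        rest = [x for x in b if x not in mid]
        return rest[:i] + mid + rest[i:]

    def mutate(s):                            # swap mutation
        s = s[:]
        i, j = rng.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    def swap_pass(s):                         # stand-in for the TS of Step 3
        best, best_val = s, objective(s)
        for i in range(len(s)):
            for j in range(i + 1, len(s)):
                c = s[:]
                c[i], c[j] = c[j], c[i]
                v = objective(c)
                if v < best_val:
                    best, best_val = c, v
        return best

    # Step 1: initial population of random permutations.
    pop = [rng.sample(jobs, len(jobs)) for _ in range(n_pop)]
    for _ in range(n_gen):                    # Step 6: fixed-length run here
        parents = sorted(pop, key=objective)[: n_pop // 2]   # Step 3
        children = [mutate(crossover(parents[0], rng.choice(parents)))
                    for _ in range(n_pop)]
        children.append(swap_pass(parents[0]))               # TS stand-in
        pop = sorted(pop + children, key=objective)[:n_pop]  # Steps 4-5
    return min(pop, key=objective)

# Toy objective (hypothetical): total displacement from the sorted order.
obj = lambda s: sum(abs(x - i) for i, x in enumerate(s))
best = hgtsa(list(range(5)), obj)
```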

Table 3. Best levels for the parameters.

Table 4. Performance comparison of the two-stage BIP method, B&B, GA and HGTSA for the test problems.