Using Simulated Annealing and Ant-Colony Optimization Algorithms to Solve the Scheduling Problem

The scheduling problem is one of the most challenging problems faced in many different areas of everyday life. It can be formulated as a combinatorial optimization problem, and it has been solved with various meta-heuristics and intelligent algorithms. In this paper we present a solution to the scheduling problem using two different heuristics, namely Simulated Annealing and Ant Colony Optimization. A study comparing the performance of both solutions is described and the results are analyzed.


Introduction
A wide variety of scheduling (timetabling) problems have been described in the computer science literature during the last decade. Some of these problems are: the weekly course scheduling done at schools and universities [1], examination timetables [2], airline crew and flight scheduling problems [3], job and machine scheduling [4], train (rail) timetabling problems [5], and many others. Many definitions of the scheduling problem exist. A general definition was given by A. Wren [6] in 1996 as: "The allocation, subject to constraints, of given resources to objects being placed in space and time, in such a way as to satisfy as nearly as possible a set of desirable objectives". Cook [7] proved the timetabling problem to be NP-complete in 1971; Karp [8] then showed in 1972 that the general problem of constructing a schedule for a set of partially ordered tasks among k processors (where k is a variable) is NP-complete. Four years later, a proof [9] showed that even a further restriction of the timetabling problem (e.g. the restricted timetable problem) still leads to an NP-complete problem. This means that the running time of any algorithm currently known to guarantee an optimal solution is an exponential function of the size of the problem. The complexity results of some of the scheduling problems were classified and simulated in [10] and their reduction graphs were plotted.
Exam scheduling is still done manually in most universities, by taking solutions from previous years and altering them to meet the present constraints related to the number of students attending the upcoming exams, the number of exams to be scheduled, and the rooms and time available for the examination period. We propose an improved methodology in which the generation of examination schedules is feasible, automated, faster, and less error prone.
Even though we present our solution in the context of the exam scheduling problem, it can be generalized to solve many different scheduling problems with minor modifications regarding the variables related to the problem at hand and the resources available.
An informal definition of the exam scheduling problem is the following: a combinatorial optimization problem that consists of scheduling a number of examinations in a given set of exam sessions so as to satisfy a given set of constraints. The basic challenge faced when solving this problem is to "schedule examinations over a limited time period so as to avoid conflicts and to satisfy a number of side-constraints" [2]. These constraints can be split into hard and soft constraints: the hard constraints must be satisfied in order to produce a feasible or acceptable solution, while the violation of soft constraints should be minimized, since they provide a measure of how good the solution is with regard to the requirements [11].
The main hard constraints [12] are given below:
1. No student is scheduled to sit for more than one exam simultaneously, so any two exams having students in common should not be scheduled in the same period.
2. An exam must be assigned to exactly one period.
3. Room capacities must not be violated, so no exam may take place in a room that does not have a sufficient number of seats.
As for the soft constraints, they vary between academic institutions and depend on their internal rules [13]. The most common soft constraints are:
1. Minimize the total examination period or, more commonly, fit all exams within a shorter time period.
2. Increase students' comfort by spacing the exams fairly and evenly across the whole group of students. It is preferable for students not to sit for exams occurring in two consecutive periods.
3. Schedule the exams in specific order, such as scheduling the maths and sciences related exams in the morning time.
4. Allocate each exam to a suitable room. Lab exams, for example, should be held in the corresponding labs.
5. Some exams should be scheduled at the same time.
We will attempt to satisfy all the hard constraints listed above. Regarding the soft constraints, we will address the first two points, whereby we try to enforce the following:
1. Shorten the examination period as much as possible.
2. A student has no more than one exam in two consecutive time-slots of the same day.
In other words, we will try to create an examination schedule spread over the shortest period of time, and therefore use the minimum number of days to schedule all the exams. We also spread the exams shared by students across the schedule in such a way that they are not placed in consecutive time-slots. This paper is organized as follows. Section 2 presents the mathematical formulation of the exam scheduling problem; Section 3 describes the main structures of the SA (Simulated Annealing) and ACO (Ant Colony Optimization) algorithms. In Section 4 we list and describe some of the heuristics and algorithms used in solving the scheduling problem. In Sections 5 and 6 we present and discuss our approach to solving the exam scheduling problem using the Simulated Annealing and Ant Colony Optimization algorithms respectively, and the empirical results of both solutions are shown in Section 7. Section 8 is a performance analysis and comparative study of the two solutions. Finally we give a brief conclusion.

Problem Formulation
We will use a variation of D. de Werra's [14] definition of the timetabling problem. Note that a class consists of a set of students who follow exactly the same program. So let:
• C = {c 1, . . . , c n} be a set of classes.
Since all the students registered in a class c i follow the same program, we can associate with each class c i an exam e i to be included in the examination schedule. So all the students registered in class c i will be required to sit the exam e i. We will use the notation below:
• E = {e 1, . . . , e n} is the set of exams;
• S is the set of students;
• P = {p 1, . . . , p m} is the set of available periods;
• ep iu = 1 if exam e i is scheduled in period p u, and ep iu = 0 otherwise.
We shall assume that all exam sessions have the same duration (say one period of two hours). We recap that the problem is: given a set of periods, assign each exam to some period in such a way that no student has more than one exam at the same time and the room capacity is not breached. We therefore have to make sure that the equations (constraints) below are always satisfied [14]:

1. Every exam is assigned to exactly one period:
∀e i ∈ E : Σ p u ∈ P ep iu = 1

2. No student sits for more than one exam at the same time; for every student s j ∈ S, any two exams e i, e k ∈ E taken by s j, and every period p u ∈ P:
ep iu + ep ku ≤ 1

3. The number of exams scheduled in a period p u cannot exceed the number of available rooms:
Σ e i ∈ E ep iu ≤ |R|
where |R| is equal to the number of rooms allocated for exams during p u.

4. Every class has its exam in the schedule:
∀c j ∈ C ∃ e i ∈ E, p u ∈ P : ep iu = 1

These equations reveal only the hard constraints, which are critical for reaching a correct schedule. They must always hold, otherwise we end up with an erroneous schedule. We aim to find a schedule that meets all the hard constraints while adhering as closely as possible to the soft constraints.
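As an illustration of the constraints above (not part of the original formulation), a minimal Python sketch can verify constraints 1 and 3 for a candidate schedule, assuming each exam is mapped to a single period and carries its set of registered students; the function name and representation are our own:

```python
def violates_hard_constraints(schedule, students_of, rooms_per_period):
    """Check the hard constraints for a candidate schedule.

    schedule         : dict mapping exam id -> period index (each exam appears
                       exactly once by construction, covering constraints 1, 4)
    students_of      : dict mapping exam id -> set of student ids
    rooms_per_period : number of rooms |R| available in each period
    Returns a list of human-readable violation messages (empty if feasible).
    """
    violations = []
    by_period = {}
    for exam, period in schedule.items():
        by_period.setdefault(period, []).append(exam)
    for period, exams in by_period.items():
        # Constraint 3: no more exams in a period than there are rooms.
        if len(exams) > rooms_per_period:
            violations.append(
                f"period {period}: {len(exams)} exams for {rooms_per_period} rooms")
        # Constraint 2: no student sits two exams in the same period.
        for i in range(len(exams)):
            for j in range(i + 1, len(exams)):
                common = students_of[exams[i]] & students_of[exams[j]]
                if common:
                    violations.append(
                        f"period {period}: {exams[i]} and {exams[j]} "
                        f"share {len(common)} student(s)")
    return violations
```

A feasible schedule yields an empty list; any clash or capacity breach produces one message per violation.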

Simulated Annealing
The idea behind simulated annealing (SA) comes from a physical process known as annealing [15]. Annealing happens when a solid is heated past its melting point and then cooled. If the liquid is cooled slowly enough, large crystals are formed; if it is cooled quickly, the crystals will contain imperfections. The cooling process [16] works by gradually dropping the temperature of the system until it converges to a steady, frozen state. SA exploits this analogy with physical systems to solve combinatorial optimization problems.
We define S to be the solution space, the finite set of all available solutions of our problem, and f as the real-valued cost function defined on members of S. The problem [17] is to find a solution or state, i ∈ S, which minimizes f over S. SA is a type of local search algorithm that starts with an initial solution, usually chosen at random, generates a neighbour of this solution, and then calculates the change in the cost f. If a reduction in cost is found, the current solution is replaced by the generated neighbour. Otherwise (unlike local search and descent algorithms such as hill climbing), an uphill move that leads to an increase in the value of f, i.e. a worse solution, may still be accepted: the move is accepted or rejected depending on a sequence of random numbers, but with a controlled probability. This is done so that the system does not get trapped in what is called a local minimum [18] (as opposed to the global minimum, where the near-optimal solution is found). The probability of accepting a move which causes an increase δ in f is called the acceptance function and is normally set to:

P(δ, T) = e^(−δ/T)    (5)

where T is a control parameter which corresponds to the temperature in the analogy with physical annealing [17]. We can see from (5) that, as the temperature of the system decreases, the probability of accepting a worse move decreases, and when the temperature reaches zero only better moves are accepted, which makes simulated annealing act like a hill climbing algorithm [16] at this stage. Hence, to avoid being prematurely trapped in a local minimum, SA is started with a relatively high value of T. The algorithm proceeds by attempting a certain number of neighborhood moves at each temperature, while the temperature parameter is gradually dropped. Algorithm 1 shows the SA algorithm as listed in [17].

Algorithm 1 Simulated Annealing Algorithm
Select an initial solution i ∈ S
Select an initial temperature T 0 > 0
Select a temperature reduction function α
Repeat
    Set repetition counter n = 0
    Repeat
        Generate state j, a neighbor of i
        Calculate δ = f(j) − f(i)
        If δ < 0 then i = j
        else if random(0, 1) < e^(−δ/T) then i = j
        n = n + 1
    Until n = number of repetitions allowed at each temperature
    Set T = α(T)
Until stopping criterion is true

There are many important concepts in SA which are crucial for building efficient solutions, among them: the choice of the starting temperature T 0, the temperature decrement scheme α, the final temperature (stopping criterion), and the neighborhood structure. There are many ways in which these parameters are set, and they are highly dependent on the problem at hand and the quality of the final solution we aim to reach.
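Algorithm 1 translates directly into code. The sketch below is an illustration only (it minimizes a toy integer cost function, not the examination schedule; the parameter values and helper names are our own):

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=10.0, alpha=0.95,
                        reps_per_temp=50, t_final=1e-3, rng=None):
    """Generic SA loop following Algorithm 1: always accept improving moves,
    accept worsening moves with probability exp(-delta / T)."""
    rng = rng or random.Random(0)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_final:
        for _ in range(reps_per_temp):
            candidate = neighbour(current, rng)
            delta = cost(candidate) - current_cost
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # geometric temperature reduction function
    return best, best_cost

# Toy problem: minimise f(x) = x^2 over integers; neighbours are x +/- 1.
best, best_cost = simulated_annealing(
    initial=40,
    neighbour=lambda x, rng: x + rng.choice((-1, 1)),
    cost=lambda x: x * x)
```

The same loop applies to the scheduling problem once `neighbour` and `cost` are replaced by a schedule move and the schedule cost function.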

Ant Colony Optimization Algorithm
The second algorithm (ACO) is a meta-heuristic method proposed by Marco Dorigo [19] in 1992 in his PhD thesis on optimization, learning and natural algorithms. It is used for solving computational problems that can be reduced to finding good paths through graphs. The ant colony algorithms are inspired by the behavior of natural ant colonies, whereby ants solve their problems by multi-agent cooperation using indirect communication through modifications of the environment, via the distribution and dynamic change of information (by depositing pheromone trails). The weight of these trails reflects the collective search experience exploited by the ants in their attempts to solve a given problem instance [20]. Many problems have been successfully solved by ACO, such as the satisfiability problem, the scheduling problem [21], the traveling salesman problem [22], and the frequency assignment problem (FAP) [23]. The ants or agents apply a stochastic local decision policy when they move. This policy is based on two parameters, called trails and attractiveness [24]. Each ant incrementally constructs a solution to the problem as it moves and, when it completes a solution, the ant evaluates it and modifies the trail value on the components used in its solution, which helps direct the search of future ants [24]. Another mechanism used in ACO algorithms is trail evaporation. This mechanism decreases all trail levels after each iteration of the algorithm. Trail evaporation ensures that unlimited accumulation of trails on some component is avoided and therefore the chances of getting stuck in local optima are decreased [25]. A further, optional mechanism found in ACO algorithms is daemon actions.
"Daemon actions can be used to implement centralized actions which cannot be performed by single ants, such as the invocation of a local optimization procedure, or the update of global information to be used to decide whether to bias the search process from a non-local perspective" [25].
The pseudo-code [26] for the ACO is shown below:

Algorithm 2 Ant Colony Optimization Algorithm
Set parameters, initialize pheromone trails
while termination condition not met do
    ConstructAntSolutions
    ApplyLocalSearch (optional)
    UpdatePheromones
end while
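The pseudo-code above can be instantiated on a toy problem. The sketch below is an illustration only (not the paper's implementation): ants build bit strings guided by one pheromone value per bit, and the objective, standing in for a real scheduling cost, is to maximize the number of ones:

```python
import random

def aco_onemax(n_bits=8, n_ants=10, n_iterations=40, rho=0.1, rng=None):
    """Toy ACO following Algorithm 2: tau[i] is the pheromone-based
    preference for setting bit i to 1."""
    rng = rng or random.Random(1)
    tau = [0.5] * n_bits                      # initialize pheromone trails
    best, best_val = None, -1
    for _ in range(n_iterations):
        # ConstructAntSolutions: each ant samples bit i with probability tau[i].
        ants = [[1 if rng.random() < tau[i] else 0 for i in range(n_bits)]
                for _ in range(n_ants)]
        # (ApplyLocalSearch would go here; omitted in this sketch.)
        it_best = max(ants, key=sum)
        if sum(it_best) > best_val:
            best, best_val = it_best, sum(it_best)
        # UpdatePheromones: evaporate, then reinforce toward the best solution.
        for i in range(n_bits):
            tau[i] = (1 - rho) * tau[i] + rho * best[i]
    return best, best_val

best, best_val = aco_onemax()
```

The pheromone vector gradually concentrates probability mass on the components of good solutions, which is exactly the mechanism reused for the scheduling problem later in the paper.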

Combinatorial Optimization Problem
Before going into the details of ACO, we will define a model [26] P = (S, Ω, f) of a combinatorial optimization problem (COP). The model consists of the following:
• a search space S defined over a finite set of discrete decision variables X i, i = 1, . . . , n;
• a set Ω of constraints among the variables;
• an objective function f : S → R to be minimized.
"A feasible solution s ∈ S is a complete assignment of values to variables that satisfies all constraints in Ω" [26].

The Pheromone Model
We will use the model of the COP described in Subsection 3.2.1 above to derive the pheromone model used in ACO [27]. In order to do this we go through the following steps:
1. We first instantiate a decision variable X i = v i^j (i.e. a variable X i with a value v i^j assigned from its domain D i).
2. We denote this instantiation by c ij and call it a solution component.
3. We denote by C the set of all possible solution components.
4. We then associate with each component c ij a pheromone trail parameter T ij.
5. We denote by τ ij the value of the pheromone trail parameter T ij and call it the pheromone value.
6. This pheromone value is updated by the ACO algorithm during and after the search iterations.
These values of the pheromone trails will allow us to model the probability distribution of the components of the solution. The pheromone model of an ACO algorithm is closely related to the model of a combinatorial optimization problem. Each possible solution component, or in other words each possible assignment of a value to a variable defines a pheromone value [26]. As described above, the pheromone value τ ij is associated with the solution component c ij , which consists of the assignment X i = v j i .
ACO uses artificial ants to build a solution to a COP by traversing a fully connected graph G C (V, E) called the construction graph [26]. This graph is obtained from the set of solution components either by representing these components by the set of vertices V of G c or by the set of its edges E.
The artificial ants move from vertex to vertex along the edges of the graph G C (V, E), incrementally building a partial solution while they deposit a certain amount of pheromone on the components, that is, either on the vertices or on the edges that they traverse.
The amount ∆τ of pheromone that ants deposit on the components depends on the quality of the solution found. In subsequent iterations, the ants follow the path with high amounts of pheromone as an indicator to promising regions of the search space [26].
To model the ants' behaviour formally, we consider the finite set C of available solution components [27]. We start from an empty partial solution s p = ∅ and extend it at each step by adding a feasible solution component from the set N(s p) ⊆ C, where N(s p) denotes the set of components that can be added to the current partial solution s p without violating any of the constraints in Ω. The process of constructing solutions can thus be regarded as a walk through the construction graph G C = (V, E). The choice of a solution component from N(s p) is made probabilistically at each construction step. Before describing the rules controlling this probabilistic choice, we describe an experiment run on real ants from which these probabilistic choices were derived, called the double-bridge experiment.

The Double-Bridge Experiment
In the double-bridge experiment, two bridges connect the food source to the nest, one of which is significantly longer than the other. The ants choosing the shorter bridge by chance are, of course, the first to reach the nest [27]. Therefore, the short bridge receives pheromone earlier than the long one, which increases the probability that further ants select it rather than the long one, due to the higher pheromone concentration on it (Fig. 1). Based on this observation, a model was developed to describe the probability of choosing one bridge over the other. Assuming that at a given moment m 1 ants have used the first bridge and m 2 ants have used the second bridge, the probability p 1 for an ant to choose the first bridge is given by the equation [27] below:

p 1 = (m 1 + k)^h / [(m 1 + k)^h + (m 2 + k)^h]

where k and h are parameters fitted to the experimental data.
Obviously the probability p 2 of choosing the second bridge is: p 2 = 1 − p 1 .
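The choice model can be written down directly. In the sketch below, the default values of k and h are illustrative placeholders, not the fitted values from the original experiments:

```python
def p_first_bridge(m1, m2, k=20, h=2):
    """Probability that the next ant takes bridge 1 after m1 ants used
    bridge 1 and m2 used bridge 2 (k, h are fitted to experimental data)."""
    a = (m1 + k) ** h
    b = (m2 + k) ** h
    return a / (a + b)
```

With no prior traffic the two bridges are equally likely, and the more ants have crossed a bridge, the more likely the next ant is to follow them; p 2 = 1 − p 1 holds by construction.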
As stated above, the choice of a solution component is done probabilistically at each construction step. Although the exact rules for the probabilistic choice of solution components vary across the different ACO variants, the best known rule is the one of ant systems (AS).

Ant System (AS)
The Ant System [28,25] is the first ACO algorithm; in it, the pheromone values are updated at each iteration by all the m ants that have built a solution in the iteration itself. Using the traveling salesman problem (TSP) as an example model, the pheromone τ ij is associated with the edge joining cities i and j, and it is updated as follows:

τ ij ← (1 − ρ) · τ ij + Σ k=1..m ∆τ ij^k

where ρ is the evaporation rate, m is the number of ants, and ∆τ ij^k is the quantity of pheromone laid on edge (i, j) by ant k:

∆τ ij^k = Q / L k if ant k used edge (i, j) in its tour, and 0 otherwise,

where Q is a constant used as a system parameter (rewarding high quality solutions with low cost), and L k is the length of the tour constructed by ant k [28]. In the construction of a solution, each ant selects the next city to be visited through a stochastic mechanism. When ant k is in city i and has so far constructed the partial solution s p, the probability of going to city j is given by:

p ij = τ ij^α · η ij^β / Σ c il ∈ N(s p) τ il^α · η il^β if c ij ∈ N(s p), or zero otherwise. The parameters α and β control the relative importance of the pheromone versus the heuristic information η ij, which is given by:

η ij = 1 / d ij

where d ij is the distance between cities i and j.
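The transition rule amounts to a roulette-wheel selection over the feasible cities. The sketch below is an illustration with arbitrary parameter values (the names and data layout are our own):

```python
import random

def as_next_city(i, unvisited, tau, dist, alpha=1.0, beta=2.0, rng=None):
    """Ant System transition rule: choose the next city j with probability
    proportional to tau[i][j]^alpha * eta[i][j]^beta, where eta = 1/distance."""
    rng = rng or random.Random(0)
    weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
               for j in unvisited]
    total = sum(w for _, w in weights)
    r = rng.random() * total          # roulette-wheel draw
    for j, w in weights:
        r -= w
        if r <= 0:
            return j
    return weights[-1][0]             # numerical safety fallback
```

With uniform pheromone, a city ten times closer is chosen about a hundred times more often at β = 2, which is exactly the bias toward short edges the heuristic information is meant to provide.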

Ant Colony System (ACS)
In ACS [28], a local pheromone update mechanism is introduced in addition to the offline pheromone update performed at the end of the construction process. The local pheromone update is performed by all the ants after each construction step; each ant applies it only to the last edge traversed:

τ ij ← (1 − φ) · τ ij + φ · τ 0

where φ ∈ (0, 1] is the pheromone decay coefficient, and τ 0 is the initial value of the pheromone. The local pheromone update decreases the pheromone concentration on the traversed edges in order to encourage subsequent ants to choose other edges and, hence, to produce different solutions [28].
The offline pheromone update is applied at the end of each iteration by only one ant, which can be either the iteration-best or the best-so-far ant:

τ ij ← (1 − ρ) · τ ij + ρ · ∆τ ij if edge (i, j) belongs to the best tour, and τ ij unchanged otherwise,

where ∆τ ij = 1/L best and L best is the length of the best tour. The next subsection describes the Ant System algorithm in more detail and briefly presents its complexity bounds.
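The two ACS update rules can be sketched as follows (an illustration with arbitrary parameter values; the function names are our own):

```python
def acs_local_update(tau, i, j, phi=0.1, tau0=0.01):
    """ACS local update for the edge just traversed:
    tau_ij <- (1 - phi) * tau_ij + phi * tau0. Pulls the edge back toward
    tau0, encouraging later ants in the same iteration to explore elsewhere."""
    tau[i][j] = (1 - phi) * tau[i][j] + phi * tau0

def acs_offline_update(tau, best_tour, best_length, rho=0.1):
    """ACS offline update applied by the best ant only: reinforce the edges
    of the best tour with a deposit of 1 / L_best."""
    deposit = 1.0 / best_length
    for i, j in zip(best_tour, best_tour[1:]):
        tau[i][j] = (1 - rho) * tau[i][j] + rho * deposit
```

Note the opposite effects: the local rule lowers pheromone on edges as they are used within an iteration, while the offline rule raises it only on the edges of the best tour found.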

Complexity Analysis of Ant Colony Optimization
Now that we have described the characteristics of Ant Systems (AS), we give their algorithm (Algorithm 3). We use the following notation: for some state i, j is any state that can be reached from i; η ij is the heuristic information between i and j, calculated depending on the problem at hand; and τ ij is the pheromone trail value between i and j. Walter J. Gutjahr [29] analyzed the runtime complexity of two ACO algorithms: the Graph-Based Ant System (GBAS) and the Ant System. In both analyses, the results showed computation times of order O(m log m) for reaching the optimum solution, where m is the size of the problem instance. These results were based on basic test functions.
Having described SA and ACS in the previous sections, we now apply the two algorithms to the exam scheduling problem. We might not find the optimal solution (the problem being NP-complete) in which all constraints are completely satisfied, but we will attempt to reach a near-optimal solution after several iterations.

Related Work
During the past years, many algorithms and heuristics have been used to solve the timetabling problem, varying from simple local search algorithms to variations of genetic algorithms and graph representation approaches. We list some of the recognized techniques which have been shown to find acceptable solutions to the scheduling problem:
1. Simulated Annealing [30]
2. Ant-Colony Optimization [31]
3. Tabu Search [32]
4. Graph Coloring [33]
5. Hybrid Heuristic Approaches [34,32]
Simulated Annealing (SA) and Ant Colony Optimization (ACO) were described in the previous section.
Tabu Search (TS) is a heuristic method originally proposed by Glover [35] in 1986, used to solve various combinatorial problems. TS continues the local search past a local optimum by allowing non-improving moves. The basic idea is to prevent cycling back to previously visited solutions by the use of memories, called tabu lists, that record the recent history of the search. This is achieved by declaring tabu (disallowing) moves that reverse the effect of recent moves.
As for the Graph Coloring problem, it can be described as follows: suppose we have as input a graph G with vertex set V and edge set E, where the ordered pair (R, S) ∈ E if and only if an edge exists between the vertices R and S. A k-coloring of G is an assignment of the integers {1, 2, . . . , k} (the colors) to the vertices of G in such a way that neighbors receive different integers; the chromatic number of G is the smallest k such that G has a k-coloring. Optimizing timetabling solutions using Graph Coloring amounts to partitioning the vertices into a minimum number of sets in such a way that no two adjacent vertices are placed in the same set, and then assigning a different color to each set of vertices.
In the Hybrid Approach, the idea is to combine more than one algorithm or heuristic and apply them to the same optimization problem in order to reach a better and more feasible solution. Sometimes the heuristics are combined into a new heuristic and the problem is solved using this new heuristic. In other cases the different heuristics are applied in phases, each phase applying one of the heuristics to solve a part of the optimization problem. Duong T.A. and Lam K.H. [12] presented a solution method for examination timetabling consisting of two phases: a constraint programming phase to provide an initial solution, and a simulated annealing phase with a Kempe chain neighborhood. They also refined mechanisms that helped determine some crucial cooling schedule parameters. In [33], a method using Graph Coloring was developed for optimizing solutions to the timetabling problem; the eleven course timetabling test data-sets used were introduced by Socha K. and Sampels M.

Our Approach Using SA
In this paper, we provide a general solution that allows us to produce examination schedules meeting the various academic rules of universities. We apply our method to a simple instance of the scheduling problem, an example examination schedule with the following attributes:
1. There are exactly 4 examination periods (time-slots) in each examination day.

2. We have a fixed number of rooms, equal to 3, which can be used to hold the exams.
3. There are 24 exams to be scheduled in a total examination duration of 2 days.
4. Room allocation is maximized: available rooms during the examination period should be allocated to hold exams.
All the following work was implemented using Matlab. We start by building an initial solution for the schedule. The schedule is represented by an m × n matrix denoted by Sched[i,j]. It holds a set S ij of exams scheduled on day d i and period p j, where i = 1, 2, . . . , m and j = 1, 2, . . . , n. Hence the matrix Sched[i,j] has the following properties:
• A number of columns n = 4, since we have exactly 4 examination periods (time-slots) per day.
• A number of rows m ≥ 1 depending on the number of exams to be scheduled.
• |S ij | = 3 since we have 3 examination rooms that we wish to use at each examination period.
In our example, the 24 exams are scheduled over 2 examination days with 4 examination periods each day. This is depicted in Fig. 2. The exams are denoted by the letter E concatenated with the course code; for example, if we have a course code CS111 then the exam code for this course will be ECS111. Exam E9 in the figure is scheduled from 11:00 am to 1:00 pm of Day 1. As stated before, each day we have 3 exams in each period, because we have 3 examination rooms available and we wish to maximize their utilization by always allocating non-scheduled exams to the empty rooms.
Depending on the number of exams to be scheduled, a case frequently appears where some rooms are not allocated to any exam on the final examination day. This is expected, since the number of exams might not be a multiple of the size of the matrix Sched[i,j]. We accommodate this by adding virtual exams in the remaining empty cells while building our initial solution. These virtual exams have no conflicts whatsoever with any of the other exams. This is done to allow the algorithm to run on Matlab, since it is necessary to fill in all matrix cells.
In some other cases, the examination rooms are vast halls which might hold more than one exam in one period. To account for this, we consider the hall to be multiple examination rooms (the exact multiple depends on the number of seats in the hall). We have provided a function ReturnConflicts(Exam1, Exam2) that takes 2 exams as parameters and returns an integer equal to the number of students taking both exams. Using the notation of Subsection 2.1, ReturnConflicts(e i, e j) is the number of students registered in both classes c i and c j. We start by inserting the exams to be scheduled into the matrix Sched[i,j] randomly, and we calculate the cost of this random solution.
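The Matlab implementation is not listed in the paper; a Python sketch of the same two building blocks (the conflict count and the random initial solution padded with virtual exams) might look as follows, with names and representation chosen for illustration:

```python
import random

def return_conflicts(exam1, exam2, students_of):
    """Number of students taking both exams (the paper's
    ReturnConflicts(Exam1, Exam2))."""
    return len(students_of[exam1] & students_of[exam2])

def initial_schedule(exams, n_days=2, n_periods=4, n_rooms=3, rng=None):
    """Random initial solution: an n_days x n_periods grid where each cell
    holds n_rooms exams; empty seats are padded with conflict-free virtual
    exams so that every cell of the matrix is filled."""
    rng = rng or random.Random(0)
    cells = n_days * n_periods * n_rooms
    pool = list(exams) + [f"VIRTUAL{i}" for i in range(cells - len(exams))]
    rng.shuffle(pool)
    it = iter(pool)
    return [[[next(it) for _ in range(n_rooms)]
             for _ in range(n_periods)] for _ in range(n_days)]
```

With 24 exams the 2 × 4 × 3 grid is filled exactly; with fewer exams the remaining cells receive virtual exams, matching the padding described above.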
The cost of an examination schedule is the sum of:
• the cost of its hard constraints, obtained by checking for any students having clashing exams (more than one exam in the same time-slot), and
• a fraction of the cost of its soft constraints, obtained by multiplying the cost of these soft constraints by a decimal ε ∈ ]0,1[.
This is illustrated below:

Total Cost = cost of hard constraints + ε · cost of soft constraints    (11)

It remains to explain how to calculate the cost of the hard and soft constraints of the examination schedule. The cost of the hard constraints is the sum of the values returned by the function ReturnConflicts() over all pairs of exams occurring in the same time-slot of the same day in the examination schedule. As for the soft constraints, we need to make sure that we space the exams fairly and evenly across the whole group of students; thus, we check whether a student has more than one exam in two consecutive time-slots of the same day. The cost of the soft constraints is the sum of the values returned by ReturnConflicts() over all pairs of exams occurring in consecutive time-slots of the same day. To control the relative importance of the hard constraints over the soft constraints, we multiply the cost of the soft constraints by a decimal ε ∈ ]0, 1[.
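The cost computation of equation (11) can be sketched in Python as follows (an illustration of the rule just described; the schedule layout and the ε value are our own choices):

```python
def schedule_cost(sched, students_of, eps=0.3):
    """Total cost = hard cost + eps * soft cost, as in equation (11).
    Hard cost: conflicts between exams sharing a time-slot.
    Soft cost: conflicts between exams in consecutive time-slots of a day.
    sched is a nested list: days -> time-slots -> exams in that slot."""
    def conflicts(e1, e2):
        return len(students_of.get(e1, set()) & students_of.get(e2, set()))
    hard = soft = 0
    for day in sched:
        for slot in day:
            # All pairs within one time-slot clash at the same time.
            for a in range(len(slot)):
                for b in range(a + 1, len(slot)):
                    hard += conflicts(slot[a], slot[b])
        # Pairs across consecutive time-slots of the same day.
        for s1, s2 in zip(day, day[1:]):
            for e1 in s1:
                for e2 in s2:
                    soft += conflicts(e1, e2)
    return hard + eps * soft
```

Virtual exams simply carry an empty student set, so they contribute nothing to either term.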
Once we have defined how to calculate the cost of the solution at hand, we can use the SA algorithm to iterate over neighbor solutions with the aim of reaching solutions of better cost. This is described in the following subsections.

Choosing a starting temperature
We used the tuning-for-initial-temperature method described in [41], whereby we start at a very high temperature and cool rapidly until about 60% of worsening moves are being accepted. We then use this temperature as T 0.

Temperature Decrement
We used a method first suggested by Lundy [42] in 1986 to decrement the temperature. The method consists of doing only one iteration at each temperature, but decreasing the temperature very slowly, according to:

T i+1 = T i / (1 + β · T i)

where β is a suitably small value and T i is the temperature at iteration i. In our test case we take β equal to 0.001. Another possible approach to the temperature decrement is to dynamically change the number of iterations as the algorithm progresses [16].
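The schedule can be written as a small generator; note that the recurrence has the closed form 1/T i+1 = 1/T i + β, so roughly (1/T f − 1/T 0)/β iterations are performed in total (the sketch and its defaults are illustrative):

```python
def lundy_cooling(t0, beta=0.001, t_final=0.005):
    """Lundy & Mees cooling schedule: one SA iteration per temperature,
    with T_{i+1} = T_i / (1 + beta * T_i); yields temperatures until T_f."""
    t = t0
    while t > t_final:
        yield t
        t = t / (1 + beta * t)
```

Each yielded temperature drives exactly one neighborhood move of the SA loop, which is how the method trades many temperature levels for few repetitions per level.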

Final Temperature
As a final temperature T f, we chose a suitably low temperature at which the system freezes. We used results similar to those found in [12], where many experiments using SA were run to solve the university timetabling problem. In [12] each experiment was done using a different final temperature, namely 0.5, 0.05, 0.005, 0.0005 and 0.00005, and the fixed value T f = 0.005 was chosen since this temperature returned the best solution cost. In our solution we used the same T f, even though any temperature T ∈ [0.005, 1] would have given very close results.
But our stopping criterion did not depend only on T f. A check was also made on consecutive solutions where no moves appeared to improve the cost: whenever we received the same cost over more than T 0/4 iterations, we stopped, since we had probably reached a best-case solution. Of course, we would also stop whenever we reached a schedule with cost equal to zero, since that would be an optimal solution.

Neighborhood structure
In SA, a neighbor solution s' of s is usually any acceptable solution that can be reached from s. In the context of the scheduling problem, a neighbor s' of the current schedule s is another schedule in which the exams have been distributed differently starting from s. This works in practice, but we improved it by excluding some schedule configurations which return very high and impractical costs. This was done by adding these configurations to a black-list, so that whenever such configurations appeared during the running time of the algorithm they were skipped and a search for new neighbors with a different configuration was launched. Although this is naturally controlled in the SA algorithm by the acceptance probability of neighbor solutions, excluding such high-cost solutions can save many unsatisfactory iterations and therefore a large amount of processing time. The results of running the SA algorithm on the scheduling problem are described in Section 7.

ACO Applied To The Scheduling Problem

Our Approach Using ACO
In this approach we use the same definition of the scheduling problem as the one described in Sections 1 and 2. Furthermore, we consider the same example problem as in Subsection 5.1, whereby Sched[i,j] denotes the examination schedule we are building, and ReturnConflicts(Exam1, Exam2) is the function returning an integer equal to the number of students taking both Exam1 and Exam2. We also recap that we are trying to schedule 24 exams over 2 examination days.
To be able to use the ACO algorithm on this problem, we create a 24 × 24 matrix to hold the pheromone values between the exams and call it PhMatrix.
The pheromone values in the timetabling problem are used differently from the TSP example described before: the relative position of an exam depends not only on its direct predecessor and successor in the schedule, but also on all the exams scheduled within the same time-slot of the same day (that is, in the same set S_ij), and therefore the costs are calculated differently in the two problems. The difference is highlighted in the following example. To calculate the cost of going from a city i to a city j in a TSP, at each step we only need the distance between these two cities, without considering the cities we have previously visited. On the other hand, to calculate the cost of scheduling exams in the same set S_ij, we must check for the conflicts between all the exams in this set, at each step when adding an exam to the set.
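The difference in cost structure can be shown in a few lines. Here `conflicts` stands in for the paper's ReturnConflicts function; the helper name is ours. Unlike a TSP step, which costs only the distance between two cities, adding an exam to a set costs the conflicts with every exam already in it:

```python
def set_insertion_cost(conflicts, current_set, new_exam):
    """Cost of adding new_exam to a set S_ij: the sum of student
    conflicts with *every* exam already scheduled in that set."""
    return sum(conflicts(e, new_exam) for e in current_set)
```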
In the context of the scheduling problem, the ants will therefore decide which exams can feasibly be placed in the same set S_ij of the schedule. We can have as many exams in each S_ij as there are available rooms.
PhMatrix is first initialized so that the values τ_ij : i, j ∈ {1, . . . , n} are all equal to 1. The attractiveness η_ij is defined in terms of the conflicts between exams E_i and E_j (the fewer the conflicts, the higher η_ij). At each iteration the ants start from a new source (exam) and build their solution (schedule). An ant chooses an exam as a source, then moves to the exam with the highest probability according to (8). Clearly, during the first iteration the ants will choose the next exam having the minimum number of conflicts (highest η_ij) with the previous one, since the pheromone values are all equal.
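A sketch of the transition rule follows. Since the exact form of (8) is not reproduced here, we assume the standard ACO rule where the selection probability is proportional to τ_ij^α · η_ij^β; the exponents and the roulette-wheel selection are our assumptions.

```python
import random

def choose_next_exam(current, candidates, tau, eta, alpha=1.0, beta=1.0):
    """Assumed form of transition rule (8): pick the next exam with
    probability proportional to tau^alpha * eta^beta."""
    weights = [(tau[current][j] ** alpha) * (eta[current][j] ** beta)
               for j in candidates]
    total = sum(weights)
    # roulette-wheel selection proportional to the weights
    r, acc = random.random(), 0.0
    for exam, w in zip(candidates, weights):
        acc += w / total
        if r <= acc:
            return exam
    return candidates[-1]

# PhMatrix initialization: all pheromone values start at 1
n = 24
tau = [[1.0] * n for _ in range(n)]
```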
Once ant k chooses the next exam, the previous one in the schedule is put in ant k's tabu list, a list containing all moves that are infeasible for ant k. This ensures that the same exam is not scheduled twice in the timetable. Ant k then continues building its solution, choosing the next exam in the same way.
Choosing the next exam having the minimum number of conflicts (highest η_ij) with the previous one might work for adjacent exams in the schedule, but it can still lead to conflicts with other exams scheduled in the same time-slot. So even when η_ij is optimal between two consecutive exams, this choice sometimes produces high costs returned from conflicts with other exams scheduled within the same S_ij. Raising the parameter α in (8) mitigates this problem, but does not solve it completely. Therefore, a global pheromone evaluation rule is proposed whereby an ant k at exam i that has to decide on the next exam j of the permutation makes the selection probability:
• consider every exam j not in the tabu list of k;
• depend on the sum of all the pheromone values between the exams already scheduled in the same set as i (denoted by l) and the candidate exam j.
This makes sure that, starting from the last scheduled exam, we calculate the probabilities and costs of moving to the next exam between all the exams already scheduled in the same time-slot of the current day and the candidate exam to be added to the schedule, before choosing it.
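The global pheromone evaluation rule can be sketched as below. For clarity the sketch picks the best-weighted candidate greedily rather than sampling from the full probability distribution; the function names and the exponents are our assumptions.

```python
def set_pheromone_sum(tau, scheduled_in_set, j):
    """Sum of the pheromone values between candidate exam j and every
    exam l already placed in the current set S_ij."""
    return sum(tau[l][j] for l in scheduled_in_set)

def choose_for_set(tau, eta, i, scheduled_in_set, tabu, n,
                   alpha=1.0, beta=1.0):
    """Global pheromone evaluation (greedy variant for illustration):
    weight every exam j outside the tabu list by the summed pheromone
    to the whole set containing i, times the attractiveness."""
    best, best_w = None, -1.0
    for j in range(n):
        if j in tabu:
            continue
        w = (set_pheromone_sum(tau, scheduled_in_set + [i], j) ** alpha) \
            * (eta[i][j] ** beta)
        if w > best_w:
            best, best_w = j, w
    return best
```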
The steps above are repeated until every ant completes its solution asynchronously from other ants. During this construction phase, the ants evaluate their solution and modify the pheromone trail values on the components of this solution. This pheromone information will direct the search of the future ants. We next show how the pheromone update is done in our solution.

Pheromone Update
After scheduling a set of exams (one cell) in the timetable Sched[i,j] , we do a local pheromone update whereby we update the pheromone values between all pairs of scheduled exams according to (10).
On the other hand, to avoid infeasible distributions (with conflicting exams that lead to dead-end configurations) from being placed in the same set S_ij, we induce negative pheromone values between the exams leading to such distributions: if exams i and j lead to future conflicting configurations in the timetable, they are assigned a negative pheromone value τ_ij even if i and j have no conflicts with each other.
The negative pheromone update equation (15) is the inverse of (10), with the addition replaced by a subtraction, where φ ∈ (0, 1] is the pheromone decay coefficient and τ_old is the old value of the pheromone. This negative pheromone update can be applied directly after the pheromone initialization, between the exams resulting in conflicting configurations, and therefore before the ants start building their solution. This ensures that the probability of ant k choosing these exams in the same set is very low.
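Since equations (10) and (15) are not reproduced here, the following sketch assumes the standard decay-plus-deposit form of a local pheromone update and its subtractive inverse; the deposit term and default values are our assumptions, not the paper's.

```python
def local_update(tau_old, phi=0.1, deposit=1.0):
    """Assumed form of local update (10): decay plus a deposit."""
    return (1 - phi) * tau_old + phi * deposit

def negative_update(tau_old, phi=0.1, deposit=1.0):
    """Assumed form of (15): the addition replaced by a subtraction,
    pushing conflicting exam pairs toward negative pheromone values."""
    return (1 - phi) * tau_old - phi * deposit
```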
Using the above results of our new approach, the extended form of Algorithm 3 is the following:

Empirical Results
We have run the SA algorithm on the exam scheduling problem using the variables and values defined in the preceding sections, and plotted the results in Matlab. A plot showing the variation in the cost values of the examination schedule with respect to a temperature decrement going down from 100 to 0 is shown in Fig. 3. This plot corresponds to an example of running the SA algorithm on an exam schedule without black-listing neighbor schedule configurations that might return high and impractical costs.
We can see that the cost starts high at 78 when the temperature is near 100, and it drops gradually with the temperature until it reaches a value of 3. At some points of the plot, the cost increases even though the temperature is decreasing. These are exactly the uphill steps of SA, where worse moves are allowed in order to escape local minima. Notice also that the probability of accepting a worse move decreases with the temperature, just as expected from (5).

Another plot is shown in Fig. 4, where the SA algorithm is started from the same initial configuration and run within the same range of temperatures, but now using the improved version that constrains infeasible neighbor configurations returning high and impractical costs. We can see that the differences between the costs at adjacent temperatures are narrower than those in Fig. 3. Even though the final cost has almost the same value in both examples, the cost function drops more strictly with respect to temperature when using the improved version, because many useless iterations were avoided by restricting the infeasible configurations.
On the other hand, the results of running the ACO algorithm on the example described previously were also plotted in Matlab. Fig. 5 shows the variation in the cost of the schedule over the iterations; at each iteration we start from a different nest (source) and build a complete solution. Note that the schedule instance chosen in this example contains a huge number of conflicts between students' exams. We can see that the cost drops significantly from a value around 120 to almost 5 after 100 iterations. We also ran the ACO algorithm on a different instance having a lower (though still considerable) number of conflicts between exams; the results are plotted in Fig. 6. The initial solution has a cost of 20, which drops to zero after only 45 iterations, so an optimal solution was reached.

Performance Analysis: ACO vs. SA
Each of the Simulated Annealing and Ant Colony Optimization algorithms was tested over 15 trials aimed at building complete examination schedules, starting from different initial configurations and using different numbers of exams. The first 5 trials consisted of generating schedules for 24 exams (timetable size = 24) in the minimum possible timescale. In the next 5 trials we generated timetables of size 32, and in the last 5 trials timetables of size 38. Note that the schedules chosen in these trials have a percentage of conflicting exams varying between 55% and 65% (so more than half of the exams in these schedules conflict with each other). Also note that ACO was run using 50 ants, each choosing a random exam as its starting point (source). The lowest-cost result was recorded at each trial, together with its CPU running time, and the standard deviation (σ) of every group of 5 trials was calculated. All the trials were run on a Core2-Duo PC with a 2.0GHz CPU and 2.0GB of RAM. The results are shown in Table 1 below. We have made the following observations:
1. The running times of ACO are better than those of SA in all 15 trials. SA takes more time to discover and evaluate the neighbor solutions at each iteration (temperature), while ACO uses information from prior iterations to guide subsequent colonies to new states (neighbors), which reduces the processing time needed to calculate the cost of such moves.
2. ACO found the least-cost solution for all three timetable sizes, even though it sometimes led to higher-cost solutions than SA. The standard deviations of the timetable costs produced using SA are lower than those of ACO, which means that SA provided tight results whose cost values are close to each other, while ACO gives broad results where the differences between the cost values can be large.
4. When the number of conflicting exams is so high that more than 70% of the exams conflict with each other (not shown in Table 1), the ACO algorithm outperforms the SA algorithm. We had to greatly increase the number of iterations done at each temperature (using the static strategy of temperature decrement) to allow the SA algorithm to explore enough neighbors to find a better move (neighbor).
5. When the number of conflicting exams is so low that less than 20% of the exams conflict with each other (not shown in Table 1), both approaches lead to near-optimal solutions. Although ACO converged to a near-optimal solution using 50 ants, its running time was higher than that of SA, since SA reached the stopping criterion in only a few iterations.

Conclusion and Future Work
We have used two algorithms, Simulated Annealing (SA) and Ant Colony Optimization (ACO), to solve the scheduling problem. We first introduced the problem and provided its mathematical formulation in Section 2, then described the SA and ACO algorithms and illustrated how they are used to solve combinatorial optimization problems. We presented our approach to solving the scheduling problem with these algorithms in Sections 5 and 6. All the results were implemented in Matlab, and a comparison between the performance and running times of SA and ACO in producing different examination schedules over several trials was given in Table 1. The solution we provided was based on an exam scheduling model that could be implemented in different academic institutions, and it is not difficult to generalize it to other scheduling problems through minor modifications regarding the variables of the problem in hand and the resources available. Our future work consists of parallelizing these algorithms to improve their running times, and of using a hybrid Ant Colony-Simulated Annealing approach to improve the cost of our solution. We intend to parallelize the two algorithms at the level of data, whereby we solve sub-problems and then combine them into a larger low-cost solution. We also intend to achieve parallelism at the level of ants in the ACO algorithm, so that ants can work in parallel to find their solutions.