Introduction

Over the last two decades, intelligent and efficient transportation systems have been developed, and control methods for the cooperative management of such systems have therefore become increasingly important1,2,3. In particular, adaptive traffic signal operation that reflects the traffic conditions is crucial for avoiding stagnation of traffic flows4,5. Various methods employing techniques such as genetic algorithms6, swarm intelligence7, neural networks8, and reinforcement learning9,10 have been proposed for such adaptive control11,12,13,14,15,16. These studies consider local control, in which the state of each signal is determined from neighboring information alone; such control hardly achieves a global optimum for the traffic conditions of the entire city. Achieving such a global optimum, however, requires solving a large-scale combinatorial optimization problem, and the difficulty of finding its optimal solution scales exponentially with the size of the city because of the computational complexity of combinatorial optimization.

Similar computational difficulties frequently appear in other fields. Accordingly, in recent years, various dedicated algorithms and hardware have been developed to address this issue17,18,19. Their main strategy is to focus on particular combinatorial optimization problems that can be transformed into an Ising model. Examples of such specialized hardware include the Coherent Ising Machine provided by NTT Corporation20,21, the Simulated Bifurcation Machine by Toshiba Corporation22, and the Digital Annealer by Fujitsu Corporation23,24. Among them, the Quantum Annealer 2000Q from D-Wave Systems Inc. has been attracting much attention as the world’s first hardware implementation of quantum annealing25 on a quantum processing unit. In quantum annealing, a phenomenon called quantum fluctuation is used to search candidate solutions of the given problem simultaneously, which is expected to enable faster and more accurate solution search than other heuristic methods25,26. In this paper, we refer to the method using the 2000Q as the quantum annealing. Although the quantum annealing is expected to be an effective prescription for large-scale combinatorial optimization problems, it is not a panacea: its advantage over classical simulated annealing methods is reduced depending on the type of the transformed Ising model. Moreover, hardware constraints limit the number of available variables and the class of solvable problems27. Hence, the search for compatible applications that exploit the power of the quantum annealing is becoming an active research area28,29,30,31,32,33,34.

In this paper, we propose a method for globally controlling the traffic signals of an urban city using the quantum annealing. We consider a situation in which many cars moving on a lattice road network are controlled via traffic signals installed at each intersection. To handle this network analytically, we consider a simplified situation in which each signal takes one of two states: traffic is allowed either in the north–south direction or in the east–west direction. The cars moving on the lattice are assumed to choose whether to turn or to go straight at each intersection with a given probability. We then formulate the signal operation problem as a combinatorial optimization problem. The objective function of the formulated problem is shown to be formally consistent with the Hamiltonian of the Ising model. The Ising model is a statistical-physics model of ferromagnetism that represents the behavior of a spin system, and it captures the relation between the microscopic state of the spins and the macroscopic phenomenon of magnetic phase transitions35,36,37,38. Importantly, the problem reformulated by means of the Ising model, with the aid of a graph embedding technique, is compatible with the class of problems that the 2000Q accepts; hence, the quantum annealing can be applied to solve the signal optimization problem.

By reformulating the problem as Ising minimization, this study makes three contributions to signal optimization. First, by performing numerical experiments, we confirm the engineering effectiveness of the proposed method using quantum annealing. Results of experiments on a large city consisting of \(50 \times 50\) intersections show that the proposed method achieves high-quality signal operation compared with a conventional local control method39. The reformulated optimization problem is also solved using a classical simulated annealing method, but the quantum annealing method is found to give a better solution in a specific parameter domain. Second, a theoretical correspondence between the local and global control methods is found. We analytically show that the conventional local control coincides with the solution of the global signal optimization problem in the limit where the probability of cars going straight equals the probability of them turning. This result provides a theoretical basis for the numerical prediction of the previous study39, where the local control was found to cause phase transitions similar to those of the Ising model. The last contribution is the knowledge gained for the cooperative operation of traffic signals. Our numerical experiments show a strong correlation between a signal and its neighboring signals. In addition, a strong temporal correlation of signals emerges; that is, the signal display at a certain time is correlated with the displays in the previous several steps. This spatio-temporal correlation becomes stronger as the straight-driving probability of cars increases. Our results suggest the necessity of signal cooperation for smooth traffic flow, with the required strength of cooperation varying with the rate at which vehicles drive straight.

Results

Traffic signal optimization problem

Consider \(L\times L\ (L\in {{\mathbb{N}}})\) roads arranged in the east–west and north–south directions with periodic boundary conditions. Each road consists of two lanes, one in each direction. Traffic signals are located at each intersection to control the flow of vehicles traveling on the roads. The signal at each intersection i has one of two states: \(\sigma _{i}=+1\), which allows vehicle flow only in the north–south direction, and \(\sigma _{i}=-1\), which allows vehicle flow only in the east–west direction. Each car goes straight through each intersection with a fixed probability \(a\in [0,1]\) and otherwise turns left or right with equal probability, that is, \((1-a)/2\) for each direction. Figure 1 illustrates this situation.

Figure 1
figure 1

Traffic signal model. (a) Grid pattern of roads. (b) The two states of the traffic signal at each intersection. In the case of \(\sigma =+1\), the vehicles coming from the horizontal direction stop, while the vehicles coming from the vertical direction go straight at the rate of a, turn right at the rate of \((1-a)/2\), and turn left at the rate of \((1-a)/2\). The rate \(1-a\) shown for the horizontal direction is the sum of the vehicles from the two vertical directions. In the case of \(\sigma =-1\), the roles of the vertical and horizontal directions are reversed. This problem setting basically follows Ref.39.

Reference39 shows that the number of vehicles \(q_{ij}\in {{\mathbb{R}}}_{+}\) in the traffic lane from intersection j to i evolves according to the following difference equation:

$$\begin{aligned} q_{ij}(t+1) = q_{ij}(t) + \frac{s_{ij}}{2}(-\sigma _{i} + \alpha \sigma _{j}), \end{aligned}$$
(1)

where \(\alpha :=2a-1\), and \(s_{ij}\in \{\pm 1\}\) is the direction of the lane from intersection j to i; here, \(s_{ij}=+1\) denotes north–south and \(s_{ij}=-1\) denotes east–west. We note here that \(q_{ij}\) is normalized by the number of cars passing per unit time. Precisely, in terms of the mean flux of moving cars \(Q_{\mathrm{av}}\) and the dimensional time unit \(\Delta t\), \(t=t^{*}/\Delta t\) and \(q_{ij}=q^{*}_{ij}/(Q_{\mathrm{av}}\Delta t)\), where \(t^{*}\) is the dimensional time and \(q^{*}_{ij}\) is the number of vehicles in a lane. See “Methods” for the detailed derivation of Eq. (1). We define a quantity that represents the imbalance between the north–south flow and the east–west flow at each intersection i as

$$\begin{aligned} x_{i}(t) := \sum _{j\in {{\mathcal{N}}}(i)} \frac{s_{ij}q_{ij}(t)}{2}, \end{aligned}$$
(2)

where \({{\mathcal{N}}}(i)\) denotes the set of indices of the four intersections adjacent to intersection i. Substituting Eq. (1) into Eq. (2) yields a time evolution equation for the flow bias \({\mathbf{x}}(t)\) as follows:

$$\begin{aligned} {\mathbf{x}}(t+1) = {\mathbf{x}}(t) + \left( -I + \frac{\alpha }{4} A \right) \pmb {\sigma }(t), \end{aligned}$$
(3)

where the flow bias vector is defined as \({\mathbf{x}}:=[x_{1}, \ldots , x_{{L^{2}}}]^{\top }\) and the signal state vector is defined as \(\pmb {\sigma }:=[\sigma _{1}, \ldots , \sigma _{{L^{2}}}]^{\top }\). The matrix \(A\in {{\mathbb{R}}}^{L^{2}\times L^{2}}\) is the adjacency matrix of the periodic lattice graph.
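To make the dynamics concrete, the following minimal Python sketch builds the adjacency matrix A of the periodic lattice and performs one update of the flow bias according to Eq. (3). The function names, toy lattice size, and random initial values are illustrative assumptions and not part of the original implementation.

```python
import numpy as np

def lattice_adjacency(L):
    """Adjacency matrix A of an L x L lattice with periodic boundaries.

    Node (r, c) is flattened to index r * L + c; each node has degree 4.
    """
    A = np.zeros((L * L, L * L))
    for r in range(L):
        for c in range(L):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[r * L + c, ((r + dr) % L) * L + ((c + dc) % L)] = 1.0
    return A

def step_flow_bias(x, sigma, A, alpha):
    """One update of Eq. (3): x(t+1) = x(t) + (-I + (alpha/4) A) sigma(t)."""
    return x + (-np.eye(len(x)) + (alpha / 4.0) * A) @ sigma

# toy usage with illustrative values
L, alpha = 4, 0.8
rng = np.random.default_rng(0)
A = lattice_adjacency(L)
x = rng.uniform(-5.0, 5.0, L * L)          # initial flow bias
sigma = rng.choice([-1.0, 1.0], L * L)     # initial signal states
x_next = step_flow_bias(x, sigma, A, alpha)
```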

Next, we define the following objective function to evaluate traffic conditions at each time step:

$$\begin{aligned} H(\pmb {\sigma }(t)) := {\mathbf{x}}(t+1)^{\top } {\mathbf{x}}(t+1) + \eta (\pmb {\sigma }(t) - \pmb {\sigma }(t-1))^{\top } (\pmb {\sigma }(t) - \pmb {\sigma }(t-1)), \end{aligned}$$
(4)

where the first term on the right-hand side suppresses the flow bias during the next time step at each intersection, the second term prevents the traffic signal state at each intersection from switching too frequently, and \(\eta \in {{\mathbb{R}}}_{+}\) is a weight parameter that balances the two terms. The traffic signal state \(\sigma _{i}(t)\) at each time step is determined so that the objective function (4) is minimized; that is, we want to find the value of \(\pmb {{\bar{\sigma }}}(t)\) that satisfies

$$\begin{aligned} \pmb {{\bar{\sigma }}}(t) = \mathop {\mathrm{arg min}}\limits _{\pmb {\sigma }\in \{\pm 1\}^{{L^{2}}}} H(\pmb {\sigma }(t)). \end{aligned}$$
(5)

Ising formulation and optimization

Substituting Eq. (3) into Eq. (4) gives the following representation:

$$\begin{aligned} H(\pmb {\sigma }(t))&= \left( {\mathbf{x}}(t) + \left( -I + \frac{\alpha }{4}A \right) \pmb {\sigma }(t)\right) ^{\top }\left( {\mathbf{x}}(t) + \left( -I + \frac{\alpha }{4}A \right) \pmb {\sigma }(t)\right) \nonumber \\&\quad + \eta (\pmb {\sigma }(t) - \pmb {\sigma }(t-1))^{\top } (\pmb {\sigma }(t) - \pmb {\sigma }(t-1)) \end{aligned}$$
(6)
$$\begin{aligned}&= \pmb {\sigma }(t)^{\top } \left( \left( -I + \frac{\alpha }{4}A \right) ^{\top } \left( -I + \frac{\alpha }{4}A \right) + \eta I \right) \pmb {\sigma }(t)\nonumber \\&\quad + \left( 2{\mathbf{x}}(t)^{\top }\left( -I + \frac{\alpha }{4}A \right) - 2\eta \pmb {\sigma }(t-1)^{\top } \right) \pmb {\sigma }(t) + c(t), \end{aligned}$$
(7)

where c(t) is a constant term that does not include \(\pmb {\sigma }(t)\). By defining the variables

$$\begin{aligned} J&:= \left( -I + \frac{\alpha }{4}A \right) ^{\top } \left( -I + \frac{\alpha }{4}A \right) + \eta I ,\end{aligned}$$
(8)
$$\begin{aligned} h&:= 2{\mathbf{x}}(t)^{\top }\left( -I + \frac{\alpha }{4}A \right) - 2\eta \pmb {\sigma }(t-1)^{\top } , \end{aligned}$$
(9)

we can represent the objective function (6) as follows:

$$\begin{aligned} H(\pmb {\sigma }(t))= \pmb {\sigma }(t)^{\top } J \pmb {\sigma }(t) + h \pmb {\sigma }(t) + c(t). \end{aligned}$$
(10)

Equation (10) is a quadratic form in variables taking values in \(\{\pm 1\}\), which matches the Hamiltonian form of the Ising model35. Hence, solving the signal optimization problem with the objective function (4) is equivalent to finding the spin configuration \(\sigma _{i}\in \{\pm 1\}\) that minimizes the Ising Hamiltonian of Eq. (10). Because the Ising Hamiltonian is compatible with the class of problems that the 2000Q accepts, the quantum annealing can be applied to solve the signal optimization problem.
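The following sketch assembles J and h according to Eqs. (8) and (9) and checks numerically that the quadratic form of Eq. (10) differs from the direct objective of Eq. (4) only by the \(\sigma\)-independent constant c(t). The helper names and toy problem size are our illustrative assumptions.

```python
import numpy as np

def lattice_adjacency(L):                      # same helper as in the previous sketch
    A = np.zeros((L * L, L * L))
    for r in range(L):
        for c in range(L):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[r * L + c, ((r + dr) % L) * L + ((c + dc) % L)] = 1.0
    return A

def ising_coefficients(x, sigma_prev, A, alpha, eta):
    """J and h of Eqs. (8)-(9), so that H = sigma^T J sigma + h sigma + c(t)."""
    M = -np.eye(len(x)) + (alpha / 4.0) * A
    return M.T @ M + eta * np.eye(len(x)), 2.0 * x @ M - 2.0 * eta * sigma_prev

# consistency check: Eq. (10) equals Eq. (4) up to the sigma-independent constant c(t)
L, alpha, eta = 3, 0.8, 1.0
rng = np.random.default_rng(1)
A = lattice_adjacency(L)
x = rng.uniform(-5.0, 5.0, L * L)
sig_prev = rng.choice([-1.0, 1.0], L * L)
sig = rng.choice([-1.0, 1.0], L * L)
J, h = ising_coefficients(x, sig_prev, A, alpha, eta)
x_next = x + (-np.eye(L * L) + (alpha / 4.0) * A) @ sig                   # Eq. (3)
H_direct = x_next @ x_next + eta * (sig - sig_prev) @ (sig - sig_prev)    # Eq. (4)
H_ising = sig @ J @ sig + h @ sig                                         # Eq. (10) minus c(t)
assert np.isclose(H_direct - H_ising, x @ x + eta * len(x))               # c(t) = x^T x + eta * L^2
```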

We use a city consisting of \(50 \times 50\) intersections to consider the signal operation problem, and we compare the results of numerical experiments on the following three methods for traffic control:

  • Local control, which determines the signal display at each time step with the following local rules:

    $$\begin{aligned} {\left\{ \begin{array}{ll} \sigma _{i}(t) \leftarrow +1 &{}\quad {\text{if }} x_{i}(t) \ge +\theta ,\\ \sigma _{i}(t) \leftarrow -1 &{}\quad {\text{if }} x_{i}(t) \le -\theta . \end{array}\right. } \end{aligned}$$
    (11)

    Equation (11) switches the display of the traffic signal at each intersection to reduce the flow bias when the magnitude of the bias exceeds the threshold value \(\theta \in {{\mathbb{R}}}_{+}\); otherwise, the previous display is kept. To compare the local control with the optimal control, the value of the switching parameter \(\theta\) is determined such that the common objective function (4) is minimized. For details, refer to the “Methods” section.

  • Optimal control with simulated annealing, which minimizes Eq. (10) at each time step using the simulated annealing. The simulated annealing is an algorithm that finds a solution by examining the neighborhood of the current solution at each step and probabilistically deciding whether to stay in the current state or to move to a neighboring state. See Ref.40 for details of the simulated annealing. We used the neal library provided by D-Wave to execute this algorithm (a sketch of one control step is given after this list).

  • Optimal control with quantum annealing, which minimizes Eq. (10) at each time step by using the quantum annealing on the D-Wave 2000Q. Because the problem size exceeds the size of problems that the 2000Q can solve, the problem is subdivided using a graph partitioning technique. We used the ocean library provided by D-Wave to execute this algorithm. See “Methods” for the detailed procedure.
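As a concrete illustration of one control step, the sketch below implements the local rule of Eq. (11) and the simulated-annealing-based minimization of Eq. (10) with D-Wave's neal sampler mentioned above. The conversion of the dense matrix J into a coupling dictionary and all function names are our assumptions about one possible implementation, not the authors' code.

```python
import neal
import numpy as np

def local_control(x, sigma_prev, theta):
    """Local rule of Eq. (11): switch a signal only when |flow bias| exceeds theta."""
    sigma = sigma_prev.copy()
    sigma[x >= theta] = 1.0
    sigma[x <= -theta] = -1.0
    return sigma

def anneal_control(J, h, num_reads=100):
    """Minimize Eq. (10) at one time step with the neal simulated-annealing sampler."""
    n = len(h)
    h_dict = {i: float(h[i]) for i in range(n)}
    # off-diagonal couplings only; the diagonal J_ii * sigma_i^2 is a constant for sigma_i = +/-1
    J_dict = {(i, j): float(J[i, j] + J[j, i])
              for i in range(n) for j in range(i + 1, n)
              if J[i, j] != 0.0 or J[j, i] != 0.0}
    sampleset = neal.SimulatedAnnealingSampler().sample_ising(h_dict, J_dict, num_reads=num_reads)
    best = sampleset.first.sample                 # lowest-energy sample found
    return np.array([best[i] for i in range(n)], dtype=float)

# typical per-step usage (J, h built as in the previous sketch):
#   sigma = anneal_control(J, h)        or        sigma = local_control(x, sigma_prev, theta)
#   x = x + (-np.eye(len(x)) + (alpha / 4.0) * A) @ sigma      # advance Eq. (3)
```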

Figure 2 shows snapshots of the signal display at time \(t = 100\) for \(\alpha = 0.8\) and \(\eta =1.0\), where \(\alpha\) is the parameter related to the vehicles' straight-driving probability and \(\eta\) is the weight parameter in the objective function (4). The flow bias distribution at the initial time, \({\mathbf{x}}(0)\), is generated as random numbers following a uniform distribution on \([-5.0,\ 5.0]\), and the signal states at the initial time, \(\pmb {\sigma }(0)\), are generated by randomly assigning \(+1\) or \(-1\) to each intersection. In Fig. 2, blue dots mean that the cars are allowed to pass in the east–west direction, and red dots mean that the cars are allowed to pass in the north–south direction. We observe synchronization of nearby signals under optimal control (see Fig. 2b,c), while the two direction states are distributed rather uniformly under local control (see Fig. 2a). The correlation of nearby signal states is quantitatively analyzed in “Discussion”.

Figure 2
figure 2

Snapshots of traffic signals under different control methods. (a) Local control using Eq. (11), (b) Global control optimizing Eq. (10) with the simulated annealing, and (c) Global control optimizing Eq. (10) with the D-Wave 2000Q. Red and blue dots represent the vertical (north–south) and horizontal (east–west) directions allowed at each intersection, respectively. Parameters \(\alpha , \eta\), and L are fixed as \(\alpha =0.8, \eta =1.0\), and \(L=50\), respectively. For the D-Wave method, the Hamiltonian is divided into 42 groups and the optimization problem is solved in parallel. See “Methods” for details. (The data are plotted with software Python/matplotlib).

Figure 3a plots the time evolution of the Hamiltonian of Eq. (10) for each method in the case of \(\alpha = 0.8\) and \(\eta =1.0\). In all three methods, the signals change rapidly over time to reduce the Hamiltonian. The value of the Hamiltonian in the steady state is the smallest under the quantum annealing method, followed by the simulated annealing method, and it is the largest under local control. That is, the optimal control using the quantum annealing exhibits the best performance among these methods. An attempt to compare with the exact solution has also been made for the same simulation using the Gurobi package. Although the full exact solution for the entire time interval was not feasible in a reasonable time because of the large number of variables (2500 variables), the Hamiltonian averaged over the first three steps showed the same order of accuracy. In Fig. 3, the responses of the quantum annealing and the simulated annealing are more oscillatory than that of the local control. This is because the objective function of Eq. (4) only contains states up to one step ahead. An optimal value at one time is not necessarily consistent with the optimal values for long-time behavior, resulting in a more oscillatory response. If we used an objective function that includes states two or more steps ahead, the oscillation should be suppressed, although such a formulation is more complex and hinders the direct use of the quantum annealing.

Figure 3
figure 3

Hamiltonian of Eq. (10) under different control methods. (a) Time evolution of the Hamiltonian, where the parameters \(\alpha , \eta\), and L are fixed as \(\alpha =0.8, \eta =1.0\), and \(L=50\), respectively. (b) Time average of the Hamiltonian as a function of \(\alpha\), where the parameters \(\eta\) and L are the same as those in (a).

We examine the effect on the Hamiltonian of Eq. (10) of changing the parameter \(\alpha\), which is related to the vehicles' straight-driving probability. The time average of the Hamiltonian of Eq. (10), denoted as \({\bar{H}}\), is plotted in Fig. 3b. As \(\alpha\) approaches zero, the values of the Hamiltonian for the local and optimal control methods converge to a common value. This suggests that the local control gives the solution to the signal optimization problem in the limit of \(\alpha \rightarrow 0\). The validity of this conjecture is explored in “Discussion”. In the interval \(\alpha \in [0.2,\ 0.8]\), the Hamiltonian under optimal control is smaller than that under local control, showing that the optimal control methods outperform the local control in this range. However, for the simulated annealing method at \(\alpha > 0.8\), the value of the Hamiltonian is larger than that under the local control method, suggesting that the simulated annealing does not reach the global optimal solution. In contrast, under the quantum annealing method, the value of the Hamiltonian is smaller than those under the other two methods, which means that its solution is closer to the global optimum. Here, we briefly discuss why \({\bar{H}}\) for the simulated annealing is slightly better than that for the quantum annealing in the parameter domain \(\alpha \in [0.2,\ 0.8]\). For large values of \(\alpha\), obtaining an exact solution of Eq. (4) is hard because of the high impact of the quadratic term; in this parameter range, the quantum annealing indeed gives better optimization results than the simulated annealing. On the other hand, regardless of the problem to be solved, the quantum annealing generally exhibits stochastic fluctuations in its solutions28,32. When the parameter \(\alpha\) is in the intermediate range, where the difficulty inherent in the optimization problem is moderate, both the simulated annealing and the quantum annealing give high-quality solutions, but the simulated annealing gives slightly better solutions because the relative strength of these stochastic fluctuations is larger.

Discussion

Performance analysis of quantum annealing

The performance of the D-Wave 2000Q is known to vary depending on the structure of the problem. In particular, when the matrix J in Eq. (8) has a sparse structure, the accuracy of the solution is improved21. To check the sparseness of our formulated problem, we examine the values of all components of J in Eq. (8). First, expanding J yields the following expression:

$$\begin{aligned} J = (1+\eta )I - \frac{\alpha }{2}A + \frac{\alpha ^{2}}{16}A^{\top } A, \end{aligned}$$
(12)

where the number of non-zero elements in each column of A is 4, because it equals the degree of each node in the lattice graph (see the green nodes in Fig. 4a). Also, the number of non-zero elements in each column of \(A^\top A\) is 9, because it coincides with the number of nodes connected to the reference node via two edges in the lattice graph (see the orange nodes in Fig. 4a). Thus, the total number of non-zero elements in J is \(13L^{2}\). From this, we calculate \(S_{J}(L)\), the sparseness of the matrix J, defined as the ratio of the number of zero-valued elements to the total number of elements in the matrix:

$$\begin{aligned} S_{J}(L) = \frac{L^{4}-13L^{2}}{L^{4}}, \end{aligned}$$
(13)

where we confirm that \(S_{J}(L)\rightarrow 1\) as \(L\rightarrow \infty\). In Fig. 4b, we plot \(S_{J}(L)\) given in Eq. (13) to show that the sparseness of the matrix J increases with the city size. This allows us to expect that the performance of the D-Wave 2000Q is enhanced for the signal optimization problem of rather large cities, such as the \(L=50\) city considered in the present paper.
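The claim of Eq. (13) is easy to check numerically; the short sketch below (with illustrative parameter values) counts the non-zero elements of J built from Eq. (12).

```python
import numpy as np

def lattice_adjacency(L):            # periodic lattice, as in the sketches of "Results"
    A = np.zeros((L * L, L * L))
    for r in range(L):
        for c in range(L):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[r * L + c, ((r + dr) % L) * L + ((c + dc) % L)] = 1.0
    return A

def sparseness_of_J(L, alpha=0.8, eta=1.0):
    """Sparseness S_J(L) of Eq. (13), computed directly from J of Eq. (12)."""
    n = L * L
    A = lattice_adjacency(L)
    J = (1.0 + eta) * np.eye(n) - (alpha / 2.0) * A + (alpha**2 / 16.0) * A.T @ A
    nnz = np.count_nonzero(J)
    return 1.0 - nnz / n**2, nnz

# e.g. sparseness_of_J(10) returns (0.87, 1300), i.e. 13 * L^2 non-zeros, matching Eq. (13)
```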

Figure 4
figure 4

Sparseness of the matrix J in Eq. (8). (a) Nodes neighboring the reference node (green) and two nodes away from the reference node (orange) in a lattice graph. (b) Sparseness \(S_{J}(L)\) of Eq. (13) for different numbers of intersections L.

Local and optimal control correspondence

As shown in Fig. 5, when the parameter \(\alpha\) of Eq. (1) is sufficiently small, the local control of Eq. (11) approaches the optimal control that solves Eq. (5). When \(\alpha \approx 0\), the terms associated with \(\alpha\) in Eq. (10) can be ignored, yielding

$$\begin{aligned} J&\approx (1+\eta )I, \end{aligned}$$
(14)
$$\begin{aligned} h&\approx - 2{\mathbf{x}}(t)^{\top } - 2\eta \pmb {\sigma }(t-1)^{\top }. \end{aligned}$$
(15)

Because J in Eq. (14) is a diagonal matrix, the first term \(\pmb {\sigma }(t)^{\top } J \pmb {\sigma }(t)\) on the right-hand side of Eq. (10) is a constant that does not depend on \(\pmb {\sigma }\). Therefore, the minimizer of \(H(\pmb {\sigma }(t))\) is determined only by the sign of each component of h in Eq. (15), that is,

$$\begin{aligned} {\bar{\sigma }}_{i}(t) =&{\left\{ \begin{array}{ll} 1 &{}\quad {\text{if }} x_{i}(t) + \eta \sigma _{i}(t-1)\ge 0,\\ -1 &{}\quad {\text{if }} x_{i}(t) + \eta \sigma _{i}(t-1) < 0, \end{array}\right. } \end{aligned}$$
(16)

for all \(i=1,\ldots ,{L^{2}}\). Because \(\sigma _{i}(t-1)\in \{\pm 1\}\), Eq. (16) can be rewritten as

$$\begin{aligned} {\bar{\sigma }}_i(t) =&{\left\{ \begin{array}{ll} 1 &{}\quad {\text{if }} x_{i}(t) \ge \eta , \\ -1 &{}\quad {\text{if }} x_{i}(t) \le -\eta ,\\ \sigma _{i}(t-1) &{}\quad \text{otherwise}, \end{array}\right. } \end{aligned}$$
(17)

for all \(i=1,\ldots ,{L^{2}}\). The control method of Eq. (17) is equivalent to the local control (11) of Ref.39 with the identification \(\theta = \eta\).
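This correspondence can also be verified by brute force. The following self-contained sketch (the number of signals and the random values are illustrative assumptions) checks that, at \(\alpha = 0\), exhaustive minimization of the objective of Eq. (4) reproduces the threshold rule of Eq. (17) with \(\theta = \eta\).

```python
import itertools
import numpy as np

# brute-force check of the local/optimal correspondence at alpha = 0 (i.e., a = 0.5)
n, eta = 4, 1.0
rng = np.random.default_rng(2)
x = rng.uniform(-5.0, 5.0, n)
sig_prev = rng.choice([-1.0, 1.0], n)

def objective(sig):
    """Eq. (4) at alpha = 0, where Eq. (3) reduces to x(t+1) = x(t) - sigma(t)."""
    sig = np.asarray(sig)
    x_next = x - sig
    return x_next @ x_next + eta * (sig - sig_prev) @ (sig - sig_prev)

# exhaustive minimization of Eq. (4) over all 2^n signal configurations
best = min(itertools.product((-1.0, 1.0), repeat=n), key=objective)

# threshold rule of Eq. (17) with theta = eta
rule = np.where(x >= eta, 1.0, np.where(x <= -eta, -1.0, sig_prev))
assert np.array_equal(np.asarray(best), rule)
```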

Figure 5
figure 5

Magnetization of Eq. (18) under different control methods. (a) Time evolution of magnetization. Parameters \(\alpha , \eta\), and L are fixed as \(\alpha =0.8, \eta =1.0\), and \(L=50\), respectively. (b) Time average of magnetization as a function of \(\alpha\). Parameters \(\eta\) and L are the same as those in (a).

Because \(\alpha = 0 \Leftrightarrow a = 0.5\) holds, this optimality means that, under the local control laws, an appropriate vehicle turning rate autonomously eases the flow bias. In addition, the occurrence of the magnetic transition of the signal display reported in Ref.39 is consistent with the fact that the local control in Eq. (11) actually minimizes an Ising Hamiltonian of the form of Eq. (10). Note, however, that this optimality of the local control holds only when \(\alpha \approx 0\), and not when \(\alpha \rightarrow 1\), where the phase transition occurs.

Signal synchronization analysis

To analyze the signal correlation observed in Fig. 2, we calculate the magnetization, which is regarded as an important quantity in the Ising model:

$$\begin{aligned} m(t) := \frac{1}{L^{2}}\sum _{i=1}^{L^{2}} \sigma _{i}(t). \end{aligned}$$
(18)

In the Ising model, this value represents the spin bias of the entire system, and it is an indicator of ferromagnetic transitions. Figure 5a shows the time variation of the magnetization m(t). The magnetization remains small under local control, whereas its amplitude becomes significantly larger under both optimal control methods (simulated annealing and quantum annealing). For each method, at \(\alpha =0.8\), the response of the magnetization oscillates or fluctuates around zero. To examine this behavior further, the time average of the magnetization of Eq. (18), denoted as \({\bar{m}}\), is plotted in Fig. 5b. The ferromagnetic transition at \(\alpha \rightarrow 1\), that is, a finite value of \({\bar{m}}\), is observed under local control, as originally reported in Ref.39. Under optimal control, the time average of the magnetization \({\bar{m}}\) also takes a large value as \(\alpha \rightarrow 1\), which shows that a ferromagnetic transition similar to that under local control occurs under optimal control.

In addition to the ferromagnetic transition, the large amplitudes observed under optimal control quantify the synchronization of nearby signals observed in Fig. 2. For further analysis of this synchronization, we evaluate two types of autocorrelation functions. Figure 6a shows the autocorrelation function obtained from the time-series data of the signal state \(\sigma _{i}(t)\) for \(t\in [0,200]\). Here, the autocorrelation function is computed at all intersections, and the average value is displayed in Fig. 6a. Under local control, there is a negative correlation peak around \(t = 3\), which means that the signals switch approximately every 3 time steps. In contrast, under optimal control, the negative correlation peak lies in the interval \(t = [10,15]\) steps, and the same state is maintained for a longer time than under local control. In general, excessive signal switching is undesirable from a traffic engineering standpoint, and the optimization-based method mitigates this issue. Next, Fig. 6b shows the correlation between the signal displays at two intersections, with the distance between the intersections as a parameter. Here, the correlation function is calculated over all intersections at the fixed time \(t=100\), and the average value is plotted. In Fig. 6b, the distance is normalized so that the distance between adjacent intersections equals 1. There is almost no correlation between adjacent signals under local control, while under optimal control there is a positive correlation extending up to 4–6 intersections away.

Figure 6
figure 6

Time and spatial autocorrelation functions for different control methods. (a) Time autocorrelation function and (b) Radially averaged spatial autocorrelation function. Parameters \(\alpha , \eta\), and L are fixed as \(\alpha =0.8, \eta =1.0\), and \(L=50\), respectively.

We then extract quantities from these correlation functions to investigate the effect of \(\alpha\). Considering that both the temporal and spatial autocorrelations in Fig. 6 decay while oscillating, both functions are fitted with the following equation:

$$\begin{aligned} R(z) = \exp (-\lambda z)\cos (\omega z), \end{aligned}$$
(19)

where \(\lambda\) is the damping rate, \(\omega\) is the oscillation frequency, and \(z\in {{\mathbb{R}}}_{+}\) is the independent variable, i.e., the time t for the temporal autocorrelation function and the distance between intersections for the spatial autocorrelation function. Figure 7a plots the values of \(\omega\) obtained by fitting Eq. (19) to the temporal autocorrelation, as a function of \(\alpha\). Under local control, the oscillation frequency is \(\omega \approx 1\) regardless of the value of \(\alpha\), while \(\omega\) decreases with increasing \(\alpha\) under optimal control. This suggests that, to maintain optimality, the frequency of signal switching must decrease as the vehicle straight-driving rate increases. In view of the large difference in \(\omega\) between the local control and the optimization-based controls for large values of \(\alpha\), we expect that optimization-based signal controls are particularly effective in preventing excessive switching at high straight-driving rates. Next, Fig. 7b shows the values of \(\lambda\) obtained by fitting Eq. (19) to the spatial autocorrelation, as a function of \(\alpha\). Under local control, the correlation decays with an attenuation factor of \(\lambda \approx 1.75\), regardless of the value of \(\alpha\). In contrast, under optimal control, \(\lambda\) decreases as \(\alpha\) increases, which means that signal displays at more distant intersections remain correlated. These observations show that the synchronization of nearby signals in time and space becomes increasingly important for achieving a balanced traffic flow as the probability of vehicles going straight increases.
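The correlation analysis can be reproduced along the following lines; the helper names, the synthetic test data, and the use of scipy.optimize.curve_fit are our assumptions about one possible implementation, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(z, lam, omega):
    """Fitting model of Eq. (19)."""
    return np.exp(-lam * z) * np.cos(omega * z)

def temporal_autocorrelation(sigma_ts, max_lag):
    """Autocorrelation of sigma_ts (shape T x L^2), averaged over all intersections."""
    s = sigma_ts - sigma_ts.mean(axis=0)
    var = (s * s).mean(axis=0)
    acf = [np.mean(s[: len(s) - k] * s[k:], axis=0) / var for k in range(max_lag + 1)]
    return np.mean(acf, axis=1)

# fit Eq. (19) to a noisy synthetic autocorrelation as a usage illustration
rng = np.random.default_rng(3)
z = np.arange(30, dtype=float)
r = damped_cosine(z, 0.2, 0.5) + 0.02 * rng.normal(size=z.size)
(lam, omega), _ = curve_fit(damped_cosine, z, r, p0=(0.1, 1.0))
```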

Figure 7
figure 7

Parameters extracted from time and spatial autocorrelations, as functions of \(\alpha\) for different control methods. (a) Time autocorrelation function frequency \(\omega\) versus \(\alpha\) and (b) Radially averaged autocorrelation decay rate \(\lambda\) versus \(\alpha\). Parameters \(\eta\) and L are fixed as \(\eta =1.0\) and \(L=50\), respectively.

Effect of parameter \(\eta\)

Here we examine the effect of the parameter \(\eta\), which controls, in the objective function of Eq. (4), the priority of smoothing the entire traffic flow relative to suppressing the signal switching frequency. The objective function is designed such that, for a small value of \(\eta\), the priority is to smooth the flow of cars, whereas for a large value of \(\eta\), the priority is to prevent excessive signal switching. The time average of the objective function, \({\bar{H}}\), is obtained for various values of \(\eta \in \{ 0.125,0.25,0.5,1,2,4,8\}\). We show the results in Fig. 8 for the simulated annealing (\({\bar{H}}_{\mathrm{SA}}\)) and the quantum annealing (\({\bar{H}}_{\mathrm{QA}}\)), where the ratio \({\bar{H}}_{\mathrm{QA}}/{\bar{H}}_{\mathrm{SA}}\) is plotted; \({\bar{H}}_{\mathrm{QA}}/{\bar{H}}_{\mathrm{SA}}<1\) means that the quantum annealing is better than the simulated annealing, and vice versa. The quantum annealing method shows better performance for \(\eta\) larger than 0.5, and the simulated annealing is better for \(\eta\) smaller than 0.5. This suggests that the quantum annealing works better when the priority is on preventing excessive signal switching.

Figure 8
figure 8

The time averaged values of the Hamiltonian for the simulated annealing (\({\bar{H}}_{\mathrm{SA}}\)) and the quantum annealing (\({\bar{H}}_{\mathrm{QA}}\)) as a function of \(\eta\). Parameters \(\alpha\) and L are fixed as \(\alpha =0.8\) and \(L=50\).

Future improvements

Here we discuss three possible improvements of the results obtained in this study. First, we expect that the solution can be improved by using D-Wave's most recently released machine. The D-Wave machine used in this study has 2048 qubits connected in the chimera structure, in which densely connected 8-qubit units are arranged41. Since the chimera structure is sparser than a fully connected structure, representing arbitrary Ising problems requires a process called minor-embedding to map logical variables to physical qubits. This process, however, significantly reduces the number of effectively available qubits and also degrades the computational accuracy. Very recently, D-Wave has updated the hardware with a new graph structure called the pegasus structure. The number of qubits has increased from 2048 to 5024, and the maximum number of connections in the graph structure has increased from 8 to 1542. These improvements allow us to deal with much larger problems and to realize more efficient embedding that sacrifices fewer physical qubits than the previous D-Wave machine. For the proposed method, this enhancement will significantly reduce the number of divisions of the Hamiltonian (see “Methods” for details), which contributes to faster and more accurate computations.

Next, adjusting the hyper-parameters of the solver could improve the performance of our method. The D-Wave machine has several hyper-parameters, such as the number of samplings, the chain strength, and post-processing options. We left most of the parameters at their default values because we focus on examining the ability of the quantum annealing to solve the traffic signal optimization problem. However, as these hyper-parameters affect the optimized result, more careful tuning may achieve faster and more accurate calculations. An error mitigation scheme proposed in Ref.43 could also enhance the performance. We remark here that the problem formulated in this study has the form of an Ising model that is solvable by several dedicated computers other than the D-Wave machine20,21,22,23. Since the development of these dedicated machines is expected to accelerate further, the proposed framework for traffic flow control will become more generally available in the future.

We finally discuss further improvements toward application to a real city. The parameter \(\alpha\) in our model is expected to be relatively large in a city with many rational drivers, because in a grid-like city each vehicle can reach its destination from any starting point with only one right or left turn. In our experiments, the size of the grid L is empirically set to \(L=50\) so that it is comparable to the size of typical grid cities in the world (Kyoto, Japan; Barcelona, Spain; La Plata, Argentina; etc.). It is, however, desirable to identify these parameters in advance using real-world data. Since the constant probability of each vehicle driving straight ahead and the grid topology of the city are both idealized assumptions, our traffic signal control method has to be further improved by relaxing these assumptions.

Methods

Derivation of traffic model

Here we derive the model shown in Eq. (1). Let \(q_{ij}^{*}(t^{*})\) be the number of cars in the lane from intersection j to i and \(\Delta t\) be the minimum time interval at which a signal is switched. We denote by \(Q_{\mathrm{av}}\) the average flow rate of cars passing during \(\Delta t\). Then the change in the number of cars from time \(t^{*}\) to the next time \(t^{*}+\Delta t\) is represented as

$$\begin{aligned} q_{ij}^{*}(t^{*}+\Delta t) = q_{ij}^{*}(t^{*}) + \frac{s_{ij}}{2}(-\sigma _{i}(t^{*})+\alpha \sigma _{j}(t^{*})) Q_{\mathrm{av}} \Delta t, \quad q_{ij}^{*}(0) = q_{ij}^{*0}. \end{aligned}$$
(20)

By normalizing Eq. (20) with \(t:=t^{*}/\Delta t\) and \(q:=q^{*}/(Q_{\mathrm{av}} \Delta t)\), we obtain the following equation:

$$\begin{aligned} q_{ij}(t+1) = q_{ij}(t) + \frac{s_{ij}}{2}(-\sigma _{i}(t) + \alpha \sigma _{j}(t)), \quad q_{ij}(0) = q_{ij}^{*0}/(Q_{\mathrm{av}} \Delta t), \end{aligned}$$
(21)

which is essentially identical to Eq. (1). In this paper, we consider the result of solving Eq. (21). The dimensional time and the actual number of cars are recovered by the inverse of the above normalization.
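As a purely illustrative example of this normalization, suppose that \(Q_{\mathrm{av}}\Delta t = 10\) vehicles pass on average during one switching interval and that a lane initially holds \(q^{*}_{ij}(0)=30\) vehicles; then the dimensionless initial value is \(q_{ij}(0)=30/10=3\), and the normalized time \(t=5\) corresponds to the dimensional time \(t^{*}=5\Delta t\).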

We remark on how the number of cars, the vehicle speed, and the minimum signal switching interval enter the model. First, a solution of the model in Eq. (1) is valid for an arbitrary number of cars. For example, the solution for \({\bar{q}}^{*}(0)=\gamma q^{*}(0)\) with some \(\gamma \in {{\mathbb{R}}}_{+}\) is obtained by setting \({\bar{Q}}_{\mathrm{av}} = \gamma Q_{\mathrm{av}}\), because the average flow rate is given by “vehicle density” \(\times\) “average speed”. Second, let us consider the case of the average speed multiplied by \(\gamma\). The average flow rate \(Q_{\mathrm{av}}\) should then be \({\tilde{Q}}_{\mathrm{av}} = \gamma Q_{\mathrm{av}}\), while \(q^{*}(0)\) remains the same. Therefore, while Eq. (21) normalized by \(\tilde{Q}_{\mathrm{av}}\) does not apparently change, the initial value should be adjusted appropriately as \({\tilde{q}}_{ij}(0) = q_{ij}^{*0}/(\tilde{Q}_{\mathrm{av}} \Delta t) = q_{ij}(0)/\gamma\). Similarly, for the case of \(\Delta {\hat{t}} := \gamma \Delta t\), the normalized Eq. (21) does not apparently change, but the initial value should be \({\hat{q}}_{ij}(0) = q_{ij}^{*0}/(Q_{\mathrm{av}} \Delta {\hat{t}}) = q_{ij}(0)/\gamma\).

Parameter identification for objective function

As stated in “Discussion”, a direct correspondence between the optimal control and the local control is established for small values of \(\alpha\), with the apparent relation \(\theta = \eta\) between the local control switching threshold \(\theta\) in Eq. (11) and the optimal control weight parameter \(\eta\) in Eq. (4). To make a systematic comparison for an arbitrary value of \(\alpha\), however, we still need a protocol to determine the values of \(\theta\) and \(\eta\). The strategy is as follows. Given a value of \(\eta\), we select a value of \(\theta\), denoted by \({\hat{\theta }}\), from a candidate set \(\Theta\) via the following auxiliary numerical analysis (a code sketch of this procedure is given after the list):

  1. For one value of \(\theta\) in the set \(\Theta\), a numerical simulation using the local control (11) is performed to obtain the time series data \({\mathbf{x}}(t)\) and \(\pmb {\sigma }(t)\). The time average of the objective function (4) with the given \(\eta\) is then calculated from the obtained time series data and denoted as \({\bar{H}}(\theta )\).

  2. Step 1 is performed for all \(\theta\) in \(\Theta\) to find \({\hat{\theta }}\) that minimizes the time average \({\bar{H}}\), that is, \({\hat{\theta }} = \mathop {\mathrm{arg min}}\nolimits _{\theta \in \Theta }{\bar{H}}(\theta )\).
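A minimal sketch of this calibration protocol is given below; run_local_control and objective_time_average are hypothetical placeholders for the simulation and evaluation routines sketched in “Results”.

```python
def calibrate_theta(theta_candidates, eta, run_local_control, objective_time_average):
    """Steps 1-2 above: pick theta_hat minimizing the time-averaged objective (4).

    run_local_control(theta) is assumed to simulate the local rule of Eq. (11) and
    return the time series (x_series, sigma_series); objective_time_average is
    assumed to evaluate the time average of Eq. (4) for the given eta.
    """
    H_bar = {theta: objective_time_average(*run_local_control(theta), eta)
             for theta in theta_candidates}
    theta_hat = min(H_bar, key=H_bar.get)
    return theta_hat, H_bar
```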

We plot the result of the above procedure in Fig. 9. Figure 9a shows \({\bar{H}}\) against \(\theta\) when \(\eta\) is fixed as \(\eta = 1.0\). When \(\alpha = 0\), \({\bar{H}}\) is a convex function and indeed \({\hat{\theta }} \approx \eta\) is satisfied. However, for larger values of \(\alpha\), \({\bar{H}}\) becomes non-convex, and particularly for \(\alpha =0.995\), the relation \({\hat{\theta }} = \eta\) no longer holds. Figure 9b shows the value of \({\hat{\theta }}\) that minimizes \({\bar{H}}\), plotted against \(\eta\) over the interval \(\eta \in [0.0, \ 3.0]\). When \(\alpha = 0\), the linear relation \({\hat{\theta }} = \eta\) approximately holds, but when \(\alpha \ne 0\), this relation breaks down and some discontinuities appear. These discontinuities correspond to the changes in the local minima observed in Fig. 9a.

Figure 9
figure 9

Correspondence between \(\eta\) and \(\theta\). (a) Time average of the objective function \({\bar{H}}\) versus \(\theta\), when the value of \(\eta\) is fixed as \(\eta =1.0\). The cases with \(\alpha \in \{0.0,0.5,0.995\}\) are shown. (b) \({\hat{\theta }}\) versus \(\eta\) for \(\alpha \in \{0.0,0.5,0.995\}\).

Implementation on D-wave 2000Q

All experiments in this study are conducted on a Linux computer with 64 GB of memory and a clock speed of 3.70 GHz. All methods are implemented using the programming language Python (version 3.7).

We use DW_2000Q_VFYC_5 as the machine solver with the aid of D-Wave's ocean library for the actual implementation. Here, the VFYC solver partially emulates qubits that are temporarily unavailable because of hardware failures44. The number of samplings is specified through a parameter named num_reads, which we set to 100 in all experiments. The validity of this parameter setting is confirmed by preliminary experiments using several candidate values.

For embedding the logical variables onto the physical qubits of the D-Wave machine, we use a tool called minorminer (Apache License 2.0), which is a heuristic embedding method in the ocean library45. We perform the embedding operation every time a problem is sent to the 2000Q in order to average out the bias of the embedding quality.

For the simulated annealing, the neal solver in the ocean library is used. This solver also allows us to specify the number of samplings through a parameter called num_reads, which we set to 100, i.e., the same value as in the quantum annealing.
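The sampler setup described above can be written roughly as follows. The solver name and num_reads follow the text, but the exact calls are our assumptions based on the public ocean API (dwave-system and neal), and submitting to the QPU requires a configured D-Wave API token; h_dict and J_dict stand for the Ising coefficients built as in the sketches of “Results”.

```python
import neal
from dwave.system import DWaveSampler, EmbeddingComposite

# quantum annealing on the 2000Q; EmbeddingComposite invokes minorminer and
# computes a fresh minor-embedding for every problem submitted
qpu_sampler = EmbeddingComposite(DWaveSampler(solver="DW_2000Q_VFYC_5"))
# sampleset_qa = qpu_sampler.sample_ising(h_dict, J_dict, num_reads=100)

# simulated annealing with the neal solver, using the same number of reads
sa_sampler = neal.SimulatedAnnealingSampler()
# sampleset_sa = sa_sampler.sample_ising(h_dict, J_dict, num_reads=100)
```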

For the chimera structure of the 2000Q, \(N^{2}/4\) physical qubits are necessary in the worst case for embedding an N-variable problem, which means that the maximum number of variables that the 2000Q is capable of handling is as small as 64. This implies that \(L^{2}\le 64\Leftrightarrow L\le 8\) must be satisfied for the number of roads L in our problem setting. A method exists for solving a problem that exceeds this size limitation: divide the variables of the Hamiltonian of Eq. (10) into several groups and minimize an approximate Hamiltonian for each group. We define the traffic signal state vector of the jth group as \(\pmb {\sigma }^{j} := [\sigma _{i_{1}}, \sigma _{i_{2}}, \ldots , \sigma _{i_{m}}]^{\top }\), where \(i_{1}, i_{2}, \ldots , i_{m}\) are the subscripts of the variables included in the jth group. Then, we define the Hamiltonian of group j as

$$\begin{aligned} H^{j}(\pmb {\sigma }^{j}(t)) := \pmb {\sigma }^{j}(t)^{\top } J_{jj}\pmb {\sigma }^{j}(t) + (h_{j} + \pmb {\sigma }^{{\bar{j}}}(t)^{\top } J_{{\bar{j}} j} )\pmb {\sigma }^{j}(t), \end{aligned}$$
(22)

where \(J_{jj}\) is the submatrix of J in Eq. (10) formed by the rows and columns belonging to group j. Similarly, \(h_{j}\) is the vector obtained by extracting the components of h belonging to group j. The index \({\bar{j}}\) represents the set of variables not belonging to group j. One naive approximation is to regard the variables outside group j as constants. This allows the annealing machine to deal with a Hamiltonian exceeding the limitation, but at the same time this approach degrades the control performance. To reduce such errors, variables with large interactions should be placed in the same group, and the interactions between variables in different groups should be small. Such a problem is called a graph partitioning problem, which is known to be NP-hard, but approximation methods with adequate accuracy exist. For the actual implementation, we used the Metis software (Apache License 2.0), a widely used solver for graph partitioning problems, to break up the large-scale problem into several groups having fewer than 64 variables each46. Figure 10 shows the result of partitioning the city of \(L = 50\) into 42 groups using Metis, where we see that adjacent intersections, i.e., strongly interacting variables, are indeed included in the same group.
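A rough sketch of the partitioning step and of the group Hamiltonian of Eq. (22) is given below. It assumes the networkx and metis Python packages; the paper states only that Metis was used, so the exact interface is our assumption.

```python
import metis
import networkx as nx
import numpy as np

L, n_groups = 50, 42
G = nx.grid_2d_graph(L, L, periodic=True)                 # periodic lattice of intersections
G = nx.convert_node_labels_to_integers(G, ordering="sorted")
_, parts = metis.part_graph(G, nparts=n_groups)           # parts[i] = group of variable i
groups = [np.flatnonzero(np.array(parts) == g) for g in range(n_groups)]

def group_hamiltonian_terms(J, h, sigma, inside):
    """Coefficients of Eq. (22) for one group; variables outside the group are frozen."""
    inside = np.asarray(inside)
    outside = np.setdiff1d(np.arange(len(h)), inside)
    J_jj = J[np.ix_(inside, inside)]
    h_j = h[inside] + sigma[outside] @ J[np.ix_(outside, inside)]
    return J_jj, h_j
```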

Figure 10
figure 10

Graph partitioning using Metis. Each node represents a component of the Hamiltonian coefficient matrix J in Eq. (10), and the color of each node indicates the group to which the component belongs. (The data obtained using Metis 5.1.0 are plotted with software Python/matplotlib).

We evaluate the effect of this graph partitioning method. To this end, the simulated annealing optimization on a system with \(L=8\) is performed, and the results are compared between the unpartitioned case and the case partitioned into four groups. Figure 11 shows the time average of the objective function for various \(\alpha\). The values with partitioning are larger than those without partitioning, and the difference between these values represents the error caused by partitioning. The error increases with \(\alpha\), indicating that the larger the straight-driving rate of vehicles, the more negative the effect of partitioning. In Fig. 3, the quantum annealing has an advantage at large \(\alpha\); thus, higher-performance signal control should be achieved once a D-Wave machine that does not require partitioning becomes available.

Figure 11
figure 11

Time average of the objective function for various \(\alpha\), where \(L=8\) and \(\eta =1.0\). Orange squares: with graph partitioning; blue circles: without graph partitioning.