Arm order recognition in multi-armed bandit problem with laser chaos time series

By exploiting ultrafast and irregular time series generated by lasers with delayed feedback, we have previously demonstrated a scalable algorithm to solve multi-armed bandit (MAB) problems utilizing the time-division multiplexing of laser chaos time series. Although the algorithm detects the arm with the highest reward expectation, it cannot correctly recognize the order of the arms in terms of their reward expectations. Here, we present an algorithm in which the degree of exploration is adaptively controlled based on confidence intervals that represent the estimation accuracy of the reward expectations. We demonstrate numerically that our approach significantly improves the accuracy of arm order recognition and reduces the dependence on the reward environment, while almost maintaining the total reward obtained by conventional MAB methods. This study applies to sectors where order information is critical, such as the efficient allocation of resources in information and communications technology.

www.nature.com/scientificreports/

There may be situations where compromises must be made, i.e., where other channels must be selected. In such cases, information about the performance ranking of the channels is clearly useful when considering non-best channels. Conversely, when there are no other users, a player (the single user) can simultaneously utilize the top-ranking options to increase the communication capacity, similar to channel bonding in local area networks 10 . The purpose of this study is to accurately recognize the order of the expected rewards of the arms using a chaotic laser time series while minimizing the reduction of the accumulated reward caused by overly detailed exploration.

Principles
Definition and assumption. We consider a MAB problem in which a player selects one of K slot machines, where K = 2^M and M is a natural number. The K slot machines are distinguished by identities numbered from 0 to K − 1, which are also represented in an M-bit binary code S_1 S_2 ... S_M with S_i ∈ {0, 1} (i = 1, ..., M). For example, when K = 8 (or M = 3), the slot machines are numbered S_1 S_2 S_3 ∈ {000, 001, ..., 110, 111}. In this study, we assume that µ_i ≠ µ_j if i ≠ j, and we define the k-th max and k-th argmax operators as max^k{} and arg max^k{}. The variables used in the study are defined as follows: • X_i(n): reward obtained from arm i at time step n, independent at each time step; x_i(n) is its observed value. We estimate the arm order of the reward expectations by calculating the sample mean of the accumulated rewards at each time step. Specifically, the sample mean μ̂_i(n) of the rewards obtained from arm i by time step n is the accumulated reward from arm i divided by the number of times arm i has been selected. At each time step n, we regard the arm j := arg max^k_i μ̂_i(n) as the estimated k-th best arm.
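The order estimation above can be sketched numerically. The following is a minimal illustration (not taken from the paper): it computes the sample means μ̂_i(n) and returns the arms sorted so that entry k is the estimated k-th best arm; the function name and the toy numbers are our own.

```python
import numpy as np

def estimate_arm_order(reward_sums, play_counts):
    """Estimate the arm order from the sample means of accumulated rewards.

    reward_sums[i] : total reward obtained from arm i so far
    play_counts[i] : number of times arm i has been played
    Returns arm indices sorted from the estimated best arm downward,
    i.e. entry k is arg max^k_i mu_hat_i(n).
    """
    # Sample mean mu_hat_i(n); arms never played default to 0 here.
    means = np.where(play_counts > 0, reward_sums / np.maximum(play_counts, 1), 0.0)
    return np.argsort(-means, kind="stable")

# Toy numbers (hypothetical): arm 2 has the highest sample mean, 0.55.
order = estimate_arm_order(np.array([10.0, 30.0, 55.0, 2.0]),
                           np.array([40, 60, 100, 20]))
# order -> [2, 1, 0, 3]
```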
Time-division multiplexing of laser chaos. The proposed method is based on the MAB algorithm reported in 2018 6 . This method consists of the following steps: [STEP 1] decision making for each bit of the slot machines, [STEP 2] playing the selected slot machine, and [STEP 3] updating the threshold values.
[STEP 1] Decision for each bit of the slot machine. First, the chaotic signal s(t_1) measured at t = t_1 is compared with a threshold value denoted TH_1. If s(t_1) ≥ TH_1, then bit S_1 is assigned 1; otherwise, S_1 is assigned 0. To determine the value of S_k (k = 2, ..., M), the chaotic signal s(t_k) measured at t = t_k (> t_{k−1}) is compared with a threshold value denoted TH_{k,S_1...S_{k−1}}. If s(t_k) ≥ TH_{k,S_1...S_{k−1}}, then bit S_k is assigned 1; otherwise, S_k is assigned 0. After this process, the slot machine whose number is represented by the binary code S_1...S_M is selected.
[STEP 2] Slot machine play. Play the selected slot machine.
[STEP 3] Threshold value adjustment. If the selected slot machine yields a reward, then the threshold values are adjusted so that the same decision becomes more likely. For example, if S_1 is assigned 0 and the player gets a reward, then TH_1 should be increased because doing so increases the likelihood of getting S_1 = 0 again. All of the other threshold values involved in determining the decision (i.e. TH_{2,S_1}, ..., TH_{M,S_1...S_{M−1}}) are updated in the same manner. If the selected slot machine does not yield a reward, then the threshold values are adjusted so that the same decision becomes less likely. For example, if S_1 is assigned 1 and the player does not get a reward, then TH_1 should be increased because doing so decreases the likelihood of getting S_1 = 1 again. Again, all of the other threshold values involved in determining the decision (i.e. TH_{2,S_1}, ..., TH_{M,S_1...S_{M−1}}) are updated in the same manner.
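The three steps above can be sketched as follows. This is a simplified toy version, not the paper's implementation: a pseudo-random signal stands in for the laser chaos series, the step size delta is a fixed hypothetical value, and the threshold-decay rate α used in the actual algorithm is omitted.

```python
import itertools
import random

def select_arm(next_sample, thresholds, M):
    """[STEP 1] Decide each bit S_k by comparing a chaos sample s(t_k)
    with the threshold TH_{k, S_1...S_{k-1}} chosen by the earlier bits."""
    bits = []
    for k in range(M):
        prefix = "".join(map(str, bits))        # S_1 ... S_{k-1}
        s = next_sample()                       # s(t_k)
        bits.append(1 if s >= thresholds[(k, prefix)] else 0)
    return bits

def update_thresholds(thresholds, bits, rewarded, delta=1.0):
    """[STEP 3] Shift every threshold on the decision path so that the
    same decision becomes more likely after a reward, less likely otherwise."""
    for k, bit in enumerate(bits):
        prefix = "".join(map(str, bits[:k]))
        # S_k = 0 came from s < TH, so raising TH favours 0 again;
        # S_k = 1 came from s >= TH, so lowering TH favours 1 again.
        step = delta if bit == 0 else -delta
        thresholds[(k, prefix)] += step if rewarded else -step

# Toy run on a 4-armed Bernoulli bandit ([STEP 2] is the reward draw).
M, probs = 2, [0.2, 0.4, 0.6, 0.8]
rng = random.Random(0)
thresholds = {(k, "".join(map(str, p))): 0.0
              for k in range(M) for p in itertools.product((0, 1), repeat=k)}
counts = [0] * len(probs)
for _ in range(5000):
    bits = select_arm(lambda: rng.uniform(-128, 128), thresholds, M)
    arm = int("".join(map(str, bits)), 2)
    counts[arm] += 1
    update_thresholds(thresholds, bits, rng.random() < probs[arm])
# The best arm (index 3, reward probability 0.8) comes to dominate play.
```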
Arm order recognition algorithm with confidence intervals. Confidence intervals. An overview of our proposed algorithm is shown in Fig. 1a. For each threshold value TH_{j,b_1...b_{j−1}} (j ∈ {1, ..., M}, b_1, ..., b_{j−1} ∈ {0, 1}) and z ∈ {0, 1}, the values P(z; n) and C(z; n) are calculated. Here, I_{j,b_1...b_{j−1}}(z) represents a subset of the machine arms: if machine i can be selected when the signal s(t_j) is greater than TH_{j,b_1...b_{j−1}}, then i is included in I_{j,b_1...b_{j−1}}(1); if machine i can be selected when the signal s(t_j) is less than or equal to TH_{j,b_1...b_{j−1}}, then i is included in I_{j,b_1...b_{j−1}}(0); otherwise, i is not included in the corresponding subset. An example for the eight-armed bandit problem is given in Fig. 1b. P_{j,b_1...b_{j−1}}(z; n) represents the sample mean of the rewards obtained from the machines in I_{j,b_1...b_{j−1}}(z), and C_{j,b_1...b_{j−1}}(z; n) represents the confidence interval width of the estimate P_{j,b_1...b_{j−1}}(z; n). The lower C(z; n) is, the higher the estimation accuracy. The parameter γ indicates the degree of exploration: a higher γ means that more exploration is needed to reach a given confidence interval width.
Coarseness/fineness of exploration adjusted by confidence intervals. At each threshold TH_{j,b_1...b_{j−1}}, if the two confidence intervals overlap, there is a likelihood that the order relationship between P(0; n) and P(1; n) may change; that is, the order of P(0; n) and P(1; n) is not yet known. The exploration process should therefore be executed more carefully: the threshold value should be moved closer to 0, which is the balanced situation, and further exploration should be performed, so that the threshold adjustment becomes finer. Conversely, if the two intervals do not overlap, the likelihood of a wrong estimate of the order relationship between P(0; n) and P(1; n) is low. Hence, we can continue the exploration more coarsely so that the threshold adjustment is accelerated (Fig. 1c).
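The overlap test that drives this coarse/fine switching can be written in a few lines. This is a minimal sketch under our own naming (the function names and numeric values are illustrative, not the paper's):

```python
def intervals_overlap(p0, c0, p1, c1):
    """True if [p0 - c0, p0 + c0] and [p1 - c1, p1 + c1] intersect,
    i.e. the order of P(0; n) and P(1; n) is not yet resolved."""
    return abs(p0 - p1) <= c0 + c1

def exploration_mode(p0, c0, p1, c1):
    # Overlapping intervals: order still uncertain -> explore finely,
    # pulling the threshold back toward the balanced value 0.
    # Disjoint intervals: order settled -> explore coarsely so that
    # the threshold adjustment accelerates toward the better branch.
    return "fine" if intervals_overlap(p0, c0, p1, c1) else "coarse"

mode_a = exploration_mode(0.52, 0.05, 0.48, 0.05)   # overlapping -> "fine"
mode_b = exploration_mode(0.80, 0.03, 0.40, 0.03)   # disjoint -> "coarse"
```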

Results
Experimental settings. We evaluated the performance of the methods for two cases: a four-armed bandit and an eight-armed bandit. The reward of each arm is assumed to follow a Bernoulli distribution, with the reward probabilities satisfying the following conditions. In the experiments, a variety of assignments of reward probabilities ν satisfying these conditions were prepared, and the performance was evaluated under every reward environment ν. We defined the reward, the regret, and the correct order rate (COR) as metrics to quantitatively evaluate the performance of the methods.
Here, n denotes the number of time steps, T_i(n) is the number of selections of arm i up to time step n, and l_m represents the number of measurements in one reward environment ν. For the accuracy of arm order recognition, we considered the estimation accuracy of the top four arms regardless of the total number of arms. We prepared all 144 reward environments ν (all combinations satisfying the above conditions and max_{i≠j} |µ_i − µ_j| = 0.3) for the four-armed bandit problems and 100 randomly selected reward environments for the eight-armed bandit problems. The performances of four methods were compared: RoundRobin (all arms are selected in turn at each time step), UCB1 (a method for maximizing the total reward proposed in 2002 11), Chaos (the previous method using the laser chaos time series 6, which only finds the best arm and does not recognize the order), and Chaos-CI (the proposed method using the laser chaos time series together with confidence intervals). The details of the UCB1 implementation used in the present study are described in the Methods section. The purpose of this study is to extend the existing Chaos method to recognize the arm order, so the trade-off between order recognition and reward maximization must be considered. As introduced above, RoundRobin and UCB1 were included for quantitative performance analysis: RoundRobin systematically accomplishes order recognition, whereas UCB1 is known to achieve O(log n) regret at time n. We consider these to be appropriate and contrasting representative methods in the literature for examining this trade-off, which is the essential interest of the present study. Meanwhile, comparison with other bandit algorithms, such as Thompson sampling 12 or arm elimination 13, is expected to trigger stimulating future discussions, leading to further improvement of the proposed Chaos-CI algorithm.
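Since the metric equations are not reproduced in this extract, the following sketch assumes the standard definitions: regret(n) as the gap between the best arm's expected cumulative reward and the realized expectation, and COR as the indicator of a correctly estimated top-four ranking, averaged over measurements. Function names and toy numbers are our own.

```python
import numpy as np

def regret(mu, play_counts):
    """Regret after n = sum(play_counts) steps, assuming the standard form
    regret(n) = n * max_i mu_i - sum_i mu_i * T_i(n)."""
    n = play_counts.sum()
    return n * mu.max() - float(mu @ play_counts)

def correct_order(mu, estimated_order, top=4):
    """1 if the estimated ranking of the top `top` arms matches the true
    ranking, else 0; COR(n) is this indicator averaged over measurements."""
    k = min(top, len(mu))
    true_order = np.argsort(-mu, kind="stable")[:k]
    return int(np.array_equal(true_order, np.asarray(estimated_order)[:k]))

mu = np.array([0.9, 0.8, 0.7, 0.6])
r = regret(mu, np.array([70, 10, 10, 10]))   # 100 * 0.9 - 84 = 6.0
ok = correct_order(mu, [0, 1, 2, 3])         # exact order -> 1
```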
Evaluation under one reward environment. The curves in Fig. 2a and b show the time evolutions of regret(n) and COR(n), respectively, over l_m = 12,000 measurements under specific reward environments ν = (µ_0, ..., µ_{K−1}). Columns (i) and (ii) pertain to four-armed bandit problems under two reward environments; the environment in column (i) is ν = (0.9, 0.8, 0.7, 0.6). The curves are colour coded for easy comparison of the methods. In arm order recognition, Chaos-CI and RoundRobin presented high accuracy from early time steps. In terms of total reward, Chaos and UCB1 achieved the greatest rewards. The post-convergence behaviour of Chaos and Chaos-CI is not necessarily the same: the parameters that determine the scale of the threshold change are fixed values in Chaos, whereas they change adaptively according to past reward information in Chaos-CI.
Evaluation over the whole set of reward environments. Figure 3a summarizes the relationship between the total reward and the order estimation accuracy: the x-axis represents the normalized reward reward†(n), and the y-axis represents COR(n). Each point in the graph indicates (reward†(n), COR(n)) at time step n = 10,000 under one reward environment ν. Figure 3b shows the time evolution of the average value of each metric over the whole ensemble of reward environments from n = 1 to n = 10,000. Compared with UCB1, Chaos-CI recognizes the arm order faster; at the same time, Chaos-CI obtains more rewards than RoundRobin.
Figure 3. (a) Scatter plot of (reward†_ν(10,000), COR_ν(10,000)) for each reward environment ν: the closer a point is to the top of the graph, the higher the order estimation accuracy, and the closer it is to the right, the greater the obtained reward. (b) Time evolution of the average value of each metric over the whole ensemble of reward environments (1 ≤ n ≤ 10,000).

Discussion
Difficulty of simultaneously maximizing rewards and recognizing the arm order. The results of the numerical simulations on the four-armed and eight-armed bandit problems show similar trends: there is a trade-off between maximizing the total reward and recognizing the arm order. Because RoundRobin selects all arms equally, it always achieves a perfect COR at time step n = 10,000 for any given reward environment; however, it cannot maximize the reward because its regret increases linearly with time. By contrast, Chaos achieved normalized rewards of almost unity at time step n = 10,000 for many types of reward environments; however, its arm order recognition accuracy is inferior because the arm selection is strongly biased towards the best arm. In terms of the COR, RoundRobin and Chaos-CI (the proposed method) quickly converged to unity. In terms of the total reward, Chaos (the previous method) and UCB1 exploit more actively and obtain greater rewards. The proposed method, Chaos-CI, achieves an outstanding balance between arm order recognition and reward.
Number of arm selections. Figure 4a shows that the number of selections of each arm, T_i(n), increases in a linear order. Therefore, the arm order recognition is faster than that of UCB1. Although the selection of non-top arms in a linear order causes the regret to increase linearly, the slope of this linear-order regret is significantly smaller than that of RoundRobin because better arms are selected more often, i.e., the search is prioritized (T_[1](n) > ··· > T_[K](n)).

Environment dependency. As shown in Figs. 3 and 4, the performance of Chaos differs greatly between the reward environments ν_1 and ν_2. This finding is clearly linked to the arm selection numbers T_i(n). In reward environment ν_1, all T_i(n) evolve in a linear order, whereas in reward environment ν_2, T_i(n) (i ≠ i*) is only approximately 100 at time step n = 50,000. Thus, the performance of Chaos heavily depends on the given reward environment. Table 1 summarizes the sample variance of the metrics over the 100 reward environments in the eight-armed bandit. One might expect the performance of our proposed algorithm (Chaos-CI) to depend on the arrangement of the arms: if the differences between the average rewards of the branches are large, the ranking estimation should be easy. However, from Fig. 3a and Table 1, we observe that the proposed method always estimates the ranking with high accuracy regardless of the arrangement of the arms. This estimation accuracy outperforms that of UCB1, an algorithm that does not depend on the arrangement of the arms. In terms of the obtained reward, Chaos-CI has a larger variance than UCB1 and Chaos but is more stable than RoundRobin.
In the experiments, the expected rewards µ_i are restricted so that the estimation difficulty does not vary drastically from problem to problem, because the larger the differences between the expected rewards of the arms, the easier the problem is to solve. Meanwhile, if the differences between the reward expectations of the arms become even smaller (specifically, smaller than 0.1), correct order recognition in its exact sense becomes significantly challenging. At the same time, such a case means that there is no significant reward difference regardless of which arm is pulled; hence, the evaluation method, or the definition of correct order recognition itself, may need revision. We expect these points to form the basis of interesting future studies.
On the other hand, if the reward distribution is more diverse, for example (0.95, 0.9, 0.6, 0.5), UCB1, which aims to maximize the cumulative reward, stops selecting the lower-reward-probability arms in an early exploration phase, leading to degraded rank recognition accuracy. Conversely, since the proposed method adjusts the thresholds based on the confidence intervals of all branches, its rank recognition accuracy is expected not to be degraded.

Conclusions
In this study, we examined ultrafast decision making with laser chaos time series in reinforcement learning, specifically the MAB problem, with the goal of recognizing the arm order of reward expectations by extending the previous method based on time-division multiplexing of laser chaos time series. In the proposed method, we introduced exploration-degree adjustments based on the confidence intervals of the estimated rewards. The results of numerical simulations based on experimental time series show that the selection number of each arm increases linearly, leading to high and rapid order recognition accuracy. Furthermore, arms with higher reward expectations are selected more frequently; hence, the slope of the regret is reduced, although the selection number of each arm still increases linearly. Compared with UCB1 and Chaos, Chaos-CI (the proposed method) is less dependent on the reward environment, indicating its potential significance in terms of robustness to environmental changes; in other words, Chaos-CI makes more accurate and stable estimates of the arm order. Meanwhile, expressing both the accuracy of rank estimation and the earned reward in a single metric is an interesting, important, and challenging problem, which we plan to explore in future research. Such order recognition is useful in applications such as channel selection and resource allocation in information and communications technology, where compromise actions or intelligent arbitrations are expected.

Methods
Optical system. The device used was a distributed feedback semiconductor laser mounted on a butterfly package with optical fibre pigtails (NTT Electronics, KELD1C5GAAA). The injection current of the semiconductor laser was set to 58.5 mA (5.37 I_th), where the lasing threshold I_th was 10.9 mA. The relaxation oscillation frequency of the laser was 6.5 GHz, and its temperature was maintained at 294.83 K. The optical output power was 13.2 mW. The laser was connected to a variable fibre reflector through a fibre coupler, where a fraction of the light was reflected back to the laser, generating high-frequency chaotic oscillations of the optical intensity 3,14,15. The length of the fibre between the laser and the reflector was 4.55 m, corresponding to a feedback delay time (round trip) of 43.8 ns. Polarization-maintaining fibres were used for all of the optical fibre components. The optical signal was detected by a photodetector (New Focus, 1474-A, 38 GHz bandwidth) and sampled using a digital oscilloscope (Tektronix, DPO73304D, 33 GHz bandwidth, 100 GSample/s, eight-bit vertical resolution). The RF spectrum of the laser was measured by an RF spectrum analyzer (Agilent, N9010A-544, 44 GHz bandwidth).

UCB1 algorithm. In UCB1 11, we select the arm j that maximizes the score X̄_j(n) + √(2 ln n / T_j(n)), where X̄_j(n) is the sample average reward obtained from arm j, T_j(n) is the number of times machine j has been played so far, and n is the total number of plays so far.
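The UCB1 selection rule can be sketched directly from its standard definition (Auer et al., 2002); the function name and toy numbers below are our own, and any further details specific to the paper's variant are not reproduced here.

```python
import math

def ucb1_select(reward_sums, play_counts, n):
    """Return the arm maximizing X_bar_j(n) + sqrt(2 ln n / T_j(n)).
    Arms that have never been played are selected first."""
    for j, t in enumerate(play_counts):
        if t == 0:
            return j

    def score(j):
        mean = reward_sums[j] / play_counts[j]          # X_bar_j(n)
        bonus = math.sqrt(2 * math.log(n) / play_counts[j])
        return mean + bonus

    return max(range(len(play_counts)), key=score)

a = ucb1_select([0.0, 0.0, 0.0], [1, 0, 2], n=3)   # arm 1 still unplayed -> 1
b = ucb1_select([9.0, 5.0], [10, 10], n=20)        # equal bonuses, higher mean -> 0
```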

Details of the time-division multiplexing algorithm.
Parameter settings. In the experiments, we set the parameters in Algorithm 1 as follows: α = 0.99, with the two remaining step parameters set to 1 and 0.1. These are the same values as in the previous experiment 6. The signal s(τ) is represented by an 8-bit integer: −128 ≤ s(τ) < 128.
Convergence of Algorithm 1 based on a uniform distribution. This discussion of convergence concerns only the two-armed bandit and assumes that the random sequence is uniformly distributed and independent at each time, an assumption that does not hold for chaotic time sequences. We assume that K = 2 and that the time series compared with the thresholds follows a uniform distribution on [−1/2, 1/2] at every time. We define the value of the threshold TH_1 at the beginning of time step n as w(n), and write down the time evolution of w(n) and its expectation.
Because we assume that s(t) follows a uniform distribution, this expression holds while the products of n and the two step parameters remain smaller than 1/2; in this case, one of the arms comes to be selected intensively as time passes. The above discussion shows that the convergence and performance of Algorithm 1 depend on the learning rate α, the two exploration-degree parameters, and the reward environment (µ_0, µ_1).
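The threshold dynamics discussed above can be illustrated by a small Monte-Carlo sketch. This is our own toy model, not the paper's derivation: a uniform [−1/2, 1/2] signal stands in for the time series, the threshold decay α is omitted, and step_up / step_down are hypothetical names for the two fixed step parameters.

```python
import random

def simulate_threshold(mu0, mu1, step_up, step_down, steps, seed=0):
    """Monte-Carlo sketch of the threshold random walk w(n) for K = 2
    with a uniform [-1/2, 1/2] signal."""
    rng = random.Random(seed)
    w = 0.0                                      # w(1): initial TH_1
    for _ in range(steps):
        s = rng.uniform(-0.5, 0.5)
        arm = 1 if s >= w else 0                 # threshold decision
        rewarded = rng.random() < (mu1 if arm else mu0)
        # A reward reinforces the same decision; a failure discourages it:
        # arm 0 rewarded -> raise w; arm 1 rewarded -> lower w; and vice versa.
        if arm == 0:
            w += step_up if rewarded else -step_down
        else:
            w -= step_up if rewarded else -step_down
    return w

w_final = simulate_threshold(0.2, 0.8, 0.01, 0.01, steps=5000)
# With mu1 > mu0 the walk drifts negative, so arm 1 is selected intensively.
```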
Dependency on the scale parameter of the confidence intervals. Figure 5 shows the influence of the confidence-interval scale parameter γ, which controls the width of the confidence intervals. The correct order rate becomes higher and the obtained reward smaller as γ becomes smaller, and vice versa. When γ = √2, both the reward and the correct order rate are relatively high; we therefore use this value of γ for Chaos-CI in the main text.
Convergence of the proposed method based on a uniform distribution. As described above, the performance of the proposed algorithm depends heavily on the two exploration-degree parameters. Therefore, in the proposed method, exploration-degree adjustments based on confidence intervals are added to Algorithm 1: if the exploration is not yet sufficient, the thresholds are set close to 0 and the values of the two parameters decrease, so the thresholds are less likely to diverge, which improves the estimation accuracy. If sufficient exploration has been performed, the values of the two parameters increase, so the thresholds are more likely to diverge, which leads to the intensive selection of a better arm and a slower increase of the regret.

Data availability
The datasets generated during the current study are available from the corresponding author on reasonable request.