Introduction

Multi-agent coordination is prevalent in many real-world applications, such as traffic light control1, warehouse commissioning2 and wind farm control3,4. Often, such settings can be formulated as coordination problems in which agents have to cooperate in order to optimize a shared team reward5.

Handling multi-agent settings is challenging, as the size of the joint action space scales exponentially with the number of agents in the system. Therefore, an approach that directly considers all agents’ actions jointly is computationally intractable. This has made such coordination problems a central focus of the planning literature6,7,8,9. Fortunately, in real-world settings agents often only directly affect a limited set of neighbouring agents. This means that the global reward received by all agents can be decomposed into local components that only depend on small subsets of agents. Exploiting such loose couplings is key to keeping multi-agent decision problems tractable10.

In this work, we consider learning to coordinate in multi-agent systems. For example, consider a wind farm control task, in which a set of wind turbines must be controlled to maximize the farm’s total productivity. When upstream turbines directly face the incoming wind stream, they extract energy from the wind. This reduces the productivity of downstream turbines, potentially harming the farm’s overall power production. However, turbines can rotate in order to deflect the turbulent flow away from turbines downwind11. Due to the complex nature of the aerodynamic interactions between the turbines, constructing a model of the environment and deriving a control policy using planning techniques is extremely challenging12. Instead, a joint control policy for the turbines can be learned to effectively maximize the productivity of the wind farm13. The system is loosely coupled, as redirection only directly affects adjacent turbines.

While most of the literature only considers approximate reinforcement learning methods for learning in multi-agent systems, it has recently been shown14 that it is possible to achieve theoretical bounds on the regret (i.e., how much reward is lost due to learning). In this work, we use the multi-agent multi-armed bandit problem definition, and improve upon the state of the art. Specifically, we propose the multi-agent Thompson sampling (MATS) algorithm15, which exploits loosely-coupled interactions in multi-agent systems. The loose couplings are formalized as a coordination graph, which defines for each pair of agents whether their actions depend on each other. We assume the graph structure is known beforehand, which is the case in many real-world applications with sparse agent interactions (e.g., wind farm control).

Our method leverages the exploration-exploitation mechanism of Thompson sampling (TS). TS has been shown to be highly competitive with other popular methods, e.g., UCB16. Recently, theoretical guarantees on its regret have been established17, which has made the method increasingly popular in the literature. Additionally, due to its Bayesian nature, problem-specific priors can be specified. We argue that this has strong relevance in many practical fields, such as advertisement selection16 and influenza mitigation18,19.

We provide a finite-time Bayesian regret analysis and prove that the upper regret bound of MATS is low-order polynomial in the number of actions of a single agent for sparse coordination graphs (Corollary 1). This is a significant improvement over the exponential bound of classic TS, which is obtained when the coordination graph is ignored17. We show that MATS improves upon the state of the art in various synthetic settings. Finally, we demonstrate that MATS achieves high performance on a realistic wind farm control task, in which multiple wind turbines have to be jointly aligned to maximize the total power production.

Problem Statement

In this work, we adopt the multi-agent multi-armed bandit (MAMAB) setting14,20. A MAMAB is similar to the multi-armed bandit formalism21, but considers multiple agents factored into groups. When the agents have pulled a joint arm, each group receives a reward. The goal shared by all agents is to maximize the total sum of rewards. Formally,

Definition 1. A multi-agent multi-armed bandit (MAMAB) is a tuple \(\langle {\mathcal{D}},{\mathcal{A}},f\rangle \) where

  • \({\mathcal{D}}\) is the set of \(m\) enumerated agents. This set is factorized into \(\rho \), possibly overlapping, subsets of agents \({{\mathcal{D}}}^{e}\).

  • \({\mathcal{A}}={{\mathcal{A}}}_{1}\times \ldots \times {{\mathcal{A}}}_{m}\) is the set of joint actions, or joint arms, which is the Cartesian product of the sets of actions \({{\mathcal{A}}}_{i}\) for each of the \(m\) agents in \({\mathcal{D}}\). We denote \({{\mathcal{A}}}^{e}\) as the set of local joint actions, or local arms, for the group \({{\mathcal{D}}}^{e}\).

  • \(f({\boldsymbol{a}})\) is a stochastic function providing a global reward when a joint arm, \({\boldsymbol{a}}\in {\mathcal{A}}\), is pulled. The global reward function is decomposed into \(\rho \) noisy, observable and independent local reward functions, i.e., \(f({\boldsymbol{a}})={\sum }_{e\mathrm{=1}}^{\rho }{f}^{e}({{\boldsymbol{a}}}^{e})\). A local function \({f}^{e}\) only depends on the local arm \({{\boldsymbol{a}}}^{e}\) of the subset of agents in \({{\mathcal{D}}}^{e}\).

We denote the mean reward of a joint arm as \(\mu ({\boldsymbol{a}})={\sum }_{e\mathrm{=1}}^{\rho }{\mu }^{e}({{\boldsymbol{a}}}^{e})\). For simplicity, we refer to the \({i}^{{\rm{th}}}\) agent by its index \(i\).

The dependencies between the local reward functions and the agents are described by a coordination graph8.

Definition 2. A coordination graph is a bipartite graph \(G=\langle {\mathcal{D}},\{{f}^{e}{\}}_{e\mathrm{=1}}^{\rho },E\rangle \), whose nodes \({\mathcal{D}}\) are agents and components of a factored reward function \(f={\sum }_{e\mathrm{=1}}^{\rho }\,{f}^{e}\), and an edge \(\langle i,{f}^{e}\rangle \in E\) exists if and only if agent \(i\) influences component \({f}^{e}\).

The dependencies in a MAMAB can be described by setting \(E=\{\langle i,{f}^{e}\rangle |i\in {{\mathcal{D}}}^{e}\}\).
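To make this concrete, the following minimal Python sketch represents a MAMAB with Bernoulli local rewards; the class, its attributes and the example numbers are illustrative assumptions, not the interface of the authors' implementation.

```python
import numpy as np

class MAMAB:
    """Minimal factored bandit with Bernoulli local rewards (for concreteness).

    groups[e] -- tuple of agent indices in D^e
    means[e]  -- dict mapping each local arm (tuple of actions) to its mean mu^e(a^e)
    """
    def __init__(self, n_actions, groups, means, rng=None):
        self.n_actions = n_actions                             # |A_i|, assumed equal for all agents
        self.groups = groups
        self.means = means
        self.n_agents = 1 + max(i for g in groups for i in g)
        self.rng = rng or np.random.default_rng()

    def pull(self, joint_arm):
        """Pull a joint arm a; return one noisy local reward f^e(a^e) per group."""
        rewards = []
        for e, group in enumerate(self.groups):
            local_arm = tuple(joint_arm[i] for i in group)
            rewards.append(float(self.rng.random() < self.means[e][local_arm]))
        return rewards

# Example: 3 agents, two actions each, two overlapping groups (0,1) and (1,2).
groups = [(0, 1), (1, 2)]
means = [{(a, b): 0.75 if a != b else 0.25 for a in (0, 1) for b in (0, 1)}
         for _ in groups]
env = MAMAB(n_actions=2, groups=groups, means=means)
print(env.pull((0, 1, 0)))   # two local rewards, one per group
```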

In this setting, the objective is to minimize the expected cumulative regret22, which is the cost incurred when pulling a particular joint arm instead of the optimal one.

Definition 3. The expected cumulative regret of pulling a sequence of joint arms until time step \(T\) according to policy \(\pi \) is

$${\mathbb{E}}[R(T,\pi )]\mathop{=}\limits^{\Delta }{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|\pi ]$$
(1)

with

$$\begin{array}{c}\Delta ({{\boldsymbol{a}}}_{t})\mathop{=}\limits^{\Delta }\mu ({{\boldsymbol{a}}}_{\ast })-\mu ({{\boldsymbol{a}}}_{t})=\mathop{\sum }\limits_{e=1}^{\rho }\left({\mu }^{e}({{\boldsymbol{a}}}_{\ast }^{e})-{\mu }^{e}({{\boldsymbol{a}}}_{t}^{e})\right),\end{array}$$
(2)

where \({{\boldsymbol{a}}}_{\ast }\) is the optimal joint arm and \({{\boldsymbol{a}}}_{t}\) is the joint arm pulled at time \(t\). For the sake of brevity, we will omit \(\pi \) when the context is clear.

Cumulative regret can be minimized by using a policy that considers the full joint arm space, thereby ignoring loose couplings between agents. This leads to a combinatorial problem, as the joint arm space scales exponentially with the number of agents. Therefore, loose couplings need to be taken into account whenever possible.

Multi-agent Thompson sampling

We propose the multi-agent Thompson sampling (MATS) algorithm for decision making in loosely-coupled multi-agent multi-armed bandit problems. Consider a MAMAB with groups \({{\mathcal{D}}}^{e}\) (Definition 1). The local means \({\mu }^{e}({{\boldsymbol{a}}}^{e})\) are treated as unknown. Following the Bayesian formalism, we express our beliefs over the local means \({\mu }^{e}({{\boldsymbol{a}}}^{e})\) in the form of a prior, \({Q}_{{{\boldsymbol{a}}}^{e}}^{e}(\cdot )\). At each time step \(t\), MATS draws a sample \({\mu }_{t}^{e}({{\boldsymbol{a}}}^{e})\) from the posterior for each group and local arm given the history, \({{\mathcal{H}}}_{t-1}\), consisting of the local actions and rewards associated with past pulls:

$$\begin{array}{cc}{\mu }_{t}^{e}({{\boldsymbol{a}}}^{e}) & \sim {Q}_{{{\boldsymbol{a}}}^{e}}^{e}(\cdot |{{\mathcal{H}}}_{t-1}),\text{with}\\ {{\mathcal{H}}}_{t-1} & \mathop{=}\limits^{\Delta }{\cup }_{i=1}^{t-1}{\cup }_{e=1}^{\rho }\{\langle {{\boldsymbol{a}}}_{i}^{e},{f}_{i}^{e}({{\boldsymbol{a}}}_{i}^{e})\rangle \}.\end{array}$$
(3)

Note that during this step, MATS samples directly from the posterior over the unknown local means, which implies that the sample \({\mu }_{t}^{e}({{\boldsymbol{a}}}^{e})\) and the unknown mean \({\mu }^{e}({{\boldsymbol{a}}}^{e})\) are independent and identically distributed at time step \(t\), given the history \({{\mathcal{H}}}_{t-1}\).

Thompson sampling (TS) chooses the arm with the highest sample, i.e.,

$${{\boldsymbol{a}}}_{t}=\mathop{\arg \max }\limits_{{\boldsymbol{a}}}\,{\mu }_{t}({\boldsymbol{a}}).$$
(4)

However, in our case, the expected reward is decomposed into several local means. Because groups may overlap, the locally optimal arms of an agent that belongs to two groups can conflict. Therefore, we must define the argmax operator to deal with the factored representation of a MAMAB, while still returning the full joint arm that maximizes the sum of samples, i.e.,

$${{\boldsymbol{a}}}_{t}=\mathop{\arg \max }\limits_{{\boldsymbol{a}}}\mathop{\sum }\limits_{e=1}^{\rho }{\mu }_{t}^{e}({{\boldsymbol{a}}}^{e}).$$
(5)

To this end, we use variable elimination (VE), which computes the joint arm that maximizes the global reward without explicitly enumerating the full joint arm space8. Specifically, VE consecutively eliminates an agent from the coordination graph, while computing its best response with respect to its neighbours. VE is guaranteed to return the optimal joint arm and has a computational complexity that is exponential in the induced width of the graph, i.e., the maximum number of neighbours of an agent at the time of its elimination. However, as the method is typically applied to a loosely-coupled coordination graph, the induced width is generally much smaller than the size of the full joint action space, which renders the maximization problem tractable8,9. Approximate, more efficient alternatives exist, such as max-plus23, but using them would invalidate the proof of the Bayesian regret bound (Theorem 1).
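The following sketch illustrates VE for the maximization in Eq. 5, operating on factors of the form (group, table of sampled local means). It is a didactic implementation under the simplifying assumption that all agents share the same number of actions, and it makes no attempt to pick a good elimination order (which governs the induced width); it is not the authors' code.

```python
from itertools import product

def variable_elimination(n_agents, n_actions, factors, order=None):
    """Exact arg-max of sum_e f^e(a^e) over joint arms via variable elimination.

    factors -- list of (scope, table): scope is a tuple of agent indices,
               table maps tuples of local actions (ordered as in scope) to values.
    Returns (best_joint_arm, best_value).
    """
    order = list(order) if order is not None else list(range(n_agents))
    factors = [(tuple(s), dict(t)) for s, t in factors]
    best_response = {}          # agent -> (conditioning scope, best-action table)

    for agent in order:
        involved = [f for f in factors if agent in f[0]]
        factors = [f for f in factors if agent not in f[0]]
        # New scope: all not-yet-eliminated neighbours of `agent` in the involved factors.
        new_scope = tuple(sorted({i for s, _ in involved for i in s if i != agent}))
        new_table, response = {}, {}
        for ctx in product(range(n_actions), repeat=len(new_scope)):
            assign = dict(zip(new_scope, ctx))
            best_val, best_act = None, None
            for a in range(n_actions):
                assign[agent] = a
                val = sum(t[tuple(assign[i] for i in s)] for s, t in involved)
                if best_val is None or val > best_val:
                    best_val, best_act = val, a
            new_table[ctx], response[ctx] = best_val, best_act
        factors.append((new_scope, new_table))
        best_response[agent] = (new_scope, response)

    best_value = sum(t[()] for _, t in factors)      # only empty-scope factors remain
    # Back-substitution in reverse elimination order recovers the maximizing joint arm.
    arm = {}
    for agent in reversed(order):
        scope, response = best_response[agent]
        arm[agent] = response[tuple(arm[i] for i in scope)]
    return tuple(arm[i] for i in range(n_agents)), best_value

# Example: factors built from (hypothetical) sampled local means for the 3-agent MAMAB above.
factors = [((0, 1), {(a, b): 0.75 if a != b else 0.25 for a in (0, 1) for b in (0, 1)}),
           ((1, 2), {(a, b): 0.75 if a != b else 0.25 for a in (0, 1) for b in (0, 1)})]
print(variable_elimination(3, 2, factors))   # e.g. ((0, 1, 0), 1.5)
```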

Finally, the joint arm \({{\boldsymbol{a}}}_{t}\) that maximizes Eq. 5 is pulled, and a reward \({f}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e})\) is obtained for each group. MATS is formally described in Algorithm 1.

Algorithm 1: MATS.
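Combining the two sketches above, a minimal MATS loop for Bernoulli local rewards with Jeffreys Beta(0.5, 0.5) priors (as used in the experiments below) might look as follows; this is an illustrative sketch of Algorithm 1, not the published implementation.

```python
import numpy as np
from itertools import product

def mats(env, horizon, rng=None):
    """MATS sketch for Bernoulli local rewards with Jeffreys Beta(0.5, 0.5) priors,
    reusing the MAMAB and variable_elimination sketches above."""
    rng = rng or np.random.default_rng()
    # Per group e and local arm a^e: Beta posterior parameters (successes, failures).
    alpha, beta = [], []
    for group in env.groups:
        local_arms = list(product(range(env.n_actions), repeat=len(group)))
        alpha.append({la: 0.5 for la in local_arms})
        beta.append({la: 0.5 for la in local_arms})

    pulls = []
    for _ in range(horizon):
        # 1. Sample a mean for every group and local arm from its posterior (Eq. 3).
        factors = [(group, {la: rng.beta(alpha[e][la], beta[e][la]) for la in alpha[e]})
                   for e, group in enumerate(env.groups)]
        # 2. Joint arm maximizing the sum of sampled means, via variable elimination (Eq. 5).
        joint_arm, _ = variable_elimination(env.n_agents, env.n_actions, factors)
        # 3. Pull the joint arm and update every group's posterior with its local reward.
        rewards = env.pull(joint_arm)
        for e, group in enumerate(env.groups):
            la = tuple(joint_arm[i] for i in group)
            alpha[e][la] += rewards[e]        # reward is 1 (success) or 0 (failure)
            beta[e][la] += 1.0 - rewards[e]
        pulls.append(joint_arm)
    return pulls

# Run MATS on the 3-agent example environment from the first sketch:
# print(mats(env, horizon=100)[-5:])
```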

MATS belongs to the class of probability matching methods24.

Definition 4. Probability matching is a decision strategy that chooses an arm with probability equal to the probability of that arm being optimal, given the history \({{\mathcal{H}}}_{t-1}\), i.e.,

$$P({{\boldsymbol{a}}}_{t}=\cdot |{{\mathcal{H}}}_{t-1})=P({{\boldsymbol{a}}}_{\ast }=\cdot |{{\mathcal{H}}}_{t-1}),$$
(6)

where \({{\boldsymbol{a}}}_{\ast }\) is the optimal arm and \({{\boldsymbol{a}}}_{t}\) is the pulled arm at time \(t\).

Intuitively, MATS samples the local mean rewards according to the user’s current beliefs at each time step, and maximizes over those samples to find the joint arm that is optimal with respect to the factored structure of Definition 1. This process is conceptually similar to traditional TS21.

Bayesian regret analysis

Many multi-agent systems are composed of locally connected agents. When such a system is formalized as a MAMAB (Definition 1), our method is able to exploit these local structures during the decision process. We provide a regret bound for MATS that grows sublinearly in the time horizon \(T\) and scales with the total number of local arms, \(\tilde{A}={\sum }_{e=1}^{\rho }|{{\mathcal{A}}}^{e}|\), rather than with the exponentially larger number of joint arms.

Consider a MAMAB \(\langle {\mathcal{D}},{\mathcal{A}},f\rangle \) with \(\rho \) groups and the following assumption on the rewards:

Assumption 1. The global rewards have a mean between 0 and 1, i.e.,

$$\mu ({\boldsymbol{a}})\in [0,1],{\rm{\forall }}{\boldsymbol{a}}\in {\mathcal{A}}.$$

Assumption 2. The local rewards, shifted by their means, are \(\sigma \)-subgaussian, i.e., \(\forall e\in [1..\rho ],\,{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}\),

$${\mathbb{E}}[\exp (t({f}^{e}({{\boldsymbol{a}}}^{e})-{\mu }^{e}({{\boldsymbol{a}}}^{e})))]\le \exp (0.5{\sigma }^{2}{t}^{2}).$$

We maintain the pull counters \({n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})\) and estimated means \({\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})\) for local arms \({{\boldsymbol{a}}}^{e}\).

Consider the event \({ {\mathcal E} }_{T}\), which states that, until time step \(T\), the differences between the local sample means and true means are bounded by a time-dependent threshold, i.e.,

$${{\mathcal{E}}}_{T}\mathop{=}\limits^{\Delta }({\rm{\forall }}e,{{\boldsymbol{a}}}^{e},t:|{\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})-{\mu }^{e}({{\boldsymbol{a}}}^{e})|\le {c}_{t}^{e}({{\boldsymbol{a}}}^{e}))$$
(7)

with

$${c}_{t}^{e}({{\boldsymbol{a}}}^{e})\mathop{=}\limits^{\Delta }\sqrt{\frac{2{\sigma }^{2}\log ({\delta }^{-1})}{{n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})}},$$
(8)

where \(\delta \) is a free parameter that will be chosen later. We denote the complement of the event by \({\bar{ {\mathcal E} }}_{T}\).

Lemma 1. (Concentration inequality) The probability of exceeding the error bound on the local sample means is at most \(2\tilde{A}T\delta \). Specifically,

$$P({\bar{{\mathcal{E}}}}_{T})\le 2\tilde{A}T\delta .$$
(9)

Proof. Using the union bound (U), we can bound the probability of observing event \({\bar{ {\mathcal E} }}_{T}\) as

$$\begin{array}{cc}P({\bar{{\mathcal{E}}}}_{T}) & \mathop{=}\limits^{(7)}P({\rm{\exists }}t,e,{{\boldsymbol{a}}}^{e}:|{\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})-{\mu }^{e}({{\boldsymbol{a}}}^{e})| > {c}_{t}^{e}({{\boldsymbol{a}}}^{e}))\\ & \mathop{\le }\limits^{({\rm{U}})}\mathop{\sum }\limits_{t=1}^{T}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}P(|{\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})-{\mu }^{e}({{\boldsymbol{a}}}^{e})| > {c}_{t}^{e}({{\boldsymbol{a}}}^{e})).\end{array}$$
(10)

The estimated mean \({\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})\) is the average of \({n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})\) independent \(\sigma \)-subgaussian random variables with mean \({\mu }^{e}({{\boldsymbol{a}}}^{e})\). Hence, Hoeffding’s inequality (H) is applicable25.

$$\begin{array}{cc}P(|{\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})-{\mu }^{e}({{\boldsymbol{a}}}^{e})| > {c}_{t}^{e}({{\boldsymbol{a}}}^{e})\,|\,{\mu }^{e}({{\boldsymbol{a}}}^{e})) & \mathop{\le }\limits^{({\rm{H}})}2\exp \left(-\frac{{n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})}{2{\sigma }^{2}}{({c}_{t}^{e}({{\boldsymbol{a}}}^{e}))}^{2}\right)\\ & \mathop{=}\limits^{(8)}2\exp \left(-\frac{{n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})}{2{\sigma }^{2}}\cdot \frac{2{\sigma }^{2}\log ({\delta }^{-1})}{{n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})}\right)\\ & =2\exp (-\log ({\delta }^{-1}))=2\delta .\end{array}$$
(11)

Therefore, the following concentration inequality on \({\bar{ {\mathcal E} }}_{T}\) holds:

$$P({\bar{{\mathcal{E}}}}_{T})\le \mathop{\sum }\limits_{t=1}^{T}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}2\delta =2\tilde{A}T\delta .$$
(12)

Lemma 2. (Bayesian regret bound under \({ {\mathcal E} }_{T}\)) Provided that the error bound on the local sample means is never exceeded until time \(T\), the Bayesian regret bound, when using the MATS policy \(\pi \), is of the order

$${\mathbb{E}}[R(T,\pi )|{{\mathcal{E}}}_{T}]\le \sqrt{32{\sigma }^{2}\tilde{A}\rho T\log ({\delta }^{-1})}.$$
(13)

Proof. Consider the following quantity, which upper-bounds \(\mu ({\boldsymbol{a}})\) under \({{\mathcal{E}}}_{T}\):

$${u}_{t}({\boldsymbol{a}})\mathop{=}\limits^{\Delta }\mathop{\sum }\limits_{e=1}^{\rho }\left({\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})+{c}_{t}^{e}({{\boldsymbol{a}}}^{e})\right).$$
(14)

Given history \({ {\mathcal H} }_{t-1}\), the statistics \({\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}^{e})\) and \({n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})\) are known, rendering \({u}_{t}(\cdot )\) a deterministic function. Therefore, the probability matching property of MATS (Eq. 6) can be applied as follows:

$${\mathbb{E}}[{u}_{t}({{\boldsymbol{a}}}_{t})|{{\mathcal{H}}}_{t-1}]={\mathbb{E}}[{u}_{t}({{\boldsymbol{a}}}_{\ast })|{{\mathcal{H}}}_{t-1}].$$
(15)

Hence, using the tower rule (T), the regret under \({{\mathcal{E}}}_{T}\) can be bounded as

$$\begin{array}{cc}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}] & \mathop{=}\limits^{({\rm{T}})}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}{\mathbb{E}}[\mu ({{\boldsymbol{a}}}_{\ast })-\mu ({{\boldsymbol{a}}}_{t})|{{\mathcal{H}}}_{t-1},{{\mathcal{E}}}_{T}]]\\ & ={\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}{\mathbb{E}}[\mu ({{\boldsymbol{a}}}_{\ast })-{u}_{t}({{\boldsymbol{a}}}_{t})|{{\mathcal{H}}}_{t-1},{{\mathcal{E}}}_{T}]+\mathop{\sum }\limits_{t=1}^{T}{\mathbb{E}}[{u}_{t}({{\boldsymbol{a}}}_{t})-\mu ({{\boldsymbol{a}}}_{t})|{{\mathcal{H}}}_{t-1},{{\mathcal{E}}}_{T}]]\\ & \mathop{=}\limits^{(15)}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}{\mathbb{E}}[\mu ({{\boldsymbol{a}}}_{\ast })-{u}_{t}({{\boldsymbol{a}}}_{\ast })|{{\mathcal{H}}}_{t-1},{{\mathcal{E}}}_{T}]+\mathop{\sum }\limits_{t=1}^{T}{\mathbb{E}}[{u}_{t}({{\boldsymbol{a}}}_{t})-\mu ({{\boldsymbol{a}}}_{t})|{{\mathcal{H}}}_{t-1},{{\mathcal{E}}}_{T}]].\end{array}$$
(16)

Note that the expression \(\mu ({{\boldsymbol{a}}}_{\ast })-{u}_{t}({{\boldsymbol{a}}}_{\ast })\) is never positive under \({{\mathcal{E}}}_{T}\), i.e.,

$$\begin{array}{cc}\mu ({{\boldsymbol{a}}}_{\ast })-{u}_{t}({{\boldsymbol{a}}}_{\ast }) & \mathop{=}\limits^{(14)}\mathop{\sum }\limits_{e=1}^{\rho }\left({\mu }^{e}({{\boldsymbol{a}}}_{\ast }^{e})-{\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}_{\ast }^{e})-{c}_{t}^{e}({{\boldsymbol{a}}}_{\ast }^{e})\right)\\ & \mathop{\le }\limits^{(7)}\mathop{\sum }\limits_{e=1}^{\rho }\left({c}_{t}^{e}({{\boldsymbol{a}}}_{\ast }^{e})-{c}_{t}^{e}({{\boldsymbol{a}}}_{\ast }^{e})\right)\\ & =0,\end{array}$$
(17)

while \({u}_{t}({{\boldsymbol{a}}}_{t})-\mu ({{\boldsymbol{a}}}_{t})\) is bounded by twice the sum of the thresholds \({c}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e})\), i.e.,

$$\begin{array}{cc}{u}_{t}({{\boldsymbol{a}}}_{t})-\mu ({{\boldsymbol{a}}}_{t}) & \mathop{=}\limits^{(14)}\mathop{\sum }\limits_{e=1}^{\rho }\left({\hat{\mu }}_{t-1}^{e}({{\boldsymbol{a}}}_{t}^{e})+{c}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e})-{\mu }^{e}({{\boldsymbol{a}}}_{t}^{e})\right)\\ & \mathop{\le }\limits^{(7)}\mathop{\sum }\limits_{e=1}^{\rho }\left({c}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e})+{c}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e})\right)\\ & =2\mathop{\sum }\limits_{e=1}^{\rho }{c}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e}).\end{array}$$
(18)

Thus, Eq. 16 can be bounded as

$$\begin{array}{cc}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}] & \le 2\mathop{\sum }\limits_{t=1}^{T}\mathop{\sum }\limits_{e=1}^{\rho }{c}_{t}^{e}({{\boldsymbol{a}}}_{t}^{e})\\ & \le 2\mathop{\sum }\limits_{t=1}^{T}\mathop{\sum }\limits_{e=1}^{\rho }\sqrt{\frac{2{\sigma }^{2}\log ({\delta }^{-1})}{{n}_{t-1}^{e}({{\boldsymbol{a}}}_{t}^{e})}}\\ & =2\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}\mathop{\sum }\limits_{t=1}^{T}{\mathcal{I}}\{{{\boldsymbol{a}}}_{t}^{e}={{\boldsymbol{a}}}^{e}\}\sqrt{\frac{2{\sigma }^{2}\log ({\delta }^{-1})}{{n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})}},\end{array}$$
(19)

where \({\mathcal{I}}\{\cdot \}\) is the indicator function. The terms in the summation are only non-zero at the time steps at which the local arm \({{\boldsymbol{a}}}^{e}\) is pulled, i.e., when \({\mathcal{I}}\{{{\boldsymbol{a}}}_{t}^{e}={{\boldsymbol{a}}}^{e}\}=1\). Additionally, note that exactly at these time steps the counter \({n}_{t}^{e}({{\boldsymbol{a}}}^{e})\) increases by 1. Therefore, the following equality holds:

$$\mathop{\sum }\limits_{t=1}^{T}{\mathcal{I}}\{{{\boldsymbol{a}}}_{t}^{e}={{\boldsymbol{a}}}^{e}\}\sqrt{{({n}_{t-1}^{e}({{\boldsymbol{a}}}^{e}))}^{-1}}=\mathop{\sum }\limits_{k=1}^{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\sqrt{{k}^{-1}}.$$
(20)

The function \(\sqrt{{k}^{-1}}\) is decreasing and integrable. Hence, using the right Riemann sum,

$$\sqrt{{k}^{-1}}\le {\int }_{k-1}^{k}\sqrt{{x}^{-1}}dx.$$
(21)
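Summing Eq. 21 over \(k\) spells out the intermediate step used in the last two lines of Eq. 22:

$$\mathop{\sum }\limits_{k=1}^{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\sqrt{{k}^{-1}}\le {\int }_{0}^{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\sqrt{{x}^{-1}}dx=2\sqrt{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}.$$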

Combining Eqs. 19–21 leads to the bound

$$\begin{array}{cc}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}] & \mathop{\le }\limits^{(19)}2\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}\mathop{\sum }\limits_{t=1}^{T}{\mathcal{I}}\{{{\boldsymbol{a}}}_{t}^{e}={{\boldsymbol{a}}}^{e}\}\sqrt{\frac{2{\sigma }^{2}\log ({\delta }^{-1})}{{n}_{t-1}^{e}({{\boldsymbol{a}}}^{e})}}\\ & \mathop{=}\limits^{(20)}\sqrt{8{\sigma }^{2}\log ({\delta }^{-1})}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}\mathop{\sum }\limits_{k=1}^{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\sqrt{{k}^{-1}}\\ & \mathop{\le }\limits^{(21)}\sqrt{8{\sigma }^{2}\log ({\delta }^{-1})}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}{\int }_{0}^{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\sqrt{{x}^{-1}}dx\\ & =\sqrt{8{\sigma }^{2}\log ({\delta }^{-1})}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}\sqrt{4{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}.\end{array}$$
(22)

We use the relationship \(||{\bf{x}}{||}_{1}\le \sqrt{n}||{\bf{x}}{||}_{2}\) between the 1- and 2-norm of a vector \({\bf{x}}\), where \(n\) is the number of elements in the vector, as follows:

$$\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}\sqrt{{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\le \sqrt{\tilde{A}}\sqrt{\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}.$$
(23)

Finally, note that the sum of all counts \({n}_{T}^{e}({{\boldsymbol{a}}}^{e})\) is equal to the total number of local pulls done by MATS until time \(T\), i.e.,

$$\begin{array}{c}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}{n}_{T}^{e}({{\boldsymbol{a}}}^{e})=\rho T.\end{array}$$
(24)

Using Eqs. 22–24, the complete regret bound under \({{\mathcal{E}}}_{T}\) is given by

$$\begin{array}{cc}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}] & \mathop{\le }\limits^{(22)}\sqrt{8{\sigma }^{2}\log ({\delta }^{-1})}\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}\sqrt{4{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\\ & \mathop{\le }\limits^{(23)}\sqrt{32{\sigma }^{2}\log ({\delta }^{-1})}\sqrt{\tilde{A}}\sqrt{\mathop{\sum }\limits_{e=1}^{\rho }\sum _{{{\boldsymbol{a}}}^{e}\in {{\mathcal{A}}}^{e}}{n}_{T}^{e}({{\boldsymbol{a}}}^{e})}\\ & \mathop{=}\limits^{(24)}\sqrt{32{\sigma }^{2}\tilde{A}\rho T\log ({\delta }^{-1})}.\end{array}$$
(25)

Theorem 1. Let \(\langle {\mathcal{D}},{\mathcal{A}},f\rangle \) be a MAMAB. If Assumptions 1 and 2 hold, then the MATS policy \(\pi \) satisfies a Bayesian regret bound of

$$\begin{array}{cc}{\mathbb{E}}[R(T,\pi )] & \le \sqrt{64{\sigma }^{2}\tilde{A}\rho T\log (\tilde{A}T)}+\frac{2}{\tilde{A}}\\ & \in O(\sqrt{{\sigma }^{2}\tilde{A}\rho T\log (\tilde{A}T)}).\end{array}$$
(26)

Proof. Using the law of excluded middle (M) and the fact that \(\Delta ({{\boldsymbol{a}}}_{t})\) and \(P({{\mathcal{E}}}_{T}|{{\mathcal{H}}}_{t-1})\) are between 0 and 1 (B), the regret can be decomposed as

$$\begin{array}{cc}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})] & \mathop{=}\limits^{({\rm{M}})}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}]P({{\mathcal{E}}}_{T})+{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{\bar{{\mathcal{E}}}}_{T}]P({\bar{{\mathcal{E}}}}_{T})\\ & \mathop{\le }\limits^{({\rm{B}})}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}]+TP({\bar{{\mathcal{E}}}}_{T}).\end{array}$$
(27)

Then, according to Lemmas 1 and 2 (L), we have

$$\begin{array}{cc}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})] & \mathop{\le }\limits^{(27)}{\mathbb{E}}[\mathop{\sum }\limits_{t=1}^{T}\Delta ({{\boldsymbol{a}}}_{t})|{{\mathcal{E}}}_{T}]+TP({\bar{{\mathcal{E}}}}_{T})\\ & \mathop{\le }\limits^{({\rm{L}})}\sqrt{32{\sigma }^{2}\tilde{A}\rho T\log ({\delta }^{-1})}+2\tilde{A}{T}^{2}\delta .\end{array}$$
(28)

Finally, choosing \(\delta ={(\tilde{A}T)}^{-2}\), we conclude that

$$\begin{array}{cc}{\mathbb{E}}[R(T,\pi )] & \mathop{\le }\limits^{(28)}\sqrt{32{\sigma }^{2}\tilde{A}\rho T\log ({\delta }^{-1})}+2\tilde{A}{T}^{2}\delta \\ & \le \sqrt{64{\sigma }^{2}\tilde{A}\rho T\log (\tilde{A}T)}+\frac{2}{\tilde{A}}\\ & \in O(\sqrt{{\sigma }^{2}\tilde{A}\rho T\log (\tilde{A}T)}).\end{array}$$
(29)

Corollary 1. If \(|{{\mathcal{A}}}_{i}|\le k\) for all agents \(i\), and if \(|{{\mathcal{D}}}^{e}|\le d\) for all groups \({{\mathcal{D}}}^{e}\), then

$${\mathbb{E}}[R(T,\pi )]\in O(\rho \sqrt{{\sigma }^{2}{k}^{d}T\log (\rho {k}^{d}T)}).$$
(30)

Proof. \(\tilde{A}={\sum }_{e=1}^{\rho }|{{\mathcal{A}}}^{e}|={\sum }_{e=1}^{\rho }{\prod }_{i\in {{\mathcal{D}}}^{e}}|{{\mathcal{A}}}_{i}|\le \rho {k}^{d}\).

Corollary 1 tells us that the regret is sub-linear in the time horizon \(T\) and low-order polynomial in the largest action space of a single agent when the number of groups and the number of agents per group are small. This reflects the main contribution of this work. When agents are loosely coupled, the effective joint arm space is significantly reduced, and MATS provides a mechanism that efficiently deals with such settings. This is a significant improvement over the established classic regret bounds of vanilla TS, which are obtained when the MAMAB is ‘flattened’ and the factored structure is neglected24,26. The classic bounds scale exponentially with the number of agents, which renders vanilla TS infeasible in many multi-agent environments.
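As a concrete illustration with the dimensions of the Bernoulli 0101-Chain used in the Experiments section (10 agents with \(k=2\) actions each and \(\rho =9\) groups of \(d=2\) agents):

$$\tilde{A}\le \rho {k}^{d}=9\cdot {2}^{2}=36,\qquad \text{while}\qquad |{\mathcal{A}}|={2}^{10}=1024,$$

so the bound scales with the 36 local arms rather than with the 1024 joint arms.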

Experiments

We evaluate the performance of MATS on the benchmark problems proposed in the paper that introduced MAUCE14, the current state-of-the-art algorithm for multi-agent bandit problems, and on one novel setting that falls outside the domain of the theoretical guarantees of both MAUCE and MATS. First, we evaluate the performance of MATS on two benchmarks that were introduced in the MAUCE paper, i.e., Bernoulli 0101-Chain and Gem Mining. We compare against a random policy (rnd), Sparse Cooperative Q-Learning (SCQL)27 and the state-of-the-art algorithm, MAUCE14. For SCQL and MAUCE, we use the same exploration parameters as in previous work14. For MATS, we always use non-informative Jeffreys priors, which are invariant under reparametrization of the experimental settings28. Although including additional prior domain knowledge could be useful in practice, we use well-known non-informative priors in our experiments to compare fairly with the other state-of-the-art techniques. Then, we introduce a novel variant of the 0101-Chain with Poisson-distributed local rewards. A Poisson distribution is supergaussian, meaning that its tails decay more slowly towards zero than the tails of any Gaussian. Therefore, both the assumptions made in Theorem 1 and those made in the established regret bound of MAUCE are violated. Additionally, as the rewards are highly skewed, we expect that the use of symmetric exploration bounds in MAUCE will often lead to either over- or under-exploration of the local arms. We assess the performance of both methods on this benchmark.

Bernoulli 0101-chain

The Bernoulli 0101-Chain consists of \(n\) agents and \(n-1\) local reward distributions. Each agent can choose between two actions: 0 and 1. In the coordination graph, agents \(i\) and \(i+1\) are connected to a local reward \({f}^{i}({a}_{i},{a}_{i+1})\). Thus, each pair of agents should locally coordinate in order to find the best joint arm. The local rewards are drawn from a Bernoulli distribution with a different success probability per group. These success probabilities are given in Table 1. The optimal joint action is an alternating sequence of zeros and ones, starting with 0. In this work, we set the number of agents \(n\) to 10.

Table 1 Bernoulli 0101-Chain – The unscaled local reward distributions of agents \(i\) and \(i+1\), where \(i\) is even.

To ensure that the assumptions made in the regret analyses of MAUCE and MATS hold, we divide the local rewards by the number of groups, such that the global rewards are between 0 and 1.
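As an illustration, the chain can be instantiated with the MAMAB sketch from the Problem Statement; the success probabilities below are placeholders rather than the values of Table 1, chosen only so that the alternating joint arm starting with 0 remains optimal.

```python
# Illustrative Bernoulli 0101-Chain built on the MAMAB sketch above.
n = 10
groups = [(i, i + 1) for i in range(n - 1)]
# Placeholder success probabilities (NOT the values of Table 1): the alternating
# joint arm 0,1,0,... remains optimal under this choice.
placeholder = {(0, 1): 0.9, (1, 0): 0.8, (0, 0): 0.5, (1, 1): 0.4}
# Divide the local means by the number of groups so the global mean lies in [0, 1].
means = [{arm: p / len(groups) for arm, p in placeholder.items()} for _ in groups]
chain = MAMAB(n_actions=2, groups=groups, means=means)
# e.g., pass `chain` to the MATS sketch: mats(chain, horizon=10000)
```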

We provide non-informative Jeffreys priors on the unknown means to MATS, which for the Bernoulli likelihood is a Beta prior, \({\mathcal{B}}(\alpha =0.5,\beta =0.5)\)29. The results for the Bernoulli 0101-chains are shown in Fig. 1(a).

Figure 1: Cumulative normalized regret averaged over 100 runs for the (a) Bernoulli 0101-Chain, (b) Gem Mining and (c) Poisson 0101-Chain. Both the mean (line) and standard deviation (shaded area) are plotted.

Gem mining

In the Gem Mining problem, a mining company wants to excavate a set of mines for gems (i.e., local rewards). The goal is to maximize the total number of gems found over all mines. However, the company’s workers live in separate villages (i.e., agents), and only one van per village is available. Therefore, each village needs to decide to which mine it should send its workers (i.e., local action). Moreover, workers can only commute to nearby mines (i.e., coordination graph). Hence, a group can be constructed per mine, consisting of all agents that can travel toward that mine. An example of a coordination graph is given in Fig. 2.

Figure 2: Example of a coordination graph in the Gem Mining problem. The red nodes are the mines (rewards), while the blue nodes are the villages (agents).

The reward of a mine is drawn from a Bernoulli distribution, where the probability of finding a gem is \({1.03}^{w-1}p\), with \(w\) the number of workers at the mine and \(p\) a base probability that is sampled uniformly at random from the interval \([0,0.5]\) for each mine. When more workers are excavating a mine, the probability of finding a gem increases. Each village is populated by a number of workers sampled uniformly at random from \([1..5]\). The coordination graph is generated by sampling, for each village \(i\), a number of mines \({m}_{i}\) in \([2..4]\) to which it is connected. Then, each village \(i\) is connected to the mines \(i\) to \((i+{m}_{i}-1)\). The last village is always connected to 4 mines.
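A sketch of how such an instance could be generated is given below; the number of villages and the handling of villages that pick a mine they are not connected to are illustrative assumptions not specified in the text.

```python
import numpy as np

def make_gem_mining(n_villages=20, rng=None):
    """Generate an illustrative Gem Mining instance (n_villages is an assumption).

    Village i has 1-5 workers and is connected to mines i .. i+m_i-1, with m_i in
    {2, 3, 4}; the last village is connected to 4 mines.  A mine's base probability
    p is uniform on [0, 0.5], and a gem is found with probability 1.03**(w-1) * p,
    where w is the number of workers excavating that mine.
    """
    rng = rng or np.random.default_rng()
    workers = rng.integers(1, 6, size=n_villages)      # workers per village
    m = rng.integers(2, 5, size=n_villages)            # reachable mines per village
    m[-1] = 4
    n_mines = n_villages + 3                            # last village reaches mine n_villages + 2
    base_p = rng.uniform(0.0, 0.5, size=n_mines)
    # One group per mine: all villages that can send their van to that mine.
    groups = [tuple(i for i in range(n_villages) if i <= mine < i + m[i])
              for mine in range(n_mines)]

    def pull(choices):
        """choices[i] is the mine chosen by village i (assumed to be a reachable mine)."""
        rewards = []
        for mine, group in enumerate(groups):
            w = sum(workers[i] for i in group if choices[i] == mine)
            p = min(1.0, 1.03 ** (w - 1) * base_p[mine]) if w > 0 else 0.0
            rewards.append(float(rng.random() < p))
        return rewards

    return groups, pull
```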

As for the Bernoulli 0101-Chain, we provide MATS with non-informative Jeffreys priors on the unknown means, i.e., a Beta prior \({\mathcal{B}}(\alpha =0.5,\beta =0.5)\)29. The results for the Gem Mining problem are shown in Fig. 1(b).

Poisson 0101-chain

We introduce a novel benchmark with Poisson-distributed local rewards, for which the established regret bounds of MATS and MAUCE do not hold. Similar to the Bernoulli 0101-Chain, agents need to coordinate their actions in order to obtain an alternating sequence of zeros and ones. However, as the rewards are highly skewed and supergaussian, this setting is much more challenging. The means of the Poisson distributions are given in Table 2. As in the Bernoulli 0101-Chain, we divide the rewards by the number of groups and set the number of agents \(n\) to 10.

Table 2 Poisson 0101-Chain – The unscaled local reward distributions of agents \(i\) and \(i+1\). Each entry shows the mean for each local arm of agents \(i\) and \(i+1\).

For MAUCE, an exploration parameter must be chosen. This exploration parameter denotes the range of the observed rewards. As a Poisson distribution has unbounded support, we rely on percentiles of the reward distribution. Specifically, as 95% of the rewards obtained when pulling the optimal arm fall below 1, we choose 1 as the exploration parameter of MAUCE. For MATS we use non-informative Jeffreys priors on the unknown means, which for the Poisson likelihood is a Gamma prior, \({\mathcal{G}}(\alpha =0.5,\beta =0)\)29. The results are shown in Fig. 1(c).
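For reference, the corresponding posterior-sampling step is sketched below: with a \({\mathcal{G}}(\alpha =0.5,\beta =0)\) prior and a Poisson likelihood, the posterior of a local mean is a Gamma distribution whose parameters depend on the observed local rewards. How unpulled local arms are initialized under this improper prior is an implementation detail we do not take from the paper.

```python
import numpy as np

rng = np.random.default_rng()

def sample_poisson_mean(reward_sum, n_pulls):
    """Sample a local mean from the Gamma posterior induced by the Jeffreys prior
    G(alpha=0.5, beta=0) for a Poisson likelihood:
        posterior = Gamma(alpha = 0.5 + sum of observed rewards, beta = n_pulls).

    The prior is improper, so a local arm must be pulled at least once before
    sampling; how unpulled arms are handled is an implementation choice.
    """
    if n_pulls == 0:
        raise ValueError("pull the local arm at least once before sampling")
    # numpy's gamma is parameterized by shape and scale = 1 / rate.
    return rng.gamma(shape=0.5 + reward_sum, scale=1.0 / n_pulls)

# e.g., after observing local rewards [2, 0, 1] for some local arm:
print(sample_poisson_mean(reward_sum=3, n_pulls=3))
```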

Wind farm control application

We demonstrate the benefits of MATS on a state-of-the-art wind farm simulator and compare its performance to MAUCE and SCQL. A wind farm consists of a group of wind turbines installed to extract energy from wind. From the perspective of a single turbine, aligning with the incoming wind vector usually ensures the highest productivity. However, translating this control policy directly to an entire wind farm may be sub-optimal. As wind passes through the farm, downstream turbines observe a significantly lower wind speed. This is known as the wake effect, which is due to the turbulence generated behind operational turbines.

In recent work, the possibility of deflecting the wake away from downstream turbines through rotor misalignment has been investigated11. While a misaligned turbine produces less energy on its own, the group’s total productivity is increased. Physically, the wake effect diminishes over long distances, and thus turbines tend to only influence their neighbours. We can use this domain knowledge to define groups of agents and organize them in a graph structure. Note that the graph structure depends on the incoming wind vector. Nevertheless, atmospheric conditions are typically discretized when analyzing operational regimes30; thus, a graph structure can be constructed independently for each possible discretized incoming wind vector. We construct a graph structure for one possible wind vector.

We demonstrate our method on a virtual wind farm consisting of 11 turbines, whose layout is shown in Fig. 3. We use the state-of-the-art WISDEM FLORIS simulator31. Each turbine is an agent, and choosing an orientation with respect to the incoming wind vector corresponds to an action. The groups are constructed according to the graph depicted in Fig. 3. The reward is the power production per agent, which we divide uniformly over the groups that the agent is part of. The objective is to find the joint alignment of the wind farm that maximizes the total power production.

Figure 3: Wind farm layout – Dependency graph where the nodes are the turbines and the edges describe the dependencies between the turbines. The incoming wind is denoted by an arrow.

For MATS, we assume the local power productions are sampled from Gaussians with unknown mean and variance, which leads to a Student’s t-distribution on the mean when using a Jeffreys prior32. The results for the wind farm control setting are shown in Fig. 4.
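A sketch of this posterior-sampling step is given below, assuming the standard non-informative prior \(p(\mu ,{\sigma }^{2})\propto {\sigma }^{-2}\), under which the marginal posterior of the mean after \(n\ge 2\) observations is a Student’s t-distribution; requiring two initial observations per local arm is an implementation choice, not a detail taken from the paper.

```python
import numpy as np

rng = np.random.default_rng()

def sample_gaussian_mean(observations):
    """Sample a local mean for Gaussian rewards with unknown mean and variance.

    With the non-informative prior p(mu, sigma^2) proportional to 1/sigma^2, the
    marginal posterior of the mean after n >= 2 observations is
        mu | data ~ t_{n-1}(location = x_bar, scale = s / sqrt(n)),
    where s is the unbiased sample standard deviation.
    """
    x = np.asarray(observations, dtype=float)
    n = x.size
    if n < 2:
        raise ValueError("need at least two observations per local arm")
    x_bar = x.mean()
    s = x.std(ddof=1)
    return x_bar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1)

# e.g., a plausible mean power for one local arm given three noisy observations:
print(sample_gaussian_mean([1.02, 0.97, 1.10]))
```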

Figure 4: Cumulative normalized regret averaged over 10 runs for the wind farm task. Both the mean (line) and standard deviation (shaded area) are plotted.

Discussion

MATS is a Bayesian method, which means that it can leverage prior knowledge about the data distribution. This property is highly beneficial in many practical applications, e.g., influenza mitigation18,19 and wind farm control4.

Both MAUCE and MATS achieve sub-linear regret in terms of time and low-order polynomial regret in terms of the number of local arms for sparse coordination graphs. However, empirically, MATS consistently outperforms MAUCE as well as SCQL. We can see that MATS solves the Bernoulli 0101-Chain problem in only a few time steps, while MAUCE still pulls many sub-optimal actions after 10000 time steps (see Fig. 1(a)). In the more challenging Gem Mining problem, the cumulative regret of MAUCE is three times as high as that of MATS around 40000 time steps (see Fig. 1(b)). In the wind farm control task, MATS allowed for a five-fold increase of the normalized power production with respect to the state of the art (see Fig. 4).

We argue that the high performance of MATS is due to its ability to seamlessly include domain knowledge about the shape of the reward distributions and to treat the problem parameters as unknowns. To highlight the power of this property, we introduced the Poisson 0101-Chain. In this setting, the reward distributions are highly skewed, and the mean does not match the median. Since the mean falls well above the median, the sample mean of the initially observed rewards will tend to underestimate the true mean. Naturally, this bias averages out in the limit, but it may have a large impact during the early exploration stage. The high standard deviations in Fig. 1(c) are consistent with this effect. Although the established regret bounds of MATS and MAUCE do not apply for supergaussian reward distributions, we demonstrate that MATS exploits density information of the rewards to achieve more targeted exploration. In Fig. 1(c), the cumulative regret of MATS stagnates around 7500 time steps, while the cumulative regret of MAUCE continues to increase significantly. As MAUCE only supports symmetric exploration bounds, it is challenging to correctly assess the amount of exploration needed to solve the task.

Throughout the experiments, exploration constants had to be specified for MAUCE, which were challenging to choose and interpret in terms of the density of the data. In contrast, MATS uses either statistics about the data (if available) or, potentially non-informative, beliefs defined by the user. For example, in the wind farm case, the spread of the data is unknown. MATS effectively maintains a posterior on the variance and uses it to balance exploration and exploitation, while still outperforming MAUCE with a manually calibrated exploration range (see Fig. 4).

We have established an upper bound on the cumulative regret, showing that MATS eventually learns the optimal joint arm (i.e., the cumulative regret grows sub-linearly over time) and effectively exploits the sparse structure of the joint action space (i.e., the regret bound is expressed in terms of the number of local joint actions instead of the number of global joint actions). In future work, we aim to construct a lower bound for MATS, which will allow us to assess the tightness of the established upper bound.

Related work

Multi-agent reinforcement learning and planning with loose couplings has been investigated in sequential decision problems9,33,34,35. In sequential settings, the value function cannot be factorized exactly. Therefore, it is challenging to provide convergence and optimality guarantees. While for planning some theoretical guarantees can be provided35, in the learning literature the focus has been on empirical validation33. In this work, we focus on MAMABs, which are single-shot stateless problems. In such settings, the reward function is factored exactly into components that only depend on a subset of agents.

The combinatorial bandit36,37,38,39 is a variant of the multi-armed bandit, in which, rather than one-dimensional arms, an arm vector has to be pulled. In our work, the arms’ dimensionality corresponds to the number of agents in our system, and similarly to combinatorial bandits, the number of arms exponentially increases with this quantity. We consider a variant of this framework, called the semi-bandit problem40, in which local components of the global reward are observable. Chen et al.39 constructed an algorithm for this setting that assumes access to an \((\alpha ,\beta )\)-oracle, which provides a joint action that outputs a fraction \(\alpha \) of the optimal expected reward with probability \(\beta \). Instead, we assume the availability of a coordination graph, which we argue is a reasonable assumption in many multi-agent settings.

Sparse Cooperative Q-Learning (SCQL) is an algorithm that also assumes the availability of a coordination graph27. However, although strong experimental results were reported, no theoretical guarantees were provided. Later, HEIST, a UCB-like algorithm for exploration and exploitation in MAMABs, was introduced20; it uses a message-passing scheme to resolve coordination graphs, and its authors provide some theoretical guarantees on the regret for problems with acyclic coordination graphs. Multi-Agent Upper-Confidence Exploration (MAUCE)14 is a more general method that uses variable elimination to resolve (potentially cyclic) coordination graphs. MAUCE demonstrates high performance on a variety of benchmarks and provides a tight theoretical upper bound on the regret. MATS provides a Bayesian alternative to MAUCE based on Thompson sampling (TS).

Our problem definition is related to distributed constraint optimization (DCOP) problems41. In DCOP problems, multiple agents control a set of variables in a distributed manner under a set of constraints. The objective is the same as for a MAMAB, i.e., optimize the sum over group rewards. However, in DCOPs, the rewards are assumed to be known beforehand. The Distributed Coordination of Exploration and Exploitation (DCEE) framework42 extends this setting to unknown rewards, but considers the optimization of the cumulative reward achieved over a time span, rather than of a single-step reward. MAMABs, or MAB-DCOPs20, consider the optimization of a single-step expected reward over time.

In recent research on wind farm control, the impact of optimized rotor alignments on power production has been investigated extensively11. To search for the optimal alignments within the wind farm, data-driven methods are usually adopted, in which the turbines’ alignments are perturbed iteratively until they converge locally12. When optimizing the alignment of a wind turbine, considering only its neighbours can significantly boost the learning speed3. MATS is also able to leverage such neighbourhood structures. In addition, rather than randomly perturbing the alignments, MATS leverages an exploration-exploitation mechanism inspired by TS and variable elimination, which allows for a global exploration mechanism that targets the optimal alignment configuration, while retaining a small regret during the learning process itself.

Statement of reproducibility

The source code of all experiments is publicly available at: https://github.com/timo-verstraeten/mats-experiments.

Conclusions

We proposed multi-agent Thompson sampling (MATS), a novel Bayesian algorithm for multi-agent multi-armed bandits. The method exploits loose connections between agents to solve multi-agent coordination tasks efficiently. Specifically, we proved that, for \(\sigma \)-subgaussian rewards with bounded means, the expected cumulative regret grows sub-linearly in time and is low-order polynomial in the largest number of actions of a single agent when the coordination graph is sparse. Empirically, we showed a significant improvement over the state-of-the-art algorithm, MAUCE, on several synthetic benchmarks. Additionally, we showed that MATS can seamlessly be adapted to the available prior knowledge, and achieves state-of-the-art performance on the Poisson 0101-Chain, a new benchmark with supergaussian rewards. Finally, we demonstrated that MATS achieves high performance on a realistic wind farm control task, in which the rotor alignments of the wind turbines need to be jointly optimized to maximize the farm’s power production. In many practical applications, there exist sparse neighbourhood structures between agents, and we have shown that MATS is able to successfully exploit these structures, while leveraging prior knowledge about the data.