Deterministic networks for probabilistic computing

Neuronal network models of high-level brain functions such as memory recall and reasoning often rely on the presence of some form of noise. The majority of these models assumes that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. In vivo, synaptic background input has been suggested to serve as the main source of noise in biological neuronal networks. However, the finiteness of the number of such noise sources constitutes a challenge to this idea. Here, we show that shared-noise correlations resulting from a finite number of independent noise sources can substantially impair the performance of stochastic network models. We demonstrate that this problem is naturally overcome by replacing the ensemble of independent noise sources by a deterministic recurrent neuronal network. By virtue of inhibitory feedback, such networks can generate small residual spatial correlations in their activity which, counter to intuition, suppress the detrimental effect of shared input. We exploit this mechanism to show that a single recurrent network of a few hundred neurons can serve as a natural noise source for a large ensemble of functional networks performing probabilistic computations, each comprising thousands of units.

$$C^{\mathrm{in}}_{kl} = \sum_i w_{ki} w_{li} A_i + \sum_{i \neq j} w_{ki} w_{lj} C_{ij} = C^{\mathrm{in}}_{\mathrm{shared},kl} + C^{\mathrm{in}}_{\mathrm{corr},kl} \,.$$
We introduced the auto- and cross-covariances $A_i = \langle s_i^2 \rangle - \langle s_i \rangle^2$ and $C_{ij} = \langle s_i s_j \rangle - \langle s_i \rangle \langle s_j \rangle$ of the activities $s_i$ and $s_j$ of presynaptic neurons $i$ and $j$, respectively. If Dale's law is respected and the sign of all outgoing connections of a source is unique, i.e. $\mathrm{sign}(w_{ki}) = \mathrm{sign}(w_{li}) \; \forall k, l, i$, the first term is always positive ($C^{\mathrm{in}}_{\mathrm{shared},kl} > 0$). For a pool of independently active presynaptic neurons, $C_{ij} = 0$ by definition, and the second term in the input correlations hence vanishes ($C^{\mathrm{in}}_{\mathrm{corr},kl} = 0 \; \forall k, l$). The total input correlation is therefore always positive and determined by the number of shared sources. If the presynaptic sources are units in a recurrently connected network, their pairwise correlations are in general non-zero ($C_{ij} \neq 0$). In particular, in sparsely connected networks with sufficient inhibition, correlations arrange such that $C^{\mathrm{in}}_{\mathrm{corr},kl} \approx -C^{\mathrm{in}}_{\mathrm{shared},kl}$, leading to small remaining pairwise input correlations, $C^{\mathrm{in}}_{kl} \approx 0$ [1, 2].
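The decomposition above is straightforward to verify numerically. The following is a minimal sketch in which weakly correlated binary sources are generated from a shared Gaussian latent as a toy stand-in for a recurrent noise pool; all names and parameters (n_sources, w_k, the latent strength) are illustrative and not taken from the manuscript.

```python
# Sketch: numerically decompose the input covariance between two readout
# units k and l into the shared-input and source-correlation terms.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_steps = 200, 20_000

# Weakly correlated binary sources: a shared Gaussian latent mimics common
# drive in a recurrent pool (toy stand-in, not the paper's network model).
latent = rng.standard_normal((n_steps, n_sources)) \
    + 0.2 * rng.standard_normal((n_steps, 1))
s = (latent > 0.5).astype(float)

# Purely excitatory weights onto two sampling units k and l (Dale's law:
# all outgoing connections of a source carry the same sign).
w_k = rng.uniform(0.0, 1.0, n_sources)
w_l = rng.uniform(0.0, 1.0, n_sources)

C = np.cov(s, rowvar=False)   # pairwise covariances C_ij of the sources
A = np.diag(C)                # autocovariances A_i

C_shared = np.sum(w_k * w_l * A)     # first term:  sum_i w_ki w_li A_i
C_corr = w_k @ C @ w_l - C_shared    # second term: sum_{i!=j} w_ki w_lj C_ij
print(f"C_in_kl = {C_shared + C_corr:.3f} "
      f"(shared: {C_shared:.3f}, corr: {C_corr:.3f})")
```

With positive latent coupling the correlation term comes out positive here; the cancellation $C^{\mathrm{in}}_{\mathrm{corr},kl} \approx -C^{\mathrm{in}}_{\mathrm{shared},kl}$ described above requires the inhibition-dominated recurrent dynamics, which this toy generator does not model.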

Sampling error depends on number of noise inputs per sampling unit
To closely approximate the effect of Gaussian noise on the input field, one needs a large number K of background inputs per sampling unit. Here, we scale the number of noise sources K per sampling unit while also scaling the total number N of noise sources to keep their ratio constant. This allows us to investigate the impact of K without altering the amount of shared-input correlations. In addition to the three cases considered in the main manuscript (private, shared, network noise), we consider the case of a separate pool of noise sources for each sampling unit ("discrete"), in which shared-input correlations are absent. For small K, the input distribution is strongly discretized and does not approximate a Gaussian well, reflected in a large sampling error for both the discrete and the shared case (Fig. 1). As K increases, the sampling error decreases rapidly for the discrete case and drops to the level of Gaussian noise at about 50 inputs. For the shared case, the error also decreases with increasing K, but is bounded from below by the sampling error introduced by shared-input correlations. For the network case, the sampling error is very large for small K, as the network dynamics lock into a fixed point. For K > 130, however, the sampling error for the network case drops almost to the level of Gaussian noise.
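The discretization argument can be illustrated with a short simulation: the summed input from K independent binary sources is compared against a Gaussian via a Kolmogorov-Smirnov statistic. This is a minimal sketch of the "discrete" case with assumed parameters (source rate p, weight w, the K values); it is not the full sampling setup used for Fig. 1.

```python
# Sketch: Gaussianity of the input field as a function of the number K of
# independent binary background inputs per sampling unit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p, w, n_samples = 0.2, 1.0, 50_000   # source rate, weight, sample count

for K in (5, 20, 50, 200):
    h = w * rng.binomial(K, p, size=n_samples)   # summed input per time step
    z = (h - h.mean()) / h.std()                 # standardize
    ks, _ = stats.kstest(z, "norm")              # distance to standard normal
    print(f"K = {K:4d}: KS distance to Gaussian = {ks:.3f}")
```

The KS distance shrinks with K, mirroring the rapid decrease of the sampling error in the discrete case.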

Small, recurrent networks can supply large sampling networks with noise - no weight scaling
In Fig. 7 in the main manuscript we scaled the weights in the sampling network with the size M of the sampling network as $1/\sqrt{M}$. Ignoring the influence of cross-correlations, this scaling keeps the variance of the input distribution arising from recurrent connections in the sampling network constant. Effectively, this leads to approximately constant entropy over a large range of sampling network sizes.
If we do not scale the weights as above when increasing the size of the sampling network, the input variance increases and the relative noise strength hence decreases, leading to an effectively more strongly coupled sampling network. This strongly decreases the entropy of the sampled distribution (Fig. 2, inset). Despite the decrease in entropy, the sampling errors for the private and network cases stay approximately constant (Fig. 2). For the shared case, the sampling error initially decreases due to the strengthened effective feedback that suppresses shared-input correlations arising from the limited pool of background sources (cf. Small, recurrent networks can supply large sampling networks with noise). As the size of the sampling network increases, the sampling error increases again from about M = 40. This is most likely caused by the decrease in the relative noise strength: the sampling dynamics become too slow to approximate the target distribution within the finite sampling duration considered here.
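The effect of the two weight conventions on the input variance can be seen directly in a toy computation. The sketch below assumes uniformly distributed weights and binary unit states; it only illustrates the scaling argument, not the full sampling network.

```python
# Sketch: variance of the recurrent input grows linearly with network size M
# for fixed weights, but stays approximately constant under 1/sqrt(M) scaling.
import numpy as np

rng = np.random.default_rng(2)
for M in (10, 40, 160, 640):
    s = (rng.random((10_000, M)) < 0.5).astype(float)  # binary unit states
    w_fixed = rng.uniform(-1, 1, M)                    # unscaled weights
    w_scaled = w_fixed / np.sqrt(M)                    # 1/sqrt(M) scaling
    print(f"M = {M:4d}: var(unscaled) = {(s @ w_fixed).var():7.2f}, "
          f"var(scaled) = {(s @ w_scaled).var():.3f}")
```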

Synchronization of noise networks for small network sizes
Measurements: Binary states of $m$ units from the sampling network.
Sampling network: All-to-all connectivity; random weights drawn from a Beta distribution, $w_{ij} \sim \mathrm{Beta}(a, b)$; symmetric connections, $w_{ij} = w_{ji}$; no self-connections, $w_{ii} = 0$; translation from the binary-unit domain to spiking neurons via constant calibration factors (see Sec. in the main manuscript).
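A minimal sketch of how such a connectivity matrix could be generated; the Beta shape parameters a and b below are placeholders, as the actual values are given in the main manuscript.

```python
# Sketch: all-to-all sampling-network weights, Beta-distributed, symmetric,
# zero diagonal, as specified in the model description above.
import numpy as np

def sampling_weights(m, a, b, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.beta(a, b, size=(m, m))
    w = np.triu(w, k=1)   # keep strict upper triangle
    return w + w.T        # symmetrize (w_ij = w_ji); diagonal stays zero

W = sampling_weights(m=10, a=2.0, b=5.0)
assert np.allclose(W, W.T) and np.all(np.diag(W) == 0)
```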

D Neuron and synapse model
Type: Leaky integrate-and-fire, exponential currents.
Subthreshold dynamics ($t \notin (t^*, t^* + \tau_{\mathrm{ref}})$): $C_m \frac{\mathrm{d}}{\mathrm{d}t} V(t) = -g_L (V(t) - V_{\mathrm{rest}}) + I_{\mathrm{syn}}(t)$, with synaptic current $I_{\mathrm{syn}}(t) = \sum_i \sum_k w_i \, e^{-(t - t_i^k)/\tau_{\mathrm{syn}}} \, \Theta(t - t_i^k)$. Here the sum over $i$ runs over all presynaptic neurons and the sum over $k$ over all spike times of the respective neuron $i$; $\Theta$ denotes the Heaviside function.
Reset and refractoriness ($t \in (t^*, t^* + \tau_{\mathrm{ref}})$): $V(t) = V_{\mathrm{reset}}$.
Spiking: If $V(t^{*-}) < V_{\mathrm{th}} \wedge V(t^{*+}) \geq V_{\mathrm{th}}$: emit spike with time stamp $t^*$.

E Measurements
Spike trains recorded from $m$ neurons of the sampling network.

F External input
Per neuron, one private excitatory and one inhibitory Poisson source with rates $\nu_{\mathrm{ex}}$ and $\nu_{\mathrm{in}}$, respectively. Alternatively (for the shared-pool configurations), per neuron, $\gamma K$ excitatory and $(1-\gamma)K$ inhibitory Poisson sources with weight $J$ and rate $\nu_{\mathrm{ex}}$, and weight $-gJ$ and rate $\nu_{\mathrm{in}}$, respectively; excitatory and inhibitory inputs are randomly chosen from a common pool of $\gamma N$ and $(1-\gamma)N$ units, respectively.
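For illustration, a forward-Euler sketch of the neuron model in panel D with assumed parameter values (the actual values are listed in the parameter tables of the main manuscript); this is a toy integration, not the simulator used to produce the paper's results.

```python
# Sketch: leaky integrate-and-fire neuron with exponential synaptic currents,
# reset and refractoriness as specified in panel D, driven by Poisson input.
import numpy as np

dt, T = 0.1, 1000.0                          # time step and duration (ms)
C_m, g_L, V_rest = 250.0, 25.0, -65.0        # pF, nS, mV
V_th, V_reset, tau_ref = -50.0, -65.0, 2.0   # mV, mV, ms
tau_syn, J = 2.0, 100.0                      # PSC time constant (ms), weight (pA)

rng = np.random.default_rng(3)
steps = int(T / dt)
spikes_in = rng.random(steps) < 0.2          # ~2 kHz Poisson drive at dt = 0.1 ms

V, I_syn, ref_until, out_spikes = V_rest, 0.0, -np.inf, []
for step in range(steps):
    t = step * dt
    I_syn *= np.exp(-dt / tau_syn)           # exponential PSC decay
    if spikes_in[step]:
        I_syn += J                           # jump per incoming spike
    if t >= ref_until:                       # subthreshold dynamics outside refractoriness
        V += dt / C_m * (-g_L * (V - V_rest) + I_syn)
        if V >= V_th:                        # threshold crossing at t*
            out_spikes.append(t)
            V = V_reset                      # reset ...
            ref_until = t + tau_ref          # ... and clamp for tau_ref
print(f"{len(out_spikes)} output spikes in {T:.0f} ms")
```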