Emergence of local and global synaptic organization on cortical dendrites

Synaptic inputs on cortical dendrites are organized with remarkable subcellular precision at the micron level. This organization emerges during early postnatal development through patterned spontaneous activity and manifests both locally where nearby synapses are significantly correlated, and globally with distance to the soma. We propose a biophysically motivated synaptic plasticity model to dissect the mechanistic origins of this organization during development and elucidate synaptic clustering of different stimulus features in the adult. Our model captures local clustering of orientation in ferret and receptive field overlap in mouse visual cortex based on the receptive field diameter and the cortical magnification of visual space. Including action potential back-propagation explains branch clustering heterogeneity in the ferret and produces a global retinotopy gradient from soma to dendrite in the mouse. Therefore, by combining activity-dependent synaptic competition and species-specific receptive fields, our framework explains different aspects of synaptic organization regarding stimulus features and spatial scales.


Supplementary Figures
Supplementary Figure 1: Uncertainty quantification and sensitivity analysis of the shape of the BTDP rule and the emergence of synaptic clustering. (a) Illustration of prior distributions over the model parameters (see Table 1 and Methods) in the sensitivity analysis. The parameter is denoted below each distribution. Schematics on the left show how each parameter affects the accumulators. (b) Boxplot of changes in synaptic efficacy for different temporal offsets between pre- and postsynaptic bursts (n = 1,000 simulations with parameter distributions as in a). The blue line indicates the median, the box is drawn between the 25th and 75th percentiles, whiskers extend above and below the box to the most extreme data points that lie within 1.5 times the interquartile range of the box, and points indicate data points outside that range. Inset: the integral over the learning window.

Derivation of the generalized neurotrophin-inspired model
To obtain a generalized version of the neurotrophin model, we assume the two neurotrophin concentrations to be in steady state, i.e., their time derivatives vanish. This is justified since the corresponding time constants are very small compared to those of the remaining, slow variables of the model (the synaptic efficacies and the activity accumulators). Substituting the steady-state solutions, we can then rewrite the dynamics of the synaptic efficacy. The result has the form of a Hebbian learning rule, so we continue by making a first-order Taylor approximation around zero.
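The steady-state (timescale-separation) argument above can be checked numerically. The sketch below is illustrative and does not use the paper's actual equations: a hypothetical fast variable `p` (standing in for a neurotrophin concentration, with time constant `tau_p`) is driven by a slow variable `w` (standing in for a synaptic efficacy, with `tau_w >> tau_p`). Replacing `p` by its steady-state value at every step barely changes the trajectory of `w`.

```python
import numpy as np

# Illustrative sketch (not the paper's exact equations): a fast variable p
# relaxes toward a drive that depends on a slow variable w. Because
# tau_p << tau_w, p can be replaced by its steady state p* = drive(w).
tau_p, tau_w = 0.01, 10.0   # hypothetical time constants, tau_p << tau_w
dt, T = 0.001, 5.0
steps = int(T / dt)

def drive(w):
    return 1.0 + 0.5 * w     # hypothetical dependence of the fast variable on w

# Full system: both p and w integrated explicitly (forward Euler).
p, w = 0.0, 0.2
w_full = []
for _ in range(steps):
    p += dt / tau_p * (drive(w) - p)
    w += dt / tau_w * (p - w)          # slow dynamics driven by p
    w_full.append(w)

# Reduced system: p set to its steady state drive(w) at every step.
w = 0.2
w_qss = []
for _ in range(steps):
    w += dt / tau_w * (drive(w) - w)   # p replaced by p* = drive(w)
    w_qss.append(w)

err = max(abs(a - b) for a, b in zip(w_full, w_qss))
print(f"max |w_full - w_qss| = {err:.4f}")
```

The maximal deviation between the full and the reduced trajectory stays on the order of `tau_p / tau_w`, which is why eliminating the fast variables is justified.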

Time and ensemble average analysis
To obtain an analytic expression for the expected change in synaptic efficacy, $\langle \dot{w} \rangle$, we insert Eq. 8 and Eq. 9 into Eq. 10, assume a separation of timescales in which the synaptic efficacies change much more slowly than the pre- and postsynaptic activity, and take the ensemble average and the average over time. This yields an expression in terms of the firing rates and the raw cross-covariance at lag $\tau$.

We also consider an example where non-zero-lag correlations are present. We construct the input process by smoothing a Poisson train of events with a boxcar filter of width $\tau_{\mathrm{dur}}$, which can be expressed in terms of the Heaviside step function
$$\Theta(t) = \begin{cases} 0, & t < 0, \\ 1, & t \geq 0. \end{cases}$$
The raw cross-covariance at lag $\tau$ is then given by the triangle function, which allows us to evaluate the double integral over the learning window.
Note that here we used the fact that for a Poisson train with rate $r_1$, smoothed with a window of size $\tau_{\mathrm{dur}}$, the mean of the resulting process $x_1$ is $\langle x_1 \rangle = r_1 \tau_{\mathrm{dur}}$. For the active synapse we set the corresponding rate and correlation parameters to one and simplify the expression for the double integral.

Analytic condition for switch from depression-dominated to potentiation-dominated regime
To analytically characterize the depression- and potentiation-dominated regimes (Figure 2b,c), we consider the completely homogeneous case with equal efficacies $w_i = w$ and rates $r_i = r$ for all $i$, and equal correlations $c_{ij} = c$ for all pairs $i \neq j$. Using that the autocorrelation terms equal one, we can rewrite the expected change in efficacy in terms of the quantity $S_i = \sum_{j=1}^{N} g(x_i, x_j)$, where $g(x_i, x_j) = e^{-(x_i - x_j)^2 / (2\sigma^2)}$. Note that $S_i$ is a sum over a Gaussian function centered at the position of synapse $i$, $x_i$, evaluated at the positions of all synapses, $x_1, \ldots, x_N$. This observation led us to consider the limit where the number of synapses goes to infinity, $N \to \infty$, while the length $L$ of the dendrite stays fixed. In this limit, we can assume that all neighboring synapses are equidistant and interpret $S_i$ as a Riemann sum, where $\rho = N/L$ is the density of synapses. When taking the limit as $N$ tends to infinity, $S_i$ can be approximated by the Gaussian integral (Supplementary Figure 14); we can therefore replace $S_i$ by $\sqrt{2\pi}\,\sigma\rho$ and use the resulting expression to generate the contours in Figure 2b. Note that $\sqrt{2\pi}\,\sigma\rho$ depends on $N$ through $\rho$, which denotes the local density of synapses around synapse $i$; this is the same for all $i$ in the homogeneous case, so we let $\rho_i = \rho$ for all $i$ (see Figure 2b). This allows us to write the expected change in efficacy in compact form, where $c_4$ and $c_5$ are newly defined constants and where the STDP learning window is
$$W(\Delta t) = \begin{cases} A_+\, e^{-\Delta t/\tau_+}, & \Delta t \geq 0, \\ -A_-\, e^{\Delta t/\tau_-}, & \Delta t < 0. \end{cases}$$
Note that here $A_+$ and $A_-$ denote the learning rates at zero offset, $\Delta t = 0$, while $\tau_+$ and $\tau_-$ denote the timescales at which causal (pre-post) or acausal (post-pre) pairs affect potentiation vs. depression, respectively. The BTDP learning window can be described by an analogous function of the offset between pre- and postsynaptic bursts. Importantly, due to the temporal dynamics of retinal waves, the dependence of the cross-correlation on the receptive field overlap becomes smaller for large temporal offsets. As a consequence, while the positive (potentiating) term of Eq. 21 depends strongly on the overlap, the negative (depressing) term depends much less on it.
Thus, when the BTDP rule is parametrized appropriately, it selectively depresses synaptic inputs with poorly overlapping receptive fields (since the poor overlap leads to a small positive term in Eq. 21) and allows synaptic clustering to emerge.
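The Riemann-sum argument above can be verified numerically. The sketch below (dendrite length `L` and kernel width `sigma` are illustrative values, not the paper's fitted parameters) places $N$ equidistant synapses on a dendrite of fixed length and shows that the Gaussian sum $S_i$ approaches $\sqrt{2\pi}\,\sigma\rho$ as $N$ grows.

```python
import numpy as np

# Numerical check of the Riemann-sum limit: for N equidistant synapses on a
# dendrite of fixed length L, the sum of a Gaussian centered at synapse i,
# evaluated at all synapse positions, approaches sqrt(2*pi) * sigma * rho,
# where rho = N / L is the synapse density.
L = 100.0        # dendritic length (illustrative)
sigma = 5.0      # width of the Gaussian proximity kernel (illustrative)

for N in (50, 200, 1000):
    x = np.linspace(0.0, L, N)           # equidistant synapse positions
    i = N // 2                           # a synapse far from the dendrite ends
    S_i = np.sum(np.exp(-(x[i] - x) ** 2 / (2 * sigma ** 2)))
    rho = N / L
    approx = np.sqrt(2 * np.pi) * sigma * rho
    print(f"N={N:5d}: S_i = {S_i:8.3f}, sqrt(2*pi)*sigma*rho = {approx:8.3f}")
```

The relative error shrinks as $N$ increases, confirming that the discrete interaction sum can be replaced by its continuum limit in the homogeneous case.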

Connection between coactivity and correlation
Under certain assumptions, we can establish a link between the coactivity of pairs of synapses reported in experiments 6 (reproduced in Figure 2e and Figure 4f) and the pairwise Pearson correlation coefficient. When $s_1(t)$ and $s_2(t)$ are two binarized trains of activity taking the values 1 and 0 to indicate the presence or absence of an event, the coactivity of $s_1$ with $s_2$, $\mathrm{coac}(s_1, s_2)$, is defined as the fraction of events in train $s_1$ that co-occur with events of train $s_2$, expressed as a percentage. We can write this compactly as
$$\mathrm{coac}(s_1, s_2) = 100\% \cdot \frac{\sum_t s_1(t)\, s_2(t)}{\sum_t s_1(t)}.$$
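The two definitions can be sketched side by side. The example below (synthetic trains; the event probabilities are arbitrary choices, not experimental values) computes the coactivity of two binarized trains and their Pearson correlation coefficient; note that coactivity is asymmetric in its arguments while the Pearson coefficient is symmetric.

```python
import numpy as np

# Coactivity of binarized train s1 with s2: the fraction of s1's events that
# co-occur with events of s2, in percent; compared with the pairwise Pearson
# correlation coefficient of the two trains.
rng = np.random.default_rng(1)
n = 10_000
shared = rng.random(n) < 0.05                 # events common to both trains
s1 = (shared | (rng.random(n) < 0.03)).astype(int)
s2 = (shared | (rng.random(n) < 0.03)).astype(int)

coac = 100.0 * np.sum(s1 * s2) / np.sum(s1)   # coac(s1, s2) in percent
pearson = np.corrcoef(s1, s2)[0, 1]

print(f"coactivity = {coac:.1f}%")
print(f"Pearson r  = {pearson:.3f}")
```

Both measures grow with the proportion of shared events, which is the basis for linking the experimentally reported coactivity to the correlation coefficient used in the model.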