Abstract
The brain’s functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such ‘codependency’ is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic codependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory–excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of coactive synapses. In recurrent neuronal networks, codependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
Main
Synaptic plasticity is thought to be the brain’s fundamental mechanism for learning^{1,2,3}. Based on Hebb’s postulate and early experimental data, theories have focused on the idea that synapses change based solely on the activity of their presynaptic and postsynaptic counterparts^{4,5,6,7,8,9,10}, defining synaptic plasticity as predominantly a synapse-specific process. However, experimental evidence^{11,12,13,14,15,16,17,18,19,20} has pointed toward learning mechanisms that act locally at the mesoscale, taking into account the activity of multiple synapses and synapse types nearby. For example, excitatory synaptic plasticity (ESP) has long been known to rely on intersynaptic cooperativity by way of elevated calcium concentrations from multiple presynaptically active excitatory synapses^{15,16,17,18}. Interestingly, GABAergic, inhibitory synaptic plasticity (ISP) has also been shown to depend on the activation of neighboring excitatory synapses: ISP is blocked when nearby excitatory synapses are deactivated^{11,12}, and the magnitude of the changes depends on the ratio between local excitatory and inhibitory currents (E-I balance)^{11}. Moreover, the absence of inhibitory currents can either flip the direction^{13,14} or maximize ESP^{21,22,23}. The amplitude of long-term potentiation (LTP) at excitatory synapses also depends on the history of nearby excitatory LTP induction, revealing temporal and distance-dependent effects^{24}. Finally, Hebbian LTP can also trigger long-term depression (LTD) at neighboring synapses^{19} through a heterosynaptic plasticity mechanism—that is, without the need of presynaptic activation. There is currently no unifying framework to incorporate these experimentally observed interdependencies at the mesoscopic level of synaptic plasticity.
Existing models typically aim to explain, for example, how cell assemblies are formed and maintained^{9,25}. In these studies, synapse-specific plasticity rules are typically complemented with global processes, such as normalization of excitatory synapses^{25} or modulation of inhibitory synaptic plasticity by the average network activity^{9}, for stability. Moreover, intricate spatiotemporal dynamics, such as the activity patterns observed in motor cortex during reaching movements^{26}, can be reproduced only when inhibitory connections are optimized (that is, hand-tuned) by iteratively changing the eigenvalues of the connectivity matrix toward stable values^{27,28} or learned by nonlocal supervised algorithms, such as FORCE^{29,30}. However, models that rely on connectivity changes triggered by nonlocal quantities are usually based on the optimization of network dynamics^{27,28,29,30} and often do not reflect biologically relevant mechanisms (but see ref. ^{31}).
To fill the theoretical gap in mesoscopic, yet local, synaptic plasticity rules, we introduce a new model of ‘codependent’ synaptic plasticity that includes the direct interaction between different neighboring synapses. Our model accounts for a wide range of experimental data on excitatory plasticity and receptive field plasticity of excitatory and inhibitory synapses and makes predictions for future experiments involving multiple synaptic stimulation. Furthermore, it provides a mechanistic explanation for experimentally observed synaptic clustering and for how dendritic morphology can facilitate the emergence of single (clustered) or mixed (scattered) feature selectivity. Finally, we show how naive recurrent networks can grow into strongly connected, stable and input-sensitive circuits showing amplifying dynamics.
Results
We developed a general theoretical framework for synaptic plasticity rules that accounts for the interplay between different synapse types during learning. In our framework, excitatory and inhibitory synapses change according to the functions ϕ_{E}(E, I; PRE, POST) and ϕ_{I}(E, I; PRE, POST), respectively (Fig. 1a). The signature of the codependency between neighboring synapses—that is, synapses that are within each other’s realm of physical influence—is given by E and I, which describe the recent postsynaptic activation of nearby excitatory and inhibitory synapses. The activity of the synapses’ own presynaptic and postsynaptic neurons—that is, the local synapse-specific activity—is described by the variables PRE and POST. We modeled E and I as variables that integrate neighboring synaptic currents: calcium influx through N-methyl-D-aspartate (NMDA) channels for E and chloride influx through γ-aminobutyric acid type A (GABA_{A}) channels for I. The implementation of the excitatory and inhibitory plasticity rules varies slightly, as described below.
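As a minimal illustration of the framework, the codependent variables E and I can be modeled as leaky integrators (low-pass filters) of nearby NMDA and GABA_A currents. The sketch below is a toy Euler implementation; the time constants, units and update scheme are illustrative assumptions, not values taken from the Methods.

```python
# Toy sketch of the codependent trace variables: E and I low-pass-filter the
# neighboring NMDA and GABA_A currents, respectively. tau_E and tau_I are
# illustrative assumptions (in ms), not parameters from the Methods.
def update_traces(E, I, nmda_current, gabaa_current,
                  dt=0.1, tau_E=20.0, tau_I=20.0):
    """One Euler step of the neighborhood activity traces."""
    E += dt * (nmda_current - E) / tau_E
    I += dt * (gabaa_current - I) / tau_I
    return E, I

# With constant input currents, each trace relaxes toward that current's value.
E, I = 0.0, 0.0
for _ in range(20000):  # 2,000 ms >> tau, so the traces have converged
    E, I = update_traces(E, I, nmda_current=1.0, gabaa_current=0.5)
```

Plasticity rules that read E and I then automatically see a smoothed, recent history of neighborhood activity rather than instantaneous currents.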
Codependent excitatory plasticity model
The rule ϕ_{E}(E, I; PRE, POST) by which excitatory synaptic efficacies change is constructed similarly to classic spike-timing-dependent plasticity (STDP) models^{15,32}: pre-before-post spike patterns may elicit potentiation (details below), whereas post-before-pre patterns elicit depression (Fig. 1b). Synaptic changes are also modulated by ‘neighboring’ excitatory and inhibitory activity (Fig. 1a). Initially, we defined an explicit distance-dependent term so that the influence between two neighboring synapses decays with their separation (Methods). In later models, we assumed, for simplicity, that all synapses onto a dendritic compartment or postsynaptic neuron contribute equally to the variables E and I, such that all such synapses are neighbors of one another.
In addition to the STDP component, the learning rate for potentiation increases linearly with the magnitude of neighboring (including the synapse’s own) NMDA currents^{15,16,18} (Fig. 1c, green line). This destabilizing positive feedback, in which potentiation leads to bigger excitatory currents, which, in turn, lead to more potentiation, is counterbalanced by introducing a heterosynaptic term^{9} that weakens a synapse via a quadratic dependency on its neighboring (including the synapse’s own) NMDA currents (Fig. 1c, orange line). This term is based on experimentally observed heterosynaptic weakening of excitatory synapses neighboring other synapses undergoing LTP^{19}. Together, potentiation and heterosynaptic weakening form a fixed point in the dynamics of synaptic weights. As a result, weak to intermediate excitatory currents elicit strengthening, whereas strong currents induce weakening (Fig. 1c, gray line). In addition to neighboring excitatory–excitatory effects, we constructed the model such that elevated inhibition blocks excitatory plasticity: only when synapses are disinhibited can excitatory plasticity change their efficacies (Fig. 1d). Inhibition thus directly modulates excitatory plasticity in our model, complementing the indirect influence that inhibition exerts on excitatory plasticity via the postsynaptic neuron’s membrane potential and spike times. This direct control of inhibition over excitatory plasticity allows for rapid, one-shot-like learning^{33} during periods of disinhibition^{34} on behavioral timescales—that is, when multiple presynaptic excitatory spikes coincidentally activate a postsynaptic neuron, because the effective learning rate can vary widely (through rapid intermittent disinhibition) without compromising the stability of the network.
At all other times—when inhibition is strong enough to effectively block excitatory plasticity—excitatory weights cannot drift due to ongoing presynaptic and postsynaptic activity.
Changes in a given excitatory synapse, w_{E}, denoted by Δw_{E}, are expressed in a simplified way as:
where A_{LTP}, A_{het} and A_{LTD} are the (strictly positive) learning rates for the LTP, heterosynaptic and LTD plasticity terms, respectively (see Methods for the detailed implementation). The terms \(PRE_{\mathrm{LTP}}\), \(POST_{\mathrm{het}}\) and \(POST_{\mathrm{LTD}}\) represent the filtered spike trains (that is, firing rate estimates) of the presynaptic and postsynaptic neurons. Spike times of the presynaptic and postsynaptic neurons are represented by \(PRE_{\mathrm{spike}}\) and \(POST_{\mathrm{spike}}\), respectively, which trigger synaptic weight changes. The parameters I* and γ define the inhibitory control over excitatory plasticity. The amplitude of excitatory-to-excitatory plasticity is maximal when inhibition is blocked, decreasing monotonically with the magnitude of local inhibitory currents. Interestingly, both weight-dependent STDP^{32,35} and triplet learning rules^{5} can be recovered from equation (1) under certain approximations and simplifications (see the Supplementary Modeling Note for details).
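The structure of the simplified excitatory rule can be sketched in code. This is a hedged toy version, not the Methods implementation: the multiplicative inhibitory gate exp(−(I/I*)^γ) is an assumed monotonically decreasing form consistent with the description above (maximal when inhibition is blocked), and all parameter values are illustrative.

```python
import math

# Toy sketch of the simplified codependent excitatory update. The inhibitory
# gate g(I) = exp(-(I / I_star) ** gamma) is an ASSUMED monotonically
# decreasing form standing in for the exact Methods expression; learning
# rates are illustrative.
def dw_E(E, I, pre_ltp, post_spike, post_het, post_ltd, pre_spike,
         A_LTP=1.0, A_het=0.5, A_LTD=0.2, I_star=1.0, gamma=2.0):
    gate = math.exp(-((I / I_star) ** gamma))     # inhibition blocks plasticity
    ltp = A_LTP * E * pre_ltp * post_spike        # LTP: linear in E
    het = A_het * E ** 2 * post_het * post_spike  # heterosynaptic: quadratic in E
    ltd = A_LTD * post_ltd * pre_spike            # spike-timing LTD
    return gate * (ltp - het - ltd)

# With the PRE/POST factors held at 1 and no LTD events, LTP and heterosynaptic
# weakening cancel at the NMDA-current setpoint E* = A_LTP / A_het:
assert dw_E(2.0, 0.0, 1, 1, 1, 1, 0) == 0.0   # at setpoint: no change
assert dw_E(1.0, 0.0, 1, 1, 1, 1, 0) > 0.0    # below setpoint: potentiation
assert dw_E(3.0, 0.0, 1, 1, 1, 1, 0) < 0.0    # above setpoint: weakening
```

The linear-versus-quadratic dependence on E is what creates the stable fixed point described in Fig. 1c; the gate merely scales how fast the synapse moves toward it.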
Codependent inhibitory plasticity model
Inhibitory synapses change according to a function ϕ_{I}(E, I; PRE, POST) that follows a symmetric STDP curve^{3,11,36} (Fig. 1e)—synaptic changes are scaled according to the temporal proximity of presynaptic and postsynaptic spikes. Similar to excitatory plasticity, the learning rate of inhibitory plasticity is modulated by neighboring excitatory and inhibitory activity (Fig. 1f,g). In this case, when E and I (that is, NMDA and GABAergic currents) are equal (E = I), or when NMDA currents vanish (E = 0), there is no change in the efficacy of inhibitory synapses: they remain constant. LTP is induced when excitatory currents are stronger than inhibitory ones, and vice versa for LTD. As a consequence, spike times and neighboring synaptic currents act together but at different timescales. These codependent components of ISP are based on the abolition of either LTP^{12} or both LTP and LTD^{11} when postsynaptic NMDA currents are blocked, as well as on evidence that the amplitude of changes increases with larger E-I ratios^{11}.
Changes in a given inhibitory synapse, w_{I}, denoted by Δw_{I}, are expressed in a simplified way as:
where A_{ISP} is the (strictly positive) learning rate for the codependent inhibitory synaptic plasticity rule, and α is the E-I balance setpoint imposed by the learning rule, such that E / I = α (see Methods for the detailed implementation). The terms \(PRE_{\mathrm{inh}}\) and \(POST_{\mathrm{inh}}\) represent the filtered spike trains (that is, firing rate estimates) of the presynaptic and postsynaptic neurons. Spike times of the presynaptic and postsynaptic neurons are represented by \(PRE_{\mathrm{spike}}\) and \(POST_{\mathrm{spike}}\), respectively, which trigger synaptic weight changes. Applying specific simplifications to equation (2), we can recover a previously proposed spike-based learning rule^{7}, similarly to the above case for excitatory synapses (see the Supplementary Modeling Note for details).
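A similarly hedged sketch of the inhibitory rule: the codependent factor E · (E − αI) below is one simple form consistent with the stated conditions (no change when NMDA currents vanish or when currents sit at the E / I = α setpoint); the exact expression is in the Methods, and α and the trace handling here are illustrative.

```python
# Toy sketch of the simplified codependent inhibitory update. The factor
# E * (E - alpha * I) is an ASSUMED form satisfying the stated conditions:
# no change at E = 0 or at the balance setpoint E = alpha * I, inhibitory LTP
# for E > alpha * I and inhibitory LTD for 0 < E < alpha * I (with alpha = 1,
# "no change at E = I" is recovered). The symmetric STDP part is reduced to a
# sum of pre/post trace-spike interactions.
def dw_I(E, I, pre_inh, post_spike, post_inh, pre_spike,
         A_ISP=1.0, alpha=1.25):
    timing = pre_inh * post_spike + post_inh * pre_spike  # symmetric STDP
    return A_ISP * E * (E - alpha * I) * timing

assert dw_I(0.0, 1.0, 1, 1, 1, 1) == 0.0    # no NMDA current: no ISP
assert dw_I(1.25, 1.0, 1, 1, 1, 1) == 0.0   # at the E / I = alpha setpoint
assert dw_I(2.0, 1.0, 1, 1, 1, 1) > 0.0     # excess excitation: inhibitory LTP
assert dw_I(0.5, 1.0, 1, 1, 1, 1) < 0.0     # excess inhibition: inhibitory LTD
```

Note that the sign of the update is set by the slow current variables while the magnitude is set by the fast spike-timing terms, matching the separation of timescales described above.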
Stability of excitatory currents
We implemented the above rules in a single leaky integrate-and-fire (LIF) neuron with plastic excitatory synapses that emulate α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and NMDA receptors as well as inhibitory (GABA_{A}) synapses (Methods). We initially assessed the properties of codependent excitatory plasticity with regard to previous experimental^{15,16,24} and modeling studies^{5,6,8,37,38}, as described below.
First, we considered two otherwise isolated excitatory neurons, so that there was no influence of other presynaptic partners over synaptic changes aside from the synapse that we investigated (Fig. 2a). We found that our model—in agreement with previous models^{6,38,39}—could capture the influence of membrane potential depolarization due to strong initial excitatory weights, current clamp or backpropagating action potentials (Supplementary Fig. 1) on synaptic efficacy changes. As a result, an LTD-inducing pre-before-post spike protocol became LTP-inducing when accompanied by large postsynaptic depolarization^{15,16} (Fig. 2b). In our model, the switch from LTD to LTP was due to an increase in the magnitude of the presynaptic excitatory current through NMDA channels for depolarized states, eliciting stronger LTP (Fig. 1c and Extended Data Fig. 1a).
Similarly, the interaction of presynaptic and postsynaptic spikes could also account for efficacy changes based on the frequency of spike pair presentations (Fig. 2c). Notably, in our model, high-frequency presentation of presynaptic and postsynaptic spike pairs elicited increased LTP (Fig. 2c) due to a direct elevation in NMDA currents (Extended Data Fig. 2a and Fig. 1c). Spike-based^{5,9} or voltage-based^{6} models imitate the influence of spike frequency on LTP amplitudes by reacting to an increase in the postsynaptic firing frequency and the consequent increase in spike triplets (post-pre-post; Extended Data Fig. 2b,c). Our model thus differs in the locus of its mechanism: elevated excitatory currents—that is, a presynaptically driven effect—instead of elevated postsynaptic activity.
In our model, plasticity could be affected by excitatory and inhibitory currents, altering the amplitude and direction of synaptic change (Extended Data Fig. 1a–c). To highlight this codependent effect, we simulated the classic frequency-dependent protocol^{15} with a pair of neighboring synapses (one excitatory and one inhibitory, with static weights) simultaneously activated (Fig. 2d). An increase in neighboring firing rate amplified LTP, which was induced by the synapse-specific pre-before-post spike pattern (Fig. 2e, full lines, and Fig. 2f, left). The same increase in neighboring firing rate reduced LTD, lowering the pairing frequency at which LTD becomes LTP for synapse-specific post-before-pre spike patterns (Fig. 2e, dashed lines, and Fig. 2f, right). These effects arose from elevated NMDA currents from the neighboring excitatory synapse (Extended Data Fig. 1a) and were magnified without inhibitory control (Extended Data Fig. 2e,i). In contrast, in traditional spike-based^{5,9} or voltage-based^{6} learning rules, neighboring activation does not affect plasticity as long as it does not influence presynaptic and postsynaptic spike patterns or the mean postsynaptic membrane potential^{37} (Extended Data Fig. 2d–k)—that is, as long as excitatory and inhibitory currents remain balanced (Supplementary Fig. 2).
To further investigate the distance-dependent and temporal effects of multiple presynaptic activations, we simulated a single postsynaptic neuron connected with two presynaptic excitatory synapses separated by a defined electrotonic distance (Fig. 2g). Similar to experiments in mouse cortical slices^{24}, the activation of a single synapse, when followed by a three-spike burst of the postsynaptic neuron with a time lag Δt, induced an STDP-like change in efficacy (Fig. 2h). Inducing ‘strong’ LTP with the same protocol at a time lag of Δt = 5 ms (black arrowhead in Fig. 2h), followed shortly after by ‘weak’ LTP at a neighboring synapse with a time lag of Δt = 35 ms (purple arrowhead in Fig. 2h), reproduced the experimentally reported temporal (Fig. 2i) and spatial (Fig. 2j) dependencies of excitatory synaptic plasticity^{24} in our model.
We extended the above protocol and simulated a single postsynaptic neuron receiving homogeneous Poisson excitatory and inhibitory spike trains from synapses with spatial organization (Fig. 3a,b and Methods). For simplicity, we modeled excitatory synapses as equally spaced along a single-compartment neuron, with a unitary distance between immediate neighbors (Fig. 3b, top). The influence of a given synapse onto another was implemented according to their assumed electrotonic distance as a normalized current following a Gaussian-shaped decay with standard deviation σ (Fig. 3b); σ thus characterized the topology of spatial interactions. This means that the maximum influence on a synapse was its own NMDA current influx (center of the Gaussian). Other synapses also contributed to the efficacy change, with the amplitude of their effect normalized by the length of interactions, σ, and the number of neighboring synapses (Fig. 3b, bottom, and Methods). After the system reached equilibrium, we found that the mean excitatory current influx through NMDA channels was independent of the length constant σ (Fig. 3c), as a result of the combination of the Hebbian LTP and heterosynaptic terms, which produces a setpoint for the total NMDA currents (Methods and Fig. 1c, red circle).
However, the shape of the distribution of synaptic currents depended on σ (Fig. 3d and Extended Data Fig. 3) such that, for small σ (that is, only weak spatial coupling of synapses), synapse-specific NMDA currents and weights were proportional to the presynaptic neurons’ firing rates (Extended Data Fig. 3d,f). For larger σ (that is, when more distant synapses could affect each other), synapses with low presynaptic firing rates were deleted (Extended Data Fig. 3f), as competitive heterosynaptic plasticity disadvantaged these synapses. Although deleted synapses did not generate synapse-specific NMDA currents (Extended Data Fig. 3d), their synapse-specific codependent variable E (filtered neighboring NMDA currents) did not vanish, becoming independent of the presynaptic neuron’s firing rate and σ (Extended Data Fig. 3e). The transition to competition between synapses happened at σ = σ_{th} ≈ 0.6 (Fig. 3d and Extended Data Fig. 3c–f), that is, at 60% of the distance between two immediately neighboring synapses in our unitary distance formulation, meaning that competition sets in as soon as any two synapses can interact in a substantial way (Extended Data Fig. 3g), in line with the experimental results^{24} (Fig. 3d, σ_{fit}; Fig. 2j, green line). For simplicity, we can thus consider all synapses onto a single-compartment model to affect each other equally, until we introduce dendritic compartments further below.
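The Gaussian neighborhood just described can be sketched as follows. This is an illustrative pure-Python version: the exact normalization over σ and the number of neighbors follows the Methods, whereas here each synapse's coupling weights are simply renormalized to sum to 1.

```python
import math

# Sketch of the Gaussian-shaped spatial coupling between synapses placed at
# unit spacing: synapse j contributes to synapse i's codependent variable E
# with a weight that decays with their electrotonic distance |i - j|.
def coupling_weights(n_syn, sigma):
    rows = []
    for i in range(n_syn):
        raw = [math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))
               for j in range(n_syn)]
        total = sum(raw)
        rows.append([k / total for k in raw])  # each neighborhood sums to 1
    return rows

def neighborhood_E(nmda_currents, sigma):
    """E_i as the coupling-weighted sum of neighboring NMDA currents."""
    K = coupling_weights(len(nmda_currents), sigma)
    return [sum(k * c for k, c in zip(row, nmda_currents)) for row in K]

# For small sigma, synapses barely interact (E_i tracks the synapse's own
# current); around sigma ~ 0.6, immediate neighbors start to contribute
# substantially, which is where competition sets in above.
E = neighborhood_E([1.0, 0.0, 0.0, 0.0, 0.0], sigma=0.05)
```

The maximum entry of each row sits on the diagonal, so a synapse's strongest influence is always its own NMDA influx, as stated above.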
For a fixed σ, the setpoint for the total NMDA current is determined by the learning rates of the three mechanisms involved in the learning rule: LTP, LTD and heterosynaptic plasticity (equation (1); Methods). This setpoint decreases as the learning rate of heterosynaptic plasticity increases (Fig. 3e); it is independent of the initial excitatory weights (Fig. 3f) and only slightly dependent on inhibitory input strength (Fig. 3g), owing to the latter’s effect on the postsynaptic firing rate (Extended Data Fig. 3a). Collectively, these results highlight the excitatory codependent plasticity model’s versatility in incorporating effects of spike times, voltage, distance and temporal activation of neighboring synapses in a stable manner.
EI balance and firing rate setpoint
The dynamics of traditional spike-based plasticity rules can be approximated by the firing rates of presynaptic and postsynaptic neurons^{7,9}. In these types of models, stable postsynaptic activity may be achieved if synaptic weights change toward a firing rate setpoint^{7,9}: excitatory weights increase when the postsynaptic firing rate is lower than the setpoint and decrease otherwise^{9}. In the same vein, inhibitory weights decrease for low postsynaptic firing rates (below the setpoint) and increase for high firing rates^{7,40}. When both excitatory and inhibitory synapses are plastic (Fig. 4a), the fixed points of both rules must match; otherwise, the asymmetric nature of excitatory and inhibitory plasticity with firing rate setpoints^{41} (Fig. 4b) creates a competition between synapses that causes synaptic weights to either diverge or vanish (Fig. 4c). Codependent inhibitory plasticity does not have this problem because it has no firing rate setpoint. Instead, it modifies inhibitory synapses based on an explicit setpoint for excitatory and inhibitory currents (α in equation (2)), allowing various stable activity regimes for a postsynaptic neuron while avoiding competition with excitatory plasticity and maintaining a state of balance between excitation and inhibition (Fig. 4d).
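The setpoint-mismatch argument can be illustrated with a deliberately reduced rate model. All constants, input rates and the rectified-linear transfer below are illustrative assumptions, and the excitatory rule is a generic firing-rate-setpoint rule rather than the codependent one, so that only the inhibitory rules differ between the two conditions.

```python
# Toy rate model contrasting a firing-rate-setpoint inhibitory rule (with a
# setpoint rho_I mismatched to the excitatory setpoint rho_E) against a
# current-based, codependent-style rule with an E / I = alpha setpoint.
# Everything here is an illustrative caricature, not the paper's model.
def simulate(inh_rule, rho_E=5.0, rho_I=6.0, alpha=1.2, steps=20000, dt=0.01):
    w_E, w_I = 1.0, 1.0
    x_E = x_I = 10.0                           # presynaptic firing rates
    for _ in range(steps):
        E, I = w_E * x_E, w_I * x_I            # excitatory/inhibitory currents
        r = max(E - I, 0.0)                    # rectified postsynaptic rate
        w_E += dt * (rho_E - r)                # rate-setpoint excitatory rule
        if inh_rule == "rate":
            w_I += dt * (r - rho_I)            # mismatched rate setpoint
        else:
            w_I += dt * 0.1 * (E - alpha * I)  # current-based E-I setpoint
        w_E, w_I = max(w_E, 0.0), max(w_I, 0.0)
    return w_E, w_I

# Mismatched rate setpoints: the two rules compete and inhibition vanishes.
w_E_rate, w_I_rate = simulate("rate")          # w_I_rate -> 0.0
# Current-based setpoint: both weights settle, with E / I = alpha and r = rho_E.
w_E_cur, w_I_cur = simulate("codependent")     # -> approximately (3.0, 2.5)
```

In the current-based condition the fixed point follows directly from the two constraints r = ρ_E and E = αI (here E = 30, I = 25), which is why no competition arises.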
Receptive field plasticity
Sensory neurons have been shown to respond more strongly to some features of stimuli than others, which is thought to facilitate recognition, classification and discrimination of stimuli. The shape of a neuron’s response profile—that is, its receptive field—is a result of its input connectivity^{21}. Receptive fields are susceptible to change when an animal learns^{42}, with strong evidence supporting receptive field changes as a direct consequence of synaptic plasticity^{43}.
To assess the functional consequences of codependent plasticity, we studied its performance in receptive field formation for both excitatory and inhibitory synapses jointly. We simulated a postsynaptic LIF neuron receiving inputs from eight pathways (Methods) that represent, for example, different sound frequencies^{21} (Fig. 5a). In this scenario, inhibitory activity acted as a gating mechanism for excitatory plasticity by keeping the learning rate at a minimum when inhibitory currents were high^{23} (Fig. 1d). Excitatory input weights could thus change only during periods of presynaptic disinhibition—that is, the learning window (Extended Data Fig. 4)—and were otherwise stable (Fig. 5b,c). In our simulations, we initially set all excitatory weights to the same strength. A receptive field profile emerged at excitatory synapses after a period of strong stimulation of pathways during the first learning window. The acquired excitatory receptive field profile remained stable (static) after the learning period (Fig. 5b, top). Inhibitory synapses changed on a slower timescale (Fig. 5b, bottom) and, owing to the spike timing dependence of codependent ISP, developed a field co-tuned with the excitatory receptive field (Fig. 5d, top). Inspired by experimental work^{21}, we then briefly activated a nonpreferred pathway during a period of disinhibition (Fig. 5c, top), altering the tuning of excitatory weights and making the previously nonpreferred pathway ‘preferred’ (Fig. 5d, middle). This change in tuning happened thanks to the Hebbian component of the codependent excitatory plasticity rule, which induced LTP in the active pathway, and the heterosynaptic plasticity component, which triggered LTD in pathways that were inactive during the learning window, similar to receptive field plasticity reported in mouse visual cortex in vivo^{19}. As before, inhibitory weights were reshaped by codependent ISP into a field co-tuned with the most recent excitatory receptive field (Fig. 5c, bottom), reaching a state of detailed balance in which excitatory and inhibitory weights are co-tuned based on their input preference^{3} (Fig. 5d, bottom). Plasticity of both excitatory and inhibitory inputs thus mimicked results from rat auditory cortex^{21} (Fig. 5e).
Receptive field formation followed by a reshaping of stimulus-tuned excitation and co-tuned inhibition was successful only when the learning rules were codependent (see Supplementary Fig. 3 for a comparison with spike-based and voltage-based models). Moreover, either fast inhibitory plasticity or weak inhibitory control over excitatory plasticity disrupted the formation or stability of receptive fields (Extended Data Fig. 5). When excitatory and inhibitory plasticity operated at similar timescales, inhibitory plasticity prevented excitatory weights from changing during disinhibition, because any externally induced decrease in inhibition was quickly compensated for by inhibitory plasticity (Extended Data Fig. 5a–c). With reduced inhibitory control, excitatory weights fluctuated wildly (Extended Data Fig. 5d,e). Although a preferred input signal could be momentarily established, the new preference was soon lost because baseline levels of inhibition did not block ongoing excitatory plasticity (Extended Data Fig. 5f).
Dendritic clustering with single or mixed feature selectivity
The dendritic tree of neurons is an intricate spatial structure enabling complex neuronal processing that is impossible to achieve in single-compartment neuron models^{44}. To assess how our learning rules affected the dendritic organization of synapses, we attached passive dendritic compartments to the soma of our model. Dendritic membrane potentials could be depolarized to values well above the somatic spiking threshold, depending on their proximity—that is, electrotonic distance—to the soma (Fig. 6a). These suprathreshold membrane potential fluctuations gave rise to larger NMDA and GABA_{A} current fluctuations in distal dendrites (Fig. 6b). As in the single-compartment models, when excitation and inhibition were unbalanced (that is, when receiving uncorrelated inputs), distal dendrites could undergo fast changes due to the current-induced high learning rates for excitatory plasticity (Fig. 6b, thick red line). However, when currents were balanced (that is, when receiving correlated excitatory and inhibitory inputs), larger inhibitory currents gated excitatory plasticity ‘off’ despite strong excitation (Fig. 6b, thick blue line). Additionally, the larger the distance of a dendrite from the soma and, consequently, the weaker its passive coupling^{45} (Fig. 6c), the smaller its influence on the initiation of postsynaptic spikes (Extended Data Fig. 6).
Synapses thus developed differently according to the activity of their neighboring inputs and their somatic proximity (Fig. 6d). When most excitatory inputs onto a dendritic compartment were coactive—that is, originated from the same source (for example, stimulus feature)—the coactive synapses were strengthened, creating a cluster of similarly tuned inputs onto the compartment (Fig. 6d, middle). Uncorrelated, independently active excitatory synapses weakened and eventually faded away (Fig. 6d, middle). In contrast, when more than a certain number of excitatory inputs were independent, coactive synapses decreased in weight and faded, whereas independently active excitatory synapses strengthened (Fig. 6d, right). The number of coactive excitatory synapses necessary for a dendritic compartment to develop single feature tuning varied with somatic proximity and with whether excitation and inhibition were matched (Fig. 6e,f and Extended Data Fig. 7). Notably, in the balanced state, substantially more coactive excitatory synapses were necessary to create clusters at distal than at proximal dendrites (Fig. 6e), because only large groups of coactive excitatory synapses could initiate LTP-inducing pre-before-post spike pairs (Extended Data Fig. 6). Thus, single feature or mixed selectivity emerged in our model depending on the branch architecture of the dendritic host structure (Fig. 6f). The resulting connectivity of our simulations, for initially uncorrelated (and, thus, unbalanced) excitatory and inhibitory inputs (Fig. 6f, top), reflects experimental evidence of local dendritic clusters of neighboring excitatory synapses connected onto pyramidal neurons in layer 2/3 of ferrets’ visual cortex^{46}. Moreover, our results were in line with observations in CA3 pyramidal neurons of rats, where a larger proportion of clusters of excitatory connections was found in proximal regions of apical dendrites^{47} (Fig. 6f, bottom).
Transient amplification in recurrent spiking networks
Up to this point, we explored the effects of codependent synaptic plasticity on a single postsynaptic neuron. However, recurrent neuronal circuits typically amplify instabilities of any synaptic plasticity rules at play^{9,35}. We thus investigated codependent plasticity in a recurrent network of spiking neurons with plastic excitatory-to-excitatory (E-E) and inhibitory-to-excitatory (I-E) synapses (Methods and Fig. 7a). Naive network activity was approximately asynchronous and irregular, with a unimodal membrane potential distribution (Extended Data Fig. 8). During learning, neurons began to alternate between hyperpolarized and depolarized states (Fig. 7b,c). Excitatory neurons with longer periods of depolarization developed strong E-E output synapses and weak E-E input synapses; vice versa, neurons with longer periods of hyperpolarization developed weak output synapses but strong excitatory input synapses (Fig. 7d,e). The network eventually stabilized in a high-conductance state^{48} that was driven mainly by the excitatory current setpoint set by the codependent excitatory plasticity model (Extended Data Fig. 8). The final connectivity matrix featured opposing strengths of input and output E-E connections—that is, excitatory neurons with strong E-E output synapses developed weak E-E input synapses and vice versa (Fig. 7f,g)—with I-E connections that were correlated with the E-E input weights of each neuron (Fig. 7h). Notably, this structure in the learned connectivity matrix depended on the balancing setpoint of the codependent inhibitory plasticity model (Fig. 7i and Extended Data Fig. 9a–c; α in equation (2)). For a setpoint α = E / I < 1, strong inhibitory currents effectively matched excitatory inputs, not allowing any weight asymmetry to emerge (Extended Data Fig. 9, top row). For α > 1.2, periods of network-wide high and low firing rates due to synchronized hyperpolarized and depolarized states (Extended Data Fig. 9, bottom row) led to symmetric connections.
For 1 < α < 1.2, a strong asymmetry of weights emerged (Fig. 7i and Extended Data Fig. 9, middle row) that resulted in a wide distribution of baseline firing rates in the same network (Fig. 7j,k), similar to what has been observed in cortical recordings in vivo^{49}.
To investigate the network’s response to perturbations, we delivered various stimulus patterns to the network (Methods). Before the external stimulation, network neurons were in a state of self-sustained activity, not receiving any external input. During a 1-s stimulation, used to perturb the network’s dynamics, each neuron received external excitatory spikes with a constant, pattern-specific and neuron-specific firing rate (Methods). Randomly selected stimulus patterns (uniformly distributed firing rates) resulted in relatively muted responses (Fig. 8a,b, ‘stimulus R.’), similar to the naive network responses (Extended Data Fig. 10a,b). To identify specific patterns that affected the firing rate dynamics more strongly, we calculated a hypothetical impact of each neuron on the network dynamics, defined as its baseline firing rate (in the self-sustained state) multiplied by its total output weights (according to Fig. 7g,j); this gave us a measure of how much a variation in the firing rate of a particular neuron would affect the network. To quantify observed network responses, we calculated the ℓ_{2}-norm of the firing rate deviations from baseline (that is, the square root of the summed squared deviations of individual firing rates from baseline; Methods), which weighs positive and negative deviations equally and thus reveals large transients even when increases and decreases in rate cancel out on average. The most impactful perturbation stimuli were observed in the network with asymmetric E-E connectivity (Fig. 7f–h). Here, individual neuron responses ranged from small firing rate deflections to large, transient events during or after the delivery of the stimulus that could last several seconds (Fig. 8a,b, ‘stimuli 1–4’), similar to in vivo recordings during sensory activity and movement production^{26} in mammalian systems.
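The two measures can be written compactly; the sketch below is minimal, with names and data layout chosen for illustration only.

```python
import math

# Sketch of the two readouts described above. "Hypothetical impact" multiplies
# a neuron's baseline firing rate by its summed output weights; the l2-norm of
# the rate deviations weighs increases and decreases from baseline equally.
def hypothetical_impact(baseline_rate, output_weights):
    return baseline_rate * sum(output_weights)

def response_norm(rates, baseline_rates):
    return math.sqrt(sum((r - b) ** 2 for r, b in zip(rates, baseline_rates)))

# Deviations of +2 Hz and -2 Hz cancel in the mean but not in the norm:
norm = response_norm([6.0, 2.0], [4.0, 4.0])  # sqrt(8), about 2.83
```

This is why the norm, unlike the mean rate, exposes transients in which some neurons speed up while others slow down by the same amount.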
The maximum response amplitude resulted from a stimulation pattern in which excitatory neurons with large hypothetical impact and inhibitory neurons with small hypothetical impact received strong excitatory input currents (Fig. 8a,b, ‘stimulus 1’). Other combinations (for example, shuffling 75% of the ‘stimulus 1’ pattern; Methods) generated intermediate response amplitudes (Fig. 8a,b, ‘stimuli 2–4’). Both naive networks and networks with symmetric connectivity (Fig. 7i, α = 0.9 and α = 1.4) failed to generate large deviations from baseline after stimulus offset (Extended Data Fig. 10), confirming that codependent plasticity shaped the connectivity structure to allow for transient amplification. Finally, the activity of transiently amplified population dynamics could be used to control the activity of a readout network with two output units to draw complex patterns (Fig. 8c,d).
Discussion
Here we introduce a general framework to describe synaptic plasticity as a function of synapse-specific (presynaptic and postsynaptic) interactions, including the modulatory effects of nearby synapses. We built excitatory and inhibitory plasticity rules according to experimental observations, such that the effect of neighboring synapses could gate, control and even invert the direction of efficacy changes^{11,12,13,14,15,16,17,18,24}. Notably, excitatory and inhibitory plasticity rules were constructed such that they strove toward different fixed points (constant levels of excitatory currents for excitatory plasticity and EI balance for inhibitory plasticity), thus collaborating without mutual antagonism.
In our model, inhibition plays an important role in controlling excitatory plasticity, allowing us to make several predictions. First, inhibitory plasticity must be slower than excitatory plasticity: rapid strengthening of inhibitory weights would compensate for the decreased inhibition during learning periods, effectively blocking excitatory plasticity. Second, inhibitory control over excitatory plasticity has to be relatively strong, because the mechanism that allows excitatory weights to quickly reorganize during periods of disinhibition is also responsible for the long-term stability of such modifications when inhibitory activity is at baseline. Without strong control, excitatory weights would constantly change due to presynaptic and postsynaptic activity, drifting from the learned weight pattern. Finally, our model also predicts that dendrites on which excitatory and inhibitory synaptic contacts have correlated presynaptic activity are likely to form a connectivity pattern reflecting single-feature selectivity. In this scenario, the initial connectivity pattern will determine whether a dendritic region responds to only a few or to many input features, which might, for example, give rise to linear or nonlinear integration of inputs at the soma^{44}.
In our model, neighboring excitatory influence on synaptic plasticity was driven by slow, NMDA-like excitatory currents. Consequently, the same pattern of presynaptic and postsynaptic spike times could produce distinct weight dynamics depending on the levels of postsynaptic depolarization (due to an increase in excitatory currents through NMDA channels caused by the release of the magnesium block^{50}). However, an increase in excitatory activity can lead to a rise in the amplitude of excitatory currents (thus also eliciting stronger LTP), even without depolarization of the postsynaptic neuron (when, for example, inhibition tightly balances excitation). Postsynaptic membrane potential and presynaptic spike patterns thus independently control excitatory plasticity in our model. This is in line with cooperative views on synaptic plasticity^{18} and experimental findings showing that high-frequency stimulation, which usually elicits LTP, produces LTD when NMDA ion channels are blocked^{51}. Further experimental data are necessary to disentangle the specific roles of excitatory currents and postsynaptic firing frequency in shaping excitatory synaptic plasticity and thus unveil the precise biological form of codependent plasticity.
The setpoint dynamics for excitatory currents can be interpreted as a mechanism that normalizes excitatory weights by keeping their total combined weights within a range that guarantees a certain level of excitatory currents, similarly to homeostatic regulation of excitatory bouton size in dendrites^{52}. Our rule accomplishes this homeostatic regulation through a local combination of Hebbian LTP and heterosynaptic weakening, similarly to what has been reported in dendrites of the visual cortex of mice in vivo^{19}. Our results show how such plasticity can develop a stable, balanced network that amplifies particular types of input, generating complex spatiotemporal patterns of activity. These networks developed to emulate motor-like outputs for both average and single-trial experiments^{26,53} without being specifically tuned for it. In our simulations, the phenomenon of transient amplification emerged as a result of the network acquiring a stable high-conductance state^{48} with asymmetric excitatory–excitatory connectivity. This state was established by an autonomous modification of excitatory weights toward a setpoint for excitatory currents combined with periods of hyperpolarized and depolarized membrane potential. Notably, excitation was balanced by inhibition due to the inhibitory weights self-adjusting toward a regime of precise balance.
Our set of codependent synaptic plasticity rules integrates the mathematical formulation of a number of previously proposed rules that rely on spike times^{5,7,9}, synaptic current^{8,38} with implicit voltage dependence^{6,37}, heterosynaptic weakening^{9} and neighboring synaptic activation^{31,38} in a single theoretical framework. In addition to amplifying correlated input activity by way of controlling the efficacy of a synapse, each of the mechanisms in these previous models may replicate a different facet of learning that was not fully explored with our model and may serve as a starting point for future modifications of the codependent plasticity rules that we put forward. For example, spike-based plasticity rules can maintain a set of stable firing rate setpoints^{7,9,25}. Rules based on local membrane potentials^{6}, on the other hand, are ideal for spatially extended dendritic structures, making it possible to detect localized activity and allowing a spatial redistribution of synaptic weights to improve, for example, associative memory when multiple features are learned by a neural network^{37}. Similarly, calcium-influx-related models^{8} are ideal to incorporate information about presynaptic activation, explaining the emergence of binocular matching in dendrites^{38}. Neighboring-activation models^{31} emulate neurotrophic factors that influence the emergence of clustering of synapses during development.
We unified these disparate approaches in a four-variable model that accounts for the interplay between different synapse types during learning and captures a large range of experimental observations. We focused on only two types of synapses—that is, excitatory-to-excitatory and inhibitory-to-excitatory synapses, in an abstract setting—but the simplicity of our model allows for the addition of a larger number of synaptic types, including, for example, modulatory signals present in three-factor learning rules^{54}. Faithful modeling of a broader range of influences will require additional experimental work to monitor multicell interactions by way of, for example, patterns of excitatory input with glutamate uncaging^{55} or all-optical intervention in vivo^{56,57}. Looking at synaptic plasticity from a holistic viewpoint of integrated synaptic machinery, rather than as a set of disconnected mechanisms, may provide a solid basis for understanding learning and memory.
Methods
Neuron model
Point neuron
In the simulations with a postsynaptic neuron described by a single variable (point neuron), we implemented a LIF neuron with afterhyperpolarization (AHP) current and conductance-based synapses. The postsynaptic neuron’s membrane potential, u(t), evolved according to a first-order differential equation:
where τ_{m} is the membrane time constant (τ_{m} = RC; leak resistance × membrane capacitance); u_{rest} is the resting membrane potential; g_{AHP}(t) is the conductance of the AHP channel with reversal potential E_{AHP}; I_{ext}(t) is an external current used to mimic experimental protocols to induce excitatory plasticity; and g_{X}(t) and E_{X} are the conductance and the reversal potential of the synaptic channel X, respectively, with X = {AMPA, NMDA, GABA_{A}}. Excitatory NMDA channels were implemented with a nonlinear function of the membrane potential, caused by a Mg^{2+} block, whose effect was simulated by the function:
where a_{NMDA} and b_{NMDA} are parameters^{50}. The AHP conductance was modeled as:
where τ_{AHP} is the characteristic time of the AHP channel; A_{AHP} is the amplitude of increase in conductance due to a single postsynaptic spike; and S_{post}(t) is the spike train of the postsynaptic neuron:
where \({t}_{k,{{{\rm{post}}}}}^{* }\) is the time of the kth spike of the postsynaptic neuron, and δ(⋅) is the Dirac delta. The synaptic conductance was modeled as:
where τ_{X} is the characteristic time of the neuroreceptor X. The sum on the right-hand side of equation (7) corresponds to presynaptic spike trains weighted by the synaptic strength w_{j}(t). The presynaptic spike train of neuron j was modeled as:
where \({t}_{k,\;j}^{* }\) is the time of the kth spike of neuron j. The postsynaptic neuron elicited an action potential whenever the membrane potential crossed a spiking threshold from below. We simulated two types of threshold: fixed or adaptive.

Fixed spiking threshold. A fixed spiking threshold was implemented as a parameter, u_{th}. When the postsynaptic neuron’s membrane potential crossed u_{th} from below, a spike was generated, and the postsynaptic neuron’s membrane potential was instantaneously reset to u_{reset} and then clamped at this value for the duration of the refractory period, τ_{ref}. All simulations with a single postsynaptic neuron were implemented with a fixed spiking threshold (Figs. 2–6, Extended Data Figs. 2, 3 and 5–7 and Supplementary Figs. 3 and 4), except the simulations in which the action potential was explicitly implemented (Extended Data Fig. 2c,g,k and Supplementary Figs. 2 and 3d; details in the Supplementary Modeling Note).

Adapting spiking threshold. For the simulations of the recurrent network, we used an adapting spiking threshold, u_{th}(t). When the postsynaptic neuron’s membrane potential crossed u_{th}(t) from below, a spike was generated, and the postsynaptic neuron’s membrane potential was instantaneously reset to u_{reset} without any additional clamping of the membrane potential (the refractory period that results from the adapting threshold is calculated below). Upon a spike, the adapting spiking threshold, u_{th}(t), was instantaneously set to \({u}_{{{{\rm{th}}}}}^{* }\), decaying back to its baseline according to:
$${\tau}_{{{{\rm{th}}}}}\frac{{{\rm{d}}}{u}_{{{{\rm{th}}}}}(t)}{{{\rm{d}}}t}=-{u}_{{{{\rm{th}}}}}(t)+{u}_{{{{\rm{th}}}}}^{0},$$(9)where τ_{th} is the decay time constant of the spiking threshold variable, and \({u}_{{{{\rm{th}}}}}^{0}\) is the baseline for spike generation. The maximum depolarization of the membrane potential is limited by the reversal potential of NMDA, and, thus, the absolute refractory period can be calculated as:
$${\tau }_{{{{\rm{ref}}}}}={\tau }_{{{{\rm{th}}}}}\ln \left(\frac{{u}_{{{{\rm{th}}}}}^{* }-{u}_{{{{\rm{th}}}}}^{0}}{{E}_{{{{\rm{NMDA}}}}}-{u}_{{{{\rm{th}}}}}^{0}}\right),$$(10)which is the time the adapting threshold takes to decay to the same value as the reversal potential of the NMDA channels.
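The adapting-threshold dynamics and the refractory period of equations (9) and (10) can be sketched with a simple Euler integration; all values here (time constants, voltages, the constant drive) are illustrative stand-ins, not the simulation’s parameters:

```python
import math

# Euler sketch of a LIF neuron with an adapting spiking threshold
# (illustrative parameters, not the paper's).
dt = 0.1             # timestep (ms)
tau_m = 20.0         # membrane time constant (ms)
tau_th = 5.0         # threshold decay time constant (ms)
u_rest, u_reset = -70.0, -60.0     # mV
u_th0, u_th_star = -50.0, 20.0     # baseline / post-spike threshold (mV)
E_nmda = 0.0                       # NMDA reversal potential (mV)
drive = 30.0                       # constant depolarizing drive (mV)

u, u_th = u_rest, u_th0
n_spikes = 0
for _ in range(int(500 / dt)):     # 500 ms of simulated time
    u += dt / tau_m * (u_rest - u + drive)
    u_th += dt / tau_th * (u_th0 - u_th)   # eq. (9): decay back to baseline
    if u >= u_th:                          # crossing from below
        n_spikes += 1
        u = u_reset                        # instantaneous reset, no clamping
        u_th = u_th_star                   # threshold jumps after each spike

# Eq. (10): absolute refractory period = time for the threshold to decay
# down to the NMDA reversal potential (the maximum possible depolarization).
tau_ref = tau_th * math.log((u_th_star - u_th0) / (E_nmda - u_th0))
```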
Two-layer neuron
The two-layer neuron was simulated as a compartmental model with a spiking soma that receives input from N_{B} dendritic branches. The soma was modeled as a LIF neuron and the dendrites as leaky integrators (without generation of action potentials). The somatic membrane potential evolved according to:
The soma of the two-layer neuron was similar to the point neuron (equation (3)); however, synaptic currents were injected into the dendritic tree, which interacted with the soma passively through the last term on the right-hand side of equation (11), with J_{i} being the conductance that controls the current flow between the soma and the ith dendrite. In equation (11), u_{i}(t) is the membrane potential of dendritic branch i. When the somatic membrane potential, u_{soma}(t), crossed the threshold, u_{th}, from below, the postsynaptic neuron generated an action potential and was instantaneously reset to u_{reset}, then clamped at this value for the duration of the refractory period, τ_{ref}.
Dendritic compartments received presynaptic inputs as well as a sink current from the soma. The membrane potential of the ith branch, u_{i}(t), evolved according to the following differential equation:
Spikes were not elicited in dendritic compartments, but, due to the gating function H_{NMDA}(u) and the absence of spiking threshold, voltage plateaus occurred naturally when multiple inputs arrived simultaneously on a compartment (Fig. 6a). We simulated two compartments (N_{B} = 2) with the same coupling with the soma, J_{i}: one whose synapses changed according to the codependent synaptic plasticity model and one with fixed synapses that acted as a noise source.

Coupling strength as a function of electrotonic distance. The crucial parameter introduced when including dendritic compartments was the coupling, J_{i}, between the soma and dendritic compartment i. Steady changes in membrane potential at the soma are attenuated at dendritic compartments, and this attenuation has been shown to increase with distance. Without synaptic inputs and with steady membrane potentials at both the soma and dendritic compartments, equations (11) and (12) are equal to zero, which results in:
$${J}_{i}=\frac{{a}_{i}}{1-{a}_{i}},$$(13)where a_{i} is the passive dendritic attenuation of the dendritic compartment i,
$${a}_{i}=\frac{{\overline{u}}_{i}-{u}_{{{{\rm{rest}}}}}}{{\overline{u}}_{{{{\rm{soma}}}}}-{u}_{{{{\rm{rest}}}}}},$$(14)with \({\overline{u}}_{{{{\rm{soma}}}}}\) being a constant steady state held at the soma and \({\overline{u}}_{i}\) being the resulting steady state at the dendritic compartment i. The coupling between the soma and dendritic compartment i is a function of distance as follows:
$${J}_{i}={f}_{a}(d)=\frac{{d}_{* }^{2}}{{d}^{2}},$$(15)where d_{*} is a parameter that we fitted from experimental data from ref. ^{45} (Fig. 6c). We used this fitted parameter to approximate the distance to the soma in Fig. 6f and Extended Data Figs. 6 and 7 according to the soma–dendrite coupling strength used in our simulations.
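A minimal numerical sketch of equations (13)–(15); the voltages and the value of d_{*} are illustrative, not the fitted values:

```python
# Soma-dendrite coupling relations (eqs. 13-15) with illustrative numbers.
u_rest = -70.0                    # resting potential (mV)
u_soma_bar = -50.0                # steady state held at the soma (mV)
u_dend_bar = -60.0                # resulting dendritic steady state (mV)

# Passive attenuation (eq. 14) and the resulting coupling (eq. 13).
a_i = (u_dend_bar - u_rest) / (u_soma_bar - u_rest)
J_i = a_i / (1.0 - a_i)

# Coupling as a function of distance (eq. 15): J = d_star**2 / d**2;
# d_star is a fitted parameter in the paper, the value here is made up.
d_star = 50.0                     # um

def coupling_from_distance(d):
    return d_star**2 / d**2

def distance_from_coupling(J):
    # Inverted relation, used to map a coupling strength back to a distance.
    return d_star / J**0.5
```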
Codependent synaptic plasticity model
The codependent plasticity model is a function of both spike times and input currents. We first describe how synaptic currents are accounted for and then how the excitatory and inhibitory plasticity models were implemented. We defined a variable E_{j}(t) to represent the process triggered by excitatory currents that influences plasticity at the synapse connecting presynaptic neuron j to the postsynaptic neuron. We considered NMDA currents, which reflect influx of calcium into the postsynaptic cell, as the trigger for the biochemical processes that are represented by the state of E_{j}(t). Its dynamics are described by the weighted sum (Gaussian envelope) of the synapse-specific filtered NMDA currents, \({\widetilde{E}}_{j}(t)\),
where \({f}_{\Delta x}^{{{\;{\rm{E}}}}}(j,k)\) is the function describing the effect of synapse k on the plasticity of synapse j (based on physical distance, considering that both synapses connect onto the same postsynaptic neuron; details below). The synapse-specific filtered NMDA current dynamics are given by:
where τ_{E} is the characteristic time of the excitatory trace; u(t) is the postsynaptic membrane potential (dendritic membrane potential for the twolayer neuron model); and g_{NMDA,j}(t) is the conductance of the jth excitatory synapse connected onto the postsynaptic neuron, with dynamics given by:
Inhibitory inputs contributed to the plasticity model through a variable I(t). For the inhibitory trace, we used GABA_{A} currents, which reflect influx of chloride, as the trigger of the process described by I(t). The inhibitory trace evolved as:
where τ_{I} is the characteristic time of the inhibitory trace, and \({g}_{{{{{\rm{GABA}}}}}_{{{{\rm{A}}}}},k}(t)\) is the conductance of the kth inhibitory synapse connected onto the postsynaptic neuron (or dendritic compartment) described as:
Notice that both E_{j}(t) and I(t) are in units of voltage because the conductance is unit free in our neuron model implementation (equation (3)).
Influence of distance between synapses
To incorporate the distance-dependent influence of the activation of a synapse’s neighbors on excitatory plasticity, we implemented the function \({f}_{\Delta x}^{{{\;{\rm{E}}}}}(i,j)\) in equation (16). For simplicity, we considered that the amplitude of the distance-dependent influence decays as a Gaussian-shaped function of the distance between synapses:
where N_{E} is the number of excitatory synapses; i is the index of the synapse undergoing plasticity; and j is the index of its neighboring synapse, including j = i, so that the strongest effect is the influx of excitatory current at the synapse undergoing plasticity itself. In equation (21), the term Δx(i, j) is the electrotonic distance between synapses j and i, and the parameter σ is the characteristic distance (that is, standard deviation) of the contribution of excitatory synapses to the variable E_{j}(t). The term inside curly brackets on the right-hand side of equation (21) is a normalizing constant.
The sum of the codependent variables E_{j}(t) for a postsynaptic neuron based on the synapse-specific filtered NMDA currents, \({\widetilde{E}}_{j}(t)\), can be written as:
With the normalization used in equation (21), the average of the variable E_{j}(t) is approximately equal to the total synapse-specific filtered NMDA current, \({\widetilde{E}}_{j}(t)\) (equation (16)), which is independent of σ for a large number of synapses (N_{E} ≫ 1). Notably, for very large σ values (σ ≫ N_{E}), all synapses influence each other’s plasticity equally, so that the implementation can be simplified as:
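The kernel of equation (21) and its large-σ limit (equation (23)) can be sketched as follows, assuming the normalizing constant is chosen so that each synapse’s kernel weights sum to N_{E} (one reading of the normalization described above); synapse positions and currents are illustrative:

```python
import numpy as np

N_E = 100
sigma = 5.0                                   # characteristic distance (um)
x = np.linspace(0.0, 99.0, N_E)               # illustrative synapse positions
dx = np.abs(x[:, None] - x[None, :])          # pairwise distances dx(i, j)

# Gaussian-shaped kernel (eq. 21); rows normalized to sum to N_E, so the
# average of E_j roughly matches the summed filtered currents.
kernel = np.exp(-dx**2 / (2.0 * sigma**2))
kernel *= N_E / kernel.sum(axis=1, keepdims=True)

E_tilde = np.random.default_rng(1).random(N_E)  # filtered NMDA currents
E = kernel @ E_tilde                            # codependent variables E_j

# Large-sigma limit (eq. 23): the kernel flattens to all ones, so every
# E_j equals the plain sum of the filtered currents.
kernel_flat = np.exp(-dx**2 / (2.0 * 1e6**2))
kernel_flat *= N_E / kernel_flat.sum(axis=1, keepdims=True)
E_flat = kernel_flat @ E_tilde
```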
Codependent excitatory synaptic plasticity
The codependent excitatory synaptic plasticity model is an STDP model regulated by excitatory and inhibitory inputs through E_{j}(t) and I(t). The weight of the jth synapse onto the postsynaptic neuron (or dendritic compartment), w_{j}(t), changed according to:
where A_{LTP}, A_{het} and A_{LTD} are the learning rates of long-term potentiation, heterosynaptic plasticity and long-term depression, respectively. The additional parameter I* defines the level of control that inhibitory activity imposes onto excitatory synapses, with the parameter γ defining the shape of the control. Variables S_{post}(t) and S_{j}(t) represent the postsynaptic and presynaptic spike trains, respectively, as described above for the neuron model (equations (6) and (8)). The trace of the presynaptic spike train is represented by \({x}_{j}^{+}(t)\), and the traces of the postsynaptic spike train (with different timescales) are represented by \({y}_{{{{\rm{post}}}}}^{E}(t)\) and \({y}_{{{{\rm{post}}}}}^{-}(t)\). They evolve in time according to:
and
For values of inhibitory trace larger than a threshold, I(t) > I_{th}, we effectively blocked excitatory plasticity to mimic complete shunting of backpropagating action potentials^{58} or additional blocking mechanisms that depend on inhibition^{23}. We implemented maximum and minimum allowed values for excitatory weights, \({w}_{\max }^{{{{\rm{E}}}}}=10\) nS and \({w}_{\min }^{{{{\rm{E}}}}}=1{0}^{5}\) nS, respectively.
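The structure of the rule described above can be sketched per timestep as follows. This is only a plausible reading, not equation (24) itself: the multiplicative inhibitory gate (I*/I)^γ, the exact pairing of traces with spike events and all constants are assumptions, and E_{j}(t) and I(t) are held fixed for brevity:

```python
import numpy as np

# Per-timestep sketch of the codependent excitatory rule: Hebbian LTP via
# the presynaptic trace x+, heterosynaptic weakening proportional to the
# current weight on postsynaptic spikes, spike-timing LTD via a postsynaptic
# trace y-, all scaled by an assumed inhibitory gate and clipped to bounds.
rng = np.random.default_rng(2)
dt = 0.1e-3                                  # s
tau_x, tau_yE, tau_ym = 20e-3, 100e-3, 20e-3 # trace time constants (s)
A_ltp, A_het, A_ltd = 1e-3, 1e-4, 5e-4       # illustrative learning rates
I_star, gamma, I_th = 1.0, 1.0, 5.0          # gate parameters (illustrative)

N = 50
w = np.full(N, 1.0)                 # excitatory weights (nS)
x_plus = np.zeros(N)                # presynaptic traces
y_E = y_minus = 0.0                 # postsynaptic traces
E_j = np.full(N, 1.0)               # filtered NMDA-current variables (fixed)
I_t = 0.5                           # inhibitory variable (fixed, below I_th)

for _ in range(10_000):             # 1 s of simulated time
    pre = rng.random(N) < 5.0 * dt  # ~5 Hz Poisson presynaptic spikes
    post = rng.random() < 5.0 * dt  # ~5 Hz postsynaptic spikes
    x_plus += -dt / tau_x * x_plus + pre
    y_E += -dt / tau_yE * y_E + post
    y_minus += -dt / tau_ym * y_minus + post
    if I_t < I_th:                  # plasticity fully blocked above I_th
        gate = (I_star / I_t) ** gamma
        dw = (A_ltp * x_plus * E_j * post   # Hebbian LTP
              - A_het * w * y_E * post      # heterosynaptic weakening
              - A_ltd * y_minus * pre)      # spike-timing LTD
        w = np.clip(w + gate * dw, 1e-5, 10.0)   # weight bounds (nS)
```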
Codependent inhibitory synaptic plasticity
Similar to the excitatory learning rule, the codependent inhibitory synaptic plasticity is a function of spike times and synaptic currents. The weight of the jth inhibitory synapse onto the postsynaptic neuron (or dendritic compartment), w_{j}(t), changed over time according to a differential equation given by:
Parameters A_{ISP} and α control the learning rate and the balance of excitatory and inhibitory currents, respectively. Variables x_{j}(t) and y_{post}(t) are traces of the presynaptic and postsynaptic spike trains, respectively, that create a symmetric STDP-like curve, with dynamics given by:
and
The STDP window is characterized by the time constant τ_{iSTDP}. The variable E_{j}(t) is given by equation (23). We implemented maximum and minimum allowed values for inhibitory weights, \({w}_{\max }^{{{{\rm{I}}}}}=70\) nS and \({w}_{\min }^{{{{\rm{I}}}}}=1{0}^{5}\) nS, respectively.
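The fixed point of the inhibitory rule can be illustrated with a scalar sketch: a symmetric STDP-like kernel scaled by the current imbalance drives the inhibitory weight toward E = αI (equation (42)). The specific combination of terms, the proportionality I ∝ w_{I} and all constants are assumptions, not the paper’s equation (28):

```python
import numpy as np

# Scalar sketch: symmetric pre/post coincidence kernel times the E/I
# imbalance (E - alpha*I); the weight relaxes to the balance fixed point.
rng = np.random.default_rng(3)
dt = 0.1e-3                  # s
tau_istdp = 20e-3            # symmetric STDP window time constant (s)
A_isp, alpha = 0.05, 1.25    # illustrative learning rate and balance factor
n_steps = 200_000            # 20 s of simulated time

E_bar = 10.0                 # excitatory current variable (held fixed)
w_i = 1.0                    # inhibitory weight
x_j = y_post = 0.0           # pre/post spike traces

pre_spikes = rng.random(n_steps) < 10.0 * dt   # ~10 Hz presynaptic spikes
post_spikes = rng.random(n_steps) < 10.0 * dt  # ~10 Hz postsynaptic spikes
for pre, post in zip(pre_spikes, post_spikes):
    x_j += -dt / tau_istdp * x_j + pre
    y_post += -dt / tau_istdp * y_post + post
    I_bar = 2.0 * w_i        # assume inhibitory current proportional to w_i
    dw = A_isp * (E_bar - alpha * I_bar) * (x_j * post + y_post * pre)
    w_i = min(max(w_i + dw, 1e-5), 70.0)       # weight bounds (nS)

# Fixed point E = alpha * I  =>  I -> E/alpha = 8, i.e. w_i -> 4 here.
```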
Experimental protocols: Fig. 2b,c,e,f,h–j and Extended Data Fig. 2d–k
We fitted three datasets with the codependent excitatory synaptic plasticity model to assess its dependency on voltage, that is, membrane potential (Fig. 2b); on the frequency of presynaptic and postsynaptic spikes (Fig. 2c); and on the effect of coinduction of LTP at neighboring synapses (Fig. 2h–j).

Voltage-dependent STDP protocol. Following the original experiments^{16}, we simulated five presynaptic and five postsynaptic spikes at 50 Hz, with 10 ms between presynaptic and postsynaptic spike times (pre-before-post; Δt = +10 ms), repeated 15 times with an interval of 10 s between pairings (Fig. 2b). The more depolarized the membrane potential, the larger the effect of the NMDA currents and, therefore, the more LTP was induced. We combined three different ways to depolarize the postsynaptic neuron’s membrane potential: strength of synapse, current clamp and backpropagating action potential (see the Supplementary Modeling Note for details). Postsynaptic spike times were directly implemented in the codependent plasticity rule (that is, manually set in equation (6)); these spike times were also used to generate backpropagating action potentials (Supplementary Fig. 1; see the Supplementary Modeling Note for details). We implemented a parameter sweep on these three quantities (see the Supplementary Modeling Note for details), measuring the average depolarization during the pre-before-post interval of the simulation (200-ms interval starting at the first presynaptic spike in each burst). Because of the multiple ways to depolarize the postsynaptic membrane potential, we plotted a region (instead of a single line) in Fig. 2b indicating the possible weight changes for the same depolarization under the different depolarization methods.

Frequency-dependent STDP protocol. Following the protocol from the original experiments^{15}, we simulated 60 presynaptic and postsynaptic spikes with either a Δt = +10 ms (pre-before-post) or a Δt = −10 ms (post-before-pre) interval, with firing rates between 0.1 Hz and 50 Hz. In the simulations of the frequency-dependent protocol (Fig. 2c), postsynaptic spikes were induced by the injection of a current pulse, I_{ext}(t) = 3 nA, for a duration of 2 ms. For a smooth curve, we incremented presynaptic and postsynaptic firing rates in steps of 0.1 Hz (500 simulations per pairing in total). The increase in presynaptic firing rate caused a greater accumulation of NMDA currents, which increased LTP (Extended Data Fig. 2a). In the simulations with extra presynaptic partners (Fig. 2e,f and Extended Data Fig. 2d–k), we calculated the average synaptic change over 10 trials to account for the trial-to-trial variability due to the added external Poisson spike trains.
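The pairing protocols above can be laid out with a small helper that generates pre/post spike times at a given pairing frequency and offset (a hypothetical helper for illustration, not the paper’s code):

```python
# Hypothetical helper: n_pairs presynaptic spikes at pairing frequency
# rho_hz, each paired with a postsynaptic spike offset by delta_t_ms
# (positive: pre-before-post; negative: post-before-pre).
def pairing_times(rho_hz, delta_t_ms, n_pairs=60):
    period_ms = 1000.0 / rho_hz
    pre = [k * period_ms for k in range(n_pairs)]
    post = [t + delta_t_ms for t in pre]
    return pre, post

# 10 Hz pairing, pre-before-post with delta t = +10 ms:
pre, post = pairing_times(10.0, 10.0)
```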

Distance-dependent STDP protocol. In the simulations of the distance-dependent protocol (Fig. 2h–j), postsynaptic spikes were induced by the injection of a current pulse, I_{ext}(t) = 3 nA, for a duration of 2 ms. We simulated 60 presynaptic spikes with an interspike interval of 500 ms, each followed by three postsynaptic spikes with an interspike interval of 20 ms. For Fig. 2h, we varied the interval, Δt, between the presynaptic spike and the first postsynaptic spike in a three-spike burst. For Fig. 2i, we simulated the above protocol (pre-before-burst) with an interval Δt = 5 ms (‘strong LTP’) at a given synapse, followed by the same protocol with Δt = 35 ms (‘weak LTP’) at a neighboring synapse (Δx = 3 μm and σ = 3.16 μm in equation (21)), varying the interval between the strong and weak LTP inductions. For Fig. 2j, we simulated a protocol similar to the one in Fig. 2i, but we fixed the interval between the strong and weak LTP inductions (90 s) and varied the distance between the synapses.

Fitting. Fitting was done with a brute-force parameter sweep on four parameters for Fig. 2b,c (each fit with different values): A_{LTP}, A_{het}, A_{LTD} and τ_{E}. For Fig. 2h–j, a similar brute-force parameter sweep on five parameters was performed: A_{LTP}, A_{het}, A_{LTD}, τ_{E} and σ, with the three plots having the same set of parameters.
Stability
The codependent plasticity model has rich dynamics that involve changes in synaptic weights due to presynaptic and postsynaptic spike times as well as synaptic weights and input currents. In this section, we briefly analyze the fixed points for input currents and synaptic weights under general conditions of inputs and outputs.
Considering each synapse individually, we can write the average change in weights (from equation (24), ignoring inhibitory inputs) as:
where 〈⋅〉_{t} is the average over a time window larger than the timescales of the quantities involved. In equation (33), we took into consideration that presynaptic spike times are not influenced by postsynaptic activity, and, thus, the average of the products in the last term on the right-hand side of equation (32) is equal to the product of the averages. Additionally, we assumed no strong correlations between E_{j}(t) and S_{post}(t) due to the small fluctuations of the variable E_{j}(t). Correlations between presynaptic and postsynaptic spikes govern the LTP term and, thus, cannot be ignored; they also depend on the neuron model and the amount of inhibition a neuron (or compartment) receives. We can conclude from equation (33) that the weights from silent presynaptic neurons will vanish due to the heterosynaptic term. In our model, these weights can vanish only in moments of disinhibition, when the inhibitory control over excitatory plasticity is minimal.
For our analysis, we consider that all neurons of the network have nearly stationary firing rates without strong fluctuations. Therefore, the spike trains can be rewritten as average firing rates:
and the traces from the spike trains become:
where ν_{j} is the average firing rate of neuron j. The same is valid for the postsynaptic neuron’s firing rate as well as all other traces.
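The substitution of traces by ν_{j}τ can be checked numerically: an exponentially filtered Poisson spike train has a stationary mean equal to its rate times the trace time constant (values below are illustrative):

```python
import numpy as np

# Check that the trace x_j (dx/dt = -x/tau + S_j(t)) has mean nu * tau
# for a stationary Poisson spike train of rate nu.
rng = np.random.default_rng(4)
dt = 0.1e-3                   # s
tau = 20e-3                   # trace time constant (s)
nu = 15.0                     # presynaptic firing rate (Hz)
n_steps = 1_000_000           # 100 s of simulated time

spikes = rng.random(n_steps) < nu * dt
decay = 1.0 - dt / tau
x, total = 0.0, 0.0
for s in spikes:
    x = decay * x + s         # Euler step of the trace dynamics
    total += x
x_mean = total / n_steps      # should approach nu * tau = 0.3
```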
We consider the outcome of the excitatory plasticity rule when LTD is not present, A_{LTD} = 0, which informs us about the steady state for excitatory currents as a competition between LTP and heterosynaptic plasticity only. Moreover, we assume that the postsynaptic firing rate, ν_{post}, is proportional to the total NMDA current:
where 〈ν_{I}〉 and 〈w_{I}〉 are the population average firing rate and weight of inhibitory afferents, respectively, and ν*, E* and \({w}_{{{{\rm{I}}}}}^{* }\) are parameters that depend on the neuron model (see the Supplementary Modeling Note for details). In this case, the steady state of the system is given by:
This is also the maximum value of the excitatory currents when LTD is present, as LTD can only decrease synaptic weights. To arrive at equation (37), we set equation (33) to zero and summed over j, assuming weak correlations between presynaptic and postsynaptic spikes so that \({\langle {x}_{j}^{+}(t){E}_{j}(t){S}_{{{{\rm{post}}}}}(t)\rangle }_{t}={\langle {x}_{j}^{+}(t)\rangle }_{t}{\langle {E}_{j}(t)\rangle }_{t}{\langle {S}_{{{{\rm{post}}}}}(t)\rangle }_{t}\) (see the Supplementary Modeling Note for details). Notice that this fixed point depends on the presynaptic firing rates and the model parameters. For very low postsynaptic firing rates and weak excitatory weights, assuming two consecutive postsynaptic spikes and, thus, setting \({y}_{{{{\rm{post}}}}}^{{{{\rm{E}}}}}=1\) (rather than an average \(\langle {y}_{{{{\rm{post}}}}}^{{{{\rm{E}}}}}\rangle ={\nu }_{{{{\rm{post}}}}}{\tau }_{y{{{\rm{post}}}}}\ll 1\)), we find a threshold at which the learning rate of heterosynaptic plasticity induces vanishing of synapses:
For a recurrent network, we can assume that ν_{j} = ν_{post} and thus:
Notice that the maximum excitatory current onto a neuron embedded in a recurrent network is independent of the firing rates of the presynaptic and postsynaptic neurons.
In Fig. 3e–g, we simulated the codependent excitatory plasticity model with nonzero A_{LTP}, A_{het} and A_{LTD} but without inhibitory control. Each excitatory input was simulated with a constant presynaptic firing rate, 0 < ν_{j} < 18 Hz, uniformly distributed, while the firing rate of all presynaptic inhibitory neurons was set to 18 Hz (details below). For each corresponding value on the x axis of Fig. 3e–g, we simulated 40 trials (one point per trial is plotted). We separated these 40 trials into four combinations of the parameters σ and τ_{E} (10 trials per parameter set) to confirm the independence of the steady state from these parameters: σ = 10 and τ_{E} = 10 ms; σ = 10 and τ_{E} = 1,000 ms; σ = 1,000 and τ_{E} = 10 ms; and σ = 1,000 and τ_{E} = 1,000 ms. In Fig. 3e–g, we plotted the theory from equation (37). In Fig. 3e, we plotted the learning rate at which weights may vanish as a dashed vertical line (equation (38)). The parameters of equation (36) were fitted by varying excitatory and inhibitory weights without any plasticity (see the Supplementary Modeling Note for details). Extra postsynaptic spikes were manually added to the plasticity rule implementation (equation (6)) at 1 Hz (Poisson process) to enforce plasticity when excitatory inputs were too weak (compared to inhibitory inputs) to elicit a postsynaptic response. To test the effect of input firing rate and of LTD with weight dependency, we also simulated a similar protocol (as in Fig. 3e) with different levels of excitatory input (all presynaptic neurons with the same firing rate), LTD and inhibitory gating (Supplementary Fig. 4). These simulations show that the excitatory input levels had minimal effect on the fixed point of excitatory currents.
Applying the same idea to the codependent inhibitory synaptic plasticity model, we get the following average dynamics for the jth inhibitory weight:
where \(\overline{I}={\langle I(t)\rangle }_{t}\), and E_{j}(t) is the same for every inhibitory synapse connected onto the postsynaptic neuron (equation (23)), so that \({E}_{j}(t)={E}_{k}(t)\,\forall j,k\) and \(\overline{E}={\langle {E}_{j}(t)\rangle }_{t}\). From equation (41), we can calculate the steady state of the inhibitory learning rule, which results in a balance between excitation and inhibition set by α:
Synaptic changes for simple spike patterns and fixed excitatory and inhibitory input levels
From equations (24) and (28), we calculated changes in excitatory and inhibitory synapses for simple spike patterns (Extended Data Fig. 1). We considered fixed excitatory and inhibitory inputs and calculated the changes at a given excitatory synapse as:
where Δt_{LTP} is the interval between presynaptic and postsynaptic spikes (pre-before-post); Δt_{het} is the interval between two consecutive postsynaptic spikes; and Δt_{LTD} is the interval between postsynaptic and presynaptic spikes (post-before-pre). In a similar fashion, we calculated changes at a given inhibitory synapse as:
where Δt is the interval between presynaptic and postsynaptic spikes, being positive for pre-before-post and negative for post-before-pre spike patterns.
Inputs
Single output neuron (feedforward network)
Presynaptic spike trains for single neurons were implemented as follows. A presynaptic neuron j spiked in a given timestep of duration Δt with probability p_{j}(t) if no spike had been elicited within the preceding refractory period (\({\tau }_{ref}^{E}\) for excitatory and \({\tau }_{ref}^{I}\) for inhibitory inputs, respectively), and with probability zero otherwise. Different simulation paradigms were defined by the input statistics, which are described below.

Constant firing rate. In Figs. 2e,f, 3 and 4, Extended Data Figs. 2d–k and 3 and Supplementary Figs. 2 and 4, presynaptic neurons fired spikes with a constant probability outside the refractory period. For a constant probability p_{j}(t) = p_{j}, the mean firing rate, ν_{j}, was therefore:
$${\nu }_{j}=\frac{1}{\Delta t}\,{p}_{j}{\left(1-{p}_{j}\right)}^{{\tau }_{{{{\rm{ref}}}}}^{X}/\Delta t}.$$ (45)

In Figs. 2e,f and 3c,d, Extended Data Figs. 2d–k and 3c,d and Supplementary Figs. 2 and 4, the firing rate for external neurons is indicated in the captions and legends. In Fig. 3c,d (colored points) and Fig. 3e–g, as well as Extended Data Fig. 3, the probability of external excitatory spikes was synapse specific, uniformly distributed: 0 < p_{j} ⩽ 0.002, whereas the probability of external inhibitory spikes was p_{j} = 0.002, resulting in 0 < ν_{j} ⪅ 18.1 Hz and ν_{j} ≈ 18.1 Hz, respectively, considering a timestep Δt = 0.1 ms and refractory periods \({\tau }_{{{{\rm{ref}}}}}^{{{{\rm{E}}}}}=5\) ms and \({\tau }_{{{{\rm{ref}}}}}^{{{{\rm{I}}}}}=2.5\) ms. In Fig. 3c,d (gray points), the probability of external excitatory spikes was p_{j} = 0.001, whereas the probability of external inhibitory spikes was p_{j} = 0.002, resulting in ν_{j} ≈ 9 Hz and ν_{j} ≈ 18.1 Hz, respectively. In Fig. 4 and Supplementary Fig. 4, the probability of external excitatory and inhibitory spikes was p_{j} = 5 × 10^{−4} and p_{j} = 10^{−3} for excitatory and inhibitory afferents, resulting in ν_{j} ≈ 4.87 Hz and ν_{j} ≈ 9.75 Hz, respectively.
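As a numerical check of equation (45), the relation between spike probability and mean firing rate can be sketched as follows. This is a minimal Python illustration with the parameter values quoted in the text; the original simulations were written in Fortran and are not reproduced here.

```python
# Numerical check of equation (45): mean firing rate of a Bernoulli spike
# generator with probability p per timestep dt and refractory period tau_ref.

def mean_rate(p, dt, tau_ref):
    """Equation (45): nu = (p / dt) * (1 - p)^(tau_ref / dt).
    p: spike probability per timestep; dt, tau_ref in seconds; returns Hz."""
    return (p / dt) * (1.0 - p) ** (tau_ref / dt)

dt = 0.1e-3  # 0.1-ms timestep, as in the text
print(round(mean_rate(0.002, dt, 5e-3), 1))    # excitatory input: 18.1 Hz
print(round(mean_rate(0.001, dt, 2.5e-3), 2))  # inhibitory input: 9.75 Hz
```

Both printed values recover the rates quoted in the text for the corresponding probabilities and refractory periods.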

Variable firing rate (pathways). In Figs. 5 and 6, Extended Data Figs. 4–7 and Supplementary Fig. 3, presynaptic neurons fired spikes according to an inhomogeneous Poisson process.
For the receptive field plasticity simulations (Fig. 5, Extended Data Figs. 4 and 5 and Supplementary Fig. 3), we simulated eight input pathways. We defined a pathway as a group of 100 excitatory and 25 inhibitory afferents (spike trains of presynaptic neurons) with two components: a constant background firing rate and a fluctuating firing rate taken from an Ornstein–Uhlenbeck (OU) process as described below. The background firing rate for all 800 excitatory and 200 inhibitory afferents was given by a probability of \({p}_{j}^{{{{\rm{bg}}}}}=2\times 1{0}^{-4}\) for excitatory and \({p}_{j}^{{{{\rm{bg}}}}}=4\times 1{0}^{-4}\) for inhibitory afferents, with respective background firing rates of \({\nu }_{j}^{{{{\rm{bg}}}}}\approx 1.98\) Hz and \({\nu }_{j}^{{{{\rm{bg}}}}}\approx 3.96\) Hz, considering a timestep Δt = 0.1 ms and refractory periods of \({\tau }_{ref}^{E}=5\) ms and \({\tau }_{ref}^{I}=2.5\) ms. The fluctuating firing rate of pathway μ was created from an OU process using an auxiliary variable, y_{μ}(t), that followed stochastic dynamics given by:
$$\frac{{{\rm{d}}}{y}_{\mu }(t)}{{{\rm{d}}}t}=-\frac{{y}_{\mu }(t)}{{\tau }_{{{{\rm{OU}}}}}}+{\xi }_{\mu }(t),$$ (46)

where τ_{OU} is the time constant of the OU process, and ξ_{μ}(t) is a random variable drawn from a Gaussian distribution with zero mean and unit standard deviation. The fluctuating probability was then defined as:
$${p}_{j}^{\mu }(t)={p}^{* }{\left[{y}_{\mu }(t)\right]}_{+},$$ (47)

where p* = 0.025 is the amplitude of the fluctuations, and [⋅]_{+} is a rectifying function. The probability that a presynaptic afferent j belonging to pathway μ spikes, driven by both the background and the fluctuating firing rate, was given by:
$${p}_{j}(t)={p}_{j}^{\mu }(t)+{p}_{j}^{{{{\rm{bg}}}}}.$$ (48)

In Fig. 5 and Extended Data Fig. 5, we implemented two learning windows: first to learn the initial receptive field profile (Fig. 5b and Extended Data Fig. 5a,d; see Extended Data Fig. 4a) and later to learn the new configuration of the receptive field profile (Fig. 5c and Extended Data Fig. 5b,e; see Extended Data Fig. 4b). During both learning periods, which lasted 700 ms each, we set the firing rate of all inhibitory neurons to the (constant) background firing rate and set the excitatory pathways as follows. During the first 500 ms, all excitatory neurons spiked with the (constant) background probability. During the last 200 ms, the spike probability of all excitatory neurons in each excitatory pathway was set to α_{μ}p_{active}, with μ the pathway index, 0 ⩽ α_{μ} ⩽ 1 and p_{active} = 0.005. In the first learning period (Extended Data Fig. 4a), we used α_{6} = 0.8, α_{5} = α_{7} = 0.6, α_{4} = α_{8} = 0.4, α_{3} = 0.3, α_{2} = 0.2 and α_{1} = 0.15. In the second learning period (Extended Data Fig. 4b), we used α_{4} = 0.8, α_{3} = α_{5} = 0.6, α_{2} = α_{6} = 0.4, α_{1} = α_{7} = 0.3 and α_{8} = 0.2.
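The pathway-rate fluctuations of equations (46)–(48) can be sketched as follows. The Euler–Maruyama discretization step and the τ_{OU} value used here are illustrative assumptions; the parameters actually used in the simulations are listed in the Supplementary Tables.

```python
import math
import random

# Minimal sketch of the fluctuating spike probability of one pathway,
# following equations (46)-(48). Names (tau_ou, p_star, p_bg) follow the
# text; the discretization and tau_ou value are illustrative assumptions.

def ou_rate_trace(n_steps, dt, tau_ou, p_star, p_bg, seed=0):
    """Spike probabilities p_j(t) for one pathway from a rectified OU process."""
    rng = random.Random(seed)
    y, probs = 0.0, []
    for _ in range(n_steps):
        # equation (46): dy/dt = -y/tau_OU + xi(t), xi ~ N(0, 1)
        y += dt * (-y / tau_ou) + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # equations (47)-(48): rectified fluctuation plus background
        probs.append(p_star * max(y, 0.0) + p_bg)
    return probs

trace = ou_rate_trace(n_steps=10_000, dt=0.1, tau_ou=50.0, p_star=0.025, p_bg=2e-4)
assert all(p >= 2e-4 for p in trace)  # never drops below the background probability
```

Because the fluctuation is rectified before being added to the background term, the spike probability can rise above but never fall below the background level.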
To explore the clustering effect on dendritic compartments in Fig. 6 and Extended Data Figs. 6 and 7, we divided the input spike trains into pathways of coactive or independent presynaptic afferents. We used the same implementation as for the receptive field simulations (described above) but changed the number of afferents per group for both excitatory and inhibitory presynaptic inputs. A dendritic compartment received 32 excitatory and 16 inhibitory afferents. In Fig. 6e,f and Extended Data Figs. 6 and 7, we used two conditions: independent E & I and matching E & I. In both cases, the number of excitatory afferents following the same fluctuating firing rate was increased from 1 (0% coactive group size) to 32 (100% coactive group size), whereas the remaining excitatory afferents had independent fluctuating firing rates. For independent excitatory and inhibitory inputs (independent E & I), all 16 inhibitory afferents followed independent fluctuating firing rates. For matching excitatory and inhibitory inputs (matching E & I), eight inhibitory afferents followed the same firing rate fluctuations as the coactive excitatory group (of varying size), whereas the other eight inhibitory afferents were independent.
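The group bookkeeping for one dendritic compartment can be sketched as follows. Afferents sharing a group id follow the same fluctuating rate trace; the id scheme itself is an illustrative assumption, not the paper's implementation.

```python
# Sketch of the coactive-group assignment for one dendritic compartment
# (32 excitatory, 16 inhibitory afferents). Afferents sharing a group id
# follow the same fluctuating rate; id -1 marks the shared coactive trace.
# The id scheme is an illustrative assumption.

def assign_groups(coactive_size, matching, n_exc=32, n_inh=16):
    exc = [-1] * coactive_size + list(range(n_exc - coactive_size))
    if matching:
        # matching E & I: half the inhibitory afferents follow the
        # coactive excitatory trace, the rest are independent
        inh = [-1] * (n_inh // 2) + list(range(100, 100 + n_inh - n_inh // 2))
    else:
        # independent E & I: every inhibitory afferent has its own trace
        inh = list(range(100, 100 + n_inh))
    return exc, inh

exc, inh = assign_groups(coactive_size=8, matching=True)
assert exc.count(-1) == 8 and inh.count(-1) == 8
```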
Details of the learning period for Supplementary Fig. 3 can be found in the Supplementary Modeling Note.

Recurrent network. The simulation with the recurrent network had two parts: a learning period with both excitatory and inhibitory plasticity active and a recall period without plasticity mechanisms active.

Learning period. Throughout the learning period of T = 10 h, the network received a minimum of external input to avoid inactivity. The external presynaptic spike trains were implemented as follows. At the beginning of the simulation (first 5 min of simulated time), each excitatory neuron of the network received a spike train from one external source with constant probability p = 0.01 (timestep Δt = 0.1 ms) to mimic 100 presynaptic afferents firing at 1 Hz. We decreased the probability to p = 0.001 for another 5 min of simulated time and then set it to p = 0.0001 for the rest of the simulation.

Recall period. To elicit transient amplification, we selected specific neurons to receive external input based on the resulting weight matrix and the neurons' baseline firing rates. Before and after stimulation, no external input was applied, meaning that the network was in a state of self-sustained activity. During the stimulation period, network neurons were stimulated with presynaptic spikes at a constant firing rate, with different amplitudes for each of the five conditions (stimulus patterns) shown in Fig. 8. We ordered excitatory and inhibitory neurons according to their baseline firing rate multiplied by their total output weight onto the excitatory population (from maximum to minimum values), \({\nu }_{j}^{{{{\rm{bg}}}}}\mathop{\sum }\nolimits_{i = 1}^{{N}_{{{{\rm{E}}}}}}{w}_{ij}\), where N_{E} is the total number of excitatory neurons in the recurrent network. We assumed that the larger the baseline firing rate multiplied by the output weight, the greater the neuron's influence on the rest of the network. Ordering neurons from maximal to minimal influence, we used the following patterns of stimulation. For stimulus 1, external spike probabilities were decreased from \({p}_{j}^{{{{\rm{E}}}}}=0.5\) to \({p}_{j}^{{{{\rm{E}}}}}=0\) for excitatory neurons and increased from \({p}_{j}^{{{{\rm{I}}}}}=0\) to \({p}_{j}^{{{{\rm{I}}}}}=0.25\) for inhibitory neurons. For stimuli 2–4, 25% of excitatory and inhibitory neurons (chosen randomly from a uniform distribution) received the same external input as for stimulus 1, whereas the remaining 75% had random probabilities, \({p}_{j}^{{{{\rm{E}}}}}\in [0,0.5]\) and \({p}_{j}^{{{{\rm{I}}}}}\in [0,0.4]\), drawn from a uniform distribution.
For stimulus ‘R.’, external spike probabilities were drawn uniformly at random, \({p}_{j}^{{{{\rm{E}}}}}\in [0,0.5]\) and \({p}_{j}^{{{{\rm{I}}}}}\in [0,0.4]\) for excitatory and inhibitory neurons, respectively. Note that the pattern of stimulation that activated excitatory neurons with large, and inhibitory neurons with small, impact on the network (stimulus 1) produced the largest amplification among the stimulus patterns, whereas the random pattern of stimulation (stimulus ‘R.’) produced the smallest (Fig. 8a,b).
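The influence-based ordering described above can be sketched as follows. This is a pure-Python illustration; the list-of-lists weight layout is an assumption.

```python
# Sketch of the 'influence' ordering used to build the stimulation patterns:
# neurons ranked by baseline firing rate times total output weight onto the
# excitatory population.

def influence_order(nu_bg, w_out):
    """nu_bg[j]: baseline rate of presynaptic neuron j; w_out[i][j]: weight
    from neuron j onto excitatory neuron i. Returns neuron indices sorted
    from most to least influential."""
    influence = [nu_bg[j] * sum(row[j] for row in w_out)
                 for j in range(len(nu_bg))]
    return sorted(range(len(nu_bg)), key=lambda j: -influence[j])

# Toy example: neuron 1 has the largest rate-weight product
order = influence_order([1.0, 5.0, 2.0],
                        [[0.1, 0.2, 0.3],
                         [0.1, 0.2, 0.3]])
print(order)  # -> [1, 2, 0]
```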

Clustering index for dendritic dynamics (Fig. 6e,f)
We defined the clustering index as:
where \(\left\langle {w}_{{{{\rm{co}}}}{{{\rm{active}}}}}\right\rangle\) is the average of the weights from the coactive excitatory group, and \(\left\langle {w}_{{{{\rm{independent}}}}}\right\rangle\) is the average of the weights from all independent groups after learning (see individual weight dynamics in Extended Data Fig. 7). When c_{cluster} = 1, the excitatory weights from the coactive group survived after learning and independent ones vanished, whereas, for c_{cluster} = −1, the opposite happened. Both coactive and independent groups survived after learning when c_{cluster} ≈ 0.
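The clustering-index equation itself is not reproduced above. The normalized difference below is one plausible form consistent with the described endpoints (c_{cluster} = 1, −1 and ≈0) and should be read as an assumption, not the paper's exact definition.

```python
# Hypothetical clustering index: one normalized-difference form consistent
# with the described endpoints. This is an assumption, not the paper's
# exact definition (its equation is not reproduced in this excerpt).

def clustering_index(w_coactive, w_independent):
    """+1 if only coactive weights survive, -1 if only independent weights
    survive, ~0 if both groups survive equally."""
    mean_co = sum(w_coactive) / len(w_coactive)
    mean_ind = sum(w_independent) / len(w_independent)
    return (mean_co - mean_ind) / (mean_co + mean_ind)

assert clustering_index([1.0, 1.0], [0.0, 0.0]) == 1.0   # only coactive survive
assert clustering_index([0.0, 0.0], [1.0, 1.0]) == -1.0  # only independent survive
assert clustering_index([1.0], [1.0]) == 0.0             # both groups survive
```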
Training an output to draw complex patterns
To test whether the dynamics of the recurrent network were capable of generating rich output dynamics, we connected all excitatory neurons of our recurrent network to two linear readouts, x^{t} and y^{t}, with discrete timestep t, given by:
where \({\xi }_{x}^{t}\) and \({\xi }_{y}^{t}\) are noise sources taken from a uniform distribution in the interval [−0.02, 0.02]. The readouts represented movement in the horizontal and vertical directions of a two-dimensional (2D) plane. The parameters a_{j}, b_{j}, x_{0} and y_{0} were optimized to minimize the error in both x and y coordinates:
where \({e}_{x}^{t}\) and \({e}_{y}^{t}\) are the errors in the horizontal and vertical directions, respectively, and \(-1 < {\hat{x}}^{t} < 1\) and \(-1 < {\hat{y}}^{t} < 1\) are the coordinates of one of four complex patterns. To calculate \({r}_{j}^{t}\), we filtered the spike trains of the jth neuron with a Gaussian filter with standard deviation σ_{r} = 10 ms:
where \({\widetilde{r}}_{j}(t,\Delta T)\) is the jth neuron’s normalized firing rate deviation from baseline (averaged over trials) in the time bin between t and t + ΔT (ΔT = 20 ms):
where \({r}_{j}^{{{{\rm{bg}}}}}\) is the jth neuron's baseline firing rate. The time course of the simulation was divided into 88 bins, and the period after stimulus offset was used to train the output weights to draw four complex patterns for the four different stimuli from Fig. 8, which resulted from distinct patterns of stimulation. Each training epoch (single pattern presentation) was simulated with the average firing rate of 1,000 trials plus noise. We used the same activity patterns fed to the two readouts, \({r}_{j}^{t}\), to compute the principal components shown in Fig. 8c. Figure 8d shows 10 trajectories for each pattern. We did not perform any benchmark test, as this is beyond the scope of this study.
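The Gaussian filtering step (σ_{r} = 10 ms) can be sketched as follows; the evaluation grid and the absence of kernel truncation are implementation assumptions.

```python
import math

# Sketch of the Gaussian filtering of spike trains used to compute r_j^t
# (sigma_r = 10 ms, as stated in the text).

def gaussian_rate(spike_times, t_eval, sigma=0.010):
    """Instantaneous rate estimate (Hz) at time t_eval (s): a sum of
    area-normalized Gaussian kernels centered on each spike time (s)."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((t_eval - s) / sigma) ** 2)
               for s in spike_times)

# Sanity check: the kernel is area-normalized, so integrating the rate
# estimate of a single spike over time recovers ~1 spike.
dt = 1e-4
area = sum(gaussian_rate([0.5], k * dt) for k in range(10_000)) * dt
assert abs(area - 1.0) < 1e-3
```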
Spike-based and voltage-based plasticity models
In Fig. 4c, we combined excitatory^{9} and inhibitory^{7} spike-based plasticity rules to show how they can destructively compete when their firing rate setpoints do not match. In Fig. 4d, we combined an excitatory spike-based plasticity rule^{9} with the codependent inhibitory synaptic plasticity rule to show that this competition is absent when the plasticity rules' dynamics follow fixed points for different quantities: here, ESP imposes a firing rate setpoint, while ISP imposes an input-current setpoint. In Extended Data Fig. 2b,c,f,g,j,k, we compared the codependent excitatory plasticity rule with spike-based^{5} and voltage-based^{6} rules for the frequency-dependent STDP protocol^{15} with additional external inputs. In Supplementary Fig. 3, we implemented spike-based^{5,9,10,59} and voltage-based^{6} models in a receptive field plasticity paradigm. The spike-based and voltage-based plasticity models are described in the Supplementary Modeling Note.
Simulations and analyses
All simulations were run with Intel Fortran 19.0.1.144. Parameters used in simulations are defined in Supplementary Tables 1–9. Principal component analysis of the recurrent network activity was performed with MATLAB 2020b. Data collection and analysis were not performed blinded to the conditions of the experiments. No data were excluded.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Spike-timing-dependent plasticity data (ref. ^{15} and ref. ^{16}) are publicly available from http://plasticity.muhc.mcgill.ca/page8.html.
Code availability
Relevant code for simulations reported in this study is available at https://github.com/ejagnes/codependent_plasticity.
References
Markram, H., Gerstner, W. & Sjöström, P. J. A history of spike-timing-dependent plasticity. Front. Synaptic Neurosci. 3, 4 (2011).
Poo, M. et al. What is memory? The present state of the engram. BMC Biol. 14, 40 (2016).
Hennequin, G., Agnes, E. J. & Vogels, T. P. Inhibitory plasticity: balance, control, and codependence. Annu. Rev. Neurosci. 40, 557–579 (2017).
Song, S., Miller, K. D. & Abbott, L. F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3, 919–926 (2000).
Pfister, J.-P. & Gerstner, W. Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 26, 9673–9682 (2006).
Clopath, C., Büsing, L., Vasilaki, E. & Gerstner, W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat. Neurosci. 13, 344–352 (2010).
Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C. & Gerstner, W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334, 1569–1573 (2011).
Graupner, M. & Brunel, N. Calciumbased plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location. Proc. Natl Acad. Sci. USA 109, 3991–3996 (2012).
Zenke, F., Agnes, E. J. & Gerstner, W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat. Commun. 6, 6922 (2015).
Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A. & Naud, R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nat. Neurosci. 24, 1010–1019 (2021).
D’amour, J. A. & Froemke, R. C. Inhibitory and excitatory spike-timing-dependent plasticity in the auditory cortex. Neuron 86, 514–528 (2015).
Mapelli, J., Gandolfi, D., Vilella, A., Zoli, M. & Bigiani, A. Heterosynaptic GABAergic plasticity bidirectionally driven by the activity of pre- and postsynaptic NMDA receptors. Proc. Natl Acad. Sci. USA 113, 9898–9903 (2016).
Wang, L. & Maffei, A. Inhibitory plasticity dictates the sign of plasticity at excitatory synapses. J. Neurosci. 34, 1083–1093 (2014).
Paille, V. et al. GABAergic circuits control spike-timing-dependent plasticity. J. Neurosci. 33, 9353–9363 (2013).
Sjöström, P. J., Turrigiano, G. G. & Nelson, S. B. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 32, 1149–1164 (2001).
Sjöström, P. J. & Häusser, M. A cooperative switch determines the sign of synaptic plasticity in distal dendrites of neocortical pyramidal neurons. Neuron 51, 227–238 (2006).
Brandalise, F., Carta, S., Helmchen, F., Lisman, J. & Gerber, U. Dendritic NMDA spikes are necessary for timing-dependent associative LTP in CA3 pyramidal cells. Nat. Commun. 7, 13480 (2016).
Debanne, D., Gähwiler, B. H. & Thompson, S. M. Cooperative interactions in the induction of long-term potentiation and depression of synaptic excitation between hippocampal CA3–CA1 cell pairs in vitro. Proc. Natl Acad. Sci. USA 93, 11225–11230 (1996).
El-Boustani, S. et al. Locally coordinated synaptic plasticity of visual cortex neurons in vivo. Science 360, 1349–1354 (2018).
Tazerart, S., Mitchell, D. E., Miranda-Rottmann, S. & Araya, R. A spike-timing-dependent plasticity rule for dendritic spines. Nat. Commun. 11, 4276 (2020).
Froemke, R. C., Merzenich, M. M. & Schreiner, C. E. A synaptic memory trace for cortical receptive field plasticity. Nature 450, 425–429 (2007).
Williams, L. E. & Holtmaat, A. Higherorder thalamocortical inputs gate synaptic longterm potentiation via disinhibition. Neuron 101, 91–102 (2019).
Canto-Bustos, M., Friason, F. K., Bassi, C. & Oswald, A.-M. M. Disinhibitory circuitry gates associative synaptic plasticity in olfactory cortex. J. Neurosci. 42, 2942–2950 (2022).
Harvey, C. D. & Svoboda, K. Locally dynamic synaptic learning rules in pyramidal neuron dendrites. Nature 450, 1195–1200 (2007).
Litwin-Kumar, A. & Doiron, B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nat. Commun. 5, 5319 (2014).
Churchland, M. M. et al. Neural population dynamics during reaching. Nature 487, 51–56 (2012).
Hennequin, G., Vogels, T. P. & Gerstner, W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 82, 1394–1406 (2014).
Christodoulou, G., Vogels, T. P. & Agnes, E. J. Regimes and mechanisms of transient amplification in abstract and biological neural networks. PLoS Comput. Biol. 18, e1010365 (2022).
Nicola, W. & Clopath, C. Supervised learning in spiking neural networks with FORCE training. Nat. Commun. 8, 2208 (2017).
Sussillo, D. & Abbott, L. F. Generating coherent patterns of activity from chaotic neural networks. Neuron 63, 544–557 (2009).
Kirchner, J. H. & Gjorgjieva, J. Emergence of local and global synaptic organization on cortical dendrites. Nat. Commun. 12, 4005 (2021).
Bi, G.-q. & Poo, M.-m. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998).
Rutishauser, U., Mamelak, A. N. & Schuman, E. M. Single-trial learning of novel stimuli by individual neurons of the human hippocampus–amygdala complex. Neuron 49, 805–813 (2006).
Letzkus, J. J., Wolff, S. B. E. & Lüthi, A. Disinhibition, a circuit mechanism for associative learning and memory. Neuron 88, 264–276 (2015).
Morrison, A., Aertsen, A. & Diesmann, M. Spike-timing-dependent plasticity in balanced random networks. Neural Comput. 19, 1437–1467 (2007).
Woodin, M. A., Ganguly, K. & Poo, M.-m. Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl^{−} transporter activity. Neuron 39, 807–820 (2003).
Bono, J. & Clopath, C. Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level. Nat. Commun. 8, 706 (2017).
Hiratani, N. & Fukai, T. Detailed dendritic excitatory/inhibitory balance through heterosynaptic spike-timing-dependent plasticity. J. Neurosci. 37, 12106–12122 (2017).
Ebner, C., Clopath, C., Jedlicka, P. & Cuntz, H. Unifying long-term plasticity rules for excitatory synapses by modeling dendrites of cortical pyramidal neurons. Cell Rep. 29, 4295–4307 (2019).
Agnes, E. J., Luppi, A. I. & Vogels, T. P. Complementary inhibitory weight profiles emerge from plasticity and allow flexible switching of receptive fields. J. Neurosci. 40, 9634–9649 (2020).
Miehl, C. & Gjorgjieva, J. Stability and learning in excitatory synapses by nonlinear inhibitory plasticity. PLoS Comput. Biol. 18, e1010682 (2022).
Fritz, J., Shamma, S., Elhilali, M. & Klein, D. Rapid taskrelated plasticity of spectrotemporal receptive fields in primary auditory cortex. Nat. Neurosci. 6, 1216–1223 (2003).
Froemke, R. C. Plasticity of cortical excitatory–inhibitory balance. Annu. Rev. Neurosci. 38, 195–219 (2015).
Poirazi, P. & Papoutsi, A. Illuminating dendritic function with computational models. Nat. Rev. Neurosci. 21, 303–321 (2020).
Gulledge, A. T. & Stuart, G. J. Action potential initiation and propagation in layer 5 pyramidal neurons of the rat prefrontal cortex: absence of dopamine modulation. J. Neurosci. 23, 11363–11372 (2003).
Wilson, D. E., Whitney, D. E., Scholl, B. & Fitzpatrick, D. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex. Nat. Neurosci. 19, 1003–1009 (2016).
Kavalali, E. T., Klingauf, J. & Tsien, R. W. Activity-dependent regulation of synaptic clustering in a hippocampal culture system. Proc. Natl Acad. Sci. USA 96, 12893–12900 (1999).
Destexhe, A., Rudolph, M. & Paré, D. The high-conductance state of neocortical neurons in vivo. Nat. Rev. Neurosci. 4, 739–751 (2003).
Hengen, K. B., Lambo, M. E., Van Hooser, S. D., Katz, D. B. & Turrigiano, G. G. Firing rate homeostasis in visual cortex of freely behaving rodents. Neuron 80, 335–342 (2013).
Sanders, H., Berends, M., Major, G., Goldman, M. S. & Lisman, J. E. NMDA and GABA_{B} (KIR) conductances: the ‘perfect couple’ for bistability. J. Neurosci. 33, 424–429 (2013).
Nabavi, S. et al. Metabotropic NMDA receptor function is required for NMDA receptor-dependent long-term depression. Proc. Natl Acad. Sci. USA 110, 4027–4032 (2013).
Keck, T. et al. Synaptic scaling and homeostatic plasticity in the mouse visual cortex in vivo. Neuron 80, 327–334 (2013).
Churchland, M. M. et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat. Neurosci. 13, 369–378 (2010).
Frémaux, N. & Gerstner, W. Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Front. Neural Circuits 9, 85 (2016).
Branco, T., Clark, B. A. & Häusser, M. Dendritic discrimination of temporal input sequences in cortical neurons. Science 329, 1671–1675 (2010).
Packer, A. M., Russell, L. E., Dalgleish, H. W. P. & Häusser, M. Simultaneous alloptical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12, 140–146 (2015).
Pardi, M. B. et al. A thalamocortical top-down circuit for associative memory. Science 370, 844–848 (2020).
Wilmes, K. A., Sprekeler, H. & Schreiber, S. Inhibition as a binary switch for excitatory plasticity in pyramidal neurons. PLoS Comput. Biol. 12, e1004768 (2016).
Van Rossum, M. C. W., Bi, G. Q. & Turrigiano, G. G. Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 20, 8812–8821 (2000).
Acknowledgements
We thank C. Currin, B. Podlaski and the members of the Vogels group for fruitful discussions. E.J.A. and T.P.V. were supported by a Research Project Grant from the Leverhulme Trust (RPG-2016-446; T.P.V.), a Sir Henry Dale Fellowship from the Wellcome Trust and the Royal Society (WT100000; T.P.V.), a Wellcome Trust Senior Research Fellowship (214316/Z/18/Z; T.P.V.) and a European Research Council Consolidator Grant (SYNAPSEEK, 819603; T.P.V.). For the purpose of open access, the authors have applied a CC BY public copyright license to any author accepted manuscript version arising from this submission.
Funding
Open access funding provided by University of Basel.
Author information
Authors and Affiliations
Contributions
E.J.A. and T.P.V. designed the research. E.J.A. carried out the simulations and analyses. E.J.A. and T.P.V. wrote the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Neuroscience thanks the anonymous reviewers for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 Contribution of spike times, excitation, and inhibition to weight changes for the codependent synaptic plasticity model.
a–c, Schematics of the sequence of spikes (left) and the resulting weight change for two different spike patterns (middle and right) for the codependent excitatory synaptic plasticity model as a function of the levels of excitation and inhibition during plasticity. a, Spike triplet: post-pre-post sequence with fixed post-before-pre spike interval, Δt_{LTD}, and two examples of intervals between two consecutive postsynaptic spikes, Δt_{het}. b, Doublet: post-before-pre spike pattern with two different intervals, Δt_{LTD}. c, Postsynaptic burst with two spikes at different interspike intervals, Δt_{het}. d, Same as panel b for the codependent inhibitory synaptic plasticity model.
Extended Data Fig. 2 Comparison between synaptic plasticity models using the frequency-dependent STDP protocol.
a–c, Relevant variables at the moment of synaptic plasticity induction as a function of the frequency of spike pairs with pre-before-post (top; Δt = +10 ms) and post-before-pre (bottom; Δt = −10 ms) timing. Synaptic plasticity is induced at the moment of either a postsynaptic spike, t_{post} (or \({t}_{{{{\rm{post}}}}}^{{{{\rm{AP}}}}}\)), or a presynaptic spike, \({t}_{{{{\rm{pre}}}}}\). a, Average of the traces of NMDA currents (left; E(t_{post})), presynaptic spikes (right; \({x}_{{{{\rm{pre}}}}}^{+}\)(t_{post})), and postsynaptic spikes (right; \({y}_{{{{\rm{post}}}}}^{{{{\rm{E}}}}}\)(t_{post}) and \({t}_{{{{\rm{post}}}}}^{}\)(\({t}_{{{{\rm{pre}}}}}\))). b, Same as panel a, right, for the spike-based triplet STDP model. c, Average of the traces of presynaptic spikes (left; \({t}_{{{{\rm{pre}}}}}\)(\({t}_{{{{\rm{post}}}}}^{{{{\rm{AP}}}}}\))) and postsynaptic membrane potential (right; u(\({t}_{{{{\rm{post}}}}}^{{{{\rm{AP}}}}}\)), u^{+}(\({t}_{{{{\rm{post}}}}}^{{{{\rm{AP}}}}}\)), and u^{+}(\({t}_{{{{\rm{pre}}}}}\))). Dashed and continuous lines show averages for zero (w_{eff} = 0) and nonzero (w_{eff} = w) synaptic weights. d–g, Plasticity-inducing protocol for different models with pairs of pre-before-post (Δt = +10 ms) and post-before-pre (Δt = −10 ms) spikes for varying spiking frequencies and different firing rates of neighbouring excitatory and inhibitory afferents (colour coded). Plots show changes in synaptic weight of a single connection while the other two (excitatory and inhibitory) are kept fixed. Spike-based triplet spike-timing-dependent plasticity model from ref. ^{5} and voltage-based plasticity model from ref. ^{6}. h–k, Weight change as a function of neighbouring synapses’ input frequency (y axis) and frequency of spike pairs (x axis). Arrows indicate external frequencies used in panels d–g. Plots from panels d and h are also shown in Fig. 2e,f. Error bars indicate SEM. Experimental data in panels d–g were adapted with permission from ref. ^{15} (we refer to ref. ^{15} for information about sample sizes and statistical analysis).
Extended Data Fig. 3 Effect of distance dependence for excitatory current and weight stability.
a, Firing rate of a single postsynaptic neuron as a function of the total NMDA current for three different inhibitory weights. Points are from simulations and lines from fitting the points to Eq. (36). b, Average NMDA currents (red open circles; same plot as Fig. 3b) and average filtered NMDA currents (variable E) divided by the number of excitatory synapses, N_{E} (pink filled circles), for τ_{E} = 10 ms. c, Standard deviation of the NMDA currents (red open circles; same plot as Fig. 3b) and standard deviation of the filtered NMDA currents (variable E) divided by the number of excitatory synapses, N_{E} (pink filled circles), for τ_{E} = 10 ms. Arrows indicate which values were used in the plots of panels d to f. d, Top: temporal average of the NMDA currents after learning for each excitatory synapse as a function of the presynaptic firing rate for distinct values of σ (see arrows in panel b, right). Bottom: distribution of the NMDA currents after learning (from the plots above). Arrowheads indicate the mean. e, Same as panel d for the filtered NMDA currents (variable E) divided by the number of excitatory synapses, N_{E}. Notice that, for large σ, E/N_{E} becomes independent of the input firing rate. f, Same as panel d for the synaptic weights. g, Influence of synapses on excitatory synaptic plasticity. Dashed and continuous lines correspond to the first and middle synapse in our 1D line implementation (see Fig. 3b). Each colour corresponds to a different number of excitatory synapses, N_{E} (legend). Left: percentage of influence of the synapse undergoing plasticity (the synapse’s own NMDA current contribution) on its plasticity. Right: percentage of influence from the neighbouring synapses (contribution of neighbouring NMDA currents only, without the synapse’s own NMDA currents).
Extended Data Fig. 4 Raster plot of inhibitory (top) and excitatory (bottom) neurons used in the receptive field plasticity simulation (Fig. 5 and Extended Data Fig. 5a).
Extended Data Fig. 5 Fast inhibitory plasticity or weak inhibitory control over excitatory plasticity prevents the stable formation of receptive fields.
a and b, Same simulation protocol used in Fig. 5a,b, but with a larger learning rate of inhibitory plasticity (increased 50-fold). Evolution of excitatory (top) and inhibitory (bottom) weights. The shaded area (*) indicates the learning window, when all inhibitory afferents are downregulated. Excitatory input groups are activated for receptive-field formation during the learning window (Extended Data Fig. 4). c, Snapshots of the average synaptic weights for the different pathways at the moments indicated by the ⋆ symbols in panels a and b. d and e, Same as panels a and b, but with weak inhibitory control over excitatory plasticity rather than fast inhibitory plasticity. f, Snapshots of the average synaptic weights for the different pathways at the moments indicated by the ⋆ symbols in panels d and e.
Extended Data Fig. 6 Correlation between postsynaptic spike times and the main input pathway connected to a dendritic compartment.
Pearson correlation between filtered postsynaptic spike times (low-pass filter with a 100-ms time constant) and the main input pathway as a function of electrotonic distance between the dendritic compartment and the soma. Each colour indicates the coactive group size (see legend) for independent (left) and matching (right) excitatory and inhibitory inputs.
Extended Data Fig. 7 Evolution of synaptic weights connected to dendritic compartments for matched and independent E & I.
a, Weights of coactive (green) and uncorrelated (grey) excitatory inputs with size of coactive excitatory group and distance of dendritic compartment from the soma indicated by ‘group size’ and ‘d’, respectively. b, Weights of coactive (green; same activity pattern as coactive excitatory group) and uncorrelated (grey) inhibitory inputs. Size of the coactive inhibitory group was kept fixed at half of the inhibitory population. c, Same as panel a, but when inhibitory inputs are independent of excitatory ones. d, Same as in panel b, but with no correlation between excitatory and inhibitory inputs.
Extended Data Fig. 8 Characterisation of the recurrent network dynamics before and after learning.
a, Histogram of the membrane potential of all excitatory neurons. b, Histogram of the interspike interval (ISI) of all excitatory neurons. c, Histogram of the coefficient of variation (standard deviation divided by the mean) of the interspike intervals (from panel b) for all excitatory neurons. d, Histogram of the average effective membrane time constant for all excitatory neurons. The effective membrane time constant of a neuron is defined as the neuron’s membrane time constant divided by the neuron’s total conductance. e, Pearson correlation between excitatory and inhibitory inputs onto an example excitatory neuron of the network.
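The ISI coefficient of variation in panel c is a standard irregularity measure and can be computed as below. This is a generic sketch (function name assumed), not the authors' analysis code.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation (std / mean) of interspike intervals.

    CV near 0 indicates regular, clock-like firing; CV near 1 is
    consistent with Poisson-like irregular firing."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()
```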
Extended Data Fig. 9 Development of the recurrent connectivity structure for different balancing parameters.
a, Sum of input excitatory connections onto each excitatory neuron of the network, ordered from the strongest to the weakest connection sum. b, Sum of output excitatory connections per excitatory neuron, following the same order as in panel a. c, Sum of input inhibitory connections onto each excitatory neuron of the network, following the same order as in panel a. d, Total excitatory (red) and inhibitory (blue) currents onto a given excitatory neuron of the recurrent network during the learning period. e, Firing rate of two excitatory neurons in the recurrent network at different time bins (of size 1 second). f, Average membrane potential (calculated in a 1-second time bin) of the two neurons from panel e. Each row shows plots of simulations with a different balancing term, α (Eq. (2)). Panels a–c in the middle row (α = 1.2) are the same as in Fig. 7f–h.
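The per-neuron connection sums in panels a–c are row and column sums of the recurrent weight matrix. The sketch below assumes the convention W[i, j] = weight from presynaptic neuron j to postsynaptic neuron i; the function name and convention are illustrative assumptions.

```python
import numpy as np

def connection_sums(W):
    """Summed input and output weights per neuron, ordered from the
    strongest to the weakest input sum (as in panel a)."""
    input_sum = W.sum(axis=1)            # total weight onto each neuron
    output_sum = W.sum(axis=0)           # total weight out of each neuron
    order = np.argsort(input_sum)[::-1]  # strongest input sum first
    return input_sum[order], output_sum[order], order
```

Plotting the output sums (panel b) and inhibitory input sums (panel c) with the same ordering reveals whether strongly driven neurons also project strongly and receive matched inhibition.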
Extended Data Fig. 10 Recurrent network response to external inputs before and after learning with different EI balance setpoints.
a, Dynamics of the naïve network (before learning). ℓ₂-norm of the firing-rate deviations from baseline (left) and average firing rate (right) of excitatory neurons for the five stimulation patterns. Dynamics used in Fig. 8c,d (‘before’). b, Same as panel a, but for the naïve network (before learning) without background input (baseline firing rate is zero for all neurons). c, Same as panels a and b, but for a network after learning with α = 0.9. d, Same as panels a and b, but for a network after learning with α = 1.4.
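The response norm used here can be sketched as the ℓ₂-norm, across neurons, of the deviation of each neuron's firing rate from its baseline in every time bin. The function name and array layout (neurons × time bins) are assumptions for illustration.

```python
import numpy as np

def response_norm(rates, baseline):
    """L2-norm across neurons of firing-rate deviations from baseline.

    rates    : array of shape (n_neurons, n_bins)
    baseline : array of shape (n_neurons,), per-neuron baseline rate
    returns  : array of shape (n_bins,), one norm per time bin"""
    deviations = rates - baseline[:, None]
    return np.linalg.norm(deviations, axis=0)
```

A norm that returns to zero after stimulus offset indicates that the network settles back to its baseline activity rather than drifting to a new state.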
Supplementary information
Supplementary Information
Supplementary Figs. 1–4, Supplementary Tables 1–9 and Supplementary Modeling Note.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Agnes, E.J., Vogels, T.P. Codependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 27, 964–974 (2024). https://doi.org/10.1038/s41593-024-01597-4