Abstract
Our daily endeavors occur in a complex visual environment, whose intrinsic variability challenges the way we integrate information to make decisions. By processing myriads of parallel sensory inputs, our brain is theoretically able to compute the variance of its environment, a cue known to guide our behavior. Yet, the neurobiological and computational bases of such variance computations are still poorly understood. Here, we quantify the dynamics of sensory variance modulations of cat primary visual cortex neurons. We report two archetypal neuronal responses, one of which is resilient to changes in variance and co-encodes the sensory feature and its variance, improving the population encoding of orientation. The existence of these variance-specific responses can be accounted for by a model of intracortical recurrent connectivity. We thus propose that local recurrent circuits process uncertainty as a generic computation, advancing our understanding of how the brain handles naturalistic inputs.
Introduction
Selectivity to the orientation of visual stimuli is an archetypal feature of the neurons in the mammalian primary visual cortex (V1)^{1}, which has been historically studied using low-complexity stimuli such as oriented gratings^{2}. While this approach offers a clear hypothesis as to what neurons are responding to, it only probes for neural selectivity to individual input parameters, such as orientation or spatial frequency. Natural vision, however, involves rich cortical dynamics^{3} integrating a mixture of multiple local parameters and global contextual information^{4}. Hence, a majority of our understanding of V1 relies on neural responses to single inputs in orientation space, rather than naturalistic responses to multiple orientations.
This knowledge gap is not trivial, as the variance of distributions of sensory inputs is a fundamental cue on which our brain relies to produce coherent integration of sensory inputs and prior knowledge of the world^{5,6} in order to drive behavior^{7}. According to Bayesian inference rules, low-variance inputs are processed through fast feedforward pathways, whereas higher sensory variance elicits a slower, recurrent integration^{8}. How the brain performs computations on variance is not yet fully understood. In V1, it has been shown that single neurons undergo nonlinear tuning modulations as a function of their input’s variance^{9}, which can serve as a functional encoding scheme^{10,11}. These recent results align with earlier models of recurrent cortical activity of V1^{12,13} and also match psychophysical measurements in humans^{14,15,16}. While it seems that local interactions within V1 are sufficient to encode orientation variance^{17}, the quantification of single neuron responses, their dynamics and their link to a functional population encoding of variance remains to be established.
Here, we investigate the neural basis of variance processes in V1 using stimuli matching the orientation content of natural images^{18}. We present a quantitative analysis of single neurons’ variance-tuning functions, as well as their dynamics, reporting heterogeneous modulations. Two archetypal response types emerge in V1, one of which relies on predominantly supragranular neurons that maintain robust orientation tuning despite high sensory variance, allowing them to co-encode orientation and variance, and enhancing V1’s orientation distribution encoding. A well-established V1 intracortical recurrence model accounts for these resilient neurons, aligning with canonical Bayesian frameworks^{6} and suggesting uncertainty computations as a new generic function for local recurrent cortical connectivity.
Results
Single-neuron response in V1 depends on input variance
We recorded neural activity from 249 anesthetized cat V1 neurons and measured orientation-selective responses to naturalistic images called Motion Clouds^{18}. These stimuli are band-pass filtered white noise textures and offer three advantages over both simple grating-like stimuli and complex natural images. First, they enable fine control of the mean θ and variance B_{θ} of orientation distributions through a generative model, thereby reproducing natural images’ oriented content (Fig. 1). Second, as they are stationary in the spatial domain, they only probe orientation space, excluding any second-order information exploitable by the visual cortex^{19}. Third, by conforming to natural images’ 1/f^{2} power spectrum distribution^{20}, they attain a desirable balance between controllability and naturalness^{21}. We generated 96 Motion Clouds by varying the mean orientation θ between 0° and 180° in 12 even steps and the variance B_{θ} between ≈0° and 35° in eight evenly spaced steps.
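The generative principle behind these stimuli can be sketched in a few lines of NumPy. This is an illustrative reduction, not the published MotionClouds code: white noise is filtered in the Fourier domain by a von Mises orientation envelope with concentration κ = 1/B_θ² (B_θ in radians), combined with a simplified log-Gaussian stand-in for the spatial-frequency envelope; all parameter values are placeholders.

```python
import numpy as np

def motion_cloud(size=256, theta=0.0, b_theta=np.deg2rad(10.0), sf_0=0.125, b_sf=0.05):
    """Band-pass filtered white noise with a von Mises orientation envelope.

    Minimal illustrative sketch; the published MotionClouds package
    implements the full spatiotemporal version.
    """
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-6                      # avoid log/division trouble at DC
    angle = np.arctan2(fy, fx)

    # Orientation envelope: von Mises with concentration kappa = 1 / b_theta**2
    kappa = 1.0 / b_theta**2
    env_theta = np.exp(kappa * np.cos(2 * (angle - theta)))

    # Simplified log-Gaussian spatial-frequency envelope centered on sf_0
    env_sf = np.exp(-(np.log(f / sf_0))**2 / (2 * b_sf))

    # Filter white noise in the Fourier domain and transform back
    noise = np.fft.fft2(np.random.randn(size, size))
    texture = np.real(np.fft.ifft2(noise * env_theta * env_sf))
    return texture / np.abs(texture).max()   # normalize to [-1, 1]

im = motion_cloud(theta=np.pi / 4, b_theta=np.deg2rad(35.0))
```

Increasing `b_theta` broadens the orientation envelope and yields visibly less "oriented" textures, mirroring the progression illustrated in Fig. 1.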
All recorded neurons displayed orientation selectivity to Motion Clouds. Nearly all (98.8%, p < 0.05, Wilcoxon signed-rank test) units maintained their preferred orientation when the variance B_{θ} increased, while the peak amplitude of the tuning curve diminished significantly (95.1% of units, p < 0.05, Wilcoxon signed-rank test, 73.1% mean amplitude decrease for B_{θ} = 35°). Only 28.5% of the recorded units were still tuned for B_{θ} = 35.0° stimuli (p < 0.05, Wilcoxon signed-rank test). Thus, increasing input variance reduces single-neuron tuning, and it does so heterogeneously across neurons, as evidenced by the two representative single units shown in Fig. 2a. Neuron A illustrates single units which are no longer orientation-tuned when the variance B_{θ} reaches 35° (W = 171.0, p = 0.24, Wilcoxon signed-rank test), unlike neuron B (W = 22.5, p = 10^{−6}), which exemplifies the aforementioned 28.5% of variance-resilient units. These response types are characterized by functions relating B_{θ} to the goodness of tuning (circular variance, CV), named here variance-tuning functions (VTF, Fig. 2b). Such VTFs represent the input/output transformation in variance space and are well fitted with Naka-Rushton functions^{22} (Supplementary Fig. 2a). This allows us to summarize variance modulations using only three parameters: n, the VTF nonlinearity; B_{θ50}, the input variance level for the tuned-untuned state transition; and f_{0}, the orientation tuning goodness for lowest-variance inputs. Overall, VTFs exposed diverse responses to variance among V1 neurons, with median values outlining a characteristic VTF that is slightly nonlinear, with a changepoint at B_{θ} = 19.2° (Fig. 2c). In other words, most neurons tend to change abruptly in tuning when input variance reaches 19.2°, after which the response becomes less sensitive to orientation. Alternative metrics were also calculated, including variance–half-width at half height (HWHH) and variance–maximum response functions (Supplementary Fig. 2b–e).
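For concreteness, the two quantities underlying a VTF can be sketched as follows. The circular variance definition is standard; the Naka-Rushton parameterization shown is one plausible form matching the three reported parameters (f_{0}, n, B_{θ50}), not necessarily the exact equation fitted in the paper, and the demo data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def circular_variance(rates, thetas):
    """Circular variance of an orientation tuning curve (0 = sharp, 1 = untuned).

    `thetas` in radians over [0, pi); the factor 2 maps the pi-periodic
    orientation domain onto the full circle.
    """
    z = np.sum(rates * np.exp(2j * np.asarray(thetas))) / np.sum(rates)
    return 1.0 - np.abs(z)

def naka_rushton(b_theta, f0, n, b50):
    # VTF: goodness of tuning (CV) rises from f0 toward 1 with input variance;
    # n is the nonlinearity, b50 the tuned/untuned changepoint.
    return f0 + (1.0 - f0) * b_theta**n / (b_theta**n + b50**n)

# A flat tuning curve has CV = 1 (12 evenly spaced orientations over [0, pi))
cv_flat = circular_variance(np.ones(12), np.linspace(0, np.pi, 12, endpoint=False))

# Fit a VTF from CVs measured at the eight variance levels (synthetic demo data)
b_thetas = np.linspace(0.0, 35.0, 8)
cvs = naka_rushton(b_thetas, f0=0.2, n=3.0, b50=19.2)
cvs += 0.01 * np.random.RandomState(0).randn(8)
popt, _ = curve_fit(naka_rushton, b_thetas, cvs, p0=[0.2, 2.0, 15.0],
                    bounds=([0.0, 0.5, 1.0], [1.0, 10.0, 90.0]))
```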
Although HWHH displayed patterns resembling VTFs, we elected not to use it: its reliance on fits, its consequent susceptibility to fitting artifacts, and its similarity with CV are not desirable properties. Since CV also inherently accounts for the firing rate at the preferred orientation (see “Methods”), we relied on this metric to describe both maximum amplitude and goodness of tuning in a single measure.
Orientation variance impacts not only orientation tuning but also the dynamics of the response of V1 neurons (Fig. 3). Interestingly, both effects are linked, as demonstrated by the two example VTFs: neuron B, which exhibited orientation-tuned responses for B_{θ} = 35° inputs (Fig. 2a), also had a slower time-dependent change of goodness of tuning (relative minimum of 42% of max. CV at 200 ms post-stimulation onset, B_{θ} = 0°) compared to neuron A (relative minimum of 26% of max. CV at 90 ms post-stimulation onset, Fig. 3b). These dynamical modulations were also heterogeneously distributed among the population, with significantly more spikes emitted 200 ms after stimulation onset for B_{θ} = 35° (Fig. 3d, U = 14936.0, p < 0.001, Mann–Whitney U test). In summary, orientation variance induces changes in both the tuning and the dynamics of V1 neurons, revealing two archetypal types of response: either fast in time and nonlinear with respect to variance (neuron A) or slow in time and linear with respect to variance (neuron B).
Multiple types of variance responses are found in V1
To properly characterize the two aforementioned types of responses to variance, we separated the recorded neurons into two groups using K-means clustering on the principal components (PC, Fig. 4) of the neuronal responses. Clustering was performed on the VTFs (Fig. 4b), tuning statistical measurements (Fig. 4c, d) and response dynamics (Fig. 4e, f). We used the first two PCs for clustering the data, which accounted for 39.1% of the cumulative variance (Supplementary Fig. 4a), and chose two clusters based on the number of example responses and the empirical absence of an elbow^{23} in the Within-Cluster Sum of Squares (WCSS) curve (Supplementary Fig. 4b). This split the data into a cluster of 164 neurons, including neuron A, and another cluster of 85 neurons associated with neuron B’s response type. As neuron B displayed resilience to increased input variance (Fig. 2a), its cluster was labeled resilient neurons. Conversely, neurons clustered with neuron A were labeled vulnerable neurons (blue and red colors, respectively, Fig. 4a). Categorizing the data into two distinct response types facilitates a comprehensive understanding of the underlying continuum of behaviors, an approach that has proven successful in the characterization of novel visual responses, such as V1 simple/complex cells^{24} and MT pattern/component cells^{25}.
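The clustering pipeline can be sketched with scikit-learn. The feature matrix below is a synthetic placeholder for the per-neuron VTF parameters, tuning statistics and dynamics measurements; only the pipeline structure (z-scoring, PCA to 2 components, K-means with k = 2) follows the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature matrix: one row per neuron, columns concatenating
# VTF parameters, tuning statistics and response-dynamics measurements.
features = rng.normal(size=(249, 12))
features[:85] += 1.5            # crude stand-in for a second response type

X = StandardScaler().fit_transform(features)      # z-score each feature
pcs = PCA(n_components=2).fit_transform(X)        # keep the first 2 PCs
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
```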
The K-means clustering resulted in a significant difference between the two groups’ VTF parameters (Fig. 4b): resilient neurons had significantly more linear modulations (\(\log (n),U=4029.0,p \, < \, 0.001\), Mann–Whitney U test), higher changepoints (B_{θ50}, U = 7854.0, p = 0.028) and better tuning to low-variance inputs (f_{0}, U = 4992.0, p < 0.001), which endows them with the ability to respond to an orientation over a broader range of input variances^{26,27}. No significant differences in the variance-HWHH and variance–firing rate functions were observed, except for the nonlinearity of the latter metric (Supplementary Fig. 5). This is coherent with the clustering on the statistical measurement of orientation tuning, which showed that resilient neurons remained significantly tuned at higher values of B_{θ} (\({B}_{\theta \max }\), Fig. 4c, U = 9155.0, p < 0.001). However, both groups of neurons had a similar circular variance for B_{θ} = 35° (Fig. 4d). This suggests that both types of neurons were similarly poorly tuned for inputs of the highest variance, but underwent different tuning changes between B_{θ} = 0° and B_{θ} = 35°. In terms of dynamics, the two groups exhibited the same differences that characterized neurons A and B. Resilient neurons discharged significantly later than vulnerable neurons for B_{θ} = 0° (Fig. 4e, U = 8455.5, p = 0.002), but both groups were on par for inputs of B_{θ} = 35° (U = 7794.5, p = 0.063). Interestingly, resilient neurons had a significantly shorter time to the maximum amplitude of the tuning curve for B_{θ} = 0° (Fig. 4f, U = 5542.5, p = 0.014), which opposes the early/late ratio of spikes. Neither group showed variance-dependent modulation of the delay to maximum spike count (U = 3058.0, p = 0.084 and U = 11545.5, p = 0.090 for resilient and vulnerable neurons, respectively), and both groups showed a similar delay for B_{θ} = 35° (U = 6094.5, p = 0.158).
The existence of these two groups of neurons could not be attributed to the integration of the drifting motion of the stimuli (direction selectivity index, unused in the clustering process, Fig. 4g, U = 7031.5, p = 0.910). Instead, the location of the recorded units (unused in the clustering process) predominantly positioned the resilient neurons in supragranular layers, offering a mechanistic basis for their existence (Fig. 4h). Moreover, resilient neurons have sharper orientation tuning and slower dynamics, which are distinctive features of supragranular neurons^{28,29}. This, however, does not establish a functional role for these two types of responses in V1.
Population-level modulations of the orientation code
As the neuronal population has been separated into well-characterized groups, we wished to understand the functional role played by resilient and vulnerable neurons. To that end, we used a neuronal decoder that probes for population codes in V1, enabling us to ask which parameters of the stimuli each neuron group was encoding. We trained a multinomial logistic regression classifier^{30}, a probabilistic model that classifies data belonging to multiple classes (see “Methods”). This classifier received the firing rate of neurons in a sliding time window (100 ms) and learned, for each neuron, a coefficient that best predicts the class (i.e., the generative parameter θ, B_{θ} or θ × B_{θ}) of the stimulus.
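A minimal sketch of one such decoder at a single time window, using scikit-learn and toy spike counts (all sizes and the tuning structure below are hypothetical; the actual analysis repeats this fit independently for each sliding window):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons, n_classes = 180, 50, 12      # hypothetical sizes
y = rng.integers(0, n_classes, n_trials)          # stimulus orientation class

# Spike counts in one 100-ms window; each toy neuron fires extra spikes
# for its "preferred" class so the decoder has structure to learn.
pref = rng.integers(0, n_classes, n_neurons)
X = rng.poisson(2.0, size=(n_trials, n_neurons)).astype(float)
X += 3.0 * (pref[None, :] == y[:, None])

# The lbfgs solver fits a multinomial (softmax) model over the 12 classes,
# yielding one coefficient per neuron and per class.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()     # well above 1/12 chance
```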
This decoder was first used to probe for the representation of the stimuli’s orientations θ in the population activity. For this purpose, the dataset of trials was separated for each variance, such that eight independent, B_{θ}-specific, orientation decoders were learned, with optimal parametrization (Supplementary Fig. 6). These orientation decoders were able to retrieve the correct stimulus’ θ well above the chance level (1 out of 12 orientations, max. accuracy = 10.56 and 4.68 times chance level for B_{θ} = 0° and B_{θ} = 35°, respectively) from the entire population recordings. The temporal evolution of these decoders’ accuracy (Fig. 5a) showed that the maximally accurate orientation encoding correlates almost linearly with the stimuli’s variance, as does the time to reach this accuracy (Fig. 5e, black). These dynamics depend on the input’s variance, exhibiting a rapid initial rise followed by a plateau for low-variance inputs, while increasing steadily over time for high-variance inputs. Interestingly, the decoding accuracy remained stable for approximately 100 ms even after a stimulus was no longer displayed. Since the decoders are trained independently in each time window, this accumulative process occurs in the recordings themselves, and not in the decoder.
The full output of these decoders (see “Methods”) is a population tuning curve, which displays the likelihood of decoding all possible input classes (here, all θ, Fig. 5b), rather than the proportion of correct decoding reported by the accuracy metric. The clear correlation between the sharpness of these population tuning curves (Fig. 5f left) and the accuracy of the decoder shows that improvements in decoding accuracy rely directly on a population-level separation of features within orientation space^{30}, particularly at higher B_{θ} (Fig. 5b, third panel). Overall, B_{θ} influences the temporality of the orientation code in V1, which echoes its influence on single-neuron dynamics (Fig. 3). The short delay required to process precise inputs is congruent with the feedforward processing latency of V1^{31}, while the increased time required to reach maximum accuracy for low-precision oriented inputs suggests the involvement of a slower, recurrent mechanism.
We then sought to assess the role of the vulnerable and the resilient neural populations by decoding θ from either group. The number of neurons in each group was imbalanced (79 more vulnerable neurons), which influences the accuracy of the decoder (Supplementary Fig. 6). Consequently, we randomly selected (with replacement) groups of 100 neurons from either population, repeating the selection 5 times. Using the same approach as with the global population decoding, we then trained B_{θ}-specific orientation decoders on the activity of either group of neurons. Resilient neurons outperformed vulnerable ones in decoding accuracy for 56% of the time steps, mainly in the 160–330 ms period (Fig. 5c). However, both groups exhibited similar population tuning curves (Fig. 5d) and time courses (Fig. 5e). Despite the better tuning of resilient neurons to inputs with higher variance (Fig. 4), both groups have overall similar orientation encoding performances for B_{θ} = 35°. Therefore, orientation can be decoded somewhat more effectively from the resilient neurons at the population level, but neither group appears to have a clear or stable advantage over the other in this regard, especially at higher B_{θ}.
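The group-balancing step amounts to a simple bootstrap; the index ranges below are hypothetical placeholders for the recorded neuron identities.

```python
import numpy as np

rng = np.random.default_rng(0)
resilient_ids = np.arange(85)         # hypothetical indices of the 85 resilient neurons
vulnerable_ids = np.arange(85, 249)   # and of the 164 vulnerable neurons

# Five repeats of 100 neurons drawn with replacement from each group,
# so that both decoders see populations of identical size
draws = {
    "resilient": [rng.choice(resilient_ids, size=100, replace=True) for _ in range(5)],
    "vulnerable": [rng.choice(vulnerable_ids, size=100, replace=True) for _ in range(5)],
}
```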
A subset of V1 neurons co-encodes orientation and its variance
Given that orientation encoding did not reveal a fundamental difference in the respective contributions of resilient and vulnerable neurons, we then investigated the encoding of the stimulus’ variance B_{θ}. The same type of decoder previously used failed to infer the variance B_{θ} (chance level = 1 out of 8 values of B_{θ}, max. accuracy = 1.91 times chance level) from the population activity (Supplementary Fig. 8a, b). This variance decoding also failed to reach more than twice the chance level (max. accuracy = 1.72 and 1.71 times chance level for resilient and vulnerable neurons, respectively) in both resilient and vulnerable neurons (Supplementary Fig. 8c, d). At the single neuron level, tuning curves flatten with increments of variance (Supplementary Fig. 2a), which makes it difficult to distinguish the activity generated by stimuli with B_{θ} = 0.0° at an orthogonal orientation from the activity generated by stimuli with B_{θ} = 35.0° at the preferred orientation. This limitation could potentially stem from the recording scale (249 neurons), which is more than an order of magnitude smaller than the number of neurons a single V1 biological decoder can access^{32}. Thus, neither the decoding of variance B_{θ} nor the decoding of orientation θ accounts for a different role between resilient and vulnerable neurons.
The decoding methods used so far have assumed that V1 independently encodes single input parameters. However, a more realistic assumption is to consider the visual system’s natural inputs as distributions of information (Fig. 1) that cortical neurons must process from thalamic inputs^{33} based on a probabilistic computational principle^{34}. Here, this implies that the naturalistic form of processing for a V1 neuron would be to co-encode both the mean feature (θ) and its associated variance (B_{θ}), thereby accessing the entire probability distribution.
We thus proceeded to train a decoder that retrieves both the orientation and the variance of the stimulus simultaneously, referred to as a θ × B_{θ} decoder. This decoder correctly predicted orientation and variance with a maximum accuracy reaching 16.36 times the chance level (1/96, Fig. 6a, gray). The likelihood structure (Fig. 6b, upper row) showed that the correct θ was decoded alongside multiple concurrent hypotheses over B_{θ}. The progressive increase of accuracy stems from the emergence of a dominant encoding of θ at the correct B_{θ}, consequently diminishing the relative magnitude of representations over other B_{θ} values over time. Interestingly, resilient neurons showed here a different functional role from vulnerable neurons, with markedly better co-encoding of B_{θ} and θ (max. accuracy = 11.0 and 9.0 times chance level for resilient and vulnerable neurons, respectively, Fig. 6a, blue, red). Both groups displayed ambiguity regarding B_{θ} (Fig. 6b, lower row), and correlated sharpening/accuracy ratios on the correct B_{θ} population curve (Fig. 6c, left) or on the off-median population curves (Fig. 6c, right).
To understand the utility of this co-encoding, we marginalized the decoder over B_{θ}, creating an orientation-only decoder that had nonetheless learned both orientation and variance. Data from resilient neurons then provided significantly better encoding of orientation than vulnerable neurons (max. accuracy = 6.0 and 5.4 times the 1/12 chance level for resilient and vulnerable neurons, respectively, Fig. 6d, gray regions), demonstrating that the overall V1 orientation code improves with a co-encoding of its variance. The distinction between resilient and vulnerable neurons is further emphasized by the decoder coefficients, which represent the contributions of each type of neuron toward the overall θ × B_{θ} code (Fig. 6e; for single neuron examples see Supplementary Fig. 9). Here, these coefficients are depicted as a polar plot, where the orientation θ (centered around the preferred orientation) is shown as the angle of each bin from the upper vertical and the variance B_{θ} is represented as the eccentricity of each bin from the center. Visualizing the coefficients of the whole population decoder (i.e., trained on the 249 neurons, Fig. 6a, gray) shows that the output learned from resilient neurons concurrently informs about both a wide range of orientations and variances, as observed by the extent of the bins along the eccentricity (B_{θ}) axis (Fig. 6e, bottom row). On the other hand, the decoding process extracted orientation information on a very small range of B_{θ} from the activity of vulnerable neurons (Fig. 6e, top row). Even though the coefficients are learned independently at each time step, the difference in information between the two groups of neurons remains extremely stable through time.
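The marginalization step reduces, in practice, to summing the θ × B_{θ} class probabilities over the B_{θ} axis. A sketch, assuming the 96 classes are ordered θ-major (an assumed convention, not stated in the text):

```python
import numpy as np

def marginalize_orientation(proba_96, n_theta=12, n_btheta=8):
    """Collapse a theta x B_theta decoder's class probabilities onto orientation.

    `proba_96` has shape (n_trials, 96), with class index assumed to be
    theta_index * n_btheta + btheta_index.
    """
    p = proba_96.reshape(-1, n_theta, n_btheta)
    return p.sum(axis=2)        # sum over B_theta: P(theta) per trial

# Toy check: a decoder certain of (theta=3, B_theta=5) yields P(theta=3) = 1
proba = np.zeros((1, 96))
proba[0, 3 * 8 + 5] = 1.0
p_theta = marginalize_orientation(proba)
```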
Overall, orientation and its variance can be decoded jointly from resilient neurons, while only orientation can be decoded from vulnerable neurons. This is confirmed by a continuous score-based decoding metric based on the K-means parameters (Fig. 6f) that correlates, for the entire population (i.e., without splitting into two groups), the maximum decoding accuracy with a degree of vulnerability/resilience. After providing this functional rationale for resilient and vulnerable neurons, we finally address the question of how both types of neurons can exist in V1.
Recurrent activity can explain the existence of neurons co-encoding orientation and variance
A notable difference between vulnerable and resilient neurons is their different location within the cortical layers (Fig. 4h). This typically implies differences in local circuitry, particularly in the intra-V1 recurrent interactions between cortical columns, which are mostly confined to supragranular layers^{35}. Given that resilient neurons are predominantly found in these supragranular layers, we aimed to find a mechanistic rationale for the existence of the two groups of neurons based on local interactions in V1. We developed a neural network from a well-established computational model of recurrent connectivity in V1, originally used to account for the intracortical activity in cat V1^{36} and later simplified as a center-surround filter in the orientation domain^{29}. This model has already accounted for an extensive range of emerging properties in cortical circuits^{37,38}. Briefly, it is built of orientation-selective neurons tiling the orientation space and connected among themselves via recurrent synapses which follow an excitatory/inhibitory difference of von Mises distributions (Fig. 7a). Here, we model inputs with higher variance as more spread in orientation space (Fig. 1) and thus in model space, which hence drives the recurrent dynamics of the model based on B_{θ} (for a full description, see “Methods”).
Considering that feedforward connectivity with heterogeneous tuning can encode mixtures of orientations and natural images^{9}, we first ran our model without recurrent synapses. We reproduced the heterogeneous selectivity by convolving the input with tuning curves of varying bandwidths (Fig. 7b, inset). This feedforward mode of the network was only able to produce a limited number of responses (Fig. 7b), in which increasing the bandwidth of the tuning curves increased the parameter f_{0} of the VTF, but kept n and B_{θ50} constant.
Barring that explanation, we focused on the role of recurrent synapses and disabled the convolution of inputs. We varied the concentration parameters of the synaptic distributions κ_{inh} and κ_{exc} (Fig. 7c, e) in 200 even steps ranging from 0.35 to 7, yielding 40,000 possible configurations of the model. This allowed us to manipulate the VTF and to accurately reproduce those of single neurons recorded in V1 (neurons A, B in Fig. 2b and C in Supplementary Fig. 1, modeled in Fig. 7c). Altering the type of recurrence between neurons with different orientation preferences allowed us to reproduce all VTFs found in V1. The parameter spaces (Fig. 7e) showed a trend for resilient VTFs (low n, high B_{θ50}, low f_{0}) to be found mostly around the κ_{exc} = κ_{inh} identity line, thus produced by balanced recurrent connectivity. Vulnerable VTFs (high n, low B_{θ50}, high f_{0}) were, on the contrary, mostly found above the identity line, where the configuration of the network is dominated by excitation over inhibition. This is consistent with the range of parameters that yielded higher response latency (Fig. 7d), which also occupied more parameter space when input variance increased. In summary, recurrence between V1 neurons seems to be sufficient to explain the existence of vulnerable and resilient neurons and, consequently, to account for the co-encoding of orientation and variance.
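The model's core can be sketched as a rate-based ring of orientation-selective units with difference-of-von-Mises recurrent weights. Gains, time constants and the threshold-linear transfer function below are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def von_mises(delta, kappa):
    # Orientation-domain von Mises (pi-periodic), normalized to unit peak
    return np.exp(kappa * (np.cos(2 * delta) - 1.0))

def ring_response(b_theta, k_exc=4.0, k_inh=1.5, g_exc=1.1, g_inh=1.0,
                  n=128, tau=10.0, dt=1.0, t_max=300.0):
    """Rate dynamics of a recurrent ring of orientation-selective units.

    Simplified sketch: Euler-integrated threshold-linear dynamics driven
    by an input whose spread in orientation space grows with b_theta (rad).
    """
    theta = np.linspace(0.0, np.pi, n, endpoint=False)
    delta = theta[:, None] - theta[None, :]
    # Difference-of-von-Mises recurrent connectivity (excitation - inhibition)
    w = (g_exc * von_mises(delta, k_exc) - g_inh * von_mises(delta, k_inh)) / n
    # Higher-variance input = broader feedforward drive in orientation space
    drive = von_mises(theta - np.pi / 2, 1.0 / max(b_theta, 1e-3) ** 2)
    r = np.zeros(n)
    for _ in range(int(t_max / dt)):
        r += dt / tau * (-r + np.maximum(w @ r + drive, 0.0))
    return theta, r

theta, r = ring_response(b_theta=np.deg2rad(10.0))
```

Sweeping `k_exc` and `k_inh`, as in Fig. 7e, changes how the population response (and hence a model VTF) degrades as `b_theta` grows; balanced settings near `k_exc = k_inh` degrade more gracefully in this sketch.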
Discussion
The variance of oriented inputs to V1 impacts orientation selectivity^{9} and we have sought to understand how V1 could process this input parameter. We found that variance causes modulations in the tuning (Fig. 2) and dynamics (Fig. 3) of single V1 neurons, which we have classified as either vulnerable or resilient (Fig. 4). Decoding analysis revealed variance-dependent accumulative dynamics in the two groups of neurons (Fig. 5) that are directly tied to a population-level separation of features within orientation space^{30}. Both groups can encode orientation but not variance (Supplementary Fig. 8), and only resilient neurons are able to accurately co-encode the orientation and variance of the input to V1 (Fig. 6). Based on cortical layer position (Fig. 4h) and on a computational approach (Fig. 7), we propose that the processing of input variance in V1 is supported by recurrent connectivity between local cortical populations (Fig. 8). This not only improves the encoding of orientation in V1 but also links directly to canonical Bayesian frameworks, suggesting uncertainty computation as a new mechanism supported by local recurrent cortical connectivity.
Here, we restricted our approach to orientation space, rather than investigating the full extent of spatial relationships which are present in natural images. Thus, full-field stimuli without second-order correlations were used, which, compared to a purely ecological environment, have likely excluded end-stopped cells^{39}. While this approach limited the responses to V1 and excluded higher-order cortical areas, there exists both neurobiological and computational evidence that V1 does not need to recruit other cortical areas to process orientation variance. For instance, the heterogeneous recurrent excitatory and inhibitory synaptic connectivity in V1^{40,41,42,43} sustains resilient orientation tuning^{44} that can account for the diversity of single neurons’ resilience under different connectivity profiles, as explored in our computational model (Fig. 7). This is supported by the temporal scale of local recurrent connectivity, namely the slowly conducted horizontal waves in an orientation map^{45}, which fits the view of variance processing as an iterative and accumulative computation implemented by local recurrent interactions between supragranular resilient neurons, themselves heavily connected to neighboring cortical columns^{28,29,35,45}. In this regard, our reported time scales may have been slightly affected by the use of anesthesia (halothane), which has a limited visible effect on V1^{46,47} and is less likely to cause modulations in this area compared to higher-order areas^{48,49,50,51}.
Computationally, most existing models support the idea that processing orientation variance can be achieved solely with local V1 computations^{10}. For instance, Goris et al.^{9} reported that heterogeneously tuned V1 populations help encode the orientation distributions found in natural images and that this functional diversity could be accounted for by a linear-nonlinear (LNL) model. While this could explain the diversity of tuning in our data (Fig. 2), we found that such a model failed to account for some types of modulations of the VTFs (Fig. 7b). Therefore, we employed a model designed to replicate intracortical cat V1 data^{38} and demonstrated that it reproduces various VTFs and dynamics observed in our recordings. The model used here pools activity from multiple orientation-tuned units into a single neuron, which we interpreted as a local recurrent model. While our results do not require contributions from extrastriate regions to explain the observed results, the possibility of recurrence involving neurons outside V1 cannot be entirely ruled out at this time^{52}.
Our study confirms the findings in the anesthetized macaque literature^{9} by identifying single-neuron variance modulations that serve as the basis for decoding orientation variance at the population level in V1. This suggests that a common mechanism may underlie this neural computation in both felines and primates, which is a fundamental computational requirement for the proper encoding of natural images in V1^{53}. Although gain/variance V1 functions have been previously reported^{17}, we demonstrate a similar input-output relationship in the form of VTFs, which has the added benefit of characterizing and extrapolating variance modulations across the full dynamical range of V1 populations. Further, we finely analyzed the temporal component of the response, which is absent from the literature. We propose that all these response properties can be linked to cortical layers, supporting the idea that supragranular neurons with sharp tuning and slow dynamics^{28,29} underpin the co-encoding of orientation and its variance.
This leads to an interesting tie to Bayesian inference, namely under the specific case of predictive coding^{34}, that canonically assigns (inverse) variance weighting of cortical activity to supragranular recurrent connectivity^{6,8}, without the need for extrastriate computations. This is an interesting perspective that opens up a general interpretation of our results into the broader context of processing variance/precision/uncertainty at different scales of investigations. Extending the present results to other cortical areas or other sensory modalities would be a simple process, given the generative stimulus framework used here^{18}, which could yield pivotal new insights into our understanding of predictive processes in the brain.
Methods
Visual stimulation
Motion Clouds are generative model-based stimuli^{18} that allow for fine parameterized control over naturalistic stimuli^{54}, which is a desirable trait when probing sensory systems under realistic conditions^{21}. They are mathematically defined as band-pass filtered white noise stimuli, whose filters in Fourier space are defined as a parameterized distribution in a given perceptual axis (here, only orientation, but this can be extended to speed^{55} and scale^{56}). Thus, the Motion Clouds presently used are fully characterized by their mean orientation and their orientation variance, such that a given stimulus S can be defined as:
where \({{{{{{{\mathcal{F}}}}}}}}\) is the Fourier transform and O the orientation envelope, characterized by its mean orientation θ and its orientation bandwidth B_{θ}. For \({B}_{\theta } < 45.{0}^{\circ },{B}_{\theta }=1/\sqrt{\kappa }\), where κ is the concentration parameter of a von Mises distribution, and hence approximates the standard deviation^{57}. It thus serves as a measure of the orientation variability in the pattern, and as such, we used the term variance to describe it throughout the text. A total of 96 different stimuli were generated, with 12 mean orientations θ ranging from 0 to π in even steps, and eight orientation variance B_{θ} ranging from ≈0 to π/5 in even steps. The orientation envelope is a von Mises distribution:
where θ_{f} is the angle of the frequency components of the envelope in the Fourier plane, which controls the spatial frequency parameters of the stimuli, set here at 0.9 cycle per degree. The stimuli were drifting orthogonally in either direction with respect to the mean orientation θ at a speed of 10°/s, which is optimal to drive V1 neurons^{58}. For the range of values of B_{θ} considered here, the orientation envelope approximates a Gaussian distribution and B_{θ} is thus a measure of the variance of the orientation content of the stimuli.
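For illustration, the B_{θ} = 1/√κ relation and the 12 × 8 stimulus grid described above can be sketched in Python. This is a minimal sketch: the spacing chosen for the near-zero B_{θ} value is an assumption, not the exact value used experimentally.

```python
import numpy as np

def kappa_from_btheta(b_theta):
    # For B_theta < 45 deg (expressed in radians here), B_theta = 1/sqrt(kappa),
    # where kappa is the von Mises concentration parameter.
    return 1.0 / b_theta ** 2

# 12 mean orientations evenly tiling [0, pi) and 8 orientation variances
# from ~0 to pi/5 in even steps (the lowest value is kept > 0 so that
# kappa stays finite).
thetas = np.linspace(0.0, np.pi, 12, endpoint=False)
b_thetas = np.linspace(np.pi / 40, np.pi / 5, 8)

grid = [(t, b) for t in thetas for b in b_thetas]
print(len(grid))  # 96 stimulus combinations, as in the experiments
```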
All stimuli were generated using open-source Python code (see Additional information) and displayed using Psychopy^{59}. Monocular stimuli were projected with a ProPixx projector (VPixx Technologies Inc.) onto an isoluminant screen (Da-Lite©) covering 104° × 79° of visual angle. All stimuli were displayed for 300 ms, interleaved with a mean luminance screen (25 cd/m^{2}) shown for 150 ms between each trial. Trials were fully randomized, and each stimulus (a unique combination of θ × B_{θ} × drift direction) was presented 15 times. Stimuli were shown at 100% contrast, meaning that as B_{θ} increased, the amount of orientation energy at the median orientation θ decreased, and conversely for off-median orientations (as illustrated in Fig. 1b). This differs from manipulating the contrast, which would reduce the orientation energy at all orientations.
Surgery
Experiments were conducted on three adult cats (3.6–6.0 kg, 2 males). All surgical and experimental procedures were carried out in compliance with the guidelines of the Canadian Council on Animal Care and were approved by the Ethics Committee of the University of Montreal (CDEA #20006). Animals were initially sedated using acepromazine (Atravet®, 1 mg/kg) supplemented by atropine (0.1 mg/kg). Anesthesia was induced with 3.5% isoflurane in a 50:50 mixture of O_{2}:N_{2}O (v/v). Following tracheotomy, animals underwent artificial ventilation as muscle relaxation was achieved and maintained with an intravenous injection of 2% gallamine triethiodide (10 mg/kg/h) diluted in a 1:1 (v/v) solution of 5% dextrose and lactated Ringer solution. Throughout the experiment, the expired level of CO_{2} was maintained between 35 and 40 mmHg by adjusting the tidal volume and respiratory rate. Heart rate was monitored and body temperature was maintained at 37 °C by means of a feedback-controlled heated blanket. Lidocaine hydrochloride (2%) was applied locally at all incisions and pressure points, and a craniotomy was performed over area 17 (V1, Horsley-Clarke coordinates 4–8P; 0.5–2 L). Dexamethasone (4 mg) was administered intramuscularly every 12 h to reduce cortical swelling. Eye lubricant was regularly applied to avoid corneal dehydration.
Electrophysiological recordings
During each recording session, pupils were dilated using atropine (Mydriacyl) while nictitating membranes were retracted using phenylephrine (Mydfrin). Rigid contact lenses of appropriate power were used to correct the eyes’ refraction. Anesthesia was switched to 0.5–1% halothane to avoid anesthesia-induced modulation of visual responses^{47}. Finally, small durectomies were performed before each electrode insertion and a 2% agar solution in saline was applied over the exposed cortical surface to stabilize recordings. Linear probes (≈1 MΩ, 1x32-6mm-100-177, Neuronexus) were lowered into the cortical tissue perpendicularly to the pia, and extracellular activity was acquired at 30 kHz using an Open Ephys acquisition board^{60}. Single units were isolated using Kilosort 2^{61} and manually curated using Phy^{62}. Clusters with low-amplitude templates or ill-defined margins were excluded from further analysis. Additional exclusion was performed if a cluster was unstable (firing rate below 5 spikes s^{−1} for more than 30 s), or if the neuron was not deemed sufficiently orientation selective (R^{2} < 0.75 when fitted with a von Mises distribution). After this exclusion step, all remaining neurons responded to Motion Clouds. Laminar positions were determined by the depth of the recording site with respect to the pia, which was then cross-validated by the evoked Local Field Potential (LFP) using sink/source analysis^{63,64}.
Single neuron analysis
Orientation tuning curves were computed by selecting a 300 ms window maximizing spike-count variance^{65}. The firing rate was averaged across drift directions and a von Mises distribution^{57} was fitted to the data:
where θ_{k} is the orientation of the stimuli, \({R}_{\max }\) is the response (baseline subtracted) at the preferred orientation θ_{pref}, R_{0} the response at the orientation orthogonal to θ_{pref} and κ a measure of concentration. To control for direction selectivity when averaging tuning curves across drift direction, we computed a direction selectivity index:
where R_{pref} is the firing rate at the preferred direction (baseline subtracted) and R_{null} is the firing rate at the preferred direction plus π. The quality of each tuning curve was assessed by computing a global metric, the circular variance (CV) of the unfitted data, which varies from 0 for perfectly orientation-selective neurons to 1 for orientation-untuned neurons^{29}. It is defined as:
where R(θ_{k}) is the response of a neuron (baseline subtracted) to a stimulus of angle θ_{k}. The changes of CV as a function of B_{θ} were fitted with a Naka-Rushton function^{22}:
where f_{0} is the base value of the function, \({f}_{0}+{f}_{\max }\) its maximal value, B_{θ50} the stimulus’ variance at half \({f}_{\max }\) and n a strictly positive exponent of the function.
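The single-neuron metrics above (the von Mises tuning fit, the circular variance, and the Naka-Rushton dependence of CV on B_{θ}) can be sketched as follows. The exact von Mises parameterization is assumed from the standard form in ref. 57 and may differ in detail from the fitted equation:

```python
import numpy as np

def von_mises_tuning(theta, r_max, theta_pref, kappa, r0):
    # Orientation (pi-periodic) von Mises tuning curve, baseline-subtracted.
    return r0 + r_max * np.exp(kappa * (np.cos(2.0 * (theta - theta_pref)) - 1.0))

def circular_variance(rates, thetas):
    # CV = 1 - |sum_k R(theta_k) exp(2i theta_k)| / sum_k R(theta_k):
    # 0 for a perfectly orientation-selective neuron, 1 for an untuned one.
    return 1.0 - np.abs(np.sum(rates * np.exp(2j * thetas))) / np.sum(rates)

def naka_rushton(b_theta, f0, f_max, b50, n):
    # CV as a function of stimulus variance: base value f0, maximum f0 + f_max,
    # half-saturation at b50, strictly positive exponent n.
    return f0 + f_max * b_theta ** n / (b_theta ** n + b50 ** n)

thetas = np.linspace(0.0, np.pi, 12, endpoint=False)
tuned = von_mises_tuning(thetas, r_max=30.0, theta_pref=np.pi / 3, kappa=4.0, r0=2.0)
print(circular_variance(tuned, thetas))                 # well below 1: tuned
print(circular_variance(np.ones_like(thetas), thetas))  # ~1: untuned
```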
The significance of the tuning to orientation was measured by comparing the unfitted firing rate at the preferred and orthogonal orientations across trials, using a Wilcoxon signed-rank test corrected for continuity, and the maximum value of B_{θ} which yielded a significant result was designated as \({B}_{\theta \max }\) (i.e., the maximum variance at which a neuron is still tuned). Shifts of the preferred orientation were evaluated as the difference in θ_{pref} between trials where B_{θ} = 0° and \({B}_{\theta }={B}_{\theta \max }\). The significance of the variation of the peak amplitude of the tuning curve was measured by comparing the unfitted firing rate at the preferred orientation between trials where B_{θ} = 0° and \({B}_{\theta }={B}_{\theta \max }\).
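A sketch of this significance test, using trial-wise firing rates at the preferred and orthogonal orientations (the array names are hypothetical; scipy's default zero_method discards zero differences, as in the paired tests described below):

```python
import numpy as np
from scipy.stats import wilcoxon

def b_theta_max(rates_pref, rates_orth, b_thetas, alpha=0.05):
    """Largest B_theta still yielding significant orientation tuning.

    rates_pref, rates_orth: arrays of shape (n_bthetas, n_trials) holding
    trial-wise firing rates at the preferred and orthogonal orientations.
    """
    significant = None
    for i, b in enumerate(b_thetas):
        # Wilcoxon signed-rank test with continuity correction, as in the text.
        _, p = wilcoxon(rates_pref[i], rates_orth[i], correction=True)
        if p < alpha:
            significant = b
    return significant
```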
Population decoding
The parameters used to generate Motion Clouds were decoded from the neural recordings using a multinomial logistic regression classifier^{30}. For a given stimulus, the activity of all the recorded neurons was a vector \(X(t)=\left[\begin{array}{cccc}{X}_{1}(t)&{X}_{2}(t)&\cdots \,&{X}_{249}(t)\end{array}\right]\), where X_{i}(t) is the spike count of neuron i in a time window [t; t + ΔT]. The onset of this window t was slid from −200 to 400 ms (relative to the stimulation time) in steps of 10 ms while ΔT was kept constant at 100 ms. It should be noted that merging neural activity across electrodes or experiments is a common procedure^{66,67}, which we validated in our data by verifying that the electrode or experiment which yielded the data could not be decoded from the neural activity (Supplementary Fig. 7). Mathematically, the multinomial logistic regression is an extension of the binary logistic regression^{30} trained here to classify the spike vector X(t) between K classes. The probability of any such vector belonging to a given class is:
where 〈⋅,⋅〉 is the scalar product over the different neurons, k = 1, …, K is the class out of K possible values and β_{k} are the coefficients learned during the training procedure of the classifier. Several decoders were trained on classification tasks: decoding orientation θ (K = 12, Fig. 5), decoding orientation variance B_{θ} (K = 8, Supplementary Fig. 8) or both (K = 12 × 8 = 96, Fig. 6). All metaparameters were controlled, showing that the decoding performances stem mainly from the experimental data rather than from fine-tuning of the decoder parameterization (Supplementary Fig. 6). For all decoding experiments reported, we used an integration window size ΔT = 100 ms, an ℓ_{2} penalty, regularization strength C = 1.0, and a train/test split size of 0.15.
The performance of all decoders was reported as the average accuracy across all K classes, known as the balanced accuracy score^{68}. The accuracy for each specific class k can also be reported in the form of a population tuning curve, in which the likelihood of decoding each possible class is given by equation (7). The significance of differences between two neuron groups was reported only when two or more consecutive time steps, i.e., 20 ms or more, exhibited significant differences. To estimate the time course of the decoders, they were fitted in the [0; 300] ms range with a sigmoid function:
where \({\max}_{\rm{acc}}\) and \({\min}_{\rm{acc}}\) are respectively the maximum and minimum accuracies of the decoder, k the steepness and τ the time constant of the function. To perform decoding on the same number of vulnerable and resilient neurons, we randomly picked (with replacement) groups of 100 neurons and bootstrapped this process five times.
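The sigmoid fit of a decoder's time course can be sketched with scipy; the synthetic, noiseless accuracy values below are for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, acc_min, acc_max, k, tau):
    # Accuracy rising from acc_min to acc_max with steepness k and time constant tau.
    return acc_min + (acc_max - acc_min) / (1.0 + np.exp(-k * (t - tau)))

t = np.linspace(0.0, 300.0, 31)                     # the [0; 300] ms fitting range
accuracy = sigmoid(t, 0.10, 0.90, 0.05, 120.0)      # synthetic time course
popt, _ = curve_fit(sigmoid, t, accuracy, p0=[0.1, 0.9, 0.05, 100.0])
print(popt[3])  # recovered time constant tau, ~120 ms
```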
As the neurons were clustered into two populations for comparison purposes (Fig. 4), we also report the decoding accuracy based on a continuous vulnerability score (Fig. 6f). This score was computed as a sum of the neuronal response variables that differed significantly after clustering, weighted by their mean Principal Component (PC1 and PC2) parameters:
where W_{i} is a parameter yielded by the Principal Component Analysis, corresponding to its associated neuronal response variable. Each variable is normalized, yielding a scalar score that varies from 0 (most resilient) to 1 (most vulnerable neuron). This score-based decoding was performed on groups of 100 neurons sorted by descending score and repeated a total of seven times on increasingly vulnerable neurons (thus with an overlap of 20 neurons).
Computational model
We used a recurrent network of orientation-tuned neurons to model responses to increasing orientation variance B_{θ}. The model was first introduced to account for intracortical activity in the cat primary visual cortex^{36}, although it is simplified here as a center-surround filter in the orientation domain^{29}. Notably, this network has accounted for numerous experimental findings, including learning and adaptation of cortical neurons^{37,38}, whose implementations are similar to ours.
The model consisted of N orientationtuned neurons, evenly tiling the orientation space between −π and π. Each neuron is modeled as a single passive unit whose membrane potential obeys the equation:
where τ is the membrane time constant and V_{ff}, V_{exc}, V_{inh} are the synaptic potentials coming from the feedforward input, recurrent excitatory and recurrent inhibitory connectivity, respectively. The firing rate R at time t of each neuron is computed as an instantaneous quantity modulated by a gain α:
For computational simplicity, the neurons had no spontaneous firing rate and V was measured relative to the firing threshold. Each neuron could send mixed excitatory and inhibitory synaptic potentials to its neighbors, although this specific model has been reported to achieve similar behavior with separate units^{38}. For each stimulus of main orientation θ, the input to a cell with preferred orientation θ_{pref} is:
where J_{ff} is the strength of the input and I_{0} is the modified Bessel function of order 0. The right-hand side of the equation describes a von Mises distribution with mean θ_{pref} and concentration κ_{ff}. The latter parameter is related to the orientation variance B_{θ}, which was varied to yield the model's TVF (B_{θ}/CV curves):
a total of 20 B_{θ} values spanning the same range as in the experiments were used, each with 32 different θ tiling a [−75°; 75°] orientation space. The recurrent connectivity profiles for excitatory (C_{exc}) and inhibitory (C_{inh}) synapses were controlled by separate von Mises distributions over the orientation space Θ:
which are both used to describe an overall connectivity kernel:
which followed a typical Ricker wavelet (or Mexican hat) shape (Fig. 7d). The overall activity of the network is then a weighted sum of the firing rates of all the neurons:
Parameterization of the model was done to match single V1 neuron recordings from anesthetized cats, in an experimental setup similar to the one used here^{69}. The computational procedure for matching experimental data was fully described in a previous publication^{38}. Briefly, it consisted of scanning a range of possible values for each parameter, then finding all possible combinations using a metric of similarity to the single grating response, time-to-peak, peak response and tuning width. The parameters yielded by this procedure were τ = 10.8 ms; α = 10.6 Hz/mV; J_{ff} = 9.57 mV/Hz; J_{exc} = 1.71 Hz/mV; J_{inh} = 2.0178 Hz/mV. For the feedforward mode of the model (Fig. 7b), J_{exc} and J_{inh} were set to 0 Hz/mV and the input was convolved with a receptive field:
of which we reported the Half-Width at Half-Height, given by^{70}:
For the recurrent mode (Fig. 7c–e), the concentration measures of the recurrent connectivity profiles κ_{exc} and κ_{inh} were both varied from 0.35 to 7, in 200 even steps, and the input was not convolved with a receptive field.
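The recurrent mode of the model can be sketched as a ring network. The parameter values below follow the text, while the connectivity concentrations κ_exc and κ_inh, the input concentration κ_ff, and the forward Euler integration scheme are illustrative assumptions:

```python
import numpy as np
from scipy.special import i0

N = 64
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

tau, alpha = 10.8, 10.6                     # ms, Hz/mV (from the text)
J_ff, J_exc, J_inh = 9.57, 1.71, 2.0178     # coupling strengths (from the text)
kappa_ff, kappa_exc, kappa_inh = 2.0, 2.0, 1.0   # illustrative concentrations

def vm(x, kappa):
    # von Mises profile over the ring, normalized to unit integral
    return np.exp(kappa * np.cos(x)) / (2.0 * np.pi * i0(kappa))

V_ff = J_ff * vm(theta, kappa_ff)           # feedforward drive centered on theta = 0
# Mexican-hat kernel: narrow excitation minus broader inhibition, shifted so
# that index 0 corresponds to zero orientation offset for the convolution.
kernel = np.fft.ifftshift(J_exc * vm(theta, kappa_exc) - J_inh * vm(theta, kappa_inh))

V = np.zeros(N)
dt = 0.1                                    # ms, forward Euler step
for _ in range(3000):                       # 300 ms of simulated time
    R = alpha * np.clip(V, 0.0, None)       # rectified instantaneous firing rate
    # circular convolution of the kernel with the population rates (mean-normalized)
    rec = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(R))) / N
    V += dt / tau * (-V + V_ff + rec)

R = alpha * np.clip(V, 0.0, None)
print(theta[R.argmax()])  # peak response at the input orientation (0.0)
```

Sweeping κ_ff (i.e., the input variance B_θ) and the pair (κ_exc, κ_inh) in such a loop reproduces the kind of B_θ/CV curves described above.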
Statistics and reproducibility
All data were analyzed using custom Python code. Statistical analysis was performed using non-parametric tests: a Wilcoxon signed-rank test with discarding of zero-differences was used for paired samples, and a Mann–Whitney U-test with exact computation of the U distribution was used for independent samples. Due to the impracticality of using error bars when plotting time series, colored contours are used to represent standard deviation values (unless specified otherwise), with a solid line representing mean values. For boxplots, the box extends from the lower to upper quartile values, with a solid white line at the median value. The lower and upper whiskers extend to Q1 − 1.5 × IQR and Q3 + 1.5 × IQR, respectively, where Q1 and Q3 are the lower and upper quartiles and IQR is the interquartile range.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Data used in the present study are publicly available in a Figshare repository^{71}. Unprocessed electrophysiological recording files are available upon reasonable request to the corresponding author.
Code availability
Custom Python code written for the present study is publicly available in a GitHub repository^{72}.
References
Hubel, D. H. & Wiesel, T. N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 148, 574 (1959).
Priebe, N. J. Mechanisms of orientation selectivity in the primary visual cortex. Annu. Rev. Vis. Sci. 2, 85–107 (2016).
Fiser, J., Chiu, C. & Weliky, M. Small modulation of ongoing cortical dynamics by sensory input during natural vision. Nature 431, 573–578 (2004).
Simoncelli, E. P. & Olshausen, B. A. Natural image statistics and neural representation. Annu. Rev. Neurosci. 24, 1193–1216 (2001).
Helmholtz, H. v. Helmholtz’s Treatise on Physiological Optics, 3 Vols. (Optical Society of America, 1924).
Friston, K. A theory of cortical responses. Philos. Trans. R. Soc. B Biol. Sci. 360, 815–836 (2005).
Barthelmé, S. & Mamassian, P. Evaluation of objective uncertainty in the visual system. PLoS Comput. Biol. 5, e1000504 (2009).
Bastos, A. M. et al. Canonical microcircuits for predictive coding. Neuron 76, 695–711 (2012).
Goris, R. L., Simoncelli, E. P. & Movshon, J. A. Origin and function of tuning diversity in macaque visual cortex. Neuron 88, 819–831 (2015).
Orbán, G., Berkes, P., Fiser, J. & Lengyel, M. Neural variability and samplingbased probabilistic representations in the visual cortex. Neuron 92, 530–543 (2016).
Festa, D., Aschner, A., Davila, A., Kohn, A. & CoenCagli, R. Neuronal variability reflects probabilistic inference tuned to natural image statistics. Nat. Commun. 12, 3635 (2021).
Keeble, D., Kingdom, F., Moulden, B. & Morgan, M. Detection of orientationally multimodal textures. Vision Res. 35, 1991–2005 (1995).
Beaudot, W. H. & Mullen, K. T. Orientation discrimination in human vision: psychophysics and modeling. Vision Res. 46, 26–46 (2006).
Phillips, G. C. & Wilson, H. R. Orientation bandwidths of spatial mechanisms measured by masking. J. Opt. Soc. Am. A 1, 226–232 (1984).
Heeley, D., Timney, B., Paterson, I. & Thompson, R. Width discrimination for bandpass stimuli. Vision Res. 29, 901–905 (1989).
Heeley, D. W. & BuchananSmith, H. M. The influence of stimulus shape on orientation acuity. Exp. Brain Res. 120, 217–222 (1998).
Hénaff, O. J., BoundySinger, Z. M., Meding, K., Ziemba, C. M. & Goris, R. L. Representation of visual uncertainty through neural gain variability. Nat. Commun. 11, 2513 (2020).
Leon, P. S., Vanzetta, I., Masson, G. S. & Perrinet, L. U. Motion clouds: modelbased stimulus synthesis of naturallike random textures for the study of motion perception. J. Neurophysiol. 107, 3217–3226 (2012).
Johnson, A. P. & Baker, C. L. Firstand secondorder information in natural images: a filterbased approach to image statistics. J. Opt. Soc. Am. A 21, 913–925 (2004).
Field, D. J. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A 4, 2379–2394 (1987).
Rust, N. C. & Movshon, J. A. In praise of artifice. Nat. Neurosci. 8, 1647–1650 (2005).
Naka, K. & Rushton, W. A. Spotentials from colour units in the retina of fish (Cyprinidae). J. Physiol. 185, 536–555 (1966).
Thorndike, R. L. Who belongs in the family. Psychometrika 18, 267–276 (1953).
Hubel, D. H. & Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160, 106 (1962).
Movshon, J. A. The analysis of moving visual patterns. Exp. Brain Res. 54, 117–151 (1985).
Laughlin, S. A simple coding procedure enhances a neuron’s information capacity. Z. Naturforschung C Biosci. 36, 910–912 (1981).
Kinouchi, O. & Copelli, M. Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2, 348–351 (2006).
Ringach, D. L., Hawken, M. J. & Shapley, R. Dynamics of orientation tuning in macaque primary visual cortex. Nature 387, 281–284 (1997).
Ringach, D. L., Shapley, R. M. & Hawken, M. J. Orientation selectivity in macaque V1: diversity and laminar dependence. J. Neurosci. 22, 5639–5651 (2002).
Bishop, C. M. Pattern Recognition and Machine Learning (Springer, 2006).
Berens, P. et al. A fast and simple population code for orientation in primate V1. J. Neurosci. 32, 10618–10626 (2012).
Chavane, F., Perrinet, L. U. & Rankin, J. Revisiting horizontal connectivity rules in V1: from liketolike towards liketoall. Brain Struct. Funct. 227, 1279–1295 (2022).
Roelfsema, P. R., Engel, A. K., König, P. & Singer, W. Visuomotor integration is associated with zero timelag synchronization among cortical areas. Nature 385, 157–161 (1997).
Aitchison, L. & Lengyel, M. With or without you: predictive coding and Bayesian inference in the brain. Curr. Opin. Neurobiol. 46, 219–227 (2017).
Douglas, R. J., Martin, K. A. & Whitteridge, D. A canonical microcircuit for neocortex. Neural Comput. 1, 480–488 (1989).
Somers, D. C., Nelson, S. B. & Sur, M. An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci. 15, 5448–5465 (1995).
Teich, A. F. & Qian, N. Learning and adaptation in a recurrent model of V1 orientation selectivity. J. Neurophysiol. 89, 2086–2100 (2003).
del Mar Quiroga, M., Morris, A. P. & Krekelberg, B. Adaptation without plasticity. Cell Rep. 17, 58–68 (2016).
Hubel, D. H. & Wiesel, T. N. Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat. J. Neurophysiol. 28, 229–289 (1965).
Jia, H., Rochefort, N. L., Chen, X. & Konnerth, A. Dendritic organization of sensory input to cortical neurons in vivo. Nature 464, 1307–1312 (2010).
Chen, X., Leischner, U., Rochefort, N. L., Nelken, I. & Konnerth, A. Functional mapping of single spines in cortical neurons in vivo. Nature 475, 501–505 (2011).
Iacaruso, M. F., Gasler, I. T. & Hofer, S. B. Synaptic organization of visual space in primary visual cortex. Nature 547, 449–452 (2017).
Scholl, B., Wilson, D. E. & Fitzpatrick, D. Local order within global disorder: synaptic architecture of visual space. Neuron 96, 1127–1138 (2017).
Monier, C., Chavane, F., Baudot, P., Graham, L. J. & Frégnac, Y. Orientation and direction selectivity of synaptic inputs in visual cortical neurons: a diversity of combinations produces spike tuning. Neuron 37, 663–680 (2003).
Chavane, F. et al. Lateral spread of orientation selectivity in V1 is controlled by intracortical cooperativity. Front. Syst. Neurosci. 5, 4 (2011).
Uhl, R. R., Squires, K. C., Bruce, D. L. & Starr, A. Effect of halothane anesthesia on the human cortical visual evoked response. Anesthesiology 53, 273–276 (1980).
Villeneuve, M. Y. & Casanova, C. On the use of isoflurane versus halothane in the study of visual response properties of single cells in the primary visual cortex. J. Neurosci. Methods 129, 19–31 (2003).
MartinezConde, S. et al. Effects of feedback projections from area 18 layers 2/3 to area 17 layers 2/3 in the cat visual cortex. J. Neurophysiol. 82, 2667–2675 (1999).
Wang, C., Waleszczyk, W., Burke, W. & Dreher, B. Modulatory influence of feedback projections from area 21a on neuronal activities in striate cortex of the cat. Cereb. Cortex 10, 1217–1232 (2000).
Huang, L., Chen, X. & Shou, T. Spatial frequencydependent feedback of visual cortical area 21a modulating functional orientation column maps in areas 17 and 18 of the cat. Brain Res. 998, 194–201 (2004).
Hudetz, A. G., Vizuete, J. A., Pillay, S. & Mashour, G. A. Repertoire of mesoscopic cortical activity is not reduced during anesthesia. Neuroscience 339, 402–417 (2016).
Carandini, M. et al. Do we know what the early visual system does? J. Neurosci. 25, 10577–10597 (2005).
Olshausen, B. A. & Field, D. J. Emergence of simplecell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996).
Vacher, J., Meso, A. I., Perrinet, L. U. & Peyré, G. Biologically inspired dynamic textures for probing motion perception. In Proc. TwentyNinth Annual Conference on Neural Information Processing Systems (NIPS) (NIPS, 2015).
Simoncini, C., Perrinet, L. U., Montagnini, A., Mamassian, P. & Masson, G. S. More is not always better: adaptive gain control explains dissociation between perception and action. Nat. Neurosci. 15, 1596–1603 (2012).
Ravello, C. R., Perrinet, L. U., Escobar, M.J. & Palacios, A. G. Speedselectivity in retinal ganglion cells is sharpened by broad spatial frequency, naturalistic stimuli. Sci. Rep. 9, 1–16 (2019).
Swindale, N. V. Orientation tuning curves: empirical description and estimation of parameters. Biol. Cybern. 78, 45–56 (1998).
Movshon, J. A., Thompson, I. & Tolhurst, D. Spatial and temporal contrast sensitivity of neurones in areas 17 and 18 of the cat’s visual cortex. J. Physiol. 283, 101–120 (1978).
Peirce, J. et al. Psychopy2: experiments in behavior made easy. Behav. Res. Methods 51, 195–203 (2019).
Siegle, J. H. et al. Open Ephys: an opensource, pluginbased platform for multichannel electrophysiology. J. Neural Eng. 14, 045003 (2017).
Pachitariu, M., Steinmetz, N. A., Kadir, S. N., Carandini, M. & Harris, K. D. Fast and accurate spike sorting of highchannel count probes with kilosort. Adv. Neural Inf. Process. Syst. 29, 4448–4456 (2016).
Rossant, C. et al. Spike sorting for large, dense electrode arrays. Nat. Neurosci. 19, 634–641 (2016).
Katzner, S. et al. Local origin of field potentials in visual cortex. Neuron 61, 35–41 (2009).
Maier, A., Adams, G. K., Aura, C. & Leopold, D. A. Distinct superficial and deep laminar domains of activity in the visual cortex during rest and stimulation. Front. Syst. Neurosci. 4, 31 (2010).
Smith, M. A., Majaj, N. J. & Movshon, J. A. Dynamics of motion signaling by neurons in macaque area MT. Nat. Neurosci. 8, 220–228 (2005).
Quiroga, R. Q., Reddy, L., Koch, C. & Fried, I. Decoding visual inputs from multiple neurons in the human temporal lobe. J. Neurophysiol. 98, 1997–2007 (2007).
Guitchounts, G., Masis, J., Wolff, S. B. & Cox, D. Encoding of 3D head orienting movements in the primary visual cortex. Neuron 108, 512–525 (2020).
Brodersen, K. H., Ong, C. S., Stephan, K. E. & Buhmann, J. M. The balanced accuracy and its posterior distribution. In Proc. 2010 20th International Conference on Pattern Recognition, 3121–3124 (IEEE, 2010).
Felsen, G. et al. Dynamic modification of cortical orientation tuning mediated by recurrent connections. Neuron 36, 945–954 (2002).
Swindale, N. V., Grinvald, A. & Shmuel, A. The spatial pattern of response magnitude and selectivity for orientation and direction in cat visual cortex. Cereb. Cortex 13, 225–238 (2003).
Ladret, H. Data for Ladret et al. 2023 : Cortical recurrence supports resilience to sensory variance in the primary visual cortex. figshare https://figshare.com/articles/dataset/Data_for_Ladret_et_al_2023_Cortical_recurrence_supports_resilience_to_sensory_variance_in_the_primary_visual_cortex_/23366588 (2023).
Ladret, H. hugoladret/varianceprocessingV1: v1.0publication. Zenodo https://doi.org/10.5281/zenodo.8016705 (2023).
Acknowledgements
The authors would like to thank Genevieve Cyr for her technical assistance, Bruno Oliveira Ferreira de Souza and Visou Ady for experimental advice, Louis Eparvier, JeanNicolas Jérémie and Salvatore Giancani for their comments on the manuscript, and Jonathan Vacher for fruitful exchanges on the formalization of the generation of synthetic images and for his contributions to related analysis of other neurophysiological recordings. This work was supported by the French government under the France 2030 investment plan, as part of the Initiative d’Excellence d’AixMarseille Université  A*MIDEX (AMX21RID025), as well as by an ANR project “AgileNeuRobot” ANR20CE230021 to L.U.P, by a CIHR grant to C.C. (PJT148959) and an École Doctorale 62 PhD grant to H.J.L.
Author information
Authors and Affiliations
Contributions
L.U.P., C.C., F.C., N.C., and H.J.L. designed the study. H.J.L., N.C. and L.I. collected the data. H.J.L. and L.U.P. analyzed the data. H.J.L. and L.U.P. wrote the original draft of the manuscript. All authors reviewed and edited the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Communications Biology thanks Dario L. Ringach and the other anonymous reviewer(s) for their contribution to the peer review of this work. Primary handling editors: Enzo Tagliazucchi and Joao Valente. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ladret, H.J., Cortes, N., Ikan, L. et al. Cortical recurrence supports resilience to sensory variance in the primary visual cortex. Commun Biol 6, 667 (2023). https://doi.org/10.1038/s42003023050423