Endogenous activity modulates stimulus and circuit-specific neural tuning and predicts perceptual behavior

Perception reflects not only sensory inputs, but also the endogenous state of the brain when these inputs arrive. Prior studies show that endogenous neural states influence stimulus processing through non-specific, global mechanisms, such as spontaneous fluctuations of arousal. It remains unclear whether endogenous activity also influences circuit- and stimulus-specific processing and behavior. Here we use intracranial recordings from 30 pre-surgical epilepsy patients to show that patterns of endogenous activity are related to the strength of trial-by-trial neural tuning in different visual category-selective neural circuits. The same aspects of the endogenous activity that relate to tuning in a particular neural circuit also correlate with behavioral reaction times, but only for stimuli from the category that circuit is selective for. These results suggest that endogenous activity can modulate neural tuning and influence behavior in a circuit- and stimulus-specific manner, reflecting a potential mechanism by which endogenous neural states facilitate and bias perception.


Supplementary Figures
Supplementary Figure 1. Time course of stP and stBHA from an example electrode. Single-trial field potential (stP; left panel) and broadband high-gamma activity (stBHA; right panel) recorded from an example category-selective electrode in response to its preferred condition (faces) and a non-preferred condition (houses). The vertical black line indicates stimulus onset. Single trials are plotted as thin lines, and trial-averaged stP and stBHA responses as dashed lines. The dark red line shows a fast-response trial (RT = 688 ms) and the dark gray line a slow-response trial (RT = 985 ms).

Supplementary Methods
Solving the two-stage GLM using coordinate descent

Solve the elastic-net problem in the first step
In the first step, we set $\beta^{\mathrm{pre}} = 0$ and solve the following elastic-net problem, following the classical coordinate descent method [2]:

$$[\hat{\beta}^{\mathrm{evk}}_0, \hat{\beta}^{\mathrm{evk}}] = \operatorname*{arg\,min}_{\beta^{\mathrm{evk}}_0,\ \beta^{\mathrm{evk}}}\ \frac{1}{N}\sum_{i=1}^{N} f\!\left(y_i,\ \beta^{\mathrm{evk}}_0 + x_i^T \beta^{\mathrm{evk}}\right) + \lambda\left[\alpha \lVert \beta^{\mathrm{evk}} \rVert_1 + \frac{1-\alpha}{2}\lVert \beta^{\mathrm{evk}} \rVert_2^2\right] \tag{4}$$

where $f(y_i, \eta_i)$ denotes the negative log-likelihood of sample $i$ under the GLM with linear predictor $\eta_i$, and $[\hat{\beta}^{\mathrm{evk}}_0, \hat{\beta}^{\mathrm{evk}}]$ is a vector that contains the intercept $\beta^{\mathrm{evk}}_0$ and the feature weights $\beta^{\mathrm{evk}}$. For simplicity of notation, we omit the superscript 'evk' in the remainder of this part, and we assume that $X$ has been standardized such that each column $x_{:j}$ has zero mean and unit variance.
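As a concrete illustration of the first-stage objective, the following sketch evaluates (4) for a binomial (logistic) GLM. The binomial family and the function name are assumptions for illustration only; the derivation above holds for any GLM negative log-likelihood.

```python
import numpy as np

def elastic_net_objective(X, y, beta0, beta, lam, alpha):
    """Evaluate the elastic-net objective (4), assuming a binomial GLM.

    X is assumed standardized (zero-mean, unit-variance columns),
    as in the text; `lam` is the regularization parameter lambda.
    """
    eta = beta0 + X @ beta                          # linear predictor
    nll = np.mean(np.log1p(np.exp(eta)) - y * eta)  # binomial negative log-likelihood
    penalty = lam * (alpha * np.abs(beta).sum()
                     + 0.5 * (1.0 - alpha) * (beta ** 2).sum())
    return nll + penalty
```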
During the optimization iterations, assume that the current solution is $[\tilde{\beta}_0, \tilde{\beta}]$; we then solve for the updated solution $[\beta_0, \beta]$. Writing $\eta_i = \beta_0 + x_i^T\beta$ and $\tilde{\eta}_i = \tilde{\beta}_0 + x_i^T\tilde{\beta}$, we use a quadratic approximation around $[\tilde{\beta}_0, \tilde{\beta}]$ for the negative log-likelihood term in (4):

$$f(y_i, \eta_i) \approx f(y_i, \tilde{\eta}_i) + f'(y_i, \tilde{\eta}_i)(\eta_i - \tilde{\eta}_i) + \frac{1}{2} f''(y_i, \tilde{\eta}_i)(\eta_i - \tilde{\eta}_i)^2 \tag{5}$$

Completing the square, the approximated negative log-likelihood takes the weighted least-squares form

$$\ell_Q(\beta_0, \beta) = \frac{1}{2N}\sum_{i=1}^{N} w_i\left(z_i - \beta_0 - x_i^T\beta\right)^2 + C(\tilde{\beta}_0, \tilde{\beta}) \tag{6}$$

where the derivatives of $f$ with respect to the linear predictor are

$$f'(y_i, \tilde{\eta}_i) = \left.\frac{\partial f(y_i, \eta)}{\partial \eta}\right|_{\eta = \tilde{\eta}_i} \tag{7}$$

$$f''(y_i, \tilde{\eta}_i) = \left.\frac{\partial^2 f(y_i, \eta)}{\partial \eta^2}\right|_{\eta = \tilde{\eta}_i} \tag{8}$$

Plugging (7), (8) into (5) and comparing with (6), we get

$$z_i = \tilde{\eta}_i - \frac{f'(y_i, \tilde{\eta}_i)}{f''(y_i, \tilde{\eta}_i)} \tag{9}$$

$$w_i = f''(y_i, \tilde{\eta}_i) \tag{10}$$

As a result, solving (4) becomes iteratively solving the following regularized weighted least-squares problem:

$$\min_{\beta_0,\ \beta}\ \frac{1}{2N}\sum_{i=1}^{N} w_i\left(z_i - \beta_0 - x_i^T\beta\right)^2 + \lambda\left[\alpha\lVert\beta\rVert_1 + \frac{1-\alpha}{2}\lVert\beta\rVert_2^2\right] \tag{11}$$

We use coordinate descent to solve (11). Taking the subgradient with respect to $\beta_j$ and setting it to zero, some calculus yields the coordinate-wise update

$$\tilde{\beta}_j \leftarrow \frac{S\!\left(\frac{1}{N}\sum_{i=1}^{N} w_i x_{ij}\left(z_i - \tilde{z}_i^{(j)}\right),\ \lambda\alpha\right)}{\frac{1}{N}\sum_{i=1}^{N} w_i x_{ij}^2 + \lambda(1-\alpha)} \tag{12}$$

where $\tilde{z}_i^{(j)} = \tilde{\beta}_0 + \sum_{k \neq j} x_{ik}\tilde{\beta}_k$ is the fitted value excluding the contribution from $x_{ij}$, and $S(z, \gamma) = \mathrm{sign}(z)(|z| - \gamma)_+$ is the soft-thresholding operator.

To sum up, in the first step we solve the elastic-net regularized GLM using coordinate descent, as shown in Algorithm 1.

Algorithm 1: Solve the elastic-net regularized GLM using coordinate descent
Data: data matrix $X_{\mathrm{evk}} \in \mathbb{R}^{N \times T_2}$ for the post-stimulus part of the data, data labels $y \in \mathbb{R}^N$; where $N$ is the number of samples, $T_2 = t^{P}_{\mathrm{evk}} + t^{BHA}_{\mathrm{evk}}$, and $X_{\mathrm{evk}} = [X^{P}_{\mathrm{evk}}, X^{BHA}_{\mathrm{evk}}]$.
Parameters: the elastic-net hyper-parameter $\alpha$, maximum regularization parameter $\lambda_{\max}$ and minimum regularization parameter $\epsilon\lambda_{\max}$.
Result: weight vector for the post-stimulus features $\beta^{*}_{\mathrm{evk}} = [\beta^{\mathrm{evk}}_0, \beta^{P}_{\mathrm{evk}}, \beta^{BHA}_{\mathrm{evk}}]$
1  Fit the elastic-net problem for the post-stimulus features:
2  for the $i$-th cross-validation split $\{X^{(i)}_{\mathrm{evk,train}}, X^{(i)}_{\mathrm{evk,test}}\}$ do
3    for each $\lambda$ on the regularization path from $\lambda_{\max}$ down to $\epsilon\lambda_{\max}$ do
4      while not converged do
5        update the current quadratic approximation (11) by computing (9), (10);
6        for $j \leftarrow 1$ to $T_2$ (cyclic coordinate descent) do
7          update the weight of each coordinate $\tilde{\beta}_j$ using (12);
8      estimate the deviance of the solution for the current $\lambda$ on $X^{(i)}_{\mathrm{evk,test}}$;
9  find the optimal $\lambda^{*}$ and the corresponding $\beta^{*}_{\mathrm{evk}}$ that minimize the deviance.
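A minimal sketch of one outer iteration of Algorithm 1 (steps 5-7) follows, again assuming a binomial GLM so that $f' = p - y$ and $f'' = p(1-p)$. Function and variable names are illustrative; the convergence check and the $\lambda$ path are omitted.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * (|z| - gamma)_+."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def irls_coordinate_pass(X, y, beta0, beta, lam, alpha):
    """One outer iteration of Algorithm 1: refresh the quadratic
    approximation via (9)-(10), then run one cyclic pass of (12).
    Assumes a binomial GLM: f' = p - y, f'' = p * (1 - p)."""
    beta = beta.copy()
    eta = beta0 + X @ beta               # linear predictor
    p = 1.0 / (1.0 + np.exp(-eta))
    w = p * (1.0 - p)                    # weights w_i, eq. (10)
    z = eta + (y - p) / w                # working response z_i, eq. (9)
    for j in range(X.shape[1]):          # cyclic coordinate descent
        z_fit = eta - X[:, j] * beta[j]  # fitted value excluding x_ij
        rho = np.mean(w * X[:, j] * (z - z_fit))
        denom = np.mean(w * X[:, j] ** 2) + lam * (1.0 - alpha)
        beta_j = soft_threshold(rho, lam * alpha) / denom  # update (12)
        eta += X[:, j] * (beta_j - beta[j])  # keep predictor in sync
        beta[j] = beta_j
    # unpenalized intercept: weighted average of the partial residuals
    beta0 = np.sum(w * (z - X @ beta)) / np.sum(w)
    return beta0, beta
```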

Solve the group elastic-net GLM problem in the second step
The second step of fitting the two-stage GLM requires fixing the contribution of the post-stimulus features and optimizing the model with a group elastic-net penalty on the pre-stimulus features [3]. By fixing the weights of the post-stimulus features, for each sample $x_i$ we get a fixed offset

$$o_i = (x^{\mathrm{evk}}_i)^T \beta^{*}_{\mathrm{evk}} \tag{13}$$

and solve

$$[\hat{\beta}^{\mathrm{pre}}_0, \hat{\beta}^{\mathrm{pre}}] = \operatorname*{arg\,min}_{\beta^{\mathrm{pre}}_0,\ \beta^{\mathrm{pre}}}\ \frac{1}{N}\sum_{i=1}^{N} f\!\left(y_i,\ o_i + \beta^{\mathrm{pre}}_0 + (x^{\mathrm{pre}}_i)^T\beta^{\mathrm{pre}}\right) + \lambda\left[\frac{1-\alpha}{2}\lVert\beta^{\mathrm{pre}}\rVert_2^2 + \alpha\sum_{g=1}^{G}\sqrt{p_g}\,\lVert\beta^{(g)}\rVert_2\right] \tag{14}$$

where $\beta^{(g)}$ denotes the weights of the $g$-th group of pre-stimulus features, $p_g$ is the size of group $g$, and the second term in the brackets is the group-lasso penalty on the pre-stimulus features. As in the previous part, from now on we omit 'pre' in superscripts and subscripts for simplicity.

Similar to the previous part, we first take the quadratic approximation of the negative log-likelihood at the current iteration step around $[\tilde{\beta}_0, \tilde{\beta}]$ as

$$\ell_Q(\beta_0, \beta) = \frac{1}{2N}\left(z - \beta_0\mathbf{1} - X\beta\right)^T W \left(z - \beta_0\mathbf{1} - X\beta\right) + C(\tilde{\beta}_0, \tilde{\beta})$$

where $z = [z_1, \ldots, z_N]^T$, $W = \mathrm{diag}\{w_1, \ldots, w_N\}$, $X = [X^{(1)}, \ldots, X^{(G)}]$ contains the blocks of $X$ corresponding to each group $(g)$, and $z_i$ and $w_i$ are defined as in (9) and (10). Analogously, solving (14) becomes iteratively solving the following regularized weighted least-squares problem:

$$\min_{\beta_0,\ \beta}\ \frac{1}{2N}\left(z - \beta_0\mathbf{1} - X\beta\right)^T W \left(z - \beta_0\mathbf{1} - X\beta\right) + \lambda\left[\frac{1-\alpha}{2}\lVert\beta\rVert_2^2 + \alpha\sum_{g=1}^{G}\sqrt{p_g}\,\lVert\beta^{(g)}\rVert_2\right]$$

Let $r^{(-g)} = z - \sum_{j \neq g} X^{(j)}\beta^{(j)}$ be the residual excluding the contribution of $\beta^{(g)}$. The first-order optimality condition gives

$$-(X^{(g)})^T W r^{(-g)} + \left[\lambda(1-\alpha) I_{p_g} + (X^{(g)})^T W X^{(g)}\right]\beta^{(g)} + \lambda\alpha\sqrt{p_g}\,\nu^{(g)} = 0$$

where the subgradient

$$\nu^{(g)} = \begin{cases} \dfrac{\beta^{(g)}}{\lVert\beta^{(g)}\rVert_2} & \text{if } \beta^{(g)} \neq 0 \\ \{v : \lVert v \rVert_2 \le 1\} & \text{if } \beta^{(g)} = 0 \end{cases}$$

The optimal solution for each group is given as

$$\beta^{(g)} = \begin{cases} \left[(X^{(g)})^T W X^{(g)} + \lambda(1-\alpha) I_{p_g} + \dfrac{\lambda\alpha\sqrt{p_g}}{\lVert\beta^{(g)}\rVert_2} I_{p_g}\right]^{-1}(X^{(g)})^T W r^{(-g)} & \text{if } \lVert(X^{(g)})^T W r^{(-g)}\rVert_2 > \lambda\alpha\sqrt{p_g} \\ 0 & \text{if } \lVert(X^{(g)})^T W r^{(-g)}\rVert_2 \le \lambda\alpha\sqrt{p_g} \end{cases} \tag{25}$$
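For illustration, here is a sketch of the group-wise update implied by (25): a group is set to zero when the gradient-norm condition holds; otherwise the stationarity equation is solved, here by a simple fixed-point iteration. The iteration scheme and all names are our assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def group_update(Xg, w, r_minus_g, lam, alpha, beta_g, n_iter=50):
    """Update one group's weights beta^(g) following (25).

    Xg        : N x p_g block of pre-stimulus features for group g
    w         : length-N vector of IRLS weights (diagonal of W)
    r_minus_g : residual excluding group g's contribution, r^(-g)
    beta_g    : current nonzero estimate, used to start the iteration
    """
    p_g = Xg.shape[1]
    grad = Xg.T @ (w * r_minus_g)        # (X^(g))^T W r^(-g)
    thresh = lam * alpha * np.sqrt(p_g)
    if np.linalg.norm(grad) <= thresh:
        return np.zeros(p_g)             # whole group shrunk to zero
    H = (Xg * w[:, None]).T @ Xg + lam * (1.0 - alpha) * np.eye(p_g)
    for _ in range(n_iter):              # fixed-point iteration on (25)
        scale = thresh / max(np.linalg.norm(beta_g), 1e-12)
        beta_g = np.linalg.solve(H + scale * np.eye(p_g), grad)
    return beta_g
```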