Neural sensitivity following stress predicts anhedonia symptoms: a 2-year multi-wave, longitudinal study

Animal models of depression show that acute stress negatively impacts functioning in neural regions sensitive to reward and punishment, often manifesting as anhedonic behaviors. However, few human studies have probed stress-induced neural activation changes in relation to anhedonia, which is critical for clarifying risk for affective disorders. Participants (N = 85, 12–14 years old, 53 female), oversampled for risk of depression, were administered clinical assessments and completed an fMRI guessing task during a baseline (no-stress) period to probe neural response to receipt of rewards and losses. After the initial run of the fMRI guessing task, participants received an acute stressor and were then re-administered the guessing task. Including baseline, participants provided up to 10 self-report assessments of life stress and symptoms over a 2-year period. Linear mixed-effects models estimated whether change in neural activation (post- vs. pre-acute stressor) moderated the longitudinal associations between life stress and symptoms. Primary analyses indicated that adolescents with stress-related reductions in right ventral striatum response to rewards exhibited stronger longitudinal associations between life stress and anhedonia severity (β = −0.06, 95% CI [−0.11, −0.02], p = 0.008, pFDR = 0.048). Secondary analyses showed that longitudinal positive associations between life stress and depression severity were moderated by stress-related increases in dorsal striatum response to rewards (left caudate β = 0.11, 95% CI [0.07, 0.17], p < 0.001, pFDR = 0.002; right caudate β = 0.07, 95% CI [0.02, 0.12], p = 0.002, pFDR = 0.003; left putamen β = 0.09, 95% CI [0.04, 0.14], p < 0.001, pFDR = 0.002; right putamen β = 0.08, 95% CI [0.03, 0.12], p < 0.001, pFDR = 0.002).
Additionally, longitudinal positive associations between life stress and anxiety severity were moderated by stress-related reductions in dorsal anterior cingulate cortex (β = −0.07, 95% CI [−0.12, −0.02], p = 0.008, pFDR = 0.012) and right anterior insula (β = −0.07, 95% CI [−0.12, −0.02], p = 0.002, pFDR = 0.006) response to loss. All results held when adjusting for comorbid symptoms. Results show convergence with animal models, highlighting mechanisms that may facilitate stress-induced anhedonia as well as a separable pathway for the emergence of depressive and anxiety symptoms.
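The moderation analysis described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the variable names (`aleq` for time-varying life stress, `shaps` for anhedonia severity, `delta_roi` for the person-level pre-to-post-stressor change in ROI activation) and the simulated data are hypothetical. The key idea is that a linear mixed-effects model with random intercepts per participant tests the stress × neural-change interaction across repeated assessments:

```python
# Hypothetical sketch of the linear mixed-effects moderation model
# (variable names and simulated data are illustrative, not from the study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_waves = 85, 10  # 85 adolescents, up to 10 assessment waves

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_waves),
    # Time-varying life stress (e.g., ALEQ scores), standardized
    "aleq": rng.normal(size=n_subj * n_waves),
    # Person-level moderator: change in ROI activation post- vs. pre-stressor
    "delta_roi": np.repeat(rng.normal(size=n_subj), n_waves),
})
# Simulate anhedonia severity with a negative stress x neural-change
# interaction (magnitude chosen only for illustration)
df["shaps"] = (0.3 * df["aleq"]
               - 0.06 * df["aleq"] * df["delta_roi"]
               + rng.normal(scale=0.5, size=len(df)))

# Random-intercept model; the aleq:delta_roi term carries the moderation
# effect of interest (does neural change alter the stress-symptom slope?)
model = smf.mixedlm("shaps ~ aleq * delta_roi", data=df, groups=df["subject"])
result = model.fit()
print(result.params["aleq:delta_roi"])
```

In this framing, a negative interaction coefficient means that participants whose reward-related activation dropped more after the stressor show a steeper life stress-to-anhedonia slope, matching the direction of the primary result.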


Functional Data Preprocessing
First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. A B0-nonuniformity map (or fieldmap) was estimated based on a phase-difference map calculated with a dual-echo GRE (gradient-recalled echo) sequence, processed with a custom workflow of SDCFlows inspired by the epidewarp.fsl script and further improvements in HCP Pipelines [7]. The fieldmap was then co-registered to the target EPI (echo-planar imaging) reference run and converted to a displacements field map (amenable to registration tools such as ANTs) with FSL's fugue and other SDCFlows tools. Based on the estimated susceptibility distortion, a corrected EPI reference was calculated for a more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration [8]. Co-registration was configured with six degrees of freedom. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9) [9]. BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 [10] (RRID:SCR_005927).
The BOLD time-series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions. These resampled BOLD time-series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time-series were also resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space.
Several confounding time-series were calculated based on the preprocessed BOLD: framewise displacement (FD), DVARS, and three region-wise global signals. FD and DVARS were calculated for each functional run, both using their implementations in Nipype (following the definitions by Power et al., 2014 [11]). The three global signals were extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors was extracted to allow for component-based noise correction (CompCor [12]). Principal components were estimated after high-pass filtering the preprocessed BOLD time-series (using a discrete cosine filter with a 128 s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). tCompCor components were then calculated from the top 5% most variable voxels within a mask covering the subcortical regions. This subcortical mask was obtained by heavily eroding the brain mask, which ensures it does not include cortical GM regions. For aCompCor, components were calculated within the intersection of the aforementioned mask and the union of the CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run (using the inverse BOLD-to-T1w transformation). Components were also calculated separately within the WM and CSF masks. For each CompCor decomposition, the k components with the largest singular values were retained, such that the retained components' time series were sufficient to explain 50 percent of variance across the nuisance mask (CSF, WM, combined, or temporal); the remaining components were dropped from consideration. The 6-parameter head-motion estimates calculated in the correction step were placed within the corresponding confounds file. The confound time series derived from head-motion estimates and global signals were expanded with the inclusion of temporal derivatives and quadratic terms for each [13], resulting in 32 motion confounds. Frames that exceeded a threshold of 0.5 mm FD or 1.5 standardised DVARS were annotated as motion outliers.

All resamplings were performed with a single interpolation step by composing all the pertinent transformations (i.e., head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and output spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels [14]. Non-gridded (surface) resamplings were performed using mri_vol2surf (FreeSurfer).
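Two of the confound steps above — the Power et al. framewise displacement (with its 0.5 mm outlier threshold) and the derivative/quadratic expansion of each confound — can be sketched in simplified form. This is an illustrative approximation, not fMRIPrep's actual implementation; the 50 mm head-radius convention for converting rotations to displacement follows Power et al., 2014:

```python
# Simplified sketch (not fMRIPrep's code) of framewise displacement and
# the derivative/quadratic confound expansion described above.
import numpy as np
import pandas as pd

def framewise_displacement(motion, head_radius=50.0):
    """motion: (T, 6) array of 3 rotations (radians) followed by
    3 translations (mm). Rotations are converted to arc length on a
    sphere of radius 50 mm (Power et al., 2014 convention)."""
    m = motion.copy()
    m[:, :3] *= head_radius                 # radians -> mm of displacement
    diffs = np.abs(np.diff(m, axis=0))      # frame-to-frame changes
    return np.concatenate([[0.0], diffs.sum(axis=1)])  # FD per frame

def expand_confounds(df):
    """For each confound column, add its temporal derivative and the
    quadratic terms of both: x, dx, x^2, dx^2 (a 4x expansion)."""
    out = {}
    for col in df.columns:
        x = df[col].to_numpy()
        dx = np.concatenate([[0.0], np.diff(x)])
        out[col], out[f"{col}_derivative1"] = x, dx
        out[f"{col}_power2"], out[f"{col}_derivative1_power2"] = x**2, dx**2
    return pd.DataFrame(out)

rng = np.random.default_rng(1)
motion = rng.normal(scale=0.001, size=(200, 6))   # simulated realignment params
fd = framewise_displacement(motion)
outliers = fd > 0.5                               # frames exceeding 0.5 mm FD
expanded = expand_confounds(
    pd.DataFrame(motion, columns=[f"p{i}" for i in range(6)]))
print(expanded.shape[1])                          # 6 params -> 24 regressors
```

Note that the six motion parameters alone expand to 24 regressors; the confounds file reaches the larger count quoted above once the global-signal columns are expanded the same way.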

Supplementary Figure 1. Change in ROI activation pre- to post-stress
There were no outliers in residual scores, defined as values exceeding Q3 + 3*IQR. On average, there was no evidence of a significant difference in ROI activation pre- to post-stress in the Win or Loss Conditions (see Supplementary Figures 2A, 3A); however, there was variation in individual change in activation pre- to post-stress in both conditions (see Supplementary Figures 2B, 3B).
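The outlier rule stated above can be written out as a short check; this is a minimal sketch of the stated Q3 + 3*IQR criterion (an "extreme" upper Tukey fence), with illustrative data:

```python
# Minimal sketch of the outlier rule described above: a residual score
# is flagged if it exceeds Q3 + 3*IQR (extreme upper Tukey fence).
import numpy as np

def extreme_outliers(x):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x > q3 + 3 * iqr   # boolean mask of flagged values

# Illustrative residual scores; only the last value exceeds the fence
scores = np.array([-1.2, -0.4, 0.0, 0.3, 0.9, 1.1, 8.5])
print(extreme_outliers(scores))
```

With these example values the fence is Q3 + 3*IQR = 1.0 + 3 × 1.2 = 4.6, so only the 8.5 residual would be flagged.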

Note. R = Right; L = Left.

Supplementary Figure 3. ROI activation patterns pre- and post-stress within the Loss Condition
Note. R = Right; L = Left.

Supplementary Figure 4. Affective ratings pre- and post-stress
Note. Height of bars indicates mean affect rating, and error bars denote 95% CI. ***p < 0.001.

Supplementary Table 1. Comparison of Participant Characteristics between Included and Excluded Participants
Choices for biological sex were Male or Female. Within the excluded group, some demographic information was not recorded: Tanner stage, SHAPS, MAFQ, MASC, and ALEQ (n = 27); sex (n = 1); ethnicity and race (n = 3). Post-hoc comparisons for the significant group difference in race are unreliable because some cells contain zero counts.

Supplementary Table 2. Comparison of Participant Characteristics between Included and Excluded Participants with Brain Data
Choices for biological sex were Male or Female.