Supplementary Information for Effects of Sad and Happy Music on Mind-Wandering and the Default Mode Network

Music is a ubiquitous phenomenon in human cultures, mostly due to its power to evoke and regulate emotions. However, effects of music evoking different emotional experiences such as sadness and happiness on cognition, and in particular on self-generated thought, are unknown. Here we use probe-caught thought sampling and functional magnetic resonance imaging (fMRI) to investigate the influence of sad and happy music on mind-wandering and its underlying neuronal mechanisms. In three experiments we found that sad music, compared with happy music, is associated with stronger mind-wandering (Experiments 1A and 1B) and greater centrality of the nodes of the Default Mode Network (DMN) (Experiment 2). Thus, our results demonstrate that, when listening to sad vs. happy music, people withdraw their attention inwards and engage in spontaneous, self-referential cognitive processes. Importantly, our results also underscore that DMN activity can be modulated as a function of sad and happy music. These findings call for a systematic investigation of the relation between music and thought, having broad implications for the use of music in education and clinical settings.


Experiment 2
Participants details. Participants were screened for depressive symptoms, alexithymia, and sensitivity to music reward, using, respectively, the Quick Inventory of Depressive Symptomatology (QIDS-SR; Rush et al., 2003), the Toronto Alexithymia Scale (TAS-20; Bagby et al., 1994), and the Barcelona Music Reward Questionnaire (BMRQ; Mas-Herrero et al., 2013). All participants scored below 6 on the QIDS-SR and below 52 on the TAS-20 (thus, none of the participants was depressive or alexithymic). With regard to the BMRQ, all participants scored between 40 and 60 on the two factors of emotion evocation and mood regulation, indicating average sensitivity to reward derived from music-evoked emotional experiences. All participants were German native speakers.
None of the participants were professional musicians. 58.3% of the participants were non-musicians, 29.2% amateur musicians, and 12.5% semi-professional musicians.
Participants' favorite musical genres fell into the following categories: 25.7% rock, 20% electronic, 15.7% pop, 15.7% classical & soundtrack, 12.8% jazz, 5.7% reggae, and 4.4% other. Exclusion criteria were a prior history of major neurological or psychiatric disorder, alcohol or other drug abuse, and excessive consumption of alcohol or caffeine as well as poor sleep during the 24 hours before the experimental session.

Stimulus preparation and selection.
We initially selected a large number of instrumental excerpts of sad and happy film soundtracks, considered capable of evoking emotions of sadness and happiness, respectively. We avoided popular music themes to control for memory effects (as in Experiments 1A and 1B), and we matched sad and happy pieces in instrumentation. Importantly, because tempo, measured in beats per minute (BPM), is usually faster for happy than for sad music, and because musical beats also evoke vestibular responses (potentially leading to the activation of vestibular cortical areas, which partly overlap with areas implicated in emotional processing; Koelsch, 2014), we ensured that sad and happy excerpts had the same tempo. To achieve this, we compiled pairs of sad and happy excerpts featuring the same or a very similar BPM, and combined them with an isochronous, sequenced electronic beat (as in Pehrs et al., 2014), consisting of drum-kit or percussion sounds, which made the tempo clear and noticeable to participants. We composed and added this electronic beat to each pair of excerpts using the software FL Studio (https://www.image-line.com/flstudio/). Different types of beats were used, but the same beat was always applied to both excerpts of a (sad, happy) pair. The volume of the beats was set 6 dB below the volume of the original music excerpts during the rendering of the stimuli. A behavioral pilot study was then performed to determine the pairs of (sad, happy) stimuli best capable of evoking emotions of sadness and happiness: 42 volunteers (24 female, mean age = 28.1, age range 18-38) listened to 15 sad and 15 happy pieces, presented in counterbalanced order, and were then asked to rate their emotional state during the music on four 7-point scales (valence, arousal, sadness, and happiness).
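As a side note on the 6 dB attenuation of the beat track: a level change of −6 dB corresponds to multiplying the signal's amplitude by 10^(−6/20) ≈ 0.5, i.e., the beats were rendered at roughly half the amplitude of the music. A minimal sketch (the function name is ours for illustration, not part of any software used in the study):

```python
def db_to_amplitude(db: float) -> float:
    """Convert a level change in decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

# A beat rendered 6 dB below the music is at roughly half its amplitude.
beat_gain = db_to_amplitude(-6.0)
print(round(beat_gain, 3))  # 0.501
```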
Based on the emotion ratings, we selected the four "sad-happy" pairs of excerpts that were most consistently identified as belonging to their respective emotion category.
Neutral stimuli. In the scanner, participants were also presented with neutral stimuli.
These were isochronous tone sequences whose pitch classes were randomly selected from the pentatonic scale. They featured the same beat track as the corresponding sad-happy stimulus pair (thus, neutral and emotion stimuli had identical BPM). They were generated using the MIDI (Musical Instrument Digital Interface) toolbox for Matlab (Eerola and Toiviainen, 2004) and edited to have the same length, timbre, fade-in/out ramps, and loudness as the corresponding sad-happy excerpts.
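The logic of such neutral sequences can be illustrated with a small sketch (a hypothetical Python re-implementation, not the MIDI-toolbox code actually used): pitch classes are drawn at random from a pentatonic scale and placed on an isochronous grid whose inter-onset interval follows from the BPM of the corresponding sad-happy pair.

```python
import random

# Major pentatonic pitch classes (C, D, E, G, A) relative to C.
PENTATONIC = [0, 2, 4, 7, 9]

def neutral_sequence(bpm: float, duration_s: float, base_note: int = 60):
    """Return (onset_seconds, midi_note) pairs for an isochronous
    tone sequence with random pentatonic pitch classes."""
    ioi = 60.0 / bpm  # inter-onset interval: one tone per beat
    n_tones = int(duration_s / ioi)
    return [(i * ioi, base_note + random.choice(PENTATONIC))
            for i in range(n_tones)]

seq = neutral_sequence(bpm=120, duration_s=10)  # 20 tones, 0.5 s apart
```

In the actual experiment, the sequences were additionally matched to the emotion stimuli in length, timbre, and loudness, as described above.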
Controlling for familiarity. Participants listened to short excerpts (15 s) of the selected sad and happy stimuli and indicated their familiarity with each excerpt on a scale ranging from 1 ("I have never heard this piece before") to 5 ("I know this piece").
Participants were not included in the fMRI session if they were familiar with any of the music pieces. A paired t-test showed that there was no significant difference in familiarity between the happy (1.62 ± 0.57) and the sad pieces (1.57 ± 0.63), P > .05. A minimum of 14 days passed between this behavioral session and the fMRI experiment to avoid memory effects (Pereira et al., 2011).
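For illustration, the familiarity comparison is a standard paired t-test on the per-participant differences between happy and sad ratings. A self-contained sketch (the ratings below are made-up example data, not the study's):

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for two equally long samples (df = n - 1)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se

# Hypothetical mean familiarity ratings (1-5) per participant.
happy = [1.5, 2.0, 1.0, 1.8, 1.6]
sad   = [1.4, 1.9, 1.2, 1.7, 1.5]
t = paired_t(happy, sad)  # small |t| here; compare to a t-distribution, df = 4
```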

Eigenvector centrality mapping (ECM) comparisons with neutral stimuli (ROI analysis).
To evaluate how stimuli matched for tempo, but without a sad or happy emotional tone, modulate DMN activity in comparison with the emotion conditions, we compared ECM values in the nodes of the DMN (vmPFC, dmPFC, PCC, PCC/PCu, and pIPL bilaterally) between sad and neutral stimuli, as well as between happy and neutral stimuli, using ROI analyses. ROI analyses were conducted for the significant clusters identified in the main ECM analysis comparing sad with happy music (see main text and Table S4). Paired t-tests for these ROIs were carried out for the contrasts happy vs. neutral and sad vs. neutral. Significantly higher centrality values within the six DMN nodes were observed during neutral compared with happy music stimuli (all Ps < .05).
Significantly higher centrality values were found in the vmPFC during sad compared with neutral stimuli (P = .03), and all remaining DMN structures showed slightly higher centrality values for the contrast sad vs. neutral (although these differences were not statistically significant; all Ps ≈ .1), pointing to a trend towards increased DMN activity during sad vs. neutral music (note that the probability of all six ROIs showing higher values for sad than for neutral by chance is (1/2)^6 = 1/64). Thus, the results suggest that, compared with neutral stimuli, happy music reduces DMN activity, whereas sad music increases it.

Although the evidence that sad (compared with neutral) music enhances DMN activity is only trend-wise significant, note that the neutral stimuli were more similar to the sad than to the happy excerpts with regard to evoked arousal (arousal ratings did not differ significantly between sad and neutral stimuli, P > .05, but differed significantly between happy and neutral stimuli, P < .001; see Behavioral ratings for neutral stimuli). Moreover, because the neutral stimuli were experienced as less pleasurable than the sad and happy ones (both Ps < .001; see Behavioral ratings for neutral stimuli), they may have enhanced mind-wandering levels (mind-wandering increases during boring and unpleasant activities; see Kane et al., 2007 and main text on p. 12). For these reasons, the results of the ROI analyses should be verified by future studies employing neutral stimuli that are better controlled for arousal and valence, for example through the use of more ecologically valid neutral stimuli such as "real" music evoking neither sadness nor happiness.

Figure S1. Increased centrality in the right inferior frontal gyrus (IFG) during listening to happy vs. sad music. Shown is a cluster of significantly higher centrality values located in the right IFG (pars opercularis) extending into the right precentral gyrus.
The pars opercularis of the IFG (BA 44) is implicated in the processing of musical syntax (Maess et al., 2001), consistent with the results of Experiment 1A, which indicated a stronger focus on the musical structure during happy (compared with sad) music (Fig. 1B).

Figure S2. Emotions evoked by sad, happy, and neutral stimuli during the fMRI experiment. Participants rated their emotional state on four scales: valence, arousal, sadness, and happiness. Scales ranged from 1 ("very unpleasant", "very calm", "not at all") to 6 ("very pleasant", "very aroused", "very much so"). Results were corrected for multiple comparisons. (A) Valence ratings did not differ significantly between sad and happy music, but were significantly lower during neutral compared with sad and happy music. (B) Arousal ratings did not differ significantly between sad and happy music, or between sad and neutral music, but were significantly lower during neutral compared with happy music. (C) Sadness ratings were significantly higher during sad compared with happy and neutral music, as well as during neutral compared with happy music. (D) Happiness ratings were significantly higher during happy compared with sad and neutral music, but did not differ significantly between sad and neutral music. Error bars indicate 1 SEM. ** P < .001.

Tables

Table S1. Items Used in Experiment 1A

Mind-wandering
Mind-wandering Where was your attention just before the music stopped?
Meta-awareness How aware were you of where your attention was focused?

Content of thought
Open-ended format What were you thinking about just before the music stopped?
Valence Was the content of your thoughts positive or negative?

Past Were you thinking about something from the past?
Future Were you thinking about something from the future?
Self-referentiality To what extent were these thoughts about yourself?
Familiar people Please indicate the extent to which your thoughts were about … people you know (e.g., family, friends, partner)
Unknown people … unknown people
Movements … body movements or dancing
Bodily sensations … bodily sensations (e.g., feeling hot/cold/tired/hungry, smiling/frowning)
Musical structure … thinking about the music (e.g., its melody, beat, harmony)
Evaluating the music … evaluating the music (e.g., I like it because..)
Experiment … thinking about the experiment

Form of thought
Visual imagery Did you think in images?
Inner language Did you think in words?