Abstract
Music provides a means of communicating affective meaning. However, the neurological mechanisms by which music induces affect are not fully understood. Our project sought to investigate this through a series of experiments into how humans react to affective musical stimuli and how physiological and neurological signals recorded from participants change in accordance with self-reported changes in affect. In this paper, the datasets recorded over the course of this project are presented, including details of the musical stimuli, participant reports of their felt changes in affective states as they listened to the music, and concomitant recordings of physiological and neurological activity. We also include non-identifying metadata on our participant populations for purposes of further exploratory analysis. These data provide a large, novel, and valuable resource for researchers investigating emotion, music, and how they affect our neural and physiological activity.
Measurement(s) | brain measurement • response to stimulus • emotion/affect behavior trait • electrodermal activity measurement |
Technology Type(s) | electroencephalography (EEG) • electrocardiography • functional magnetic resonance imaging • electrode |
Factor Type(s) | age of participants • sex of participants |
Sample Characteristic - Organism | Homo sapiens |
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.12326519
Background & Summary
Music is an intrinsic part of life for many people. It provides a means of smoothing social bonding, a source of entertainment, and a sense of identity by communicating, expressing, and inducing a wide range of emotions1,2,3,4. However, the processes by which music induces emotions are not fully understood5. In particular, the links between neurological activity, reported emotions, and music have not yet been explored in sufficient detail to allow a complete theory of the process of emotion induction by music to be developed1,6,7,8,9,10,11.
Our project, entitled ‘Brain-computer music interfacing for monitoring and inducing affective states’ (http://neuromusic.soc.plymouth.ac.uk/bcmi-midas/index.html), sought to explore relationships between music, emotions, and activity in the brain and body12,13,14,15,16. We sought to build relational models describing how these different processes relate to one another and to use these models to construct physiologically-driven systems for interacting with music.
Over the course of the project, datasets were collected from a number of different experiments exploring how music affects emotions and related neurological and physiological processes. The studies involved adult participants aged 18 to 66 years old, with approximately the same numbers of males and females. Combinations of music excerpts and computer-generated music were used as stimuli, and participants’ felt emotions were recorded, as they listened to the music, via a battery of widely used self-report methods. Additionally, in a large proportion of the studies, neural and physiological data were recorded while participants listened to music. These data included, but were not limited to, functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), electrocardiogram (ECG), galvanic skin response (GSR), and respiration rates.
In total our data consists of approximately 150 hours of recordings from 114 participants. The data are summarized in Table 1.
Methods
In this paper we present four key datasets that were recorded during the project and which form the basis of the majority of the research conducted in the project. The four datasets are:
1. The ‘Film clips’ dataset, which contains EEG recorded while participants listened to short music clips extracted from films and chosen to induce specific emotional responses.
2. The ‘Brain-Computer Music Interface’ dataset, which contains EEG and other physiological signals recorded during development and evaluation of the Brain-Computer Music Interface (BCMI) system developed in the project. This dataset is divided into three parts: calibration, training, and testing.
3. The ‘BCMI tempo’ dataset, which contains EEG recorded while developing a BCMI for controlling the tempo of music.
4. The ‘Joint EEG-fMRI’ dataset, which contains EEG and fMRI recorded simultaneously from participants while they listened to both synthetic and classical music and reported their current felt affective states on a continuous basis.
Film clips
The film clips dataset was recorded as part of a study investigating the effects of short clips of acoustic film music on participants’ emotions and their EEG. Thirty-one healthy adults (aged 18–66 years, 18 female) participated in this study. EEG (approximately 45 minutes in total per participant) was recorded from 19 channels positioned according to the international 10/20 system for electrode placement. Participants were asked to listen to a series of short (12 s) clips of music taken from a pre-labeled set of extracts of music from films17. They were then asked to report their felt affective states via a series of 8 randomly ordered Likert questions designed to identify the level of emotional response along 8 axes: pleasantness, energy, sadness, anger, tenderness, happiness, fear, and tension.
The experiment consisted of 6 runs of EEG recording. The first and last of these runs were baseline recordings in which participants were instructed to sit still for 300 s and avoid excessive eye movement. The other 4 runs each contained 10 trials, presented in random order to each participant.
Each trial contained a 3 s fixation cross, followed by 12 s of music and a 0.5 s pause, after which the 8 Likert questions were presented, in random order, for participants to report their felt emotions after listening to the music. The specific questions were of the form ‘The music made me feel…’ followed by one of the words: ‘pleasant’, ‘energetic’, ‘tense’, ‘angry’, ‘fearful’, ‘happy’, ‘sad’, or ‘tenderness’. Each question could be answered on a 9-point scale from ‘strongly disagree’ to ‘strongly agree’. Finally, the inter-trial interval was drawn uniformly from the range 2–4 s.
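As an illustration of how these trial timings can be used when working with the data, the following Matlab sketch epochs a continuous EEG recording around the music-onset events listed in the BIDS events files. The file name, the ‘music’ event label, the sampling rate, and the pre-loaded variable eeg are assumptions for illustration only; they are not part of the scripts released with the datasets.
% Minimal sketch: cut 12 s music-listening epochs from a continuous recording.
% Assumes `eeg` is a [channels x samples] matrix already loaded into Matlab.
events = readtable('sub-01_task-run1_events.tsv', ...        % hypothetical file name
    'FileType', 'text', 'Delimiter', '\t');
fs        = 1000;                                 % sampling rate in Hz (assumed; check the BIDS sidecar)
epochLen  = 12;                                   % music-listening period, in seconds
musicRows = strcmp(events.trial_type, 'music');   % assumed event label
onsets    = round(events.onset(musicRows) * fs) + 1;   % BIDS onsets are in seconds
nTrials = numel(onsets);
epochs  = zeros(size(eeg, 1), epochLen * fs, nTrials);
for t = 1:nTrials
    idx = onsets(t) : onsets(t) + epochLen * fs - 1;
    epochs(:, :, t) = eeg(:, idx);
end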
Full details of how the data recorded in this study were processed and analysed, as well as the initial results, are available in13. Additional analysis of this dataset, showing how music-induced emotions can be more effectively predicted from combinations of EEG and musical features, is presented in18. Finally, this dataset has also been used in the evaluation of a feature selection method19.
Brain-Computer Music Interface (BCMI): calibration, training, testing
A Brain-Computer Music Interface (BCMI) was constructed with the aim of using music as a means of modifying the user’s affective state. Specifically, the BCMI attempted to identify a user’s current affective state and modify it by playing a sequence of generated music pieces in order to place the user in a more desirable affective state (for example, by reducing a user’s arousal to calm them down). The BCMI was calibrated, trained, and tested online on a per-participant basis, over a number of sessions and days. EEG, GSR, and ECG data from 19 healthy adult participants are available across 3 datasets, corresponding to the calibration, training, and testing sessions (with approximately 1.2, 3.2, and 1.3 hours of recording per participant in each session, respectively).
The system was calibrated for each individual user in a single session comprising 5 runs, during which the user was presented with 40 s music clips targeting different affective trajectories, i.e. shifts from one affective state to another.
During the calibration and training sessions, recordings of pre-generated music were used as stimuli. These were generated with the music generator described in20 and are included with the datasets made available with this paper. During the testing session the synthetic music was generated online in real time by the same generator. Because these music stimuli were generated in real time during the experiment, they differed between participants and are not included in our datasets. Readers wishing to replicate this experiment may find the details of how the music was generated in20. We have also included the code used by this music generator, along with a short read-me file describing its use, with the datasets.
It may also be noted that the same music generator was used to generate the stimuli used in all three stages of the experiment (calibration, training, and testing). Therefore, the stimuli included with the calibration and training datasets may be viewed as representative of the stimuli used during the testing experiments.
The calibration session consisted of 5 runs, each of which contained 18 trials in which participants listened to music and reported their current felt emotions as they listened. Each trial began with a fixation cross, which was displayed for 1–2 s. This was followed by a 21 s music listening and reporting period, in which participants both listened to a randomly selected piece of pre-generated music and reported their current felt emotions via the FEELTRACE interface21. This was followed by two self-assessment manikins, which were used to allow the participants to report their current felt valence and arousal. Finally, a brief distraction task was used to minimise the effect of sequential presentations of different pieces of music. Participants were asked to count the number of high-pitched beeps they heard in a 15 s period and then rest for 2.5 s.
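Because FEELTRACE produces a continuous, irregularly sampled trace of valence and arousal, it is often convenient to resample the reports onto a regular time base before comparing them with the physiological signals. The following is a minimal Matlab sketch under assumed variable names (t_report, valence, arousal); it is illustrative and not code taken from the datasets.
% Minimal sketch: resample a FEELTRACE trace onto a regular time base.
% Assumes t_report (seconds), valence and arousal are vectors of the same length.
fsTarget = 10;                                                % target rate in Hz (arbitrary choice)
tRegular = t_report(1) : 1/fsTarget : t_report(end);
valenceRs = interp1(t_report, valence, tRegular, 'linear');   % resampled valence trace
arousalRs = interp1(t_report, arousal, tRegular, 'linear');   % resampled arousal trace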
Following calibration, the BCMI system was trained for each individual user over a number of days, during which participants listened to 40-s synthetic music clips targeting two affective states, as defined by valence and arousal (the first 20 s targeted one affective state, then the second 20 s targeted a different affective state). During each 40 s music listening period participants reported their current affective states via the FEELTRACE interface. All other details of the trial structure were the same as the calibration session.
Data were recorded over 3 sessions (conducted on separate days), each containing 4 runs (on the same day) of 18 music-listening trials each. The training sessions served a twofold purpose: firstly, to allow the system to learn how to modify the music generator’s parameters in order to achieve a desired affective trajectory (i.e. a shift from one affective state to another) for each individual user; and, secondly, to identify neural signatures of affect and how these varied for different affective states.
The system was then tested in a fully online and real-time testing session, during which the system’s performance in inducing selected affective trajectories for each individual user was investigated. The data recorded during this session are contained in the BCMI-testing dataset.
The testing session had the same session and trial structure as the training session. Each participant completed one testing session, which contained 4 runs, with 18 trials in each run. Each trial contained a 60 s period during which music was played. In the first 20 s a specific affective state was targeted by the music. The next 20 s were used to attempt to identify the user’s current affective state. Finally, the last 20 s were used by the BCMI to attempt to dynamically alter the user's affective state.
The data recording methods and the BCMI system are described in full in22.
Brain-Computer Music Interface (BCMI) tempo
An additional BCMI system was developed to allow users to control the tempo of a piece of music via active modulation of motor imagery. Specifically, the BCMI allowed users to increase the tempo of the music by concentrating on kinaesthetic imagination of upper-limb movement, and to reduce it by relaxing. Music was generated dynamically during the experiments via a music generator developed in the project and described in20.
EEG was recorded from 19 channels positioned according to the international 10/20 system for EEG electrode placement. Recording was made via a BrainProducts BrainAmp system at a sampling rate of 1,000 Hz with impedances below 5 kΩ.
Feedback to the users took three forms: visual (via an onscreen ball), auditory (via the music), and a combination of visual and auditory. The data comprise approximately 1 hour of EEG recordings per participant.
This 1-hour recording session was split into 9 runs, the first of which was a calibration run, with the remaining 8 runs used to train the user to control the BCMI. The calibration run was split into 30 trials, while each subsequent run was split into 18 trials.
Each trial consisted of a fixation cross displayed on screen for 4 s. This was followed by 12 s in which participants were instructed to use kinaesthetic dominant-hand motor imagery to attempt one of the following tasks: move a ball either to the top or the bottom of the screen, increase or decrease the tempo of a piece of music, or attempt both tasks simultaneously. This was followed by feedback (a smiling or frowning face, depending on whether the user succeeded in the cued task), which was displayed on screen for 0.5 s.
Full details of the dataset are provided in15. Additionally, further analysis of the data was conducted to show how changes in the tempo of the music acted to entrain changes in event-related (de)synchronization (ERD/S) in the EEG23.
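For readers wishing to reproduce an ERD/S-style analysis of this dataset, the following Matlab sketch contrasts mu-band power during the motor-imagery period with the fixation-cross baseline. The epoched array, the channel index c3, and the band limits are assumptions for illustration; this is not the published analysis pipeline.
% Minimal sketch: event-related desynchronisation (ERD) as percentage change
% in mu-band power relative to the pre-trial baseline.
% Assumes `epochs` is [channels x samples x trials] sampled at 1,000 Hz.
fs      = 1000;                      % sampling rate of this dataset (Hz)
muBand  = [8 12];                    % mu rhythm range in Hz (assumed band limits)
baseIdx = 1 : 4*fs;                  % 4 s fixation-cross baseline
taskIdx = 4*fs + 1 : 16*fs;          % 12 s motor-imagery period
nTrials = size(epochs, 3);
erd = zeros(nTrials, 1);
for t = 1:nTrials
    pBase  = bandpower(squeeze(epochs(c3, baseIdx, t)), fs, muBand);
    pTask  = bandpower(squeeze(epochs(c3, taskIdx, t)), fs, muBand);
    erd(t) = 100 * (pTask - pBase) / pBase;   % negative values indicate desynchronisation
end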
Joint EEG-fMRI
A final dataset was recorded via a joint EEG-fMRI imaging modality from a cohort of 21 healthy adult participants (20–30 years old, all right-handed, 10 female) while they listened to music. Two sets of stimuli were used: the first comprised synthetic music generated with the intention of inducing a wide range of different affective states, while the second comprised a small set of classical music clips chosen for their ability to induce a similarly wide range of affective states.
Participants reported their current felt affective states on a continuous basis via the FEELTRACE interface21 while they listened to the different music clips. Additionally, control conditions were used to control for the arm and eye movements associated with the use of the FEELTRACE interface.
EEG was recorded from 31 channels (with 1 additional channel used for ECG) using an MRI-compatible BrainAmp MR system at a sampling rate of 5,000 Hz, with impedances below 10 kΩ. EEG channels were placed according to the international 10/20 system for electrode placement.
fMRI recordings were made with a 3.0 Tesla Siemens Magnetom Trio scanner with a 37-channel head coil. Anatomical scans were first acquired from each participant (field of view = 256 × 256 × 176 voxels, repetition time (TR) = 2020 ms). This was followed by functional sequences (TR = 2000 ms, echo time = 30 ms, field of view = 64 × 64 × 37 voxels).
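As a quick check of these acquisition parameters, the functional images (stored as NIfTI files in the BIDS dataset) can be inspected in Matlab with the Image Processing Toolbox; the file name below is hypothetical.
% Minimal sketch: read one functional NIfTI series and confirm its dimensions and TR.
boldFile = fullfile('ds002725', 'sub-01', 'func', 'sub-01_task-music_run-01_bold.nii.gz');  % hypothetical name
info = niftiinfo(boldFile);        % header information
bold = niftiread(boldFile);        % 4-D image array
size(bold)                         % expected: 64 x 64 x 37 x (number of volumes)
info.PixelDimensions(4)            % temporal spacing, i.e. TR (expected: 2 s)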
The paradigm consisted of three 10-minute runs during which participants listened to randomly-selected pieces of generated music, followed by a 2-minute n-back task, and then a 30-minute run in which they listened to a set of classical music pieces. During each music-listening trial participants reported their current felt emotions via the FEELTRACE interface.
Each trial began upon a transistor-transistor logic (TTL) pulse from the fMRI scanner. This was followed by a fixation cross for 1–3 s and then a music listening and reporting period, which lasted 40 s for the generated music and a variable length of time for the classical music listening session (dependent on the length of the piece of classical music, between 2 and 3 minutes). This was followed by a 0.5 s inter-stimulus interval before the system waited for the next TTL pulse from the fMRI scanner to start the next trial.
Full details of this dataset, its analysis, and the experiment design are provided in our associated publication16.
Quality assurance (QA)
All physiological data were visually inspected for the presence of artefacts by researchers with at least 5 years’ experience at the time of inspection. In all cases the inspector was blinded to the associated labels within the data (stimuli used, affective responses reported by participants, etc.).
Artefact-contaminated trials were corrected or removed in our analyses of the datasets, as described in the associated publications. However, the data are published here in their original, uncorrected form to allow maximum flexibility in their use.
For reference, the artefact removal process and the proportion of artefact-contaminated trials removed from each dataset in our original publications are described in Table 2.
EEG data were recorded with minimal impedance on all channels using CE-certified, research-grade EEG measurement systems. All experiments were conducted by, or under the supervision of, researchers with a minimum of 5 years’ experience of conducting EEG and physiological measurement studies.
All EEG data were recorded via the standard or extended 10/20 international systems for EEG electrode placement. However, the numbers and locations of recording sites differ across studies according to the particular needs of each study.
The fMRI data have been pre-processed to remove face information. Specifically, the anatomical scans were processed to remove the faces of the participants (note that this was not necessary for the functional scans, as their field of view covered only the brain and parts of the skull). The data are otherwise provided in raw (unprocessed) format, along with the associated anatomical scans for each participant, to allow maximum flexibility in re-use of the data.
The EEG recordings included in this dataset have already been pre-processed to remove fMRI scanner noise via the average artefact subtraction (AAS) method24. The EEG has also been co-registered to the fMRI recordings. Specifically, the first TTL trigger recorded in the EEG dataset (see the event description file) corresponds to the time point of the first scan image in the fMRI dataset (functional scan set).
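Given this co-registration convention, mapping a point in the EEG recording onto the fMRI volume acquired at the same time requires only the first trigger latency, the EEG sampling rate, and the TR. A minimal Matlab sketch, assuming the TTL trigger latencies have already been extracted into a vector ttlSamples (in EEG samples):
% Minimal sketch: map EEG sample indices onto fMRI volume indices.
fsEEG = 5000;               % EEG sampling rate of this dataset (Hz)
TR    = 2;                  % functional repetition time (s)
t0    = ttlSamples(1);      % EEG sample index of the first functional volume
% 1-based index of the fMRI volume being acquired at EEG sample s:
volumeOf = @(s) floor((s - t0) / (fsEEG * TR)) + 1;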
Ethics
All the studies described in this paper were reviewed according to the procedures of the University of Reading research ethics committee and given favourable opinion for conduct. All experiments were performed in accordance with all relevant guidelines and regulations. Informed written consent was obtained from all participants in all sets of experiments.
Data Records
All data are available on the OpenNeuro data archive. The Film clips dataset can be accessed from the OpenNeuro archive via https://doi.org/10.18112/openneuro.ds002721.v1.0.1. The data are provided in BIDS format25,26, with each folder corresponding to data from one participant27.
The BCMI system data are provided in 3 separate datasets, corresponding to the calibration, training, and testing phases of the BCMI system. Even though the number of participants varies between datasets, participant codes are the same across all three datasets for consistency. The BCMI calibration dataset28 can be accessed from the OpenNeuro data archive via https://doi.org/10.18112/openneuro.ds002722.v1.0.1. The BCMI training dataset29 can be accessed via https://doi.org/10.18112/openneuro.ds002724.v1.0.1. The testing dataset30 can be accessed via https://doi.org/10.18112/openneuro.ds002723.v1.1.0. The data are provided in BIDS format, with each folder corresponding to data from one session for each participant.
The BCMI tempo dataset31 can be accessed from the OpenNeuro data archive via https://doi.org/10.18112/openneuro.ds002720.v1.0.1. The data are provided in BIDS format, with each folder corresponding to data from one participant.
The joint EEG-fMRI dataset32 can also be accessed from the OpenNeuro data archive via https://doi.org/10.18112/openneuro.ds002725.v1.0.0. The data are provided in BIDS format.
Technical Validation
We performed technical validation for each dataset individually as a part of the initial analysis we performed after each experiment.
Specifically, for the ‘Film clips’ dataset we first examined the participant response data to ensure they contained values within the expected numerical ranges and at the correct points in time. We then segmented the EEG according to the recorded metadata before visually inspecting it to ensure it was reasonably clean of artefacts and physiologically meaningful. This was done by a researcher with over 8 years’ experience of EEG inspection (author ID). Finally, the segmented EEG was used to measure changes in prefrontal asymmetry for trials in which participants reported different levels of felt emotions. The results were then compared to those expected from the literature. The full details are reported in13.
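As an indication of how such a prefrontal asymmetry measure can be computed, the Matlab sketch below uses a common formulation (the difference in log alpha power between right and left frontal electrodes). The electrodes, band limits, and variable names are assumptions for illustration and may differ from those used in the cited analysis13.
% Minimal sketch: prefrontal alpha asymmetry for a single trial.
% Assumes `epoch` is a [channels x samples] trial sampled at rate `fs`,
% and that f3 and f4 index electrodes F3 and F4.
alphaBand = [8 13];                                  % alpha range in Hz (assumed)
pLeft  = bandpower(epoch(f3, :), fs, alphaBand);     % left frontal alpha power
pRight = bandpower(epoch(f4, :), fs, alphaBand);     % right frontal alpha power
asymmetry = log(pRight) - log(pLeft);                % positive: relatively greater right alpha power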
For the ‘BCMI’ datasets (‘calibration’, ‘training’, and ‘testing’) we technically validated the data via a similar process. The time series of participant responses (recorded via the FEELTRACE interface) were first visually inspected to ensure they were recorded at the correct times and fell within the expected numerical ranges. The EEG and other physiological signals were then visually inspected by experienced researchers (authors ID and DW) before segmenting them into trials and plotting the resulting changes in prefrontal asymmetry against participants’ reports of their felt emotions. This allowed us to validate that the EEG contained the expected neural correlates of affective responses to the stimuli. Full details are reported in22.
For the ‘BCMI tempo’ dataset, technical validation was performed after completion of the online BCI control experiment. Specifically, the EEG was visually inspected by an experienced researcher (author ID) to ensure it contained an acceptable level of noise and conformed to physiological expectations. The details are reported in15.
Finally, technical validation of the joint EEG-fMRI dataset was performed in a multi-step process. First, the triggers transmitted from the fMRI scanner to the EEG recording were checked to ensure they occurred at the expected times (once every 2 s) and could be used to co-register the EEG and fMRI recordings. The fMRI data were manually pre-processed using the Statistical Parametric Mapping (SPM version 12) toolbox in Matlab and visually inspected at each stage to ensure correct recording of all parts of the data, including the structural (anatomical) and functional scans. The associated EEG data were cleaned to remove fMRI scanner artefacts using the average artefact subtraction method24 implemented in the ‘BrainVision Analyser’ software (BrainProducts). The resulting EEG was then visually inspected by an experienced researcher (author ID) to ensure it was reasonably clear of artefacts and conformed to physiological expectations. Full details are provided in16.
Code availability
We have prepared scripts to demonstrate how to load the datasets into Matlab ready to be pre-processed. These scripts are listed and introduced in Table 3. All scripts are placed within the sub-folder ‘code’ in the respective BIDS formatted datasets. Interested readers are also directed to the BIDS Matlab repository, which provides tools for working with BIDS-structured data in Matlab (https://github.com/bids-standard/bids-matlab).
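As an example of loading one of the EEG recordings outside of the provided scripts, the sketch below uses FieldTrip’s file-reading functions (FieldTrip is not distributed with the datasets) and assumes the recordings are stored in BrainVision format; the file path is illustrative only.
% Minimal sketch: read one BrainVision recording from a BIDS-formatted dataset.
addpath('/path/to/fieldtrip');          % FieldTrip must be on the Matlab path
ft_defaults;
vhdrFile = fullfile('ds002721', 'sub-01', 'eeg', 'sub-01_task-run1_eeg.vhdr');   % hypothetical path
hdr   = ft_read_header(vhdrFile);       % channel labels, sampling rate, etc.
data  = ft_read_data(vhdrFile);         % [channels x samples] matrix
event = ft_read_event(vhdrFile);        % stimulus/trigger events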
We have also provided the code used by the music generator in the experiments that generated the ‘BCMI tempo’ and ‘BCMI testing’ data-sets. This is located in the ‘code’ sub-folder in each dataset along with a document describing the music generator.
Please note that the music generator is not a single piece of code, but rather a collection of scripts and functions that work together with proprietary software to generate the music. The code was written in Max/MSP version 6.1, and this software is needed to run the music generator. Further details are provided in20.
References
Juslin, P. N. & Sloboda, J. A. Music and emotion: Theory and research. Series in affective science (Oxford University Press, 2001).
Västfjäll, D. Emotion induction through music: A review of the musical mood induction procedure. Music Sci. Spec Issue, 173–211 (2001).
Sacks, O. The power of music. Brain. 129, 2528–32 (2006).
Huron, D. B. Sweet Anticipation: Music and the Psychology of Expectation. (MIT Press, 2006).
Zentner, M., Grandjean, D. & Scherer, K. R. Emotions evoked by the sound of music: characterization, classification, and measurement. Emotion. 8, 494–521 (2008).
Sammler, D., Grigutsch, M., Fritz, T. & Koelsch, S. Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music. Psychophysiology 44, 293–304 (2007).
Juslin, P. N. & Västfjäll, D. Emotional responses to music: the need to consider underlying mechanisms. Behav. Brain Sci. 31, 559–75 (2008).
Koelsch, S. Towards a neural basis of music-evoked emotions. Trends Cogn. Sci. 14, 131–7 (2010).
Mueller, K. et al. Investigating brain response to music: A comparison of different fMRI acquisition schemes. NeuroImage 54, 337–343 (2011).
Peretz, I., Aube, W. & Armony, J. L. Toward a neurobiology of musical emotions. In The Evolution of Emotional Communication: From Sounds in Nonhuman Mammals to Speech and Music in Man (Oxford University Press, 2013).
Hou, J. et al. Review on Neural Correlates of Emotion Regulation and Music: Implications for Emotion Dysregulation. Front. Psychol. 8, 501 (2017).
Daly, I. et al. Affective brain-computer music interfacing. J. Neural Eng. 13(4), 046022 (2016).
Daly, I. et al. Neural correlates of emotional responses to music: an EEG study. Neurosci. Lett. 573, 52–57 (2014).
Nicolaou, N. et al. Directed Motor-Auditory EEG Connectivity Is Modulated by Music Tempo. Front. Hum. Neurosci. 11, 502 (2017).
Daly, I. et al. Investigating music tempo as a feedback mechanism for closed-loop BCI control. Prog. Brain Res. 1, 158–169 (2014).
Daly, I. et al. Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music. Sci. Rep-UK. 9, 1 (2019).
Eerola, T. & Vuoskoski, J. K. A comparison of the discrete and dimensional models of emotion in music. Psychol. Music. 39, 18–49 (2010).
Daly, I. et al. Music-induced emotions can be predicted from a combination of brain activity and acoustic features. Brain cognition. 101, 1–11 (2015).
Daly, I. et al. Automated identification of neural correlates of continuous variables. J. Neurosci. Meth. 242, 65–71 (2014).
Williams, D. et al. Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System. ACM Trans. Appl. Perce. 14, 1–13 (2017).
Cowie, R. et al. ‘FEELTRACE’: An Instrument For Recording Perceived Emotion In Real Time. In Proceedings of the ISCA Workshop on Speech and Emotion, 19–24 (2000).
Daly, I. et al. An Affective Brain-computer music Interface. In Proceedings of the 6th International Brain-Computer Interface Meeting, organized by the BCI Society (2016).
Daly, I. et al. Changes in music tempo entrain movement related brain activity. In Proceedings of the 6th International Brain-Computer Interface Meeting (2014).
Allen, P. J., Josephs, O. & Turner, R. A Method for Removing Imaging Artifact from Continuous EEG Recorded during Functional MRI. NeuroImage. 12, 230–239 (2000).
Gorgolewski, K. J. et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data. 3, 160044 (2016).
Pernet, C. R. et al. EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data. 6, 103 (2019).
Daly, I. et al. An EEG dataset recorded during affective music listening. OpenNeuro https://doi.org/10.18112/openneuro.ds002721.v1.0.1 (2020).
Daly, I. et al. A dataset recorded during development of an affective brain-computer music interface: calibration session. OpenNeuro https://doi.org/10.18112/openneuro.ds002722.v1.0.1 (2020).
Daly, I. et al. A dataset recorded during development of an affective brain-computer music interface: training sessions. OpenNeuro https://doi.org/10.18112/openneuro.ds002724.v1.0.1 (2020).
Daly, I. et al. A dataset recorded during development of an affective brain-computer music interface: testing session. OpenNeuro https://doi.org/10.18112/openneuro.ds002723.v1.1.0 (2020).
Daly, I. et al. A dataset recorded during development of a tempo-based brain-computer music interface. OpenNeuro https://doi.org/10.18112/openneuro.ds002720.v1.0.1 (2020).
Daly, I. et al. A dataset recording joint EEG-fMRI during affective music listening. OpenNeuro https://doi.org/10.18112/openneuro.ds002725.v1.0.0 (2020).
Daly, I., Scherer, R., Billinger, M. & Muller-Putz, G. FORCe: Fully Online and automated artifact Removal for brain-Computer interfacing. IEEE T. Neur. Sys. Reh. 23 (2014).
Acknowledgements
This work was supported by the EPSRC grants EP/J003077/1 and EP/J002135/1.
Author information
Contributions
Author I.D. recorded the data. Authors I.D., D.W. and N.N. analysed the data. Authors I.D. and N.N. wrote the manuscript. All authors planned the experiments. Authors E.M. and S.J.N. secured funding for the work and managed the project.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.
About this article
Cite this article
Daly, I., Nicolaou, N., Williams, D. et al. Neural and physiological data from participants listening to affective music. Sci Data 7, 177 (2020). https://doi.org/10.1038/s41597-020-0507-6