Psychophysiology of positive and negative emotions, dataset of 1157 cases and 8 biosignals

Subjective experience and physiological activity are fundamental components of emotion. There is increasing interest in the link between experiential and physiological processes across different disciplines, e.g., psychology, economics, and computer science. However, the findings largely rely on sample sizes that have been modest at best (limiting statistical power) and capture only a few concurrent biosignals. We present a novel, publicly available dataset of psychophysiological responses to positive and negative emotions that improves on existing databases. The database comprises recordings of 1157 cases from healthy individuals (895 individuals participated in a single session and 122 individuals in several sessions), collected across seven studies, with a continuous record of self-reported affect along with several biosignals (electrocardiogram, impedance cardiogram, electrodermal activity, hemodynamic measures, e.g., blood pressure, respiration trace, and skin temperature). We experimentally elicited a wide range of positive and negative emotions, including amusement, anger, disgust, excitement, fear, gratitude, sadness, tenderness, and threat. The Psychophysiology of Positive and Negative Emotions (POPANE) database is a large and comprehensive psychophysiological dataset on elicited emotions.


Background & Summary
The emotional response involves changes in subjective experience and physiology that mobilize individuals towards a behavioral response [1][2][3][4][5][6] . Theorists have debated the psychophysiology of human emotions for decades, focusing on several questions [7][8][9][10] : for instance, whether specific emotions produce specific physiological responses 3 , how different biosignals are correlated within an emotional response 5,6 , whether the physiological response allows predicting the concurrent subjective experience 11 , which features of a specific biosignal (e.g., the ECG wave) are influenced by emotions 12 , which improved methods of data processing can be used 13 , and how emotions influence physiological patterns related to health 14,15 .
Physiological responses to emotional stimuli have primarily been of interest in psychology. However, emotions have recently also gained attention in other scientific fields, such as neuroscience 16 , product and experience design 17 , and computer science 18 . For instance, Affective Computing (an interdisciplinary field also known as Emotional AI) uses psychophysiological signals to develop algorithms that can detect, process, and adapt to others' emotions 19,20 . To allow machines to learn the features of specific emotions, researchers have to provide them with multiple descriptors of the emotional response, including the subjective experience of affect (e.g., valence and motivational tendency) and objective physiological measures (e.g., cardiovascular, electrodermal, and respiratory measures).
These basic science and applied problems require robust empirical material: a large and comprehensive dataset that offers abundant emotions, diverse physiological signals, and enough participants to provide high statistical power. Moreover, researchers use various methods to elicit emotions 21 , including film clips 22,23 , pictures 24 , video recording/social pressure 25,26 , and behavioral manipulations 27 . Thus, accounting for various methods of emotion elicitation contributes to database versatility.

Procedure. Procedures common across the studies.
In most of our studies, participants were tested individually in a sound-attenuated and air-conditioned room. Study 5 involved opposite-sex couples tested together in the same room but in separate cubicles, with no interaction with each other. Participants were randomly assigned to the experimental conditions. We also randomized the order of affective stimuli within the studies. Detailed information on the order of affective stimuli in each study is available in the metadata file. All instructions were presented, and responses were collected, via a PC with a 23-inch screen. The experiments were run in the E-Prime 2.0 (Studies 1-4) and 3.0 (Studies 5-7) Professional Edition environment (Psychology Software Tools).
Upon arrival in the lab, participants provided informed consent, and the researcher applied sensors to obtain psychophysiological measurements. Studies began with a five-minute resting baseline (only Study 3 began with a three-minute baseline). During the baseline, participants were asked to sit and remain still. Upon completing all studies, biosensors were removed, and the participants were debriefed.

Study 1.
After the baseline, participants completed the speech preparation task, which aimed at threat elicitation. Later, depending on randomization, affective pictures were presented on the PC screen for three minutes to elicit high-approach motivation positive affect, low-approach motivation positive affect, or a neutral state.

Study 2.
After the baseline, participants watched affective pictures (high-approach positive affect, low-approach positive affect, or neutral depending on randomization) for three minutes. Afterward, they were asked to prepare the speech which aimed at threat or anger elicitation (depending on randomization).

Study 4.
Participants were told that they would be participating in two unrelated studies. The purpose of the first study was presented as determining the relationship between language orientation and the psychophysiological reactions to film clips. The purpose of the second study was presented as evaluating consumer products. After baseline, participants solved linguistic tasks. Next, they watched fear or neutral state eliciting film clips (depending on randomization). After the emotion manipulation, participants reported social needs and evaluated six pairs of commercial products.

Study 5.
After completing the baseline, each participant was told to wait for their partner, who would be solving complex tasks. In fact, there were no tasks to be solved by either participant. Next, each participant completed three rounds consisting of 1) two minutes of watching film clips while waiting for the partner; 2) receiving bogus information about the partner's success; and 3) sending feedback. Participants watched one of three film sets, containing only positive emotions, only negative emotions, or neutral material (depending on randomization). The film clips were presented in a counterbalanced order.

Study 6.
After the baseline, participants watched one of four 12-minute film presentations eliciting only positive emotions, only negative emotions, a mix of positive and negative emotions, or neutral states (depending on randomization). After watching the set of films, participants were instructed to play an ultimatum social game 40 . Participants received an offer considered unfair by most people taking part in this type of research ("6 USD for me and 0.80 USD for you"). Next, participants were asked to decide whether to accept or reject the offer. Before receiving the offer and after deciding, there was a 2-minute waiting period for recording physiological processes.

Speech preparation. In Study 1, we elicited threat with a well-validated social threat protocol 25,26,41 . Participants were asked to prepare a 2-minute speech on the topic "Why are you a good friend?". We informed participants that the speech would be recorded. Furthermore, participants were told that, after 30 s of speech preparation, they would be randomly selected either to deliver the speech or not. However, after the 30 s of preparation (anticipatory stress), each participant was informed that they had been selected not to deliver the speech.
In Study 2, we randomly assigned participants to prepare a threat- or anger-related speech. We used a similar method to elicit threat as in Study 1, but participants were given 3 minutes to prepare the speech. Study 2 also elicited anger with a similar method, i.e., an anger recall task [42][43][44] . Participants were asked to prepare a speech on the topic "What makes you angry?" and had 3 minutes to prepare it. After the 3 minutes of threat- or anger-related speech preparation, we informed participants that they had been selected not to deliver the speech.
Interpersonal communication. In Study 3, participants expressed their gratitude (a positive relational emotion) towards their benefactors via texting. This intervention was developed within the field of positive psychology 45 . Participants expressed their gratitude towards an acquaintance by sending a text message during the laboratory session (Gratitude Texting). This intervention involved the essential elements of gratitude expression, including identification and appreciation of a good event, recognition of the benefactor's role in generating the positive outcome, and the act of communicating gratitude itself 46 . In the control condition, we asked participants to send a neutral text message to an acquaintance with no suggestion regarding the topic. The control condition accounts for psychophysiological responses associated with texting in general 47 . Participants prepared their messages for three minutes.
Films. We used validated and reliable film clips selected from emotion-eliciting video clip databases 22,23,[48][49][50] . Each clip lasted two minutes (except for the films in Study 4 that, in sum, lasted 3 minutes 41 seconds). Most of the film clips were short excerpts from commercially available films. Within the sessions, clips were presented in a counterbalanced order. Table 1, along with the metadata spreadsheet, lists which films were used to elicit emotions in each study. Descriptions of the films used for emotion elicitation are also available in the metadata file (the "stimuli" spreadsheet).
We elicited positive emotions with the following film clips: 1) A Fish Called Wanda (Surprisingly, the homeowners get inside and discover Archie dancing while naked); 2) The Visitors (

Sensors & instruments. We present sensors and instruments used in our studies with examples illustrating their possible research applications.
Affect. Participants reported their affective experience of the emotional stimuli continuously with an electronic rating scale 51 . We investigated two dimensions of affect: valence (Studies 3, 5 & 6) and approach/avoidance motivational tendency (Studies 1, 2 & 7). Valence is the degree of feeling pleasure or displeasure in response to a stimulus (e.g., an object, event, or person). Individuals experience positive valence while facing favorable objects or situations (e.g., smiling people or amusing events) and negative valence while facing unfavorable objects or situations (e.g., sad individuals) 24 . The approach/avoidance motivational tendency is the urge to move toward or away from an object 52 . Individuals experience high-approach motivation while facing desirable or appetitive objects or situations (e.g., delicious food or sexually attractive individuals) and high-avoidance motivation while facing undesirable or aversive objects or situations (e.g., accidents or infected individuals). We focused on valence because it is the most fundamental and well-studied dimension of affect, and on the approach/avoidance motivational tendency because it is a relatively novel dimension in the literature that might advance the understanding of emotions' functions 53 .
Participants reported valence on a scale from 1 (extremely negative) to 10 (extremely positive) or approach/avoidance motivational tendency on a scale from 1 (extreme avoidance motivational tendency) to 10 (extreme approach motivational tendency). Participants were asked to adjust the rating scale position as often as necessary so that it always reflected how they felt at a given moment. For valence, we asked the participants to move the tag to the right side of the scale when they felt more positive or pleasant and to the left side of the scale when they felt more negative or unpleasant. For the approach/avoidance motivational tendency, we asked the participants to move the tag to the right side of the scale when they felt motivated to go toward or engage with the stimulus and to the left side of the scale when they felt motivated to go away from or disengage from the stimulus. Previous research indicated that rating scales are valid for reporting the intensity of valence and approach/avoidance motivation 24,51,54 .
Electrocardiography. We used two electrocardiographs (ECG): BioAmp with a Powerlab 16/35 AD converter (ADInstruments, New Zealand) (Studies 1, 2, 4 & 5) and the Vrije Universiteit Ambulatory Monitoring System (VU-AMS, the Netherlands) (Studies 3, 6 & 7). We used pre-gelled AgCl electrodes placed in a modified Lead II configuration. The signal was stored on a computer with other biosignals using a computer-based data acquisition and analysis system (LabChart 8.1; ADInstruments, or VU-AMS Data, Analysis & Management Software; VU-DAMS 3.0). The ECG signal was sampled at a frequency of 1 kHz. The ECG signal allows the computation of numerous indexes, the most popular being 1) heart rate, which reflects autonomic arousal (the heart is dually innervated by the sympathetic (SNS) and parasympathetic (PNS) nervous systems) and is related to motivational intensity, action readiness, and engagement 56,57 , and 2) heart rate variability, which is linked with stress, self-regulatory efforts, and recovery from stress 58 .
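As an illustration of these two indexes, the sketch below computes mean heart rate and RMSSD (a common time-domain heart rate variability measure) from a series of R-peak times. R-peak detection on the raw 1 kHz ECG is assumed to have been done upstream, and the peak times shown are hypothetical:

```python
import numpy as np

def hr_and_rmssd(r_peak_times_s):
    """Mean heart rate (bpm) and RMSSD (ms) from R-peak times in seconds."""
    rr = np.diff(np.asarray(r_peak_times_s, dtype=float))  # inter-beat intervals, s
    hr = 60.0 / rr.mean()                                  # mean heart rate, bpm
    rmssd = np.sqrt(np.mean(np.diff(rr * 1000.0) ** 2))    # HRV (RMSSD), ms
    return hr, rmssd

# Hypothetical R-peak times: a steady ~800 ms rhythm with slight variability
hr, rmssd = hr_and_rmssd([0.0, 0.80, 1.62, 2.40, 3.22, 4.00])  # hr = 75.0 bpm
```

RMSSD is one of many HRV summaries; frequency-domain measures require longer recordings and spectral analysis.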
Impedance cardiography. We recorded the impedance cardiography (ICG) signal continuously and noninvasively with the Vrije Universiteit Ambulatory Monitoring System (VU-AMS, the Netherlands) following psychophysiological guidelines 59,60 . We used pre-gelled AgCl electrodes placed in a four-spot electrode array for ICG 59 . The signal was stored on a computer with other biosignals using a computer-based data acquisition and analysis system (VU-DAMS 3.0). The ICG signal was sampled at a frequency of 1 kHz. ICG provided three channels: baseline impedance (Z0), the sensed impedance signal (dZ), and its derivative over time (dZ/dt). In addition to the ECG signal, the ICG signal allows the computation of indexes linked to the pace and blood volume of the heartbeats, including (1) the pre-ejection period, reflecting sympathetic cardiac efferent activity, which is associated, e.g., with motivational intensity and engagement 56,57 ; (2) stroke volume, which is linked with stress 61 ; and (3) cardiac output, which is used, e.g., to discriminate between challenge vs. threat stress responses 62 .
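To illustrate how such ICG-derived indexes relate, the sketch below uses the classic Kubicek formula for stroke volume and derives cardiac output from stroke volume and heart rate. All numeric values are hypothetical placeholders, not taken from the dataset, and the VU-DAMS software applies its own scoring procedures:

```python
def stroke_volume_kubicek(rho_ohm_cm, dist_cm, z0_ohm, dzdt_max, lvet_s):
    """Kubicek estimate of stroke volume (mL): rho is blood resistivity
    (ohm*cm), dist_cm the electrode distance, z0_ohm the baseline impedance
    (Z0), dzdt_max the |dZ/dt| peak (ohm/s), lvet_s the ejection time (s)."""
    return rho_ohm_cm * (dist_cm / z0_ohm) ** 2 * dzdt_max * lvet_s

def cardiac_output_l_min(sv_ml, hr_bpm):
    """Cardiac output (L/min) = stroke volume (mL) x heart rate (bpm) / 1000."""
    return sv_ml * hr_bpm / 1000.0

# Hypothetical values in a physiologically plausible range
sv = stroke_volume_kubicek(135.0, 30.0, 25.0, 1.2, 0.3)   # ~70 mL
co = cardiac_output_l_min(sv, 70.0)                        # ~4.9 L/min
```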
Electrodermal activity. We recorded the electrodermal activity (EDA) with the GSR Amp (ADInstruments) at 1 kHz. We used electrodes with adhesive collars and sticky tape attached to the medial phalanges of digits II and IV of the left hand. The electrodes had a contact area of 8 mm diameter and were filled with a TD-246 sodium chloride skin conductance paste. The signal was stored on a computer with other biosignals using a computer-based data acquisition and analysis system (LabChart 8.1; ADInstruments). Skin conductance reflects beta-adrenergic sympathetic activity, and some examples of its use comprise mental stress, cognitive load, and autonomic arousal 66 .
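Skin conductance analyses typically begin by downsampling the 1 kHz recording and summarizing the tonic level. The block-averaging helper below is a generic sketch with fabricated data; the dataset's own preprocessing may differ:

```python
import numpy as np

def downsample_mean(x, factor):
    """Block-average downsampling, e.g., 1 kHz EDA -> 10 Hz with factor=100."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // factor) * factor      # drop the ragged tail, if any
    return x[:n].reshape(-1, factor).mean(axis=1)

# Hypothetical 3 s of 1 kHz skin conductance around 2 uS with a slow drift
fs = 1000
eda = 2.0 + 0.001 * np.arange(3 * fs) / fs
eda_10hz = downsample_mean(eda, 100)     # 30 samples at 10 Hz
scl = eda_10hz.mean()                    # mean skin conductance level, uS
```

Phasic responses (SCRs) require dedicated onset/peak detection on top of a tonic/phasic decomposition, which is beyond this sketch.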
Respiration. In Study 1, we recorded respiratory action with a piezo-electric belt, Pneumotrace II (UFI, USA), sampled at 1 kHz. The belt was attached around the upper chest near the level of maximum amplitude for thoracic respiration. The signal was stored on a computer with other biosignals using a computer-based data acquisition and analysis system (LabChart 8.1; ADInstruments). The respiratory action allows the computation of respiratory rate and depth associated, e.g., with mental stress 67 , arousal 68 , and increases in negative emotion, e.g., anger and fear 5 .
Fingertip skin temperature. In Study 1, we measured fingertip temperature with a temperature probe attached to a Thermistor Pod (ADInstruments, New Zealand). The thermometer was attached to the distal phalanx of the fifth (V) finger of the left hand and sampled at 1 kHz. The signal was stored on a computer with other biosignals using a computer-based data acquisition and analysis system (LabChart 8.1; ADInstruments). Changes in digit temperature reflect sympathetically innervated peripheral vasoconstriction and vasodilation, which decrease or increase the fingertip temperature due to lower or higher blood supply. For instance, the fingertip temperature decreases in response to stress 69 and increases in response to joy 70 . Fingertip temperature is usually lower than other body temperature measures, e.g., the axillary or oral temperature 71 . Moreover, fingertip skin temperature can be much lower for some participants due to individual differences in hand morphology as well as ambient temperature. For instance, thermoregulatory cold-induced vasodilation occurs when hands are exposed to cold weather in winter 72 .

Figure 1 presents the experiments and the data acquisition setup. Stimuli were managed through E-Prime (Psychology Software Tools, Inc.). E-Prime sent markers to the data acquisition devices (LabChart and VU-AMS), which allowed us to synchronize and merge the recordings from different devices into a single data file. The rating scale, ECG, EDA, thermometer, and respiratory belt were directly connected to the Powerlab 16/35 and then to the acquisition personal computer (PC) over a USB port. The ECG and ICG were directly connected to the VU-AMS and then to the acquisition PC over a USB port. The blood pressure measures were collected via a finger cuff directly connected to the Finometers and then to the acquisition PC over a USB port.
We synchronized LabChart and VU-AMS with Finometer data by manually adding markers at the same time during data recording. Data were managed in the following manner: 1) Powerlab data were stored in LabChart 8.0; 2) VU-AMS data were stored in VU-DAMS; and 3) Finometer data were stored in BeatScope. The acquired data from each participant were exported with the timestamp provided by the acquisition PC and the markers into TXT data files.

Data preprocessing. Physiological data collected across the seven studies were exported from the acquisition formats by the first author [MBe]. The number of cases differs from the initial studies due to various issues such as device malfunction, signal artifacts, and missing data files. We retained only data with high signal quality. Thus, some participants' data from some channels (devices) were excluded, resulting in an 8% decrease in the participant pool.

The exported TXT, CSV, and metadata files were preprocessed using Python 73,74 scientific libraries (e.g., pandas 1.1.5, numpy 1.19.2; see Code Availability for detailed information) (Fig. 2a). All signals were resampled to 1 kHz using the previous-neighbor interpolation method (Fig. 2b). Signals from different devices were time-synchronized using the synchronization markers generated during the experiments. We marked the baselines and emotion elicitations within the files. Finally, data across the studies were exported to a normalized form, consisting of a header, a predefined file structure, and a standardized subject naming convention.
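The previous-neighbor (sample-and-hold) resampling can be reproduced with pandas by reindexing onto a 1 kHz grid with pad filling. The channel name and sample times below are hypothetical:

```python
import pandas as pd

# Hypothetical low-rate channel (e.g., a rating-scale trace logged on change)
t = pd.to_timedelta([0.000, 0.350, 0.900], unit="s")
rating = pd.Series([5.0, 6.0, 4.0], index=t)

# Target 1 kHz grid spanning the first second of the recording
grid = pd.timedelta_range(start="0s", end="1s", freq="1ms")

# Previous-neighbor interpolation: each grid sample takes the most
# recently recorded value ("pad" on reindex)
rating_1khz = rating.reindex(grid, method="pad")
```

Unlike linear interpolation, pad filling never invents intermediate values, which is why it suits step-like signals such as markers and rating-scale positions.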

Data records
The POPANE dataset is publicly available at the Open Science Framework repository 32 .

(2022) 9:10 | https://doi.org/10.1038/s41597-021-01117-0

Metadata. We present auxiliary information about the experiments in the metadata spreadsheet. The metadata file includes participants' ID, sex, age, height, weight, experimental conditions for each study, stimuli order within a session, and information about missing data, outliers, and artifacts (sheets "Study 1-7"). Furthermore, the metadata file identifies individuals who participated more than once across the studies by listing all their study-related IDs. The description of labels used for tagging discrete emotions is also available in the metadata file (the "stimuli" spreadsheet).
Dataset structure. The data repository consists of seven ZIP-compressed directories (folders), one for each study; e.g., the "Study1" directory was compressed to the "Study1.zip" archive file, "Study2" to "Study2.zip", etc. Each of these directories contains a set of CSV files with psychophysiological information for particular subjects. We used a consistent CSV file naming convention, i.e., "S<study_id>_P<participant_id>_<phase_name>.csv", where "S" stands for study and "P" for participant, e.g., S1_P10_Baseline.csv or S6_P4_Amusement1.csv. The "<study_id>" and "<participant_id>" are natural numbers identifying a study and a participant. The "<phase_name>" is the name of the phase of an experiment, e.g., "Baseline" or "Amusement1". All experimental-phase labels are explained in the metadata spreadsheet. All psychophysiological signals recorded during the experiment for each individual are also available in a single CSV datafile named "S<study_id>_P<participant_id>_All.csv". All the other files for a particular participant, named "S<study_id>_P<participant_id>_<phase_name>.csv", contain a subset of records (an excerpt) extracted from the corresponding "S<study_id>_P<participant_id>_All.csv" file. Thus, the "S<study_id>_P<participant_id>_All.csv" files store both the signals related to particular experimental phases and the signals gathered during time intervals in which no experimental condition was present, i.e., signals not related to the affective manipulation.
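The naming convention can be parsed programmatically; the small helper below is a hypothetical utility, not part of the published scripts:

```python
import re

# Matches the S<study_id>_P<participant_id>_<phase_name>.csv convention
FNAME_RE = re.compile(r"S(?P<study>\d+)_P(?P<participant>\d+)_(?P<phase>\w+)\.csv")

def parse_popane_name(fname):
    """Return (study_id, participant_id, phase_name) for a POPANE file name."""
    m = FNAME_RE.fullmatch(fname)
    if m is None:
        raise ValueError(f"not a POPANE data file name: {fname!r}")
    return int(m["study"]), int(m["participant"]), m["phase"]

parse_popane_name("S6_P4_Amusement1.csv")   # -> (6, 4, 'Amusement1')
```

Note that the "_All.csv" files parse too, with "All" as the phase name, which makes it easy to separate full recordings from phase excerpts when walking the unzipped directories.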
We also included one additional component, "POPANE dataset". This component contains a set of ZIP-compressed directories of CSV files with psychophysiological information for particular participants, baselines, and emotions. We grouped the datafiles from all studies into a single folder sorted by emotion. This simplifies the usage of our dataset as a single set of emotion-related data from all 1157 cases.
A sample from Study 1 is available for preview and testing and can be obtained from the data repository as "Study1_sample.zip". The compressed sample file is 42 MB (208 MB uncompressed), compared to 2.0 GB (9.3 GB uncompressed) for the complete dataset of Study 1. This gives potential users an opportunity to get a sense of the data without downloading the whole dataset. For a visualization of these sample data, see Fig. 2b.

If no data are available for the participant's age, sex, height, and weight, we inserted a value of "−1". Following the header, each CSV file contains 7-12 columns, depending on the study. For studies in which data were gathered from more channels, there are more columns in the CSV files. Sensor names are consistent across all CSV files in all studies (see the metadata file). The first column of the data table (after the header) contains timestamps, as provided by a clock on the main data acquisition (logging) computer; the timestamp format is time in seconds. The last column contains a marker that identifies the specific phase of the experiment. The metadata file provides a full explanation of the stimulus IDs used to mark the specific phases, e.g., "−1" indicates the experimental baseline, while "107" indicates the neutral film clip "The Lover". The columns between the timestamp and the marker contain the physiological data (see Table 3 for details).
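Given this layout, a recording can be sliced into experimental phases by filtering on the marker column. The miniature table below is fabricated for illustration; only the column roles (timestamp first, marker last) and the marker codes "−1" (baseline) and "107" (neutral film) follow the description above:

```python
import pandas as pd

# Fabricated miniature of a POPANE-style table: timestamp first,
# physiological channels in between, experimental-phase marker last
df = pd.DataFrame({
    "timestamp": [0.000, 0.001, 0.002, 0.003, 0.004, 0.005],
    "ECG":       [0.12, 0.10, 0.95, 0.20, 0.11, 0.13],
    "EDA":       [2.01, 2.01, 2.02, 2.02, 2.03, 2.03],
    "marker":    [-1, -1, -1, 107, 107, 107],  # -1 = baseline, 107 = "The Lover"
})

# Slice the recording into phases using the marker column
baseline = df[df["marker"] == -1]
neutral_film = df[df["marker"] == 107]
baseline_scl = baseline["EDA"].mean()   # e.g., mean skin conductance at rest
```

The same pattern applied to an "_All.csv" file reproduces the per-phase excerpt files shipped alongside it.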

Scripts.
We used different acquisition programs; therefore, the exported data had to be integrated into a common format. An automatic preprocessing procedure was implemented in Python scripts. We converted the raw acquired data (obtained with proprietary acquisition software) into a consistent format and saved it in CSV files. Consequently, data from several sources were integrated so that they can be easily imported into all common statistical software packages. We also prepared examples in IPython Jupyter Notebooks presenting how to load and visualize psychophysiological data from the sample files for Study 1. Both the conversion scripts and the Notebooks can be obtained from our source code repository available at GitHub: https://github.com/psychosensing/popane-2021.

Technical Validation
First, we used validated methods (e.g., protocols and stimuli) to elicit emotions in our experiments. We used stimuli in line with well-established methods in affective science 21 . Second, the data were collected by experimenters who completed 30 h of training in psychophysiological research provided by MBe and LDK. Third, prior to preprocessing, the first author (MBe) visually inspected all physiological signals. Before inclusion in the database, MBe manually double-checked all datasets for missing or corrupted data. Table 4 presents missing data for each stimulus and physiological signal. The histograms in Figure 3 show the distributions of the selected physiological signals during the resting baseline. Figure 3 also shows that the collected signals fell within standard ranges. For instance, most participants presented a healthy SBP and DBP range during the resting baseline of the experiments 75 . This figure does not present raw recordings (e.g., ECG in mV) that require further processing (e.g., breathing rate based on peak analysis).
[Table fragment omitted: per-film stimulus details (e.g., Benny & Joon, The Visitors, When Harry Met Sally, American History X) with clip durations and per-study case counts; see Table 1 and the metadata file.]

Quantitative validation. We evaluated signal quality with the signal-to-noise ratio (SNR). To calculate SNR across the diverse physiological signals, we used an algorithm based on the autocorrelation function of the signal, fitting the autocorrelation curve with a second-order polynomial 76 . The script we used for calculating SNR is available in the project's GitHub repository (https://github.com/psychosensing/popane-2021). We calculated SNR for all baselines and emotion elicitations across the seven studies (Table 5). The calculated SNR indicated the high quality of all collected signals 77 , SNR min = 5.67 dB, with mean SNR ranging from 37.82 dB to 67.39 dB depending on the physiological signal and study. We flagged SNR values with z-scores above 3.29 as outliers 78 , which identified 290 segments (1.09% of all calculated SNR values) as SNR outliers. Next, the first author (MBe) visually inspected all flagged data to determine whether they should be classified as artifacts; 257 of the outlying data points were identified as artifacts (88% of the low-SNR data; less than 0.96% of all calculated SNR values). Both outliers and artifacts are reported in the metadata file.
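The flavor of the autocorrelation-based SNR estimate and the z > 3.29 outlier rule can be sketched as follows. This is a simplified illustration of the idea, not the published algorithm (ref. 76), whose details differ:

```python
import numpy as np

def snr_db_autocorr(x, fit_lags=range(1, 6)):
    """SNR (dB) from the autocorrelation function: total power is R(0);
    signal power is R(0) extrapolated from small nonzero lags with a
    second-order polynomial (white noise contributes only at lag 0)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    lags = list(fit_lags)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in [0] + lags])
    coeffs = np.polyfit(lags, r[1:], deg=2)
    signal_power = max(np.polyval(coeffs, 0.0), 1e-12)  # extrapolated R(0)
    noise_power = max(r[0] - signal_power, 1e-12)
    return 10.0 * np.log10(signal_power / noise_power)

def flag_outliers(snr_values, z_crit=3.29):
    """Flag SNR values whose |z-score| exceeds 3.29 (two-tailed p < .001)."""
    s = np.asarray(snr_values, dtype=float)
    z = (s - s.mean()) / s.std()
    return np.abs(z) > z_crit
```

On a clean periodic signal with little added noise, this estimate is high; heavy broadband noise pulls it down, which is the ordering the quality check relies on.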
Previous studies. For each study represented in the dataset, we ran manipulation checks that contributed to the technical validation. We found that the stimuli produced expected affective and physiological responses in participants [33][34][35][36][37][38] . For instance, in Study 5, we found that individuals who watched the positive film clips reported more positive valence, whereas individuals who watched the negative film clips reported more negative valence, compared to individuals who watched the neutral film clips 36 . Furthermore, individuals in the positive and negative emotion conditions displayed greater physiological reactivity (e.g., SBP and DBP) than individuals in the neutral conditions 36 .

Usage Notes
The POPANE dataset is available at https://doi.org/10.17605/OSF.IO/94BPX. The data are saved in CSV format. The dataset can be used to test hypotheses on positive and negative emotions, to create psychophysiological models and/or standards, or as example data for testing technical aspects of analyses and/or validating mathematical models. These data may be of interest to several scientific fields, such as psychology, e.g., for investigating human emotions based on physiological and psychometric information, or computer science (machine learning), e.g., for implementing automatic emotion recognition or clustering data related to particular emotions.
Limitations. There are some shortcomings of our dataset. First, some data are missing because recordings for some of the participants could not be reliably collected due to technical reasons. Second, this dataset cannot be employed to investigate psychophysiological differences between ethnicities or between age groups, as more than 99% of the participants were Caucasian young adults. This is an important limitation because some studies indicated physiological differences in baseline levels and reactivity to some stressors depending on the participant's age 79,80 and ethnicity 81 . Moreover, some studies in the dataset recruited only male participants. This should be accounted for if the whole dataset is used for testing hypotheses regarding sex differences 82 . Third, our dataset does not include participants diagnosed with cardiovascular disease. However, we did not collect information about other health issues, e.g., psychiatric or neurological diagnoses.
Fourth, this dataset is an a posteriori use of previously acquired data from already published independent studies. Moreover, some participants (12%) took part in more than one study. We identified these participants in the metadata file; thus, if the whole dataset is used to test hypotheses, researchers should take this non-independence into account. In contrast, some authors might be particularly interested in using the repeated data collected from the same participants, e.g., to test intra-person stability or change.
Fifth, most of the film clips were short excerpts from commercially available films. Thus, some of our participants might have already been familiar with them.
Sixth, in our studies, we measured autonomic nervous system (ANS) reactivity to nine discrete emotions. This is not an exhaustive list of affective states related to ANS activity. Future studies may focus on emotions that are examined less often in psychophysiological studies, including pride, craving, love, or embarrassment 6 . Furthermore, the emotions elicited in our studies were not balanced in valence, as some studies were focused on the differences between neutral conditions and positive emotions (Study 3) or negative emotions (Study 4).
In summary, the POPANE database is a large and comprehensive psychophysiological dataset on emotions. We hope that POPANE will provide individuals, companies, and laboratories with the data they need to perform their analyses and advance the fields of affective science, physiology, and psychophysiology. We invite you to visit the project website: https://data.psychosensing.psnc.pl/popane/index.html.

GitHub repository. Scripts for converting data from proprietary acquisition software formats into consistent CSV files, as well as IPython Jupyter Notebooks presenting how to load the data from POPANE CSV files into a Python pandas DataFrame structure, are available at the following GitHub repository: https://github.com/psychosensing/popane-2021.

Code availability
The code can be accessed on the public GitHub repository: https://github.com/psychosensing/popane-2021. It is licensed under the MIT open-source license, i.e., permission is granted, free of charge, to obtain a copy of this software and associated files (e.g., the Jupyter IPython Notebooks), subject to the following condition: the copyright notice and the MIT license permission notice shall be included in all copies or substantial portions of software based on the scripts we published.
Scripts that we used to transform the data from proprietary acquisition formats into coherent CSV files utilized Python 3.6 83 . The list of the specific modules and their versions is available in the "requirements.txt" file in the GitHub repository.