Introduction

Music plays a significant role in human life and is present throughout today’s society, in both work and leisure settings1,2. Music listening engages brain networks involved in hearing, motion, emotion and learning3, thereby modulating emotions, mood and arousal4. Many people use music to accompany behaviours by modulating arousal, either to relax, as with sleep aid music, or to stimulate the nervous system, as with music used while studying5,6,7,8,9. Thanks to the availability of large playlist datasets from online streaming services, identifying what music people use to accompany their daily activities has become increasingly popular among music scientists10,11,12,13. This has led to the discovery of new music listening habits that reflect how the general population uses music. For example, whereas clinical trials usually use slow, soft and instrumental music to aid relaxation and sleep14,15,16, recent studies show that people use much more varied music to help initiate sleep12,17,18,19. Additionally, while background music for studying is usually recommended to be calm and non-complex20,21,22, a recent study showed that the music respondents actually used was highly varied23. These cases illustrate that how music is used in practice may differ from how it is assumed to be used in theory. By analysing multiple activity types together and using big data, we hope to gain more insight into the nature of this variation and to elaborate on how music is used during these activities in practice. In the present exploratory study, we compare music used for sleep with music used while studying. Sleep and study music are excellent domains to start with because the two activities are hypothesised to differ in their optimal arousal levels: music is supposed to lower arousal for sleep, but to increase arousal for better concentration while studying. In theory, however, both types of music should have similar musical features, such as calm and non-complex tracks14,15,16,20,21,22. Any similarity between the two would therefore be informative for investigating general patterns in background music listening behaviour. Furthermore, we hope to demonstrate a new, data-driven way to explore musical behaviour: big data can provide insight into how music is used in reality, which may differ from theory.

The first type of music we will investigate is music used for sleep. As sleep problems increase worldwide, so do self-help strategies, such as listening to music, to help fall asleep or to improve sleep quality7,24,25,26. Studies show that up to 46% of respondents report using music to help themselves fall asleep7,27,28 and that listening to music before sleep improves sleep quality across adult populations6,9.

The second type of music we will investigate is music used while studying. This type of music surged in popularity after the introduction of the so-called ‘Mozart Effect’, the proposed effect whereby listening to music temporarily enhances cognitive performance29. The original study proposed that listening to Mozart’s sonata K448 increased performance on a spatial cognitive task included in an IQ test29. Nowadays, another explanation, the mood-arousal hypothesis, is more commonly accepted to account for the increased cognitive performance while listening to music. This hypothesis proposes that the effect is caused by an increase in arousal attributable to the pleasant music; in other words, it proposes mood as a mediator of increased cognitive performance rather than any mystical power of classical music8,30. Many students make use of this mood-boosting function of music while studying, which is supposed to benefit learning20,31,32,33,34,35. However, it is not easy to generalise about which type of music is most suitable for studying. In general, music with low complexity (no words, a stable tonality and minimal changes in tempo and amplitude) seems to have the best effects on cognitive performance22,36,37,38,39,40,41,42,43. This is consistent with EEG studies showing that the auditory system responds strongly to pattern deviations44, a process which behaviourally may result in an orienting response and ultimately distract the listener from studying45. The mood-arousal hypothesis predicts that pleasant and simple music induces the mood-boosting effect, while unpleasant or overly arousing music hinders the benefit. Just as groove preferences follow an inverted U-shape46 and the Yerkes–Dodson law describes an inverted U-shaped relationship between arousal and performance, with optimal performance at moderate arousal levels47, there seems to be an inverted U-shaped relationship between the complexity of a piece of music and its mood-boosting effect. The type and difficulty of the task, as well as individual differences, appear to interact significantly with the beneficial effect of background music while studying, making it challenging to give a simple answer23.

A recent survey of 140 participants showed that the music people listen to while studying varies considerably, spanning non-vocal, vocal, calm, jazz, pop, classical and upbeat music23. Interestingly, these genres largely overlap with those present in the playlists people use as sleep aids, based on a big data study of Spotify playlists12. Does this imply that the music used for sleep is similar to the music used for studying? Sleeping and studying involve different cognitive processes, and the desired mood and state for these activities also differ: Sleep music should induce relaxation and support the sleep process, whereas Study music should induce optimal levels of arousal without distracting the listener, supporting cognitive performance. It would be puzzling if the same type of music were used to accompany two almost opposite activities, but such an overlap would also open new research possibilities, giving further insight into the interaction between the type of music, the task, and individual differences in the effective use of background music.

This study focuses on determining to what extent the characteristics of music used for sleep and for study are shared, and on which parameters they are similar or dissimilar. First, we form a dataset of music for studying collected from publicly available Spotify playlists. Second, we compare three datasets to establish the differences and similarities among them: the newly formed Study playlist dataset, the Sleep playlist dataset12 and the Music Streaming Sessions dataset48, which is used as a proxy for General music. In doing so, we shed light on human behaviour surrounding music listening and on arousal modulation, which can be used to inform music-based interventions.

Methods

In order to assess the similarities between sleep music and study music, we used a pre-existing dataset of sleep music, formed a new dataset of study music, and used a pre-existing general music dataset as a control. These datasets were compared qualitatively at the track, genre and cluster levels, and quantitatively on individual audio features and on the statistical distance between their audio feature distributions. The datasets are introduced first, and the methods for each analysis are then described.

The datasets

The Study music dataset

To build the global dataset of Study music, we automatically searched the online streaming service Spotify for playlists containing the words ‘study’ or ‘studying’ in the title or description, using the Python package GSA49 set to automatically exclude playlists with fewer than 50 followers. Titles and descriptions are added by the creators of the playlists, which can be either Spotify users or Spotify itself, and usually reflect how the creator intends the playlist to be used, for example ‘Focus-enhancing piano for your study session’. The search was performed in spring 2021 and yielded 801 playlists. As the search was systematic but automatic, the playlist titles were inspected manually to verify that the word ‘study’ referred to listening to music while studying. Eleven playlists were excluded because they involved other uses of the word ‘study’, such as ‘John Hopkins Psilocybin study’, or because they did not contain music, such as ‘White Noise for Studying’. The resulting dataset contains 172,819 tracks spread over 790 playlists. Of these, 63,418 tracks appear in the dataset more than once, and the dataset contains 109,628 unique tracks. With the Python package GSA49, we retrieved nine audio features measured by Spotify (see Table 1), as well as each track’s Spotify trackID, each playlist’s Spotify ID and the compound genre tag (see the Genre Reduction section for more details on genre).
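
For readers who wish to reproduce this step, the sketch below illustrates the playlist search and audio feature retrieval. It is a minimal illustration only: it uses the widely available spotipy client rather than the GSA package49 used in the study, and the function names and the four-page search depth are hypothetical choices of ours.

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Credentials are read from the SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET environment variables.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def find_playlists(term, min_followers=50, pages=4):
    """Return IDs of public playlists matching `term` with at least `min_followers` followers."""
    kept = []
    for offset in range(0, pages * 50, 50):
        hits = sp.search(q=term, type="playlist", limit=50, offset=offset)["playlists"]["items"]
        for pl in hits:
            if pl and sp.playlist(pl["id"])["followers"]["total"] >= min_followers:
                kept.append(pl["id"])
    return kept

def playlist_audio_features(playlist_id):
    """Fetch Spotify's audio features for every track in one playlist."""
    page = sp.playlist_items(playlist_id)
    items = page["items"]
    while page["next"]:                      # follow pagination, 100 tracks per page
        page = sp.next(page)
        items.extend(page["items"])
    track_ids = [it["track"]["id"] for it in items if it.get("track") and it["track"]["id"]]
    features = []
    for i in range(0, len(track_ids), 100):  # audio_features accepts at most 100 IDs per call
        features.extend(f for f in sp.audio_features(track_ids[i:i + 100]) if f)
    return features

study_playlists = find_playlists("study") + find_playlists("studying")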

Table 1 Overview of the audio features that are accessible through the Spotify API and their descriptions as given by Spotify50.

The Sleep music dataset

The Sleep playlist dataset was created by extracting 225,626 tracks from 985 playlists whose titles or descriptions included the words ‘Sleep’, ‘Sleepy’ or ‘Sleeping’12. Of these tracks, 130,150 are unique. The dataset contains the same nine audio features as the Study music dataset, the track and playlist IDs, as well as genre tags.

The Study and Sleep datasets share 21,872 tracks. These tracks are mostly lo-fi and pop tracks.

The General music dataset

The Music Streaming Sessions dataset is a publicly available dataset released by Spotify on CrowdAI48 and contains the audio features of approximately 3.7 million unique tracks collected over multiple weeks in 2019. The tracks were listened to at all hours of the day by people across the world. We treat this dataset as a proxy for General music listening, as the tracks were collected across regions, people and times without reflecting any specific behaviour other than ‘music listening’. This dataset contains the same nine audio features measured by Spotify as the previous two datasets but does not contain the tracks’ Spotify trackIDs, the playlists’ Spotify IDs or genre tags.

Qualitative comparison

Comparing genres between the datasets

All Spotify tracks are assigned genre tags, such as “Icelandic post-punk” or “Instrumental maths rock”, and in most cases a single track has multiple genre tags. To understand the data better, we applied a genre reduction algorithm51. This algorithm reduces the list of sub-genres provided by Spotify for a particular track such that \(G(x) = \operatorname{argmax}_{y} \left( \sum_{i=1}^{n} g_{y}(x_{i}) \right)\), where \(x\) is the list of sub-genres of a track and \(G(x)\) is the main genre: \(g_{y}(x_{i})\) indicates whether the predetermined main genre \(y\) is a substring of the sub-genre \(x_{i}\), and the main genre with the most occurrences is chosen.
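
As an illustration of this rule, a minimal Python sketch is given below. The main-genre list shown is only a small, hypothetical subset of the 31 main genres in Table 2, and ties are broken arbitrarily.

# Count how often each predetermined main genre appears as a substring of a
# track's sub-genre tags and keep the main genre with the most hits.
MAIN_GENRES = ["pop", "classical", "jazz", "lo-fi", "ambient", "soundtrack", "sleep"]

def reduce_genres(sub_genres, main_genres=MAIN_GENRES):
    """Map a list of Spotify sub-genre tags to a single main genre (or None)."""
    counts = {g: sum(g in tag.lower() for tag in sub_genres) for g in main_genres}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None

print(reduce_genres(["icelandic post-punk", "indie pop", "swedish pop"]))  # -> "pop"
print(reduce_genres(["soft sleep chill", "sleep"]))                        # -> "sleep"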

A full Python implementation is available at GitHub.com/RebeccaJaneScarratt/Study-Sleep-Analyses. The list of main genres was created from a 14-item scale assessing preferences in music genres, the Short Test Of Music Preferences (STOMP) scale52. We replaced Oldies with five additional genres that were added in Trahan et al.17 and added four genres ourselves. These four additional genres were chosen due to their high prevalence in the dataset; they stem from the genres that Spotify assigns to its tracks, such as “soft sleep chill”. These genres are assigned based on information from listener playlists and the Spotify music curation team53. For a full overview of the 31 genres, see Table 2.

Table 2 All main genres used to reduce the genre tags.

Comparing subgroups between the Sleep and Study datasets

In previous work on the Sleep playlist dataset12, large variation in genres and audio features within the dataset was found. Subsequently, a k-means clustering analysis was performed on the audio features and revealed six distinct clusters. To assess similar variation in the unique Study dataset, we performed the same k-means clustering on the audio features, in the same way as in that study12. This was performed in RStudio using R’s built-in kmeans function, which divides the data into a given number of clusters, k, by minimising within-cluster variance. A maximum of 1000 iterations was used. To select the optimal k, we used the elbow method54; in our case, this resulted in an optimal division of the unique Study audio feature data into three clusters. These subgroups were then compared qualitatively using a radar plot of the median of each audio feature per cluster. To include them in the radar plots, Loudness and Tempo were normalised.
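
A Python equivalent of this clustering step is sketched below for illustration (the analysis itself was run with R’s kmeans). Scikit-learn’s KMeans is used here as an assumption, the feature column names are those returned by the Spotify API, and only Loudness and Tempo are rescaled, matching the radar-plot preparation described above.

import pandas as pd
from sklearn.cluster import KMeans

FEATURES = ["danceability", "energy", "loudness", "speechiness", "acousticness",
            "instrumentalness", "liveness", "valence", "tempo"]

def elbow_curve(df, k_range=range(2, 11)):
    """Total within-cluster sum of squares for each candidate k (elbow method)."""
    X = df[FEATURES].to_numpy()
    return {k: KMeans(n_clusters=k, max_iter=1000, n_init=10, random_state=1).fit(X).inertia_
            for k in k_range}

def cluster_medians(df, k):
    """Fit k-means with the chosen k and return median audio features per cluster,
    with Loudness and Tempo rescaled to 0-1 for the radar plot."""
    labels = KMeans(n_clusters=k, max_iter=1000, n_init=10, random_state=1).fit_predict(df[FEATURES])
    medians = df[FEATURES].assign(cluster=labels).groupby("cluster").median()
    for col in ("loudness", "tempo"):
        lo, hi = df[col].min(), df[col].max()
        medians[col] = (medians[col] - lo) / (hi - lo)
    return medians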

Quantitative comparison

Comparing individual audio features between the datasets

To compare differences between the datasets, we used the nine audio features available from Spotify. These cover a wide range of both basic and compound musical measures. As the calculation of these audio features is proprietary and we were unable to determine exactly which calculations and transformations underlie each feature, we based our interpretation of the audio features on Spotify’s description in its Application Programming Interface (API) reference manual50 (see Table 1). The individual audio features were compared between datasets using t-tests with Welch correction, with Cohen’s d as a measure of effect size. The audio features Instrumentalness and Acousticness exhibited strongly bimodal distributions; we therefore dichotomised these values with a split point at 0.5 and assessed statistical differences using chi-squared tests, with Cramér’s V as a measure of effect size. All p values were Bonferroni corrected.
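
The sketch below illustrates these tests in Python with SciPy (the analyses themselves were run in R). The Cohen’s d formula uses a simple pooled standard deviation, and the Bonferroni step multiplies p by a hypothetical number of tests; both are illustrative simplifications.

import numpy as np
from scipy import stats

def welch_with_cohens_d(a, b):
    """Welch-corrected t-test plus a simple pooled-SD Cohen's d."""
    t, p = stats.ttest_ind(a, b, equal_var=False)           # Welch correction
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return t, p, (np.mean(a) - np.mean(b)) / pooled_sd       # Cohen's d

def chi2_with_cramers_v(a, b, split=0.5):
    """Dichotomise at `split`, run a chi-squared test and compute Cramér's V."""
    table = np.array([[np.sum(a >= split), np.sum(a < split)],
                      [np.sum(b >= split), np.sum(b < split)]])
    chi2, p, _, _ = stats.chi2_contingency(table)
    v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))
    return chi2, p, v

def bonferroni(p, n_tests):
    """Bonferroni-corrected p value."""
    return min(p * n_tests, 1.0)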

Comparing statistical distance between the datasets

To obtain an overall measure of distance between the three datasets, we used the Kolmogorov–Smirnov (KS) distance statistic55, which is the maximum distance between two samples’ empirical distribution functions. We calculated the KS statistic for each of the audio features Danceability, Energy, Loudness, Speechiness, Acousticness, Instrumentalness, Liveness, Valence and Tempo, for each of the three possible pairwise comparisons of datasets. To interpret the results, we then took the mean of each comparison’s individual audio feature KS statistics, giving one value per comparison.
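
For illustration, the sketch below computes this aggregate distance in Python with SciPy (the study’s analyses were run in R): ks_2samp gives the per-feature KS statistic, and the mean across the nine features is the single value reported per pairwise comparison.

from scipy import stats

FEATURES = ["danceability", "energy", "loudness", "speechiness", "acousticness",
            "instrumentalness", "liveness", "valence", "tempo"]

def mean_ks_distance(df_a, df_b, features=FEATURES):
    """Mean of the per-feature two-sample KS statistics between two datasets."""
    ks_values = [stats.ks_2samp(df_a[f], df_b[f]).statistic for f in features]
    return sum(ks_values) / len(ks_values)

# e.g. mean_ks_distance(study_df, sleep_df) would give the single Study-vs-Sleep value.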

All statistical analyses were performed in RStudio version 1.3.959 using R version 4.0.0, running on Windows 10. The scripts used for analysing the dataset can be found at GitHub.com/OleAd/SpotifyStudyMusic. Figures were made using ggplot2 and the RainCloudPlots package56 or the Python packages Matplotlib and Plotly. The genre reduction script was run in Spyder using Python version 3.9.

Results

As detailed in the methods section, to assess the similarities between sleep music and study music, we compared a Study music dataset with a Sleep music dataset and a General music dataset as a control. They were compared using qualitative and quantitative methods.

Qualitative comparison

Comparing genres between the datasets

The genres that appear more than 1000 times in either the unique Sleep music dataset or unique Study music dataset are listed in Table 3 to compare the prevalence of each genre in both datasets. As the Spotify API is not always able to identify the genre of a track, 29,274 tracks from the Study dataset and 32,885 tracks from the Sleep dataset were excluded from this genre analysis.

Table 3 Genres that appear more than 1000 times in either the unique Sleep music dataset or the unique Study music dataset, sorted in descending order of prevalence in the Study dataset.

Comparing subgroups between the Sleep and Study datasets

Subgroup characteristics of Study music

A k-means clustering analysis revealed three distinct clusters in the Study music dataset. Cluster 1 in Table 4 (N = 35,729) is characterised by low Acousticness (M = 0.35), medium Instrumentalness (M = 0.46), medium Energy (M = 0.49) and high Danceability (M = 0.66). Because many of the tracks in this cluster are classified by Spotify as pop (N = 10,765), it was named ‘Pop Tracks’. Cluster 2 (N = 34,617) is characterised by high Acousticness (M = 0.952), high Instrumentalness (M = 0.902), low Energy (M = 0.132) and medium Danceability (M = 0.403). Many of the tracks are classical (N = 6296) and soundtrack (N = 5852), and therefore it was named ‘Classical Soundtrack Tracks’. Cluster 3 (N = 39,282) is characterised by low Acousticness (M = 0.271), low Instrumentalness (M = 0.196), medium Energy (M = 0.57) and high Danceability (M = 0.61) and is mainly composed of lo-fi tracks (N = 9468), and therefore it was named ‘Lo-fi Tracks’ (Table 4).

Table 4 Medians and interquartile ranges of all audio features for the 3 clusters of the Study dataset.

Subgroup characteristics of Sleep music

A k-means clustering analysis was performed on Sleep music in a previous study12. The following six clusters were identified: ‘Speechy Tracks’, ‘Radio Tracks’, ‘Acoustic Radio Tracks’, ‘Ambient Tracks’, ‘Instrumental Tracks’ and ‘Live Tracks’ (Table 5). The two biggest clusters were ‘Ambient Tracks’ (N = 117,240), which contained instrumental, ambient and meditation music, and ‘Instrumental Tracks’ (N = 32,736), containing instrumental pieces and instrumental covers. Both were characterised by high Acousticness (M = 0.957 and M = 0.888 respectively), high Instrumentalness (M = 0.917 and M = 0.893) and low Energy (M = 0.0423 and M = 0.172). ‘Instrumental Tracks’ had higher Danceability (M = 0.655) than ‘Ambient Tracks’ (M = 0.207), as the instrumental music in the former had a steadier pulse and beat than the ambient music in ‘Ambient Tracks’. Additionally, ‘Radio Tracks’ (N = 31,068) and ‘Acoustic Radio Tracks’ (N = 30,793) contained popular tracks that one could likely find on the radio, including pop, indie and soul music. They both had low Instrumentalness (M < 0.001 and M < 0.001), medium Energy (M = 0.597 and M = 0.278), high Danceability (M = 0.622 and M = 0.496) and low (M = 0.155) or high (M = 0.818) Acousticness respectively. These four subgroups are almost opposites and point towards two different behaviours when it comes to listening to music before sleep: either listening to soft, slow, instrumental tracks or listening to familiar, non-instrumental music. The final two clusters either had high Speechiness (M = 0.334) compared to all other clusters (M < 0.06 in all other clusters), giving the ‘Speechy Tracks’ cluster (N = 8275), or high Liveness (M = 0.689), giving the ‘Live Tracks’ cluster (N = 5783).

Table 5 Medians of all audio features for the 6 clusters of the Sleep dataset.

Comparing the subgroups of the two datasets

Seven audio features of all the clusters from the Study and Sleep datasets were represented in a radar plot to highlight similarities between clusters across datasets. The clusters appeared to form three groups across the two datasets. Study clusters 1 (‘Pop Tracks’) and 3 (‘Lo-fi Tracks’) share similar features with Sleep clusters 1 (‘Speechy Tracks’) and 2 (‘Radio Tracks’). Study cluster 2 (‘Classical Soundtrack Tracks’) has similar features to Sleep clusters 4 (‘Ambient Tracks’), 5 (‘Instrumental Tracks’) and 6 (‘Live Tracks’). Sleep cluster 3 (‘Acoustic Radio Tracks’) does not share consistent similarities with the other clusters. The similarities between clusters were highlighted by isolating the first two groups in separate plots (see Figs. 1, 2).

Figure 1

Radar plot of 7 audio features of Study clusters 1 (‘Pop Tracks’) and 3 (‘Lo-fi Tracks’) and Sleep clusters 1 (‘Speechy Tracks’) and 2 (‘Radio Tracks’).

Figure 2

Radar plot of 7 audio features of Study cluster 2 (‘Classical Soundtrack Tracks’) and Sleep clusters 4 (‘Ambient Tracks’), 5 (‘Instrumental Tracks’) and 6 (‘Live Tracks’).

Quantitative comparison

Comparing individual audio features between the datasets

The density plots of each audio feature per dataset can be found in Fig. 3. The results of the pairwise t-tests are represented by significance asterisks. Generally, all comparisons are significant, except for Instrumentalness and Liveness, where there is no significant difference between the Study and Sleep datasets. Due to the size of the datasets, many significant comparisons are expected even after correcting for multiple comparisons. Therefore, we primarily interpret the effect size, Cohen’s d, for all audio features except Instrumentalness and Acousticness, where Cramér’s V was used. Cramér’s V was interpreted using the following categories: < 0.20 = negligible, 0.20–0.50 = small, 0.50–0.90 = moderate, > 0.90 = large. The large effect sizes between the Sleep and General datasets are in Loudness (p < 0.001, d = 1.25), Energy (p < 0.001, d = 1.46) and Valence (p < 0.001, d = 0.93), as well as between the General and Study datasets in Energy (p < 0.001, d = 0.96), Loudness (p < 0.001, d = 0.75) and Valence (p < 0.001, d = 0.60). There were no large effect sizes between the Study and Sleep datasets, only a moderate effect size in Loudness (p < 0.001, d = 0.59).

Figure 3

Density plots per dataset for each audio feature, with t-test significance asterisks between each pair of datasets; for Acousticness and Instrumentalness a chi-squared test was used instead, with the Cramér’s V statistic given in brackets. ****p < 0.001, Bonferroni corrected.

Comparing statistical distance between the datasets

When comparing the three datasets pairwise using the Kolmogorov–Smirnov distance, we found that Study music is more similar to Sleep music (0.149) than to General music (0.252), and that Sleep music is more similar to Study music (0.149) than to General music (0.364). Moreover, Study music is more similar to General music (0.252) than Sleep music is to General music (0.364; see Fig. 4).

Figure 4

The Kolmogorov–Smirnov distance between General music and Sleep music, between Sleep music and Study music, and between General music and Study music across all nine audio features (Instrumentalness, Acousticness, Speechiness, Liveness, Tempo, Loudness, Energy, Valence and Danceability).

Discussion

Using both qualitative and quantitative analyses based on tracks, genres and audio features, we aimed to exploratively compare Study music and Sleep music. Using datasets generated from Spotify’s database, we found that Sleep music and Study music were more similar to each other than to General music (see Fig. 4). Furthermore, Sleep and Study music share similar distributions across all audio features, do not differ in Liveness or Instrumentalness, include similar genres and have similar subgroups.

The two datasets also included music from similar genres. Out of the 12 top genres in the Study playlist dataset (pop, lo-fi, classical, soundtrack, instrumental, jazz, house, sleep, rap, ambient, indie and rock), every genre was present in the Sleep playlist dataset except house. The Study playlist dataset also included 2319 tracks with the genre sleep (a genre named as such by Spotify that includes tracks their creators deemed relevant for sleep), an overt demonstration that Study music shares similarities with Sleep music. Additionally, 21,872 unique tracks appeared in both the Sleep and the Study playlist datasets, indicating that many tracks are considered equally suited for sleeping and studying.

Our cluster analyses (Figs. 1, 2) demonstrated similarities in the characteristics of clusters between the Study and Sleep music datasets. Specifically, there seem to be three groups. One group contains ‘Speechy’, ‘Radio’, ‘Pop’ and ‘Lo-fi’ tracks, characterised by high Danceability, high Energy and medium-to-low Instrumentalness and Acousticness. Another group contains ‘Ambient’, ‘Instrumental’, ‘Live’ and ‘Classical’ tracks, characterised by high Instrumentalness and Acousticness and medium-to-low Danceability and Energy. The final group contains ‘Acoustic Radio Tracks’ and is characterised by high Acousticness, low Instrumentalness, low Energy and medium Danceability. We interpret these different groupings of tracks as reflecting different preferences, given that each track was added to these datasets through human action. The group of ‘Speechy’, ‘Radio’, ‘Pop’ and ‘Lo-fi’ tracks suggests a preference of certain individuals to listen to, and therefore add, these types of tracks to sleep or study playlists, whereas another preference is to listen to ‘Acoustic Radio Tracks’. Some individuals might swap music types frequently, e.g. to find their optimal background music; however, we think that, once established, an individual would stick to similar-sounding tracks for a certain activity. Thus, we argue that these three groupings suggest three separate listening habits.

Looking at the audio features, we found that Instrumentalness and Liveness were comparable in Study and Sleep music and, although other features differed statistically between the two datasets, the effect sizes tended to be rather small. The characteristics of General music were more distant from those of Sleep and Study music when all three were compared: General music is more energetic, happier and louder than both Sleep music and Study music.

A crucial question is why there is such an overlap between Study music and Sleep music. Given the very different nature of the activities involved in sleep and study, it is surprising that people use similar types of music to accompany them. Curiously, there might be a similarity in why people use music for sleep and study. For sleep, one of the most popular motivations is to distract oneself from thoughts that might disrupt sleep17. For studying, setting a good mood and aiding concentration are popular motivations for using music23. In other words, in both activities, people use music to create a pleasant auditory environment and to focus on a specific task. To do so, the accompanying music should not attract too much attention, as this decreases performance21,45. Therefore, these two datasets might both contain music with the optimal amount of stimulation to create a suitably pleasant auditory environment.

Indeed, the features shared by Sleep and Study music are lower values of Valence, Energy, Tempo and Loudness, and higher values of Acousticness and Instrumentalness. In other words, both datasets included more tracks that are quieter, slower and more often instrumental than General music. Typical responses to slow-tempo, non-vocal music are lower arousal, reduced physiological responses and a somewhat more negative mood57,58,59.

However, not everyone reacts to music in the same way. Age, culture, personality, musical expertise and musical familiarity may all influence how an individual reacts to a piece of music60,61,62,63,64,65,66. These individual differences might explain the intra-dataset variation in both the Study and Sleep music datasets. For example, individuals high in extraversion are said to require more stimulation to reach the ideal arousal level, whereas individuals high in introversion require less stimulation to reach the same level61,67. It is therefore highly likely that individual differences, such as personality traits and other factors, influence the choice of music to accompany sleep and study. Such individual differences may be reflected in the various subgroups found for Sleep and Study music (see Figs. 1, 2). Investigating what specific individuals choose to listen to while studying or going to sleep could help clarify which traits (personality, age, culture, etc.) influence music choices related to studying and sleeping.

The current study highlights a new research trend that uses large datasets acquired from Spotify, here applied to a comparison of music used to study and to sleep. Such large datasets are beneficial for statistical power and for general-level analyses that would otherwise have been difficult to conduct. Because of company policy, Spotify does not provide any demographic information, which means that the present datasets do not allow us to investigate who is listening to the playlists. Therefore, we cannot rule out a potential mismatch between the groups of listeners. For instance, most people listening to Study playlists may be students within a certain age range, while those listening to Sleep playlists may span a different age range. Specifically, the high prevalence of Lo-fi in the Study playlists could reflect a younger population, given Lo-fi’s relative recency as a genre. To better match and expand the populations using the playlists, future studies could investigate music playlists listened to at work, using search terms such as ‘office’ or ‘programming’.

Importantly, the overlap between the two datasets does not necessarily mean that the same person uses the same music tracks for both sleeping and studying. To draw stronger associations between Sleep and Study music, as well as the underlying cognitive mechanisms, interventional approaches (e.g. using smart watches), conventional experimental approaches, or a large-scale survey with more explicit demographic information could be considered, allowing a closer look at individual behaviours. Additionally, no information on how the playlists are used is available. While we know that the creator of a playlist had a specific behaviour in mind when creating it, no conclusions can be drawn about how the followers or users of the playlist use it. However, it is likely that someone would use a playlist named ‘Music for Sleeping: Calm Piano Sleep Aid, Music for Relaxation And The Best Sleep Music’ to help themselves sleep, and that a playlist called ‘Chill music mix to study to’ would be used while studying.

In conclusion, our results show that Study music and Sleep music are similar in terms of genres, audio features and their relation to General music. While our study cannot provide a concrete explanation for this similarity, we suggest that such music creates a pleasant but not too disturbing atmosphere, enabling one to focus while studying and lowering arousal for sleep. Given the variety of genres and audio features within the playlists, it is clear that individual differences and preferences also come into play. The similarity of these two types of music, despite their presumed opposite desired effects on arousal, is surprising and suggests a re-evaluation of the role of arousal in studying, or of the influence of music on arousal. The exact influence of these music types on arousal remains unknown and should be investigated further to better understand their function. Nevertheless, our results highlight the benefit of using big datasets to compare different types of music, yielding strong evidence that music used for sleep and music used for studying, somewhat surprisingly, share many characteristics.

Data and code availability

The full datasets and the code are available at https://github.com/RebeccaJaneScarratt/Study-Sleep-Analyses.