Abstract
Humans can find music happy, sad, fearful or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form–function associations) and culturally idiosyncratic styles.
Introduction
Music, defined here as human-produced sound organized by melodies, rhythms or both, is found in every society where researchers have looked1,2,3,4. It suffuses social life, appearing in contexts as diverse as healing, dancing and infant care1,5,6, and occurs across the lifespan, through infancy7, childhood8, adolescence9, adulthood10 and old age11. The importance of music in the social lives of humans stems from its potent and diverse psychological effects, which range from pacifying infants12,13,14,15 to fomenting the collective, chaotic thrashing of rock concert mosh pits16.
A central question in the study of music is whether humans have evolved specialized cognitive adaptations to produce and respond to music. The psychology of music either comprises music-specific adaptations shaped by natural selection17,18,19 or arises as a by-product of cognitive abilities serving non-musical functions. According to this by-product account, also known as the ‘auditory cheesecake hypothesis’, music is a package of cognitively compelling stimuli moulded via cultural evolution to trigger features of human psychology that evolved for non-musical ends20,21. At least three features of music-related psychological processes can help determine whether the underlying cognitive systems are specialized adaptations: domain-specificity, early expression and universality17. A psychological process is domain-specific if it has evolved to operate on a particular class of information. It is expressed early if infants exhibit the response. Universality, which can refer either to a behaviour or to an underlying feature of human psychology, is a feature that deserves further elaboration.
A behaviour is universal when it is expressed in all human populations, barring mitigating factors. For instance, music production was expressed by 100% of populations in a sample of 315 mostly non-industrial human societies, including geographically diverse hunter-gatherers, pastoralists and intensive agriculturalists1. This universality naturally coexists with variability: not every individual in every culture is an expert producer of music (as only some individuals have extensive music training); some cultures use music less frequently than others (as with the Tsimane, who generally do not produce music in groups22); and not every individual in every culture is equally motivated to produce music (as in individuals with musical anhedonia23, for whom music production might be less rewarding than is typical). The production of music is nonetheless considered universal, as even in these cases, there is evidence for the behaviour in every population studied. A behaviour can be near-universal (sometimes called a ‘statistical universal’) if it appears above a predefined threshold but not in 100% of cultures sampled2.
Unlike a behaviour, a universal psychological mechanism or predisposition can manifest variably, not necessarily appearing in every individual in all populations24. Jealousy exhibits considerable global variation, with individuals in some cultures reporting less severe jealousy25. Yet, this variation is structured: cross-culturally, the severity of jealousy covaries with the frequency of extramarital sex and expectations of parental investment25, suggesting that jealousy is a universal emotional response that functions to ensure either parental investment (for females) or paternity certainty (for males). Like jealousy, psychological responses to music can exhibit reliable cross-cultural differences while still reflecting universal predispositions that are variably expressed depending on the environment of an individual.
To organize our discussion, we heuristically distinguish among three psychological processes at work in human musicality: music production, music perception and musical response. Music production refers to the auditory, motor and vocal processes associated with singing or playing an instrument. Music perception refers to processing that translates sounds into neural activity, which is subsequently subjected to a variety of analyses, including auditory scene analysis and the extraction of musical structure, syntax or interval relations26. Finally, musical response refers to the higher-level semantic, aesthetic, emotional and behavioural responses and inferences that follow music production and its subsequent perception (Fig. 1).
Fig. 1 | The diagram identifies topics in music perception and musical response that are commonly studied in psychology. Topics are ordered vertically by their approximate level of abstraction, which distinguishes lower-level perceptual phenomena (such as the extraction of basic acoustic information in a stimulus) from higher-level musical responses (such as enculturation). This Review focuses in depth on emotional and behavioural responses, leaving aside other musical responses such as aesthetic appreciation.
In this Review, we synthesize the literature on universality, domain-specificity and development of psychological responses to music. We first briefly discuss the mechanics of music production and music perception, before focusing on emotional and then behavioural responses to music — two rapidly advancing areas of research. By surveying cross-cultural, developmental and neuroscientific approaches, we present clear evidence for the universality and early development of emotional and behavioural responses to music. However, the evidence for domain-specificity is more mixed, suggesting that universal responses to music might draw on more general features of human psychology. We conclude by considering how cultural evolution interacts with universal aspects of human psychology to produce both cross-cultural similarities and cultural idiosyncrasies in the music of the world.
Music production, perception and response
Universality, development and domain-specificity have been key research areas for each of the music-related psychological processes (production, perception and response). For instance, music production is universal and the associated behaviours vary substantially less across cultures than within cultures1. Humans have manufactured musical instruments for at least 35,000 years27 and have likely produced vocal music for longer18,28,29. The universality and deep history of music production suggest that it is underlain by psychological mechanisms shared across humans.
Given the universality of music production, it is not surprising that many basic aspects of music perception are widespread and early-developing, such as mechanisms involved in hearing and understanding musical pitch (the psychological correlate of frequency, allowing it to be ordered on a frequency-related scale; in English, pitch is typically described as the highness or lowness of a tone)30,31,32,33. Perception starts with feature extraction, during which low-level acoustic features like timbre, intensity, location, pitch height and periodicity are decoded from the auditory stream34. This acoustic information is analyzed to process melodic, rhythmic, timbral and spatial groupings, eventually resulting in higher-level musical representations, such as tonal and metrical information (two foundational aspects of musical information) and harmonic structure34. The human auditory cortex contains neural populations selective for music35, distinct from those involved in speech perception36,37, with special selectivity for vocal as opposed to instrumental music38 and with connections to reward systems found in the midbrain39,40. Whether the psychological mechanisms underlying music production and perception are best explained by domain-general processes, such as auditory scene analysis41, or domain-specific ones remains debated, but the current overall picture is that many aspects of music production and perception form a basic part of human psychology that supports higher-level musical responses.
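These low-level features have rough computational analogues that can be extracted from any audio signal. The sketch below is an illustration using the librosa library on a synthetic tone, not a model of the auditory cortex; the mapping from each computed feature to a percept (RMS energy for intensity, spectral centroid for timbral brightness, fundamental frequency for pitch height) is a simplifying assumption on our part.

```python
# A rough computational analogue of low-level feature extraction,
# computed on a synthetic 220 Hz tone. Illustrative only.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)  # one second of a pure tone

intensity = librosa.feature.rms(y=y)                         # loudness correlate
brightness = librosa.feature.spectral_centroid(y=y, sr=sr)   # timbre correlate
f0, _, _ = librosa.pyin(y, fmin=80, fmax=800, sr=sr)         # pitch height

print("mean RMS energy:", intensity.mean())
print("mean spectral centroid (Hz):", brightness.mean())
print("median estimated f0 (Hz):", np.nanmedian(f0))
```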
Musical response refers to the semantic, emotional, aesthetic and other behavioural responses and inferences that follow music production and perception (Fig. 1). Musical responses occur in both producers and listeners of music and include many apparently higher-level responses to music such as inferring musical meaning (‘this song is about birds’), inferring expressed emotions in music (‘this song sounds happy’), directly experiencing emotions evoked from music (a song makes a listener feel happy) and moving in response to music.
Whereas musical response is generally downstream of perception, the relationship is not completely linear or serial. Musical responses do not require the analysis of rhythmic or spatial groups; for instance, tones played in isolation (without other rhythmic or melodic structure) can convey meaning, such as by sounding ‘bright’, ‘feminine’ or ‘summery’42,43. Moreover, there are indications that motor regions of the brain not only respond to structural features like rhythm and metre but are also involved in extracting beat, raising the possibility of feedback loops between music perception and response44,45,46,47. Such feedback loops undoubtedly operate differently in the brain of a performer (who has more immediate access to motor information in music) than for a listener (who has less)48. Nevertheless, our heuristic distinction between music production, perception and response is justified by how humans process music psychologically26,49 and parallels distinctions used in language sciences50.
In this Review, we will largely leave aside the mechanics of music production and perception to concentrate on the domain-specificity, development and universality of musical responses. For instance, we do not discuss cultural variation in the perception of dissonance51, the effects of musical experience on auditory processing52, or the effects of antenatal exposure on auditory perception and neural development53,54. Our coverage will focus on two sets of musical responses that have received considerable research attention and are among the most important psychological effects of music. We will start by discussing emotional inferences and responses, especially recognizing expressed emotions in music. We will then address behavioural inferences and responses, particularly being soothed and dancing.
Emotional responses to music
Individuals overwhelmingly consume and deploy music for emotional regulation9,10,55,56,57,58,59. As such, much of the research on musical responses has focused on emotional responses. This research often adopts a basic emotions perspective, according to which there are basic or discrete emotions, such as happiness and fear, as well as complex or non-basic emotions such as jealousy and solemnity60. Basic emotions are said to be innately expressed and identified, whereas non-basic emotions are seen to be less biologically fundamental and more culturally variable60. As in the broader emotion literature, the main alternatives to a basic emotions perspective are dimensional perspectives, according to which emotions are organized around a few dimensions, most commonly valence (pleasantness) and arousal (activation)61,62,63.
Regardless of the model of emotions that researchers adopt, the studies of emotional musical responses reviewed here suggest that such responses are not supported by specialized adaptations. Although the psychological mechanisms underlying emotional responses seem to be largely conserved across populations, they appear to reflect domain-general responses to emotion rather than music-specific psychological processes.
Cross-cultural similarities
Studies in which individuals were asked to rate emotions in foreign music have demonstrated that emotional expression is, to a modest degree, mutually intelligible across cultures64,65. For example, Mafa individuals in northern Cameroon accurately recognized emotions in western music designed to sound happy, sad and fearful66. Similarly, German, Norwegian, Korean and Indonesian individuals identified happy and sad instrumental performances by German musicians67. In another example, Indian, Japanese and Swedish listeners identified expressed emotions in the traditions of each other as well as in western music65,68. Finally, individuals from the USA and rural Cambodia tasked with creating music that expressed emotions like ‘sad’ or ‘happy’ created similar melodies69. The findings of these studies suggest broadly shared psychological mechanisms underlying the recognition of expressed emotions in music70.
Despite these similarities, culture still shapes how individuals recognize emotional expression in music. Participants might, on average, successfully recognize emotions in music from foreign cultures while nevertheless showing much lower accuracy than native participants. For example, although Mafa listeners successfully identified happiness, sadness and fear in western songs at a rate higher than chance, Canadian listeners accurately inferred the expressed emotion nearly twice as often66 (Fig. 2). Experimenters found similar results in several additional experiments65,67,68. In one, Canadian adults correctly identified joy, sadness and anger but not ‘peace’ in North Indian classical music64. In another, Swedish, Indian and Japanese participants identified anger, fear, happiness and sadness more successfully than supposedly ‘non-basic emotions’ like spirituality, solemnity and longing in western excerpts and the music of each other68. In a third study, Korean and Indonesian participants identified happiness and sadness in German music with relative ease but had difficulty recognizing surprise and disgust67. In fact, surprise and disgust were also hardest for Norwegians and Germans to recognize in German music (surprise tended to be confused with happiness and disgust was confused with fear and anger).
Fig. 2 | a, Mafa listeners in Cameroon and western listeners both identified happiness, sadness and fear in western music above chance, but the responses of western individuals were accurate much more often. b, Patients who underwent anteromedial temporal lobe excision (typically including the removal of the amygdala) had an impaired ability to recognize both scary music and fearful faces. Performance across the auditory and visual tasks was moderately correlated, raising the possibility that emotional recognition in music shares neural substrates with emotional recognition in faces. The emotions shown here represent a small subset of the emotions explored in this literature. Asterisk indicates that a significant correlation exists between the auditory and visual emotional tasks. Part a adapted with permission from ref. 66, Elsevier. Part b adapted with permission from ref. 100, Elsevier.
Some features of music are interpreted more variably across cultures than others, which further complicates the recognition of expressed emotion in music71. For instance, participants from the UK and participants from north-western Pakistani tribes made similar emotional inferences from features such as tempo, loudness and pitch. However, participants from the UK associated the major mode with happiness and the minor mode with sadness, whereas Pakistani participants apparently did not pay attention to mode in one study72 and exhibited the opposite set of responses in another73. In a similar vein, the extent to which both Chinese and Papua New Guinean participants associated the major and minor modes with positive and negative emotions, respectively, was predicted by their familiarity with western music74,75.
Although more precise evidence is needed concerning the exact effects of cross-cultural musical experience, together, the results above suggest that the recognition of expressed emotion in music involves a combination of culturally learned emotion cues and more universal psychological mechanisms.
Developmental trajectory
Children can identify some emotions in music by 3 or 4 years of age, although findings have been variable (Fig. 3a). For example, British 3-year-olds were presented with novel music for children and asked to indicate whether performances sounded ‘happy’ or ‘sad’. The children successfully identified happiness and sadness in both vocal and instrumental music76, with markedly better performance on ‘happy’ music. Likewise, Finnish and Hungarian children aged 3 and 4 years identified happiness and sadness in diverse musical performances (a folk song, stimuli produced by musicians) but not anger or fearfulness77. In another study, Canadian 5–8-year-olds identified high-arousal emotions (happiness and scariness) more successfully than low-arousal emotions (peacefulness and sadness) in musical stimuli designed for emotion recognition experiments; however, they were not as successful as 11-year-olds, who exhibited adult-like levels of accuracy78. Contrasting with evidence of early emotion recognition abilities, several studies have found that 3–4-year-olds failed to distinguish happy from sad songs79,80, although these studies used western classical music as stimuli, which complicates their interpretation.
Fig. 3 | The ages of onset for the emotional (part a) and behavioural (part b) psychological responses to music discussed in the text. Emotional inferences appear in blue, while behavioural responses have been separated into form–function inferences (plum), responses to infant-directed song (green) and responses to rhythm (light blue).
Although developmental changes to emotional recognition in music parallel changes to emotional recognition in non-musical speech81, it remains unclear to what extent developmental differences are due to culture-specific learning. On the one hand, inferring emotional expression from mode seems both to develop after 5 years of age and to be cross-culturally variable in adulthood, suggesting a role for cultural learning72,79. On the other hand, children and even adolescents have difficulty identifying anger and fear in music77,80,82, yet these are among the emotions that adults recognize in music most reliably across cultures64,66,67,68, suggesting that some developmental trajectories play out similarly the world over.
Whether infants and toddlers can recognize emotion in music remains an open question. Several studies conducted in North America show that 9-month-olds can discriminate happy music from sad music83,84,85. However, discrimination does not imply recognition and, with few exceptions86, very little research has investigated emotional recognition in music in toddlers and infants younger than 3 years of age. This gap is somewhat surprising, given that many developmental paradigms, such as measuring looking time toward cross-modally matched faces and musical examples, could be straightforwardly adapted for such investigations. Indeed, several findings have raised the possibility that infants and toddlers can infer emotional content in music. For example, infants are both surrounded by music and fascinated by it7,87, they are attentive to the emotions of individuals with whom they interact88,89, and infants show a distinct set of psychophysiological responses to unfamiliar foreign lullabies relative to non-lullabies14. Thus, studies on emotional recognition in young infants are feasible and will help resolve to what extent infants are predisposed to associating emotions with acoustic phenomena.
Mechanisms for emotional recognition
The evidence that emotional recognition in music involves universal psychological mechanisms does not imply that those mechanisms are domain-specific. Rather, at least three lines of research suggest that emotional recognition in music draws on the same domain-general mechanisms involved in judging expressed emotion from non-musical stimuli such as non-musical vocalizations and facial expressions.
First, vocalizations produced in both musical and non-musical contexts use similar cues to communicate emotion. For example, in both music and speech, variations in tempo, volume and pitch often (although not always) communicate similar emotional states90,91,92. Like happy-sounding speech in English and Tamil, happy-sounding music in western and South Indian traditions uses larger pitch intervals93. Angry speech and angry music are both characterized by faster and louder vocalizations, contrasting with the slower and softer sounds of music not typically found in angry contexts, such as lullabies1. Non-musicians incorporate cues, such as tempo and volume, when producing emotional music94. When asked to make music sound happier, sadder or angrier, Finnish 3–5-year-olds adjusted tempo, pitch and volume in ways that mimic emotion cues in speech95. Chinese adults even attributed arousal and valence to environmental sounds, such as clapping, thunder or a car engine, when those sounds displayed tempo, volume and pitch cues that signal emotion in music and speech96.
Second, activity in brain regions during emotional recognition in music seems to correlate with brain activity involved in processing emotions in non-musical stimuli97,98. For instance, damage to the amygdala impairs the recognition of both scary music and fearful faces, and the performance of patients on both tasks was correlated99,100. In other research, participants exhibited activity in the medial prefrontal cortex not only when asked to track the emotional content of musical and non-musical linguistic vocalizations101 but also when processing the emotional content of body movements, facial expressions and non-linguistic interjections (such as “aah”)102. Finally, watching movements and hearing sounds associated with emotions evoked similar neural representations in visual and auditory areas of the brain, respectively, which suggested that emotional stimuli presented in diverse modes can elicit common representational structures103.
Third, children exhibit similar developmental trajectories for recognizing emotion in speech and music. Children start to recognize some emotions in speech and music by the age of four; they are better at identifying happiness and sadness than fear or anger in speech and in music; and they are capable of identifying emotions in other languages, although they are most accurate when listening to their native language81,104,105,106. When asked to rate clips of speech, music and affect bursts (such as laughter), the performance of Australian children in three age groups (7–11 years, 12–14 years and 15–17 years) and adults (18–20 years) was not distinguishable when labelling speech and music, although they were more accurate when labelling affect bursts81. Thus, the same developmental changes that allow children to recognize emotion in speech appear to be involved in recognizing emotion in music.
Despite many indications that recognition of musical and non-musical emotion expression draws on the same cognitive mechanisms, how emotion is communicated in music remains unresolved107,108,109. Consistent with basic emotion theories, basic emotions (such as happiness and fear) appear to be recognized in music both earlier in development and, in some studies, more reliably within and across cultures relative to non-basic emotions (such as jealousy and solemnity)64,66,68,78,90. However, researchers do not agree on which emotions are basic; there is conflicting evidence on whether there are distinct physiological correlates distinguishing basic emotions; and many canonical findings on emotional expression in speech come from studies in which actors portrayed emotional states (such as by acting happy), which might not accurately reflect naturalistic emotional displays62. These criticisms have inspired dimensional perspectives on communication of emotion in music, especially those centring on valence and arousal61,62,63. In support of such theories, an analysis of 53 studies published since 2003 found that a dimensional structure based on valence and arousal explains more variance in participants’ recognition of emotions in music than does a structure based on five basic emotions (anger, fear, happiness, love-tenderness and sadness)62. In addition, English speakers from 60 countries rating unfamiliar, foreign songs from 86 societies largely agreed with one another in their ratings of the valence and arousal of songs5. Thus, valence and arousal are reliably detectable dimensions of musical expression by listeners.
Resolving how emotion is communicated in music is complicated by studies of emotions felt while listening to music, which are difficult to reconcile with either the basic emotion or the dimensional perspective. A series of experiments with French-speaking listeners resulted in a nine-factor solution for recognized emotions in music (with factors such as amazement, tranquillity and power) and a related, although distinct, nine-factor solution for emotions felt from music (with factors such as transcendence, peacefulness and tension)110. Neither nine-factor solution was accounted for by basic emotion or dimensional theories. In another study, experimenters presented thousands of music samples to participants from the USA and China and asked them to label how the music made them feel, either by choosing from a list of 28 emotional categories or by rating each sample on 11 distinct Likert scales111. Thirteen dimensions of subjective experience were shared across both cultures, including basic emotions such as fear, joy and sadness as well as non-basic emotions like annoyance, triumph and dreaminess. Contrary to basic emotion accounts, non-basic emotions exhibited higher correlations across cultures than presumably basic emotions. Meanwhile, valence and arousal exhibited lower cross-cultural convergence than many other subjective experiences, challenging the theory that emotions, whether in music or more broadly, are constructed from these basic building blocks.
Although the structure of emotional communication in music remains unresolved, a general conclusion is clear: there is little reason to suspect that humans have specialized cognitive mechanisms for expressing and recognizing emotion in music. Rather, existing evidence suggests that individuals employ domain-general mechanisms for emotional communication in both music and speech. In this light, some basic aspects of musical understanding accord with the view that music is embedded in biology as one of several types of vocal signals18. As with much of human behaviour, emotional expression in music involves ‘variations on a theme’, where universal predispositions are modified by cultural exposure24. Individuals from distinct cultures can recognize emotions in the music of one another, yet they more successfully recognize some emotions relative to others and can fail to accurately interpret some acoustic cues. Similarly, young children can recognize expressed emotions in music although with limited and variable success. Thus, the role of domain-general mechanisms for the expression of emotion in music demonstrates how the diversity of the world’s music is structured by pan-human psychological predispositions.
Behavioural responses to music
In addition to processing purely auditory information (like pitch or timbre) and inferring emotional content (like expressed emotion described in the previous section), listeners also make inferences about the behavioural functions of music. By behavioural functions, we mean the social and behavioural ends for which people apparently produce music, including soothing an infant, accompanying dance and healing illness. Although these functions can leave sonic signatures on a recording, such as the sound of thumping feet in a group dance, this is not necessarily the case, as the behavioural function is primarily determined by the goals of the performer.
Although the behavioural functions of music are related to its emotional content, they merit separate consideration for at least two reasons. First, individuals worldwide produce music for specific behavioural functions, such as dance or infant care, and comparative research suggests that many of these specific behavioural functions themselves appear reliably across societies1,5. Second, genetic evolutionary theories often explain the evolution of music in the context of specific behavioural functions such as enabling dancing18,29, soothing infants18,28, signalling mate quality112 and promoting social bonding19. Insofar as the music faculty involves domain-specific cognitive adaptations, we should expect those adaptations to be specialized for these behavioural functions.
Here, we review evidence that universal characteristics of human psychology guide individuals to respond to particular acoustical forms in similar ways. For example, humans around the world find slow, melodic music soothing and dance in response to louder, rhythmically dominated songs. In many experiments and a variety of populations, naive listeners intuit these associations: not only do they expect associations between song form and function but they reliably identify the behavioural functions of unfamiliar songs. Research demonstrates that behavioural responses to music, particularly to dance songs and lullabies, develop early and reliably across societies, although existing studies cannot determine whether those responses reflect domain-specific mechanisms.
Universal behavioural functions
A general tendency across animals is for communicative behaviours to be shaped by their intended function, manifesting as form–function associations in vocalizations113. For instance, low-frequency, harsh vocalizations tend to signal hostility because they are reliable indicators of body size114,115. Similar form–function associations characterize many human vocalizations, including spontaneous laughter116 and infant-directed speech15.
A series of experiments has investigated form–function associations in music using three related approaches: asking naive participants whether they can infer relationships between the form and function of foreign music; computationally identifying the acoustic features associated with particular behavioural functions in music; and analyzing how those acoustic features explain the inferences of listeners1,5,15,117,118,119. These approaches are informative for two reasons. First, they test whether songs that share behavioural functions exhibit common acoustical designs across societies, helping uncover whether universals in human psychology guide both musical production and response. Second, they test whether individuals have shared conceptions of what songs should sound like5,119. Although it can be difficult to determine whether these conceptions result from cultural learning or intuitions that predate cultural encounters with music, studying them in young children or infants helps elucidate to what extent form–function intuitions are shaped by cultural experience117. Methodologically, form–function experiments share a basic structure1,5,15,117,118,119. Naive listeners are presented with random excerpts of foreign songs, typically field recordings from small-scale societies. They are then asked to evaluate the songs’ functions, such as by rating them on scales or by selecting among behavioural functions in forced-choice tasks. Finally, researchers identify acoustic features that predict the inferences of listeners to deduce their intuitions about song functions.
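To make the final analytical step concrete, here is a minimal sketch of regressing listener inferences on acoustic features. The features, scales and data are hypothetical placeholders of our own; the published studies use larger feature sets and more elaborate statistical models1,5,117.

```python
# Sketch: predict mean listener ratings of a song's function from
# acoustic features of each excerpt. All data here are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_songs = 120                        # hypothetical number of excerpts

# Hypothetical acoustic features, one row per foreign-song excerpt
X = np.column_stack([
    rng.uniform(40, 200, n_songs),   # tempo (beats per minute)
    rng.uniform(0, 1, n_songs),      # inharmonicity (0 = harmonic, 1 = noisy)
    rng.uniform(1, 30, n_songs),     # pitch range (semitones)
])

# Mean listener rating of 'used for dancing' (1-6 scale) per excerpt;
# random here, so the fit itself is illustrative only
y = rng.uniform(1, 6, n_songs)

model = LinearRegression().fit(X, y)
print(f"in-sample R^2 of acoustic features: {model.score(X, y):.3f}")
print("per-feature weights:", np.round(model.coef_, 3))
```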
Across a variety of populations — among young children117, in small-scale societies119, in massive online experiments conducted with English speakers1,15 and in multilingual online experiments with participants in 59 countries119 — naive listeners infer the behavioural function of foreign songs above chance5 (Fig. 4). At least three lines of evidence suggest that this performance reflects reliably developing intuitions grounded in a universal human psychology more so than encounters with similar music. First, the familiarity of listeners with globalized musical culture does not explain their ability to identify song functions. Individuals in smaller-scale societies with limited access to western music successfully identified form–function relationships, and listeners whose experiences more closely matched the culture of the singer (whether measured in linguistic or geographic distance) were only modestly more successful at identifying them119. Second, children performed roughly equivalently to adults (with significant but very small effects of age), suggesting that abilities to infer form–function relationships required little experience117. Third, individuals unfamiliar with particular song domains — namely, westerners unfamiliar with healing songs — nevertheless identified form–function relationships1,5, suggesting that intuitions develop even without exposure to the relevant domain.
Fig. 4 | Four lines of evidence indicate that naive listeners of diverse ages and cultural backgrounds can infer the behavioural functions of unfamiliar foreign songs. The four song types studied here (dance songs, healing songs, love songs and lullabies) represent a subset of the many behavioural ends for which individuals use music. In a forced-choice categorization, English-speaking participants in a massive online experiment (n = 29,357) successfully categorized dance songs, lullabies, healing songs and love songs at rates higher than chance (25%); love songs were the hardest to recognize (part a). Percentages below the right-hand panel show base rates of response for each type. English-speaking children (n = 2,624) successfully identified dance songs, lullabies and healing songs with only slight increases in accuracy across ages; love songs were not tested (part b). Adults in 49 countries (n = 5,524) who each spoke one of 28 non-English languages (part c) and participants in three smaller-scale societies in Indonesia, Ethiopia and Vanuatu (n = 116) were presented with the same foreign dance songs, lullabies, healing songs and love songs (part d). For both the non-English-speaking internet users and participants in smaller-scale societies, the experiment was completed in the local language only. Both the non-English-speaking internet users (part e, left half of violin plots) and participants in smaller-scale societies (part e, right half of violin plots) successfully identified dance songs, lullabies and healing songs (that is, songs were rated above the average rating on the matching scale, indicated by z-scored ratings above zero in the violin plots); love songs were not recognizable. Part a, right, adapted with permission from ref. 1, ©The Authors, some rights reserved; exclusive licensee AAAS. Part b, right, © 2022 APA; adapted with permission from ref. 117. Part e adapted from ref. 119, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
Analyses of acoustic properties of songs have provided strong evidence of form–function associations in the music of the world. First, low-level acoustic properties of songs extracted using automated techniques, such as roughness or inharmonicity, reliably co-occurred with behavioural functions across diverse, distantly related human societies1. Moreover, a machine learning model successfully classified the behavioural functions of songs on the basis of acoustic features, even when it was trained on data from songs from some societies (such as from 29 of 30 world regions or from all Old World societies) and evaluated using song data from other societies (such as from the 30th world region or all New World societies)1. Acoustic features predicted not only the actual behavioural functions of songs but also the functions that listeners inferred1,117. Together, these analyses suggest that universal features of human psychology predispose individuals in any society to associate particular sounds with certain behavioural functions.
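The train-on-some-societies, test-on-others logic is a form of leave-one-group-out cross-validation. The sketch below shows that logic with simulated data; the features, labels and choice of classifier are our illustrative assumptions, not the pipeline of the original study1.

```python
# Sketch: train a song-function classifier on all world regions but
# one, then test on the held-out region. Data here are simulated, so
# accuracy will sit near chance; the point is the validation scheme.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_songs, n_features, n_regions = 300, 8, 30

X = rng.normal(size=(n_songs, n_features))                    # acoustic features
y = rng.choice(["dance", "lullaby", "healing", "love"], n_songs)
region = rng.integers(0, n_regions, n_songs)                  # region of each song

# Each fold trains on 29 regions and evaluates on the 30th
scores = cross_val_score(
    RandomForestClassifier(random_state=0),
    X, y, groups=region, cv=LeaveOneGroupOut(),
)
print(f"mean held-out-region accuracy: {scores.mean():.2f} (chance = 0.25)")
```

Held-out-region evaluation matters because songs within a region resemble one another; testing on an unseen region shows that the acoustic form–function mapping generalizes beyond any one musical tradition.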
Development and domain specificity
The universality of form–function associations suggests that different psychological mechanisms are involved in responses to songs of distinct behavioural functions. Turning to development and domain-specificity, we focus here on responses to lullabies and dance songs, for several reasons. Lullabies and dance songs are the most stereotyped song domains across cultures and are identified by naive participants with the highest accuracy1,5,117,119. They have also been hypothesized to be central to the evolution of music19,120, such as in the context of credible signalling18,28,29. Among the different behavioural responses to music, responses to lullabies and dance songs are most likely to reflect evolved specialized adaptations, making them prime candidates to study early development and domain specificity.
As we review here, the psychological responses underlying both lullabies and dance songs appear early in development in human populations around the world. However, whether these responses reflect domain-specific cognitive processes remains unresolved.
Lullabies
Infants seem predisposed to responding to infant-directed songs and, in particular, to songs that are intended to soothe them or put them to sleep (lullabies) (Fig. 3b). Canadian infants preferred infant-directed songs over non-infant-directed songs121 and preferred maternal infant-directed song over maternal infant-directed speech122. Infants were soothed by familiar songs more than unfamiliar ones12 and, in at least one experimental paradigm, by lullabies more than play songs (vocal music directed towards children, often with excitatory, amusing qualities)13. Such behavioural responses are a likely reason why, across demographics, most parents in the USA sing to their infants daily7.
Several lines of research indicate that early-developing responses to lullabies are universal. Parents worldwide sing to their infants1, and infant-directed songs exhibit acoustic regularities14,15. Infants in the USA relaxed in response to foreign lullabies, more so than to non-lullabies, and relaxed the most to lullabies that exemplify infant-directedness14, suggesting that lullabies use common features to evoke similar psychological responses15. Although this idea might seem intuitive, consider that most lullabies infants hear come from their caregivers. Infants are highly sensitive to the identities of the individuals who interact with them, forming inferences about individuals on the basis of the language or dialect they speak123, the foods they eat124 and the music they produce125,126. Given this, infants might well be calmed by anything a trusted caregiver does for them. However, infants relax in response to lullabies produced by unfamiliar individuals in unfamiliar cultures and in unfamiliar languages that the infant cannot understand, showing that lullabies produced worldwide are well-designed to calm infants, even in the absence of rich social cues of caregiver identity.
Evidence is mixed for whether behavioural responses to lullabies reflect domain-specific adaptations. On the one hand, humans seem to respond most to lullabies during infancy, consistent with specialized cognitive mechanisms being expressed in the developmental stages when they are most useful. On the other hand, according to a preprint that has not yet undergone peer review, many English speakers use lullaby-like music (such as pop music, including a ‘lullaby’ genre label on Spotify) to fall asleep127, and many features of lullabies (such as lower tempo, loudness and energy) are reliably present in many other forms of music127. Furthermore, infants are soothed by many sounds other than lullabies, most notably shushing128,129. Sounds with minimal formant structure, including shushing or white noise, were effective at masking other sounds, such as tones or speech130, facilitating sleep in both infants129 and adults131 (as they were less likely to hear random sounds and be awoken by them). It therefore remains unclear whether lullabies soothe infants because of cognitive mechanisms specialized to respond to them or because the songs appeal to cognitive mechanisms that evolved for non-musical functions.
Dance songs
The perception and processing of rhythmic information, essential for the behaviours associated with dance, begin early (Fig. 3b). Newborns discriminated between languages with differing rhythmic profiles132, and the patterns of neural activity in Hungarian neonates indicated sensitivity to onsets and offsets of musical rhythm as well as the rate at which sounds are presented133,134. Indeed, the music perception abilities of infants are attuned to rhythm. European 2-month-olds perceived differences in rhythm and tempo in tone sequences135,136,137, and Canadian 7-month-olds showed EEG responses frequency-locked to rhythms138. Moreover, the developmental trajectory of rhythm perception is suggestive of perceptual narrowing: North American infants reacted similarly to disruptions of western and Balkan rhythms at 6 months yet did not react to disruptions of Balkan rhythms at 12 months139,140.
Infants also move in response to rhythms. In two experiments, Swiss and Finnish infants aged 5–24 months listened to clips of music, rhythm and speech141. Although no infant demonstrated entrainment — the synchronization of actions, such as body movements, to a recurring rhythmic event — the infants moved more to music and rhythms than to speech. In addition, although the youngest infants moved more inconsistently, the experimenters found no changes in the responses of infants between the ages of 7 and 24 months. These behaviours are commonly observed in naturalistic settings: in a sample of US parents of infants aged 0–24 months, the vast majority reported seeing their infant dance in the first year of life142. Humans appear to come into the world ready to respond to rhythm.
Despite these early perceptual abilities, individuals must still learn to entrain to a beat143. Infants aged 8 months in the USA discriminated between synchronous and asynchronous dancing that they observed144, yet studies with Japanese infants and German preschoolers suggested that reliable beat entrainment does not appear to develop until toddlerhood145,146. Even so, the ability to synchronize to a beat is modest at such young ages, as any parent can attest. Studies with participants from the USA suggest that the accuracy of synchronized movements does not approach adult levels until 10–12 years of age147,148.
Whether dance and other rhythmic behavioural responses to music reflect domain-specific specializations remains an open question. The only animals aside from humans that spontaneously perceive a beat and synchronize to it are parrots45,149,150. This observation has been taken as evidence that a capacity for rhythm is not a derived adaptation but rather a by-product of advanced vocal learning abilities, which both parrots and humans exhibit149. Vocal learning involves intrinsic rewards for predicting the temporal structure of auditory sequences and establishes tight reciprocal communication between motor planning regions and forebrain auditory structures45. As a result, individuals are motivated to produce synchronized action, such as dancing or singing to music, which is intrinsically rewarding. This explanation of beat entrainment is similar to that developed within a model of predictive coding of music, which also posits that synchronized action is a way of reducing reward prediction error (although without invoking the advanced ability of humans for vocal learning)151. Regardless, these explanations suggest that the rhythmic aspects of spontaneous dancing might derive their pleasurable outcomes152,153,154,155 via cognitive mechanisms that are not specific to music.
Some observations still raise the possibility that the cognitive mechanisms involved in beat perception and entrainment are domain-specific adaptations45. First, the capacity for beat perception and synchronization is not shared with the closest living relatives of humans, chimpanzees156. Second, a complex neural architecture underpins rhythmic entrainment in humans44. Third, humans can and do entrain to rhythms for long periods of time, unlike parrots, which entrain only for shorter durations150. Fourth, beat entrainment is typically a social activity in humans2,157, whereas in parrots it is not. Last, two genetic loci associated with the self-reported ability to synchronize (by clapping) to a beat are in ‘human accelerated regions’158 — that is, in regions of the human genome that have substantially diverged from chimpanzees. Together, these observations have been taken as evidence that humans evolved specialized adaptations for music through gene–culture coevolution (the interaction of genetic and cultural evolutionary processes): the cultural invention of music could have subsequently selected for domain-specific (music) adaptations19,45.
Although each of the five observations above represents a promising area to test the domain-specificity of rhythmic abilities, each is still consistent with rhythmic entrainment being a by-product of vocal learning. The social aspects of dance could simply reflect the profound sociality of humans as opposed to any specialization for rhythm. The complex neural architecture, increased motivation for rhythmic engagement, and the absence of beat perception and synchronization in non-human primates could all reflect selection for sophisticated vocal learning in the human lineage159,160,161. By studying the overlap of mechanisms involved in beat perception and synchronization with those of vocal learning, future research will better pinpoint whether human psychology is specialized for rhythm.
In summary, humans appear universally predisposed to find lullabies soothing and to move rhythmically in response to dance songs, and these predispositions appear early in the populations where they have been studied. However, current research cannot establish whether responses to lullabies and dance songs stem from domain-specific, evolved specializations or are instead by-products of mechanisms that evolved for non-musical functions. More generally, work on behavioural responses to music advances the understanding of musical diversity and function. It demonstrates that music is not a fixed biological response, adapted for a single end like mating or group bonding. Rather, it is deployed for many social goals, some of which appear to be universal, particularly soothing infants and dancing. This universality reflects shared features of human psychology, which predispose humans to respond in particular ways to certain sounds and which, in turn, produce form–function relationships in the music of the world.
Cultural transmission of music
The music of the world exhibits both profound similarities and striking idiosyncrasies. These patterns of universality and diversity can emerge and persist through cultural evolution, which both crafts ubiquitous musical traditions adapted to shared features of human psychology and canalizes idiosyncratic cultural differences in musicality162.
As an example, consider the universal tendency of vocal music to be composed of predominantly small melodic intervals and rhythmic patterns defined by integer ratios1. These characteristics could reflect biological specializations to produce music18,19,120. Alternatively, they could emerge as individuals preferentially adopt and perform music that is easier to learn and transmit163,164, paralleling how language-like systems evolve to become more transmissible across generations165,166,167. For instance, Scottish participants were asked to imitate random drum sequences; their attempts became the model stimuli for the next group of participants, who in turn produced sequences for a subsequent group. Over the course of the study, as participants transmitted their attempts, random sequences evolved into rhythmically structured patterns163. The patterns exhibited near-universal rhythmic features, such as hierarchical structure and isochronous beats, arguably because they were easier to learn and transmit. Similarly, participants who produced and transmitted sets of whistled signals eventually developed whistled patterns that exhibited some but not all melodic near-universals168. Ubiquitous musical features might emerge simply as performances adapt to the constraints of memory and learning; biological adaptation need not be the primary explanation for such effects.
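As an illustration of how such convergence can arise, the toy simulation below (our own construction, not the model of the drumming study163) passes a rhythm along a transmission chain: each ‘participant’ reproduces the previous inter-onset intervals with motor noise plus a weak pull toward integer multiples of an assumed base duration. Random intervals drift toward small-integer ratios within a few generations.

```python
# Toy transmission-chain simulation: categorical perception of durations
# (snapping toward an integer grid) plus noise gradually regularizes an
# initially random rhythm. Parameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
base = 0.25                               # assumed base duration (seconds)
intervals = rng.uniform(0.1, 1.0, 8)      # generation 0: random rhythm

for generation in range(12):
    # Nearest point on the integer grid (no zero-length intervals)
    nearest = np.maximum(np.round(intervals / base) * base, base)
    # Reproduction mixes the heard interval with its categorized form
    intervals = 0.7 * intervals + 0.3 * nearest
    intervals += rng.normal(0, 0.01, intervals.size)   # motor noise

print("final intervals (s):", np.round(intervals, 2))
print("as multiples of base:", np.round(intervals / base, 1))
```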
Cultural evolution can produce widespread patterns in music through mechanisms beyond making performances easier to learn and reproduce. Researchers increasingly focus on how individuals produce and selectively retain cultural products evaluated as best satisfying the goals of an individual, a process labelled ‘subjective selection’169. Subjective selection seems to underlie the evolution not only of useful technology169,170 but also of many domains of so-called ‘symbolic’ culture, including social norms171,172, fictional narratives173,174, and religious practices and beliefs175,176,177. Subjective selection is a promising explanation for some musical universals. As long as individuals consistently perceive certain musical features to be useful for producing particular ends, cross-cultural convergence should be expected169. If individuals everywhere tend to dance to certain sounds or to be soothed by certain sounds or regard certain sounds as communicating particular emotions, then cultural evolution should lead to similarities as individuals craft and retain music that seems to best satisfy those ends. As we have shown, shared features of human psychology indeed predispose humans to respond to music in similar ways. Such predispositions might result from human-specific adaptations, such as the physical limits of human auditory perception, or they might result from constraints that are shared across species114. Cultural evolution likely exploits these shared psychological predispositions to produce compelling performances, yielding reliable cross-cultural associations between musical form and emotional content65,66,67 or musical form and behavioural function1,5,118,119.
Cultural transmission also sustains and drives musical diversity. Differences in music can emerge for many reasons, such as social structure6,178, motor constraints179 or stochasticity163. These differences can, in turn, stabilize as the cultural exposure of individuals canalizes how they produce or respond to music180. For instance, Australian undergraduates show memory advantages for melodies in familiar compared to unfamiliar tuning systems181,182. Similarly, North American and western European adults have difficulty remembering or producing rhythmic patterns that do not exhibit a familiar metrical structure (isochrony)183,184,185,186. These types of biases seem to develop early, as infants become accustomed to the music they are exposed to139,140. Such musical enculturation, a topic of longstanding interest in music research187, has been corroborated by cross-cultural studies, which reveal patterns consistent with a core set of musical universals underlying broad cross-cultural diversity. For example, according to a preprint that has not yet undergone peer review, in 39 participant groups across 15 countries, differences were documented in the distributions of preferred rhythmic integer ratios in a tapping task, often reflecting local musical traditions188. Nevertheless, all participant groups favoured small integer ratios, indicating that discrete representations of rhythm were universal. As cultural traditions diverge and differences become canalized, music diversifies189,190,191,192, but it apparently always retains some universal properties.
By crafting products that are memorable, transmissible and (most importantly) compelling for achieving specific ends, such as dancing or communicating emotion, cultural evolution creates auditory cheesecake. In other words, generations of cultural transmission and ingenious tinkering interact to produce compelling auditory stimuli that appeal to psychological mechanisms that exist for non-musical functions.
Summary and future directions
In this Review, we provided evidence of the universality and early development of many psychological responses to music yet uncovered few indications of innate domain-specificity. Although the systems underlying these responses could become specialized for or adapted to music over the course of development193, the current evidence is consistent with music communicating emotions, soothing infants, urging individuals to dance, and inducing other emotional and behavioural responses by appealing to features of human psychology that have evolved for non-musical functions. Moving forward, it will become important to further investigate how genetic and cultural evolution give rise to musical behaviour while expanding the musical responses under consideration. In that vein, our Review highlights four key topics for future work to address.
First, research in neuroscience and genetics provides powerful new tools to study the neural and genetic mechanisms underlying musical responses. These tools, in turn, will allow researchers to better assess whether humans have evolved specialized adaptations for responding to music. For instance, research has shown that the neural and genetic mechanisms involved in beat perception and synchronization are also involved in vocal learning, consistent with the by-product account reviewed above45,158,159. Similar approaches applied to other emotional and behavioural responses can help map out the proximate and ultimate reasons humans find music so compelling.
Second, future research will help clarify how universal psychological responses give rise to the profound musical diversity observed in human societies. Although explaining and studying musical diversity is a focus in ethnomusicology6,194,195, cognitive and behavioural research on music has, with few exceptions179,196, overlooked the question of why musical traditions vary in the ways that they do. As researchers gain a better grasp of how and why psychology and culture vary across populations25,197,198, the ability to explain the drivers of musical diversity will also improve.
Third, research on psychological responses beyond emotion will help elucidate the diverse social roles of music. Most research on psychological responses to music has focused on emotional communication, yet music has many other effects, including many beyond the emotional and behavioural responses covered here. Across cultures, individuals use music to heal illness, mourn death, tell stories, greet visitors and demonstrate virtuosity1. Music can influence the content, vividness and sentiment of directed imagination199 and help induce mystical experiences for individuals taking psychedelic drugs200. Songs can evoke animals, as in the Sámi yoik tradition201, as well as communicate a staggering richness of information202. Depending on the culture, people can interpret differences in pitch to mean that particular sounds are hot, far, smooth, old, full, active, happy, sleepy, wintry, masculine, and either like a crocodile or like individuals who follow crocodiles43. Strikingly, even these inferences are, to some degree, interpretable across cultures, suggesting cross-domain and cross-culturally consistent mappings that connect concepts, acoustic features and other sensory information43,69,203. Research on responses beyond emotion can advance our understanding not only of the diverse effects of music but also of the more general processes involved in deriving meaning from sensory stimuli.
Last, musical aesthetics represents a uniquely controversial and difficult topic for future research. The most obvious aspect of music perception is that music sounds good, yet aesthetic value in music remains poorly understood204. This gap is demonstrated by the ongoing difficulty of accurately predicting individual music preferences205, even for corporations that benefit hugely from doing so, such as streaming and recommendation services like Spotify and Apple Music. Research investigating why music is pleasant will expand our understanding of why individuals produce and listen to music.
Research connecting the psychology of music to its cultural and biological evolutionary roots has exploded in the past two decades, uncovering new insights into the origins of this pervasive yet puzzling behaviour. We expect that progress on these four topics will accelerate this work, helping to uncover not just why humans produce and respond to music but also how cultural and biological evolution interact more generally to shape human behaviour.
References
Mehr, S. A. et al. Universality and diversity in human song. Science 366, eaax0868 (2019).
Savage, P. E., Brown, S., Sakai, E. & Currie, T. E. Statistical universals reveal the structures and functions of human music. Proc. Natl Acad. Sci. USA 112, 8987–8992 (2015).
Trehub, S. E., Becker, J. & Morley, I. Cross-cultural perspectives on music and musicality. Philos. Trans. R. Soc. B Biol. Sci. 370, 20140096 (2015).
Cross, I. Music, cognition, culture, and evolution. Ann. N. Y. Acad. Sci. 930, 28–42 (2001).
Mehr, S. A., Singh, M., York, H., Glowacki, L. & Krasnow, M. M. Form and function in human song. Curr. Biol. 28, 356–368 (2018).
Lomax, A. Folk Song Style and Culture (Routledge, 1968).
Yan, R. et al. Across demographics and recent history, most parents sing to their infants and toddlers daily. Philos. Trans. R. Soc. B Biol. Sci. 376, 20210089 (2021).
Mehr, S. A. Music in the home: new evidence for an intergenerational link. J. Res. Music Educ. 62, 78–88 (2014).
North, A. C., Hargreaves, D. J. & O’Neill, S. A. The importance of music to adolescents. Br. J. Educ. Psychol. 70, 255–272 (2000).
Juslin, P. N. & Laukka, P. Expression, perception, and induction of musical emotions: a review and a questionnaire study of everyday listening. J. New Music Res. 33, 217–238 (2004).
Laukka, P. Uses of music and psychological well-being among the elderly. J. Happiness Stud. 8, 215–241 (2007).
Cirelli, L. K. & Trehub, S. E. Familiar songs reduce infant distress. Dev. Psychol. 56, 861–868 (2020).
Cirelli, L. K., Jurewicz, Z. B. & Trehub, S. E. Effects of maternal singing style on mother-infant arousal and behavior. J. Cogn. Neurosci. 32, 1213–1220 (2020).
Bainbridge, C. M. et al. Infants relax in response to unfamiliar foreign lullabies. Nat. Hum. Behav. 5, 256–264 (2021).
Hilton, B. C. et al. Acoustic regularities in infant-directed speech and song across cultures. Nat. Hum. Behav. 6, 1545–1556 (2022).
Riches, G. Embracing the chaos: mosh pits, extreme metal music and liminality. J. Cult. Res. 15, 315–332 (2011).
McDermott, J. & Hauser, M. The origins of music: innateness, uniqueness, and evolution. Music Percept. 23, 29–60 (2005).
Mehr, S. A., Krasnow, M. M., Bryant, G. A. & Hagen, E. H. Origins of music in credible signaling. Behav. Brain Sci. 44, e60 (2021).
Savage, P. E. et al. Music as a coevolved system for social bonding. Behav. Brain Sci. 44, e59 (2021).
Pinker, S. How the Mind Works (W. W. Norton & Company, 1997).
Marcus, G. F. Musicality: instinct or acquired skill? Top. Cogn. Sci. 4, 498–512 (2012).
Patel, A. D. & von Rueden, A. Where they sing solo: accounting for cross-cultural variation in collective music-making in theories of music evolution. Behav. Brain Sci. 44, e85 (2021).
Martínez-Molina, N., Mas-Herrero, E., Rodríguez-Fornells, A., Zatorre, R. J. & Marco-Pallarés, J. Neural correlates of specific musical anhedonia. Proc. Natl Acad. Sci. USA 113, E7337–E7345 (2016).
Barrett, H. C. Towards a cognitive science of the human: cross-cultural approaches and their urgency. Trends Cogn. Sci. 24, 620–638 (2020).
Scelza, B. A. et al. Patterns of paternal investment predict cross-cultural variation in jealous response. Nat. Hum. Behav. 4, 20–26 (2020).
Koelsch, S. Toward a neural basis of music perception — a review and updated model. Front. Psychol. 2, 110 (2011).
Conard, N. J., Malina, M. & Münzel, S. C. New flutes document the earliest musical tradition in southwestern Germany. Nature 460, 737–740 (2009).
Mehr, S. A. & Krasnow, M. M. Parent-offspring conflict and the evolution of infant-directed song. Evol. Hum. Behav. 38, 674–684 (2017).
Hagen, E. H. & Bryant, G. A. Music and dance as a coalition signaling system. Hum. Nat. 14, 21–51 (2003).
Krumhansl, C. L. The cognition of tonality — as we know it today. J. New Music Res. 33, 253–268 (2004).
Krumhansl, C. L. & Keil, F. C. Acquisition of the hierarchy of tonal functions in music. Mem. Cognit. 10, 243–251 (1982).
Dolscheid, S., Hunnius, S., Casasanto, D. & Majid, A. Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychol. Sci. 25, 1256–1261 (2014).
Stevens, C. J. Music perception and cognition: a review of recent cross-cultural research. Top. Cogn. Sci. 4, 653–667 (2012).
Hodges, D. A. & Sebald, D. C. Music in the Human Experience: An Introduction to Music Psychology (Routledge, 2011).
Norman-Haignere, S., Kanwisher, N. G. & McDermott, J. H. Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition. Neuron 88, 1281–1296 (2015).
Chen, X. et al. The human language system, including its inferior frontal component in ‘Broca’s area’, does not support music perception. Preprint at bioRxiv https://doi.org/10.1101/2021.06.01.446439 (2023).
Albouy, P., Benjamin, L., Morillon, B. & Zatorre, R. J. Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody. Science 367, 1043–1047 (2020).
Norman-Haignere, S. V. et al. A neural population selective for song in human auditory cortex. Curr. Biol. 32, 1470–1484.e12 (2022).
Zatorre, R. J. & Salimpoor, V. N. From perception to pleasure: music and its neural substrates. Proc. Natl Acad. Sci. USA 110, 10430–10437 (2013).
Mas-Herrero, E., Zatorre, R. J., Rodriguez-Fornells, A. & Marco-Pallarés, J. Dissociation between musical and monetary reward responses in specific musical anhedonia. Curr. Biol. 24, 699–704 (2014).
Trainor, L. J. The origins of music in auditory scene analysis and the roles of evolution and culture in musical creation. Philos. Trans. R. Soc. B Biol. Sci. 370, 20140089 (2015).
Walker, P. & Smith, S. Stroop interference based on the synaesthetic qualities of auditory pitch. Perception 13, 75–81 (1984).
Eitan, Z. & Timmers, R. Beethoven’s last piano sonata and those who follow crocodiles: cross-domain mappings of auditory pitch in a musical context. Cognition 114, 405–422 (2010).
Cannon, J. J. & Patel, A. D. How beat perception co-opts motor neurophysiology. Trends Cogn. Sci. 25, 137–150 (2021).
Patel, A. D. Vocal learning as a preadaptation for the evolution of human beat perception and synchronization. Philos. Trans. R. Soc. B Biol. Sci. 376, 20200326 (2021).
Nozaradan, S., Peretz, I., Missal, M. & Mouraux, A. Tagging the neuronal entrainment to beat and meter. J. Neurosci. 31, 10234–10240 (2011).
Herff, S. A. et al. Prefrontal high gamma in ECoG tags periodicity of musical rhythms in perception and imagination. eNeuro 7, ENEURO.0413-19.2020 (2020).
Dean, R. T. & Bailes, F. Relationships between generated musical structure, performers’ physiological arousal and listener perceptions in solo piano improvisation. J. New Music Res. 45, 361–374 (2016).
Juslin, P. N. From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Phys. Life Rev. 10, 235–266 (2013).
Spivey, M., McRae, K. & Joanisse, M. The Cambridge Handbook of Psycholinguistics (Cambridge University Press, 2012).
McDermott, J. H., Schultz, A. F., Undurraga, E. A. & Godoy, R. A. Indifference to dissonance in native Amazonians reveals cultural variation in music perception. Nature 535, 547–550 (2016).
Zhao, T. C. & Kuhl, P. K. Musical intervention enhances infants’ neural processing of temporal structure in music and speech. Proc. Natl Acad. Sci. USA 113, 5212–5217 (2016).
Webb, A. R., Heller, H. T., Benson, C. B. & Lahav, A. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation. Proc. Natl Acad. Sci. USA 112, 3152–3157 (2015).
Ullal-Gupta, S., Vanden Bosch der Nederlanden, C. M., Tichko, P., Lahav, A. & Hannon, E. E. Linking prenatal experience to the emerging musical mind. Front. Syst. Neurosci. 7, 48 (2013).
Linnemann, A., Ditzen, B., Strahler, J., Doerr, J. M. & Nater, U. M. Music listening as a means of stress reduction in daily life. Psychoneuroendocrinology 60, 82–90 (2015).
van Goethem, A. & Sloboda, J. The functions of music for affect regulation. Music Sci. 15, 208–228 (2011).
DeNora, T. Music as a technology of the self. Poetics 27, 31–56 (1999).
Hays, T. & Minichiello, V. The meaning of music in the lives of older people: a qualitative study. Psychol. Music 33, 437–451 (2005).
Saarikallio, S., Alluri, V., Maksimainen, J. & Toiviainen, P. Emotions of music listening in Finland and in India: comparison of an individualistic and a collectivistic culture. Psychol. Music 49, 989–1005 (2021).
Juslin, P. N. What does music express? Basic emotions and beyond. Front. Psychol. 4, 596 (2013).
Flaig, N. K. & Large, E. W. Dynamic musical communication of core affect. Front. Psychol. 5, 72 (2014).
Cespedes-Guevara, J. & Eerola, T. Music communicates affects, not basic emotions - a constructionist account of attribution of emotional meanings to music. Front. Psychol. 9, 215 (2018).
Gomez, P. & Danuser, B. Relationships between musical structure and psychophysiological measures of emotion. Emotion 7, 377–387 (2007).
Balkwill, L. & Thompson, W. F. A cross-cultural investigation of the perception of emotion in music: psychophysical and cultural cues. Music Percept. 17, 43–64 (1999).
Balkwill, L.-L., Thompson, W. F. & Matsunaga, R. Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners. Jpn. Psychol. Res. 46, 337–349 (2004).
Fritz, T. et al. Universal recognition of three basic emotions in music. Curr. Biol. 19, 573–576 (2009).
Argstatter, H. Perception of basic emotions in music: culture-specific or multicultural? Psychol. Music 44, 674–690 (2016).
Laukka, P., Eerola, T., Thingujam, N. S., Yamasaki, T. & Beller, G. Universal and culture-specific factors in the recognition and performance of musical affect expressions. Emotion 13, 434–449 (2013).
Sievers, B., Polansky, L., Casey, M. & Wheatley, T. Music and movement share a dynamic structure that supports universal expressions of emotion. Proc. Natl Acad. Sci. USA 110, 70–75 (2013).
Swaminathan, S. & Schellenberg, E. G. Current emotion research in music psychology. Emot. Rev. 7, 189–197 (2015).
Wang, X., Wei, Y. & Yang, D. Cross-cultural analysis of the correlation between musical elements and emotion. Cogn. Comput. Syst. https://doi.org/10.1049/ccs2.12032 (2021).
Athanasopoulos, G., Eerola, T., Lahdelma, I. & Kaliakatsos-Papakostas, M. Harmonic organisation conveys both universal and culture-specific cues for emotional expression in music. PLoS One 16, e0244964 (2021).
Lahdelma, I., Athanasopoulos, G. & Eerola, T. Sweetness is in the ear of the beholder: chord preference across United Kingdom and Pakistani listeners. Ann. N. Y. Acad. Sci. 1502, 72–84 (2021).
Fang, L., Shang, J. & Chen, N. Perception of western musical modes: a Chinese study. Front. Psychol. 8, 1–8 (2017).
Smit, E. A., Milne, A. J., Sarvasy, H. S. & Dean, R. T. Emotional responses in Papua New Guinea show negligible evidence for a universal effect of major versus minor music. PLoS One 17, e0269597 (2022).
Franco, F., Chew, M. & Swaine, J. S. Preschoolers’ attribution of affect to music: a comparison between vocal and instrumental performance. Psychol. Music 45, 131–149 (2017).
Stachó, L., Saarikallio, S., Van Zijl, A., Huotilainen, M. & Toiviainen, P. Perception of emotional content in musical performances by 3-7-year-old children. Music Sci. 17, 495–512 (2013).
Hunter, P. G., Glenn Schellenberg, E. & Stalinski, S. M. Liking and identifying emotionally expressive music: age and gender differences. J. Exp. Child Psychol. 110, 80–93 (2011).
Dalla Bella, S., Peretz, I., Rousseau, L. & Gosselin, N. A developmental study of the affective value of tempo and mode in music. Cognition 80, B1–10 (2001).
Dolgin, K. G. & Adelson, E. H. Age changes in the ability to interpret affect in sung and instrumentally-presented melodies. Psychol. Music 18, 87–98 (1990).
Vidas, D., Dingle, G. A. & Nelson, N. L. Children’s recognition of emotion in music and speech. Music Sci. 1, 205920431876265 (2018).
Vidas, D., Calligeros, R., Nelson, N. L. & Dingle, G. A. Development of emotion recognition in popular music and vocal bursts. Cogn. Emot. 34, 906–919 (2020).
Flom, R. & Pick, A. D. Dynamics of infant habituation: infants’ discrimination of musical excerpts. Infant Behav. Dev. 35, 697–704 (2012).
Flom, R., Gentile, D. A. & Pick, A. D. Infants’ discrimination of happy and sad music. Infant Behav. Dev. 31, 716–728 (2008).
Xiao, N. G. et al. Older but not younger infants associate own-race faces with happy music and other-race faces with sad music. Dev. Sci. 21, e12537 (2018).
Nawrot, E. S. The perception of emotional expression in music: evidence from infants, children and adults. Psychol. Music 31, 75–92 (2003).
Mendoza, J. K. & Fausey, C. M. Everyday music in infancy. Dev. Sci. 24, 1–15 (2021).
Davidov, M., Zahn-Waxler, C., Roth-Hanania, R. & Knafo, A. Concern for others in the first year of life: theory, evidence, and avenues for research. Child Dev. Perspect. 7, 126–131 (2013).
Roth-Hanania, R., Davidov, M. & Zahn-Waxler, C. Empathy development from 8 to 16 months: early signs of concern for others. Infant Behav. Dev. 34, 447–458 (2011).
Juslin, P. N. & Laukka, P. Communication of emotions in vocal expression and music performance: different channels, same code? Psychol. Bull. 129, 770–814 (2003).
Ilie, G. & Thompson, W. F. A comparison of acoustic cues in music and speech for three dimensions of affect. Music Percept. 23, 319–330 (2006).
Ilie, G. & Thompson, W. F. Experiential and cognitive changes following seven minutes exposure to music and speech. Music Percept. 28, 247–264 (2011).
Bowling, D. L., Sundararajan, J., Han, S. & Purves, D. Expression of emotion in eastern and western music mirrors vocalization. PLoS One 7, e31942 (2012).
Kragness, H. E. & Trainor, L. J. Nonmusicians express emotions in musical productions using conventional cues. Music Sci. 2, 205920431983494 (2019).
Saarikallio, S., Tervaniemi, M., Yrtti, A. & Huotilainen, M. Expression of emotion through musical parameters in 3- and 5-year-olds. Music Educ. Res. 21, 596–605 (2019).
Ma, W. & Thompson, W. F. Human emotions track changes in the acoustic environment. Proc. Natl Acad. Sci. USA 112, 14563–14568 (2015).
Proverbio, A. M., De Benedetto, F. & Guazzone, M. Shared neural mechanisms for processing emotions in music and vocalizations. Eur. J. Neurosci. 51, 1987–2007 (2020).
Koelsch, S. Brain correlates of music-evoked emotions. Nat. Rev. Neurosci. 15, 170–180 (2014).
Gosselin, N., Peretz, I., Johnsen, E. & Adolphs, R. Amygdala damage impairs emotion recognition from music. Neuropsychologia 45, 236–244 (2007).
Gosselin, N., Peretz, I., Hasboun, D., Baulac, M. & Samson, S. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision. Cortex 47, 1116–1125 (2011).
Escoffier, N., Zhong, J., Schirmer, A. & Qiu, A. Emotional expressions in voice and music: same code, same effect? Hum. Brain Mapp. 34, 1796–1810 (2013).
Peelen, M. V., Atkinson, A. P. & Vuilleumier, P. Supramodal representations of perceived emotions in the human brain. J. Neurosci. 30, 10127–10134 (2010).
Sievers, B. et al. Visual and auditory brain areas share a representational structure that supports emotion perception. Curr. Biol. 31, 5192–5203.e4 (2021).
Morton, J. B. & Trehub, S. E. Children’s understanding of emotion in speech. Child Dev. 72, 834–843 (2001).
Grosbras, M. H., Ross, P. D. & Belin, P. Categorical emotion recognition from voice improves during childhood and adolescence. Sci. Rep. 8, 1–11 (2018).
Chronaki, G., Wigelsworth, M., Pell, M. D. & Kotz, S. A. The development of cross-cultural recognition of vocal emotion during childhood and adolescence. Sci. Rep. 8, 1–17 (2018).
Keltner, D., Sauter, D., Tracy, J. & Cowen, A. Emotional expression: advances in basic emotion theory. J. Nonverbal Behav. 43, 133–160 (2019).
Ruba, A. L. & Repacholi, B. M. Do preverbal infants understand discrete facial expressions of emotion? Emot. Rev. 12, 235–250 (2020).
Hoemann, K., Devlin, M. & Barrett, L. F. Comment: emotions are abstract, conceptual categories that are learned by a predicting brain. Emot. Rev. 12, 253–255 (2020).
Zentner, M., Grandjean, D. & Scherer, K. R. Emotions evoked by the sound of music: characterization, classification, and measurement. Emotion 8, 494–521 (2008).
Cowen, A. S., Fang, X., Sauter, D. & Keltner, D. What music makes us feel: at least 13 dimensions organize subjective experiences associated with music across different cultures. Proc. Natl Acad. Sci. USA 117, 1924–1934 (2020).
Miller, G. Evolution of human music through sexual selection. In The Origins of Music (eds Wallin, N. L., Merker, B. & Brown, S.) 329–360 (MIT Press, 2000).
Searcy, W. A. & Nowicki, S. The Evolution of Animal Communication: Reliability and Deception in Signaling Systems (Princeton University Press, 2006).
Morton, E. S. On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. Am. Nat. 111, 855–869 (1977).
Clutton-Brock, T. H. & Albon, S. D. The roaring of red deer and the evolution of honest advertisement. Behaviour 69, 145–170 (1979).
Bryant, G. A. et al. The perception of spontaneous and volitional laughter across 21 societies. Psychol. Sci. 29, 1515–1525 (2018).
Hilton, C. B., Thierry, L. C., Yan, R., Martin, A. & Mehr, S. A. Children infer the behavioral contexts of unfamiliar foreign songs. J. Exp. Psychol. Gen. https://doi.org/10.1037/xge0001289 (2022).
Trehub, S. E., Unyk, A. M. & Trainor, L. J. Adults identify infant-directed music across cultures. Infant Behav. Dev. 16, 193–211 (1993).
Yurdum, L. et al. Cultural invariance in musical communication. In Proceedings of the Annual Meeting of the Cognitive Science Society 44 (Cognitive Science Society, 2022).
Fink, B., Bläsing, B., Ravignani, A. & Shackelford, T. K. Evolution and functions of human dance. Evol. Hum. Behav. 42, 351–360 (2021).
Trainor, L. J. Infant preferences for infant-directed versus noninfant-directed playsongs and lullabies. Infant Behav. Dev. 19, 83–92 (1996).
Nakata, T. & Trehub, S. E. Infants’ responsiveness to maternal speech and singing. Infant Behav. Dev. 27, 455–464 (2004).
Kinzler, K. D., Dupoux, E. & Spelke, E. S. The native language of social cognition. Proc. Natl Acad. Sci. USA 104, 12577–12580 (2007).
Liberman, Z., Woodward, A. L., Sullivan, K. R. & Kinzler, K. D. Early emerging system for reasoning about the social nature of food. Proc. Natl Acad. Sci. USA 113, 9480–9485 (2016).
Mehr, S. A. & Spelke, E. S. Shared musical knowledge in 11-month-old infants. Dev. Sci. https://doi.org/10.1111/desc.12542 (2017).
Mehr, S. A., Song, L. A. & Spelke, E. S. For 5-month-old infants, melodies are social. Psychol. Sci. 27, 486–501 (2016).
Scarratt, R. J., Heggli, O. A., Vuust, P. & Jespersen, K. V. The music that people use to sleep: universal and subgroup characteristics. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/5mbyv (2021).
Möller, E. L., de Vente, W. & Rodenburg, R. Infant crying and the calming response: Parental versus mechanical soothing using swaddling, sound, and movement. PLoS One 14, 1–16 (2019).
Spencer, J. A. D., Moran, D. J., Lee, A. & Talbert, D. White noise and sleep induction. Arch. Dis. Child. 65, 135–137 (1990).
Hawkins, T. E. & Stevens, S. S. The masking of pure tones and of speech by white noise. J. Acoust. Soc. Am. 22, 6–13 (1950).
Ebben, M. R., Yan, P. & Krieger, A. C. The effects of white noise on sleep and duration in individuals living in a high noise environment in New York City. Sleep Med. 83, 256–259 (2021).
Gasparini, L., Langus, A., Tsuji, S. & Boll-Avetisyan, N. Quantifying the role of rhythm in infants’ language discrimination abilities: a meta-analysis. Cognition 213, 104757 (2021).
Winkler, I., Háden, G. P., Ladinig, O., Sziller, I. & Honing, H. Newborn infants detect the beat in music. Proc. Natl Acad. Sci. USA 106, 2468–2471 (2009).
Háden, G. P., Honing, H., Török, M. & Winkler, I. Detecting the temporal structure of sound sequences in newborn infants. Int. J. Psychophysiol. 96, 23–28 (2015).
Baruch, C. & Drake, C. Tempo discrimination in infants. Infant Behav. Dev. 20, 573–577 (1997).
Demany, L., McKenzie, B. & Vurpillot, E. Rhythm perception in early infancy. Nature 266, 718–719 (1977).
Otte, R. A. et al. Detecting violations of temporal regularities in waking and sleeping two-month-old infants. Biol. Psychol. 92, 315–322 (2013).
Cirelli, L. K., Spinelli, C., Nozaradan, S. & Trainor, L. J. Measuring neural entrainment to beat and meter in infants: effects of music background. Front. Neurosci. 10, 229 (2016).
Hannon, E. E. & Trehub, S. E. Tuning in to musical rhythms: Infants learn more readily than adults. Proc. Natl Acad. Sci. USA 102, 12639–12643 (2005).
Hannon, E. E. & Trehub, S. E. Metrical categories in infancy and adulthood. Psychol. Sci. 16, 48–55 (2005).
Zentner, M. & Eerola, T. Rhythmic engagement with music in infancy. Proc. Natl Acad. Sci. USA 107, 5768–5773 (2010).
Kim, M. & Schachner, A. The origins of dance: characterizing the development of infants’ earliest dance behavior. Dev. Psychol. 59, 691–706 (2023).
Hannon, E. E., Nave-Blodgett, J. E. & Nave, K. M. The developmental origins of the perception and production of musical rhythm. Child Dev. Perspect. 12, 194–198 (2018).
Hannon, E. E., Schachner, A. & Nave-Blodgett, J. E. Babies know bad dancing when they see it: older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays. J. Exp. Child Psychol. 159, 159–174 (2017).
Yu, L. & Myowa, M. The early development of tempo adjustment and synchronization during joint drumming: a study of 18- to 42-month-old children. Infancy 26, 635–646 (2021).
Kirschner, S. & Tomasello, M. Joint drumming: social context facilitates synchronization in preschool children. J. Exp. Child Psychol. 102, 299–314 (2009).
Drake, C., Jones, M. R. & Baruch, C. The development of rhythmic attending in auditory sequences: attunement, referent period, focal attending. Cognition 77, 251–288 (2000).
McAuley, J. D., Jones, M. R., Holub, S., Johnston, H. M. & Miller, N. S. The time of our lives: life span development of timing and event tracking. J. Exp. Psychol. Gen. 135, 348–367 (2006).
Schachner, A., Brady, T. F., Pepperberg, I. M. & Hauser, M. D. Spontaneous motor entrainment to music in multiple vocal mimicking species. Curr. Biol. 19, 831–836 (2009).
Patel, A. D., Iversen, J. R., Bregman, M. R. & Schulz, I. Experimental evidence for synchronization to a musical beat in a nonhuman animal. Curr. Biol. 19, 827–830 (2009).
Vuust, P., Heggli, O. A., Friston, K. J. & Kringelbach, M. L. Music in the brain. Nat. Rev. Neurosci. 23, 287–305 (2022).
Bernardi, N. F., Bellemare-Pepin, A. & Peretz, I. Enhancement of pleasure during spontaneous dance. Front. Hum. Neurosci. 11, 572 (2017).
Foster Vander Elst, O., Vuust, P. & Kringelbach, M. L. Sweet anticipation and positive emotions in music, groove, and dance. Curr. Opin. Behav. Sci. 39, 79–84 (2021).
Cirelli, L. K. & Trehub, S. E. Dancing to Metallica and Dora: case study of a 19-month-old. Front. Psychol. 10, 1073 (2019).
Witek, M. A. G., Clarke, E. F., Wallentin, M., Kringelbach, M. L. & Vuust, P. Syncopation, body-movement and pleasure in groove music. PLoS One 9, e94446 (2014).
Schachner, A. Auditory-motor entrainment in vocal mimicking species: additional ontogenetic and phylogenetic factors. Commun. Integr. Biol. 3, 290–293 (2010).
Laland, K., Wilkins, C. & Clayton, N. The evolution of dance. Curr. Biol. 26, R5–R9 (2016).
Niarchou, M. et al. Genome-wide association study of musical beat synchronization demonstrates high polygenicity. Nat. Hum. Behav. https://doi.org/10.1038/s41562-022-01359-x (2022).
Cahill, J. A. et al. Positive selection in noncoding genomic regions of vocal learning birds is associated with genes implicated in vocal learning and speech functions in humans. Genome Res. 31, 2035–2049 (2021).
Jarvis, E. D. Evolution of vocal learning and spoken language. Science 366, 50–54 (2019).
Gordon, R. L. et al. Linking the genomic signatures of human beat synchronization and learned song in birds. Philos. Trans. R. Soc. B Biol. Sci. 376, 20200329 (2021).
Savage, P. E. Cultural evolution of music. Palgrave Commun. 5, 16 (2019).
Ravignani, A., Delgado, T. & Kirby, S. Musical evolution in the lab exhibits rhythmic universals. Nat. Hum. Behav. 1, 0007 (2017).
Lumaca, M., Haumann, N. T., Vuust, P., Brattico, E. & Baggio, G. From random to regular: neural constraints on the emergence of isochronous rhythm during cultural transmission. Soc. Cogn. Affect. Neurosci. 13, 877–888 (2018).
Kirby, S., Cornish, H. & Smith, K. Cumulative cultural evolution in the laboratory: an experimental approach to the origins of structure in human language. Proc. Natl Acad. Sci. USA 105, 10681–10686 (2008).
Gibson, E. et al. How efficiency shapes human language. Trends Cogn. Sci. 23, 389–407 (2019).
Ferdinand, V., Kirby, S. & Smith, K. The cognitive roots of regularization in language. Cognition 184, 53–68 (2019).
Verhoef, T. & Ravignani, A. Melodic universals emerge or are sustained through cultural evolution. Front. Psychol. 12, 668300 (2021).
Singh, M. Subjective selection and the evolution of complex culture. Evol. Anthropol. 31, 266–280 (2022).
Allen, K. R., Smith, K. A. & Tenenbaum, J. B. Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. Proc. Natl Acad. Sci. USA 117, 29302–29310 (2020).
Singh, M., Wrangham, R. W. & Glowacki, L. Self-interest and the design of rules. Hum. Nat. 28, 457–480 (2017).
Fitouchi, L., André, J. & Baumard, N. Moral disciplining: the cognitive and evolutionary foundations of puritanical morality. Behav. Brain Sci. https://doi.org/10.1017/S0140525X22002047 (2021).
Dubourg, E. & Baumard, N. Why imaginary worlds? The psychological foundations and cultural evolution of fictions with imaginary worlds. Behav. Brain Sci. 45, e276 (2021).
Singh, M. The sympathetic plot, its psychological origins, and implications for the evolution of fiction. Emot. Rev. 13, 183–198 (2021).
Singh, M. The cultural evolution of shamanism. Behav. Brain Sci. 41, e66 (2018).
Hong, Z. & Henrich, J. The cultural evolution of epistemic practices: the case of divination. Hum. Nat. 32, 622–651 (2021).
Singh, M. Magic, explanations, and evil: the origins and design of witches and sorcerers. Curr. Anthropol. 62, 2–29 (2021).
Feld, S. Sound structure as social structure. Ethnomusicology 28, 383–409 (1984).
Miton, H., Wolf, T., Vesper, C., Knoblich, G. & Sperber, D. Motor constraints influence cultural evolution of rhythm. Proc. R. Soc. B Biol. Sci. 287, 20202001 (2020).
Demorest, S. M., Morrison, S. J., Nguyen, V. Q. & Bodnar, E. N. The influence of contextual cues on cultural bias in music memory. Music Percept. 33, 590–600 (2016).
Herff, S. A., Olsen, K. N. & Dean, R. T. Resilient memory for melodies: the number of intervening melodies does not influence novel melody recognition. Q. J. Exp. Psychol. 71, 1150–1171 (2018).
Herff, S. A., Olsen, K. N., Dean, R. T. & Prince, J. Memory for melodies in unfamiliar tuning systems: investigating effects of recency and number of intervening items. Q. J. Exp. Psychol. 71, 1367–1381 (2018).
Povel, D.-J. & Essens, P. Perception of temporal patterns. Music Percept. 2, 411–440 (1985).
Povel, D. J. Internal representation of simple temporal patterns. J. Exp. Psychol. Hum. Percept. Perform. 7, 3–18 (1981).
Collier, G. L. & Wright, C. E. Temporal rescaling of simple and complex ratios in rhythmic tapping. J. Exp. Psychol. Hum. Percept. Perform. 21, 602–627 (1995).
Polak, R. et al. Rhythmic prototypes across cultures: a comparative study of tapping synchronization. Music Percept. 36, 1–23 (2018).
Hannon, E. E. & Trainor, L. J. Music acquisition: effects of enculturation and formal training on development. Trends Cogn. Sci. 11, 466–472 (2007).
Jacoby, N. et al. Universality and cross-cultural variation in mental representations of music revealed by global comparison of rhythm priors. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/b879v (2021).
Le Bomin, S., Lecointre, G. & Heyer, E. The evolution of musical diversity: the key role of vertical transmission. PLoS One 11, e0151570 (2016).
Brown, S. et al. Correlations in the population structure of music, genes and language. Proc. R. Soc. B Biol. Sci. 281, 20132072 (2014).
Pamjav, H., Juhász, Z., Zalán, A., Németh, E. & Damdin, B. A comparative phylogenetic study of genetics and folk music. Mol. Genet. Genomics 287, 337–349 (2012).
Youngblood, M., Baraghith, K. & Savage, P. E. Phylogenetic reconstruction of the cultural evolution of electronic music via dynamic community detection (1975–1999). Evol. Hum. Behav. 42, 573–582 (2021).
Asano, R., Boeckx, C. & Fujita, K. Moving beyond domain-specific vs. domain-general options in cognitive neuroscience. Cortex 154, 259–268 (2022).
Feld, S. Sound and Sentiment: Birds, Weeping, Poetics, and Song in Kaluli Expression (University of Pennsylvania Press, 1982).
Nettl, B. The Study of Ethnomusicology: Thirty-one Issues and Concepts (University of Illinois Press, 2005).
Savage, P. E. et al. Sequence alignment of folk song melodies reveals cross-cultural regularities of musical evolution. Curr. Biol. 32, e1–e8 (2022).
Henrich, J. The Weirdest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous (Farrar, Straus and Giroux, 2020).
Smaldino, P. E., Lukaszewski, A., von Rueden, C. & Gurven, M. Niche diversity can explain cross-cultural differences in personality structure. Nat. Hum. Behav. 3, 1276–1283 (2019).
Herff, S. A., Cecchetti, G., Taruffi, L. & Déguernel, K. Music influences vividness and content of imagined journeys in a directed visual imagery task. Sci. Rep. 11, 15990 (2021).
Strickland, J. C., Garcia-Romeu, A. & Johnson, M. W. Set and setting: a randomized study of different musical genres in supporting psychedelic therapy. ACS Pharmacol. Transl. Sci. 4, 472–478 (2021).
Aubinet, S. The problem of universals in cross-cultural studies: insights from Sámi animal melodies (yoik). Psychol. Music https://doi.org/10.1177/03057356211024346 (2021).
Fritz, T. H., Schmude, P., Jentschke, S., Friederici, A. D. & Koelsch, S. From understanding to appreciating music cross-culturally. PLoS One 8, e72500 (2013).
Sievers, B., Lee, C., Haslett, W. & Wheatley, T. A multi-sensory code for emotional arousal. Proc. R. Soc. B 286, 20190513 (2019).
Pinker, S. Sex and drugs and rock and roll. Behav. Brain Sci. 44, e109 (2021).
Mehr, S. A., Krasnow, M. M., Bryant, G. A. & Hagen, E. H. Toward a productive evolutionary understanding of music. Behav. Brain Sci. 44, e122 (2021).
Acknowledgements
The authors thank Alex Mackiel for assistance with the preparation of Fig. 3, and members of The Music Lab for feedback on the manuscript. M.S. acknowledges IAST funding from the French National Research Agency (ANR) under grant ANR-17-EURE-0010 (Investissements d’Avenir programme). S.A.M. acknowledges funding from the US NIH Director’s Early Independence Award DP5OD024566 and the Royal Society of New Zealand Te Apārangi Rutherford Discovery Fellowship RDF-UOA2103.
Author information
Contributions
The authors contributed equally to all aspects of the article.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review information
Nature Reviews Psychology thanks Asifa Majid, Steffen Herff, and the other, anonymous, reviewer for their contribution to the peer review of this work.
Additional information
We dedicate this article to Sandra Trehub (1938–2023), whose pioneering and inspiring work touched every corner of the psychology of music.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Glossary
- Auditory scene analysis: The auditory system process involved in gathering information about which sounding objects are present in the environment and determining where they are located.
- Harmonic structure: The grouping of harmonies in a musical example, where harmonies are combinations of tones (such as chords) that are functionally related to one another; when listeners hear a melody, they automatically build representations of its potential harmonic structure.
- Integer ratios: In music, the organization of pitch or duration information in a melody or rhythm via a simple ratio of integers, such as a duration pattern of 2:1, where the first musical event is twice as long as the second (see the first code sketch following this glossary).
- Isochronous beat: Periodic rhythm in which beats have the same duration; most music is structured around the isochronous beat, and it is typically perceived as the basic rhythmic foundation of the music (for example, when one taps one's foot to music, one typically taps to the isochronous beat).
- Major mode: In western classical and popular music, a collection of notes (which can be played at the same time, as in a chord, or not, as in a melody), the third note of which is four semitones from the tonal centre.
- Minor mode: In western classical and popular music, a collection of notes (which can be played at the same time, as in a chord, or not, as in a melody), the third note of which is three semitones from the tonal centre (see the second code sketch following this glossary).
- Timbre: The perceived quality of a sound that makes notes produced by different sources, such as the human voice and a piano, sound different from each other, even when produced at the same pitch, duration and intensity.
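To make the glossary's integer-ratio arithmetic concrete, the following minimal sketch (our illustration, not drawn from the Review or its sources; the millisecond durations are hypothetical) reduces a rhythm's note durations to their simplest integer ratio:

```python
# Minimal sketch: reduce a rhythm's note durations to their simplest
# integer ratio, as in the 2:1 duration pattern defined in the glossary.
from functools import reduce
from math import gcd

def simplest_ratio(durations):
    """Divide integer durations by their greatest common divisor."""
    g = reduce(gcd, durations)
    return [d // g for d in durations]

# A long-short-long-short rhythm in milliseconds (hypothetical values)
print(simplest_ratio([500, 250, 500, 250]))  # -> [2, 1, 2, 1], i.e. 2:1
```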
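Similarly, the semitone arithmetic behind the major-mode and minor-mode definitions can be checked with a second minimal sketch (again our illustration, not part of the Review; pitch-class names follow the common twelve-tone convention):

```python
# Minimal sketch: the third of a major mode lies four semitones above
# the tonal centre, whereas the third of a minor mode lies three above.
PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F',
                 'F#', 'G', 'G#', 'A', 'A#', 'B']

def semitones_above(tonic, note):
    """Count semitones from the tonic up to a note, modulo one octave."""
    return (PITCH_CLASSES.index(note) - PITCH_CLASSES.index(tonic)) % 12

print(semitones_above('C', 'E'))   # 4 -> the major third of C major
print(semitones_above('C', 'D#'))  # 3 -> the minor third (D# = E flat)
```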
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Singh, M. & Mehr, S. A. Universality, domain-specificity and development of psychological responses to music. Nat. Rev. Psychol. 2, 333–346 (2023). https://doi.org/10.1038/s44159-023-00182-z