Lateralized behavior and cardiac activity of dogs in response to human emotional vocalizations

In recent years, the study of emotional functioning has become one of the central issues in dog cognition. Previous studies showed that dogs can recognize different emotions by looking at human faces and can correctly match a human emotional state with a vocalization of negative emotional valence. However, little is known to date about how dogs perceive and process human non-verbal vocalizations of different emotional valence. The current research provides new insights into the emotional functioning of the canine brain by studying dogs' lateralized auditory functions (to provide a first insight into the valence dimension) matched with both behavioral and physiological measures of arousal (to study the arousal dimension) in response to playbacks of Ekman's six basic human emotions. Overall, our results indicate lateralized brain patterns for the processing of human emotional vocalizations, with prevalent use of the right hemisphere in the analysis of vocalizations with a clear negative emotional valence (i.e. "fear" and "sadness") and prevalent use of the left hemisphere in the analysis of positive vocalizations ("happiness"). Furthermore, both cardiac activity and behavioral responses support the hypothesis that dogs are sensitive to the emotional cues of human vocalizations.

hemisphere in processing the acoustic stimulus (e.g. if the subject turns its head towards the speaker with the right ear leading, the acoustic input is processed primarily by the left hemisphere, at least during the initial attention to the stimulus) [14][15][16]. Specifically, in dogs a striking left head-orienting bias was observed in response to thunderstorm playbacks, confirming the right-hemisphere advantage in attending to threatening and alarming stimuli 17 . On the contrary, conspecific vocalizations elicited a significant head-turning bias to the right (left-hemisphere advantage). The specialization of the left hemisphere in processing the vocalizations of familiar conspecifics has also been reported for other animal models, such as nonhuman primates 18,19 , horses 20 , cats 21 , and sea lions 22 . Nevertheless, recent studies employing the orienting paradigm in other species found an inconsistent pattern of head-turning responses to conspecific calls. For instance, vervet monkeys showed right-hemisphere dominant activity 23 , while no bias was found for Barbary macaques 24 . Moreover, sex-specific asymmetries were shown for mouse lemurs; in particular, males displayed a left-hemisphere bias in response to conspecific vocalizations with negative emotional valence 16 . This contradictory pattern might be due to a different phylogenetic distribution of hemispheric specialization and lateralization in closely related species 25 or to the different emotional valence of the message conveyed. Furthermore, within the canine species, it has been reported that left-hemisphere involvement in attending to conspecific vocalizations depends on the characteristics of the sound, for example on the temporal acoustic features of the calls 26 . 
When dogs were presented with the reversed versions of specific vocalizations of play, disturbance and isolation, they showed a shift in their head-orienting behavior from a right-ear orienting bias (normal call versions) to a left-ear orienting bias (play calls) or to no asymmetry (disturbance and isolation calls 26 ). In addition, recent studies describe a right hemisphere dominant activity to process conspecific vocalizations when they elicit intense emotions 17,27 .
Dogs also show an asymmetric head-turning behavior in response to human vocalizations. They displayed a significant bias to turn the head with the right ear leading (left-hemisphere activity) when presented with a familiar spoken command in which the salience of meaningful phonemic (segmental) cues was artificially increased; on the other hand, they showed a significant head-turning bias to the left side (right-hemisphere dominant activity) in response to commands with artificially increased salience of intonational or speaker-related (suprasegmental) vocal cues 28 . Nevertheless, the more recent results of Andics et al. 29,30 showed the opposite pattern of hemispheric activity: using fMRI, they found a right-hemisphere advantage in processing meaningful words and a left-hemisphere bias for distinguishing intonationally marked words.
Overall, although these experiments showed lateralized auditory functions in the canine brain and provide insights into the mechanisms of interspecific vocal perception, it remains unclear how dogs perceive and process the six basic emotions expressed by human non-verbal vocalizations. One possible method for assessing brain emotional functioning in the animal kingdom consists of observing and analyzing physiological (e.g. cardiac activity) and behavioral responses to specific stimuli under experimental conditions that resemble natural ones as closely as possible 31 . Regarding the physiological response, there is now scientific evidence that cardiac activity can be considered a valid indicator for predicting different emotional states in dogs [32][33][34][35] .
As to the behavioral response, a recent study scored dogs' behaviors in order to investigate emotional contagion to conspecific and human emotional sounds 10 . Although the results indicate that, for both canine and human sounds, dogs express more stress behaviors after hearing sounds with a negative emotional valence, further studies are required to determine valid and reliable behavioral indicators for positively valenced sounds 10 .
The study of behavioral lateralization has the potential to provide new insights into animal emotional processing 36 . An increasing body of evidence shows common lateralized neural patterns for emotional processing across all vertebrate classes, with specialization of the right hemisphere for processing withdrawal and negative emotions (e.g. fear and aggression) and a dominant role of the left hemisphere in processing positive emotions and approach 37,38 . Thus, external manifestations of hemispheric dominance (e.g. head-turning behavior), matched with both behavioral and physiological responses, could represent useful tools for understanding the valence of an emotion perceived by an animal in a particular situation, facilitating the categorization of the emotion along the valence and arousal dimensions [39][40][41] . In the light of this evidence, the aim of the present work was to investigate dogs' emotional responses to human non-verbal emotional vocalizations by measuring subjects' head-turning bias (valence dimension) and the related behavioral and cardiac activities (arousal dimension).
A significant main effect of playbacks was observed in the overall increase of heart rate values compared to the baseline, i.e. the area above baseline and under the curve (AUC; see Fig. 3B) (F(5,131) = 4.242, P = 0.001), after controlling for the effects of playback order (F(6,131) = 1.485, P = 0.188) and vocalization gender (F(1,131) = 1.586, P = 0.210) (GLMM analysis). Pairwise comparisons revealed that AUC values were higher for the "anger" stimulus than for the other emotional vocalizations: "anger" vs. "sadness" (P < 0.001); "anger" vs. "happiness" (P = 0.001); "anger" vs. "fear" (P = 0.002); "anger" vs. "surprise" (P = 0.017) and "anger" vs. "disgust" (P = 0.049). In addition, the analysis revealed that the "disgust" stimulus induced higher AUC values than "sadness" (P = 0.008). No effects of sex (F(1,131) = 0.096, P = 0.757) or age (F(1,131) = 1.761, P = 0.187) were found. As to the questionnaire, the analysis revealed a statistically significant effect of query 6, indicating that the higher the scores for "attachment or attention-seeking behaviors", the more likely dogs were to have lower AUC values after hearing emotional playbacks. No other statistically significant effects were found (P > 0.05 for all the remaining queries of the questionnaire, see Table 1).
Regarding the overall decrease of the heart rate values compared to the baseline (i.e. the area under baseline and above curve, AAC), the GLMM analysis revealed that the higher the scores for trainability, the more likely dogs had lower AAC values (β(SE) = −27.611(10.678); [95%-CI = −48.736; −6.487]; P = 0.011) (see Table 1).
Finally, tail-wagging behavior was observed on five occasions: three of these occurred after "surprise" and two after "happiness" sounds. In addition, after "surprise" playbacks dogs approached the speakers twice (given the low frequency of these observed behaviors, statistical analysis was not performed).

Discussion
Previous studies have reported that dogs' olfactory system works in an asymmetrical way to decode different emotions conveyed by human odors 32 . Our results demonstrate that this asymmetry is also manifested in the auditory sensory domain, since dogs showed an asymmetrical head-orienting response to playbacks of different human non-verbal emotional vocalizations. In particular, they turned the head with the left ear leading in response to "fear" and "sadness" human vocalizations. Given that in the head-orienting paradigm the head-turning direction indicates an advantage of the contralateral hemisphere in processing sounds 14 , the left head turning in response to "fear" and "sadness" vocalizations reported here suggests the prevalent activation of the right hemisphere. This finding is consistent with the general hypothesis of the right hemisphere's dominant role in the analysis of intense emotional stimuli (e.g. horse [42][43][44] ; dog 45 ). Further evidence comes from studies on cats showing that, in the same head-orienting paradigm, they turned the head with the left ear leading in response to dogs' "disturbance" and "isolation" vocalizations 21 .
Furthermore, dogs' right-hemisphere activation in processing stimuli of negative emotional valence has also been reported by studies on motor functions (e.g. tail-wagging behavior, see Siniscalchi et al. 35 ) and on sensory domains (e.g. vision 46 ; olfaction 47 ). Specifically, a bias to the left side (right hemisphere) in the head-turning response has been observed when dogs were presented with visually alarming stimuli (i.e. the black silhouette of a snake and of a cat displaying an agonistic aversive posture 46 ), and a right-nostril preferential use (right hemisphere) to investigate conspecific "isolation" odours 32 . Our data from the arousal dimension indicate that, although both "sadness" and "fear" vocalizations are processed mainly by the right hemisphere, dogs were less stressed after hearing "sadness" playbacks than after hearing "fear" (see scattergrams, Fig. 5). This could be explained by the fact that, although both "fear" and "sadness" vocalizations are characterized by negative valence, they can differ on the functional and communicative level. In some individuals, "sadness" vocalizations could clearly be an approach-evoking call, while "fear" vocalizations could produce a different reaction in the receiver (approach/withdrawal) depending on the social context in which they are produced and perceived. However, considering the communicative function of these vocalizations, it could be hypothesized that the "fear" ones elicit stronger reactions in the listener, explaining the higher arousal and stress behaviors registered in response to this vocalization. Moreover, in the light of recent findings 48,49 , the higher arousal and stress behaviors shown by dogs after hearing "fear" vocalizations, fear being a higher-arousal emotion than "sadness", suggest the occurrence of cross-species emotional contagion between humans and dogs. Nevertheless, further investigations are needed to address this issue.
As Fig. 3 shows, there was a clear tendency for dogs to turn their head to the left side in response to "anger" playbacks, but this bias did not reach statistical significance. Previous studies hypothesized that dogs perceive the "anger" emotion as having a negative emotional valence 50 . Indeed, it has recently been reported that dogs showed a left gaze bias while looking at human negative facial expressions (angry faces), suggesting right-hemisphere involvement in processing the emotional message conveyed 51 . Furthermore, dogs looked preferentially at the lower face region of unfamiliar humans showing a negative expression ("sadness" and "anger"), consequently avoiding eye contact with a potentially threatening stimulus 50 . The strong emotional valence attributed to the anger emotion is also attested by the longer time needed to correctly associate a reward with a human angry face rather than a happy one 7 . One possible explanation for the weaker left-orienting bias observed in response to the "anger" vocalizations, with respect to "fear" and "sadness", is that these sounds displayed acoustic features resembling those of canine "threatening growls" (harsh, low-frequency calls). Although the emotional valence of this canine vocalization is similar to that of "anger" (most likely eliciting right-hemisphere activity), overall a specialization of the left hemisphere for processing conspecific vocalizations has been observed 17 . In addition, fMRI studies identified two auditory regions in the dog brain, one bilaterally located and the other in the left dorsal auditory cortex, both responding selectively to conspecific sounds 52 . Hence, the possibility that some subjects misinterpreted the "anger" vocalizations, categorizing them as a conspecific call, cannot be entirely ruled out. As a consequence, this phenomenon might have produced a sort of left-hemisphere "interference" in processing the sound. 
On the other hand, as the results for the head-orienting response to "anger" sounds were marginally significant, it would be interesting to test this condition in future studies with a larger sample of dogs, in order to verify whether the lack of statistical significance is merely a matter of statistical power.
Regarding the "happiness" vocalization, a clear right bias in the head-orienting response (left-hemisphere advantage) was observed. Previous studies have reported a left-hemisphere specialization for approach behavior 53 . Specifically, in dogs a left-brain activation was indirectly observed through asymmetric tail-wagging movements to the right side in response to stimuli that could be expected to elicit approach tendencies, such as seeing the owner 11 . Thus, the involvement of the left hemisphere in the analysis of "happiness" vocalizations suggests that dogs perceived this sound as the expression of a positive emotional state that could elicit approach behaviors, playing a central role in initiating and maintaining dog-human interaction (note that tail-wagging behaviors were observed during "happiness" playbacks). This evidence is supported by recent fMRI studies indicating a left bias for more positive human sounds 52 and an increase of functional connectivity in the left hemisphere in response to positive rewarding speech compared to neutral speech 29 .
Overall, results from latency to resume feeding, cardiac activity and stress levels suggested that hearing the "happiness" vocalization induced, as expected, low arousal levels with respect to hearing "fear" and "anger", but not "sadness". The latter suggests that relying solely on the arousal dimension would not suffice to distinguish between the emotions conveyed by sadness and happiness vocalizations (see scattergrams, Fig. 5). In dogs, this hypothesis is supported by recent findings indicating that parasympathetic deactivation (i.e. increasing arousal) is associated with a more positive emotional state elicited by different positive stimuli (food or social rewards 33 ).
Regarding "surprise" and "disgust" vocalizations, we found no biases in dogs' head-turning response. This result may suggest that dogs perceived these sounds as less distinguishable than the others in terms of both emotional valence and degree of familiarity. In particular, concerning the "disgust" vocalizations, our results fit the hypothesis of Turcsàn et al. 8 about the ambiguous valence that this emotion could have for dogs. In everyday life, different objects or situations eliciting a "disgust" emotion in the owner could be attractive for the dog (e.g. faeces) or, on the contrary, could be associated with a negative outcome (e.g. scolding). Thus, dogs' behavioral responses (approach or withdrawal) and the emotional valence attributed (negative or positive) could depend strictly on individual experiences. Regarding surprise, evidence from human studies has reported that this emotion can be perceived as both positive and negative, depending on the goal conduciveness of the surprising event 54 (note that in our experiments, although arousal levels during surprise sounds were similar to those observed in response to sadness, tail-wagging and approach behaviors towards the speaker were observed). More interestingly, recent cognitive and psychophysiological studies indicate the possibility that surprise may be a (mildly) negative emotion 55 . The latter would be consistent with the slight (but not statistically significant) left-orienting bias (right-hemisphere activation) observed here in dogs.
Overall, our results provide evidence of an emotional modulation of the dog brain in processing basic human non-verbal emotional vocalizations. In particular, results from our experiments have shown that dogs process human emotional vocalizations in an asymmetrical way, predominantly using the right hemisphere in response to vocalizations with a clear negative emotional valence (i.e. "fear" and "sadness") and the left hemisphere in response to "happiness" playbacks. In addition, both cardiac activity and behavioral responses support the hypothesis that dogs are sensitive to the emotional cues of human vocalizations, indicating that coupling the valence and arousal dimensions is a useful tool for investigating brain emotional functioning in depth in the animal kingdom.

Materials and Methods
Subjects. Thirty-six domestic dogs of various breeds were recruited for this study. Six dogs were excluded: two because they showed distress soon after entering the room; two because they did not respond to any playbacks (i.e. did not stop their feeding behavior); one because it was influenced by the owner during the test; and one due to a procedural problem (lost connection between the wireless cardiac telemetry system and the computer). Hence, the final sample consisted of 14 males (3 neutered) and 16 females (6 neutered) whose ages ranged from 1 to 13 years (3.90 ± 2.83; mean ± S.D.; see Suppl. Table 1). All subjects were pets living in households. To join the study, dogs were required to be food motivated, healthy and experimentally naïve. They also had to fast for at least 8 hours before the testing session. Before the experiment began, clinical and audiological evaluations for hearing impairment were performed on the whole sample by two veterinarians of the Department of Veterinary Medicine, University of Bari. None of the tested dogs had a hearing impairment.
Stimuli. Seven men and seven women, aged between 24 and 37 years, were asked to produce a set of non-verbal vocalizations, each expressing one of the six basic emotions 33 : happiness, surprise, disgust, fear, sadness and anger. Following Sauter et al. 13 , happiness sounds were laughs, disgust sounds were retches, fear sounds were screams, sadness sounds were sobs and anger sounds were growls. Surprise sounds were strong expirations producing "oh" vocalizations (see Fig. 6).
The sounds were produced in an anechoic chamber and each vocalization was digitally recorded using a Roland Edirol R-09HR recorder, at 24-bit quantization and a 96 kHz sampling rate. The recordings were made in mono in order to avoid possible left-right asymmetries during playbacks.
Each acoustic stimulus was edited using Audition 2.0 (Adobe Inc.) so that it contained about 1 second of sound (vocalization) preceded and followed by 2 s and 3 s of silence, respectively. Furthermore, the stimuli were equalized and their amplitudes homogenized in order to reach an average loudness of 69 dB when measured from the dog's position. In addition, the recordings were filtered to remove background noise. A Protmex MS6708 Portable Digital Decibel Sound Level Meter was used to ensure that the speakers broadcast at the same volume.
In order to select the most significant and clear vocalizations, all recordings were then presented to 10 volunteers, five men and five women, aged between 20 and 30 years, in a random order identical across subjects, and played at constant volume. After listening to each auditory stimulus, they were asked to fill in a questionnaire indicating whether it expressed a positive or negative emotion and which of the six basic emotions it represented, and to rate on a 3-point scale how clearly they perceived the emotion conveyed (see Table 2, supplementary materials). A sub-sample of 18 vocalizations (three for each basic emotion) was then selected according to the questionnaire results, so that three sets of the six emotional vocalizations were obtained (see supplementary material for the selection criteria, Suppl. Table 2, and the emotional vocalization sets' details, Suppl. Table 3).
Apparatus. The experiment was carried out in an isolated room of the Department of Veterinary Medicine, University of Bari. Two speakers (FBT-200W8RA ® ) connected to a sound mixer were used to play the acoustic stimuli simultaneously. A bowl, fastened to the floor with adhesive tape and filled with the dogs' favorite food, was placed centrally between the speakers (2.60 m from each speaker) and aligned with them. Furthermore, two plastic panels (30 cm high, 50 cm deep) were located on either side of the bowl at a distance of 30 cm, to help dogs maintain a central position during the test (see Fig. 7).
A digital video camera was used to record dogs' responses to acoustic stimuli. It was positioned on a tripod directly in front of the bowl, facing the subject and at a distance of about 2 m.
Procedure. Each dog was presented with one of the three sets of the six basic emotional vocalizations (12 subjects per set). The playback order within each set was randomized across subjects. The test consisted of three weekly trials. In each trial two vocalizations, each expressing a different emotion, were played.
The owner led the dog to the bowl on a loose leash. Once the subject took the right position (facing the video camera and centrally positioned between the two speakers) and soon after it started feeding, the owner let the dog off the leash and stood 3 m behind it. Owners were instructed to stand still and not to interact with their dogs during the test. Ten seconds after the owner took position, the first stimulus was played. The two vocalizations were played with an interval of at least 30 seconds between them. If, after hearing a vocalization, the subject did not resume feeding within this interval, the other playback was postponed. The maximum time allowed to resume feeding was 5 minutes. If the dog did not resume feeding before the end of the session, the missing vocalization was presented in the subsequent session.
Two experimenters controlled the stimulus playbacks from an adjacent room via a closed-circuit video system. This consisted of a webcam, used to monitor the subjects' reaction and position, and two computers (one inside the test room and the other outside it), connected by a local area network, to control the stimulus playbacks.
Head-orienting response. First, a % Response index (%Res) for each dog's head-orienting response to human vocalizations was calculated using the formula %Res = (L + R + NT)/(L + R + NT + N), where L and R signify the number of Left and Right head-orienting responses respectively, NT the number of times the dog stopped feeding without turning its head towards the speakers, and N "No response" (i.e. the dog did not turn its head within five seconds of the playback). Given that dogs turn their head in different directions according to the emotional valence of the sound heard 17 , three responses were considered: turn right, turn left and no response. After a pilot test we decided to abandon multiple presentations of the same acoustic stimulus, since habituation to human vocalizations occurred very quickly. Lateral asymmetries in the direction of head-turning responses for each dog were scored as follows: a score of 1.0 represents head turning to the left side, −1.0 head turning to the right side, and 0 no head turn.
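The two indices above can be sketched in a few lines of code; the counts used here are hypothetical and serve only to illustrate the arithmetic:

```python
# Sketch of the head-orienting indices described above (hypothetical counts).
# L/R = left/right head turns, NT = stopped feeding without turning,
# N = no response within 5 s of the playback.

def response_index(L, R, NT, N):
    """%Res = (L + R + NT) / (L + R + NT + N), expressed as a percentage."""
    return 100.0 * (L + R + NT) / (L + R + NT + N)

def laterality_scores(L, R, NT):
    """Per-trial scores: +1.0 for a left turn, -1.0 for a right turn,
    0 for stopping without turning the head."""
    return [1.0] * L + [-1.0] * R + [0.0] * NT

# Example: 3 left turns, 1 right turn, 1 stop without turn, 1 no response.
print(response_index(3, 1, 1, 1))       # about 83.3% of trials elicited a response
print(sum(laterality_scores(3, 1, 1)))  # positive sum = net left (right-hemisphere) bias
```

A positive summed score indicates a net left-turning bias for that dog, a negative one a net right-turning bias.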
Behavior score. Dogs' behavior was video-recorded continuously throughout the experiment. Scores for stress/anxiety and affiliative behaviors were computed by allocating a score of 1 for each behavior displayed. A total of 28 behaviors were considered (see Suppl. Table 4 for the entire behavior list). The reactivity time (i.e. the time elapsing between playback start and feeding stop) and the latency time (i.e. the time to resume feeding from the bowl after playbacks) were also measured; the maximum time allowed to resume feeding was 5 minutes.
For both head-orienting responses and behavior scores, the video footage was analyzed by two trained observers who were blind to the testing paradigm. Inter-observer reliability was assessed by means of independent parallel coding of videotaped sessions and calculated as percentage agreement, which was always above 94%.
Cardiac activity. The evaluation of dogs' heart rate response during sessions was carried out following the methodology previously described by Siniscalchi and colleagues 32,35 . Briefly, cardiac activity was recorded continuously during sessions using the PC-Vetgard +tm Multiparameter wireless system for telemetric measurements (see Fig. 7). The heart rate response was calculated from the onset of the sound and over the following 25 s. If the dog did not resume feeding within this interval, the heart rate response was analysed until it resumed feeding (the maximum time allowed was 5 minutes). Dogs were accustomed to the vests, which kept the electrodes in contact with their chests, during weekly visits to the laboratory before the experimental test, until they showed no behavioral signs of stress.
The heart rate (HR) curve obtained during the pre-experimental phase (ECG R-R intervals during the recording period) was used to calculate the HR basal average (baseline). The highest (HV) and lowest (LV) values of the HR response to the different playbacks were scored. In addition, the area delimited by the HR curve and the baseline was computed for each dog and each sound separately using Microsoft Excel ® . The Area Under Curve (above baseline and under curve, AUC) was then graphically separated from the Area Above Curve (under baseline and above curve, AAC). Each area value was then calculated and expressed as a number of pixels (Adobe Photoshop Elite ® ). HR changes for each dog during presentations of the different emotional vocalizations were then analyzed by comparing the different area values with the corresponding baseline.
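The splitting of the HR curve into AUC and AAC can be illustrated with a minimal numeric sketch. The HR samples and baseline below are hypothetical, and areas are computed directly in beats-per-minute-seconds rather than in pixels as in the graphical procedure described above:

```python
# Sketch of separating an HR curve into the area above the baseline (AUC)
# and the area below it (AAC), using HR values sampled once per second
# (hypothetical data; the study measured the areas graphically, in pixels).

def areas_vs_baseline(hr, baseline, dt=1.0):
    """Rectangle-rule areas of the HR curve above/below the baseline."""
    auc = sum(max(v - baseline, 0.0) for v in hr) * dt  # above baseline
    aac = sum(max(baseline - v, 0.0) for v in hr) * dt  # below baseline
    return auc, aac

hr_trace = [92, 95, 101, 99, 88, 90]  # bpm, one sample per second
auc, aac = areas_vs_baseline(hr_trace, baseline=93.0)
print(auc, aac)  # → 16.0 9.0
```

A playback that mostly accelerates the heart yields a large AUC and small AAC; a deceleratory response does the opposite.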

Questionnaire. A modified version of the questionnaire derived from the Hsu and Serpell study 56 was submitted to owners before the beginning of the session, in order to gather information on the canine-human relationship of their dogs (see Suppl. Table 5). Owners were asked to rate their dog's response in a given situation on a scale from zero to four, where a score of zero represented no reaction to the stimulus and a score of four a strong reaction to it. The total score for each query was calculated by adding up the scores obtained for each of the given situations.

Statistical Analysis
Head-orienting response. Given that data for %Res were not normally distributed, the analysis was conducted by means of non-parametric tests (Friedman's ANOVA).
A binomial GLMM analysis was performed to assess the influence of "emotion category", "vocalization gender", "playback order", "sex" and "age" on the test variable "head orienting response", with the "query scales" as covariates and "subjects" as a random factor. To detect differences between the emotion categories, Fisher's Least Significant Difference (LSD) pairwise comparisons were performed. In addition, asymmetries at the group level (i.e. per emotion category) were assessed via a One-Sample Wilcoxon Signed Ranks Test to detect significant deviations from zero.
Reactivity and latency to resume feeding. As both reactivity and latency data contained censored measurements, survival analysis methods were used 57 . Specifically, mixed-effects Cox regression modeling and Kaplan-Meier estimates were used to analyze reactivity and latency to resume feeding, with "emotion category" as the main factor (after a visual inspection of the data we chose "anger" as the reference category) and "subjects" as a random factor. Mixed-effects Cox proportional hazards models were used to analyze the effect of "vocalization gender", "playback order", "sex", "age", "Stress-behaviors" and "query scales" on the test variables "reactivity" and "latency to resume feeding".

Cardiac activity and behavior score. GLMM analyses were performed to assess the influence of "emotion category", "vocalization gender", "playback order", "sex" and "age" on the test variables "HV", "LV", "AUC", "AAC" and "Stress-behaviors", with the "query scales" as covariates and "subjects" as a random factor. To detect differences between the emotion categories, Fisher's Least Significant Difference (LSD) pairwise comparisons were performed.
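The Kaplan-Meier treatment of the censored latency data can be sketched as follows. The latency values are hypothetical; a latency is right-censored when the dog never resumed feeding within the 5-minute (300 s) cap, and the actual analysis additionally fitted mixed-effects Cox models in dedicated statistical software:

```python
# Minimal Kaplan-Meier estimator for right-censored latency-to-resume-feeding
# data (hypothetical values, in seconds). Ties between events and censored
# observations at the same time are not specially handled in this sketch.

def kaplan_meier(times, events):
    """Return (time, survival) pairs; events[i] is False if censored."""
    n_at_risk = len(times)
    survival = 1.0
    curve = []
    for t, observed in sorted(zip(times, events)):
        if observed:  # feeding resumed at time t (event observed)
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1  # censored subjects also leave the risk set
    return curve

# 300 s entries are censored: the dog never resumed feeding in time.
latencies = [12, 45, 300, 30, 300, 8]
resumed = [t < 300 for t in latencies]
for t, s in kaplan_meier(latencies, resumed):
    print(f"t={t}s  S(t)={s:.3f}")
```

Here S(t) is the estimated probability of still not having resumed feeding at time t; emotions whose curves drop more slowly correspond to longer disruptions of feeding.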
Ethics statement. The experiments were conducted according to the protocols approved by the Italian Minister for Scientific Research in accordance with EC regulations and were approved by the Department of Veterinary Medicine (University of Bari) Ethics Committee EC (Approval Number: 3/16); in addition, before the experiment began, the procedure was explained to owners and written informed consent was obtained.
Data availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.