
Positive and negative contexts predict duration of pig vocalisations

Abstract

Emotions are mental states occurring in response to external and internal stimuli and thus form an integral part of an animal’s behaviour. Emotions can be mapped in two dimensions based on their arousal and valence. Whilst good indicators of arousal exist, clear indicators of emotional valence, particularly positive valence, are still rare. However, positively valenced emotions may play a crucial role in social interactions in many species and thus, an understanding of how emotional valence is expressed is needed. Vocalisations are a potential indicator of emotional valence as they can reflect the internal state of the caller. We experimentally manipulated valence, using positive and negative cognitive bias trials, to quantify changes in pig vocalisations. We found that grunts were shorter in positive trials than in negative trials. Interestingly, we did not find differences in the other measured acoustic parameters between the positive and negative contexts as reported in previous studies. These differences in results suggest that acoustic parameters may differ in their sensitivity as indicators of emotional valence. However, it is important to understand how similar contexts are, in terms of their valence, to be able to fully understand how and when acoustic parameters reflect emotional states.

Introduction

Emotions are mental states occurring in response to external or internal stimuli which influence an individual’s fitness1. In the dimensional model of emotions, emotional states can be mapped in two-dimensional space, where valence (positive versus negative) and arousal (intensity) characterise each dimension1,2. Emotional states in non-human animals have been demonstrated in a broad range of species across the taxonomic scale (e.g. insects:3,4; birds:5,6; mammals:7,8). However, clear indicators of positively valenced emotion are rare, because identifying consistent correlates on the valence dimension is difficult9,10. Emotions are multi-componential and each component can vary across the two dimensions of arousal and valence1. The four pillars of emotion assessment in humans are physiological, behavioural, cognitive and subjective verbal report11,12. Quantifying emotions in non-human animals relies on indirect assessment of three of these pillars: physiology, behaviour and cognition; the fourth pillar is solely accessible in humans. Cognitive bias testing assesses the effect of emotion on cognition in animals13,14,15. However, this test requires lengthy specialised training of animals to implement, which itself could potentially alter the emotional state of the individuals being assessed16,17. Vocalisations, however, can be reliably measured and may allow the quantification of emotions in non-human animals as they reflect the inner state of the caller, providing information about an individual’s affective state10,18,19. Thus, emotions expressed in vocalisations may play an important role in shaping social interactions among individuals.

Morton’s motivational-structural rules state that acoustic characteristics of vocalisations are predictable from the context in which they are produced20. Vocalisations produced in more ‘hostile’ contexts are expected to have lower frequencies and longer durations than vocalisations produced in ‘friendly’ or ‘fearful’ contexts20. However, in addition to between-context variation in acoustic characteristics, variation within vocalisation types may also be context-dependent, and may reflect the internal state of the caller21,22,23. In the domestic pig, the most common vocalisation is the grunt, which functions predominantly as a contact call24,25. Information encoded in grunts includes body size25, individual identity26, personality type27 and emotional state28. Pigs’ vocal responses to mental and physical stressors differ depending on the nature of the stressor29. The cognitive bias paradigm has been applied to assess the effects of various factors on emotional valence in pigs7,30,31.

The aim of this study was to test whether vocalisations produced in oppositely valenced contexts differ in their acoustic characteristics. We based the choice of acoustic parameters on previous studies (see Table 1) and measured these in grunts produced in two different situations. In the first situation, the valence of the context was experimentally manipulated using positive and negative cognitive bias training trials. In the second situation, the response to the probe in ambiguous test trials provided the measure of the valence of the context for the individual. The valence of each context was defined behaviourally: individuals approach a stimulus in positive contexts and avoid a stimulus in negative contexts32. We then analysed the effect of optimistic versus pessimistic responding on the acoustic characteristics of grunts, within each situation, in two separate analyses.

Table 1 Overview of the acoustic parameters measured and the underlying rationale for each parameter included in the analysis.

Methods

Animals and housing

We studied crossbred PIC 337, Large White x Landrace pigs, allocated into groups of 18 per pen, balanced for weight and sex, where they remained from 4–10 weeks of age at a research farm (Agri-Food and Biosciences Institute, Hillsborough, Northern Ireland). All test subjects were 10 weeks old at the time of cognitive bias testing. The experiment consisted of three replicates. Within each replicate there were two housing environment treatments: a barren environment and an enriched environment (cf.27,33 and see electronic supplementary material for full details). The barren environment had a partially slatted concrete floor and the minimum legal space allowance of 0.41 m2 per pig, whereas the enriched environment had a solid floor with straw bedding and a greater space allowance of 0.62 m2 per pig.

Ethical note

Ethical approval for this research was granted from University of Lincoln’s College of Science Ethics Committee (COSREC62, 8/9/15). All procedures conformed to the ASAB/ABS Guidelines for the Use of Animals in Research.

Cognitive bias testing

Twenty-seven pigs were tested in a spatial cognitive bias task (see Supplementary Methods for full details of training). On the cognitive bias test day, each pig was exposed once individually to each of the ambiguous probe locations: near positive (NP), middle (M) and near negative (NN), with each probe pseudo-randomly interspersed between pairs of positive (P) and negative (N) training trials, resulting in 9 trials per test (i.e. P, N, M, N, P, NN, N, P, NP). Ambiguous locations were unrewarded, but in positive and negative training trials the locations were baited as in training. Latency to reach the bowl was recorded in all trials. Individuals were given 30 seconds to approach the bowl; if they did not approach in that time, they were assigned a latency of 30 seconds and returned to the start box for the next trial.

The raw latencies to reach the probe violated the assumptions of the linear models; we therefore classified the response to the probe in each trial as “optimistic”, “pessimistic” or “neutral” (cf.31). Pigs’ responses in each trial were classified relative to the mean speed of approach to the trained positive and negative targets in the final training session, using the following formula:

$$\text{adjusted score}=\frac{x}{(y+z)/2}\times 100$$

where x = latency to contact the bowl during the cognitive bias test, y = mean latency to contact the bowl during the positive training trials, z = mean latency to contact the bowl during the negative training trials. As per Carreras et al. (2016), adjusted scores were calculated as percentages; if the adjusted score was ≤75% the response was classified as “optimistic”; if the score was >75 and <125% the response was classified as “neutral”; and if the score was ≥125% the response was classified as “pessimistic”.
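The adjusted-score computation and the Carreras et al. (2016) thresholds can be sketched as follows (a hypothetical illustration; the function and variable names are ours, not from the authors' analysis code):

```python
def adjusted_score(x, y, z):
    """Adjusted latency score as a percentage.

    x: latency to contact the bowl in the test trial (s)
    y: mean latency in the positive training trials (s)
    z: mean latency in the negative training trials (s)
    """
    return x / ((y + z) / 2) * 100


def classify(score):
    """Classify a response following the thresholds of Carreras et al. (2016)."""
    if score <= 75:
        return "optimistic"
    elif score < 125:
        return "neutral"
    else:
        return "pessimistic"


# Example: a pig contacts the probe in 4 s; its mean training-trial
# latencies were 5 s (positive) and 15 s (negative).
score = adjusted_score(4, 5, 15)   # 4 / 10 * 100 = 40.0
print(classify(score))             # optimistic
```

A score well below 100% means the pig approached the probe almost as quickly as a known-rewarded location, hence the "optimistic" label.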

Acoustic recording and analysis

Vocalisations produced during the cognitive bias test were recorded at a distance of 1–4 meters using a Sennheiser ME66/K6 (Sennheiser Electronic GmbH & Co. KG, Wedemark, Germany) directional microphone connected to a Marantz PMD660 (D&M Professional, Kanagawa, Japan) solid state audio recorder (.wav format, sampling frequency: 44.1 kHz, resolution: 16 bit). The microphone was placed on one corner of the arena at a height of 1.5 m and pointed into the test arena (see Fig. S1 for microphone location above test arena). The recorder was switched on at the start of the test session for each individual and all trials were recorded.

All good quality grunts with a high signal-to-noise ratio from the positive, negative and ambiguous trials were selected for analysis (see Fig. 1 for spectrograms). To avoid pseudoreplication, grunts produced within a bout were selected only if they were at least three calls apart. A total of 125 calls were produced when a subject responded optimistically, 153 when a subject responded pessimistically, and 4 when the subject’s response was neutral; these 4 calls were excluded from subsequent analyses. A further 14 calls produced in positive and negative trials where the subject had responded ‘incorrectly’ (i.e. pessimistically in a positive trial or optimistically in a negative trial) were also excluded. This resulted in a total of 264 calls from 23 individuals for analysis (78 calls from ambiguous test trials and 186 from the positive and negative training trials; see Table S1 for full details of the number of calls contributed per individual). Using a custom-built script in PRAAT34, we measured 11 acoustic parameters from each call (see Table 1 for the list of parameters measured and the supplement for further details of the acoustic analysis). Some parameters (amplitude modulations and rate) could not be measured in every call, resulting in missing values; hence the sample size differs between acoustic parameters (Table 2).
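The "at least three calls apart" spacing rule can be sketched as below (a hypothetical reading of the criterion, with the gap interpreted as three positions within an ordered bout; the authors' exact selection procedure may differ):

```python
def select_spaced_calls(bout, min_gap=3):
    """From an ordered bout of calls, keep calls that are at least
    `min_gap` positions apart, to reduce pseudoreplication from
    near-identical consecutive grunts."""
    selected = []  # list of (index, call) pairs
    for i, call in enumerate(bout):
        if not selected or i - selected[-1][0] >= min_gap:
            selected.append((i, call))
    return [call for _, call in selected]


# A bout of eight consecutive grunts labelled g0..g7:
bout = [f"g{i}" for i in range(8)]
print(select_spaced_calls(bout))  # ['g0', 'g3', 'g6']
```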

Figure 1

Grunt vocalisations of domestic pig. Grunt vocalisations from individual 71: (a) is from a positive trial with duration = 0.372 seconds and (b) is from a negative trial with duration = 0.411 seconds. F0 = fundamental frequency and F1 = the first formant (calls are available as audio files “Positive context grunt” and “Negative context grunt” respectively in the supplementary material). Visualisation settings: view range = 0–8000 Hz, window length = 0.03 sec, dynamic range = 65 dB, time steps = 700, frequency steps = 250, Gaussian window.

Table 2 Raw means and SDs, along with results from models for the effect of training trial type, i.e. Negative or Positive, controlled for sex, environment, and individual identity along with statistical results, sample size (N) and P values.

Statistical analysis

Data analysis was conducted using R v. 3.0.2 (R Development Core Team 2008). The statistical analysis of the acoustic parameters was split into two analyses: (1) the trained positive/negative trials, and (2) the ambiguous trials. This was because these two trial types, trained versus ambiguous, were fundamentally different. In the trained trials the bowls were baited with either a positive or negative stimulus and thus were expected to elicit either a positive or negative valence state during that trial. The ambiguous trials, on the other hand, were not baited and, in each of the 3 ambiguous trials, this was the first time the individual had encountered the probe in that location.

Linear mixed models were used to investigate the effect of response type (optimistic or pessimistic) on the acoustic parameters in the positive/negative trials and the ambiguous trials separately, accounting for experimental design (see electronic supplementary material for full details of models). Model residuals were visually inspected for normality and homoscedasticity. Data are presented as mean ± SD.
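The model structure described above can be sketched in Python with statsmodels (the authors fitted their models in R; the simulated data frame, column names and effect structure here are our invention, for illustration only):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: the real response variable would be an
# acoustic parameter such as call duration (s).
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "duration": rng.normal(0.4, 0.1, n),
    "response": rng.choice(["optimistic", "pessimistic"], n),
    "sex": rng.choice(["female", "male"], n),
    "environment": rng.choice(["barren", "enriched"], n),
    "pig_id": rng.choice([f"pig{i}" for i in range(20)], n),
})

# Linear mixed model: response type, sex and environment as fixed
# effects; a random intercept per individual accounts for repeated
# calls from the same pig.
model = smf.mixedlm("duration ~ response + sex + environment",
                    df, groups=df["pig_id"])
result = model.fit()
print(result.summary())
```

Residuals of the fitted model would then be inspected for normality and homoscedasticity, as described above.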

Results

Positive and negative training trials

The duration of calls produced in the negative training trials was significantly longer than the duration of calls produced during the positive training trials (raw mean ± SD: negative = 0.43 ± 0.15 s, positive = 0.35 ± 0.15 s, p = 0.0017). There was no significant difference between the positive and negative training trials in the other acoustic parameters investigated (see Table 2). To correct for multiple testing, we applied a Bonferroni correction; this led to a corrected threshold of p < 0.0045 (i.e. 0.05/11), which did not affect the interpretation of any of the results.
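The Bonferroni arithmetic is simply the family-wise alpha divided by the number of tests; a quick check confirms that the duration result clears the corrected threshold:

```python
alpha = 0.05
n_tests = 11                    # acoustic parameters tested
threshold = alpha / n_tests     # 0.05 / 11 ≈ 0.0045
p_duration = 0.0017             # reported p-value for call duration

print(p_duration < threshold)   # True
```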

Ambiguous trials

There was no significant effect of response type (optimistic or pessimistic) on the acoustic parameters of the grunts from the ambiguous trials (see Table 2).

Discussion

We found that emotional valence is encoded in the duration of calls. Grunts produced during contexts of positive valence were shorter than those produced during contexts of negative valence. Call duration is affected by respiration, and changes in the action or tension of the respiratory muscles may explain the shorter duration of vocalisations in positive compared to negative contexts10. Longer calls produced in response to negative stimuli may function to increase the salience of these, presumably more urgent, calls to conspecifics. Previous studies in pigs have also found shorter call duration in positive contexts compared to negative contexts28,35, with similar findings in horses23, elephants21 and dogs36. Thus, call duration appears to provide a consistent indicator of emotional valence in mammalian vocalisations10.

Interestingly, we did not find significant differences in the other measured acoustic parameters between positive and negative contexts, which differs from recent studies22,37,38. There are several non-mutually exclusive explanations for why the outcomes differ among studies. Firstly, the emotional valence experienced by individuals may not have differed enough to be reflected in the measured acoustic parameters. The only difference between the two contexts was the position of the bowl and whether a reward was received or not, and the trials were carried out in quick succession. The dimensional model views valence as a continuum. Other studies may have induced emotions further apart on the continuum, whilst our experimental contexts may have induced emotions that were closer to each other on the continuum. Thus, an acoustic parameter would need to be very sensitive to subtle differences in valence to be used as an indicator of valences that are closer together on the continuum. Secondly, the experimental design of previous studies differed from ours in several other aspects, including day of recording, number of individuals present during recording and the functional relevance of the context, e.g. social isolation, startling and aggression as negative contexts and food anticipation and affiliative interactions as positive contexts22,28,37,38. Thirdly, it may be that the calls produced here were influenced more by unmeasured factors than by the valence of the contexts. Leliveld and colleagues found that context accounted for only 1.5–11.9% of the variability in the acoustic parameters they measured28, suggesting that many other factors influence the acoustic characteristics of vocalisations.

Vocalisations may be more associated with the expression of arousal than valence39,40. Indeed, one of the main challenges in assessing emotional valence is eliciting oppositely valenced states whilst not affecting arousal, since negatively valenced states also tend to be higher arousal states37. For assessing the characteristics of vocalisations in such conditions, this is problematic, as high arousal is known to impact the acoustic characteristics of vocalisations41. Fundamental frequency parameters and energy quartiles are robust measures of emotional arousal in mammals10,42,43. We did not detect any statistically significant differences in these parameters between the positively and negatively valenced contexts (see Table 2). This suggests that level of arousal did not differ between the positive and negative contexts used here.

In the ambiguous trials, individuals were presented with an ambiguous cue, i.e. the bowl located in the near positive, middle and near negative locations. In contrast to the positive and negative trained trials, we did not find any significant associations between optimistic/pessimistic response type and any of the acoustic parameters measured during the ambiguous trials. However, there was a trend for calls to be shorter when produced during an optimistic response than a pessimistic response. As the training trials were rewarded/unrewarded, we predicted they would induce emotional states of positive and negative valence respectively44. In contrast, responses in ambiguous trials provide a measure of the animals’ underlying emotional state14, rather than inducing any valenced emotional state per se. Thus, optimistic and pessimistic responding in the ambiguous trials may not have been directly comparable to optimistic and pessimistic responding in the training trials.

Emotions are typically measured through behaviour, for example through body posture45, facial expression46,47, and vocalisations48. For social species, the ability to recognise the emotional state of conspecifics is likely to be adaptive as it may enable them to predict the future behaviour of the individual and adjust their behaviour accordingly49,50,51. Individuals of different species are able to perceive, and respond to, differences in emotional state based on differences in the acoustic structure of the same type of vocalisations produced in different contexts51,52,53,54. Changes in the characteristics of vocalisations could potentially function as a signal of emotional state to conspecifics, however further research is required to fully understand the effect of valence on acoustic parameters in animal vocalisations. Recent studies on emotional contagion suggest that cues relating to emotional state can elicit similar changes in behaviour in individuals who have no direct experience of the original cue55,56,57. Thus, changes in vocalisations corresponding with emotions could function to signal emotional state to conspecifics and to regulate social interactions.

In conclusion, duration of vocalisation appears to be a consistent indicator of valence across species10, and our results suggest that it may also be a sensitive indicator of small differences in valence. This is encouraging, as call duration has the advantage of being easy to measure and apply in a wide variety of contexts. To assess whether valence affects the acoustic parameters of calls, it is necessary to understand how similar or dissimilar oppositely valenced states are to each other. This could help uncover which acoustic parameters are sensitive to small changes in valence, and which others may only be affected by large differences in valence.

Data Availability

The datasets generated and analysed during this study are available from the corresponding author on request.

References

1. Mendl, M., Burman, O. H. P. & Paul, E. S. An integrative and functional framework for the study of animal emotion and mood. Proc. R. Soc. B Biol. Sci. 277, 2895–2904 (2010).

2. Russell, J. A. Core Affect and the Psychological Construction of Emotion. Psychol. Rev. 110, 145–172 (2003).

3. Bateson, M., Desire, S., Gartside, S. E. & Wright, G. A. Agitated honeybees exhibit pessimistic cognitive biases. Curr. Biol. 21, 1070–1073 (2011).

4. Perry, C. J., Baciadonna, L. & Chittka, L. Unexpected rewards induce dopamine-dependent positive emotion-like state changes in bumblebees. Science 353, 1529–1531 (2016).

5. Wichman, A., Keeling, L. J. & Forkman, B. Cognitive bias and anticipatory behaviour of laying hens housed in basic and enriched pens. Appl. Anim. Behav. Sci. 140, 62–69 (2012).

6. Cussen, V. A. & Mench, J. A. Personality predicts cognitive bias in captive psittacines, Amazona amazonica. Anim. Behav. 89, 123–130 (2014).

7. Douglas, C., Bateson, M., Walsh, C., Bédué, A. & Edwards, S. A. Environmental enrichment induces optimistic cognitive biases in pigs. Appl. Anim. Behav. Sci. 139, 65–73 (2012).

8. Schino, G., Massimei, R., Pinzaglia, M. & Addessi, E. Grooming, social rank and ‘optimism’ in tufted capuchin monkeys: a study of judgement bias. Anim. Behav. 119, 11–16 (2016).

9. Belin, P. et al. Human cerebral response to animal affective vocalizations. Proc. R. Soc. B Biol. Sci. 275, 473–481 (2008).

10. Briefer, E. F. Vocal expression of emotions in mammals: Mechanisms of production and evidence. J. Zool. 288, 1–20 (2012).

11. Scherer, K. R. & Grandjean, D. Facial expressions allow inference of both emotions and their components. Cogn. Emot. 22, 789–801 (2008).

12. Shuman, V., Clark-Polner, E., Meuleman, B., Sander, D. & Scherer, K. R. Emotion perception from a componential perspective. Cogn. Emot. 31, 47–56 (2017).

13. Baciadonna, L. & McElligott, A. G. The use of judgement bias to assess welfare in farm livestock. Anim. Welf. 24, 81–91 (2015).

14. Harding, E. J., Paul, E. S. & Mendl, M. Cognitive bias and affective state. Nature 427, 312 (2004).

15. Paul, E. S., Harding, E. J. & Mendl, M. Measuring emotional processes in animals: The utility of a cognitive approach. Neurosci. Biobehav. Rev. 29, 469–491 (2005).

16. Roelofs, S., Boleij, H., Nordquist, R. E. & van der Staay, F. J. Making Decisions under Ambiguity: Judgment Bias Tasks for Assessing Emotional State in Animals. Front. Behav. Neurosci. 10, 1–16 (2016).

17. Novak, J., Bailoo, J. D., Melotti, L., Rommen, J. & Würbel, H. An exploration based cognitive bias test for mice: Effects of handling method and stereotypic behaviour. PLoS One 10, 1–16 (2015).

18. Gogoleva, S. S., Volodin, I. A., Volodina, E. V., Kharlamova, A. V. & Trut, L. N. Sign and strength of emotional arousal: Vocal correlates of positive and negative attitudes to humans in silver foxes (Vulpes vulpes). Behaviour 147, 1713–1736 (2010).

19. Manteuffel, G., Puppe, B. & Schön, P. C. Vocalization of farm animals as a measure of welfare. Appl. Anim. Behav. Sci. 88, 163–182 (2004).

20. Morton, E. S. On the occurrence and significance of motivational-structural rules in some bird and mammal sounds. Am. Nat. 111, 855–869 (1977).

21. Soltis, J., Blowers, T. E. & Savage, A. Measuring positive and negative affect in the voiced sounds of African elephants (Loxodonta africana). J. Acoust. Soc. Am. 129, 1059–1066 (2011).

22. Maigrot, A.-L., Hillmann, E. & Briefer, E. Encoding of Emotional Valence in Wild Boar (Sus scrofa) Calls. Animals 8, 85 (2018).

23. Briefer, E. F. et al. Segregation of information about emotional arousal and valence in horse whinnies. Sci. Rep. 4, 1–11 (2015).

24. Kiley, M. The vocalizations of ungulates, their causation and function. Z. Tierpsychol. 31, 171–222 (1972).

25. Garcia, M., Wondrak, M., Huber, L. & Fitch, W. T. Honest signaling in domestic piglets (Sus scrofa domesticus): vocal allometry and the information content of grunt calls. J. Exp. Biol. 219, 1913–1921 (2016).

26. Blackshaw, J. K., Jones, D. N. & Thomas, F. J. Vocal individuality during suckling in the intensively housed domestic pig. Appl. Anim. Behav. Sci. 50, 33–41 (1996).

27. Friel, M., Kunc, H. P., Griffin, K., Asher, L. & Collins, L. M. Acoustic signalling reflects personality in a social mammal. R. Soc. Open Sci. 3, 1–9 (2016).

28. Leliveld, L. M. C., Düpjan, S., Tuchscherer, A. & Puppe, B. Behavioural and physiological measures indicate subtle variations in the emotional valence of young pigs. Physiol. Behav. 157, 116–124 (2016).

29. Düpjan, S., Schön, P. C., Puppe, B., Tuchscherer, A. & Manteuffel, G. Differential vocal responses to physical and mental stressors in domestic pigs (Sus scrofa). Appl. Anim. Behav. Sci. 114, 105–115 (2008).

30. Brajon, S., Laforest, J. P., Schmitt, O. & Devillers, N. The way humans behave modulates the emotional state of piglets. PLoS One 10, 1–17 (2015).

31. Carreras, R. et al. Effect of gender and halothane genotype on cognitive bias and its relationship with fear in pigs. Appl. Anim. Behav. Sci. 177, 12–18 (2016).

32. Bateson, M. Optimistic and pessimistic biases: a primer for behavioural ecologists. Curr. Opin. Behav. Sci. 12, 115–121 (2016).

33. Asher, L., Friel, M., Griffin, K. & Collins, L. M. Mood and personality interact to determine cognitive biases in pigs. Biol. Lett. 12, 20160402 (2016).

34. Reby, D. & McComb, K. Anatomical constraints generate honesty: Acoustic cues to age and weight in the roars of red deer stags. Anim. Behav. 65, 519–530 (2003).

35. Tallet, C. et al. Encoding of Situations in the Vocal Repertoire of Piglets (Sus scrofa): A Comparison of Discrete and Graded Classifications. PLoS One 8 (2013).

36. Taylor, A. M., Reby, D. & McComb, K. Context-related variation in the vocal growling behaviour of the domestic dog (Canis familiaris). Ethology 115, 905–915 (2009).

37. Briefer, E. F., Tettamanti, F. & McElligott, A. G. Emotions in goats: Mapping physiological, behavioural and vocal profiles. Anim. Behav. 99, 131–143 (2015).

38. Maigrot, A. L., Hillmann, E., Anne, C. & Briefer, E. F. Vocal expression of emotional valence in Przewalski’s horses (Equus przewalskii). Sci. Rep. 7, 1–11 (2017).

39. Bachorowski, J.-A. & Owren, M. J. The sounds of emotion: Production and perception of affect-related vocal acoustics. Ann. N. Y. Acad. Sci. 1000, 244–265 (2003).

40. Laukka, P., Juslin, P. N. & Bresin, R. A dimensional approach to vocal expression of emotion. Cogn. Emot. 19, 633–653 (2005).

41. Goudbeek, M. & Scherer, K. Beyond arousal: Valence and potency/control cues in the vocal expression of emotion. J. Acoust. Soc. Am. 128, 1322 (2010).

42. Linhart, P., Ratcliffe, V. F., Reby, D. & Špinka, M. Expression of emotional arousal in two different piglet call types. PLoS One 10 (2015).

43. Briefer, E. F. et al. Horse vocalisations supplementary material. Sci. Rep. 4 (2015).

44. Rolls, E. T. Emotion and decision-making explained: A précis. Cortex 59, 185–193 (2014).

45. Wemelsfelder, F., Haskell, M., Mendl, M. T., Calvert, S. & Lawrence, A. B. Diversity of behaviour during novel object tests is reduced in pigs housed in substrate-impoverished conditions. Anim. Behav. 60, 385–394 (2000).

46. McLennan, K. M. et al. Development of a facial expression scale using footrot and mastitis as models of pain in sheep. Appl. Anim. Behav. Sci. 176, 19–26 (2016).

47. Finlayson, K., Lampe, J. F., Hintze, S., Würbel, H. & Melotti, L. Facial indicators of positive emotions in rats. PLoS One 11, 1–24 (2016).

48. Burgdorf, J. & Panksepp, J. The neurobiology of positive emotions. Neurosci. Biobehav. Rev. 30, 173–187 (2006).

49. Müller, C. A., Schmitt, K., Barber, A. L. A. & Huber, L. Dogs can discriminate emotional expressions of human faces. Curr. Biol. 25, 601–605 (2015).

50. Brudzynski, S. M. Ethotransmission: Communication of emotional states through ultrasonic vocalization in rats. Curr. Opin. Neurobiol. 23, 310–317 (2013).

51. Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A. & de Boer, B. Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Curr. Zool. 63, 445–456 (2017).

52. Briefer, E. F. et al. Perception of emotional valence in horse whinnies. Front. Zool. 14, 1–12 (2017).

53. Faragó, T., Pongrácz, P., Range, F., Virányi, Z. & Miklósi, Á. ‘The bone is mine’: affective and referential aspects of dog growls. Anim. Behav. 79, 917–925 (2010).

54. Filippi, P. et al. Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proc. R. Soc. B Biol. Sci. 284, 1–9 (2017).

55. Reimert, I., Fong, S., Rodenburg, T. B. & Bolhuis, J. E. Emotional states and emotional contagion in pigs after exposure to a positive and negative treatment. Appl. Anim. Behav. Sci. 193, 37–42 (2017).

56. Quervel-Chaumette, M., Faerber, V., Faragó, T., Marshall-Pescini, S. & Range, F. Investigating empathy-like responding to conspecifics’ distress in pet dogs. PLoS One 11, 1–15 (2016).

57. Edgar, J. L., Lowe, J. C., Paul, E. S. & Nicol, C. J. Avian maternal response to chick distress. Proc. R. Soc. B Biol. Sci. 278, 3129–3134 (2011).

58. Maruščáková, I. L. et al. Humans (Homo sapiens) judge the emotional content of piglet (Sus scrofa domestica) calls based on simple acoustic parameters, not personality, empathy, nor attitude toward animals. J. Comp. Psychol. 129, 121–131 (2015).


Acknowledgements

We thank Luigi Baciadonna and Elodie Briefer for their invaluable help and advice on the acoustic analysis and we acknowledge David Reby who wrote the PRAAT script used in the acoustic analysis. We would also like to thank the staff at AFBI Hillsborough for their care of the pigs and use of the experimental rooms and Grace Carroll for her help with data collection. We thank the three anonymous reviewers for their helpful comments on the manuscript. The work presented here was funded as part of a BBSRC Grant No. (BB/K002554/2) and M. Friel was funded by DEL-NI and Queen’s University Belfast.

Author information

Affiliations

Authors

Contributions

Conceived and designed the experiment (M.F., L.M.C., H.K., L.A.). Performed the experiment (M.F., K.G.). Statistical analyses (M.F.). Contributed to reagents, materials and analytical tools (L.M.C., H.K.). Contributed to the writing of the manuscript (M.F., K.G., H.K., L.A. and L.M.C.).

Corresponding author

Correspondence to Lisa M. Collins.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Friel, M., Kunc, H.P., Griffin, K. et al. Positive and negative contexts predict duration of pig vocalisations. Sci Rep 9, 2062 (2019). https://doi.org/10.1038/s41598-019-38514-w
