
DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations

Neuropsychopharmacology (2019)

Abstract

Rodents engage in social communication through a rich repertoire of ultrasonic vocalizations (USVs). Recording and analysis of USVs have broad utility across diverse behavioral tests and can be performed noninvasively in almost any rodent behavioral model, providing rich insight into the emotional state and motor function of the test animal. Despite strong evidence that USVs serve an array of communicative functions, technical and financial barriers have kept most laboratories from adopting vocalization analysis. Recently, deep learning has revolutionized machine hearing and vision by allowing computers to perform human-like activities, including seeing, listening, and speaking. Such systems are built from biomimetic "deep" artificial neural networks. Here, we present DeepSqueak, a USV detection and analysis software suite that performs human-quality USV detection and classification automatically, rapidly, and reliably using a state-of-the-art regional convolutional neural network architecture (Faster R-CNN). DeepSqueak was engineered to give non-experts easy entry into USV detection and analysis while remaining flexible and adaptable, with a graphical user interface and numerous input and analysis options. Compared with other modern programs and with manual analysis, DeepSqueak reduced false positives, increased detection recall, dramatically reduced analysis time, optimized automatic syllable classification, and performed automatic syntax analysis on arbitrarily large numbers of syllables, all while retaining manual selection review and supervised classification. DeepSqueak allows USV recording and analysis to be added easily to existing rodent behavioral procedures, revealing a wide range of innate responses that, combined with conventional outcome measures, provide another dimension of insight into behavior.
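The core technical idea in the abstract is to treat USV detection as a machine-vision problem: audio is rendered as spectrogram images so that a Faster R-CNN-style object detector can place bounding boxes around calls in time-frequency space, and the resulting call sequences can then be analyzed for syntax. Below is a minimal Python sketch of that pipeline under stated assumptions; DeepSqueak itself is a MATLAB application, and the `detector` handle and function names here are hypothetical illustrations, not its actual API.

```python
# Illustrative sketch only -- DeepSqueak is a MATLAB GUI; this re-expresses the
# general idea (USV detection as object detection on spectrogram images) in
# Python. The `detector` object below is hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram


def audio_to_sonogram(wav_path, nperseg=512, noverlap=256):
    """Convert an ultrasonic recording into a normalized log-power spectrogram."""
    fs, audio = wavfile.read(wav_path)  # USV recordings are often sampled at 200-250 kHz
    freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
    img = 10.0 * np.log10(sxx + 1e-12)                 # power in dB to compress dynamic range
    img = (img - img.min()) / (img.max() - img.min())  # scale to [0, 1] for the network
    return freqs, times, img


# A trained Faster R-CNN-style detector would then return one bounding box per
# call, e.g. (t_start, t_stop, f_low, f_high, confidence):
#   boxes = detector.detect(img)  # hypothetical call


def transition_matrix(call_labels, n_types):
    """First-order syntax analysis: probability of each call type following another."""
    counts = np.zeros((n_types, n_types))
    for a, b in zip(call_labels[:-1], call_labels[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1.0, row_sums)  # guard empty rows
```

Framing calls as boxes on an image is what lets standard machine-vision machinery, such as region proposal networks, non-maximum suppression, and confidence thresholds, apply directly to bioacoustic data.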


Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Acknowledgements

The authors thank Dr. David J. Barker, Dr. Aaron M. Johnson, Dr. David Euston, and Dr. Jonathan Chabout for contributing vocalization recordings, and Dr. Michele Kelly for editing.

Author information

Author notes

  1. These authors contributed equally: Kevin R. Coffey, Russell G. Marx

Affiliations

  1. Psychiatry & Behavioral Sciences, University of Washington, Seattle, WA, 98104, USA

Kevin R. Coffey, Russell G. Marx & John F. Neumaier


Contributions

Kevin Coffey and Russell Marx designed and coded the software, created the figures, and wrote and edited the manuscript. John Neumaier wrote and edited the manuscript.

Corresponding author

Correspondence to John F. Neumaier.


About this article


DOI: https://doi.org/10.1038/s41386-018-0303-6