Background & Summary

Comparative research on nonhuman animals can potentially shed light on the evolution of language and speech [1]. For instance, the study of animal vocal communication may reveal the roots of syntax and semantics [2,3]. Nonhuman vocalizations are often cryptic to a human observer, and with little prior knowledge about the animal-relevant acoustics, identifying the essential information in them becomes an arduous task. Still, the discovery of nuances, which may be subtle to the human ear but important to the communicating animal, becomes more plausible when facilitated by large recording datasets. Here we present an extremely large collection of vocalizations of Egyptian fruit bats (Rousettus aegyptiacus). Moreover, many of the vocalizations in this database are accompanied by relevant information, such as the identities of the emitter and the addressee of the vocalization and the related behavioural context.

Bats are social mammals which use rich vocal communication [4–9], and have been found to possess the capability of vocal learning [10–13]. As social animals which interact with each other almost exclusively in the dark, and given the versatile vocal skills found in this group (e.g., refs [14,15]), bats make an interesting model for vocal communication studies. Egyptian fruit bats live in colonies of dozens to thousands of individuals, and may live to the age of (at least) 25 years [16]. They are extremely vocal, with most vocal interactions involving mildly aggressive encounters in the roost [17], and their vocalizations are composed of sequences of multi-harmonic, low-fundamental syllables [13,17].

In this study, all recordings were conducted in acoustically isolated cages, specifically designed for this purpose (Fig. 1). Bats were housed in these cages for periods of several months and were recorded around the clock with microphones and video cameras, which enabled detailed annotation of the interactions.
Importantly, this study did not focus a priori on specific types of vocalizations (such as songs, alarms, or distress calls); hence the dataset mostly comprises the vocalizations accompanying the everyday pairwise interactions of the bats. Furthermore, the data cover the complete repertoire used by the bats during their housing in the experimental setup. The dataset also includes vocalizations of bats which were born inside the experimental setup and recorded from birth to adulthood under different experimental conditions (in isolation or in a group). This collection therefore enables the tracking of the vocal ontogeny of bat pups [13]. We provide the raw recordings (audio files of a few seconds each), which are usually of high signal-to-noise ratio. Retrieving the relevant, voiced segments from a recorded audio track is often the first obstacle in analysing audio data; we therefore also provide a fairly accurate segmentation of the data (generated with the method described in ref. [13]), which offers an easy and straightforward way to process and analyse the vocalizations. This rich dataset can potentially be used to enhance our understanding of the origins of semantics (as in ref. [17]), the ontogeny of mammalian vocal communication (as in ref. [13]), or even the putative use of syntax, as observed, for instance, in the courtship songs of birds [18,19] and the songs of a few bat species [7,20].

Figure 1: The recording setup.

(a) Colony chamber (length: 190 cm; width: 90 cm; height: 82 cm). Chambers of this type housed Colony-treatment adults and pups, as well as groups of weaned pups from both treatments. (b) Isolation chamber (length: 120 cm; width: 70 cm; height: 60 cm). Chambers of this type housed Isolation-treatment mothers with their pre-weaned pups. Legend: 1. wooden box; 2. outer box for acoustic isolation; 3. window allowing transition between the ‘roost’ compartment and the ‘foraging’ compartment, and allowing some light from the ‘foraging’ compartment to penetrate the ‘roost’ during the day; 4. foam for echo reduction; 5. plastic mesh facilitating hanging from the ceiling; 6. airflow ventilators; 7. feeding skewers; 8. lights (active during daytime); 9. ultrasonic microphones; 10. infrared-sensitive video cameras; 11. loudspeakers (not used in this study).

Methods

Animal retrieval and care

All adult bats (Rousettus aegyptiacus), females in late pregnancy and males, were caught in a natural roost near Herzliya, Israel. This roost is inhabited by a colony of 5,000 to 10,000 bats. All recorded pups were born in the experimental setup to the wild-caught females. The bats were kept in acoustic chambers large enough to allow flight (Fig. 1), and were fed with a variety of fruits. Pups were separated from their mothers, and joined together (if previously isolated; see below), after all pups were observed feeding on fruit by themselves. All experiments were reviewed and approved by the Institutional Animal Care and Use Committee of Tel Aviv University (Number L-13-016). The use of bats was approved by the Israeli National Park Authority.

Experimental setup

Two types of chambers were used to house the bats: colony chambers (Fig. 1a) for most of the recordings, and (smaller) isolation chambers (Fig. 1b) for the recording of pre-weaned isolated pups. The chambers were acoustically isolated (see ref. [13] for isolation verification methods) and their walls were covered with foam to diminish echoes. The chambers were continuously monitored with IR-sensitive cameras and omnidirectional electret ultrasound microphones (Avisoft-Bioacoustics Knowles FG-O). Audio recordings were conducted using an Avisoft-Bioacoustics UltraSoundGate 1216H A/D converter at a sampling rate of 250 kHz.

Recording settings

Two types of treatments are included in our data: colony and isolation (Table 1). In the colony treatment, adult bats were housed together, usually a few females and one male, in a colony chamber (Fig. 1a), and pups were born to the females in this chamber. In the isolation treatment, each pregnant female was housed alone in a private isolation chamber (Fig. 1b) and gave birth to one pup in this chamber. After weaning, pups of both treatments were housed in colony chambers without adults. The recordings were conducted from May 2012 to June 2013, and in February 2014, in chambers of the different treatments in parallel, with each chamber recorded continuously (see Table 1 for the recording periods of each group, and Table 2 for the treatment assignment of individual bats). Importantly, we include in this database all of the recordings in which our automatic tools identified social calls. Thus, the database can be regarded as a practically complete representation of the social vocal communication used by the bats during the recording periods, and statistics about the total usage of vocalizations can be safely drawn. Recordings may be missing only on rare occasions, due to short periods of technical problems such as power cuts or the replacement of malfunctioning microphones.

Table 1 Recording settings.
Table 2 Description of recorded subjects.

Data Records

The data (Data Citation 1) consist of:

  1. 293,238 recorded audio files (WAV format; sampling rate: 250 kHz; depth: 16 bit). The files are compressed (and can be extracted) using 7-zip (www.7-zip.org).

  2. One annotation file: Annotations.csv, with 91,080 annotations. These annotations were obtained from the videos (see below) and include details such as the emitter and the context of each vocalization. The content of each column in the annotation file is described in Table 3, and descriptions of contexts and behaviours are given in Table 4. Each annotation corresponds to sequences of vocalizations in one file. Most files include a single interaction and, correspondingly, a single annotation, though some files record several interactions (and may carry several annotations). Accordingly, columns 9 and 10 in Annotations.csv specify the location in the file to which each annotation refers (see Table 3).

     Table 3 Annotation details.
     Table 4 Annotated contexts and behaviours.

  3. One file describing the audio files: FileInfo.csv, which includes the exact recording time, the recording channel, and the exact positions of the voiced segments in each file.

  4. One metadata file: Metadata.pdf, with details about the subjects and the annotation definitions (Tables 1,2,3,4).

  5. A set of audio example files.

  6. Example videos of different interactions.

  7. A folder with example raw video recordings.

  8. Sample Matlab code exemplifying the segmentation and noise filtering of a raw audio recording. A similar process was used to obtain the start and end positions of the voiced segments (given in FileInfo.csv) and to filter out voiceless files; parameter adjustment might be required for specific tasks.
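As a starting point, the tables can be read with standard CSV tooling. The sketch below is in Python (an assumption on our part; the dataset itself ships Matlab sample code) and pulls out columns 9 and 10 of an annotation row, which, as noted above, give the within-file location of each annotation; all other column meanings should be taken from Table 3, and whether the first row is a header should be checked against the file itself.

```python
import csv
from io import StringIO

def load_annotations(csv_text):
    """Parse the text of Annotations.csv into a list of rows.

    Column meanings are documented in Table 3 of the metadata;
    check the file itself for the presence of a header row.
    """
    return list(csv.reader(StringIO(csv_text)))

def annotation_span(row):
    """Return the within-file location fields of one annotation row
    (columns 9 and 10, 1-based; indices 8 and 9 here)."""
    return row[8], row[9]
```

In practice one would read the file with `open('Annotations.csv', newline='')` and pass its contents to `load_annotations`.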

The recorded audio files are divided into folders of no more than 10,000 files for convenience of use (this division has no other significance). Note that two non-annotated files recorded at the same time, in the same treatment, on different channels might be different recordings of the same call, though this is not necessarily the case, and it is never the case for annotated files. Such duplicates can be excluded by the user by inspecting the recordings themselves. The annotation and metadata files are in comma-separated values (CSV) format to ease their use with automatic tools and to allow their direct import into spreadsheet software. The metadata file includes descriptions of all identifiers in the annotation file. The example files include a few audio files exemplifying different recorded sounds, to familiarize the user with the recorded data. These examples include social call syllables, isolation calls of young pups, echolocation clicks, and examples of background noises (e.g., cage noise, a direct hit on a microphone, etc., all of which are rare in this database).
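The duplicate caveat above can be screened for automatically. The following sketch (Python is our choice here; the tuple fields are placeholders for the corresponding FileInfo.csv columns) groups file records by recording time and flags groups spanning more than one channel. Flagged groups are only candidate duplicates and, per the caveat, still require inspection of the recordings themselves.

```python
from collections import defaultdict

def flag_possible_duplicates(records):
    """Flag files that may be simultaneous recordings of one call.

    `records` is an iterable of (filename, recording_time, channel)
    tuples drawn from FileInfo.csv (within one treatment). Returns a
    dict mapping each recording time at which more than one channel
    was active to the (filename, channel) pairs recorded at that time.
    """
    by_time = defaultdict(list)
    for fname, rec_time, channel in records:
        by_time[rec_time].append((fname, channel))
    return {t: files for t, files in by_time.items()
            if len({ch for _, ch in files}) > 1}
```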

Technical Validation

The annotation types (contexts and behaviours) were defined by YP and MT. The recordings were annotated from the videos by MT and EP, or by trained students. These observers were certified after annotating a few recording days, which were then validated by an expert (YP or MT). In annotating the recordings we adopted a conservative approach, designating as ‘unknown’ any type of data about which we had any doubt. Despite the training of the observers, some noise might have been introduced during the hundreds of hours of manual annotation; we therefore estimated an error rate with a post-hoc quality test: 435 annotated recordings were sampled at random and carefully re-annotated by EP, MT, and YP. Errors were counted either when there was a discrepancy between the post-hoc and the original annotations, or when the post-hoc examination concluded that some doubt still remained. The error rates were 2.1% (95% confidence interval [CI]: 0.8–3.4%) for the emitter identification, 2.1% (95% CI: 0.8–3.4%) for the addressee identification, and 4% (95% CI: 2.2–5.8%) for the context identification. We thus estimate the accuracy of the annotations as 97.9% for the emitter and the addressee, and 96% for the context.
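The reported intervals are consistent with a simple normal-approximation (Wald) binomial confidence interval on the n = 435 sample; the attribution of this method to the original analysis is our assumption, but the quick check below (Python) reproduces the reported bounds.

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval
    for a proportion p estimated from n samples."""
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# Reproduce the reported intervals on the n = 435 re-annotated sample:
lo, hi = wald_ci(0.021, 435)   # emitter/addressee error rate
print(round(lo * 100, 1), round(hi * 100, 1))   # → 0.8 3.4
lo, hi = wald_ci(0.04, 435)    # context error rate
print(round(lo * 100, 1), round(hi * 100, 1))   # → 2.2 5.8
```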

Usage Notes

The FileInfo.csv file includes automatically generated start and end positions of the voiced segments (in samples) for each file. This enables an immediate analysis of the data without any pre-processing. However, we encourage users to verify the suitability of the automatic segmentation (given in FileInfo.csv) to their needs by reviewing it in a random sample of recordings. To facilitate such a review, and familiarity with the database, we provide a small library of examples of the different sounds which might be encountered in the raw recordings, including social calls (the core interest of this study, and the most common sounds in the database), echolocation clicks (which are sporadically recorded before or after social calls), cage noises (relatively rare), and pup isolation calls (which are distinct from adults’ calls). For analyses sensitive to possible differences between microphones, one may use the ‘Recording channel’ field in FileInfo.csv (different channels represent different microphones); note that some microphones might have been replaced during the experiment, although such replacements were rare.
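Given the start and end positions (in samples) from FileInfo.csv, a voiced segment can be cut directly from the corresponding WAV file. A minimal sketch using Python's standard library, assuming mono 16-bit files as described above (the channel count and sample width of each file should be verified, as the assertion does here):

```python
import array
import wave

def read_voiced_segment(wav_path, start_sample, end_sample):
    """Read one voiced segment from a 16-bit mono WAV file; positions
    are given in samples, as listed in FileInfo.csv."""
    with wave.open(wav_path, "rb") as w:
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        w.setpos(start_sample)
        frames = w.readframes(end_sample - start_sample)
    # Interpret raw bytes as signed 16-bit integers (assumes a
    # little-endian host, matching the WAV byte order).
    samples = array.array("h")
    samples.frombytes(frames)
    return samples
```

The returned array can be fed directly into a spectrogram routine for a quick visual review of the automatic segmentation.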

Additional Information

How to cite this article: Prat, Y. et al. An annotated dataset of Egyptian fruit bat vocalizations across varying contexts and during vocal ontogeny. Sci. Data 4:170143 doi: 10.1038/sdata.2017.143 (2017).

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.