Unsupervised identification of the internal states that shape natural behavior

Abstract

Internal states shape stimulus responses and decision-making, but we lack methods to identify them. To address this gap, we developed an unsupervised method to identify internal states from behavioral data and applied it to a dynamic social interaction. During courtship, Drosophila melanogaster males pattern their songs using feedback cues from their partner. Our model uncovers three latent states underlying this behavior and is able to predict moment-to-moment variation in song-patterning decisions. These states correspond to different sensorimotor strategies, each of which is characterized by different mappings from feedback cues to song modes. We show that a pair of neurons previously thought to be command neurons for song production are sufficient to drive switching between states. Our results reveal how animals compose behavior from previously unidentified internal states, which is a necessary step for quantitative descriptions of animal behavior that link environmental cues, internal needs, neuronal activity and motor outputs.


Fig. 1: A model with hidden states effectively predicts song patterning.
Fig. 2: Three sensorimotor strategies.
Fig. 3: Internal states are defined by distinct mappings between feedback cues and song behavior.
Fig. 4: Optogenetic activation of song pathway neurons and state switching.

Data availability

Data are available at http://arks.princeton.edu/ark:/88435/dsp01rv042w888.

Code availability

The code for tracking flies (DeepFlyTrack) is available at https://github.com/murthylab/DeepFlyTrack. The code for running the GLM–HMM algorithm is available at https://github.com/murthylab/GLMHMM.
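The GLM–HMM underlying this work couples two ingredients: per-state GLMs that map feedback cues to song-mode emission probabilities, and cue-dependent transition probabilities between hidden states. The sketch below is not the API of the murthylab/GLMHMM repository; it is a minimal, self-contained illustration of the model class, with made-up dimensions and random parameters (`W_emit`, `W_trans` are hypothetical), showing how the HMM forward pass yields a moment-to-moment posterior over hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)

K, C, D, T = 3, 4, 5, 200  # hidden states, song modes, feedback cues, time bins

# Hypothetical parameters: per-state emission GLM weights (cues -> song modes)
# and per-state transition GLM weights (cues -> next state).
W_emit = rng.normal(0, 0.5, size=(K, C, D))
W_trans = rng.normal(0, 0.5, size=(K, K, D))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def forward_posterior(X, y):
    """Filtering distribution over hidden states given cues X (T, D) and an
    observed song-mode sequence y (T,), via the standard HMM forward pass."""
    alpha = np.full(K, 1.0 / K)            # uniform initial state belief
    post = np.zeros((len(y), K))
    for t in range(len(y)):
        p_emit = softmax(W_emit @ X[t])    # (K, C): emission probs per state
        alpha = alpha * p_emit[:, y[t]]    # weight by likelihood of y[t]
        alpha /= alpha.sum()
        post[t] = alpha
        A = softmax(W_trans @ X[t])        # (K, K) cue-driven transition matrix
        alpha = alpha @ A                  # propagate belief one step forward
    return post

X = rng.normal(size=(T, D))
y = rng.integers(0, C, size=T)
post = forward_posterior(X, y)
```

In the actual model the weights are fit by expectation-maximization rather than drawn at random; the forward pass shown here is the same machinery used to assign each moment of behavior to a state.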

References

  1. Berman, G. J. Measuring behavior across scales. BMC Biol. 16, 23 (2018).

  2. Calhoun, A. J. & Murthy, M. Quantifying behavior to solve sensorimotor transformations: advances from worms and flies. Curr. Opin. Neurobiol. 46, 90–98 (2017).

  3. Egnor, S. E. R. & Branson, K. Computational analysis of behavior. Annu. Rev. Neurosci. 39, 217–236 (2016).

  4. Anderson, D. J. Circuit modules linking internal states and social behaviour in flies and mice. Nat. Rev. Neurosci. 17, 692–704 (2016).

  5. Stringer, C. et al. Spontaneous behaviors drive multidimensional, brain-wide activity. Science 364, eaav7893 (2019).

  6. Musall, S., Kaufman, M. T., Gluf, S. & Churchland, A. A. Single-trial neural dynamics are dominated by richly varied movements. Nat. Neurosci. 22, 1677–1686 (2019).

  7. Saleem, A. B., Ayaz, A., Jeffery, K. J., Harris, K. D. & Carandini, M. Integration of visual motion and locomotion in mouse visual cortex. Nat. Neurosci. 16, 1864–1869 (2013).

  8. Suver, M. P., Mamiya, A. & Dickinson, M. H. Octopamine neurons mediate flight-induced modulation of visual processing in Drosophila. Curr. Biol. 22, 2294–2302 (2012).

  9. Liu, M., Sharma, A. K., Shaevitz, J. W. & Leifer, A. M. Temporal processing and context dependency in Caenorhabditis elegans response to mechanosensation. eLife 7, e36419 (2018).

  10. van Breugel, F., Huda, A. & Dickinson, M. H. Distinct activity-gated pathways mediate attraction and aversion to CO2 in Drosophila. Nature 564, 420–424 (2018).

  11. Remedios, R. et al. Social behaviour shapes hypothalamic neural ensemble representations of conspecific sex. Nature 550, 388–392 (2017).

  12. Zhang, S. X., Miner, L. E., Boutros, C. L., Rogulja, D. & Crickmore, M. A. Motivation, perception, and chance converge to make a binary decision. Neuron 99, 376–388 (2018).

  13. Hoopfer, E. D., Jung, Y., Inagaki, H. K., Rubin, G. M. & Anderson, D. J. P1 interneurons promote a persistent internal state that enhances inter-male aggression in Drosophila. eLife 4, e11346 (2015).

  14. Leinwand, S. G. & Chalasani, S. H. Neuropeptide signaling remodels chemosensory circuit composition in Caenorhabditis elegans. Nat. Neurosci. 16, 1461–1467 (2013).

  15. Root, C. M. et al. A presynaptic gain control mechanism fine-tunes olfactory behavior. Neuron 59, 311–321 (2008).

  16. Iigaya, K., Fonseca, M. S., Murakami, M., Mainen, Z. F. & Dayan, P. An effect of serotonergic stimulation on learning rates for rewards apparent after long intertrial intervals. Nat. Commun. 9, 2477 (2018).

  17. Gründemann, J. et al. Amygdala ensembles dynamically encode behavioral states. Science 364, eaav8736 (2019).

  18. Dietrich, M. O., Zimmer, M. R., Bober, J. & Horvath, T. L. Hypothalamic AgRP neurons drive stereotypic behaviors beyond feeding. Cell 160, 1222–1232 (2015).

  19. Berman, G. J., Choi, D. M., Bialek, W. & Shaevitz, J. W. Mapping the stereotyped behaviour of freely moving fruit flies. J. R. Soc. Interface 11, 99 (2014).

  20. Wiltschko, A. B. et al. Mapping sub-second structure in mouse behavior. Neuron 88, 1121–1135 (2015).

  21. Eyjolfsdottir, E., Branson, K., Yue, Y. & Perona, P. Learning recurrent representations for hierarchical behavior modeling. Preprint at arXiv https://arxiv.org/abs/1611.00094 (2016).

  22. Katsov, A. Y., Freifeld, L., Horowitz, M., Kuehn, S. & Clandinin, T. R. Dynamic structure of locomotor behavior in walking fruit flies. eLife 6, e26410 (2017).

  23. Corrado, G. S., Sugrue, L. P., Seung, H. S. & Newsome, W. T. Linear-nonlinear-Poisson models of primate choice dynamics. J. Exp. Anal. Behav. 84, 581–617 (2005).

  24. Coen, P. et al. Dynamic sensory cues shape song structure in Drosophila. Nature 507, 233–237 (2014).

  25. Calhoun, A. J. et al. Neural mechanisms for evaluating environmental variability in Caenorhabditis elegans. Neuron 86, 428–441 (2015).

  26. Tai, L.-H., Lee, A. M., Benavidez, N., Bonci, A. & Wilbrecht, L. Transient stimulation of distinct subpopulations of striatal neurons mimics changes in action value. Nat. Neurosci. 15, 1281–1289 (2012).

  27. Bennet-Clark, H. C. & Ewing, A. W. Stimuli provided by courtship of male Drosophila melanogaster. Nature 215, 669–671 (1967).

  28. Coen, P., Xie, M., Clemens, J. & Murthy, M. Sensorimotor transformations underlying variability in song intensity during Drosophila courtship. Neuron 89, 629–644 (2016).

  29. Clemens, J. et al. Discovery of a new song mode in Drosophila reveals hidden structure in the sensory and neural drivers of behavior. Curr. Biol. 28, 2400–2412 (2018).

  30. Bussell, J. J., Yapici, N., Zhang, S. X., Dickson, B. J. & Vosshall, L. B. Abdominal-B neurons control Drosophila virgin female receptivity. Curr. Biol. 24, 1584–1595 (2014).

  31. Clemens, J. et al. Connecting neural codes with behavior in the auditory system of Drosophila. Neuron 87, 1332–1343 (2015).

  32. von Philipsborn, A. C. et al. Neuronal control of Drosophila courtship song. Neuron 69, 509–522 (2011).

  33. Escola, S., Fontanini, A., Katz, D. & Paninski, L. Hidden Markov models for the stimulus-response relationships of multistate neural systems. Neural Comput. 23, 1071–1132 (2011).

  34. Sharma, A., Johnson, R., Engert, F. & Linderman, S. Point process latent variable models of larval zebrafish behavior. In Proc. Neural Information Processing Systems 31 (eds Bengio, S. et al.) 10942–10953 (Curran Associates, 2018).

  35. Berman, G. J., Bialek, W. & Shaevitz, J. W. Predictability and hierarchy in Drosophila behavior. Proc. Natl Acad. Sci. USA 113, 11943–11948 (2016).

  36. Ding, Y. et al. Neural evolution of context-dependent fly song. Curr. Biol. 29, 1089–1099 (2019).

  37. Arthur, B. J., Sunayama-Morita, T., Coen, P., Murthy, M. & Stern, D. L. Multi-channel acoustic recording and automated analysis of Drosophila courtship songs. BMC Biol. 11, 11 (2013).

  38. McKellar, C. E. et al. Threshold-based ordering of sequential actions during Drosophila courtship. Curr. Biol. 29, 426–434 (2019).

  39. Pereira, T. D. et al. Fast animal pose estimation using deep neural networks. Nat. Methods 16, 117–125 (2019).

  40. Vidaurre, D., Myers, N. E., Stokes, M., Nobre, A. C. & Woolrich, M. W. Temporally unconstrained decoding reveals consistent but time-varying stages of stimulus processing. Cereb. Cortex 29, 863–874 (2018).

  41. Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).

  42. Linderman, S. et al. Bayesian learning and inference in recurrent switching linear dynamical systems. PMLR 54, 914–922 (2017).

  43. Tao, L., Ozarkar, S., Beck, J. M. & Bhandawat, V. Statistical structure of locomotion and its modulation by odors. eLife 8, e41235 (2019).

  44. Yu, J. Y., Kanai, M. I., Demir, E., Jefferis, G. S. X. E. & Dickson, B. J. Cellular organization of the neural circuit that drives Drosophila courtship behavior. Curr. Biol. 20, 1602–1614 (2010).

  45. Burgos-Robles, A. et al. Amygdala inputs to prefrontal cortex guide behavior amid conflicting cues of reward and punishment. Nat. Neurosci. 20, 824–835 (2017).

  46. Farooqi, I. S. et al. Leptin regulates striatal regions and human eating behavior. Science 317, 1355 (2007).

  47. Zhang, S. X., Rogulja, D. & Crickmore, M. A. Dopaminergic circuitry underlying mating drive. Neuron 91, 168–181 (2016).

  48. Sakai, T. & Ishida, N. Circadian rhythms of female mating activity governed by clock genes in Drosophila. Proc. Natl Acad. Sci. USA 98, 9221–9225 (2001).

  49. Nowicki, S. & Searcy, W. A. Song function and the evolution of female preferences: why birds sing, why brains matter. Ann. NY Acad. Sci. 1016, 704–723 (2004).

  50. Tinbergen, N. Social releasers and the experimental method required for their study. Wilson Bull. 60, 6–51 (1948).

  51. Klapoetke, N. C. et al. Independent optical excitation of distinct neural populations. Nat. Methods 11, 338–346 (2014).

  52. Rabiner, L. R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 77, 257–286 (1989).

Acknowledgements

The authors thank M. Choi, F. Roemschied, R. Fathy and D. Deutsch for assistance with establishing triggered LED stimulation (for optogenetics) in the acoustic behavioral assay chamber. They thank P. Andolfatto, V. Jayaraman, G. Rubin and B. Dickson for flies. They also thank G. Guan for excellent technical assistance and the entire Murthy Lab for feedback on this manuscript. This work was funded by the Simons Foundation SCGB (AWD494712, AWD1004351 and AWD543027 to A.J.C. & J.W.P.), NIH BRAIN R01 (NS104899 to M.M. and J.W.P.) and the Howard Hughes Medical Institute (to M.M.).

Author information

A.J.C. and M.M. designed the study. A.J.C. collected new data (wild-type data analyzed in this study were collected for a previous study24). A.J.C. and J.W.P. designed the models. A.J.C. analyzed the data and generated the figures. A.J.C., J.W.P. and M.M. wrote the paper.

Correspondence to Mala Murthy.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Neuroscience thanks Benjamin de Bivort, Sarah Woolley, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Comparison of GLMs and GLM-HMM.

a, Fly feedback cues used for prediction in Coen et al. (2014) (left) or the current study (right). b, Comparison of model performance using probability correct (‘pCorr’) (see Methods) for predictions from Coen et al. (2014) (reproduced from that paper), the single-state GLM (see Fig. 1c) and the 3-state GLM-HMM (see Fig. 1d). Each open circle represents predictions from one courtship pair. The same pairs were used when calculating the pCorr value for each condition (GLM and 3-state GLM-HMM); filled circles represent mean +/- SD, n=100, shown for visualization purposes. c, Schematic of a standard HMM, which has fixed transition and emission probabilities. d, Schematic of the GLM-HMM in the same format, with static probabilities replaced by dynamic ones. Example filters from the GLM are indicated with the purple and light brown lines.
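One natural reading of the ‘pCorr’ score used in this comparison is the average probability that a model assigns to the song mode actually observed in each time bin; the paper's exact definition lives in its Methods, so the helper below (`p_correct` is a hypothetical name) is only an illustrative sketch under that assumption.

```python
import numpy as np

def p_correct(p_model, y):
    """Mean probability the model assigns to the observed song mode per bin.
    p_model: (T, C) array of per-bin predicted song-mode probabilities.
    y: (T,) array of observed song-mode indices."""
    y = np.asarray(y)
    return float(p_model[np.arange(len(y)), y].mean())

# Toy check: a model that puts probability 0.9 on the true mode in every bin.
p = np.array([[0.9, 0.1],
              [0.1, 0.9]])
y = np.array([0, 1])
score = p_correct(p, y)  # 0.9
```

A chance model that assigns uniform probability over C modes would score 1/C on this measure, which is why improvements over that baseline are meaningful.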

Extended Data Fig. 2 Assessing model predictions.

a, Illustration of how song is binned for model predictions. Song traces (top) are discretized by identifying the most common type of song between two moments in time, allowing for either fine (middle) or coarse (bottom) binning - see Methods. b, Illustration of how model performance is estimated, using one-step-forward predictions (see Methods). c, 3-state GLM-HMM performance at predicting each bin (measured in bits/bin) when song is discretized or binned at different frequencies (60 Hz, 30 Hz, 15 Hz, 5 Hz) and compared to a static HMM - all values normalized to a ‘Chance’ model (see Methods). Each open circle represents predictions from one courtship pair. Note that the performance at 30 Hz represents a re-scaled version of the performance shown in Fig. 1g. Filled circles represent mean +/- SD, n=100. d, Comparison of the 3-state GLM-HMM with a static HMM for specific types of transitions when song is sampled at 30 Hz (in bits/transition, equivalent to bits/bin; compare with panel (c)) - all values normalized to a ‘Chance’ model (see Methods). The HMM is worse than the ‘Chance’ model at predicting transitions. Filled circles represent mean +/- SD, n=100. e, Performance of models when the underlying states used for prediction are estimated ignoring past song mode history (see b) and only using the GLM filters - all values normalized to a ‘Chance’ model (see Methods). The 3-state GLM-HMM significantly improves prediction over ‘Chance’ (p = 6.8e-32, Mann-Whitney U-test) and outperforms all other models. Filled circles represent mean +/- SD, n=100. f, Example output of the GLM-HMM when the underlying states are generated purely from feedback cues (see e).
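The bits/bin normalization described in this caption can be read as the model's mean log2 predictive probability per bin minus that of a ‘Chance’ model; one common choice of chance model, assumed here for illustration (the paper's precise construction is in its Methods), knows only the empirical frequency of each song mode.

```python
import numpy as np

def bits_per_bin(p_model, y):
    """Predictive performance in bits/bin over a frequency-only chance model.
    p_model: (T, C) predicted song-mode probabilities; y: (T,) observed modes."""
    y = np.asarray(y)
    # mean log2 probability the model assigns to each observed bin
    ll_model = np.log2(p_model[np.arange(len(y)), y]).mean()
    # chance model: empirical frequency of each song mode, same for every bin
    freqs = np.bincount(y, minlength=p_model.shape[1]) / len(y)
    ll_chance = np.log2(freqs[y]).mean()
    return ll_model - ll_chance

# Toy data: two modes occurring equally often; the model puts 0.9 on the truth.
y = np.array([0, 1] * 50)
p_model = np.full((100, 2), 0.1)
p_model[np.arange(100), y] = 0.9
gain = bits_per_bin(p_model, y)  # log2(0.9) + 1 bits/bin
```

Positive values mean the model beats chance; a model no better than mode frequencies scores zero, matching how the caption's normalized comparisons behave.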

Extended Data Fig. 3 Evaluating the states of the GLM-HMM.

a-c, The mean value for each feedback cue in the (a) ‘close’, (b) ‘chasing’, or (c) ‘whatever’ state (see Methods for details on z-scoring). d-f, Representative traces of male and female movement trajectories in each state. Male trajectories are in gray and female trajectories in magenta. Arrows indicate fly orientation at the end of 660 ms. g, In the 4-state GLM-HMM, the probability of observing each type of song when the animal is in each state. Filled circles represent individual animals (n=276 animals, small black circles with lines are mean +/- SD). h, The correspondence between the 3-state GLM-HMM and the 4-state GLM-HMM. Shown is the conditional probability of the 3-state model being in the ‘close’, ‘chasing’, or ‘whatever’ state given the state of the 4-state model. i, The mean probability across flies of being in each state of the 4-state model when aligned to absolute time (top) or the time of copulation (bottom). j, Distribution of state dwell times generated from feedback cues; the dwell times are non-exponential on a y-log plot.
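The dwell-time analysis in panel j starts from run lengths of the decoded state sequence; on a log-scaled y axis, an exponential dwell-time distribution (the signature of a fixed-rate Markov process) would appear as a straight line, so curvature indicates cue-driven, non-exponential dwell times. A minimal run-length extractor (`dwell_times` is a hypothetical helper, not from the paper's code) might look like:

```python
import numpy as np

def dwell_times(states):
    """Return (labels, runs): the state of each consecutive visit and how
    many time bins the animal dwelled there before switching."""
    states = np.asarray(states)
    change = np.flatnonzero(np.diff(states)) + 1       # indices where state switches
    bounds = np.concatenate(([0], change, [len(states)]))
    return states[bounds[:-1]], np.diff(bounds)

# Example decoded sequence: two bins of state 0, three of state 1, etc.
labels, runs = dwell_times([0, 0, 1, 1, 1, 0, 2, 2])
```

Histogramming `runs` per state (and plotting on a log y axis) reproduces the kind of dwell-time summary the caption describes.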

Extended Data Fig. 4 State transition filters.

a-c, State-transition filters that predict transitions from one state to another for each feedback cue (see Fig. 1b for the list of all 17 feedback cues used in this study).

Extended Data Fig. 5 Amplitude of output filters.

The amplitude of output filters (see Methods) for each state/output pair. Output filter amplitudes were normalized between 0 (smallest filter amplitude) and 1 (largest filter amplitude).

Extended Data Fig. 6 Output filters.

a-c, Output filters for each feedback cue (see Fig. 1b) that predict the emission of each song type for a given state. ‘No song’ filters are not shown as these are fixed to be constant, and song type filters are in relation to these values (see Methods). Heavy line represents mean, shading represents SEM. d-e, The sign of each emission filter shows that the same feature can be excitatory or inhibitory depending on the state.

Extended Data Fig. 7 Activation of song pathway neurons.

a, Solitary ATR-fed P1a males produce song when exposed to the same LED stimulus used in Fig. 4. In solitary males, song production is both long-lasting and time-locked to the LED stimulus. Number of animals in parentheses (n=5), heavy line represents mean, shading represents SEM. b, ATR-fed P1a males courting a female produce significantly more Pfast (p = 1e-4) and Pslow (p = 1e-3), and significantly different amounts of sine song (p = 0.009). All p-values from Mann-Whitney U-test. Animals from (a) (n=5); center lines of box plots represent the median, and the bottom and top edges represent the 25th and 75th percentiles, respectively. Whiskers extend to +/- 2.7 times the standard deviation. c, The probability of observing each song mode aligned to the opto stimulus shows that LED activation of flies not fed ATR does not increase song production. Number of animals in parentheses, heavy line represents mean, shading represents SEM. d, The probability of the model being in each state aligned to the opto stimulus shows that LED activation of flies not fed ATR does not change state residence. Number of animals in parentheses, heavy line represents mean, shading represents SEM. e-f, ‘Opto’ filters represent the contribution of the LED to the production of each type of song for (e) ATR+ and (f) ATR- flies. Number of animals in parentheses, heavy line represents mean, shading represents SEM. The filters for each strain and song type are not significantly different between states. g, Measuring the maximal change in state probability between LED ON and LED OFF shows that only pIP10 activation produces a significant difference between ATR+ and ATR- flies. Number of animals in parentheses in (e-f); center lines of box plots represent the median, and the bottom and top edges represent the 25th and 75th percentiles, respectively. Whiskers extend to +/- 2.7 times the standard deviation. All p-values from two-tailed t test.

Extended Data Fig. 8 Activating pIP10 biases males toward the close state.

a, Conditioning on which state the animal is in prior to the light being on (left, ATR-fed pIP10 flies, n=41; middle, ATR-free pIP10 flies, n=28; right, ratio of ATR-fed to ATR-free state dwell time), activation of pIP10 results in an increase in the probability of being in the close state unless the animal was already in the close state. Shaded area is SEM. b, Whether the male was close (<5 mm) or far (>8 mm), pIP10 activation increases the probability that the animal will enter the close state. Shaded area is SEM. c, Whether the male was already singing or not singing, pIP10 activation increases the probability that the animal will enter the close state. Shaded area is SEM.

Extended Data Fig. 9 Bias toward close state is due to the altered use of feedback cues.

a, Predictive performance is not significantly different between light ON and light OFF conditions for both ATR-fed (n=41, p=0.08) and ATR-free animals (n=28, p=0.46). Performance suffers without male (+ATR p=4e-12, -ATR p=1e-9) or female feedback cues (+ATR p=1e-10, -ATR p=1e-9), suggesting these state-specific features are needed to predict animal behavior. Dots represent individual flies, center line is the mean and lines are +/- SD. All statistical tests are Mann-Whitney U-tests. b, The similarity between each feedback cue and the filters for the ‘close’ state, minus the similarity of that feedback cue to the filters for the ‘chasing’ state, during LED activation of ATR-fed pIP10 flies. This reveals that song patterning is more similar to the ‘close’ state than to the ‘chasing’ state for most feedback cues. c, Animals that were not fed ATR (n=28) do not show a change in the contribution of the feedback cues to being in a given state, while animals that were fed ATR (n=41) do show a change in feedback cue contribution. Shaded area is SEM. d, Most aspects of the animal trajectory do not differ in response to red light when males are either fed (black, n=41) or not fed (gray, n=28) ATR food. Plotted are the six strongest contributors from (b). p-values (Mann-Whitney U-test) are p=0.056 for mFV, p=0.11 for fmFV, p=9.7e-5 for mLS, p=0.64 for fLS, p=0.67 for fFV and p=0.009 for mfAngle; significance at p = 0.05 corrected to p = 0.0083 by Bonferroni. * represents p < 0.0083, n.s. p > 0.0083. Shaded area is SEM.

Extended Data Fig. 10 The relationship between feedback cues and song patterns changes when LED is ON in pIP10 +ATR flies.

a, The transfer functions of each feedback cue when the LED is OFF (black) and when the LED is ON (red), compared to the wild-type average (dark gray). b-c, Illustration of the Pearson’s correlation between the transfer functions of two feedback cues (mFV (b) and fLS (c)) when the LED is off (top, black) and on (bottom, red) and the transfer function in each state (not the average as in Fig. 4f). The feature is considered most similar to the state with which it has the highest Pearson’s correlation. All ATR-fed pIP10 animals were used (n=41). d, The state transfer function that is closest to the wild-type average for each feedback cue. For instance, the mFV average is closest to the ‘whatever’ state and the fLS average is closest to the ‘chasing’ state. e-f, Same as (d), but for pIP10 -ATR flies when the LED is off (e) or on (f).

Supplementary information

About this article

Cite this article

Calhoun, A.J., Pillow, J.W. & Murthy, M. Unsupervised identification of the internal states that shape natural behavior. Nat Neurosci 22, 2040–2049 (2019) doi:10.1038/s41593-019-0533-x
