Abstract
What are the spatial and temporal scales of brainwide neuronal activity? We used swept, confocally-aligned planar excitation (SCAPE) microscopy to image all cells in a large volume of the brain of adult Drosophila with high spatiotemporal resolution while flies engaged in a variety of spontaneous behaviors. This revealed neural representations of behavior on multiple spatial and temporal scales. The activity of most neurons correlated (or anticorrelated) with running and flailing over timescales that ranged from seconds to a minute. Grooming elicited a weaker global response. Significant residual activity not directly correlated with behavior was high dimensional and reflected the activity of small clusters of spatially organized neurons that may correspond to genetically defined cell types. These clusters participate in the global dynamics, indicating that neural activity reflects a combination of local and broadly distributed components. This suggests that microcircuits with highly specified functions are provided with knowledge of the larger context in which they operate.
Introduction
What are the spatial and temporal scales of activity in the brain? In nematodes, flies, zebrafish, and mice1, the exogenous activation of defined clusters of neurons can drive behavioral sequences, providing a causal link between the activity of small groups of cells and specific behaviors. In Drosophila melanogaster, defined clusters of genetically identified neurons can elicit innate behaviors, including aggression2,3, courtship4,5,6, and egg laying7. The identification of these circuits has suggested a view of the fly brain as a collection of specialized microcircuits. On the other hand, several locomotor behaviors seem to be associated with extensive activity in the fly brain beyond those neurons that are directly involved in the behavior. For example, locomotive behavior in the fly is associated with activity not only in motor circuits8,9,10 but also in primary sensory areas11,12,13,14,15 and downstream sensory structures such as the mushroom body16,17. These are similar to observations in mouse primary sensory cortices18,19,20. Thus, locomotor behaviors are often associated with more extensive patterns of activity than are required to elicit the specific behavior.
Brainwide recording of neural activity in multiple organisms reveals global activity associated with behavior21,22,23,24,25,26,27,28,29,30,31,32,33,34,35 as well as cognitive tasks36,37,38. Recent studies in Drosophila have demonstrated extensive activity throughout most neuropil in the fly brain during running25,26,39,40. Similarly, both calcium imaging and electrophysiological recordings in the mouse have revealed distributed activity correlated with behavior across the cortex30,31,32,34. However, studies that employ neuropil imaging in the fly and widefield imaging in the mouse do not distinguish whether behavior results in the activation of all neurons or the activation of more limited but distributed clusters of neurons.
Why does neural activity extend well beyond those neurons responsible for the behavior? Distributed activity may provide circuits that control specific behaviors with information relevant to the locomotor state of the organism. In this manner, sensory representations may also reflect behavioral state. For example, in visual systems, locomotion enhances gain and elicits activity in area V1 of mice18,20 and in the optic lobe of flies12,14,15. Distributed activity associated with behavior may reflect efference copies that enable the cancellation of self-generated sensory input41. For example, locomotor state in the fly combines with self-generated visual feedback to control posture42. If a majority of neurons are indeed active during behavior, this would imply that the neural ensembles capable of eliciting specific behaviors (e.g., mating, aggression, or egg laying) will also be active during unrelated behaviors. This further implies that the ability of clusters of neurons to elicit specific behaviors must be modulated by behavioral context.
The fly brain offers a unique opportunity to examine the relationship between broadly distributed activity and the activity of spatially localized genetically identified neurons. Analysis of neural activity at both a global and local scale requires that we observe the activity of neurons distributed throughout the brain at sufficient temporal resolution to reveal correlations between neurons. We used SCAPE microscopy43,44 to record activity in a significant fraction of the neurons across a large and contiguous portion of the brain of behaving Drosophila. The principal patterns of neural activity (“flygenvectors”) comprise multiple spatial and temporal scales. We observe that signals related to some but not all behaviors engage the majority of imaged neurons, including genetically defined neurons that control specific behaviors. Moreover, although the activity of most neurons is correlated with current behavior, a significant fraction exhibit activity correlated with behavioral dynamics on longer timescales, perhaps reflecting the animal’s arousal state. The neural activity not explained by behavior is complex and high-dimensional, comprised of a large number of patterns distinguishable from noise. Most of these activity patterns are sparse and spatially organized, suggesting that each dimension corresponds to the localized activity of specific cell types. These groups of cells exhibiting unique local dynamics also participate in the global behavioral state, affording the opportunity for local computations to be state-dependent. Thus, neural activity in the behaving fly reflects the coordination of broadly distributed and spatially localized dynamics, and neurons with highly specified functions are provided with information about the larger behavioral context.
Results
Large-scale functional imaging at single-neuron resolution
We used SCAPE microscopy43,44, a single-objective form of light-sheet microscopy that permits high-speed volumetric imaging, to examine activity across a large volume of the central brain of behaving adult Drosophila. This enabled dual-color imaging of the dorsal third of the central brain in the behaving fly at more than 10 volumes per second with a voxel size of 1.0 × 1.4 × 2.4 μm (See Methods for details), greatly surpassing the spatiotemporal resolution of common methods such as two-photon imaging (Fig. 1a–b, g). We imaged flies expressing the nuclear calcium reporter nls-GCaMP6s and the static nuclear dsRed under control of the panneuronal driver nSyb-Gal4. Nuclear calcium reporters have been shown to be faithful readouts of neural activity45,46; they may preclude seeing fast dynamics and small changes in neural activity but offer the substantial benefit of easily resolving individual neurons. We therefore reasoned that any increase in low-pass filtering introduced by using a nuclear-localized indicator was greatly offset by the advantage of allowing cellular-level spatial resolution. We imaged a parallelepiped-shaped volume spanning the dorsal third of the central brain, achieving single-cell resolution through the majority of this imaged volume (Fig. S1). Kenyon cells were omitted from our analyses because nls-GCaMP6s expression was poor (Fig. S1). On the basis of cell counts from electron microscopy47, we expected to resolve on the order of a few thousand cells (See Supplemental Information). We used the fluorescence of the static red channel to extract on average 1631 ± 109 ROIs per animal. After refinement to exclude ROIs with large motion artifacts, we obtained 1419 ± 78 stable, single-cell ROIs per animal (Methods). By visual inspection, we confirmed that this count contained nearly all neurons within 70 μm of the dorsal surface and a sample of neurons residing at greater depths (Fig. S1).
Broad-scale neural activity is highly correlated with behavior
We examined neural activity while flies behaved freely on a spherical treadmill (Methods). The different behaviors exhibited by the fly were identified by tracking points on the fly’s body with Deep Graph Pose48. We used a semi-supervised approach described in a companion manuscript49 to infer the behavioral states of running, front and back grooming, abdomen bending, and quiescence (Fig. 1c–d, Methods). The average time flies spent in each behavioral state varied considerably (Quiescent: 50%, running: 19%, front grooming: 6%, back grooming: 15%, abdomen bending: 10%, undefined: 0.2%), and different flies exhibited these behaviors with varying frequencies (Fig. 1f). We also imaged the fly without a spherical treadmill, where it primarily exhibited a flailing behavior. When off the ball, flies flailed 12% of the time. On the treadmill, flies performed bouts of running punctuated by either grooming or quiescence. Autocorrelation of the running state decayed on time scales of 1s and 40s (Fig. 1e), because running occurred in bouts that lasted a few seconds but the tendency to run persisted for considerably longer times. The other annotated behaviors exhibited only a single fast correlation time (Fig. S1). Long-timescale changes in the tendency to run suggest that an underlying state, such as arousal, fluctuated over the course of our experiments.
Strikingly, most of the imaged neurons throughout the brain show a pattern of activity that is correlated with running. This is in accord with previous studies demonstrating that most of the neuropil in the fly brain is active when the fly runs25,26,39,40 and demonstrates that these earlier neuropil recordings are not the consequence of a sparse ensemble of active neurons with extensive projections. Rather, running is represented by the vast majority of neurons in the fly brain (Fig. 1h). The mean activity across all the imaged neurons is highly correlated with running smoothed with an exponential filter with a decay time of 6s (r = 0.90, Fig. 1i). This correlation cannot be accounted for by motion artifacts; motion artifacts are negligible after registration, and movement of the brain before registration is not correlated with running (r = 0.02, Fig. S1). Cross-correlation of individual neurons with running is high, and the activity of most neurons follows running with a small lag (Fig. 1j).
Distinct neural populations represent locomotion over different timescales
We fit a regression model to extract the components of neural activity correlated with all identifiable behaviors (running, front and back grooming, abdomen bending, and quiescence). Quiescence was characterized by a lack of movement of all tracked points on the body (Methods). Our tethered preparation prevented flies from exhibiting other behaviors such as proboscis or wing extension. To reflect moments of uncertainty in a fly’s behavioral state, we used the behavioral state probability (Methods, Fig. 1d, bottom) rather than the binary behavioral state in our regression model. The observation that the autocorrelation of running exhibited two decay times (Fig. 1e) suggested that different neurons might be correlated with behavior on different timescales. Therefore, we regressed each neuron’s activity against all behaviors filtered using a different fitted time constant (τi) for each cell (i). We allowed for both potentially causal and acausal relationships between behavior and neural activity using a cell-specific temporal shift (ϕi) of neural activity relative to the annotated behaviors (Methods). We assessed the significance of the fit to each cell by randomly shifting regressors in time (Methods).
Regressing neurons across behaviors and filtering each neuron with its own time constant considerably increased correlations between the activity of individual neurons and the annotated behaviors (Fig. 2a). This model accounted for proportionally more variance in flies that spent more time running (CC = 0.73, Fig. 2b), as expected from the widespread representation of running (Fig. 1h). The majority of neurons are positively correlated with running, although a smaller population show strong negative correlation with running (Fig. 2c). Negatively correlated neurons are highly concentrated in the Pars Intercerebralis (PI) (Fig. 2d–f). This region is comprised of a heterogeneous population of peptidergic neurons with a wide range of functions50. Although many PI neurons are anticorrelated with running, some PI neurons are positively correlated, suggesting that the release of a set of peptides is higher during running while release of others is higher during quiescence. Notably, many of these peptidergic PI neurons project to the same neuropil50, meaning that this biologically meaningful heterogeneity in adjacent neurons would likely be masked in neuropil imaging.
Cells exhibited a remarkably broad range of preferred filter time constants (Fig. 2a, g). 41% of cells had small time constants (τ < 4 seconds), reflecting the similarity of the dynamics of behavior and mean neural activity (Fig. 1e). However, 31% of all cells have τ greater than 20 seconds, and the overall distribution is bimodal (Fig. 2g). Thus, the neural relationship to behavior has two timescales that approximate the timescales of running itself (Figs. 1e and 2g). The median r2 does not decrease as τ increases, indicating that behavior explains a similar fraction of neural activity in cells with small and large behavioral time constants (Fig. 2g). The temporal shifts in the filters were almost always positive and similar to the filter time constants, such that cells with large time constants also had large shifts (Fig. 2h). The locations in the brain of cells with a given behavior time constant exhibit spatial organization (Fig. 2i): some brain regions exhibit predominantly small τ and other regions exhibit large τ. Neurons with large τ cluster in the PI region and in lateral areas on the posterior and anterior surfaces (Fig. 2i). Neurons with small τ are distributed throughout the brain but most concentrated near the midline on the dorso–posterior surface (Fig. 2i). This region is primarily composed of neurons innervating the protocerebral bridge and fan-shaped body of the central complex and descending neurons innervating the ventral nerve cord ("CX, DN”, Fig. 2i, Fig. S2). This is consistent with the observation that neurons in these brain regions are involved in orienting and locomotion51,52.
To explore whether neural activity might be related to aspects of behavior beyond those already considered, we refit the activity of every cell using the spatial coordinates of every tracked body point as a regressor, allowing for a unique behavior time constant (τi) and temporal shift (ϕi) for every cell as before (Methods). On average, the neural variance explained by this ‘markers’ model was higher than that of our original ‘states’ model, as expected given the significantly larger number of regressors (16 marker coordinates versus 4 active states, Fig. S2). However, the fraction of variance explained that exceeded expectation (from temporally shifted regressors; see Methods) was similar for both models (Fig. S2), suggesting that our original model using behavioral states captures a relatively complete and more parsimonious relationship between neural activity and behavior. We therefore focused exclusively on the ‘states’ model in all subsequent analyses.
Brainwide neural activity correlates with vigorous but not subdued behaviors
Do all behaviors engage the entire dorsal brain, or is running unique? Grooming and running are both precise directed behaviors but differ in the number of limbs they engage, whereas flailing and running both engage all limbs. We define behaviors engaging all limbs as ‘vigorous’ and behaviors engaging fewer limbs as ‘subdued’. Most neurons are noticeably less active during grooming than running (Fig. 3a). During front and back grooming, only 3.0% and 2.1% of all cells, respectively, have τ < 4s and a regression weight > 0.02 (Fig. 3b–c). Only 8 cells across all flies were highly correlated with front grooming (CC > 0.5, τ < 4s), and only two flies had multiple such cells (Fig. 3d). In both flies, these cells were near the periphery of the imaged volume, potentially accounting for their absence in other flies. Flies engage in each behavior for different amounts of time, meaning that the variance explained by each behavior in neural data reflects both the duration and the influence of that behavior. Thus, to quantify brainwide influence of each behavior, we normalize the variance explained by each behavior by the total time each fly exhibited that behavior, relative to running. Front and back grooming account for only 18% and 9% as much variance per unit time as running in the neural activity of cells with τ < 4s (Fig. 3e, Fig. S3). Our observation that the dorsal brain is not broadly engaged during grooming is qualitatively in agreement with prior work proposing that small ensembles of cells are responsible for grooming53,54.
We elicit flailing by removing the treadmill from beneath the fly. The representation of flailing is brainwide and qualitatively similar to that of running (Fig. 3f). 59% of neurons with regression weights > 0.02 and τ < 4s during running had equally large regression weights during flailing (Fig. 3g, h). This suggests that global activity does not encode the precise modality of locomotion but rather may encode locomotive vigor or arousal more generally. This is further supported by the observation that unlike grooming, flailing accounts for more variance per unit time than running (218% and 262% for τ > 4s and τ > 20s, respectively; Fig. 3e). Collectively, our results suggest that vigorous behaviors activate global representations, whereas more subdued behaviors such as grooming do not.
Residual neural activity reveals ensembles of neurons with correlated activity
We next examined the nature of the neural activity not accounted for by our regression model, and thus not easily explained by any of the identified behaviors. After large-scale locomotion- and other behavior-related activity has been regressed out, the residual activity exhibits rich dynamics across both space and time (Fig. 4a), with all neurons exhibiting significant residual dynamics across timescales from seconds to minutes (p < 1e-10, Ljung-Box test; see Methods). This residual activity likely includes both activity unrelated to behavior and activity related to behavior in a manner more complex than the regression model permits. For example, the residual activity of some cells appears to include dynamics related to transitions between states (Fig. 4a–b). We examined this by comparing neural activity preceding a state transition to activity earlier in a bout of a given behavior. On average, transitions from quiescence to running were preceded by a slight increase in residual neural activity (Fig. S4). However, we did not find evidence for a subpopulation of neurons that reliably encode state transitions; any neuron is equally likely to exhibit a large response during the transition from one behavior bout to another (Fig. S4). For simplicity, hereafter we refer to activity accounted for and unaccounted for by the regression model as behavior-related and residual, respectively. On average, the fraction of variance explained by behavior (mean r2 = 0.39) is similar in magnitude to that of the residual dynamics (1−r2). These residual dynamics include neurons that are highly active during running (Fig. 4b, red), and the variance explained by the leading residual PCs was only negligibly correlated with the variance explained by behavior (Fig. S4). This implies that behavior-related and residual activity coexist in the same population of neurons.
We examined the structure of residual activity by performing a principal component analysis (PCA). On average, the first 10 modes explain 62% of the residual variance, and subsequent modes each account for no more than 2% of the variance (Fig. 4c). We quantified the dimensionality of this residual activity as the number of PCA modes that maximize the log-likelihood on held-out data. Higher-order PCA modes that do not improve the log-likelihood are not predictive of held-out data and therefore are defined as noise. Surprisingly, many modes can be distinguished from noise (41.5 ± 4.6 modes, Fig. 4d, Methods), despite the fact that many of these modes account for very little variance. These PCA modes are very sparse, in some cases involving as few as 4 neurons (Fig. 4e, Methods). The average sparseness of the first two modes is 1.3%, meaning that a typical mode involves 18 neurons (Fig. 4f). Thus, modes that explain a small fraction of the total variance nevertheless describe reliable patterns present in neural activity. Counterintuitively, dominant modes are sparser than less dominant modes (Fig. 4f). This suggests that the most reliable patterns in the data tend to contain fewer neurons.
Each PCA mode is sparse and therefore dominated by the activity of a small group of neurons with idiosyncratic yet similar dynamics (Fig. S4). These modes show spatial organization; for example, small groups of bilaterally symmetric neurons dominate the largest PCA modes (Fig. 4g–i). These modes are similar across flies, although there is variability in which mode explains the most variance in a given fly (Fig. 4g–i). To quantify this spatial organization, we first approximate each mode as a binary pattern in which only large outliers in the original mode are set to 1 (Methods). We then analyzed the spatial organization by calculating the distance between nonzero cells in the binary pattern after superimposing the left and right hemisphere by reflecting at the midline. Across all flies, modes were more spatially organized than expected by chance (Fig. 4j). The dominant modes identified by this analysis correspond to ensembles of ~20 cells that may comprise functional units. The ensembles often display symmetry across hemispheres. Each functional group is likely to be made up of multiple clusters with even smaller numbers of neurons, perhaps corresponding to specific cell types.
Residual activity is similar in running and quiescent states
What is the relationship between global behavior-related activity and the sparser residual patterns of activity? One possibility is that residual dynamics could depend on behavioral state so that, for example, a particular residual dynamic pattern only appears during running (Fig. 5a, model 1). Alternatively, residual dynamics could be present in different forms in each of the multiple behavioral states (Fig. 5a, model 2). Finally, residual activity could be independent of behavioral state, and therefore similar, for example, in the running and the quiescent states (Fig. 5a, model 3). We find that the third of these possibilities most accurately accounts for our data; residual activity shows no obvious relationship to behavioral state (Fig. 4a).
We examined the residual neural activity during a behavioral state (a “subspace”) and compared the subspaces of the running and quiescent states. The amount of variance explained by each mode appeared virtually identical in the two states (Fig. 5b). The dimensionality of these two subspaces is qualitatively similar, but on average the quiescent state is higher dimensional (37.9 ± 6.1) than the running state (20.5 ± 2.0, Fig. 5c). This implies that the running and quiescent states are both complex.
We next asked whether the residual activity during the running and quiescent states is not only similar in complexity but also contains similar dynamics. We therefore determined whether the PCA modes defined in one state explain appreciable variance in the other state. PCA modes defined by activity during the quiescent state explain approximately 75% as much variance in the running state, and PCA modes of the running state explain approximately 75% as much variance in the quiescent state (Fig. 5d). This implies that the subspaces occupied by the dynamics in each state are highly overlapping. Furthermore, the dimensionality of this overlap is similar to the dimensionality of the activity (Quiescent-to-Running = 22.6 ± 5.1, Running-to-Quiescent = 20.8 ± 2.2, Fig. 5e). Moreover, projections of the residual dynamics from both states onto the first two modes of the running state are highly intermingled (Fig. 5f, also see Fig. S5). Collectively, these results indicate that the temporal and spatial structure of the residual activity is similar in the running and quiescent states.
PCA identifies patterns in the correlations across the full population of neurons. To look for state-dependent effects in small groups of cells, we compared correlations between the residual activity of all pairs of cells in the quiescent and running states. These pairwise correlations are similar across states, with no large outliers (Fig. 5g, h). Thus, behavioral state and the global pattern of activity associated with it appear to have only a modest effect on the structure of residual activity, both for the residual dynamics of large populations of neurons and for the residual correlations between all pairs of neurons (Fig. 5g, h). Behavioral state and residual dynamics therefore appear remarkably independent (Fig. 5a, model 3).
Cluster analysis reveals spatially segregated groups of neurons with correlated activity
PCA revealed ensembles of spatially organized and functionally related neurons in the residual activity. We identified smaller clusters of correlated neurons by performing hierarchical clustering analysis on the residual activity (Fig. 6a). This procedure builds a tree of similarity between the activity patterns of all cells, where at each branch point the ‘children’ describe potentially meaningful subsets of a given ‘parent’. To look for structure in the data at all spatial scales without defining arbitrary parameters for the number of expected clusters, we identified significant clusters using cross-validation (Methods). Specifically, we determined whether the variance of each child cluster was significantly smaller than the variance of random samples of the same size extracted from the parent cluster (Methods). In this way, we determined whether a given small group of neurons defined a cluster unique from other members of the parent cluster. Both a child and its parent cluster can be significant, and therefore neurons may participate in dynamics organized on multiple spatial scales.
Figure 6a shows the full clustering tree for one fly, with each branch colored according to whether the parent was a significant cluster (not significant in black, all other colors significant). We observed significant clusters of many sizes, including one cluster comprised of more than half of all neurons but also many clusters comprised of only two neurons (Fig. 6a–b). We next asked whether significant clusters are spatially organized. A subset of Pars Intercerebralis neurons located near the midline form a spatially compact cluster that is identifiable across flies (Fig. 6c–d, white). Significant clusters that share a parent with the Pars Intercerebralis cluster are predominantly in posterolateral regions (Fig. 6c–d, yellow). Thus, there is spatial organization and stereotypy at multiple spatial scales. The full distribution of sizes for all significant clusters (Fig. 6e) reveals a large number of significant clusters with 2 members. These clusters exhibit diverse residual dynamics, but each cluster consists of pairs of cells with similar dynamics (Fig. 6g). Despite these clusters being defined by residual dynamics, neurons in the same cluster have a similar relationship to global activity and behavior (Fig. S6). As a population, cells within a cluster exhibit a distribution of behavioral time constants and correlations indistinguishable from the distributions across all cells (Fig. 6f, S6). Thus, clusters are highly diverse and participate in the global behavioral state.
By visual inspection, many small clusters appear to be either bilaterally symmetric or spatially localized (Fig. 6h). To quantify this observation, we analyzed the spatial organization by calculating the distance between cells in a cluster after superimposing the left and right hemisphere by folding at the midline (Methods, Fig. S6). Most clusters with two members were more spatially organized than expected by chance (Fig. 6i, S6). The presence of small clusters that are predictive of both activity patterns and spatial location is consistent with the association of cluster identity with function—cells with similar dynamics and similar function are likely to be in similar locations. These observations suggest that the fly brain is composed of many small subpopulations that collectively account for the high dimensionality of the brainwide data. Two-member clusters are embedded in larger ensembles of neurons, implying that the functional relationship between neurons is hierarchical. This is consistent with known classes of cells in the fly brain—for example, Kenyon cells can be subdivided into α/β, α′/β′, and γ subclasses; similarly, dopaminergic neurons can be divided into subclasses such as the PPL1 cluster, which can itself be further divided into single identified neurons that innervate distinct mushroom body compartments55.
Our functional profiling of the brain offers a novel and complementary method of identifying cell types throughout the brain. The vast majority of cells in the central brain can be transcriptionally characterized as consisting of a few thousand distinct cell types that come in clusters of 1-10 neurons per hemibrain56. Histograms of the number of cells within each cell type from genetic and connectomic cell-typing56 show an exponential shape similar to that revealed by our activity-based analysis (Fig. 6e). Thus, the smallest spatially organized subpopulations we identified functionally may correspond to genetically defined cell types.
Egg-laying command neurons correlate with running
In the fly, small identified circuits that control specific behaviors have been elucidated. Our observation that most neurons in the fly brain are active during running and flailing suggests that neurons engaged in specific behaviors, such as mating, aggression, or egg-laying, are also active during spontaneous running. To test this, we asked whether the recently identified oviDN egg-laying command neurons7 are active during locomotion. We imaged flies expressing the nuclear calcium reporter nls-GCaMP6s and the static nuclear dsRed under control of the split-GAL4 oviDN-SS17, which cleanly labels two of the three oviDN neurons in each hemisphere (Fig. 7a). We did not observe egg-laying behavior while flies were on the ball and thus, as expected, oviDN neurons exhibited little activity while flies were in the quiescent state. Neural activity was reliably higher during bouts of running (Fig. 7b), and running accounted for substantially more variance in the neural data than expected by chance (p < 0.05), consistent with previous work57. As observed in our panneuronal imaging data, total variance explained was highly correlated with time spent running (Fig. 7c), suggesting that heterogeneity in the neural data is accounted for by heterogeneity in the behavior. Thus, as predicted by our panneuronal data, neurons with highly specified function are provided with knowledge of the larger context in which they operate. This knowledge is reflected in the activation of egg-laying neurons, and therefore gating mechanisms are required to ensure that behaviors occur at the right time and place.
Genetically defined subpopulations of PI neurons are inversely correlated with running
Clusters of cells with similar activity may correspond to genetically defined cell types in the fly brain. To explore this, we focused on cell types within the PI region. Panneuronal imaging revealed neurons in this region anticorrelated with running, in sharp distinction to the majority of imaged neurons (Fig. 2d, e). We examined the activity of two peptidergic cell types within PI, Dilp and Dh44, the latter a subset of the former58. Consistent with expectation from panneuronal imaging, many Dilp and Dh44 neurons showed an inverse relationship with running (Fig. 7d–f). Indeed, analysis of the distribution of running correlations observed in different parts of the brain confirmed that Dilp and Dh44 exhibit running correlations that one would only expect to find in PI (Fig. S7). Thus, these cell types are likely to correspond to unique clusters of neurons we identified in PI with panneuronal imaging.
Discussion
We used SCAPE microscopy to record from a large volume of the dorsal brain with cellular resolution, complementing large-scale studies of neuropil regions in the fly brain25,26,27,36,39,40,59. To achieve cellular resolution, we used nuclear calcium as an indicator of neural activity. Trafficking of calcium into the nucleus is regulated by neural activity and influences gene expression60. SCAPE imaging permitted us to record from all neurons in a contiguous and large brain volume at high speed, providing an extensive picture of the neural correlates of behavior with cellular resolution. When placed on a ball, flies run, groom, or are quiescent. When suspended, flies often flail. Running and flailing engage a large fraction of the neurons in the imaged volume. A much smaller fraction of the neurons exhibit activity correlated with grooming. These behaviors unfolded over seconds and minutes (Fig. 1e), giving us the opportunity to resolve the neural correlates of these timescales. A regression model reveals neural activity correlated with running on both short and long timescales. This suggests that most neurons are correlated with the act of running, and a significant fraction are correlated with the tendency to run. Moreover, cells with a given behavioral time constant are spatially organized, in some cases aligning with areas known to be involved in metabolism or locomotion. For example, a region we observed to have activity most highly correlated with behavior aligned with boundaries of specific cell types innervating the central complex (Fig. 2i, S2). More generally, the identity of neurons in each functionally defined region is unknown but can be loosely constrained by cell body locations in existing anatomy databases47,61.
Subtracting the dominant activity correlated with behavior reveals additional rich dynamics across time and space. This residual activity likely includes both activity unrelated to the exhibited behaviors as well as activity related to behavior but in a manner more complex than our regression model permits. Interestingly, this activity shows little dependence on locomotive state: residual activity exhibits similar spatiotemporal patterns in running and quiescent states. Thus, local computations appear to be superimposed upon a global behavioral state but not strongly state-dependent. This is similar to the observations that behavior-related activity is widespread but orthogonal to other dynamics in dorsal cortex of the mouse31, and that preparatory and muscle-related activity are orthogonal to one another in the primary motor cortex of the monkey62.
Neural activity not accounted for by behavior is high dimensional and sparse. Hierarchical clustering reveals small groups of neurons with highly correlated activity, at the extreme comprised of only 2 cells. These functionally defined clusters may correspond to genetically defined cell types in the fly brain. Consistent with this expectation, genetically defined cell types can account for the clusters we observed with panneuronal imaging in the Pars Intercerebralis (Fig. 7d–f). These small circuits do not operate in isolation. Clusters defined by the residual activity also participate in the global behavior-related dynamics. Thus, global patterns may inform local computation and in turn, local computations may influence global patterns.
The global scale of neural activity correlated with locomotion in flies is consistent with findings in worms21,22,33, zebrafish23,24,35 and mice30,31,32,34. Studies in flies25,26,27,28,36,39,40 and those in other organisms pose the question of the mechanism and function of broadly distributed brainwide activity. In the fly, small identified circuits that control specific behaviors have been elucidated. However, we have shown that most neurons in the fly brain are active during running and flailing, either as actors or observers. This suggests that neurons engaged in specific behaviors, such as mating, aggression, or even egg laying, are also active during spontaneous running, without the act of running triggering these other behaviors. Indeed, we find that egg-laying command neurons7 increase their activity during running without eliciting egg-laying. Downstream circuits must therefore be gated by behavioral state.
The brainwide behavioral state could arise from a variety of sources. For example, global activity could arise from widespread neuromodulation. Alternatively, the recurrent connectivity of the fly nervous system could provide a pathway for this global activity. One of the most plausible sources is the extensive afferent input to the brain from the ventral nerve cord—~2500 neurons originating in the ventral nerve cord project diffusely to the central brain52. Subsets of these neurons have recently been shown to encode behavioral states63.
We observe a small but substantial fraction of neurons that correlate with locomotion on timescales longer than the duration of individual running bouts. These neurons may represent a locomotor state, the tendency to run. Many of these neurons reside in large posterolateral clusters and in the dorsomedial Pars Intercerebralis. The PI is a predominantly peptidergic domain, and neurons in this region are poised to have influence over extended durations50. Recent work has implicated a relationship between brainwide behavior-related activity and metabolism27. Our observation that neurons involved in regulating metabolism are also modulated by running, albeit in a manner distinct from most other neurons, suggests that the causality of this relationship may be bidirectional.
Why does locomotor behavior have privileged access to virtually all neurons in the fly brain? Neurons in multiple neural pathways would likely benefit from knowledge of current behavior64. This activity may modulate ongoing behavior, recapitulate past actions, or even predict future behavioral actions. In artificial intelligence, the utility of proprioceptive feedback to higher-order networks has been demonstrated—in artificial agents trained to solve a variety of tasks, subnetworks charged with representing abstract quantities such as value benefit from knowledge of the agent’s behavior65,66. Interestingly, artificial neurons in such subnetworks also tend to have activity correlated with the behavior itself65. Therefore, locomotor state may provide a useful behavioral context for other computations throughout the brain, and it is perhaps not surprising that locomotion elicits the most prominent brainwide activity. In short, it is good to know what you are doing.
Methods
Genetics and fly rearing
We imaged female 4–7 day-old flies of the following genotype: w/+; UAS-nls-GCaMP6s/+; nSyb-Gal4/UAS-nls-DsRed. UAS-nls-GCaMP6s was a gift from Barry Dickson. We imaged egg-laying command neurons using the split-GAL4 oviDN-SS17. We imaged Dilp neurons using Dilp5-GAL4, and Dh44 neurons using Dh44-GAL4.
SCAPE light-sheet imaging
Imaging was performed on a SCAPE 2.0 system44. In brief, the laser sheet was directed through an upright mounted 20x/1.0NA water immersion objective. Emitted light from the sample was separated into two channels by an image splitter outfitted with two dichroic filters, and the detected red and green channels were recorded side-by-side on the camera chip. The imaging speed for these experiments was between 8 and 12 volumes per second, typically covering a volume of ~450 × 340 × 150 μm3. Scan speed was set according to the size of the visible brain volume, which varied across flies. In raw data, the voxel size along two dimensions is isotropic and defined by the camera chip, while voxel size in the third dimension is the step size of scanning. Here, the scan dimension was anterior to posterior. Because the light-sheet accesses the brain from an oblique angle, we orthogonalize the coordinate system before further processing, resulting in a typical voxel size of 1.0 × 1.4 × 2.4 μm.
Mount and preparation
We mounted flies to a customized holder consisting of a 3D-printed holder and a laser-cut stainless-steel headplate. We use a spherical treadmill similar to prior designs67. We monitor the behavior of the fly at 70 Hz using a Basler acA780 camera outfitted with a VZM-450i lens (Edmund Optics) and a near-IR longpass filter (Midwest Optical LP780-22.5, Graftek Imaging), with illumination provided by 750 nm LEDs. Depictions of the preparation (Fig. 1a, b) were made in BioRender.
Our mounting and dissection procedure was very similar to prior work67 but with a larger dissected window to accommodate SCAPE (Fig. 1b); all dissections that opened up a window similar to Fig. 1b without damaging the brain were deemed successful. After dissection, flies were tested for robust behavior on the spherical treadmill - we defined robust behavior as exhibiting bouts of walking totaling at least one minute in a 5-minute span. All flies that exhibited robust behavior post-dissection were imaged. Most flies that passed these criteria continued to exhibit robust behavior for many minutes, but we only analyzed data from flies that exhibited bouts of walking totaling at least one minute in the first five minutes of imaging. Imaging continued for up to 30 minutes, terminating when a fly no longer exhibited bouts of walking. The mean experiment duration over all flies included in the analysis was 18.1 minutes.
Motion correction
To perform image registration of our volumetric imaging dataset, we used the NoRMCorre algorithm68 augmented with an annealing procedure in which the grid size and the range of permitted local displacements gradually decrease with each iteration. At each step, we computed displacements using the activity-independent DsRed channel and applied the inferred displacements to the GCaMP channel.
Source extraction and deconvolution
ROIs are defined using watershed segmentation applied to the red channel of a temporally averaged volume, resulting in 1631 ± 109 ROIs per animal. After motion correction, most cells have negligible residual motion, but in some data sets a small fraction of cells have motion that is too nonlinear to be addressed with NoRMCorre. To quantify residual motion and eliminate non-stationary cells, we compute the squared coefficient of variation, CV2 = Var[ΔF/F]/Mean[F]2 from the red channel. Most ROIs (>95%) have CV2 << 1, while some have CV2 >> 1 and are discarded. No cells exhibit CV2 ≈ 1 (Fig. S1). This refinement of ROIs yields 1419 ± 78 stable, single-cell ROIs per animal.
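A minimal sketch of this ROI-refinement step, assuming the red-channel traces are stored as a (n_rois, n_timepoints) NumPy array; the cutoff of 1 is illustrative, exploiting the fact that stable ROIs have CV2 << 1 while motion-corrupted ROIs have CV2 >> 1:

```python
import numpy as np

def filter_stable_rois(red_traces, cv2_threshold=1.0):
    """Discard ROIs whose red-channel fluctuations indicate residual motion.

    red_traces : array, shape (n_rois, n_timepoints)
        Time series of the activity-independent DsRed channel for each ROI.
    cv2_threshold : float
        Illustrative cutoff; stable ROIs have CV^2 << 1 and motion-corrupted
        ROIs have CV^2 >> 1, so any threshold near 1 separates the two groups.
    """
    mean_f = red_traces.mean(axis=1)
    # Squared coefficient of variation of the static red channel.
    cv2 = red_traces.var(axis=1) / mean_f**2
    keep = cv2 < cv2_threshold
    return keep, cv2

# Example with synthetic red-channel traces for 1600 ROIs
rng = np.random.default_rng(0)
red = 100 + 2 * rng.standard_normal((1600, 5000))
keep, cv2 = filter_stable_rois(red)
print(f"kept {keep.sum()} / {len(keep)} ROIs")
```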
Although this procedure typically reduces motion artifacts to less than 1 voxel for most cells, we further minimize the impact of residual motion by defining the activity of each cell as the ratio of green and red, F = green/red. We then define baseline ratiometric fluorescence, F0 as the best-fit exponential using least absolute deviation (LAD) regression applied to the derivative of F, dFt = Ft+1 − Ft. Specifically, for each cell, \(\hat{a},\hat{b}={{{\mbox{argmin}}}}_{a,b}{\sum }_{t}| d{F}_{t}-d{F}_{0}(t,a,b)|\), where \(d{F}_{0}(t,a,b)=-(a/b)\exp [-t/b]\). We then define ΔF/F = (F − F0)/F0, where \({F}_{0}=m+a\exp (-t/b)\) and \(m=\min [F]\). LAD regression confers robustness to outliers, and working with the derivative of F confers robustness to long-timescale nonstationarity. We find similar but slightly noisier activity using simple ΔF/F defined on the green channel alone.
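The baseline-fitting procedure might be sketched as follows for a single cell, with time measured in frames as in the definitions above; the optimizer and the initial guesses are illustrative choices, not the original implementation:

```python
import numpy as np
from scipy.optimize import minimize

def dff_exponential_baseline(F):
    """dF/F for one cell with an exponential baseline fit by least absolute
    deviation (LAD) to the derivative of the ratiometric trace F = green/red.
    Time is in frames, matching dF_t = F_{t+1} - F_t in the text."""
    t = np.arange(len(F), dtype=float)
    dF = np.diff(F)                               # dF_t = F_{t+1} - F_t

    def lad_loss(params):
        a, b = params
        b = max(b, 1.0)                           # keep decay time positive
        dF0 = -(a / b) * np.exp(-t[:-1] / b)      # derivative of a*exp(-t/b)
        return np.abs(dF - dF0).sum()

    # Initial guess: amplitude from the early-minus-late drop, decay ~ T/4
    n = max(len(F) // 10, 1)
    a0 = F[:n].mean() - F[-n:].mean()
    a_hat, b_hat = minimize(lad_loss, x0=[a0, len(F) / 4.0],
                            method="Nelder-Mead").x
    b_hat = max(b_hat, 1.0)

    F0 = F.min() + a_hat * np.exp(-t / b_hat)     # F0 = m + a*exp(-t/b), m = min[F]
    return (F - F0) / F0
```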
Anatomical alignment across animals
We create a standardized reference frame by coarsely aligning cell locations across flies. Treating every cell as a point, we align the point sets for each brain to a common reference volume using the Gaussian mixture model method developed here: https://github.com/bing-jian/gmmreg.
Analysis of behavior
We monitor the movement of the spherical treadmill by measuring the total pixel variance between successive frames from the region containing the ball. This unitless estimate of motion aided the behavior segmentation described below. In some datasets, the spherical treadmill was removed after 10 minutes of imaging. Here, we measured pixel variance in an ROI around the fly’s legs, which provided a measure of behavior we called flailing, consisting of bouts of rapid leg movements.
We analyze fly behavior both by directly tracking motion of the treadmill (described above) and by tracking eight points on the body of the fly using Deep Graph Pose48 (DGP; Fig. 1c). We hand-labeled the eight selected points in 1771 frames from 26 videos (50–137 frames per video) using the DeepLabCut (DLC)69 GUI. We then trained DGP on these frames, which augments the supervised loss of DLC with a semi-supervised loss that incorporates additional, unlabeled frames; we found that this significantly improved the pose estimation, even after post hoc smoothing of the DLC markers.
We further segment discrete behaviors from the DGP markers using a semi-supervised sequence model49. We chose to label five salient behaviors commonly observed across all flies: running, front and back grooming, abdomen bending, and a quiescent state. We labeled up to 1000 frames for each of the five behaviors for each of 20 flies using the DeepEthogram GUI70, resulting in a total of 33,756 hand labels (quiescent = 6250, run = 4950, front groom = 5700, back groom = 5480, abdomen bend = 11,376). We supplemented this small, high-quality set of hand labels with a large, lower-quality set of “weak” labels computed using a simple set of heuristics (see details below).
Semi-supervised behavioral segmentation
We train a semi-supervised behavioral segmentation model that classifies the DGP markers into one of the five available behavior classes for each time point. The model’s loss function contains three terms: (1) a standard supervised loss that classifies a sparse set of hand labels; (2) a weakly supervised loss that classifies a set of easy-to-compute heuristic labels; and (3) a self-supervised loss that predicts the evolution of the DGP markers. Let xt denote the DGP markers at time t, and let yt denote the one-hot vector encoding the hand labels at time t such that the kth entry is 1 if behavior k is present, else the entry is 0. We assume that the hand labels are only defined on a subset of time points \(\mathcal{T}\subseteq\{1,2,\ldots,T\}\). The cross-entropy loss function then defines the supervised objective (\(\mathcal{L}_{\text{super}}\)) to optimize:

\[\mathcal{L}_{\text{super}} = -\sum_{t\in\mathcal{T}}\sum_{k} y_{tk}\,\log f_k(\mathbf{x}_t)\]
where f() denotes the sequence model mapping the DGP markers to behavior labels. We now introduce a set of heuristic labels \(\widetilde{\mathbf{y}}_t\), defined at each time point. Computing the cross-entropy loss on all time points that do not already have a corresponding hand label defines the heuristic objective:

\[\mathcal{L}_{\text{heur}} = -\sum_{t\notin\mathcal{T}}\sum_{k} \widetilde{y}_{tk}\,\log f_k(\mathbf{x}_t)\]
The self-supervised loss requires the sequence model to predict xt+1 from xt. To do so, we expand the definition of the sequence model f() to include two components: an encoder e(), which maps the behavioral features xt to an intermediate behavioral embedding zt; and a linear classifier c(), which maps zt to the predicted labels (\(\hat{\mathbf{y}}_t = c(e(\mathbf{x}_t))\)). We incorporate the self-supervised loss through a predictor function p(), which maps zt to xt+1; we match the prediction p(e(xt)) to the true behavioral features xt+1 through a mean square error loss \(\mathcal{L}_{\text{MSE}}\) computed on all time points:

\[\mathcal{L}_{\text{MSE}} = \sum_{t}\left\| p(e(\mathbf{x}_t)) - \mathbf{x}_{t+1}\right\|^2\]
Finally, we combine all terms into the full semi-supervised loss function:

\[\mathcal{L} = \lambda_s\,\mathcal{L}_{\text{super}} + \lambda_h\,\mathcal{L}_{\text{heur}} + \lambda_p\,\mathcal{L}_{\text{MSE}}\]
where the λ terms are hyperparameters that control the contributions of their respective losses. Note that setting λh = λp = 0 results in a fully supervised model, while λs = λh = 0 results in a fully unsupervised model.
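As an illustration of how the three terms combine, here is a minimal PyTorch-style sketch; the tensor names, shapes, and masking scheme are assumptions made for exposition and do not reproduce the daart implementation:

```python
import torch
import torch.nn.functional as F

def semisupervised_loss(logits, x_pred, x_next, hand_labels, heur_labels,
                        hand_mask, heur_mask, lam_s=1.0, lam_h=1.0, lam_p=1.0):
    """Sketch of the three-term objective, assuming:
    logits      (T, K)  classifier outputs c(e(x_t)) for K behaviors
    x_pred      (T, D)  predictor outputs p(e(x_t)); targets are x_{t+1}
    x_next      (T, D)  DGP markers shifted forward by one frame
    hand_labels (T,)    integer hand labels (arbitrary value where undefined)
    heur_labels (T,)    integer heuristic labels (arbitrary value where undefined)
    hand_mask / heur_mask (T,) boolean masks of labeled time points
    """
    # (1) supervised cross-entropy on hand-labeled frames
    l_super = F.cross_entropy(logits[hand_mask], hand_labels[hand_mask])
    # (2) weakly supervised cross-entropy on heuristically labeled frames
    #     that do not already carry a hand label
    heur_only = heur_mask & ~hand_mask
    l_heur = F.cross_entropy(logits[heur_only], heur_labels[heur_only])
    # (3) self-supervised prediction of the next marker frame
    l_mse = F.mse_loss(x_pred, x_next)
    return lam_s * l_super + lam_h * l_heur + lam_p * l_mse
```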
For the encoder and predictor networks e() and p() we use a dilated Temporal Convolutional Network (dTCN)71, which has shown good performance across a range of sequence modeling tasks. Both networks use a two-layer dTCN with a filter size of 9 time steps and 32 channels for each layer, with leaky ReLU activation functions, and weight dropout with probability p = 0.1. We use 10 fly videos for training and 10 for testing. All models are trained with the Adam optimizer using an initial learning rate of 1e-4 and a batch size of 2000 time points. For the training flies, 80% of frames are used for training, 20% for validation. Training terminates once the loss on validation data begins to rise for 20 consecutive epochs; the epoch with the lowest validation loss is used for testing. To evaluate the models, we compute the F1 score (the harmonic mean of precision and recall) on the hand labels of the 10 held-out test flies. We average the F1 score over all behaviors and choose the hyperparameters λh and λp based on the highest score. We then retrain the model with those hyperparameter settings using all 20 flies to arrive at our final segmentation model. We also performed a small hyperparameter search across the number of layers, channels per layer, filter size, and learning rate, and found that our results are robust across different settings (data not shown).
To construct an ethogram of behavioral state, we use the argmax of the predicted behavioral state labels \(\hat{\mathbf{y}}_t\) at every time point. Time points in which \(\max[\hat{\mathbf{y}}_t] < 0.75\) are labeled as “undefined” in the ethogram.
Heuristic labels
The addition of a large set of easily computed heuristic labels improves the accuracy of the behavioral segmentation49. Below, we provide more detail on these heuristics. Note that we choose conservative values for the thresholds in order to decrease the prevalence of false positives. A consequence of this choice is that some time points are not assigned a heuristic label; nevertheless this procedure adds enough high-quality information to substantially improve the models.
Run. We first estimate the time points at which a fly is running by utilizing the treadmill motion energy (ME). We transform the treadmill ME to lie in the range [0, 1], then assign the ‘run’ label to time points when the treadmill ME is above a threshold (0.5).
Quiescent. We compute the average ME over all DGP markers for each time point, then denoise this one-dimensional signal with a total variation smoother (the denoise_tv_chambolle filter from the scikit-image72 Python package). We then transform this signal to approximately lie in the range [0, 1] (the 99th percentile is mapped to 1 in order to make this process robust to outliers). We assign the ‘quiescent’ label to time points when this signal is below a threshold (0.02) and the fly is not running (according to the previous heuristic).
Abdomen bend. We compute the average ME over the abdomen markers, then denoise this signal and transform it to approximately lie in the range [0, 1]. We assign the ‘abdomen bend’ label to time points when this signal is above a threshold (0.9) and the fly is not still or running according to the previous heuristics.
Front and back groom. We compute the average ME over the forelimb markers, then denoise this signal and transform it to approximately lie in the range [0, 1]. We assign the ‘front groom’ label to time points when this signal is above a threshold (0.05), the corresponding back groom signal (computed from the hindlimb markers) is below a threshold (0.02), and the fly is not still, running, or bending its abdomen according to the previous heuristics. We assign the ‘back groom’ label in an analogous manner.
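A compact sketch of these heuristics, assuming precomputed motion-energy (ME) traces for the treadmill, all markers, the abdomen markers, and the fore- and hindlimb markers; the integer label encoding and the max-normalization of the treadmill ME are illustrative assumptions, while the thresholds are those quoted above:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def normalize_me(signal):
    """Denoise a motion-energy trace and map it approximately onto [0, 1],
    using the 99th percentile as 1 to stay robust to outliers."""
    s = denoise_tv_chambolle(signal)
    return s / np.percentile(s, 99)

def heuristic_labels(treadmill_me, marker_me, abdomen_me, fore_me, hind_me):
    """Assign coarse 'weak' labels from ME traces. Time points matching no
    rule are left unlabeled (-1); labels 0-4 = run, quiescent, abdomen bend,
    front groom, back groom."""
    T = len(treadmill_me)
    labels = np.full(T, -1)
    tread = treadmill_me / np.max(treadmill_me)       # treadmill ME in [0, 1]
    body, abd = normalize_me(marker_me), normalize_me(abdomen_me)
    fore, hind = normalize_me(fore_me), normalize_me(hind_me)

    run = tread > 0.5
    quiet = (body < 0.02) & ~run
    bend = (abd > 0.9) & ~run & ~quiet
    front = (fore > 0.05) & (hind < 0.02) & ~run & ~quiet & ~bend
    back = (hind > 0.05) & (fore < 0.02) & ~run & ~quiet & ~bend

    for k, mask in enumerate([run, quiet, bend, front, back]):
        labels[mask] = k
    return labels
```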
Regression model
We regressed each neuron’s activity against all behavioral states (B={running, front grooming, back grooming, flailing}) filtered using a fitted time constant (τi) and temporal shift (ϕi) unique for each cell (i). To reflect moments of uncertainty in a fly’s behavioral state, we used the behavioral state probabilities (\({\hat{y}}_{bt}\)) rather than the binary behavioral states (\(\mathop{{{{{{{{\rm{argmax}}}}}}}}}\limits_{b}[{\hat{y}}_{bt}]\)) as regressors. Thus, we model the activity f of cell i at time t as
We fit all parameters simultaneously using Sequential Least Squares Quadratic Programming. The γ coefficients describe the relative importance of each behavior in accounting for the activity of each cell, while the α coefficients capture drift independent of behavior. The convolution kernel is \({\kappa }_{{\tau }_{i}{\phi }_{i}}={(2{\tau }_{i})}^{-1}\exp [-(| t-\phi | )/{\tau }_{i}]\). This symmetric kernel avoids presuming a causal direction between behavior and neural activity. A cell with a broad kernel should have ∣ϕ∣ ≥ τ, with the sign of ϕ determining the direction of potential causality (neural activity that precedes behavior may or may not be causal to the behavior, but neural activity that follows behavior cannot be causal). A lag of ∣ϕ∣ ≈ τ should not be interpreted as a true lag, but rather a reflection of putative causality with smoothness constraints.
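A simplified per-cell version of this fit might look as follows; the linear drift term standing in for the α coefficients, the parameter bounds, and the initialization are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import fftconvolve

def fit_cell(f, y_states, fps=10.0):
    """Fit one cell's activity f(t) as a weighted sum of behavior-state
    probabilities (columns of y_states) convolved with a symmetric exponential
    kernel of time constant tau and shift phi. Parameters are ordered as
    [gamma_1..gamma_B, alpha0, alpha1, tau, phi]."""
    T, B = y_states.shape
    t = (np.arange(T) - T // 2) / fps                  # kernel support, seconds

    def predict(params):
        gam, a0, a1 = params[:B], params[B], params[B + 1]
        tau, phi = params[B + 2], params[B + 3]
        kernel = np.exp(-np.abs(t - phi) / tau) / (2 * tau)
        kernel /= kernel.sum()                          # unit-area discrete kernel
        smoothed = np.column_stack(
            [fftconvolve(y_states[:, b], kernel, mode="same") for b in range(B)])
        # behavior terms plus a simple linear drift (assumption)
        return smoothed @ gam + a0 + a1 * np.arange(T) / T

    def loss(params):
        return np.mean((f - predict(params)) ** 2)

    x0 = np.r_[np.zeros(B), f.mean(), 0.0, 5.0, 0.0]
    bounds = [(None, None)] * (B + 2) + [(0.5, 120.0), (-60.0, 60.0)]
    res = minimize(loss, x0, method="SLSQP", bounds=bounds)
    r2 = 1 - loss(res.x) / np.var(f)
    return res.x, r2
```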
The alternative regression model used the principal components of DGP marker position. Specifically, we used the principal components of the normalized and mean subtracted x and y coordinates of all 8 tracked points. This set of 16 orthogonal regressors were then fed into the same regression model described above in place of the behavioral states.
To test the significance of the fit of each cell by either regression model, we compared variance explained to that from a model that used behavior regressors that were randomly shifted in time. Specifically, we randomly shifted all regressors in time by the same fraction of the total experiment duration, ranging from 33% to 66%, with time points shifted past the end of the experiment wrapping around to the beginning. We generated five instances of this shifted fit and required that the original fit produced larger r2 than all of them. The regression fit for cells that failed this test were treated as not significant, regardless of their r2 value.
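A sketch of this shift test, written to wrap any per-cell fitting function such as the fit_cell sketch above:

```python
import numpy as np

def shift_test(fit_fn, f, y_states, n_shifts=5, rng=None):
    """Compare a cell's r^2 against fits to circularly time-shifted regressors.
    fit_fn(f, y_states) -> (params, r2); the fit is deemed significant only if
    the true r^2 exceeds all n_shifts shifted-control r^2 values."""
    rng = np.random.default_rng(rng)
    T = y_states.shape[0]
    _, r2_true = fit_fn(f, y_states)
    r2_null = []
    for _ in range(n_shifts):
        # shift all regressors together by 33-66% of the recording, wrapping around
        shift = int(rng.uniform(0.33, 0.66) * T)
        r2_null.append(fit_fn(f, np.roll(y_states, shift, axis=0))[1])
    return r2_true > max(r2_null), r2_true, r2_null
```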
Significance of residual dynamics
To ascertain the degree of temporal structure in the residual activity after subtracting the regression model fit, we performed a Ljung-Box test of autocorrelation in the residual dynamics. For every cell, we performed this test on all lags between 10 and 610 frames (approximately corresponding to 1 second and 1 minute, respectively). Every lag and every cell from every fly yielded a p-value lower than 10−10.
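This test can be run with statsmodels (assuming a recent version in which acorr_ljungbox returns a DataFrame of per-lag statistics); here only a coarse grid of lags is evaluated for brevity:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def residual_autocorrelation_pvalues(residuals, lags=None):
    """Ljung-Box test of autocorrelation for each cell's residual trace.
    residuals: (n_cells, T) array; returns per-lag p-values, shape
    (n_cells, n_lags). Lags default to a coarse grid spanning ~1 s to ~1 min
    at ~10 volumes per second."""
    if lags is None:
        lags = np.arange(10, 611, 100)
    return np.vstack([
        acorr_ljungbox(r, lags=lags)["lb_pvalue"].to_numpy()
        for r in residuals
    ])  # temporal structure at a lag is significant when its p-value is small
```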
Dimensionality reduction
We performed PCA on the residual activity after subtracting the regression model fit. We quantified the dimensionality of this residual activity as the number of PCA modes that maximize the log-likelihood of the lower dimensional subspace on held-out data. We fit the principal components on 80% of all time points and evaluate the log-likelihood on the remaining 20%.
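A sketch of this cross-validated dimensionality estimate using scikit-learn's probabilistic-PCA log-likelihood (PCA.score); the random 80/20 split and the cap on the number of modes tested are illustrative choices:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

def residual_dimensionality(residuals, max_modes=80, seed=0):
    """Estimate dimensionality as the number of PCA modes maximizing the
    log-likelihood of held-out time points. residuals: (T, n_cells) array of
    residual activity after subtracting the regression-model fit."""
    train, test = train_test_split(residuals, test_size=0.2, random_state=seed)
    scores = []
    for k in range(1, max_modes + 1):
        pca = PCA(n_components=k).fit(train)
        scores.append(pca.score(test))      # mean held-out log-likelihood
    return int(np.argmax(scores)) + 1, np.array(scores)
```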
To quantify the degree of approximate sparseness of PCA modes without selecting a threshold, we calculate the participation ratio of each principal component vector \(\vec{v}_j\) as

\[S_j = \frac{\left(\sum_{i=1}^{N} v_{ij}^{2}\right)^{2}}{N\sum_{i=1}^{N} v_{ij}^{4}}\]

where N is the number of neurons and vij is the weight of neuron i in mode j.
Intuitively, this gives an estimate of how many elements of each mode are large (significantly nonzero), without having to choose an arbitrary threshold. The participation ratio of a zero-mean Gaussian vector is ~0.33, which is a useful null hypothesis for the existence of either sparse or dense structure in the PCA modes. We define the number of active neurons (n) in each mode as sparseness (S) multiplied by the total number of neurons.
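A minimal implementation of this sparseness measure, consistent with the ~1/3 value quoted for a Gaussian vector:

```python
import numpy as np

def mode_sparseness(components):
    """Participation-ratio-based sparseness of each PCA mode.
    components: (n_modes, n_cells) array of principal component vectors.
    Returns S in (0, 1]; the number of 'active' neurons per mode is
    n = S * n_cells. For i.i.d. Gaussian entries S is ~1/3."""
    n_cells = components.shape[1]
    num = (components ** 2).sum(axis=1) ** 2
    den = n_cells * (components ** 4).sum(axis=1)
    return num / den
```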
We sorted residual activity by behavior label and then performed PCA separately on each behavior’s set of time points to quantify the residual subspace (Xb) of each behavior b. To compare the subspaces of two behaviors, for example running and the quiescent state, we quantified the common variance explained and the common dimensionality. We defined common variance explained (Emb) for m modes as
where b is the behavior on which the PCA modes were defined, and b0 is the other behavior. Similarly, we define common dimensionality by cross validating the projection of one subspace onto the modes of the other (\({{{{{{{{\bf{X}}}}}}}}}_{{b}_{0}}{\overrightarrow{v}}_{jb}\)). See ‘Spatial Organization’ section for explanation of calculating intra-mode distances and spatial organization.
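A sketch of this subspace comparison; normalizing by the variance captured by the other behavior's own top-m modes is an assumption, chosen to be consistent with the "explains ~75% as much variance" phrasing in the Results:

```python
import numpy as np
from sklearn.decomposition import PCA

def common_variance_explained(X_b, X_b0, m=10):
    """Variance of behavior b0's residual activity captured by the top-m PCA
    modes of behavior b, relative to the variance captured by b0's own top-m
    modes. X_b, X_b0: (T_b, n_cells) and (T_b0, n_cells) residual activity."""
    V_b = PCA(n_components=m).fit(X_b).components_      # (m, n_cells)
    V_b0 = PCA(n_components=m).fit(X_b0).components_
    var_cross = (X_b0 @ V_b.T).var(axis=0).sum()        # project b0 onto b's modes
    var_own = (X_b0 @ V_b0.T).var(axis=0).sum()
    return var_cross / var_own
```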
Clustering
We performed agglomerative hierarchical clustering on the residual neural activity using Euclidean affinity and Ward linkage.
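A minimal sketch of this clustering step using SciPy’s hierarchical clustering (equivalent Ward linkage with Euclidean distances); the range of candidate tree cuts is illustrative and feeds the cross-validation procedure described below.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_neurons(residuals, max_clusters=50):
    """residuals: (T, N) residual activity. Build the full Ward/Euclidean merge
    tree over neurons and return cluster labels for every candidate cut."""
    Z = linkage(residuals.T, method="ward", metric="euclidean")
    return {k: fcluster(Z, t=k, criterion="maxclust")
            for k in range(2, max_clusters + 1)}
```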
To look for structure in the data at all spatial scales without defining arbitrary parameters for the number of expected clusters or an affinity threshold, we identified significant clusters using cross-validation. We performed clustering on 80% of the time points, and evaluated the validity of the identified clusters on the remaining 20%. Specifically, we evaluated the intra-cluster variance on held-out time points for each cluster and for size-matched samples from its parent cluster. The number of selected samples was
where \(N_p\) and \(N_c\) are the number of neurons in the parent and child cluster, respectively. A child cluster was deemed significant if its held-out variance was less than that of the size-matched samples (p < 0.05). Both a child cluster and its parent could be significant. See the ‘Spatial organization’ section for an explanation of how intra-cluster distances and cluster organization were calculated.
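The following sketch illustrates the cluster-significance test on held-out time points. The formula for the number of size-matched samples is given by the equation above and is not reproduced here, and the particular notion of intra-cluster variance used (variance across cluster members at each held-out time point, averaged over time) is an assumption.

```python
import numpy as np

def child_cluster_is_significant(res_test, child_idx, parent_idx,
                                 n_samples, alpha=0.05, seed=0):
    """res_test: (T_test, N) held-out residuals. child_idx is a subset of
    parent_idx. Compare the child's held-out intra-cluster variance against
    size-matched random samples drawn from its parent cluster."""
    rng = np.random.default_rng(seed)

    def intra_var(idx):
        # variance across cluster members at each held-out time point, averaged
        return np.var(res_test[:, np.asarray(idx)], axis=1).mean()

    child_var = intra_var(child_idx)
    null = np.array([
        intra_var(rng.choice(parent_idx, size=len(child_idx), replace=False))
        for _ in range(n_samples)
    ])
    p = np.mean(null <= child_var)      # fraction of samples at least as tight
    return p < alpha
```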
Spatial organization
We analyzed the spatial organization of sparse binary patterns. For our analysis of cluster organization, these patterns directly corresponded to cluster labels. For the corresponding analysis of organization of PCA modes, an intermediate binarization step was required. To approximate each PCA mode as a binary pattern, we set large outliers (greater than five standard deviations from the mean) in the original mode to 1 and all other cells to 0.
We defined the Euclidean distance for each binary pattern by first reflecting the brain along the midline, so that the lateral coordinate of each cell equals its distance from the midline (Fig. S6F). We then computed the pairwise Euclidean distances between the cells in each binary pattern. We performed this analysis on both the identified patterns and randomly shuffled patterns of the same size to validate our results.
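A sketch of this distance calculation and its shuffled control; which coordinate axis is lateral, the midline position, and the use of the mean pairwise distance as the summary statistic are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist

def intra_pattern_distance(coords, cell_idx, lateral_axis=0, midline=0.0):
    """coords: (N, 3) cell centroids; cell_idx: indices of cells in a cluster or
    binarized PCA mode. Reflect across the midline so the lateral coordinate is
    distance from the midline, then average pairwise Euclidean distances."""
    c = coords.astype(float).copy()
    c[:, lateral_axis] = np.abs(c[:, lateral_axis] - midline)
    return pdist(c[np.asarray(cell_idx)]).mean()

def shuffled_null(coords, pattern_size, n_shuffles=1000, seed=0):
    # size-matched random patterns for comparison with the identified patterns
    rng = np.random.default_rng(seed)
    return np.array([
        intra_pattern_distance(coords,
                               rng.choice(len(coords), pattern_size, replace=False))
        for _ in range(n_shuffles)
    ])
```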
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Source data are provided with this paper. The datasets generated during the current study are publicly available in NWB format in a Figshare database at https://doi.org/10.6084/m9.figshare.23749074. An accompanying source data file is named “datasets_for_each_figure.xlsx”.
Code availability
Data were collected using custom Matlab software (https://github.com/schafferEvan/VIP) interfacing with the Andor acquisition system (andor.oxinst.com). Analyses were performed using custom Python and Matlab code available in the following GitHub repositories (package versions are specified in the requirements.txt file of each repository): behavioral data processing: https://github.com/themattinthehatt/daart (DOI: 10.5281/zenodo.8277452); neural data processing: https://github.com/schafferEvan/VIP (ref. 73, DOI: 10.5281/zenodo.8263548); analysis: https://github.com/schafferEvan/flygenvectors (ref. 74, DOI: 10.5281/zenodo.8263524).
References
Deisseroth, K. Optogenetics: 10 years of microbial opsins in neuroscience. Nat. Neurosci. 18, 1213–1225 (2015).
Hoopfer, E.D., Jung, Y., Inagaki, H.K., Rubin, G.M., & Anderson, D.J. P1 interneurons promote a persistent internal state that enhances inter-male aggression in drosophila. Elife 4, e11346 (2015).
Duistermars, B. J., Pfeiffer, B. D., Hoopfer, E. D. & Anderson, D. J. A brain module for scalable control of complex, multi-motor threat displays. Neuron 100, 1474–1490.e4 (2018).
von Philipsborn, A. C. et al. Neuronal control of drosophila courtship song. Neuron 69, 509–522 (2011).
Coen, P., Xie, M., Clemens, J. & Murthy, M. Sensorimotor transformations underlying variability in song intensity during drosophila courtship. Neuron 89, 629–644 (2016).
Hindmarsh Sten, T., Li, R., Otopalik, A. & Ruta, V. Sexual arousal gates visual processing during drosophila courtship. Nature 595, 549–553 (2021).
Wang, F. et al. Neural circuitry linking mating and egg laying in drosophila females. Nature 579, 101–105 (2020).
Robie, A. A. et al. Mapping the neural substrates of behavior. Cell 170, 393–406.e28 (2017).
Sen, R. et al. Moonwalker descending neurons mediate visually evoked retreat in drosophila. Curr. Biol. 27, 766–771 (2017).
Ache, J. M., Namiki, S., Lee, A., Branson, K. & Card, G. M. State-dependent decoupling of sensory and motor circuits underlies behavioral flexibility in drosophila. Nat. Neurosci. 22, 1132–1139 (2019).
Maimon, G., Straw, A. D. & Dickinson, M. H. Active flight increases the gain of visual motion processing in drosophila. Nat. Neurosci. 13, 393–399 (2010).
Chiappe, M. E., Seelig, J. D., Reiser, M. B. & Jayaraman, V. Walking modulates speed sensitivity in drosophila motion vision. Curr. Biol. 20, 1470–1475 (2010).
Suver, M. P., Mamiya, A. & Dickinson, M. H. Octopamine neurons mediate flight-induced modulation of visual processing in drosophila. Curr. Biol. 22, 2294–2302 (2012).
Fujiwara, T., Cruz, T. A., Bohnslav, J. P. & Chiappe, M. E. A faithful internal representation of walking movements in the drosophila visual system. Nat. Neurosci. 20, 72–81 (2017).
Strother, J. A. et al. Behavioral state modulates the ON visual motion pathway of drosophila. Proc. Natl. Acad. Sci. USA 115, E102–E111 (2018).
Cohn, R., Morantte, I. & Ruta, V. Coordinated and compartmentalized neuromodulation shapes sensory processing in drosophila. Cell 163, 1742–1755 (2015).
Zolin, A. et al. Context-dependent representations of movement in drosophila dopaminergic reinforcement pathways. Nat. Neurosci. 24, 1555–1566 (2021).
Niell, C. M. & Stryker, M. P. Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65, 472–479 (2010).
Schneider, D. M., Nelson, A. & Mooney, R. A synaptic and circuit basis for corollary discharge in the auditory cortex. Nature 513, 189–194 (2014).
Bimbard, C. et al. Behavioral origin of sound-evoked activity in mouse visual cortex. Nat. Neurosci. 26, 251–258 (2023).
Nguyen, J. P. et al. Whole-brain calcium imaging with cellular resolution in freely behaving caenorhabditis elegans. Proc. Natl. Acad. Sci. USA 113, E1074–81 (2016).
Prevedel, R. et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727–730 (2014).
Ahrens, M. B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).
Kim, D. H. et al. Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish. Nat. Methods 14, 1107–1114 (2017).
Aimon, S. et al. Fast near-whole-brain imaging in adult drosophila during responses to stimuli and behavior. PLoS Biol. 17, e2006732 (2019).
Mann, K., Gallen, C. L. & Clandinin, T. R. Whole-brain calcium imaging reveals an intrinsic functional network in drosophila. Curr. Biol. 27, 2389–2396.e4 (2017).
Mann, K., Deny, S., Ganguli, S. & Clandinin, T. R. Coupling of activity, metabolism and behaviour across the drosophila brain. Nature 593, 244–248 (2021).
Lemon, W. C. et al. Whole-central nervous system functional imaging in larval drosophila. Nat. Commun. 6, 7924 (2015).
Vaadia, R. D. et al. Characterization of proprioceptive system dynamics in behaving drosophila larvae using high-speed volumetric microscopy. Curr. Biol. 29, 935–944.e4 (2019).
Musall, S. et al. Single-trial neural dynamics are dominated by richly varied movements. Nat. Neurosci. 22, 1677–1686 (2019).
Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, 255 (2019).
Steinmetz, N. A., Zatka-Haas, P., Carandini, M. & Harris, K. D. Distributed coding of choice, action and engagement across the mouse brain. Nature 576, 266–273 (2019).
Kato, S. et al. Global brain dynamics embed the motor command sequence of caenorhabditis elegans. Cell 163, 656–669 (2015).
Kauvar, I. C. et al. Cortical observation by synchronous multifocal optical sampling reveals widespread population encoding of actions. Neuron 107, 351–367.e19 (2020).
Marques, J. C., Li, M., Schaak, D., Robson, D. N. & Li, J. M. Internal state dynamics shape brainwide activity and foraging behaviour. Nature 577, 239–243 (2020).
Pacheco, D.A., Thiberge, S.Y., Pnevmatikakis, E., & Murthy, M. Auditory activity is diverse and widespread throughout the central brain of drosophila. Nat. Neurosci. 24, 93–104 (2020).
Koay, S. A., Charles, A. S., Thiberge, S. Y., Brody, C. D. & Tank, D. W. Sequential and efficient neural-population coding of complex task information. Neuron 110, 328–349.e11 (2022).
Allen, W. E. et al. Thirst regulates motivated behavior through modulation of brainwide neural population dynamics. Science 364, 253 (2019).
Aimon, S., Cheng, K.Y., Gjorgjieva, J., & Grunwald Kadow, I.C. Global change in brain state during spontaneous and forced walk in drosophila is composed of combined activity patterns of different neuron classes. Elife 12, e85202 (2023).
Brezovec, L.E., Berger, A.B., Druckmann, S., & Clandinin, T.R. Mapping the neural dynamics of locomotion across the drosophila brain. https://www.biorxiv.org/content/10.1101/2022.03.20.485047v1 (2022).
von Helmholtz, H. L. F. Handbuch der physiologischen Optik Vol. 2 (L. Voss, 1911).
Kim, A. J., Fenk, L. M., Lyu, C. & Maimon, G. Quantitative predictions orchestrate visual signaling in drosophila. Cell 168, 280–294.e12 (2017).
Bouchard, M. B. et al. Swept confocally-aligned planar excitation (SCAPE) microscopy for high speed volumetric imaging of behaving organisms. Nat. Photonics 9, 113–119 (2015).
Voleti, V. et al. Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Nat. Methods 16, 1054–1062 (2019).
Weislogel, J.-M. et al. Requirement for nuclear calcium signaling in drosophila long-term memory. Sci. Signal. 6, ra33 (2013).
Jung, Y. et al. Neurons that function within an integrator to promote a persistent behavioral state in drosophila. Neuron 105, 322–333.e5 (2020).
Scheffer, L. K. et al. A connectome and analysis of the adult drosophila central brain. Elife 9, e57443 (2020).
Wu, A. et al. Deep graph pose: a semi-supervised deep graphical model for improved animal pose tracking. In: H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, eds. Advances in Neural Information Processing Systems, volume 33, pages 6040–6052. Curran Associates, Inc., (2020).
Whiteway, M.R. et al. Semi-supervised sequence modeling for improved behavioral segmentation. bioRxiv https://www.biorxiv.org/content/10.1101/2021.06.16.448685v1 (2021).
de Velasco, B. et al. Specification and development of the pars intercerebralis and pars lateralis, neuroendocrine command centers in the drosophila brain. Dev. Biol. 302, 309–323 (2007).
Turner-Evans, D. B. & Jayaraman, V. The insect central complex. Curr. Biol. 26, R453–7 (2016).
Hsu, C. T. & Bhandawat, V. Organization of descending neurons in drosophila melanogaster. Sci. Rep. 6, 20259 (2016).
Hampel, S., Franconville, R., Simpson, J. H. & Seeds, A. M. A neural command circuit for grooming movement control. Elife 4, e08758 (2015).
Seeds, A. M. et al. A suppression hierarchy among competing motor programs drives sequential grooming in drosophila. Elife 3, e02951 (2014).
Aso, Y. et al. The neuronal architecture of the mushroom body provides a logic for associative learning. Elife 3, e04577 (2014).
Pfeiffer, B. D. et al. Tools for neuroanatomy and neurogenetics in drosophila. Proc. Natl. Acad. Sci. USA 105, 9715–9720 (2008).
Vijayan, V. et al. A rise-to-threshold signal for a relative value deliberation. bioRxiv https://doi.org/10.1101/2021.09.23.461548 (2021).
Nässel, D. R. & Zandawala, M. Hormonal axes in drosophila: regulation of hormone release and multiplicity of actions. Cell Tissue Res. 382, 233–266 (2020).
Munch, D., Goldschmidt, D., & Ribeiro, C. Distinct internal states interact to shape food choice by modulating sensorimotor processing at global and local scales. bioRxiv, https://www.biorxiv.org/content/10.1101/2021.05.27.445920v1 (2021).
Bading, H. Nuclear calcium signalling in the regulation of brain function. Nat. Rev. Neurosci. 14, 593–608 (2013).
Jenett, A. et al. A GAL4-driver line resource for drosophila neurobiology. Cell Rep. 2, 991–1001 (2012).
Kaufman, M. T., Churchland, M. M., Ryu, S. I. & Shenoy, K. V. Cortical activity in the null space: permitting preparation without movement. Nat. Neurosci. 17, 440–448 (2014).
Chen, C.-L. et al. Ascending neurons convey behavioral state to integrative sensory and action selection brain regions. Nat. Neurosci. 26, 682–695 (2023).
Kaplan, H. S. & Zimmer, M. Brain-wide representations of ongoing behavior: a universal principle. Curr. Opin. Neurobiol. 64, 60–69 (2020).
Merel, J., Aldarondo, D., Marshall, J., Tassa, Y., Wayne, G., & Ölveczky, B. Deep neuroethology of a virtual rodent. https://arxiv.org/abs/1911.09451 (2019).
Heess, N. et al. Learning and transfer of modulated locomotor controllers. https://arxiv.org/abs/1610.05182 (2016).
Seelig, J. D. et al. Two-photon calcium imaging from head-fixed drosophila during optomotor walking behavior. Nat. Methods 7, 535–540 (2010).
Pnevmatikakis, E. A. & Giovannucci, A. NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, 83–94 (2017).
Mathis, A. et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 21, 1281–1289 (2018).
Bohnslav, J.P. et al. DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels. Elife 10, e63377 (2021).
Lea, C., Flynn, M. D., Vidal, R., Reiter, A. & Hager, G. D. Temporal convolutional networks for action segmentation and detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1003–1012 (IEEE, 2017).
Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Schaffer, E.S. & Mishra, N. schafferevan/vip: v1.0.0 (2023).
Schaffer, E.S., Whiteway, M.R., & Mishra, N. schafferevan/flygenvectors: v1.0.0 (2023).
Acknowledgements
We would like to thank Tanya Tabachnik for the design and manufacturing of our fly treadmill, Barry Dickson for generously sharing fly stocks, Armaan Ahmed, Virginia Devi-Chou, and Benjamin Lucero for assistance in processing data, and members of the Axel, Hillman, and Paninski labs for helpful comments and suggestions. This work was supported by grants from the Simons Foundation (481778, E.S.S.; 542951, L.F.A., E.M.C.H., R.A.; 543023, L.P.), the NSF Graduate Research Fellowship Program DGE 16-44869 (N.M.), BRAIN Initiative Awards UF1NS108213 and U01NS094296 (E.M.C.H.), the NSF NeuroNex Award DBI-1707398 (L.F.A., L.P.), the Gatsby Charitable Foundation (L.F.A.), and the Howard Hughes Medical Institute (R.A.).
Author information
Authors and Affiliations
Contributions
Conceptualization, E.S.S., N.M., M.R.W., W.L., L.P., L.F.A., E.M.C.H., R.A. Methodology, N.M., E.S.S., M.R.W., W.L., J.F., L.P., L.F.A., E.M.C.H. Validation—Iterations of Fly Imaging, E.S.S., N.M., W.L., J.F. Software, N.M., E.S.S., M.R.W., W.L., K.P., V.V., E.M.C.H. Formal Analysis, E.S.S., N.M., M.R.W. Investigation, N.M., E.S.S., W.L., J.F., M.V. Data Management, E.S.S., N.M., M.V. Software for Data Curation, N.M., E.S.S., M.R.W. Writing—original draft, E.S.S. Writing—Preparation of Manuscript, N.M., E.S.S. Writing—review & editing, E.S.S., N.M., M.R.W., W.L., L.P., L.F.A., E.M.C.H., R.A. Visualization, E.S.S. Supervision, R.A., E.M.C.H., L.F.A., L.P. Funding acquisition, N.M., E.S.S., L.P., L.F.A., E.M.C.H., R.A.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Sophie Aimon and the other, anonymous, reviewers for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Schaffer, E.S., Mishra, N., Whiteway, M.R. et al. The spatial and temporal structure of neural activity across the fly brain. Nat Commun 14, 5572 (2023). https://doi.org/10.1038/s41467-023-41261-2