
# Inferring single-trial neural population dynamics using sequential auto-encoders

## Abstract

Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems, a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, latent factor analysis via dynamical systems accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.




## Data availability

Data will be made available upon reasonable request from the authors, unless prohibited owing to research participant privacy concerns.

## References

1. Afshar, A. et al. Single-trial neural correlates of arm movement preparation. Neuron 71, 555–564 (2011).

2. Carnevale, F., de Lafuente, V., Romo, R., Barak, O. & Parga, N. Dynamic control of response criterion in premotor cortex during perceptual detection under temporal uncertainty. Neuron 86, 1067–1077 (2015).

3. Churchland, M. M. et al. Neural population dynamics during reaching. Nature 487, 51–56 (2012).

4. Harvey, C. D., Coen, P. & Tank, D. W. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).

5. Kaufman, M. T., Churchland, M. M., Ryu, S. I. & Shenoy, K. V. Cortical activity in the null space: permitting preparation without movement. Nat. Neurosci. 17, 440–448 (2014).

6. Kobak, D. et al. Demixed principal component analysis of neural population data. eLife 5, e10989 (2016).

7. Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).

8. Pandarinath, C. et al. Neural population dynamics in human motor cortex during movements in people with ALS. eLife 4, e07436 (2015).

9. Sadtler, P. T. et al. Neural constraints on learning. Nature 512, 423–426 (2014).

10. Shenoy, K. V., Sahani, M. & Churchland, M. M. Cortical control of arm movements: a dynamical systems perspective. Annu. Rev. Neurosci. 36, 337–359 (2013).

11. Ahrens, M. B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).

12. Yu, B. M. et al. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 102, 614–635 (2009).

13. Zhao, Y. & Park, I. M. Variational latent Gaussian process for recovering single-trial dynamics from population spike trains. Neural Comput. 29, 1293–1316 (2017).

14. Aghagolzadeh, M. & Truccolo, W. Latent state-space models for neural decoding. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2014, 3033–3036 (2014).

15. Gao, Y., Archer, E. W., Paninski, L. & Cunningham, J. P. Linear dynamical neural population models through nonlinear embeddings. In Proc. 30th International Conference on Neural Information Processing Systems (eds. Lee, D. D. et al.) 163–171 (Curran Associates, Red Hook, NY, 2016).

16. Kao, J. C. et al. Single-trial dynamics of motor cortex and their applications to brain-machine interfaces. Nat. Commun. 6, 7759 (2015).

17. Macke, J. H. et al. Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems 24 (eds Shawe-Taylor, J. et al.) 1350–1358 (Curran Associates, Red Hook, NY, 2011).

18. Linderman, S. et al. Bayesian learning and inference in recurrent switching linear dynamical systems. In Proc. 20th International Conference on Artificial Intelligence and Statistics vol. 54 (eds Singh, A. & Zhu, J.) 914–922 (PMLR/Microtome Publishing, Brookline, MA, 2017).

19. Petreska, B. et al. Dynamical segmentation of single trials from population neural data. In Advances in Neural Information Processing Systems 24 (eds Shawe-Taylor, J. et al.) 756–764 (Curran Associates, Red Hook, NY, 2011).

20. Kato, S. et al. Global brain dynamics embed the motor command sequence of Caenorhabditis elegans. Cell 163, 656–669 (2015).

21. Kaufman, M. T. et al. The largest response component in motor cortex reflects movement timing but not movement type. eNeuro 3, ENEURO.0085-16.2016 (2016).

22. Gao, P. & Ganguli, S. On simplicity and complexity in the brave new world of large-scale neuroscience. Curr. Opin. Neurobiol. 32, 148–155 (2015).

23. Kingma, D. P. & Welling, M. Auto-encoding variational bayes. arXiv Preprint at https://arxiv.org/abs/1312.6114 (2013).

24. Doersch, C. Tutorial on variational autoencoders. arXiv Preprint at https://arxiv.org/abs/1606.05908 (2016).

25. Sussillo, D., Jozefowicz, R., Abbott, L. F. & Pandarinath, C. LFADS—latent factor analysis via dynamical systems. arXiv Preprint at https://arxiv.org/abs/1608.06315 (2016).

26. Salinas, E. & Abbott, L. F. Vector reconstruction from firing rates. J. Comput. Neurosci. 1, 89–107 (1994).

27. Turaga, S. et al. Inferring neural population dynamics from multiple partial recordings of the same neural circuit. In Advances in Neural Information Processing Systems 26 (eds Burges, C. J. C. et al.) 539–547 (Curran Associates, Red Hook, NY, 2013).

28. Nonnenmacher, M., Turaga, S. C. & Macke, J. H. Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations. In Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 5702–5712 (Curran Associates, Red Hook, NY, 2017).

29. Donoghue, J. P., Sanes, J. N., Hatsopoulos, N. G. & Gaal, G. Neural discharge and local field potential oscillations in primate motor cortex during voluntary movements. J. Neurophysiol. 79, 159–173 (1998).

30. Murthy, V. N. & Fetz, E. E. Synchronization of neurons during local field potential oscillations in sensorimotor cortex of awake monkeys. J. Neurophysiol. 76, 3968–3982 (1996).

31. Fries, P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn. Sci. 9, 474–480 (2005).

32. Yuste, R. From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487 (2015).

33. Gilja, V. et al. Clinical translation of a high-performance neural prosthesis. Nat. Med. 21, 1142 (2015).

34. Pandarinath, C. et al. High performance communication by people with paralysis using an intracortical brain-computer interface. eLife 6, e18554 (2017).

35. Sussillo, D. et al. A recurrent neural network for closed-loop intracortical brain–machine interface decoders. J. Neural Eng. 9, 26027 (2012).

36. Sussillo, D., Stavisky, S. D., Kao, J. C., Ryu, S. I. & Shenoy, K. V. Making brain–machine interfaces robust to future neural variability. Nat. Commun. 7, 13749 (2016).

37. Ezzyat, Y. et al. Closed-loop stimulation of temporal cortex rescues functional networks and improves memory. Nat. Commun. 9, 365 (2018).

38. Klinger, N. V. & Mittal, S. Clinical efficacy of deep brain stimulation for the treatment of medically refractory epilepsy. Clin. Neurol. Neurosurg. 140, 11–25 (2016).

39. Little, S. et al. Adaptive deep brain stimulation in advanced Parkinson disease. Ann. Neurol. 74, 449–457 (2013).

40. Rosin, B. et al. Closed-loop deep brain stimulation is superior in ameliorating parkinsonism. Neuron 72, 370–384 (2011).

41. Williamson, R. S., Sahani, M. & Pillow, J. W. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction. PLoS Comput. Biol. 11, e1004141 (2015).

42. Rezende, D. J., Mohamed, S., & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Proc. 31st International Conference on Machine Learning (eds Xing, E. P. & Jebara, T.) 1278–1286 (JMLR/Microtome Publishing, Brookline, MA, 2014).

43. Gregor, K., Danihelka, I., Graves, A., Rezende, D. J. & Wierstra, D. DRAW: a recurrent neural network for image generation. arXiv Preprint at https://arxiv.org/abs/1502.04623 (2015).

44. Krishnan, R. G., Shalit, U. & Sontag, D. Deep Kalman filters. arXiv Preprint at https://arxiv.org/abs/1511.05121 (2015).

45. Chung, J., Gulcehre, C., Cho, K. & Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv Preprint at https://arxiv.org/abs/1412.3555 (2014).

46. Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv Preprint at https://arxiv.org/abs/1207.0580 (2012).

47. Zaremba, W., Sutskever, I. & Vinyals, O. Recurrent neural network regularization. arXiv Preprint at https://arxiv.org/abs/1409.2329 (2014).

48. Bowman, S. R. et al. Generating sentences from a continuous space. In Proc. 20th SIGNLL Conference on Computational Natural Language Learning (eds Riezler, S. & Goldberg, Y.) 10–21 (Association for Computational Linguistics, Stroudsberg, PA, 2016).

49. Hochberg, L. R. BrainGate2: feasibility study of an intracortical neural interface system for persons with tetraplegia (BrainGate2). ClinicalTrials.gov https://www.clinicaltrials.gov/ct2/show/NCT00912041 (2018).

50. Gilja, V. et al. A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15, 1752–1757 (2012).

51. van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).

52. Willett, F. R. et al. Feedback control policies employed by people using intracortical brain-computer interfaces. J. Neural Eng. 14, 016001 (2017).

53. Fan, J. M. et al. Intention estimation in brain–machine interfaces. J. Neural Eng. 11, 16004 (2014).

54. Paninski, L. Maximum likelihood estimation of cascade point-process neural encoding models. Network 15, 243–262 (2004).

## Acknowledgements

We thank J.P. Cunningham and J. Sohl-Dickstein for extensive conversation. We thank M.M. Churchland for contributions to data collection for monkey J, C. Blabe and P. Nuyujukian for assistance with research sessions with participant T5, E. Eskandar for array implantation with participant T7, and B. Sorice and A. Sarma for assistance with research sessions with participant T7. L.F.A.’s research was supported by US National Institutes of Health grant MH093338, the Gatsby Charitable Foundation through the Gatsby Initiative in Brain Circuitry at Columbia University, the Simons Foundation, the Swartz Foundation, the Harold and Leila Y. Mathers Foundation, and the Kavli Institute for Brain Science at Columbia University. C.P. was supported by a postdoctoral fellowship from the Craig H. Neilsen Foundation for spinal cord injury research and the Stanford Dean’s Fellowship. S.D.S. was supported by the ALS Association’s Milton Safenowitz Postdoctoral Fellowship. K.V.S.’s research was supported by the following awards: an NIH-NINDS award (T-R01NS076460), an NIH-NIMH award (T-R01MH09964703), an NIH Director’s Pioneer award (8DP1HD075623), a DARPA-DSO ‘REPAIR’ award (N66001-10-C-2010), a DARPA-BTO ‘NeuroFAST’ award (W911NF-14-2-0013), a Simons Foundation Collaboration on the Global Brain award (325380), and the Howard Hughes Medical Institute. J.M.H.’s research was supported by an NIH-NIDCD award (R01DC014034). K.V.S. and J.M.H.’s research was supported by Stanford BioX-NeuroVentures, Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Garlick Foundation, and the Reeve Foundation. L.R.H.’s research was supported by an NIH-NIDCD award (R01DC009899), the Rehabilitation Research and Development Service, Department of Veterans Affairs (B6453R), the MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke, and the Executive Committee on Research, Massachusetts General Hospital. 
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, the Department of Veterans Affairs, or the United States Government. BrainGate CAUTION: Investigational Device. Limited by Federal Law to Investigational Use.

## Author information


### Contributions

C.P., D.J.O., and D.S. designed the study, performed analyses, and wrote the manuscript with input from all other authors. D.S. and L.F.A. developed the algorithmic approach. C.P., J.C., and R.J. contributed to algorithmic development and analysis of synthetic data. D.J.O., S.D.S., J.C.K., E.M.T., M.T.K., S.I.R., and K.V.S. performed research with monkeys. C.P., L.R.H., K.V.S., and J.M.H. performed research with human research participants. All authors contributed to revision of the manuscript.

### Corresponding authors

Correspondence to Chethan Pandarinath or David Sussillo.

## Ethics declarations

### Competing interests

J.M.H. is on the Medical Advisory Boards of Enspire DBS and Circuit Therapeutics, and the Surgical Advisory Board for Neuropace, Inc. K.V.S. is a consultant to Neuralink Corp. and on the Scientific Advisory Boards of CTRL-Labs, Inc. and Heal, Inc. These entities did not support this work. D.S. and J.C. are employed by Google Inc., and R.J. was employed by Google Inc. at the time the research was conducted. This funder provided support in the form of salaries for authors, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Integrated supplementary information

### Supplementary Figure 1 Conceptual dynamical system: 1D pendulum.

a. A 1-dimensional pendulum released from position p1 or p2 has different positions and velocities over time. b. The state of the pendulum is captured by a 2-D vector that specifies its position and velocity, shown here evolving as a function of time (blue and black traces correspond to different initial conditions p1 and p2). c. The evolution of the state follows dynamical rules, that is, the pendulum’s equations of motion. In this autonomous dynamical system (that is, there are no perturbations), knowing the pendulum’s initial state (filled circles) and the dynamical rules that govern the state evolution (gray vector field) gives full knowledge of the pendulum’s state as a function of time (black and blue traces). d-f. Perturbations to the pendulum, an example of input-driven dynamics. d. The pendulum’s motion might be perturbed by an external force, for example if it is bumped rightward. e,f. With a perturbation, the evolution of the system’s state during the perturbation no longer follows its autonomous dynamical rules. This is shown as dashed red lines in the position vs. time and velocity vs. time plots, as well as the state-space diagram. A perturbation can be modeled by adding to the equations of motion an input term u(t). g. LFADS modeling of the perturbed pendulum. Traces of the pendulum motion are used to train LFADS. During training, the LFADS generator RNN learns to approximate the pendulum dynamics, ṡ = f(s, u), using its own internal state and dynamics. LFADS also learns a per-trial initial generator state s0, which allows it to model trials that start with different initial pendulum states, such as p1 or p2. Further, LFADS learns a set of time-varying inputs per trial, which allows it to model perturbations to the pendulum system, u(t). These three pieces of information are enough to reconstruct each trial.
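The autonomous vs. input-driven distinction above can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper’s code): a 1-D pendulum integrated with Euler steps of ṡ = f(s, u), run once autonomously and once with a brief rightward bump u(t).

```python
import numpy as np

def pendulum_step(state, u, dt=0.01, g=9.81, length=1.0):
    """One Euler step of the pendulum dynamics s' = f(s, u).
    state = (theta, omega); u is an external torque perturbation."""
    theta, omega = state
    dtheta = omega
    domega = -(g / length) * np.sin(theta) + u
    return (theta + dt * dtheta, omega + dt * domega)

def simulate(theta0, u_fn, n_steps=500, dt=0.01):
    """Roll the dynamics forward from initial state (theta0, 0)."""
    states = [(theta0, 0.0)]
    for step in range(n_steps):
        states.append(pendulum_step(states[-1], u_fn(step * dt), dt=dt))
    return np.array(states)

# An autonomous trial (u = 0) and a trial bumped rightward at t = 1 s.
autonomous = simulate(0.5, lambda t: 0.0)
bumped = simulate(0.5, lambda t: 5.0 if 1.0 <= t < 1.1 else 0.0)
```

The two trajectories coincide until the bump and diverge afterward, mirroring panels d-f: the perturbed state no longer follows the autonomous vector field.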

### Supplementary Figure 2 LFADS applied to Lorenz attractor.

We compared the performance of LFADS to three existing methods that estimate latent state from neural data: Variational Latent Gaussian Process (vLGP), Gaussian Process Factor Analysis (GPFA), and Poisson Feed-forward neural network Linear Dynamical System (PfLDS). To test LFADS and to compare its performance with other approaches, we generated synthetic stochastic spike trains from deterministic nonlinear systems. The first is the standard Lorenz system (see Online Methods for equations and details). a. An example trial illustrating the evolution of the Lorenz system in its 3-dimensional state space, and b. its dynamic variables as a function of time. c. Firing rates for the 30 simulated neurons are generated by a linear readout of the latent variables followed by an exponential nonlinearity, with neurons sorted according to their weighting for the first Lorenz dimension. d. Spike times for the neurons are generated from the rates of the simulated neurons. e-h Sample performance for each method applied to spike trains based on Lorenz attractor. Each panel shows actual (black) and inferred (red) values of the three latent variables for a single example trial for the 4 methods: e. vLGP, f. GPFA, g. PfLDS, h. LFADS. For LFADS, posterior means were averaged over 128 samples of g0 conditioned on the particular data sequence being decoded. We quantified performance using R2, that is, the fraction of the variance of the actual latent variables captured by the estimated latent values. For the latent dimensions {y1,y2,y3}, the resulting R2 values were {0.761,0.688,0.218}, {0.713,0.725,0.325}, {0.784,0.732,0.368}, and {0.850,0.921,0.872} for vLGP, GPFA, PfLDS, and LFADS, respectively.
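The synthetic-data pipeline described above (latent Lorenz states, a linear readout followed by an exponential nonlinearity, then Poisson spiking) and the R² metric can be sketched as follows. All constants here (readout scale, baseline log-rate, bin width) are illustrative assumptions, not the paper’s exact settings.

```python
import numpy as np

def lorenz_trajectory(n_steps=1000, dt=0.006, sigma=10.0, rho=28.0, beta=8/3):
    """Integrate the standard Lorenz system with simple Euler steps."""
    y = np.array([1.0, 1.0, 25.0])
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        dy = np.array([sigma * (y[1] - y[0]),
                       y[0] * (rho - y[2]) - y[1],
                       y[0] * y[1] - beta * y[2]])
        y = y + dt * dy
        traj[i] = y
    return traj

rng = np.random.default_rng(0)
latents = lorenz_trajectory()                     # (T, 3) latent states
z = (latents - latents.mean(0)) / latents.std(0)  # normalized latents
W = rng.normal(scale=0.3, size=(3, 30))           # linear readout to 30 neurons
b = -2.0                                          # baseline log-rate (assumed)
rates = np.exp(z @ W + b)                         # exponential nonlinearity
spikes = rng.poisson(rates * 0.01)                # Poisson counts in 10-ms bins

def r_squared(true, est):
    """Fraction of variance of the true signal captured by the estimate."""
    return 1.0 - np.sum((true - est) ** 2) / np.sum((true - true.mean()) ** 2)
```

`r_squared` is the metric used for the per-dimension comparisons {y1, y2, y3} quoted in the caption.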

### Supplementary Figure 3 LFADS applied to autonomous chaotic data RNN.

a-c. We generated synthetic data with high-dimensional, chaotic dynamics using an RNN. a. Firing rates for one example trial, simulated by the chaotic "data RNN" (colors show rates fluctuating between -1 and 1). b. We then created synthetic spike trains, emitted from a Poisson process whose underlying rates were the normalized continuous rates of the data RNN. c. We used principal components analysis to assess the dimensionality of the data. As expected, the state of the data RNN had lower dimension than its number of neurons, and 20 principal components were sufficient to capture > 95% of the variance of the system. So we restricted the latent space to 20 dimensions for each of the models tested and, in the case of LFADS, set the factors dimensionality to 20 as well (F = 20). d-g Sample performance for each method on the RNN task. We tested the performance of the methods at extracting the underlying firing rates from the spike trains of the RNN dataset. Shown are single trial examples for d GPFA, e vLGP, f PfLDS, and g LFADS. As can be seen by eye, the LFADS results are closer to the actual underlying rates than for the other models (black, firing rates of chaotic data RNN, red, inferred rates). h-j. Summary R2 values between actual and inferred rates. Comparison using held-out data of the R2 values for h GPFA vs. LFADS, i vLGP vs. LFADS, and j PfLDS vs. LFADS for all units (blue ’x’). In all comparisons, LFADS yields a better fit to the data, for every single unit.
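The dimensionality check in panel c (choosing the smallest number of principal components that capture > 95% of the variance) can be sketched as below; the synthetic 20-D latent example is hypothetical, standing in for the data RNN.

```python
import numpy as np

def n_components_for_variance(X, threshold=0.95):
    """Smallest number of principal components capturing `threshold`
    of the variance of data matrix X (samples x features)."""
    Xc = X - X.mean(axis=0)
    # Singular values of the centered data give the component variances.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, threshold) + 1)

# Hypothetical example: 100-unit data generated from a 20-D latent space
# plus a little observation noise.
rng = np.random.default_rng(1)
latents = rng.normal(size=(2000, 20))
readout = rng.normal(size=(20, 100))
X = latents @ readout + 0.01 * rng.normal(size=(2000, 100))
```

With nearly noiseless 20-D structure, the cumulative variance curve saturates by 20 components, which is the criterion used to set the latent dimensionality (and the LFADS factors dimensionality, F = 20).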

### Supplementary Figure 4 Effect of varying hyperparameters on kinematic decoding performance in the maze task.

For GPFA, we tested the effect of varying binsize and latent dimensionality on the ability to decode arm velocities. For LFADS, we fixed the binsize at 5 ms, and tested the effect of changing the latent dimensionality (number of factors).

### Supplementary Figure 5 Single-trial factor trajectories from the multisession stitched LFADS model for a representative session.

LFADS factors are projected into a space consisting of the condition-independent signal (CIS) and the first rotational plane identified using jPCA; this is the same projection used in Fig. 4d in the main text. Black dots depict the time of go cue presentation, and each trajectory proceeds for 510 ms. Colors indicate reach direction as in Fig. 4. Thin traces are trajectories from individual trials; thick traces of the same color are the condition-averaged trajectory for each reach direction.

### Supplementary Figure 6 Dynamically stitched multisession LFADS model outperforms single-session LFADS models in predicting reaction times.

As defined by Kaufman et al. (eNeuro, 2016), the condition-independent signal (CIS) is a high-variance component of motor cortical population activity obtained via demixed principal components analysis (dPCA). Kaufman et al. also demonstrated that the threshold-crossing time of the CIS on single trials is an effective predictor of reach reaction time (RT). Here we identify the CIS as a linear projection of LFADS factor trajectories. We apply dPCA to the factor outputs of each single-session LFADS model and of the multi-session LFADS model to identify the largest condition-independent component, and then threshold the CIS to predict RT on single trials. a. Plot of condition-independent signals (CIS) for an example dataset. Each trace represents the CIS timecourse on a single trial and is colored by that trial’s actual RT. b. Plot of correlations between CIS-predicted RT and actual RT on trials from each dataset for stitched multi-session LFADS vs. single-session LFADS. Each point represents an individual recording session. For the stitched model, a single CIS projection was computed and applied for all sessions, whereas individual CIS projections were obtained for each single-session model.
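The threshold-crossing step of this analysis can be sketched as follows. The sigmoid CIS traces and the 0.5 threshold are illustrative assumptions standing in for the dPCA-derived CIS; only the crossing-time logic matters.

```python
import numpy as np

def threshold_crossing_times(cis, threshold, dt=0.01):
    """First time each single-trial CIS trace (trials x time) crosses
    `threshold`; np.nan for traces that never cross."""
    crossed = cis >= threshold
    first = crossed.argmax(axis=1).astype(float) * dt
    first[~crossed.any(axis=1)] = np.nan
    return first

# Hypothetical single-trial CIS traces: sigmoids whose onsets shift with
# each trial's simulated reaction time.
rng = np.random.default_rng(2)
dt, t = 0.01, np.arange(0, 1, 0.01)
rts = rng.uniform(0.2, 0.5, size=200)
cis = 1 / (1 + np.exp(-(t[None, :] - rts[:, None]) / 0.02))
pred = threshold_crossing_times(cis, 0.5, dt)
r = np.corrcoef(pred, rts)[0, 1]
```

The correlation `r` between predicted and actual RT is the quantity plotted per session in panel b.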

### Supplementary Figure 7 Inferring inputs from a chaotic data RNN with delta pulse inputs.

We tested the ability of LFADS to infer the input to a dynamical system, specifically chaotic data RNNs, as used in Supp. Fig. 3. During each trial, we perturbed the network by delivering a delta pulse at a random time tpulse between 0.25s and 0.75s. The full trial length was 1s. This pulse affected the underlying rates produced by the data RNN, which subsequently affected the generated spike trains that were used as input to LFADS. To test the ability of LFADS to infer the timing of the input pulses, we allowed the LFADS model to infer a time-varying input (1-dimensional). We explored two levels of dynamical complexity in the data RNNs (see Online Methods), defined by two values, 1.5 and 2.5, of a hyper-parameter to the data RNN, γ. a-c γ = 1.5. This value of γ value produces “gentler" chaotic activity in the data RNN than the higher value. a. Example trial illustrating results from the γ = 1.5 chaotic data RNN with an external input (shown in black at the top of each column). Firing rates for the 50 simulated neurons. b. Poisson-generated spike times for the simulated neurons. c. Example trial showing (top) the actual (black) and inferred (cyan) input, and (bottom) actual firing rates of a subset of neurons in black and the corresponding inferred firing rates in red (bottom). d-f. Same as a-c, but for γ = 2.5, which produces significantly more chaotic dynamics than γ = 1.5. f. For this more difficult case, LFADS inferred input at the correct time (blue arrow), but also used its 1-D inferred input to shape the dynamics at times there was no actual input (green arrow).

### Supplementary Figure 8 Summary of results of chaotic data RNNs receiving input pulses.

We extracted averaged inferred inputs, u_t, from the LFADS models used in Supp. Fig. 7. a,b. To see how the timing of the inferred input was related to the timing of the actual input pulse, we determined the time at which u_t reached its maximum value (vertical axis) and plotted this against the actual time of the delta pulse (horizontal axis) for all trials. a. Results using γ = 1.5, b. and γ = 2.5. As shown, for the majority of trials, despite the complex internal dynamics, LFADS was able to infer the correct timing of a strong input. However, LFADS inferred the inputs more accurately in the case of simpler dynamics, for two reasons: in the case of γ = 2.5, 1) the complexity of the dynamics reduces the effective magnitude of the perturbation caused by the input, and 2) LFADS used the inferred input more actively to account for non-input-driven dynamics. We include this example of a highly chaotic data RNN to highlight the subtlety of interpreting an inferred input. c,d. One possibility when using LFADS with inferred inputs (that is, dimensionality of u_t ≥ 1) is that the data to be modeled are actually generated by an autonomous system, yet one, not knowing this fact, allows for an inferred input in LFADS. To study this case we used the four chaotic data RNNs described above, that is, γ = 1.5 and γ = 2.5, with and without delta pulse inputs. We trained an LFADS model for each of the four cases, with an inferred input of dimensionality 1, despite the fact that two of the four data RNNs generated their data autonomously. After training, we examined the strength of the average inferred input, u_t, for each LFADS model, where strength is taken as the root-mean-square of the inferred input averaged over an appropriate time window, sqrt(⟨u_t²⟩_{t1:t2}). The results are shown in panel c for γ = 1.5 and panel d for γ = 2.5. The solid lines show the strength of u_t at each time point when the data RNN received no input pulses, averaged across all examples. The ’◦’ and ’x’ show the strength of u_t at time points when the data RNN received delta pulses, averaged in a time window around t, and averaged over all examples. Intuitively, a ’◦’ is the strength of u_t around a delta pulse at time t, and an ’x’ is the strength of u_t if there was no delta pulse around time t.
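The two summary measures used here, the time of the input’s maximum (panels a,b) and the windowed RMS strength sqrt(⟨u_t²⟩_{t1:t2}) (panels c,d), can be sketched in a few lines; bin width is an assumed 10 ms.

```python
import numpy as np

def peak_time(u, dt=0.01):
    """Time at which a 1-D inferred input trace u is maximal, as
    compared against the actual delta-pulse time in panels a,b."""
    return np.argmax(u) * dt

def input_strength(u, t1, t2):
    """RMS strength sqrt(<u_t^2>) of an inferred input over the
    window of time indices [t1, t2)."""
    return np.sqrt(np.mean(u[t1:t2] ** 2))
```

For example, a trace that is flat except for a pulse near one time bin yields a `peak_time` at that bin and a larger `input_strength` in a window around the pulse than in a quiet window.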

### Supplementary Figure 9 Inferring input from a data RNN trained to integrate white noise to a bound.

LFADS is able to model simulated neurons that are integrating a noisy input, and also to infer the input itself (the noise signal). a. Overview of the integration-to-bound task. On each trial, the data RNN receives noise drawn from a Gaussian distribution with mean 0 and variance 0.0625. We trained an RNN to integrate this stochastic, 1-dimensional input to either a high (+1) or low (−1) bound. After the data RNN learned the task, we generated spiking data from 50 neurons using methodology similar to that of Supp. Fig. 3 and fit an LFADS model to these data. b-d. We fit an LFADS model to the data using 3,200 1-second training examples and evaluated its performance on 800 held-out trials. LFADS was able to accurately infer the ground-truth firing rates (LFADS in red, ground truth in black). LFADS also inferred the associated white-noise input to the data RNN (LFADS in cyan, ground truth in black; posterior means averaged over 1,024 samples). These panels show the trials with the worst, median, and best measured R2 values between true and inferred inputs. b. Trial with the worst R2 = 0.11, c. median R2 = 0.38, d. and the best R2 = 0.64. e. Histogram showing the distribution of R2 values between true and inferred inputs for the 800 held-out trials.
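The integration-to-bound task itself can be sketched directly. This is a hypothetical stand-in that integrates the noise and sticks at the first bound reached, rather than running a trained RNN; the noise statistics (mean 0, variance 0.0625, bounds ±1) follow the caption.

```python
import numpy as np

def integrate_to_bound(n_trials=800, n_steps=100, noise_var=0.0625,
                       bound=1.0, seed=0):
    """Each trial integrates 1-D Gaussian noise (mean 0, given variance)
    and freezes at the first bound (+bound or -bound) it reaches."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, np.sqrt(noise_var), size=(n_trials, n_steps))
    x = np.cumsum(u, axis=1)
    for trial in x:
        hits = np.flatnonzero(np.abs(trial) >= bound)
        if hits.size:
            # Clamp the trajectory at the bound from the first crossing on.
            trial[hits[0]:] = np.sign(trial[hits[0]]) * bound
    return u, x

inputs, integrals = integrate_to_bound()
```

The returned `inputs` play the role of the white-noise signal LFADS must infer, and `integrals` the role of the integrator's latent state.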

### Supplementary Figure 10 Inferred inputs from individual trials in the cursor jump task.

This figure parallels Fig. 5d from the main text, but in this case, we plot the inferred inputs for individual trials. To increase visibility, only 10 trials for each condition (reach direction and perturbation type) are shown. Individual traces were smoothed with an acausal Gaussian filter (60 ms s.d.). Despite the high variance across individual trials, several of the trends in the inferred inputs described in Fig. 5 in the main text are visible at the single-trial level. The inputs show information about the target of the upcoming reach on a single-trial level, though individual traces are noisy. Specifically, for example, at the time of target onset (squares), the inferred input dimension 1 diverges for up vs. down reaches, but not for different perturbation types (as the information about perturbation type is not yet known at that phase of the task). Further, the inputs show information about the perturbation timing and identity on a single-trial level, though again, individual traces are noisy. Specifically, around the time of the perturbation (arrow), the traces diverge for left-perturbed vs. right-perturbed vs. unperturbed trials (for example, seen in dimension 2). Though these individual traces are noisy, Fig. 5e in the main text shows that these inputs can largely be separated on a single-trial basis using a nonlinear dimensionality reduction algorithm, t-SNE.
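The acausal Gaussian smoothing applied to the single-trial traces (60 ms s.d.) can be sketched as a zero-phase convolution; the 10-ms bin width here is an assumption for illustration.

```python
import numpy as np

def gaussian_smooth(trace, sd_ms=60.0, bin_ms=10.0):
    """Acausal (zero-phase) Gaussian smoothing of a 1-D trace sampled
    at bin_ms resolution, with a kernel truncated at 4 s.d."""
    sd_bins = sd_ms / bin_ms
    half = int(4 * sd_bins)
    support = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (support / sd_bins) ** 2)
    kernel /= kernel.sum()
    # mode="same" keeps the trace length; the filter is symmetric in
    # time, hence acausal (it uses samples before and after each bin).
    return np.convolve(trace, kernel, mode="same")
```

Because the kernel is symmetric, smoothing introduces no time lag, unlike a causal filter.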

### Supplementary Figure 11 The LFADS generator.

The generative LFADS model is a recurrent network with a feed-forward read-out. The generator takes a sampled initial condition, $${\hat{\mathbf g}}_0$$, and a sampled inferred input, $${\hat{\mathbf u}}_t$$, at each time step, and iterates forward. At each time step the temporal factors, f_t, and the rates, r_t, are generated in a feed-forward manner from g_t. Spikes are generated from a Poisson process, $${\hat{\mathbf x}}_t \sim {\rm{Poisson}}({\mathbf{x}}_t|{\mathbf{r}}_t)$$. The initial condition and the input at time step 1 are sampled from diagonal Gaussian distributions with zero mean and fixed, chosen variance. Otherwise, the inputs are sampled from a Gaussian auto-regressive prior.
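The generative data flow in this figure can be sketched in numpy: a GRU-style generator state g_t driven by ĝ_0 and û_t, a linear map to factors f_t, an exponential readout to rates r_t, and Poisson spiking. All weights below are random, untrained stand-ins, so this illustrates only the wiring, not a trained LFADS model.

```python
import numpy as np

class GeneratorSketch:
    """Simplified sketch of the LFADS generative process (untrained)."""

    def __init__(self, n_gen=64, n_factors=20, n_neurons=50, n_inputs=1, seed=0):
        rng = np.random.default_rng(seed)
        s = 1 / np.sqrt(n_gen)
        # GRU-style recurrent weights acting on [g_t, u_t].
        self.Wz = rng.normal(0, s, (n_gen, n_gen + n_inputs))
        self.Wr = rng.normal(0, s, (n_gen, n_gen + n_inputs))
        self.Wh = rng.normal(0, s, (n_gen, n_gen + n_inputs))
        self.Wf = rng.normal(0, s, (n_factors, n_gen))        # g_t -> f_t
        self.Wout = rng.normal(0, 0.1, (n_neurons, n_factors))  # f_t -> log r_t
        self.rng = rng

    def step(self, g, u):
        """One GRU update of the generator state given inferred input u."""
        gu = np.concatenate([g, u])
        z = 1 / (1 + np.exp(-self.Wz @ gu))   # update gate
        r = 1 / (1 + np.exp(-self.Wr @ gu))   # reset gate
        h = np.tanh(self.Wh @ np.concatenate([r * g, u]))
        return (1 - z) * g + z * h

    def generate(self, g0, u_seq, dt=0.01):
        """Run forward from g0 with inputs u_1..u_T, emitting factors,
        rates, and Poisson spike counts at each time step."""
        g, factors, rates, spikes = g0, [], [], []
        for u in u_seq:
            g = self.step(g, u)
            f = self.Wf @ g                   # low-D temporal factors
            r = np.exp(self.Wout @ f)         # nonnegative firing rates
            factors.append(f)
            rates.append(r)
            spikes.append(self.rng.poisson(r * dt))
        return np.array(factors), np.array(rates), np.array(spikes)
```

Per-trial variation enters only through g0 and the input sequence, mirroring how LFADS explains single-trial data through initial conditions and inferred inputs.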

### Supplementary Figure 12 The full LFADS model for inference.

The generator/decoder portion is highlighted with a gray background and colored red; the encoder portion is colored blue, and the controller purple. To infer the latent dynamics from the recorded neural spike trains x1:T and conditioning data a1:T, initial conditions for the controller and generator networks are encoded from the inputs. In the case of the generator, the initial condition $${\hat{\mathbf g}}_0$$ is drawn from an approximate posterior $$Q^{g_0}({\mathbf{g}}_0|{\mathbf{x}}_{1:T},{\mathbf{a}}_{1:T})$$ that receives an encoding of the input, Egen (in this figure, for compactness, we use x and a to denote x1:T and a1:T). The low-dimensional factors at t = 0, f0, are computed from $${\hat{\mathbf g}}_0$$. The controller then propagates one step forward in time, receiving the factors f0 as well as bidirectionally encoded inputs $${\mathbf{E}}_1^{con}$$ computed from x1:T and a1:T. The controller produces, through an approximate posterior Qu(u1|g0,x1:T,a1:T), a sampled inferred input $${\hat{\mathbf u}}_1$$ that is fed into the generator network. The generator network then produces {g1,f1,r1}, with f1 the factors and r1 the Poisson rates at t = 1. The process continues iteratively, so that at time step t the generator network receives gt−1 and $${\hat{\mathbf u}}_t$$ sampled from Qu(ut|u1:t−1,g0,x1:T,a1:T). The job of the controller is to produce a nonzero inferred input only when the generator network is incapable of accounting for the data autonomously. Although the controller is technically part of the encoder, it is run in a forward manner along with the decoder.
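The inference-time control flow (encode an initial condition from the whole trial, then run the controller and generator forward together, with the controller seeing the previous factors and the current data) can be caricatured with linear/tanh stand-ins. This is only a sketch of the loop structure: the real model uses bidirectional GRU encoders and samples from the approximate posteriors, whereas here every weight is random and only means are propagated.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_neurons, n_gen, n_fac, n_u = 50, 30, 16, 8, 1

# Random stand-in weights (all of these are learned in LFADS).
W_enc = rng.normal(0, 0.1, (n_gen, n_neurons))        # trial -> g0
W_con = rng.normal(0, 0.1, (n_u, n_fac + n_neurons))  # (f, data_t) -> u_t
W_g = rng.normal(0, 1 / np.sqrt(n_gen), (n_gen, n_gen + n_u))
W_f = rng.normal(0, 1 / np.sqrt(n_gen), (n_fac, n_gen))

x = rng.poisson(1.0, size=(T, n_neurons)).astype(float)  # toy spike counts

# Encode an initial condition from the whole trial, then iterate:
# controller emits u_t, generator updates g_t, factors f_t are read out.
g = np.tanh(W_enc @ x.mean(axis=0))
f = W_f @ g
factors = []
for t in range(T):
    u = W_con @ np.concatenate([f, x[t]])   # controller output u_t
    g = np.tanh(W_g @ np.concatenate([g, u]))
    f = W_f @ g
    factors.append(f)
factors = np.array(factors)
```

Note how the controller is run forward in lock-step with the generator even though it conditions on the data, matching the figure’s description.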

## Supplementary information

### Supplementary Text and Figures

Supplementary Figures 1–12, Supplementary Tables 1 and 2, and Supplementary Notes 1 and 2

### Supplementary Video 1

Generator initial states inferred by LFADS are organized with respect to kinematics of the upcoming reach. The video depicts the initial condition vectors for each individual trial of the ‘Maze’ reaching task for monkey J, mapped onto a low-dimensional (3D) space via t-SNE (as in Fig. 2c). Each point represents the initial condition vector for an individual trial (2,296 trials are shown). Colors denote the angle of the endpoint of the upcoming reach (colors as in Fig. 2a), and marker types denote the curvature of the reach (circles, squares, and triangles for straight, counter-clockwise curved, and clockwise curved reaches, respectively). As shown, the initial conditions exhibit similarity for trials with similar kinematic trajectories (both for trials whose reach endpoints have similar angles and for trials with similar reach curvature). Because structure in the initial conditions implies structure at the level of the generator’s dynamics, this analysis implies that LFADS produces dynamic trajectories that show similarity based on the kinematics of the reach type for a given trial, despite LFADS not having any information about reaching conditions.

### Supplementary Video 2

LFADS reveals consistent rotational dynamics on individual trials. The video contains two sequential movies showing the trajectories in neural population state space during individual reach trials for monkey J (Fig. 3). The first movie illustrates the single-trial trajectories uncovered by smoothing the data with a Gaussian kernel. The second movie illustrates single-trial trajectories uncovered by LFADS. 2,296 trials are shown, representing the 108 conditions of the ‘Maze’ task.

### Supplementary Video 3

Multisession LFADS finds consistent representations for individual trials across sessions. The video contains six sequential movies showing the trajectories in state space during individual reach trials for monkey P (Fig. 4). The first video shows single-trial GPFA factor trajectories for all trials estimated for a single session. The second and third videos show single-trial LFADS factor trajectories estimated from all trials using a single-session model and from the stitched model, respectively. The fourth, fifth, and sixth movies repeat this sequence but show single-trial trajectories for 42/44 sessions (2 were omitted for ease of presentation). Colors represent eight reach directions. The multisession movies include approximately 14,500 trials across 38 separate electrode penetration sites, spanning 162 d from the first to the last session. Each trajectory begins at the go cue and proceeds for 510 ms into movement, which occurs at varying times due to reaction time variability. For GPFA and single-session LFADS factors, the trajectories from individual sessions were concatenated and the projection yielding the CIS and first jPCA plane was estimated (Methods). This provided a set of common trajectories against which each individual session’s data were regressed. These regression coefficients provided projections of the individual sessions’ trajectories that were maximally similar to the common trajectories. In contrast, for stitched LFADS factors, we simply estimated the projection yielding the CIS and first jPCA plane from all of the sessions together, as the factors are already in shared space.
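The regression step described above, in which each session's trajectories are mapped onto a set of common trajectories, amounts to an ordinary least-squares fit. The sketch below illustrates the idea under stated assumptions: synthetic data stand in for real session factors, the dimensions are hypothetical, and the common trajectories are simulated rather than derived from an actual CIS/jPCA projection.

```python
import numpy as np

rng = np.random.default_rng(1)

T, F = 510, 20  # time points and factor dimensions (hypothetical)
common = rng.normal(size=(T, 2))  # common 2D (e.g., jPCA-plane) trajectories

# Simulate one session's factor trajectories as a noisy linear image of the
# common trajectories (a stand-in for real single-session factors).
true_map = rng.normal(size=(F, 2))
session = common @ np.linalg.pinv(true_map) + 0.01 * rng.normal(size=(T, F))

# Regress the session's factors onto the common trajectories: find B
# minimizing ||session @ B - common||^2 (ordinary least squares).
B, *_ = np.linalg.lstsq(session, common, rcond=None)
aligned = session @ B  # session trajectories projected into the common space
```

For stitched LFADS factors this per-session regression is unnecessary, since all sessions already share one factor space.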

### Supplementary Data 1

Comparing rates estimated by three different techniques for all 202 neurons recorded during the ‘Maze’ task.

### Supplementary Data 2

Temporal correlations for condition-averaged responses for the ‘Maze’ task.

### Supplementary Data 3

Neural correlations for condition-averaged responses for the ‘Maze’ task.

### Supplementary Software 1

Reference implementation for Latent Factor Analysis via Dynamical Systems (LFADS), written in Python and TensorFlow.

### Supplementary Software 2

Matlab interface for Latent Factor Analysis via Dynamical Systems (LFADS).

Pandarinath, C., O’Shea, D.J., Collins, J. et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nat Methods 15, 805–815 (2018). https://doi.org/10.1038/s41592-018-0109-9

## This article is cited by

• ### Multiregion neuronal activity: the forest and the trees

• Isaac V. Kauvar
• Karl Deisseroth

Nature Reviews Neuroscience (2022)

• ### A guide to machine learning for biologists

• Joe G. Greener
• Shaun M. Kandathil
• David T. Jones

Nature Reviews Molecular Cell Biology (2022)

• ### Imaging whole-brain activity to understand behaviour

• Albert Lin
• Daniel Witvliet
• Vivek Venkatachalam

Nature Reviews Physics (2022)

• ### Priority coding in the visual system

• Nicole C. Rust
• Marlene R. Cohen

Nature Reviews Neuroscience (2022)

• ### Real-time brain-machine interface in non-human primates achieves high-velocity prosthetic finger movements using a shallow feedforward neural network decoder

• Matthew S. Willsey
• Samuel R. Nason-Tomaszewski
• Cynthia A. Chestek

Nature Communications (2022)