Abstract
Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems, a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, latent factor analysis via dynamical systems accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from nonoverlapping recording sessions spanning months to improve inference of underlying dynamics.
Data availability
Data will be made available upon reasonable request from the authors, unless prohibited owing to research participant privacy concerns.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Afshar, A. et al. Single-trial neural correlates of arm movement preparation. Neuron 71, 555–564 (2011).
2. Carnevale, F., de Lafuente, V., Romo, R., Barak, O. & Parga, N. Dynamic control of response criterion in premotor cortex during perceptual detection under temporal uncertainty. Neuron 86, 1067–1077 (2015).
3. Churchland, M. M. et al. Neural population dynamics during reaching. Nature 487, 51–56 (2012).
4. Harvey, C. D., Coen, P. & Tank, D. W. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).
5. Kaufman, M. T., Churchland, M. M., Ryu, S. I. & Shenoy, K. V. Cortical activity in the null space: permitting preparation without movement. Nat. Neurosci. 17, 440–448 (2014).
6. Kobak, D. et al. Demixed principal component analysis of neural population data. eLife 5, e10989 (2016).
7. Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).
8. Pandarinath, C. et al. Neural population dynamics in human motor cortex during movements in people with ALS. eLife 4, e07436 (2015).
9. Sadtler, P. T. et al. Neural constraints on learning. Nature 512, 423–426 (2014).
10. Shenoy, K. V., Sahani, M. & Churchland, M. M. Cortical control of arm movements: a dynamical systems perspective. Annu. Rev. Neurosci. 36, 337–359 (2013).
11. Ahrens, M. B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).
12. Yu, B. M. et al. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 102, 614–635 (2009).
13. Zhao, Y. & Park, I. M. Variational latent Gaussian process for recovering single-trial dynamics from population spike trains. Neural Comput. 29, 1293–1316 (2017).
14. Aghagolzadeh, M. & Truccolo, W. Latent state-space models for neural decoding. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2014, 3033–3036 (2014).
15. Gao, Y., Archer, E. W., Paninski, L. & Cunningham, J. P. Linear dynamical neural population models through nonlinear embeddings. In Proc. 30th International Conference on Neural Information Processing Systems (eds Lee, D. D. et al.) 163–171 (Curran Associates, Red Hook, NY, 2016).
16. Kao, J. C. et al. Single-trial dynamics of motor cortex and their applications to brain–machine interfaces. Nat. Commun. 6, 7759 (2015).
17. Macke, J. H. et al. Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems 24 (eds Shawe-Taylor, J. et al.) 1350–1358 (Curran Associates, Red Hook, NY, 2011).
18. Linderman, S. et al. Bayesian learning and inference in recurrent switching linear dynamical systems. In Proc. 20th International Conference on Artificial Intelligence and Statistics Vol. 54 (eds Singh, A. & Zhu, J.) 914–922 (PMLR/Microtome Publishing, Brookline, MA, 2017).
19. Petreska, B. et al. Dynamical segmentation of single trials from population neural data. In Advances in Neural Information Processing Systems 24 (eds Shawe-Taylor, J. et al.) 756–764 (Curran Associates, Red Hook, NY, 2011).
20. Kato, S. et al. Global brain dynamics embed the motor command sequence of Caenorhabditis elegans. Cell 163, 656–669 (2015).
21. Kaufman, M. T. et al. The largest response component in motor cortex reflects movement timing but not movement type. eNeuro 3, ENEURO.0085-16.2016 (2016).
22. Gao, P. & Ganguli, S. On simplicity and complexity in the brave new world of large-scale neuroscience. Curr. Opin. Neurobiol. 32, 148–155 (2015).
23. Kingma, D. P. & Welling, M. Auto-encoding variational Bayes. arXiv Preprint at https://arxiv.org/abs/1312.6114 (2013).
24. Doersch, C. Tutorial on variational autoencoders. arXiv Preprint at https://arxiv.org/abs/1606.05908 (2016).
25. Sussillo, D., Jozefowicz, R., Abbott, L. F. & Pandarinath, C. LFADS—latent factor analysis via dynamical systems. arXiv Preprint at https://arxiv.org/abs/1608.06315 (2016).
26. Salinas, E. & Abbott, L. F. Vector reconstruction from firing rates. J. Comput. Neurosci. 1, 89–107 (1994).
27. Turaga, S. et al. Inferring neural population dynamics from multiple partial recordings of the same neural circuit. In Advances in Neural Information Processing Systems 26 (eds Burges, C. J. C. et al.) 539–547 (Curran Associates, Red Hook, NY, 2013).
28. Nonnenmacher, M., Turaga, S. C. & Macke, J. H. Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations. In Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 5702–5712 (Curran Associates, Red Hook, NY, 2017).
29. Donoghue, J. P., Sanes, J. N., Hatsopoulos, N. G. & Gaal, G. Neural discharge and local field potential oscillations in primate motor cortex during voluntary movements. J. Neurophysiol. 79, 159–173 (1998).
30. Murthy, V. N. & Fetz, E. E. Synchronization of neurons during local field potential oscillations in sensorimotor cortex of awake monkeys. J. Neurophysiol. 76, 3968–3982 (1996).
31. Fries, P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn. Sci. 9, 474–480 (2005).
32. Yuste, R. From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487 (2015).
33. Gilja, V. et al. Clinical translation of a high-performance neural prosthesis. Nat. Med. 21, 1142 (2015).
34. Pandarinath, C. et al. High performance communication by people with paralysis using an intracortical brain–computer interface. eLife 6, e18554 (2017).
35. Sussillo, D. et al. A recurrent neural network for closed-loop intracortical brain–machine interface decoders. J. Neural Eng. 9, 26027 (2012).
36. Sussillo, D., Stavisky, S. D., Kao, J. C., Ryu, S. I. & Shenoy, K. V. Making brain–machine interfaces robust to future neural variability. Nat. Commun. 7, 13749 (2016).
37. Ezzyat, Y. et al. Closed-loop stimulation of temporal cortex rescues functional networks and improves memory. Nat. Commun. 9, 365 (2018).
38. Klinger, N. V. & Mittal, S. Clinical efficacy of deep brain stimulation for the treatment of medically refractory epilepsy. Clin. Neurol. Neurosurg. 140, 11–25 (2016).
39. Little, S. et al. Adaptive deep brain stimulation in advanced Parkinson disease. Ann. Neurol. 74, 449–457 (2013).
40. Rosin, B. et al. Closed-loop deep brain stimulation is superior in ameliorating parkinsonism. Neuron 72, 370–384 (2011).
41. Williamson, R. S., Sahani, M. & Pillow, J. W. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction. PLoS Comput. Biol. 11, e1004141 (2015).
42. Rezende, D. J., Mohamed, S. & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Proc. 31st International Conference on Machine Learning (eds Xing, E. P. & Jebara, T.) 1278–1286 (JMLR/Microtome Publishing, Brookline, MA, 2014).
43. Gregor, K., Danihelka, I., Graves, A., Rezende, D. J. & Wierstra, D. DRAW: a recurrent neural network for image generation. arXiv Preprint at https://arxiv.org/abs/1502.04623 (2015).
44. Krishnan, R. G., Shalit, U. & Sontag, D. Deep Kalman filters. arXiv Preprint at https://arxiv.org/abs/1511.05121 (2015).
45. Chung, J., Gulcehre, C., Cho, K. & Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv Preprint at https://arxiv.org/abs/1412.3555 (2014).
46. Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv Preprint at https://arxiv.org/abs/1207.0580 (2012).
47. Zaremba, W., Sutskever, I. & Vinyals, O. Recurrent neural network regularization. arXiv Preprint at https://arxiv.org/abs/1409.2329 (2014).
48. Bowman, S. R. et al. Generating sentences from a continuous space. In Proc. 20th SIGNLL Conference on Computational Natural Language Learning (eds Riezler, S. & Goldberg, Y.) 10–21 (Association for Computational Linguistics, Stroudsburg, PA, 2016).
49. Hochberg, L. R. BrainGate2: feasibility study of an intracortical neural interface system for persons with tetraplegia (BrainGate2). ClinicalTrials.gov https://www.clinicaltrials.gov/ct2/show/NCT00912041 (2018).
50. Gilja, V. et al. A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15, 1752–1757 (2012).
51. van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
52. Willett, F. R. et al. Feedback control policies employed by people using intracortical brain–computer interfaces. J. Neural Eng. 14, 016001 (2017).
53. Fan, J. M. et al. Intention estimation in brain–machine interfaces. J. Neural Eng. 11, 16004 (2014).
54. Paninski, L. Maximum likelihood estimation of cascade point-process neural encoding models. Network 15, 243–262 (2004).
Acknowledgements
We thank J.P. Cunningham and J. Sohl-Dickstein for extensive conversation. We thank M.M. Churchland for contributions to data collection for monkey J, C. Blabe and P. Nuyujukian for assistance with research sessions with participant T5, E. Eskandar for array implantation with participant T7, and B. Sorice and A. Sarma for assistance with research sessions with participant T7. L.F.A.'s research was supported by US National Institutes of Health grant MH093338, the Gatsby Charitable Foundation through the Gatsby Initiative in Brain Circuitry at Columbia University, the Simons Foundation, the Swartz Foundation, the Harold and Leila Y. Mathers Foundation, and the Kavli Institute for Brain Science at Columbia University. C.P. was supported by a postdoctoral fellowship from the Craig H. Neilsen Foundation for spinal cord injury research and the Stanford Dean's Fellowship. S.D.S. was supported by the ALS Association's Milton Safenowitz Postdoctoral Fellowship. K.V.S.'s research was supported by the following awards: an NIH-NINDS award (TR01NS076460), an NIH-NIMH award (TR01MH09964703), an NIH Director's Pioneer award (8DP1HD075623), a DARPA-DSO 'REPAIR' award (N66001-10-C-2010), a DARPA-BTO 'NeuroFAST' award (W911NF-14-2-0013), a Simons Foundation Collaboration on the Global Brain award (325380), and the Howard Hughes Medical Institute. J.M.H.'s research was supported by an NIH-NIDCD award (R01DC014034). K.V.S. and J.M.H.'s research was supported by Stanford BioX-NeuroVentures, the Stanford Institute for NeuroInnovation and Translational Neuroscience, the Garlick Foundation, and the Reeve Foundation. L.R.H.'s research was supported by an NIH-NIDCD award (R01DC009899), the Rehabilitation Research and Development Service, Department of Veterans Affairs (B6453R), the MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke, and the Executive Committee on Research, Massachusetts General Hospital.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, the Department of Veterans Affairs, or the United States Government. BrainGate CAUTION: Investigational Device. Limited by Federal Law to Investigational Use.
Author information
Contributions
C.P., D.J.O., and D.S. designed the study, performed analyses, and wrote the manuscript with input from all other authors. D.S. and L.F.A. developed the algorithmic approach. C.P., J.C., and R.J. contributed to algorithmic development and analysis of synthetic data. D.J.O., S.D.S., J.C.K., E.M.T., M.T.K., S.I.R., and K.V.S. performed research with monkeys. C.P., L.R.H., K.V.S., and J.M.H. performed research with human research participants. All authors contributed to revision of the manuscript.
Competing interests
J.M.H. is on the Medical Advisory Boards of Enspire DBS and Circuit Therapeutics, and the Surgical Advisory Board for Neuropace, Inc. K.V.S. is a consultant to Neuralink Corp. and on the Scientific Advisory Boards of CTRL-Labs, Inc. and Heal, Inc. These entities did not support this work. D.S. and J.C. are employed by Google Inc., and R.J. was employed by Google Inc. at the time the research was conducted. This funder provided support in the form of salaries for authors, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Corresponding authors
Correspondence to Chethan Pandarinath or David Sussillo.
Integrated supplementary information
Supplementary Figure 1 Conceptual dynamical system: 1D pendulum.
a. A 1-dimensional pendulum released from position p1 or p2 has different positions and velocities over time. b. The state of the pendulum is captured by a 2D vector that specifies its position and velocity, shown here evolving as a function of time (blue and black traces correspond to different initial conditions p1 and p2). c. The evolution of the state follows dynamical rules, that is, the pendulum's equations of motion. In this autonomous dynamical system (that is, there are no perturbations), knowing the pendulum's initial state (filled circles) and the dynamical rules that govern the state evolution (gray vector field) gives full knowledge of the pendulum's state as a function of time (black and blue traces). d–f. Perturbations to the pendulum, an example of input-driven dynamics. d. The pendulum's motion might be perturbed by an external force, for example a rightward bump. e,f. With a perturbation, the evolution of the system's state during the perturbation no longer follows its autonomous dynamical rules. This is shown as dashed red lines in the position-vs.-time and velocity-vs.-time plots, as well as in the state-space diagram. A perturbation can be modeled by adding an input term u(t) to the equations of motion. g. LFADS modeling of the perturbed pendulum. Traces of the pendulum motion are used to train LFADS. During training, the LFADS generator RNN learns to approximate the pendulum dynamics, ṡ = f(s, u), using its own internal state and dynamics. LFADS also learns a per-trial initial generator state s0, which allows it to model trials that start with different initial pendulum states, such as p1 or p2. Further, LFADS learns a set of time-varying inputs per trial, which allows it to model perturbations u(t) to the pendulum system. These three pieces of information are enough to reconstruct each trial.
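The autonomous-versus-perturbed distinction described in this caption can be made concrete in a few lines of code. The sketch below is illustrative only: it uses Euler integration, invented parameter values, and a damped pendulum (not any system from the paper) to roll out ṡ = f(s, u) from two initial states, once autonomously and once with a single "bump" input.

```python
import numpy as np

def pendulum_step(theta, omega, u=0.0, dt=0.01, g_over_l=9.81, damping=0.05):
    """One Euler step of the damped pendulum, s_dot = f(s, u).
    theta: angular position; omega: angular velocity;
    u: external torque modeling a perturbation (zero in the autonomous case)."""
    d_omega = -g_over_l * np.sin(theta) - damping * omega + u
    return theta + dt * omega, omega + dt * d_omega

def simulate(theta0, n_steps=500, pulse_time=None, pulse_mag=5.0):
    """Roll out a trial from initial state (theta0, 0); optionally bump it once."""
    theta, omega = theta0, 0.0
    traj = np.empty((n_steps, 2))
    for t in range(n_steps):
        u = pulse_mag if t == pulse_time else 0.0
        theta, omega = pendulum_step(theta, omega, u)
        traj[t] = theta, omega
    return traj

# Two initial conditions (p1, p2) give different state trajectories.
traj_p1 = simulate(theta0=0.5)
traj_p2 = simulate(theta0=1.0)
# A perturbed trial matches the autonomous one until the bump, then diverges.
traj_perturbed = simulate(theta0=0.5, pulse_time=100)
```

Knowing (theta0, 0) and f is enough to reproduce the unperturbed traces exactly, which is the point the caption makes about autonomous dynamics.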
Supplementary Figure 2 LFADS applied to Lorenz attractor.
We compared the performance of LFADS to three existing methods that estimate latent state from neural data: Variational Latent Gaussian Process (vLGP), Gaussian Process Factor Analysis (GPFA), and Poisson feedforward neural network Linear Dynamical System (PfLDS). To test LFADS and to compare its performance with other approaches, we generated synthetic stochastic spike trains from deterministic nonlinear systems. The first is the standard Lorenz system (see Online Methods for equations and details). a. An example trial illustrating the evolution of the Lorenz system in its 3-dimensional state space, and b. its dynamic variables as a function of time. c. Firing rates for the 30 simulated neurons are generated by a linear readout of the latent variables followed by an exponential nonlinearity, with neurons sorted according to their weighting for the first Lorenz dimension. d. Spike times for the neurons are generated from the rates of the simulated neurons. e–h. Sample performance for each method applied to spike trains based on the Lorenz attractor. Each panel shows actual (black) and inferred (red) values of the three latent variables for a single example trial for the four methods: e. vLGP, f. GPFA, g. PfLDS, h. LFADS. For LFADS, posterior means were averaged over 128 samples of g0 conditioned on the particular data sequence being decoded. We quantified performance using R^{2}, that is, the fraction of the variance of the actual latent variables captured by the estimated latent values. For the latent dimensions {y1, y2, y3}, the resulting R^{2} values were {0.761, 0.688, 0.218}, {0.713, 0.725, 0.325}, {0.784, 0.732, 0.368}, and {0.850, 0.921, 0.872} for vLGP, GPFA, PfLDS, and LFADS, respectively.
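The generative recipe in panels c–d (linear readout of Lorenz latents, exponential nonlinearity, then Poisson spiking) can be sketched as follows. The Euler integrator, standard Lorenz parameters, and random readout weights here are illustrative assumptions; see Online Methods for the actual simulation details.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_trajectory(n_steps=1000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Euler-integrate the standard Lorenz system; returns (n_steps, 3) latents."""
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for t in range(n_steps):
        x, y, z = s
        ds = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        s = s + dt * ds
        out[t] = s
    return out

def latents_to_spikes(latents, n_neurons=30, dt=0.01, base_rate=5.0):
    """Linear readout of latents -> exponential nonlinearity -> Poisson counts."""
    z = (latents - latents.mean(0)) / latents.std(0)   # normalize each latent dim
    W = 0.5 * rng.standard_normal((latents.shape[1], n_neurons))  # random readout
    rates = base_rate * np.exp(z @ W)                  # spikes/s, always positive
    spikes = rng.poisson(rates * dt)                   # spike counts per time bin
    return rates, spikes

latents = lorenz_trajectory()
rates, spikes = latents_to_spikes(latents)
```

The exponential keeps rates positive, and sorting neurons by their first column of W would reproduce the ordering used in panel c.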
Supplementary Figure 3 LFADS applied to autonomous chaotic data RNN.
a–c. We generated synthetic data with high-dimensional, chaotic dynamics using an RNN. a. Firing rates for one example trial, simulated by the chaotic "data RNN" (colors show rates fluctuating between −1 and 1). b. We then created synthetic spike trains, emitted from a Poisson process whose underlying rates were the normalized continuous rates of the data RNN. c. We used principal components analysis to assess the dimensionality of the data. As expected, the state of the data RNN had lower dimension than its number of neurons, and 20 principal components were sufficient to capture >95% of the variance of the system. We therefore restricted the latent space to 20 dimensions for each of the models tested and, in the case of LFADS, set the factors dimensionality to 20 as well (F = 20). d–g. Sample performance for each method on the RNN task. We tested the performance of the methods at extracting the underlying firing rates from the spike trains of the RNN dataset. Shown are single-trial examples for d. GPFA, e. vLGP, f. PfLDS, and g. LFADS. As can be seen by eye, the LFADS results are closer to the actual underlying rates than those of the other models (black, firing rates of the chaotic data RNN; red, inferred rates). h–j. Summary R^{2} values between actual and inferred rates. Comparison, using held-out data, of the R^{2} values for h. GPFA vs. LFADS, i. vLGP vs. LFADS, and j. PfLDS vs. LFADS for all units (blue 'x'). In all comparisons, LFADS yields a better fit to the data for every single unit.
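The dimensionality check in panel c (how many principal components capture >95% of the variance) amounts to the following computation, shown here on synthetic data with a known 5-D latent space; the dimensions, sample counts, and noise level are illustrative assumptions.

```python
import numpy as np

def min_dims_for_variance(data, threshold=0.95):
    """Smallest number of principal components whose cumulative explained
    variance reaches `threshold` (data: samples x units)."""
    centered = data - data.mean(axis=0)
    # Squared singular values of the centered data are the component variances.
    svals = np.linalg.svd(centered, compute_uv=False)
    var = svals ** 2
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, threshold) + 1)

# Synthetic check: 50 "units" driven by a 5-D latent space plus tiny noise,
# so the estimate should land at (or very near) 5 dimensions.
rng = np.random.default_rng(1)
latent = rng.standard_normal((2000, 5))
readout = rng.standard_normal((5, 50))
data = latent @ readout + 0.01 * rng.standard_normal((2000, 50))
n_dims = min_dims_for_variance(data)
```

Applied to the data RNN's rates, the same computation gives the "20 components for >95% variance" figure quoted above.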
Supplementary Figure 4 Effect of varying hyperparameters on kinematic decoding performance in the maze task.
For GPFA, we tested the effect of varying bin size and latent dimensionality on the ability to decode arm velocities. For LFADS, we fixed the bin size at 5 ms and tested the effect of changing the latent dimensionality (number of factors).
Supplementary Figure 5 Singletrial factor trajectories from the multisession stitched LFADS model for a representative session.
LFADS factors are projected into a space consisting of the condition-independent signal (CIS) and the first rotational plane identified using jPCA; this is the same projection used in Fig. 4d in the main text. Black dots depict the time of go cue presentation, and each trajectory proceeds for 510 ms. Colors indicate reach direction as in Fig. 4. Thin traces are trajectories from individual trials; thick traces of the same color are the condition-averaged trajectory for each reach direction.
Supplementary Figure 6 Dynamically stitched multi-session LFADS model outperforms single-session LFADS models in predicting reaction times.
As defined by Kaufman et al. (eNeuro, 2016), the condition-independent signal (CIS) is a high-variance component of motor cortical population activity obtained via demixed principal components analysis (dPCA). Kaufman et al. also demonstrated that the threshold-crossing time of the CIS on single trials is an effective predictor of reach reaction time (RT). Here we identify the CIS as a linear projection of LFADS factor trajectories. We apply dPCA to the factor outputs of each single-session model and the multi-session LFADS model to identify the largest condition-independent component, and then threshold the CIS to predict RT on single trials. a. Plot of condition-independent signals (CIS) for an example dataset. Each trace represents the CIS time course on a single trial and is colored by that trial's actual RT. b. Plot of correlations between CIS-predicted RT and actual RT on trials from each dataset for stitched multi-session LFADS vs. single-session LFADS. Each point represents an individual recording session. For the stitched model, a single CIS projection was computed and applied for all sessions, whereas individual CIS projections were obtained for each single-session model.
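The threshold-crossing readout of the CIS can be illustrated in isolation (the dPCA step that produces the CIS is omitted here). The toy ramp traces, threshold, and "reaction times" below are invented purely for illustration of the mechanics.

```python
import numpy as np

def threshold_crossing_times(cis, threshold, dt=0.001):
    """First time each single-trial CIS trace (trials x time) crosses threshold.
    Returns np.nan for trials that never cross."""
    n_trials = cis.shape[0]
    times = np.full(n_trials, np.nan)
    for i in range(n_trials):
        idx = np.flatnonzero(cis[i] >= threshold)
        if idx.size:
            times[i] = idx[0] * dt
    return times

# Toy check: CIS ramps whose slope varies across trials. Slower ramps cross
# the threshold later, so crossing time should correlate with "reaction time".
rng = np.random.default_rng(2)
slopes = rng.uniform(2.5, 6.0, size=100)
t = np.arange(500) * 0.001                      # 500 ms of 1-ms bins
cis = slopes[:, None] * t[None, :]              # trials x time ramps
rt_true = 0.15 + 0.5 / slopes                   # faster ramp -> shorter RT
rt_pred = threshold_crossing_times(cis, threshold=1.0)
corr = np.corrcoef(rt_pred, rt_true)[0, 1]
```

With real data, `rt_pred` from each model's CIS would be correlated with measured RTs, as in panel b.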
Supplementary Figure 7 Inferring inputs from a chaotic data RNN with delta pulse inputs.
We tested the ability of LFADS to infer the input to a dynamical system, specifically the chaotic data RNNs used in Supp. Fig. 3. During each trial, we perturbed the network by delivering a delta pulse at a random time t_{pulse} between 0.25 s and 0.75 s. The full trial length was 1 s. This pulse affected the underlying rates produced by the data RNN, which subsequently affected the generated spike trains that were used as input to LFADS. To test the ability of LFADS to infer the timing of the input pulses, we allowed the LFADS model to infer a time-varying, 1-dimensional input. We explored two levels of dynamical complexity in the data RNNs (see Online Methods), defined by two values, 1.5 and 2.5, of a data RNN hyperparameter, γ. a–c. γ = 1.5. This value of γ produces "gentler" chaotic activity in the data RNN than the higher value. a. Example trial illustrating results from the γ = 1.5 chaotic data RNN with an external input (shown in black at the top of each column): firing rates for the 50 simulated neurons. b. Poisson-generated spike times for the simulated neurons. c. Example trial showing (top) the actual (black) and inferred (cyan) input, and (bottom) actual firing rates of a subset of neurons (black) with the corresponding inferred firing rates (red). d–f. Same as a–c, but for γ = 2.5, which produces significantly more chaotic dynamics than γ = 1.5. f. For this more difficult case, LFADS inferred an input at the correct time (blue arrow), but also used its 1D inferred input to shape the dynamics at times when there was no actual input (green arrow).
Supplementary Figure 8 Summary of results of chaotic data RNNs receiving input pulses.
We extracted averaged inferred inputs, u_{t}, from the LFADS models used in Supp. Fig. 7. a,b. To see how the timing of the inferred input was related to the timing of the actual input pulse, we determined the time at which u_{t} reached its maximum value (vertical axis) and plotted this against the actual time of the delta pulse (horizontal axis) for all trials. a. Results using γ = 1.5 and b. γ = 2.5. As shown, for the majority of trials, despite the complex internal dynamics, LFADS was able to infer the correct timing of a strong input. However, LFADS did a better job of inferring the inputs in the case of simpler dynamics, for two reasons: in the case of γ = 2.5, 1) the complexity of the dynamics reduces the effective magnitude of the perturbation caused by the input, and 2) LFADS used the inferred input more actively to account for non-input-driven dynamics. We include this example of a highly chaotic data RNN to highlight the subtlety of interpreting an inferred input. c,d. One possibility in using LFADS with inferred inputs (that is, dimensionality of u_{t} ≥ 1) is that the data to be modeled are actually generated by an autonomous system, yet one, not knowing this fact, allows for an inferred input in LFADS. To study this case we utilized the four chaotic data RNNs described above, that is, γ = 1.5 and γ = 2.5, with and without delta pulse inputs. We trained an LFADS model for each of the four cases, with an inferred input of dimensionality 1, despite the fact that two of the four data RNNs generated their data autonomously. After training, we examined the strength of the average inferred input, u_{t}, for each LFADS model, where strength is taken as the root-mean-square of the inferred input averaged over an appropriate time window, sqrt(⟨u_{t}^{2}⟩_{t_{1}:t_{2}}). The results are shown in panel c for γ = 1.5 and panel d for γ = 2.5. The solid lines show the strength of u_{t} at each time point when the data RNN received no input pulses, averaged across all examples. The '◦' and 'x' markers show the strength of u_{t} at time points when the data RNN received delta pulses, averaged in a time window around t and over all examples. Intuitively, a '◦' is the strength of u_{t} around a delta pulse at time t, and an 'x' is the strength of u_{t} if there was no delta pulse around time t.
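The strength measure used in panels c,d is a windowed root-mean-square. A minimal sketch, with invented toy inputs standing in for LFADS's inferred u_t:

```python
import numpy as np

def input_strength(u, t1, t2):
    """Root-mean-square strength of an inferred input u (trials x time),
    averaged over the window [t1, t2): sqrt(<u^2>_{t1:t2}) per trial."""
    return np.sqrt(np.mean(u[:, t1:t2] ** 2, axis=1))

# Toy check: inputs containing a pulse inside the window register as
# "stronger" than inputs that are pure low-amplitude noise.
rng = np.random.default_rng(3)
quiet = 0.05 * rng.standard_normal((20, 100))
pulsed = quiet.copy()
pulsed[:, 40:45] += 1.0                      # delta-like pulse inside the window
s_quiet = input_strength(quiet, 30, 60)
s_pulsed = input_strength(pulsed, 30, 60)
```

In the figure, the '◦' markers correspond to windows containing a pulse (like `s_pulsed`) and the solid lines and 'x' markers to pulse-free periods (like `s_quiet`).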
Supplementary Figure 9 Inferring input from a data RNN trained to integrate white noise to a bound.
LFADS is able to model simulated neurons that are integrating a noisy input, and also to infer the input itself (the noise signal). a. Overview of the integration-to-bound task. On each trial, the data RNN receives noise drawn from a Gaussian distribution with mean 0 and variance 0.0625. We trained an RNN to integrate this stochastic, 1-dimensional input to either a high (+1) or low (−1) bound. After the data RNN learned the task, we generated spiking data from 50 neurons using a similar methodology to Supp. Fig. 3 and fit an LFADS model to these data. b–d. We fit an LFADS model to the data using 3,200 1-second training examples and evaluated its performance on 800 held-out trials. LFADS was able to accurately infer the ground-truth firing rates (LFADS in red, ground truth in black). LFADS also inferred the associated white-noise input to the data RNN (LFADS in cyan, ground truth in black; posterior means averaged over 1,024 samples). These panels show the trials with the worst, median, and best measured R^{2} values between true and inferred inputs. b. Trial with the worst R^{2} = 0.11, c. the median R^{2} = 0.38, d. and the best R^{2} = 0.64. e. Histogram showing the distribution of R^{2} values between true and inferred inputs for the 800 held-out trials.
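The integration-to-bound task in panel a can be simulated directly. The sketch below uses the stated noise distribution (mean 0, variance 0.0625, i.e. s.d. 0.25) and ±1 bounds, with a plain cumulative sum standing in for the trained data RNN; the step size and trial counts are illustrative.

```python
import numpy as np

def integrate_to_bound(n_trials=200, n_steps=1000, noise_std=0.25, dt=0.01,
                       bound=1.0, seed=4):
    """Integrate 1-D Gaussian white noise until hitting the +1 or -1 bound.
    Returns the trajectories (clamped after a hit) and the bound reached
    per trial (np.nan if neither bound was hit within the trial)."""
    rng = np.random.default_rng(seed)
    noise = noise_std * rng.standard_normal((n_trials, n_steps))
    x = np.cumsum(noise * np.sqrt(dt), axis=1)   # discretized integral of noise
    choice = np.full(n_trials, np.nan)
    for i in range(n_trials):
        hit = np.flatnonzero(np.abs(x[i]) >= bound)
        if hit.size:
            k = hit[0]
            choice[i] = np.sign(x[i, k])
            x[i, k:] = choice[i] * bound         # clamp at the bound after the hit
    return x, choice

trajs, choices = integrate_to_bound()
```

Passing these trajectories through a rate readout and Poisson sampler (as in the Supp. Fig. 3 sketch) would yield the kind of spiking data the LFADS model was fit to here.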
Supplementary Figure 10 Inferred inputs from individual trials in the cursor jump task.
This figure parallels Fig. 5d from the main text, but in this case we plot the inferred inputs for individual trials. To increase visibility, only 10 trials for each condition (reach direction and perturbation type) are shown. Individual traces were smoothed with an acausal Gaussian filter (60 ms s.d.). Despite the high variance across individual trials, several of the trends in the inferred inputs described in Fig. 5 in the main text are visible at the single-trial level. The inputs carry information about the target of the upcoming reach on a single-trial level, though individual traces are noisy. For example, at the time of target onset (squares), inferred input dimension 1 diverges for up vs. down reaches, but not for different perturbation types (as the perturbation type is not yet known at that phase of the task). Further, the inputs carry information about the perturbation timing and identity on a single-trial level, though again, individual traces are noisy. Specifically, around the time of the perturbation (arrow), the traces diverge for left-perturbed vs. right-perturbed vs. unperturbed trials (for example, seen in dimension 2). Though these individual traces are noisy, Fig. 5e in the main text shows that these inputs can largely be separated on a single-trial basis using a nonlinear dimensionality reduction algorithm, t-SNE.
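The acausal (zero-phase) Gaussian smoothing applied to these traces can be sketched as a symmetric convolution. The 60 ms s.d. matches the caption; the 1 ms bin width, kernel truncation at 4 s.d., and edge padding are illustrative choices.

```python
import numpy as np

def gaussian_smooth(trace, sigma_ms, bin_ms=1.0):
    """Acausal (zero-phase) Gaussian smoothing of a 1-D trace.
    sigma_ms: filter s.d. in milliseconds; bin_ms: sampling bin width."""
    sigma = sigma_ms / bin_ms                      # s.d. in bins
    half = int(np.ceil(4 * sigma))                 # truncate kernel at 4 s.d.
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                         # unit area -> preserves mean
    # 'same'-length convolution with edge padding to avoid boundary droop
    padded = np.pad(trace, half, mode='edge')
    return np.convolve(padded, kernel, mode='valid')

# Toy check: smoothing a noisy step preserves length and approximate mean
# while suppressing the noise.
rng = np.random.default_rng(5)
trace = np.concatenate([np.zeros(200), np.ones(200)]) + 0.2 * rng.standard_normal(400)
smooth = gaussian_smooth(trace, sigma_ms=60.0, bin_ms=1.0)
```

Because the kernel is symmetric about zero lag, the filter is acausal: each smoothed sample draws on both past and future time points, which is fine for offline analysis like this figure.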
Supplementary Figure 11 The LFADS generator.
The generative LFADS model is a recurrent network with a feedforward readout. The generator takes a sampled initial condition, \({\hat{\mathbf g}}_0\), and a sampled inferred input, \({\hat{\mathbf u}}_t\), at each time step, and iterates forward. At each time step the temporal factors, f_{t}, and the rates, r_{t}, are generated in a feedforward manner from g_{t}. Spikes are generated from a Poisson process, \({\hat{\mathbf x}}_t \sim {\rm{Poisson}}({\mathbf{r}}_t)\). The initial condition and the input at time step 1 are sampled from diagonal Gaussian distributions with zero mean and fixed, chosen variance. Otherwise, the inputs are sampled from a Gaussian autoregressive prior.
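The feedforward readout chain described here (g_t → f_t → r_t → x̂_t) can be sketched as below. This is not the paper's implementation: the trained GRU generator is replaced by a generic tanh update with random weights, and all dimensions and scalings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

D_G, D_F, D_N, DT = 64, 8, 30, 0.01     # generator, factor, neuron dims; bin (s)
W_rec = 0.1 * rng.standard_normal((D_G, D_G))          # stand-in generator dynamics
W_fac = rng.standard_normal((D_F, D_G)) / np.sqrt(D_G) # readout to factors
W_rate = rng.standard_normal((D_N, D_F)) / np.sqrt(D_F)
b_rate = np.log(10.0) * np.ones(D_N)                   # ~10 spikes/s baseline

def generator_step(g, u=None):
    """One step of a generic RNN standing in for the generator (tanh update),
    followed by the feedforward readout: factors f_t, rates r_t, spikes x_t."""
    g = np.tanh(W_rec @ g + (0 if u is None else u))
    f = W_fac @ g                      # low-dimensional temporal factors
    r = np.exp(W_rate @ f + b_rate)    # firing rates (spikes/s), always positive
    x = rng.poisson(r * DT)            # Poisson spike counts for this time bin
    return g, f, r, x

g = rng.standard_normal(D_G)           # sampled initial condition g_0
g, f, r, x = generator_step(g)
```

Iterating `generator_step`, feeding in a sampled \({\hat{\mathbf u}}_t\) at each step, reproduces the generative rollout the caption describes.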
Supplementary Figure 12 The full LFADS model for inference.
The generator/decoder portion is highlighted with a gray background and colored red, the encoder portion is colored blue, and the controller purple. To infer the latent dynamics from the recorded neural spike trains x_{1:T} and conditioning data a_{1:T}, initial conditions for the controller and generator networks are encoded from the inputs. In the case of the generator, the initial condition \({\hat{\mathbf g}}_0\) is drawn from an approximate posterior \(Q^{g_0}({\mathbf{g}}_0|{\mathbf{x}}_{1:T},{\mathbf{a}}_{1:T})\) that receives an encoding of the input, E^{gen} (in this figure, for compactness, we use x and a to denote x_{1:T} and a_{1:T}). The low-dimensional factors at t = 0, f_{0}, are computed from \({\hat{\mathbf g}}_0\). The controller then propagates one step forward in time, receiving the sampled factors f_{0} as well as bidirectionally encoded inputs \({\mathbf{E}}_1^{con}\) computed from x_{1:T} and a_{1:T}. The controller produces, through an approximate posterior Q^{u}(u_{1}|g_{0},x_{1:T},a_{1:T}), a sampled inferred input \({\hat{\mathbf u}}_1\) that is fed into the generator network. The generator network then produces {g_{1}, f_{1}, r_{1}}, with f_{1} the factors and r_{1} the Poisson rates at t = 1. The process continues iteratively: at time step t, the generator network receives g_{t−1} and \({\hat{\mathbf u}}_t\) sampled from Q^{u}(u_{t}|u_{1:t−1},g_{0},x_{1:T},a_{1:T}). The job of the controller is to produce a nonzero inferred input only when the generator network is incapable of accounting for the data autonomously. Although the controller is technically part of the encoder, it is run in a forward manner along with the decoder.
Supplementary information
Supplementary Text and Figures
Supplementary Figures 1–12, Supplementary Tables 1 and 2, and Supplementary Notes 1 and 2
Supplementary Video 1
Generator initial states inferred by LFADS are organized with respect to kinematics of the upcoming reach. The video depicts the initial condition vectors for each individual trial of the ‘Maze’ reaching task for monkey J, mapped onto a low-dimensional (3D) space via t-SNE (as in Fig. 2c). Each point represents the initial condition vector for an individual trial (2,296 trials are shown). Colors denote the angle of the endpoint of the upcoming reach (colors as in Fig. 2a), and marker types denote the curvature of the reach (circles, squares, and triangles for straight, counterclockwise-curved, and clockwise-curved reaches, respectively). As shown, the initial conditions exhibit similarity for trials with similar kinematic trajectories (both for trials whose reach endpoints have similar angles and for trials with similar reach curvature). Since structure in the initial conditions implies structure at the level of the generator’s dynamics, this analysis implies that LFADS produces dynamic trajectories that show similarity based on the kinematics of the reach type for a given trial, despite LFADS not having any information about reaching conditions.
Supplementary Video 2
LFADS reveals consistent rotational dynamics on individual trials. The video contains two sequential movies showing the trajectories in neural population state space during individual reach trials for monkey J (Fig. 3). The first movie illustrates the single-trial trajectories uncovered by smoothing the data with a Gaussian kernel. The second movie illustrates single-trial trajectories uncovered by LFADS. 2,296 trials are shown, representing the 108 conditions of the ‘Maze’ task.
Supplementary Video 3
Multi-session LFADS finds consistent representations for individual trials across sessions. The video contains six sequential movies showing the trajectories in state space during individual reach trials for monkey P (Fig. 4). The first video shows single-trial GPFA factor trajectories for all trials estimated for a single session. The second and third videos show single-trial LFADS factor trajectories estimated from all trials using a single-session model and the stitched model, respectively. The fourth, fifth, and sixth repeat this sequence but show single-trial trajectories for 42/44 sessions (2 were omitted for ease of presentation). Colors represent eight reach directions. Multi-session movies include approximately 14,500 trials and 38 separate electrode penetration sites, and span 162 d from the first to the last session. Each trajectory begins at the go cue and proceeds for 510 ms into movement, which occurs at varying times owing to reaction time variability. For GPFA and single-session LFADS factors, the trajectories from individual sessions were concatenated and the projection yielding the CIS and first jPCA plane was estimated (Methods). This provided a set of common trajectories against which each individual session’s data were regressed. These regression coefficients provided projections of the individual sessions’ trajectories that were maximally similar to the common trajectories. In contrast, for stitched LFADS factors, we simply estimated the projection yielding the CIS and first jPCA plane from all of the sessions together, as the factors are already in a shared space.
Supplementary Data 1
Comparing rates estimated by three different techniques for all 202 neurons recorded during the ‘Maze’ task.
Supplementary Data 2
Temporal correlations for condition-averaged responses for the ‘Maze’ task.
Supplementary Data 3
Neural correlations for condition-averaged responses for the ‘Maze’ task.
Supplementary Software 1
Reference implementation for Latent Factor Analysis via Dynamical Systems (LFADS), written in Python and TensorFlow.
Supplementary Software 2
Matlab interface for Latent Factor Analysis via Dynamical Systems (LFADS).