Internal models of sensorimotor integration regulate cortical dynamics

Abstract

Sensorimotor control during overt movements is characterized in terms of three building blocks: a controller, a simulator and a state estimator. We asked whether the same framework could explain the control of internal states in the absence of movements. Recently, it was shown that the brain controls the timing of future movements by adjusting an internal speed command. We trained monkeys in a novel task in which the speed command had to be dynamically controlled based on the timing of a sequence of flashes. Recordings from the frontal cortex provided evidence that the brain updates the internal speed command after each flash based on the error between the timing of the flash and the anticipated timing of the flash derived from a simulated motor plan. These findings suggest that cognitive control of internal states may be understood in terms of the same computational principles as motor control.


Fig. 1: The 1-2-3-Go task, behavior and a sequential updating model.
Fig. 2: Predictions of the open-loop and internal model hypothesis in the 1-2-3-Go task.
Fig. 3: Example response profiles of individual DMFC neurons during the 1-2-3-Go task.
Fig. 4: Temporal scaling of non-monotonic firing rates across individual DMFC neurons.
Fig. 5: A representation of the sample interval by individual DMFC neurons.
Fig. 6: Neural trajectories and a technique for analyzing their kinematics.
Fig. 7: Relative speed and distance between neural trajectories during 1-2-3-Go.
Fig. 8: Speed and distance between neural trajectories reflect animals’ internal estimates.

Data availability

The data that support the findings of this study are available at: https://drive.google.com/drive/folders/1T-U4hHW8iIEEea8ngBHryr9qfL_GNu_y?usp=sharing.

Code availability

Standalone code, including custom analyses used for generating the plots in the paper, is provided at: https://drive.google.com/drive/folders/1T-U4hHW8iIEEea8ngBHryr9qfL_GNu_y?usp=sharing.


Acknowledgements

M.J. is supported by the NIH (NINDS-NS078127), the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation (grants 325542 and 542993SPI), the McKnight Foundation, the Center for Sensorimotor Neural Engineering (grant UWSC6200 (BPO4405)) and the McGovern Institute.

Author information

S.W.E. and M.J. designed the task. S.W.E. collected behavioral and neural data from monkey B. C.-J.C. collected behavioral and neural data from monkey G. E.D.R. designed the KiNeT analysis. S.W.E. performed all analyses. S.W.E. and M.J. interpreted the results and wrote the paper.

Correspondence to Mehrdad Jazayeri.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Neuroscience thanks Matthew Kaufman, Reza Shadmehr and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1

Bayesian versus EKF model in the 1-2-3-Go task. (a) Bayesian model. The sample interval (ts) is presented in the S1-S2 and S2-S3 epochs. The model uses a Bayes-least-squares (fBLS) estimator to compute an estimate (eS3-Go) based on two measurements of ts (mS1-S2 and mS2-S3) and the prior distribution, and aims to produce a matching interval (tp). As illustrated by the three boxes under the three epochs, both measurement noise (S1-S2 and S2-S3 epochs) and production noise (S3-Go epoch) are modeled as Gaussian with standard deviations σm and σp that scale with ts with constants of proportionality wm and wp, respectively. The green, orange and red vertical lines show values of mS1-S2, mS2-S3 and tp for an example trial (see Methods). (b) A grayscale plot of eS3-Go as a function of mS1-S2 and mS2-S3 for the Bayesian model. The red lines show various combinations of mS1-S2 and mS2-S3 that lead to the same eS3-Go. (c) Extended Kalman filter (EKF) model. The initial interval estimate, eS1-S2, at S1 is set to the mean of the prior. At S2, the algorithm computes an updated estimate, eS2-S3, by adding to eS1-S2 a nonlinear function (f*) of the difference between eS1-S2 and mS1-S2. At S3, this process is repeated to generate an updated estimate, eS3-Go. The functional form of f* was matched to the Bayesian estimate after one measurement. The parameters k1 and k2, which specify the gain of the updates, were set to 1 and 0.5, respectively (see Methods). (d) A grayscale plot of eS3-Go as a function of mS1-S2 and mS2-S3 for the EKF, in the same format as in b. (e) Ratio of the log likelihood of the data given the EKF model to the log likelihood of the data given the Bayesian model; n = 861 and 415 total trials for monkeys B and G, respectively.
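The EKF-style update in panel c can be sketched in a few lines. This is a minimal illustration, not the fitted model from the paper: the tanh nonlinearity stands in for the matched Bayesian update f*, and the prior mean and measurement values are hypothetical.

```python
import numpy as np

def ekf_update(e_prev, m, k, f_star):
    """One step of the sequential (EKF-style) interval update: the new
    estimate adds a gained, nonlinearly transformed error between the
    previous estimate and the new measurement."""
    return e_prev + k * f_star(m - e_prev)

# Illustrative nonlinearity: a saturating (sublinear) error correction.
# The paper matches f* to the Bayesian estimate after one measurement;
# tanh is only a qualitative stand-in for that shape.
f_star = lambda err: 100.0 * np.tanh(err / 100.0)

prior_mean = 800.0               # ms; e_S1-S2 is set to the prior mean
m_s1s2, m_s2s3 = 700.0, 720.0    # hypothetical noisy measurements of t_s

e_s2s3 = ekf_update(prior_mean, m_s1s2, k=1.0, f_star=f_star)  # update at S2
e_s3go = ekf_update(e_s2s3, m_s2s3, k=0.5, f_star=f_star)      # update at S3
```

Because the correction saturates, a large error (first measurement) is only partially absorbed, while the second, smaller error nudges the estimate further toward the measurements.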

Supplementary Figure 2

Average variance explained by polynomials of increasing order. The degree of polynomial required to best describe the temporal response profiles of individual neurons was determined by fitting a polynomial to firing rates computed from a subset of training trials (50% of trials, sampled randomly without replacement) during the S1-S2, S2-S3, and S3-Go epochs. Polynomials were fit across ts for the first 600 ms of the temporal response for each epoch. The quality of fit was assessed by the explained variance, Rn2, between the polynomial of order n and the firing rate computed from the remaining validation trials. Each column above plots the mean (+/- 95% confidence intervals; N = 115 neurons) Rn2 as a function of polynomial order for a given epoch of the task. Vertical dashed line indicates the point at which increasing polynomial order no longer increased the quality of fit. Analyses indicated that a 6th order polynomial was sufficient to describe the temporal response independent of sample interval.
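The cross-validated model-selection procedure described above can be sketched as follows. All data here are synthetic (a smooth firing-rate profile plus noise); the split fractions match the legend, but the rate function and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing-rate profile: a smooth function of time plus noise
# over the first 600 ms of an epoch.
t = np.linspace(0.0, 0.6, 120)
rate = 20 + 15 * np.sin(2 * np.pi * t) + rng.normal(0, 1.5, t.size)

# Split samples into training and validation halves, as in the legend.
idx = rng.permutation(t.size)
train, val = idx[: t.size // 2], idx[t.size // 2 :]

def r2(order):
    """Fit a polynomial of the given order to training samples and
    return the variance explained on held-out validation samples."""
    coeffs = np.polyfit(t[train], rate[train], order)
    resid = rate[val] - np.polyval(coeffs, t[val])
    return 1 - resid.var() / rate[val].var()

scores = {n: r2(n) for n in range(1, 9)}
```

Plotting `scores` against polynomial order reproduces the qualitative shape in the figure: held-out R² rises steeply at low orders and plateaus once the polynomial is flexible enough to capture the temporal profile.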

Supplementary Figure 3

Observation of temporal scaling is robust to polynomial order. a) The increase in scaling as a function of polynomial order for the S2-S3 (top) and S3-Go (bottom) epochs. The increase in scaling was computed from the z-scored rank-sum statistic of a one-sided Wilcoxon test applied to the distribution of ∆R2 across neurons, comparing S2-S3 and S3-Go data to S1-S2 data. Positive values indicate increased scaling during the S2-S3 or S3-Go epochs. Values of Z greater than 1.645 (horizontal dashed lines) indicate a significant increase at an α of 0.05. N = 115 neurons. b) Same as Fig. 4d with the order of the polynomial selected to optimize the fit to unscaled data on a neuron-by-neuron basis (Wilcoxon results inset; N = 115 neurons).

Supplementary Figure 4

Dynamic encoding of ts by individual DMFC neurons. a) Firing rate profile from an example neuron as a function of ts, shown in the same format as Fig. 4a. A linear-nonlinear-Poisson (LNP) model was fit to spike count data in a 150 ms window centered on each triangle, and these fits were used in subsequent panels. The PSTH is representative of the population of 115 neurons. Orange and yellow-orange triangles indicate time points of data shown in panel b. b) Mean firing rates of test data (+/- standard error; n = 30 trials sampled with replacement) for the neuron in panel a at 300 (orange circles) and 500 (yellow circles) ms following S2 as a function of ts. Orange line shows the fit of the LNP model to training data from the 300 ms time point. c) Grayscale plot showing the root mean squared error (RMSE) between the LNP model fit to spike counts at ttrain and observed spike counts at ttest for the example neuron shown in panel a at various combinations of ttrain and ttest (n = 30 trials sampled with replacement from test data). Cyan and magenta indicate cross-temporal (i.e., ttrain ≠ ttest) and auto-temporal (i.e., ttrain = ttest) errors, respectively. d) The cumulative frequency of the log ratio of RMSE for ttrain ≠ ttest to ttrain = ttest. A consistent coding scheme would have similar RMSE values and would lead to a log ratio of zero. Values larger than zero are indicative of non-stationary coding of ts (see Methods). For example, a value of 0.7 indicates cross-temporal prediction errors were, on average, twice that of auto-temporal prediction errors. e) Mean cross-temporal prediction error across neurons during the S2-S3 epoch. f) Cross-temporal prediction errors across single units (mean +/- standard error; n = 115) as a function of ttest for several different values of ttrain (colors, see horizontal lines in panel e). g) The sensitivity parameter of the LNP model, β(t), for each neuron (row) as a function of task epoch and time (columns). Entries in the matrix where the LNP model did not fit the data better than a constant coding model (see Methods) were replaced with zero. h) The loadings of the top three left (top) and top three right (bottom) singular vectors of the sensitivity matrix (panel g) lack structure. See also Fig. 5.
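The logic of the cross-temporal comparison can be illustrated with synthetic data. Everything below is hypothetical: the tuning parameters, the number of trials, and the crude least-squares fit (a stand-in for the full Poisson maximum-likelihood fit used in the paper). The key point is that when a neuron's LNP parameters differ between two time points, a model trained at one time predicts the other time's counts poorly (cross-temporal RMSE exceeds auto-temporal RMSE).

```python
import numpy as np

rng = np.random.default_rng(1)
ts = rng.choice([600, 700, 800, 900, 1000], size=200).astype(float)  # ms

def lnp_counts(beta, b0):
    """Draw Poisson spike counts from an LNP model:
    rate = exp(b0 + beta * ts/1000), counts ~ Poisson(rate)."""
    return rng.poisson(np.exp(b0 + beta * ts / 1000.0))

# Hypothetical tuning at two time points: both the gain (beta) and the
# offset (b0) change, which is what makes cross-temporal prediction fail.
counts_t1 = lnp_counts(beta=1.2, b0=0.5)    # counts at t_train
counts_t2 = lnp_counts(beta=-0.5, b0=3.0)   # counts at a different time

X = np.column_stack([np.ones_like(ts), ts / 1000.0])

def fit_lnp(counts):
    """Crude LNP fit: least-squares regression of log(counts + 0.5)
    on ts -- a simple stand-in for Poisson maximum likelihood."""
    b, *_ = np.linalg.lstsq(X, np.log(counts + 0.5), rcond=None)
    return b

def rmse(b, counts):
    return np.sqrt(np.mean((counts - np.exp(X @ b)) ** 2))

b1 = fit_lnp(counts_t1)
auto = rmse(b1, counts_t1)    # t_train = t_test
cross = rmse(b1, counts_t2)   # t_train != t_test (different tuning)
```

Under stationary coding the two RMSE values would be comparable and their log ratio near zero; here `cross` substantially exceeds `auto`, mirroring panel d.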

Supplementary Figure 5

Population decoding of ts. a) Schematic illustrating the relationship between the linear-nonlinear Poisson (LNP) encoding model and the decoding analysis (see Methods). The parameters of the LNP model were fit to each neuron on a subset of training trials. We then inferred ts based on the spike counts of remaining trials (see Methods). b) Each panel shows ts decoded for validation data using the LNP model fitted to training data. Results are sorted by time (columns) during each task epoch (rows). The LNP model was able to decode ts at all time points following S2 from the population of 115 single units. c) Top: Performance (percent variance explained; Pearson’s r) of a dynamic decoder as a function of time since the last flash for each epoch (gray levels). The dynamic decoder infers ts at each time bin using the LNP model fitted to training data in the same time bin. Bottom: Performance of a stationary decoder using an LNP model fitted to test data at t = 0 ms. Each line corresponds to a different, random selection of 150 test trials from the population of 115 single units to demonstrate the reliability of decoding. The stationary decoder performed poorly compared to the dynamic decoder suggesting that the representation of ts across the population is dynamic, which is consistent with results from single unit analysis (Fig. 5f) and PCA (Fig. 6a).
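A maximum-likelihood decoder of the kind described above can be sketched with a synthetic LNP population. The population size, tuning distributions, and candidate grid are all assumptions for illustration, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
ts_grid = np.array([600, 700, 800, 900, 1000])   # candidate intervals (ms)

# Hypothetical population of LNP neurons, each with its own offset (b0)
# and sensitivity (beta); firing rates follow exp(b0 + beta * ts/1000).
n_neurons = 200
beta = rng.normal(0.0, 1.5, n_neurons)
b0 = rng.normal(1.0, 0.3, n_neurons)

def rates(ts):
    return np.exp(b0 + beta * ts / 1000.0)

def decode(counts):
    """Maximum-likelihood decoding: choose the candidate ts whose
    predicted rates make the observed spike counts most probable
    (Poisson log likelihood, dropping count-only terms)."""
    loglik = [np.sum(counts * np.log(rates(t)) - rates(t)) for t in ts_grid]
    return ts_grid[int(np.argmax(loglik))]

true_ts = 800
counts = rng.poisson(rates(true_ts))   # one simulated population response
decoded = decode(counts)
```

The distinction between the dynamic and stationary decoders in panel c corresponds to whether `beta` and `b0` are refit at each time bin or frozen at t = 0: if the true tuning drifts over time, the frozen model's likelihoods are computed with the wrong parameters and decoding degrades.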

Supplementary Figure 6

Results of encoding analysis applied to units recorded from monkey G. a) Same as Fig. 5c and d for monkey G. b) Same as Supplementary Figure 4g and h for monkey G. c) Same as Supplementary Figure 5b for monkey G.

Supplementary Figure 7

Results of encoding analysis applied to units recorded from monkey B. Same as Supplementary Figure 6 for monkey B. See also Fig. 5.

Supplementary Figure 8

Percent variance explained as a function of the number of principal components (PCs) used to reconstruct the data. Left: explained variance in the first (light gray), second (medium gray), and third (dark gray) epochs as a function of the number of PCs derived from activity in the S1-S2 epoch. Middle and right: same as left, with PCs derived from activity in the S2-S3 and S3-Go epochs, respectively. The first 10 PCs derived from activity within an epoch were sufficient to capture 94.05%, 93.39% and 94.00% of the variance within that epoch. In contrast, between 128 and 156 PCs were needed to explain 90% of the variance in another epoch, suggesting that neural activity traversed distinct subspaces during each epoch of the task.
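The "PCs needed to reach a variance threshold" computation can be sketched as follows, using synthetic population data (the matrix sizes, number of latent components, and noise level are assumptions, not the recorded data).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trial-averaged activity: 150 "neurons" x 100 time bins,
# built from 5 smooth temporal components plus a little noise.
t = np.linspace(0.0, 1.0, 100)
basis = np.stack([np.sin((k + 1) * np.pi * t) for k in range(5)])  # 5 x 100
X = rng.normal(0, 1, (150, 5)) @ basis + rng.normal(0, 0.1, (150, 100))

# PCA via SVD of the time-mean-centered data.
Xc = X - X.mean(axis=1, keepdims=True)
s = np.linalg.svd(Xc, compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)

def n_pcs_for(frac):
    """Smallest number of PCs whose cumulative explained variance
    reaches the given fraction."""
    return int(np.searchsorted(explained, frac) + 1)
```

With a low-dimensional generator, a handful of PCs suffice within the epoch that produced them; the cross-epoch comparison in the figure amounts to projecting one epoch's data onto another epoch's PCs and recomputing the explained-variance curve.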

Supplementary Figure 9

Neural trajectories and principal component analysis (PCA) for individual monkey data. Conventions as in Fig. 6a and Supplementary Fig. 8. Left and right trajectory plots show results from two different views. PCA was performed on n = 63 and 118 units for Monkey B and G, respectively.

Supplementary Figure 10

Speeds and distances between neural trajectories computed from principal components (PCs) derived from neural activity from S1 to Go. a) Speed of each neural trajectory, \(\Omega ^{[t_s]}\), compared to the speed of the reference trajectory, \(\Omega ^{[800]}\), using data from S1 to Go. Colored lines show the progression of elapsed time on \(\Omega ^{[t_s]}\) (\({\mathbf{t}}^{[t_s]}\)) as a function of elapsed time on \(\Omega ^{[800]}\) (\({\mathbf{t}}^{[800]}\)) for different ts. Shadings indicate median +/- 95% confidence intervals. The unity line corresponds to no difference in speed. Dashed lines represent the expected relationship between \({\mathbf{t}}^{[t_s]}\) and \({\mathbf{t}}^{[800]}\) under the internal model hypothesis for an observer with perfect knowledge of ts. b) Distance (\(\delta ^{t_s}\)) between nearby states on \(\Omega ^{[t_s]}\) and \(\Omega ^{[800]}\) as a function of time from S1 to Go. The horizontal dashed line (\(\delta ^{t_s}\) = 0) corresponds to overlapping trajectories. Conventions as in panel a.
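The core of the KiNeT comparison (relative speed via nearest reference states, and distance between trajectories) can be sketched on synthetic trajectories. The 3-D path, the 0.8x speed factor, and the constant offset are all hypothetical choices made so that the analysis has a known answer to recover.

```python
import numpy as np

# Hypothetical trajectories in a 3-D state space: the comparison
# trajectory traces the same path as the reference, but at 0.8x the
# speed and displaced by a small constant offset.
t_ref = np.linspace(0.0, 1.0, 300)
path = lambda u: np.stack([np.cos(u), np.sin(u), u], axis=-1)
ref = path(t_ref)                                    # reference trajectory
traj = path(0.8 * t_ref) + np.array([0.05, 0.0, 0.0])

def kinet(ref, t_ref, traj):
    """For each state on the comparison trajectory, find the nearest
    state on the reference trajectory; return the reference time of that
    state (to assess relative speed) and the distance to it."""
    d = np.linalg.norm(traj[:, None, :] - ref[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    return t_ref[nearest], d.min(axis=1)

t_nearest, dist = kinet(ref, t_ref, traj)
rel_speed = np.polyfit(t_ref, t_nearest, 1)[0]   # slower trajectory -> < 1
```

A comparison trajectory moving slower than the reference yields a sub-unity slope of reference time against trajectory time, and the roughly constant `dist` reflects the separation between the two trajectories, as in panels a and b.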

Supplementary Figure 11

Speeds and distances for each monkey. a) Speed of each neural trajectory, \(\Omega ^{[t_s]}\), compared to the speed of the reference trajectory, \(\Omega ^{[800]}\). Shadings indicate median +/- 95% confidence intervals calculated by bootstrapping (n = 100 resamples). The unity line corresponds to no difference in speed. Dashed lines represent the expected relationship between \({\mathbf{t}}^{[t_s]}\) and \({\mathbf{t}}^{[800]}\) under the internal model hypothesis for an observer with perfect knowledge of ts. b) To estimate relative speed, we fitted a piecewise linear model with two segments to data in panel a. The first segment assumed a constant \({\mathbf{t}}^{[t_s]}\) up to time t0, and the second segment assumed a linear relationship between \({\mathbf{t}}^{[t_s]}\) and \({\mathbf{t}}^{[800]}\) (\({\mathbf{t}}^{[t_s]} = \beta {\mathbf{t}}^{[800]} + c\)). We estimated relative speed by the slope of the second segment fitted to samples of the bootstrap distribution. c) Distance (\(\delta ^{t_s}\)) between nearby states on \(\Omega ^{[t_s]}\) and \(\Omega ^{[800]}\) as a function of time. Shadings indicate median +/- 95% confidence intervals calculated by bootstrapping (n = 100 resamples).
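The two-segment piecewise fit in panel b can be sketched with a simple grid search over the breakpoint. The breakpoint, slope, noise level, and continuity constraint at t0 are illustrative assumptions; only the two-segment structure comes from the legend.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical KiNeT output: elapsed time on a trajectory as a function
# of elapsed time on the reference -- flat up to t0, then linear with
# slope beta (the relative speed). Units: ms.
t_ref = np.linspace(0.0, 800.0, 200)
t0_true, beta_true = 150.0, 0.75
t_traj = np.where(t_ref < t0_true, 0.0, beta_true * (t_ref - t0_true))
t_traj = t_traj + rng.normal(0, 5.0, t_ref.size)

def fit_piecewise(t_ref, t_traj, t0_grid):
    """Grid search over the breakpoint t0; for each candidate, fit the
    constant segment and the linear segment by least squares (assumed
    continuous at t0) and keep the best-fitting model."""
    best_sse, best = np.inf, (None, None)
    for t0 in t0_grid:
        early = t_ref < t0
        c = t_traj[early].mean() if early.any() else 0.0
        x = t_ref[~early] - t0
        beta = (x @ (t_traj[~early] - c)) / (x @ x)
        resid = np.concatenate([t_traj[early] - c,
                                t_traj[~early] - (c + beta * x)])
        sse = resid @ resid
        if sse < best_sse:
            best_sse, best = sse, (t0, beta)
    return best

t0_hat, beta_hat = fit_piecewise(t_ref, t_traj, np.arange(50.0, 400.0, 10.0))
```

Refitting this model to each bootstrap resample, as the legend describes, yields a distribution of slopes from which the relative-speed estimate and its confidence interval are derived.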

Supplementary Figure 12

Speed and distance of neural trajectories aligned to Go. a) Speeds. For the internal model hypothesis, KiNeT predicts that the slope of the regression line relating \({\mathbf{t}}^{[t_s]}\) to \({\mathbf{t}}^{[800]}\) would be steeper for longer ts and shallower for shorter ts. Results were consistent with this prediction. Shadings indicate median +/- 95% confidence intervals calculated by bootstrapping (n = 100 resamples). The unity line corresponds to no difference in speed. Dashed lines represent the expected relationship between \({\mathbf{t}}^{[t_s]}\) and \({\mathbf{t}}^{[800]}\) under the internal model hypothesis for an observer with perfect knowledge of ts. b) Distances. For the internal model hypothesis, KiNeT predicts that the distances would be organized systematically according to ts. Results were consistent with this prediction. Shadings indicate median +/- 95% confidence intervals calculated by bootstrapping (n = 100 resamples).

Supplementary Figure 13

Speed and distance between neural trajectories reflect animals' internal estimates for both monkeys. Internal estimates from trajectory speed were derived from slopes computed as in Supplementary Fig. 11. The colored dots show multiple estimates of \(\hat t_e\) derived from bootstrapping (n = 100). The solid curves show interval estimates derived from EKF model fits. Unity indicates perfect estimates of ts and the horizontal line represents the mean of the prior. Insets: differences in RMSE between models assuming the speed/distance reflects ts versus te(S2) (S2-S3 epoch) or te(S3) (S3-Go epoch).

Supplementary information

Supplementary Information

Supplementary Figs. 1–13.

Reporting Summary
