Neural correlates of reliability-based cue weighting during multisensory integration

Journal name: Nature Neuroscience


Integration of multiple sensory cues is essential for precise and accurate perception and behavioral performance, yet the reliability of sensory signals can vary across modalities and viewing conditions. Human observers typically employ the optimal strategy of weighting each cue in proportion to its reliability, but the neural basis of this computation remains poorly understood. We trained monkeys to perform a heading discrimination task from visual and vestibular cues, varying cue reliability randomly. The monkeys appropriately placed greater weight on the more reliable cue, and population decoding of neural responses in the dorsal medial superior temporal area closely predicted behavioral cue weighting, including modest deviations from optimality. We found that the mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits. These results provide direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
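The "optimal strategy" of weighting each cue in proportion to its reliability can be sketched concretely. A minimal illustration, taking reliability to be the inverse variance of each cue's heading estimate (the function names are ours, not the paper's):

```python
def optimal_weights(sigma_ves, sigma_vis):
    """Weight each cue by its reliability (inverse variance),
    normalized so the two weights sum to 1."""
    r_ves, r_vis = 1.0 / sigma_ves ** 2, 1.0 / sigma_vis ** 2
    total = r_ves + r_vis
    return r_ves / total, r_vis / total

def combined_estimate(theta_ves, theta_vis, sigma_ves, sigma_vis):
    """Reliability-weighted average of the two single-cue headings."""
    w_ves, w_vis = optimal_weights(sigma_ves, sigma_vis)
    return w_ves * theta_ves + w_vis * theta_vis
```

Under this rule, halving one cue's noise quadruples its relative reliability, which is why the monkeys' weights shift toward the visual cue as motion coherence increases.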

At a glance


  1. Figure 1: Cue-conflict configuration and example behavioral session.

    (a) Monkeys were presented with visual (optic flow) and/or vestibular (inertial motion) heading stimuli in the horizontal plane. The heading (θ) was varied in fine steps around straight ahead, and the task was to indicate rightward or leftward heading with a saccade after each trial. On a subset of visual-vestibular (combined) trials, the headings specified by each cue were separated by a conflict angle (Δ) of ±4°, where positive Δ indicates visual to the right of vestibular, and vice versa for negative Δ. Schematic shows two possible combinations of θ and Δ. (b) Psychometric functions for an example session showing the proportion of rightward choices as a function of heading for the single-cue conditions. Psychophysical thresholds were taken as the s.d. (σ) of the best-fitting cumulative Gaussian function (smooth curves) for each modality. Single-cue thresholds were used to predict, via equation (2), the weights that an optimal observer should assign to each cue during combined trials. (c) Psychometric functions for the combined modality at low (16%) coherence, plotted separately for each value of Δ. The shifts of the points of subjective equality (PSEs) during cue conflict were used to compute observed vestibular weights (equation (4)). (d) Data are presented as in c, but for the high (60%) coherence combined trials.
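The caption describes estimating the PSE and threshold as the mean and s.d. of a best-fitting cumulative Gaussian. A rough sketch of such a fit, using a crude grid search rather than the maximum-likelihood procedure the study actually used (ref. 48):

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian: predicted proportion of rightward choices
    at heading x, given PSE mu and threshold sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(headings, p_right):
    """Grid-search least-squares fit; returns (mu, sigma).
    Only a sketch: real analyses use maximum likelihood (ref. 48)."""
    best_err, best_mu, best_sigma = float('inf'), 0.0, 1.0
    for m in range(-50, 51):        # candidate PSEs: -5.0 to 5.0 deg
        for s in range(1, 101):     # candidate thresholds: 0.1 to 10.0 deg
            mu, sigma = m / 10.0, s / 10.0
            err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                      for x, p in zip(headings, p_right))
            if err < best_err:
                best_err, best_mu, best_sigma = err, mu, sigma
    return best_mu, best_sigma
```

The PSE shift under cue conflict, read off from fits like this, is what equation (4) converts into an observed vestibular weight.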

  2. Figure 2: Average behavioral performance.

    (a,b) Optimal (equation (2), open symbols and dashed line) and observed (equation (4), filled symbols and solid line) vestibular weights as a function of visual motion coherence (cue reliability), shown separately for the two monkeys (a, monkey Y, N = 40 sessions; b, monkey W, N = 26). (c,d) Optimal (equation (3)) and observed (estimated from the psychometric fits) psychophysical thresholds, normalized separately by each monkey's vestibular threshold. Error bars represent 95% confidence intervals computed with a bootstrap procedure.
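Equation (3) is not reproduced in this excerpt, but the standard ideal-observer prediction for the combined threshold, which it presumably denotes, can be sketched as follows; the prediction is that the combined threshold never exceeds the smaller single-cue threshold:

```python
import math

def predicted_combined_threshold(sigma_ves, sigma_vis):
    """Ideal-observer combined threshold: combined variance equals the
    product of the single-cue variances over their sum."""
    v_ves, v_vis = sigma_ves ** 2, sigma_vis ** 2
    return math.sqrt(v_ves * v_vis / (v_ves + v_vis))
```

With equal single-cue thresholds, the predicted improvement is a factor of sqrt(2), the classic benchmark against which the observed thresholds in c and d are compared.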

  3. Figure 3: Example MSTd neuron showing a correlate of trial-by-trial cue reweighting.

    (a–c) Mean firing rate (spikes per s) ± s.e.m. is plotted as a function of heading for the single-cue trials (a), and combined trials at low (b) and high (c) coherence. The shift in combined tuning curves with cue conflict, in opposite directions for the two levels of reliability, forms the basis for the reweighting effects in the population decoding analysis depicted in Figures 4 and 6 (see Supplementary Figs. 1 and 2 for single-cell neurometric analyses).

  4. Figure 4: Likelihood-based decoding approach used to simulate behavioral performance based on MSTd activity.

    (a,b) Example likelihood functions (P(r|θ)) for the single-cue modalities. Four individual trials of the same heading (θ = 1.2°, green arrow) are superimposed for each condition. Likelihoods were computed from equation (14) using simulated population responses (r) composed of random draws of single-neuron activity. (c) Simulated psychometric functions for a decoded population that included all 108 MSTd neurons in our sample. (d,e) Combined modality likelihood functions for θ = 1.2° (green arrow and dashed line) and Δ = +4°, for low (cyan) and high (blue) coherence. Black and red inverted triangles indicate the headings specified by vestibular and visual cues, respectively, in this stimulus configuration. (f) Psychometric functions for the simulated combined modality, showing the shift in the PSE resulting from the change in coherence (that is, reweighting).
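The decoding in the figure applies equation (14) to recorded MSTd responses; as a toy stand-in, the same idea can be sketched with a handful of hypothetical neurons assumed to have independent Poisson variability and linear tuning (all tuning parameters below are invented for illustration):

```python
import math

# Hypothetical tuning curves f(theta): mean firing rate (spikes per s)
# as a function of heading, kept linear and positive over the tested range.
tuning = [lambda th: 10.0 + 2.0 * th,
          lambda th: 15.0 - 1.5 * th,
          lambda th: 12.0 + 1.0 * th]

def log_likelihood(counts, theta):
    """log P(r|theta) for independent Poisson neurons, dropping the
    theta-independent log(r!) terms."""
    return sum(r * math.log(f(theta)) - f(theta)
               for r, f in zip(counts, tuning))

def decode(counts, thetas):
    """Maximum-likelihood heading over a grid of candidate headings."""
    return max(thetas, key=lambda th: log_likelihood(counts, th))
```

A binary rightward/leftward choice, as in the simulated psychometric functions, could then be read out by comparing the likelihood mass on either side of straight ahead.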

  5. Figure 5: Visual-vestibular congruency and average MSTd tuning curves.

    (a) Histogram of congruency index (CI) values for monkey Y (top), monkey W (middle) and both animals together (bottom). Positive congruency index values indicate consistent tuning slope across visual (60% coherence) and vestibular single-cue conditions, whereas negative values indicate opposite tuning slopes. Filled bars indicate congruency index values whose constituent correlation coefficients were both statistically significant (ref. 11); however, here we defined congruent and opposite cells by an arbitrary criterion of congruency index > 0.4 and congruency index < −0.4, respectively. (b,c) Population average of MSTd tuning curves for the five stimulus conditions, vestibular (black), low-coherence visual (magenta, dashed), high-coherence visual (red), low-coherence combined (cyan, dashed) and high-coherence combined (blue), separated into congruent (b) and opposite (c) classes. Prior to averaging, some neurons' tuning preferences were mirrored such that all cells preferred rightward heading in the high-coherence visual modality.
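The congruency index of ref. 11 is, in essence, the product of the correlation coefficients between firing rate and heading for the two single-cue conditions. A sketch of that computation (helper names are ours):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def congruency_index(headings, rate_ves, rate_vis):
    """Product of the rate-vs-heading correlations for the two single-cue
    conditions: positive when the tuning slopes agree, negative when they
    are opposite (a sketch of the index defined in ref. 11)."""
    return pearson_r(headings, rate_ves) * pearson_r(headings, rate_vis)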

  6. Figure 6: Population decoding results and comparison with monkey behavior.

    (a–f) Weights (left column, data are presented as in Fig. 2a,b; from equations (2) and (4)) and thresholds (right column, data are presented as in Fig. 2c,d; from equation (3) and psychometric fits to real or simulated choice data) quantifying the performance of an optimal observer reading out MSTd population activity. Thresholds were normalized by the value of the vestibular threshold. The population of neurons included in the decoder was varied to examine the readout of all cells (a,b), opposite cells only (c,d; note the different ordinate scale in d) or congruent cells only (e,f). (g,h) Monkey behavioral performance (pooled across the two animals) is summarized. Error bars indicate 95% confidence intervals.

  7. Figure 7: Goodness-of-fit of linear weighted sum model and distribution of vestibular and visual neural weights.

    Combined responses during the discrimination task (N = 108) were modeled as a weighted sum of visual and vestibular responses, separately for each coherence level (equation (5)). (a,b) Histograms of a goodness-of-fit metric (R2), taken as the square of the correlation coefficient between the modeled response and the real response. The statistical significance of this correlation was used to code the R2 histograms. (c,d) Histograms of vestibular (c) and visual (d) neural weights, separated by coherence (gray bars = 16%, black bars = 60%). Color-matched arrowheads indicate medians of the distributions. Only neurons with significant R2 values for both coherences were included (N = 83).
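Equation (5) is not reproduced in this excerpt, but the weighted-sum model it describes can be fit by ordinary least squares across headings. A sketch solving the 2×2 normal equations directly, omitting any constant term the full model may include:

```python
def fit_neural_weights(r_comb, r_ves, r_vis):
    """Least-squares fit of r_comb ~ A_ves*r_ves + A_vis*r_vis across
    headings (a sketch of the weighted-sum model, no constant term)."""
    svv = sum(a * a for a in r_ves)
    sii = sum(b * b for b in r_vis)
    svi = sum(a * b for a, b in zip(r_ves, r_vis))
    scv = sum(c * a for c, a in zip(r_comb, r_ves))
    sci = sum(c * b for c, b in zip(r_comb, r_vis))
    det = svv * sii - svi * svi          # normal-equation determinant
    a_ves = (scv * sii - sci * svi) / det
    a_vis = (sci * svv - scv * svi) / det
    return a_ves, a_vis
```

The R² reported in the caption is then simply the squared correlation between the fitted weighted sum and the measured combined-condition responses.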

  8. Figure 8: Comparison of optimal and actual (fitted) neural weights.

    (a) Actual weight ratios (Aves/Avis) for each cell were derived from the best-fitting linear model (equation (5), as in Fig. 7), and optimal weight ratios (ρopt) for the corresponding cells were computed according to equation (7). Symbol color indicates coherence (16%, blue; 60%, red) and shape indicates monkey identity. Note that the derivation of equation (7) assumes congruent tuning (see Supplementary Analysis), and ρopt is therefore constrained to be positive (because the signs of the tuning slopes will be equal). Thus, only congruent cells with positive weight ratios were included in this comparison (N = 36 for low coherence, 37 for high coherence). (b,d) Decoder performance (data are presented as in Fig. 6, using equations (2)–(4) and fits to simulated choice data) based on congruent neurons, after replacing combined modality responses with weighted sums of single-cue responses, using the optimal weights from equation (7) (abscissa in a). (c,e) Data are presented as in b and d, using the actual (fitted) weights (ordinate in a) to generate the artificial combined responses.

Change history

Corrected online 03 April 2013
In the version of this article initially published, there were typographical errors in the numerators of equations (12) and (13). The terms μcomb in equation (12) and PSEvis in equation (13) were preceded by plus signs; the correct equations contain minus signs in those locations. The errors have been corrected in the HTML and PDF versions of the article.


  1. Alais, D. & Burr, D. The ventriloquist effect results from near-optimal bimodal integration. Curr. Biol. 14, 257–262 (2004).
  2. Ernst, M.O. & Banks, M.S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433 (2002).
  3. Hillis, J.M., Watt, S.J., Landy, M.S. & Banks, M.S. Slant from texture and disparity cues: optimal cue combination. J. Vis. 4, 967–992 (2004).
  4. Jacobs, R.A. Optimal integration of texture and motion cues to depth. Vision Res. 39, 3621–3629 (1999).
  5. Knill, D.C. & Saunders, J.A. Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Res. 43, 2539–2558 (2003).
  6. Landy, M.S., Maloney, L.T., Johnston, E.B. & Young, M. Measurement and modeling of depth cue combination: in defense of weak fusion. Vision Res. 35, 389–412 (1995).
  7. van Beers, R.J., Wolpert, D.M. & Haggard, P. When feeling is more important than seeing in sensorimotor adaptation. Curr. Biol. 12, 834–837 (2002).
  8. Young, M.J., Landy, M.S. & Maloney, L.T. A perturbation analysis of depth perception from combinations of texture and motion cues. Vision Res. 33, 2685–2696 (1993).
  9. Clark, J.J. & Yuille, A.L. Data Fusion for Sensory Information Processing Systems (Kluwer Academic, Boston, 1990).
  10. Cochran, W.G. Problems arising in the analysis of a series of similar experiments. J. R. Stat. Soc. 4 (suppl.), 102–118 (1937).
  11. Gu, Y., Angelaki, D.E. & DeAngelis, G.C. Neural correlates of multisensory cue integration in macaque MSTd. Nat. Neurosci. 11, 1201–1210 (2008).
  12. Gu, Y., DeAngelis, G.C. & Angelaki, D.E. A functional link between area MSTd and heading perception based on vestibular signals. Nat. Neurosci. 10, 1038–1047 (2007).
  13. Fetsch, C.R., Turner, A.H., DeAngelis, G.C. & Angelaki, D.E. Dynamic reweighting of visual and vestibular cues during self-motion perception. J. Neurosci. 29, 15601–15612 (2009).
  14. Ma, W.J., Beck, J.M., Latham, P.E. & Pouget, A. Bayesian inference with probabilistic population codes. Nat. Neurosci. 9, 1432–1438 (2006).
  15. Landy, M.S., Banks, M.S. & Knill, D.C. Ideal-observer models of cue integration. in Sensory Cue Integration (eds. Trommershäuser, J., Kording, K.P. & Landy, M.S.) 5–29 (Oxford University Press, New York, 2011).
  16. Knill, D.C. & Richards, W. Perception as Bayesian Inference (Cambridge University Press, New York, 1996).
  17. Gu, Y., Watkins, P.V., Angelaki, D.E. & DeAngelis, G.C. Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area. J. Neurosci. 26, 73–85 (2006).
  18. Battaglia, P.W., Jacobs, R.A. & Aslin, R.N. Bayesian integration of visual and auditory signals for spatial localization. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 20, 1391–1397 (2003).
  19. Rosas, P., Wagemans, J., Ernst, M.O. & Wichmann, F.A. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 22, 801–809 (2005).
  20. Duffy, C.J. MST neurons respond to optic flow and translational movement. J. Neurophysiol. 80, 1816–1827 (1998).
  21. Takahashi, K. et al. Multimodal coding of three-dimensional rotation and translation in area MSTd: comparison of visual and vestibular selectivity. J. Neurosci. 27, 9742–9756 (2007).
  22. Dayan, P. & Abbott, L.F. Theoretical Neuroscience (MIT Press, Cambridge, Massachusetts, 2001).
  23. Foldiak, P. The 'ideal homunculus': statistical inference from neural population responses. in Computation and Neural Systems (eds. Eeckman, F.H. & Bower, J.M.) 55–60 (Kluwer Academic Publishers, Norwell, Massachusetts, 1993).
  24. Sanger, T.D. Probability density estimation for the interpretation of neural population codes. J. Neurophysiol. 76, 2790–2793 (1996).
  25. Morgan, M.L., DeAngelis, G.C. & Angelaki, D.E. Multisensory integration in macaque visual cortex depends on cue reliability. Neuron 59, 662–673 (2008).
  26. Britten, K.H., Shadlen, M.N., Newsome, W.T. & Movshon, J.A. Responses of neurons in macaque MT to stochastic motion signals. Vis. Neurosci. 10, 1157–1169 (1993).
  27. Heuer, H.W. & Britten, K.H. Linear responses to stochastic motion signals in area MST. J. Neurophysiol. 98, 1115–1124 (2007).
  28. Ohshiro, T., Angelaki, D.E. & DeAngelis, G.C. A normalization model of multisensory integration. Nat. Neurosci. 14, 775–782 (2011).
  29. Knill, D.C. & Pouget, A. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci. 27, 712–719 (2004).
  30. Stein, B.E. & Meredith, M.A. The Merging of the Senses (MIT Press, Cambridge, Massachusetts, 1993).
  31. Bremmer, F., Klam, F., Duhamel, J.R., Ben Hamed, S. & Graf, W. Visual-vestibular interactive responses in the macaque ventral intraparietal area (VIP). Eur. J. Neurosci. 16, 1569–1586 (2002).
  32. Chen, A., DeAngelis, G.C. & Angelaki, D.E. Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex. J. Neurosci. 31, 12036–12052 (2011).
  33. Fukushima, K. Corticovestibular interactions: anatomy, electrophysiology and functional considerations. Exp. Brain Res. 117, 1–16 (1997).
  34. Xiao, Q., Barborica, A. & Ferrera, V.P. Radial motion bias in macaque frontal eye field. Vis. Neurosci. 23, 49–60 (2006).
  35. Lewis, J.W. & Van Essen, D.C. Corticocortical connections of visual, sensorimotor and multimodal processing areas in the parietal lobe of the macaque monkey. J. Comp. Neurol. 428, 112–137 (2000).
  36. Stanton, G.B., Bruce, C.J. & Goldberg, M.E. Topography of projections to posterior cortical areas from the macaque frontal eye fields. J. Comp. Neurol. 353, 291–305 (1995).
  37. Felleman, D.J. & Van Essen, D.C. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47 (1991).
  38. Maunsell, J.H. & van Essen, D.C. The connections of the middle temporal visual area (MT) and their relationship to a cortical hierarchy in the macaque monkey. J. Neurosci. 3, 2563–2586 (1983).
  39. Lappe, M., Bremmer, F., Pekel, M., Thiele, A. & Hoffmann, K.P. Optic flow processing in monkey STS: a theoretical and experimental approach. J. Neurosci. 16, 6265–6285 (1996).
  40. Fukushima, K. Extraction of visual motion and optic flow. Neural Netw. 21, 774–785 (2008).
  41. Perrone, J.A. & Stone, L.S. Emulating the visual receptive-field properties of MST neurons with a template model of heading estimation. J. Neurosci. 18, 5958–5975 (1998).
  42. Knill, D.C. Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. J. Vis. 7, 1–24 (2007).
  43. Cheng, K., Shettleworth, S.J., Huttenlocher, J. & Rieser, J.J. Bayesian integration of spatial information. Psychol. Bull. 133, 625–637 (2007).
  44. Körding, K.P. et al. Causal inference in multisensory perception. PLoS ONE 2, e943 (2007).
  45. Komatsu, H. & Wurtz, R.H. Relation of cortical areas MT and MST to pursuit eye movements. I. Localization and visual properties of neurons. J. Neurophysiol. 60, 580–603 (1988).
  46. Tanaka, K. & Saito, H. Analysis of motion of the visual field by direction, expansion/contraction, and rotation cells clustered in the dorsal part of the medial superior temporal area of the macaque monkey. J. Neurophysiol. 62, 626–641 (1989).
  47. Fetsch, C.R. et al. Spatiotemporal properties of vestibular responses in area MSTd. J. Neurophysiol. 104, 1506–1522 (2010).
  48. Wichmann, F.A. & Hill, N.J. The psychometric function. I. Fitting, sampling, and goodness of fit. Percept. Psychophys. 63, 1293–1313 (2001).
  49. Jazayeri, M. & Movshon, J.A. Optimal representation of sensory information by neural populations. Nat. Neurosci. 9, 690–696 (2006).
  50. Gu, Y., Fetsch, C.R., Adeyemo, B., DeAngelis, G.C. & Angelaki, D.E. Decoding of MSTd population activity accounts for variations in the precision of heading perception. Neuron 66, 596–609 (2010).


Author information

  1. These authors contributed equally to this work.

    • Gregory C DeAngelis &
    • Dora E Angelaki


  1. Department of Anatomy and Neurobiology, Washington University School of Medicine, Saint Louis, Missouri, USA.

    • Christopher R Fetsch,
    • Gregory C DeAngelis &
    • Dora E Angelaki
  2. Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, USA.

    • Alexandre Pouget &
    • Gregory C DeAngelis
  3. Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland.

    • Alexandre Pouget
  4. Department of Neuroscience, Baylor College of Medicine, Houston, Texas, USA.

    • Dora E Angelaki


C.R.F., G.C.D. and D.E.A. conceived the study and designed the analyses. C.R.F. performed the experiments and analyzed the data. A.P. derived equations (6)–(8) and consulted on all theoretical aspects of the work. All of the authors wrote the paper.

Competing financial interests

The authors declare no competing financial interests.


Supplementary information

PDF files

  1. Supplementary Text and Figures (2 MB)

    Supplementary Figures 1–6 and Supplementary Analysis
