Dynamic routing of task-relevant signals for decision making in dorsolateral prefrontal cortex

Journal name:
Nature Neuroscience
Volume:
18
Pages:
295–301
Year published:
2015
DOI:
10.1038/nn.3918

Abstract

Neurons in the dorsolateral prefrontal cortex (DLPFC) encode a diverse array of sensory and mnemonic signals, but little is known about how this information is dynamically routed during decision making. We analyzed neuronal activity in the DLPFC of monkeys performing a probabilistic reversal task in which information about the probability and magnitude of reward was provided by the target color and numerical cues, respectively. The location of the target of a given color was randomized across trials and was therefore not relevant for subsequent choices. DLPFC neurons encoded signals related to both task-relevant and task-irrelevant features, but only task-relevant mnemonic signals were encoded congruently with choice signals. Furthermore, only the task-relevant signals related to previous events were encoded more robustly following rewarded outcomes. Thus, multiple types of neural signals are flexibly routed in the DLPFC so as to favor actions that maximize reward.

Figures

  1. Figure 1: Behavioral task and performance.

    (a) Probabilistic reversal task and the reward-magnitude combinations used (inset). (b) Proportion of trials in which the animal chose the same target color or location as in the previous trial after the previous choice was rewarded (win-stay) or unrewarded (lose-stay) (n = 45 sessions in monkey O and 73 sessions in monkey U). A code sketch of this win-stay/lose-stay computation follows the figure legends.

  2. Figure 2: Population summary and single-neuron examples for activity related to events in the previous trial.

    (a) Fraction of neurons significantly encoding outcomes, chosen locations and chosen colors in the previous trial (n = 226 neurons; 77 and 149 neurons from monkeys O and U, respectively). (b–d) Example neurons showing the effect on firing rate of the outcome of the previous trial (b), the previously chosen location (c) and the previously chosen color (d). Gray background, target period; shaded areas represent ± s.e.m. (a–d). A generic sketch of this type of regression screen follows the figure legends.

  3. Figure 3: Population summary and single-neuron examples for activity related to events in the current trial.

    (a) Fraction of neurons encoding the chosen target location, chosen color, target color positions and magnitudes in the current trial (n = 226 neurons). (b–e) Example neurons showing the effect on firing rate of target position (320 trials each) (b), chosen color (c) and chosen location (d), and a neuron encoding the reward magnitude of the rightward target (e). Gray background, target period; shaded areas represent ± s.e.m. (a–e).

  4. Figure 4: Population summary and single-neuron examples related to interaction effects.

    (a) Fraction of neurons showing effects of PRL, previously chosen color and outcome and their interaction corresponding to the HVL (n = 226 neurons). (b–d) Examples of neurons showing encoding of HVL (b), previously chosen color × previous outcome interaction (c) or PRL (d). Gray background, target period; shaded areas represent ± s.e.m. (a–d).

  5. Figure 5: Congruent coding of HVL and choice.

    (a) Relationship between regression coefficients for HVL versus choice (n = 226 neurons). (b) Relationship between regression coefficients for PRL versus choice (n = 226 neurons). (c,d) Time course of PC1 (see Online Methods) related to choice, HVL and PRL using signed (c) or unsigned (d) regression coefficients (n = 226 neurons). Arrows indicate critical values estimated from 10,000 random shuffles (P < 0.05). A code sketch of this PC1/shuffle analysis follows the figure legends.

  6. Figure 6: Effects of reward on task-relevant and task-irrelevant signals in the DLPFC.

    (a) Firing rate of an example neuron encoding the HVL, shown separately for leftward and rightward choices in the current trial (t). (b) Decoding accuracy for the current location of the previously chosen color, the previously chosen location and the currently chosen location, computed separately for previously rewarded and unrewarded trials (t – 1) (n = 216 neurons that survived our criterion for cross-validation; Online Methods). Shaded areas represent ± s.e.m. The scatterplots (bottom) show the fraction of correct classifications for each neuron during the target period. Colors indicate whether zero, one or both decoding analyses applied to previously rewarded and unrewarded trials were significantly higher than chance (P < 0.05), and large symbols indicate that the difference between rewarded and unrewarded trials was statistically significant (z-test, P < 0.05). A code sketch of this decoding comparison follows the figure legends.

  7. Supplementary Figure 1: Anatomical distributions for neurons encoding HVL, PRL and choice.

    Colored symbols correspond to neurons showing significant modulation for each variable (n = 226 neurons, both monkeys combined).
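
The win-stay/lose-stay measure in Figure 1b lends itself to a minimal code sketch. The snippet below is illustrative only, not the authors' analysis code: the function stay_probabilities and its arguments are hypothetical names, and the logic simply computes how often the previous choice was repeated, split by whether that choice was rewarded.

    import numpy as np

    def stay_probabilities(choices, rewarded):
        """choices: chosen target identity (e.g., color or location) per trial;
        rewarded: boolean array of the same length marking rewarded trials."""
        choices = np.asarray(choices)
        rewarded = np.asarray(rewarded, dtype=bool)
        stayed = choices[1:] == choices[:-1]   # repeated the previous choice?
        prev_win = rewarded[:-1]               # previous trial rewarded?
        win_stay = stayed[prev_win].mean()     # P(stay | previous win)
        lose_stay = stayed[~prev_win].mean()   # P(stay | previous loss)
        return win_stay, lose_stay

    # Toy example: color choices (0/1) and outcomes over eight trials
    print(stay_probabilities([0, 0, 1, 1, 1, 0, 0, 1],
                             [True, True, False, True, False, False, True, True]))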
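
Figures 2a, 3a and 4a report fractions of neurons whose activity is significantly modulated by task variables. The sketch below shows a generic regression screen of this kind, assuming ordinary least-squares regression of single-bin spike counts on a trial-by-trial design matrix; the paper's actual model and regressors are specified in its Online Methods, and the function name fraction_significant is hypothetical.

    import numpy as np
    import statsmodels.api as sm

    def fraction_significant(spike_counts, design, alpha=0.05):
        """spike_counts: (n_neurons, n_trials) counts in one time bin.
        design: (n_trials, n_regressors) task variables, e.g., previous
        outcome, previously chosen location, previously chosen color."""
        spike_counts = np.asarray(spike_counts, dtype=float)
        design = np.asarray(design, dtype=float)
        X = sm.add_constant(design)
        n_neurons, n_regressors = spike_counts.shape[0], design.shape[1]
        significant = np.zeros((n_neurons, n_regressors), dtype=bool)
        for i in range(n_neurons):
            fit = sm.OLS(spike_counts[i], X).fit()
            significant[i] = fit.pvalues[1:] < alpha   # skip the intercept
        return significant.mean(axis=0)   # fraction of neurons per regressor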
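
Figure 5c,d tracks the time course of the first principal component (PC1) of population regression coefficients, with critical values from 10,000 random shuffles. The sketch below illustrates one way such an analysis could be assembled; the PCA convention (neurons as observations, time bins as features) and the shuffling scheme (permuting coefficients across neurons within each time bin) are assumptions, not a restatement of the paper's Online Methods.

    import numpy as np

    def pc1_timecourse(coefs):
        """coefs: (n_neurons, n_bins) regression coefficients for one variable."""
        centered = coefs - coefs.mean(axis=0, keepdims=True)
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        return s[0] * vt[0]   # leading component as a function of time

    def shuffle_threshold(coefs, n_shuffles=10000, pct=95, seed=0):
        """Critical value for the peak |PC1| expected under random shuffling."""
        rng = np.random.default_rng(seed)
        peaks = np.empty(n_shuffles)
        for k in range(n_shuffles):
            shuffled = rng.permuted(coefs, axis=0)   # permute neurons per time bin
            peaks[k] = np.abs(pc1_timecourse(shuffled)).max()
        return np.percentile(peaks, pct)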
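
Figure 6b compares cross-validated decoding accuracy between previously rewarded and unrewarded trials and tests the difference with a z-test. A minimal sketch of that kind of comparison is given below, assuming a logistic-regression decoder and a two-proportion z-test on the fractions of correct classifications; the decoder, cross-validation scheme and function names are illustrative, not the paper's.

    import numpy as np
    from scipy.stats import norm
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    def decode_accuracy(counts, labels, n_folds=5):
        """counts: (n_trials, n_features) spike counts; labels: binary location."""
        labels = np.asarray(labels)
        clf = LogisticRegression(max_iter=1000)
        pred = cross_val_predict(clf, counts, labels, cv=n_folds)
        return (pred == labels).mean(), len(labels)

    def compare_accuracies(acc_rew, n_rew, acc_unrew, n_unrew):
        """Two-proportion z-test on fractions of correct classifications."""
        pooled = (acc_rew * n_rew + acc_unrew * n_unrew) / (n_rew + n_unrew)
        se = np.sqrt(pooled * (1 - pooled) * (1.0 / n_rew + 1.0 / n_unrew))
        z = (acc_rew - acc_unrew) / se
        return z, 2 * (1 - norm.cdf(abs(z)))   # two-sided P value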


Author information

  1. Present address: Gladstone Institute of Neurological Disease, San Francisco, California, USA.

    • Christopher H Donahue

Affiliations

  1. Department of Neurobiology, Yale University School of Medicine, New Haven, Connecticut, USA.

    • Christopher H Donahue
    • Daeyeol Lee
  2. Kavli Institute for Neuroscience, Yale University School of Medicine, New Haven, Connecticut, USA.

    • Daeyeol Lee
  3. Department of Psychology, Yale University, New Haven, Connecticut, USA.

    • Daeyeol Lee

Contributions

C.H.D. and D.L. designed the experiments and wrote the manuscript. C.H.D. performed the experiments and analyzed the data.

Competing financial interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to: Daeyeol Lee


Supplementary information

Supplementary Figures

  1. Supplementary Figure 1: Anatomical distributions for neurons encoding HVL, PRL and choice. (106 KB)

    Colored symbols correspond to neurons showing significant modulation for each variable (n = 226 neurons, both monkeys combined).

PDF files

  1. Supplementary Text and Figures (482 KB)

    Supplementary Figure 1 and Supplementary Table 1

  2. Supplementary Methods Checklist (406 KB)
