Perspective

Using goal-driven deep learning models to understand sensory cortex

Nature Neuroscience volume 19, pages 356–365 (2016)

Abstract

Fueled by innovation in the computer vision and artificial intelligence communities, recent developments in computational neuroscience have used goal-driven hierarchical convolutional neural networks (HCNNs) to make strides in modeling neural single-unit and population responses in higher visual cortical areas. In this Perspective, we review the recent progress in a broader modeling context and describe some of the key technical innovations that have supported it. We then outline how the goal-driven HCNN approach can be used to delve even more deeply into understanding the development and organization of sensory cortical processing.
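The core move the abstract describes is mapping features from a task-optimized hierarchical convolutional network onto recorded neural responses, typically via regularized linear regression. As a minimal illustration (not code from the paper, and greatly simplified): below, a tiny two-stage random-filter "HCNN" stands in for a trained network, and a noisy linear readout of its features stands in for neural recordings. All names, sizes, and parameter values are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu_pool(x, filters):
    """One HCNN stage: channel-summed valid 2D convolution, ReLU, 2x2 max pooling."""
    n_f, k, _ = filters.shape
    h, w = x.shape[-2] - k + 1, x.shape[-1] - k + 1
    out = np.zeros((n_f, h, w))
    for f in range(n_f):
        for i in range(h):
            for j in range(w):
                # broadcasting sums over input channels as well as the kxk window
                out[f, i, j] = np.sum(x[..., i:i + k, j:j + k] * filters[f])
    out = np.maximum(out, 0.0)  # ReLU nonlinearity
    h2, w2 = h // 2, w // 2
    return out[:, :2 * h2, :2 * w2].reshape(n_f, h2, 2, w2, 2).max(axis=(2, 4))

def hcnn_features(img, stages):
    """Run an image through the stacked stages and flatten the top-layer output."""
    x = img
    for filters in stages:
        x = conv_relu_pool(x, filters)
    return x.ravel()

# Random filters stand in for a goal-driven (task-optimized) network.
stages = [rng.standard_normal((4, 3, 3)), rng.standard_normal((8, 3, 3))]

# 12x12 "images": stage 1 -> (4, 5, 5); stage 2 -> (8, 1, 1) -> 8-dim features.
imgs = rng.standard_normal((60, 12, 12))
F = np.stack([hcnn_features(im, stages) for im in imgs])

# Synthetic "neural responses" of 5 units: a noisy linear readout of features.
W_true = rng.standard_normal((F.shape[1], 5))
Y = F @ W_true + 0.1 * rng.standard_normal((60, 5))

# Ridge-regress responses onto model features on 40 train images,
# then predict the 20 held-out images.
lam = 1.0
W = np.linalg.solve(F[:40].T @ F[:40] + lam * np.eye(F.shape[1]),
                    F[:40].T @ Y[:40])
pred = F[40:] @ W
r = np.corrcoef(pred.ravel(), Y[40:].ravel())[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```

In the actual studies reviewed here, the features come from networks trained on large-scale recognition tasks and the responses from electrophysiology or fMRI; the regression-on-held-out-images logic, however, is the same shape as this toy.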



Author information

Affiliations

  1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.

    • Daniel L K Yamins
    •  & James J DiCarlo
  2. McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.

    • Daniel L K Yamins
    •  & James J DiCarlo

Authors

  • Daniel L K Yamins
  • James J DiCarlo

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to Daniel L K Yamins.

About this article

DOI

https://doi.org/10.1038/nn.4244
