Perspective

The neuroconnectionist research programme

Abstract

Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call ‘neuroconnectionism’. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.

Figures

Fig. 1: The neuroconnectionist research cycle.
Fig. 2: Lakatosian research programmes.
Fig. 3: Schematic of the Goldilocks zone of biological abstraction.
Fig. 4: The current neuroconnectionist toolkit for model testing.
Fig. 5: The historical progression of the neuroconnectionist belt in visual computational neuroscience is highly progressive.

Similar content being viewed by others

References

  1. Churchland, P. S. & Sejnowski, T. J. Blending computational and experimental neuroscience. Nat. Rev. Neurosci. 17, 667–668 (2016).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  2. Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A. & Poeppel, D. Neuroscience needs behaviour: correcting a reductionist bias. Neuron 93, 480–490 (2017).

    Article  CAS  PubMed  Google Scholar 

  3. Kanwisher, N. & Yovel, G. The fusiform face area: a cortical region specialized for the perception of faces. Philos. Trans. R. Soc. B Biol. Sci. 361, 2109–2128 (2006).

    Article  Google Scholar 

  4. Sergent, J., Ohta, S. & Macdonald, B. Functional neuroanatomy of face and object processing: a positron emission tomography study. Brain 115, 15–36 (1992).

    Article  PubMed  Google Scholar 

  5. Tong, F., Nakayama, K., Vaughan, J. T. & Kanwisher, N. Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21, 753–759 (1998).

    Article  CAS  PubMed  Google Scholar 

  6. Tsao, D. Y., Freiwald, W. A., Knutsen, T. A., Mandeville, J. B. & Tootell, R. B. Faces and objects in macaque cerebral cortex. Nat. Neurosci. 6, 989–995 (2003).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  7. Rust, N. C. & Movshon, J. A. In praise of artifice. Nat. Neurosci. 8, 1647–1650 (2005).

    Article  CAS  PubMed  Google Scholar 

  8. Vinken, K., Konkle, T. & Livingstone, M. The neural code for ‘face cells’ is not face specific. Preprint at bioRxiv https://doi.org/10.1101/2022.03.06.483186 (2022).

    Article  Google Scholar 

  9. McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943).

    Article  Google Scholar 

  10. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

    Article  CAS  PubMed  Google Scholar 

  11. Schmidhuber, J. Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015).

    Article  PubMed  Google Scholar 

  12. Schrimpf, M. et al. Brain-score: which artificial neural network for object recognition is most brain-like? Preprint at bioRxiv https://doi.org/10.1101/407007 (2020).

    Article  Google Scholar 

  13. Cichy, R. M. et al. The Algonauts Project: a platform for communication between the sciences of biological and artificial intelligence. Preprint at arXiv https://doi.org/10.48550/arXiv.1905.05675 (2019).

    Article  Google Scholar 

  14. Allen, E. J. et al. A massive 7 T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat. Neurosci. 25, 116–126 (2022).

    Article  CAS  PubMed  Google Scholar 

  15. Willeke, K. F. et al. The sensorium competition on predicting large-scale mouse primary visual cortex activity. Preprint at arXiv https://doi.org/10.48550/arXiv.2206.08666 (2022).

    Article  Google Scholar 

  16. RichardWebster, B., DiFalco, A., Caldesi, E. & Scheirer, W. J. Perceptual-score: a psychophysical measure for assessing the biological plausibility of visual recognition models. Preprint at arXiv https://doi.org/10.48550/arXiv.2210.08632 (2022).

    Article  Google Scholar 

  17. Schlangen, D. Targeting the benchmark: on methodology in current natural language processing research. Preprint at arXiv https://doi.org/10.48550/arXiv.2007.04792 (2020).

    Article  Google Scholar 

  18. Rumelhart, D. E., McClelland, J. L. & Group, P. R. Parallel Distributed Processing Vol. 1 (IEEE, 1988).

  19. Cichy, R. M., Khosla, A., Pantazis, D., Torralba, A. & Oliva, A. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Sci. Rep. 6, 27755 (2016).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  20. Fukushima, K. & Miyake, S. Neocognitron: a self-organizing neural network model for a mechanism of visual pattern recognition. in Competition and Cooperation in Neural Nets 267–285 (Springer, 1982).

  21. Guclu, U. & van Gerven, M. A. J. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. J. Neurosci. 35, 10005–10014 (2015).

    Article  PubMed  PubMed Central  Google Scholar 

  22. Khaligh-Razavi, S.-M. & Kriegeskorte, N. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Comput. Biol. 10, e1003915 (2014).

    Article  PubMed  PubMed Central  Google Scholar 

  23. Kietzmann, T. C. et al. Recurrence is required to capture the representational dynamics of the human visual system. Proc. Natl Acad. Sci. USA 116, 21854–21863 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  24. Seeliger, K. et al. Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage 180, 253–266 (2018).

    Article  CAS  PubMed  Google Scholar 

  25. Yamins, D. L. et al. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl Acad. Sci. USA 111, 8619–8624 (2014).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  26. Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V. & McDermott, J. H. A task-optimized neural network replicates human auditory behaviour, predicts brain responses, and reveals a cortical processing hierarchy. Neuron 98, 630–644.e16 (2018).

    Article  CAS  PubMed  Google Scholar 

  27. Saddler, M. R., Gonzalez, R. & McDermott, J. H. Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception. Nat. Commun. 12, 7278 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  28. Cadena, S. A. et al. Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks. Preprint at bioRxiv https://doi.org/10.1101/2022.05.18.492503 (2022).

    Article  Google Scholar 

  29. Jackson, R. L., Rogers, T. T. & Lambon Ralph, M. A. Reverse-engineering the cortical architecture for controlled semantic cognition. Nat. Hum. Behav. 5, 774–786 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  30. Saxe, A. M., McClelland, J. L. & Ganguli, S. A mathematical theory of semantic development in deep neural networks. Proc. Natl Acad. Sci. USA 116, 11537–11546 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  31. Doerig, A. et al. Semantic scene descriptions as an objective of human vision. Preprint at arXiv https://doi.org/10.48550/arXiv.2209.11737 (2022).

    Article  Google Scholar 

  32. Caucheteux, C. & King, J.-R. Brains and algorithms partially converge in natural language processing. Commun. Biol. 5, 134 (2022).

    Article  PubMed  PubMed Central  Google Scholar 

  33. Schrimpf, M. et al. The neural architecture of language: integrative modeling converges on predictive processing. Proc. Natl Acad. Sci. USA https://doi.org/10.1073/pnas.2105646118 (2021).

    Article  Google Scholar 

  34. Hannagan, T., Agrawal, A., Cohen, L. & Dehaene, S. Emergence of a compositional neural code for written words: recycling of a convolutional neural network for reading. Proc. Natl Acad. Sci. USA 118, e2104779118 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  35. Botvinick, M., Wang, J. X., Dabney, W., Miller, K. J. & Kurth-Nelson, Z. Deep reinforcement learning and its neuroscientific implications. Neuron 107, 603–616 (2020).

    Article  CAS  PubMed  Google Scholar 

  36. Dabney, W. et al. A distributional code for value in dopamine-based reinforcement learning. Nature 577, 671–675 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  37. Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  38. Quax, S. & van Gerven, M. Emergent mechanisms of evidence integration in recurrent neural networks. PLoS ONE 13, e0205676 (2018).

    Article  PubMed  PubMed Central  Google Scholar 

  39. Lindsay, G. W. & Miller, K. D. How biological attention mechanisms improve task performance in a large-scale visual system model. eLife 7, e38105 (2018).

    Article  PubMed  PubMed Central  Google Scholar 

  40. Orhan, A. E. & Ma, W. J. A diverse range of factors affect the nature of neural representations underlying short-term memory. Nat. Neurosci. 22, 275–283 (2019).

    Article  CAS  PubMed  Google Scholar 

  41. Cross, L., Cockburn, J., Yue, Y. & O’Doherty, J. P. Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments. Neuron 109, 724–738.e7 (2021).

    Article  CAS  PubMed  Google Scholar 

  42. Feulner, B. et al. Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat. Commun. 13, 5163 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  43. Merel, J., Botvinick, M. & Wayne, G. Hierarchical motor control in mammals and machines. Nat. Commun. 10, 5489 (2019).

    Article  PubMed  PubMed Central  Google Scholar 

  44. Michaels, J. A., Schaffelhofer, S., Agudelo-Toro, A. & Scherberger, H. A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping. Proc. Natl Acad. Sci. USA 117, 32124–32135 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  45. Sussillo, D., Churchland, M. M., Kaufman, M. T. & Shenoy, K. V. A neural network that finds a naturalistic solution for the production of muscle activity. Nat. Neurosci. 18, 1025–1033 (2015).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  46. Bao, P., She, L., McGill, M. & Tsao, D. Y. A map of object space in primate inferotemporal cortex. Nature 583, 103–108 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  47. Blauch, N. M., Behrmann, M. & Plaut, D. C. A connectivity-constrained computational account of topographic organization in primate high-level visual cortex. Proc. Natl Acad. Sci. USA 119, e2112566119 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  48. Dobs, K., Martinez, J., Kell, A. J. E. & Kanwisher, N. Brain-like functional specialization emerges spontaneously in deep neural networks. Sci. Adv. 8, eabl8913 (2022).

    Article  PubMed  PubMed Central  Google Scholar 

  49. Doerig, A., Krahmer, B. & Kietzmann, T. Emergence of topographic organization in a non-convolutional deep neural network (Neuromatch 40). Perception 51, 74–75 (2022).

    Google Scholar 

  50. Higgins, I. et al. Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons. Nat. Commun. 12, 6456(2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  51. Lee, H. et al. Topographic deep artificial neural networks reproduce the hallmarks of the primate inferior temporal cortex face processing network. Preprint at bioRxiv https://doi.org/10.1101/2020.07.09.185116 (2020).

    Article  PubMed  PubMed Central  Google Scholar 

  52. Kietzmann, T. C., McClure, P. & Kriegeskorte, N. Deep neural networks in computational neuroscience. Neuroscience https://doi.org/10.1093/acrefore/9780190264086.013.46 (2019).

    Article  Google Scholar 

  53. Kriegeskorte, N. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 1, 417–446 (2015).

    Article  PubMed  Google Scholar 

  54. Lindsay, G. W. Convolutional neural networks as a model of the visual system: past, present, and future. J. Cogn. Neurosci. 33, 2017–2031 (2021).

    Article  PubMed  Google Scholar 

  55. Marblestone, A. H., Wayne, G. & Kording, K. P. Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci. 10, 94 (2016).

    Article  PubMed  PubMed Central  Google Scholar 

  56. Richards, B. A. et al. A deep learning framework for neuroscience. Nat. Neurosci. 22, 1761–1770 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  57. Saxe, A., Nelli, S. & Summerfield, C. If deep learning is the answer, what is the question? Nat. Rev. Neurosci. 22, 55–67 (2020).

    Article  PubMed  Google Scholar 

  58. Van Gerven, M. Computational foundations of natural intelligence. Front. Comput. Neurosci. 11, 112 (2017).

    Article  PubMed  PubMed Central  Google Scholar 

  59. Bowers, J. S. et al. Deep problems with neural network models of human vision. Behav. Brain Sci. https://doi.org/10.1017/S0140525X22002813 (2022).

    Article  PubMed  Google Scholar 

  60. Leek, E. C., Leonardis, A. & Heinke, D. Deep neural networks and image classification in biological vision. Vis. Res. 197, 108058 (2022).

    Article  Google Scholar 

  61. Marcus, G. Deep learning: a critical appraisal. Preprint at arXiv https://doi.org/10.48550/arXiv.1801.00631 (2018).

    Article  Google Scholar 

  62. Serre, T. Deep learning: the good, the bad, and the ugly. Annu. Rev. Vis. Sci. 5, 399–426 (2019).

    Article  PubMed  Google Scholar 

  63. Cao, R. & Yamins, D. Explanatory models in neuroscience: part 1 — taking mechanistic abstraction seriously. Preprint at arXiv https://doi.org/10.48550/arXiv.2104.01490 (2021).

    Article  Google Scholar 

  64. Cichy, R. M. & Kaiser, D. Deep neural networks as scientific models. Trends Cogn. Sci. 23, 305–317 (2019).

    Article  PubMed  Google Scholar 

  65. Storrs, K. R. & Kriegeskorte, N. Deep learning for cognitive neuroscience. Preprint at arXiv https://doi.org/10.48550/arXiv.1903.01458 (2019).

    Article  Google Scholar 

  66. Barrett, D. G., Morcos, A. S. & Macke, J. H. Analyzing biological and artificial neural networks: challenges with opportunities for synergy? Curr. Opin. Neurobiol. 55, 55–64 (2019).

    Article  CAS  PubMed  Google Scholar 

  67. Zador, A. M. A critique of pure learning and what artificial neural networks can learn from animal brains. Nat. Commun. 10, 3770 (2019).

    Article  PubMed  PubMed Central  Google Scholar 

  68. Yang, G. R. & Wang, X.-J. Artificial neural networks for neuroscientists: a primer. Neuron 107, 1048–1070 (2020).

    Article  CAS  PubMed  Google Scholar 

  69. Wichmann, F. A. & Geirhos, R. Are deep neural networks adequate behavioural models of human visual perception? Annu. Rev. Vis. Sci. https://doi.org/10.1146/annurev-vision-120522-031739 (2023).

    Article  PubMed  Google Scholar 

  70. Pulvermüller, F., Tomasello, R., Henningsen-Schomers, M. R. & Wennekers, T. Biological constraints on neural network models of cognitive function. Nat. Rev. Neurosci. 22, 488–502 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  71. Lakatos, I. Falsification and the methodology of scientific research programmes. in Can Theories Be Refuted? 205–259 (Springer, 1976).

  72. Anderson, J. R., Matessa, M. & Lebiere, C. ACT-R: a theory of higher level cognition and its relation to visual attention. Hum. Comput. Interact. 12, 439–462 (1997).

    Article  Google Scholar 

  73. Wittgenstein, L. Philosophical Investigations (John Wiley & Sons, 2009).

  74. Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. in Advances in Neural Information Processing Systems 1097–1105 (ACM, 2012).

  75. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. Preprint at arXiv https://doi.org/10.48550/arXiv.1409.1556 (2014).

    Article  Google Scholar 

  76. Nonaka, S., Majima, K., Aoki, S. C. & Kamitani, Y. Brain hierarchy score: which deep neural networks are hierarchically brain-like? iScience 24, 103013 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  77. Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P. & De Lange, F. P. A. hierarchy of linguistic predictions during natural language comprehension. Proc. Natl Acad. Sci. USA 119, e2201968119 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  78. Ponce, C. R. et al. Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. Cell 177, 999–1009.e10 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  79. Tuli, S., Dasgupta, I., Grant, E. & Griffiths, T. L. Are convolutional neural networks or transformers more like human vision? Preprint at arXiv https://doi.org/10.48550/arXiv.2105.07197 (2021).

    Article  Google Scholar 

  80. Markram, H. The human brain project. Sci. Am. 306, 50–55 (2012).

    Article  PubMed  Google Scholar 

  81. Nandi, A. et al. Single-neuron models linking electrophysiology, morphology, and transcriptomics across cortical cell types. Cell Rep. 40, 111176 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  82. Wolfram, S. Cellular automata as models of complexity. Nature 311, 419–424 (1984).

    Article  Google Scholar 

  83. Siegelmann, H. T. & Sontag, E. D. On the computational power of neural nets. J. Comput. Syst. Sci. 50, 132–150 (1995).

    Article  Google Scholar 

  84. Ali, A., Ahmad, N., de Groot, E., van Gerven, M. A. J. & Kietzmann, T. C. Predictive coding is a consequence of energy efficiency in recurrent neural networks. Patterns 3, 100639 (2022).

    Article  PubMed  PubMed Central  Google Scholar 

  85. Jaeger, H. The ‘echo state’ approach to analysing and training recurrent neural networks — with an erratum note. Bonn. Ger. Ger. Natl Res. Cent. Inf. Technol. GMD Tech. Rep. 148, 13 (2001).

    Google Scholar 

  86. Maass, W., Natschläger, T. & Markram, H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–2560 (2002).

    Article  PubMed  Google Scholar 

  87. LeCun, Y. et al. Handwritten digit recognition with a back-propagation network. in Advances in Neural Information Processing Systems 396–404 (NIPS, 1990).

  88. Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).

    Article  CAS  PubMed  Google Scholar 

  89. Doerig, A., Schmittwilken, L., Sayim, B., Manassi, M. & Herzog, M. H. Capsule networks as recurrent models of grouping and segmentation. PLoS Comput. Biol. 16, e1008017 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  90. Güçlü, U. & Van Gerven, M. A. Modeling the dynamics of human brain activity with recurrent neural networks. Front. Comput. Neurosci. 11, 7 (2017).

    Article  PubMed  PubMed Central  Google Scholar 

  91. Kar, K. & DiCarlo, J. J. Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron 109, 164–176.e5 (2021).

    Article  CAS  PubMed  Google Scholar 

  92. Lindsay, G. W., Mrsic-Flogel, T. D. & Sahani, M. Bio-inspired neural networks implement different recurrent visual processing strategies than task-trained ones do. Preprint at bioRxiv https://doi.org/10.1101/2022.03.07.483196 (2022).

    Article  Google Scholar 

  93. Linsley, D., Kim, J. & Serre, T. Sample-efficient image segmentation through recurrence. Preprint at arXiv https://doi.org/10.48550/arXiv.1811.11356 (2018).

    Article  Google Scholar 

  94. Nayebi, A. et al. Goal-driven recurrent neural network models of the ventral visual stream. Preprint at bioRxiv https://doi.org/10.1101/2021.02.17.431717 (2021).

    Article  Google Scholar 

  95. Thorat, S., Aldegheri, G. & Kietzmann, T. C. Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization. Preprint at arXiv https://doi.org/10.48550/arXiv.2111.07898 (2021).

    Article  Google Scholar 

  96. Bertalmío, M. et al. Evidence for the intrinsically nonlinear nature of receptive fields in vision. Sci. Rep. 10, 16277 (2020).

    Article  PubMed  PubMed Central  Google Scholar 

  97. Quax, S. C., D’Asaro, M. & van Gerven, M. A. Adaptive time scales in recurrent neural networks. Sci. Rep. 10, 11360 (2020).

    Article  PubMed  PubMed Central  Google Scholar 

  98. Voelker, A., Kajić, I. & Eliasmith, C. Legendre memory units: continuous-time representation in recurrent neural networks. in Advances in Neural Information Processing Systems Vol. 32 (NeurIPS, 2019).

  99. Bohte, S. M. The evidence for neural information processing with precise spike-times: a survey. Nat. Comput. 3, 195–206 (2004).

    Article  Google Scholar 

  100. Gerstner, W. & Kistler, W. M. Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge Univ. Press, 2002).

  101. Sörensen, L. K., Zambrano, D., Slagter, H. A., Bohté, S. M. & Scholte, H. S. Leveraging spiking deep neural networks to understand the neural mechanisms underlying selective attention. J. Cogn. Neurosci. 34, 655–674 (2022).

    Article  PubMed  Google Scholar 

  102. Zenke, F. & Ganguli, S. Superspike: supervised learning in multilayer spiking neural networks. Neural Comput. 30, 1514–1541 (2018).

    Article  PubMed  PubMed Central  Google Scholar 

  103. Stimberg, M., Brette, R. & Goodman, D. F. Brian 2, an intuitive and efficient neural simulator. eLife 8, e47314 (2019).

    CAS  Google Scholar 

  104. Guerguiev, J., Lillicrap, T. P. & Richards, B. A. Towards deep learning with segregated dendrites. eLife 6, e22901 (2017).

    Google Scholar 

  105. Sacramento, J., Ponte Costa, R., Bengio, Y. & Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. in Advances in Neural Information Processing Systems Vol. 31 (NeurIPS, 2018).

  106. Antolík, J., Hofer, S. B., Bednar, J. A. & Mrsic-Flogel, T. D. Model constrained by visual hierarchy improves prediction of neural responses to natural scenes. PLoS Comput. Biol. 12, e1004927 (2016).

    Article  PubMed  PubMed Central  Google Scholar 

  107. Cadena, S. A. et al. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Comput. Biol. 15, e1006897 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  108. Ecker, A. S. et al. A rotation-equivariant convolutional neural network model of primary visual cortex. Preprint at arXiv https://doi.org/10.48550/arXiv.1809.10504 (2018).

    Article  Google Scholar 

  109. Kindel, W. F., Christensen, E. D. & Zylberberg, J. Using deep learning to probe the neural code for images in primary visual cortex. J. Vis. 19, 29–29 (2019).

    Article  PubMed  PubMed Central  Google Scholar 

  110. Klindt, D., Ecker, A. S., Euler, T. & Bethge, M. Neural system identification for large populations separating ‘what’ and ‘where’. in Advances in Neural Information Processing Systems Vol. 30 (NIPS, 2017).

  111. Seeliger, K. et al. End-to-end neural system identification with neural information flow. PLoS Comput. Biol. 17, e1008558 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  112. St-Yves, G. & Naselaris, T. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces. NeuroImage 180, 188–202 (2018).

    Article  PubMed  Google Scholar 

  113. Tripp, B. Approximating the architecture of visual cortex in a convolutional network. Neural Comput. 31, 1551–1591 (2019).

    Article  PubMed  Google Scholar 

  114. Bellec, P. & Boyle, J. Bridging the gap between perception and action: the case for neuroimaging. Preprint at PsyarXiv https://doi.org/10.31234/osf.io/3epws (2019).

    Article  Google Scholar 

  115. Hebart, M. N. et al. THINGS: a database of 1,854 object concepts and more than 26,000 naturalistic object images. PLoS ONE 14, e0223792 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  116. Naselaris, T., Allen, E. & Kay, K. Extensive sampling for complete models of individual brains. Curr. Opin. Behav. Sci. 40, 45–51 (2021).

    Article  Google Scholar 

  117. Seeliger, K., Sommers, R. P., Güçlü, U., Bosch, S. E. & Van Gerven, M. A. J. A large single-participant fMRI dataset for probing brain responses to naturalistic stimuli in space and time. Preprint at bioRxiv https://doi.org/10.1101/687681 (2019).

    Article  Google Scholar 

  118. Siegle, J. H. et al. Survey of spiking in the mouse visual system reveals functional hierarchy. Nature 592, 86–92 (2021).

    Article  CAS  PubMed  Google Scholar 

  119. Mehrer, J., Spoerer, C. J., Jones, E. C., Kriegeskorte, N. & Kietzmann, T. C. An ecologically motivated image dataset for deep learning yields better models of human vision. Proc. Natl Acad. Sci. USA 118, e2011417118 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  120. Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. in International Conference on Machine Learning 1597–1607 (PMLR, 2020).

  121. Konkle, T. & Alvarez, G. A. A self-supervised domain-general learning framework for human ventral stream representation. Preprint at Nat. Commun. 13, 491 (2020).

    Article  Google Scholar 

  122. Choksi, B. et al. Predify: augmenting deep neural networks with brain-inspired predictive coding dynamics. Adv. Neural Inf. Process. Syst. 34, 14069–14083 (2021).

    Google Scholar 

  123. Lotter, W., Kreiman, G. & Cox, D. A neural network trained for prediction mimics diverse features of biological neurons and perception. Nat. Mach. Intell. 2, 210–219 (2020).

    Article  PubMed  PubMed Central  Google Scholar 

  124. Soulos, P. & Isik, L. Disentangled face representations in deep generative models and the human brain. in NeurIPS 2020 Workshop SVRHM (NeurIPS, 2020).

  125. Storrs, K. R., Anderson, B. L. & Fleming, R. W. Unsupervised learning predicts human perception and misperception of gloss. Nat. Hum. Behav. 5, 1402–1417 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  126. Franzius, M., Sprekeler, H. & Wiskott, L. Slowness and sparseness lead to place, head-direction, and spatial-view cells. PLoS Comput. Biol. 3, e166 (2007).

    Article  PubMed  PubMed Central  Google Scholar 

  127. Franzius, M., Wilbert, N. & Wiskott, L. Invariant object recognition with slow feature analysis. in International Conference on Artificial Neural Networks 961–970 (Springer, 2008).

  128. Kayser, C., Einhäuser, W., Dümmer, O., König, P. & Körding, K. Extracting slow subspaces from natural videos leads to complex cells. in Artificial Neural Networks — ICANN 2001 Vol. 2130 (eds Dorffner, G., Bischof, H. & Hornik, K.) 1075–1080 (Springer, 2001).

  129. Wiskott, L. & Sejnowski, T. J. Slow feature analysis: unsupervised learning of invariances. Neural Comput. 14, 715–770 (2002).

    Article  PubMed  Google Scholar 

  130. Wyss, R., König, P. & Verschure, P. F. J. A model of the ventral visual system based on temporal stability and local memory. PLoS Biol. 4, e120 (2006).

    Article  PubMed  PubMed Central  Google Scholar 

  131. Lindsay, G. W., Merel, J., Mrsic-Flogel, T. & Sahani, M. Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning. Preprint at arXiv https://doi.org/10.48550/arXiv.2112.02027 (2021).

    Article  Google Scholar 

  132. Dwivedi, K., Bonner, M. F., Cichy, R. M. & Roig, G. Unveiling functions of the visual cortex using task-specific deep neural networks. PLoS Comput. Biol. 17, e1009267 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  133. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, 533–536 (1986).

    Article  Google Scholar 

  134. Ahmad, N., Schrader, E. & van Gerven, M. Constrained parameter inference as a principle for learning. Preprint at arXiv https://doi.org/10.48550/arXiv.2203.13203 (2022).

    Article  Google Scholar 

  135. Lillicrap, T. P., Santoro, A., Marris, L., Akerman, C. J. & Hinton, G. Backpropagation and the brain. Nat. Rev. Neurosci. 21, 335–346 (2020).

    Article  CAS  PubMed  Google Scholar 

  136. Lillicrap, T. P., Cownden, D., Tweed, D. B. & Akerman, C. J. Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 7, 13276 (2016).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  137. Pozzi, I., Bohte, S. & Roelfsema, P. Attention-gated brain propagation: how the brain can implement reward-based error backpropagation. Adv. Neural Inf. Process. Syst. 33, 2516–2526 (2020).

    Google Scholar 

  138. Richards, B. A. & Lillicrap, T. P. Dendritic solutions to the credit assignment problem. Curr. Opin. Neurobiol. 54, 28–36 (2019).

    Article  CAS  PubMed  Google Scholar 

  139. Hebb, D. O. The Organization of Behaviour: A Neuropsychological Theory (Psychology Press, 2005).

  140. Rao, R. P. & Ballard, D. H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79 (1999).

    Article  CAS  PubMed  Google Scholar 

  141. Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern. 43, 59–69 (1982).

    Article  Google Scholar 

  142. Saxe, A. M., McClelland, J. L. & Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. Preprint at arXiv https://doi.org/10.48550/arXiv.1312.6120 (2013).

    Article  Google Scholar 

  143. Benjamin, A. S., Zhang, L.-Q., Qiu, C., Stocker, A. & Kording, K. P. Efficient neural codes naturally emerge through gradient descent learning. Nat. Commun. 13, 7972 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  144. Munakata, Y. & Pfaffly, J. Hebbian learning and development. Dev. Sci. 7, 141–148 (2004).

    Article  PubMed  Google Scholar 

  145. Berrios, W. & Deza, A. Joint rotational invariance and adversarial training of a dual-stream transformer yields state of the art brain-score for area V4. Preprint at https://doi.org/10.48550/arXiv.2203.06649 (2022).

  146. St-Yves, G., Allen, E. J., Wu, Y., Kay, K. & Naselaris, T. Brain-optimized neural networks learn non-hierarchical models of representation in human visual cortex. Preprint at bioRxiv https://doi.org/10.1101/2022.01.21.477293 (2022).

    Article  Google Scholar 

  147. Hasenstaub, A., Otte, S., Callaway, E. & Sejnowski, T. J. Metabolic cost as a unifying principle governing neuronal biophysics. Proc. Natl Acad. Sci. USA 107, 12329–12334 (2010).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  148. Stone, J. V. Principles of Neural Information Theory: Computational Neuroscience and Metabolic Efficiency (Tutorial Introductions) (Tutorial Introductions, 2018).

  149. Wang, Z., Wei, X.-X., Stocker, A. A. & Lee, D. D. Efficient neural codes under metabolic constraints. in Advances in Neural Information Processing Systems Vol. 29 (NIPS, 2016).

  150. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).

  151. Dosovitskiy, A. et al. An image is worth 16 × 16 words: transformers for image recognition at scale. Preprint at arXiv https://doi.org/10.48550/arXiv.2010.11929 (2020).

    Article  Google Scholar 

  152. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).

    Article  CAS  PubMed  Google Scholar 

  153. Mnih, V. et al. Playing Atari with deep reinforcement learning. Preprint at arXiv https://doi.org/10.48550/arXiv.1312.5602 (2013).

    Article  Google Scholar 

  154. Vinyals, O. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019).

    Article  CAS  PubMed  Google Scholar 

  155. Spoerer, C. J., Kietzmann, T. C., Mehrer, J., Charest, I. & Kriegeskorte, N. Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision. PLoS Comput. Biol. 16, e1008215 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  156. Geirhos, R. et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. in International Conference on Learning Representations (ICLR, 2018).

  157. Geirhos, R. et al. Generalisation in humans and deep neural networks. Advances in Neural Information Processing Systems Vol. 31 (NIPS, 2018).

  158. Singer, J. J., Seeliger, K., Kietzmann, T. C. & Hebart, M. N. From photos to sketches-how humans and deep neural networks process objects across different levels of visual abstraction. J. Vis. 22, 4 (2022).

    Article  PubMed  PubMed Central  Google Scholar 

  159. Doerig, A., Bornet, A., Choung, O. H. & Herzog, M. H. Crowding reveals fundamental differences in local vs. global processing in humans and machines. Vis. Res. 167, 39–45 (2020).

    Article  CAS  PubMed  Google Scholar 

  160. Funke, C. M. et al. Comparing the ability of humans and DNNs to recognise closed contours in cluttered images. in 18th Annual Meeting of the Vision Sciences Society (VSS 2018) 213 (VSS, 2018).

  161. Jacob, G., Pramod, R. T., Katti, H. & Arun, S. P. Qualitative similarities and differences in visual object representations between brains and deep networks. Nat. Commun. 12, 1872 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  162. Kim, J., Linsley, D., Thakkar, K. & Serre, T. Disentangling neural mechanisms for perceptual grouping. Preprint at arXiv https://doi.org/10.48550/arXiv.1906.01558 (2019).

    Article  Google Scholar 

  163. Loke, J. et al. A critical test of deep convolutional neural networks’ ability to capture recurrent processing in the brain using visual masking. J. Cogn. Neurosci. 34, 2390–2405 (2022).

    Article  PubMed  Google Scholar 

  164. RichardWebster, B., Anthony, S. & Scheirer, W. Psyphy: a psychophysics driven evaluation framework for visual recognition. In IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 41 (IEEE, 2018).

  165. Sörensen, L. K., Bohté, S. M., De Jong, D., Slagter, H. A. & Scholte, H. S. Mechanisms of human dynamic object recognition revealed by sequential deep neural networks. Preprint at bioRxiv https://doi.org/10.1101/2022.04.06.487259 (2022).

    Article  Google Scholar 

  166. Firestone, C. Performance vs. competence in human–machine comparisons. Proc. Natl Acad. Sci. USA 117, 26562–26571 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  167. Lonnqvist, B., Bornet, A., Doerig, A. & Herzog, M. H. A comparative biology approach to DNN modeling of vision: a focus on differences, not similarities. J. Vis. 21, 17–17 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  168. Ma, W. J. & Peters, B. A neural network walks into a lab: towards using deep nets as models for human behaviour. Preprint at arXiv https://doi.org/10.48550/arXiv.2005.02181 (2020).

    Article  Google Scholar 

  169. Neri, P. Deep networks may capture biological behaviour for shallow, but not deep, empirical characterizations. Neural Netw. 152, 244–266 (2022).

    Article  PubMed  Google Scholar 

  170. Kriegeskorte, N., Mur, M. & Bandettini, P. A. Representational similarity analysis-connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4 (2008).

    PubMed  PubMed Central  Google Scholar 

  171. Kriegeskorte, N. & Wei, X.-X. Neural tuning and representational geometry. Nat. Rev. Neurosci. 22, 703–718 (2021).

    Article  CAS  PubMed  Google Scholar 

  172. Kaniuth, P. & Hebart, M. N. Feature-reweighted representational similarity analysis: a method for improving the fit between computational models, brains, and behaviour. NeuroImage 257, 119294 (2022).

    Article  PubMed  Google Scholar 

  173. Storrs, K. R., Kietzmann, T. C., Walther, A., Mehrer, J. & Kriegeskorte, N. Diverse deep neural networks all predict human inferior temporal cortex well, after training and fitting. J. Cogn. Neurosci. 33, 2044–2064 (2021).

    PubMed  Google Scholar 

  174. Kornblith, S., Norouzi, M., Lee, H. & Hinton, G. Similarity of neural network representations revisited. in International Conference on Machine Learning 3519–3529 (PMLR, 2019).

  175. Kriegeskorte, N. & Diedrichsen, J. Peeling the onion of brain representations. Annu. Rev. Neurosci. 42, 407–432 (2019).

    Article  CAS  PubMed  Google Scholar 

  176. Naselaris, T., Kay, K. N., Nishimoto, S. & Gallant, J. L. Encoding and decoding in fMRI. NeuroImage 56, 400–410 (2011).

    Article  PubMed  Google Scholar 

  177. van Gerven, M. A. J. A primer on encoding models in sensory neuroscience. J. Math. Psychol. 76, 172–183 (2017).

    Article  Google Scholar 

  178. Sexton, N. J. & Love, B. C. Reassessing hierarchical correspondences between brain and deep networks through direct interface. Sci. Adv. 8, eabm2219 (2022).

    Article  PubMed  PubMed Central  Google Scholar 

  179. Bashivan, P., Kar, K. & DiCarlo, J. J. Neural population control via deep image synthesis. Science 364, aav9436 (2019).

    Article  Google Scholar 

  180. Gu, Z. et al. NeuroGen: activation optimized image synthesis for discovery neuroscience. NeuroImage 247, 118812 (2022).

    Article  PubMed  Google Scholar 

  181. Ratan Murty, N. A., Bashivan, P., Abate, A., DiCarlo, J. J. & Kanwisher, N. Computational models of category-selective brain regions enable high-throughput tests of selectivity. Nat. Commun. 12, 5540 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  182. Mehrer, J., Spoerer, C. J., Kriegeskorte, N. & Kietzmann, T. C. Individual differences among deep neural network models. Nat. Commun. 11, 5725 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  183. Doshi, F. R. & Konkle, T. Visual object topographic motifs emerge from self-organization of a unified representational space. Preprint at bioRxiv https://doi.org/10.1101/2022.09.06.506403 (2022).

    Article  Google Scholar 

  184. Geadah, V., Horoi, S., Kerg, G., Wolf, G. & Lajoie, G. Goal-driven optimization of single-neuron properties in artificial networks reveals regularization role of neural diversity and adaptation. Preprint at bioRxiv https://doi.org/10.1101/2022.04.29.489963 (2022).

    Article  Google Scholar 

  185. Elsayed, G., Ramachandran, P., Shlens, J. & Kornblith, S. Revisiting spatial invariance with low-rank local connectivity. in International Conference on Machine Learning 2868–2879 (PMLR, 2020).

  186. Zaadnoordijk, L., Besold, T. R. & Cusack, R. Lessons from infant learning for unsupervised machine learning. Nat. Mach. Intell. 4, 510–520 (2022).

    Article  Google Scholar 

  187. Rane, S. et al. Predicting word learning in children from the performance of computer vision systems. Preprint at arXiv https://doi.org/10.48550/arXiv.2207.09847 (2022).

    Article  Google Scholar 

  188. Cadena, S. A. et al. How well do deep neural networks trained on object recognition characterize the mouse visual system? In Neuro-AI Workshop at the Neural Information Processing Conference (NeurIPS, 2019).

  189. Cao, R. & Yamins, D. Explanatory models in neuroscience: part 2 — constraint-based intelligibility. Preprint at arXiv https://doi.org/10.48550/arXiv.2104.01489 (2021).

    Article  Google Scholar 

  190. Kanwisher, N., Khosla, M. & Dobs, K. Using artificial neural networks to ask ‘why’ questions of minds and brains. Trends Neurosci. 46, 240–254 (2023).

    Article  CAS  PubMed  Google Scholar 

  191. Olshausen, B. A. & Field, D. J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996).

    Article  CAS  PubMed  Google Scholar 

  192. Cichy, R. M., Khosla, A., Pantazis, D. & Oliva, A. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks. NeuroImage 153, 346–358 (2017).

    Article  Google Scholar 

  193. Eickenberg, M., Gramfort, A., Varoquaux, G. & Thirion, B. Seeing it all: convolutional network layers map the function of the human visual system. NeuroImage 152, 184–194 (2017).

    Article  PubMed  Google Scholar 

  194. Averbeck, B. B. Pruning recurrent neural networks replicates adolescent changes in working memory and reinforcement learning. Proc. Natl Acad. Sci. USA 119, e2121331119 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  195. Rust, N. C. & Jannuzi, B. G. Identifying objects and remembering images: insights from deep neural networks. Curr. Dir. Psychol. Sci. 31, 09637214221083663 (2022).

    Article  Google Scholar 

  196. Tanaka, H. et al. From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction. Adv. Neural Inf. Process. Syst. https://papers.nips.cc/paper_files/paper/2019/hash/eeaebbffb5d29ff62799637fc51adb7b-Abstract.html (2019).

  197. Berner, J., Grohs, P., Kutyniok, G. & Petersen, P. The modern mathematics of deep learning. in Mathematical Aspects of Deep Learning (eds Grohs, P. & Kutyniok, G.) 1–111 (Cambridge Univ. Press, 2022); https://doi.org/10.1017/9781009025096.002.

  198. Olshausen, B. A. & Field, D. J. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vis. Res. 37, 3311–3325 (1997).

    Article  CAS  PubMed  Google Scholar 

  199. Nakkiran, P. et al. Deep double descent: where bigger models and more data hurt. J. Stat. Mech. Theory Exp. 2021, 124003 (2021).

    Article  Google Scholar 

  200. Jacot, A., Gabriel, F. & Hongler, C. Neural tangent kernel: convergence and generalization in neural networks. in Advances in Neural Information Processing Systems Vol. 31 (NIPS, 2018).

  201. Simsek, B. et al. Geometry of the loss landscape in overparameterized neural networks: symmetries and invariances. in International Conference on Machine Learning 9722–9732 (PMLR, 2021).

  202. Minh, D., Wang, H. X., Li, Y. F. & Nguyen, T. N. Explainable artificial intelligence: a comprehensive review. Artif. Intell. Rev. 55, 3503–3568 (2022).

    Article  Google Scholar 

  203. Kar, K., Kornblith, S. & Fedorenko, E. Interpretability of artificial neural network models in artificial intelligence versus neuroscience. Nat. Mach. Intell. 4, 1065–1067 (2022).

    Article  Google Scholar 

  204. Simonyan, K., Vedaldi, A. & Zisserman, A. Deep inside convolutional networks: visualising image classification models and saliency maps. Preprint at arXiv https://doi.org/10.48550/arXiv.1312.6034 (2013).

    Article  Google Scholar 

  205. Zeiler, M. D. & Fergus, R. Visualizing and understanding convolutional networks. in European Conference on Computer Vision 818–833 (Springer, 2014).

  206. Ribeiro, M. T., Singh, S. & Guestrin, C. ‘Why should I trust you?’ Explaining the predictions of any classifier. in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1135–1144 (ACM, 2016).

  207. Fong, R. C. & Vedaldi, A. Interpretable explanations of black boxes by meaningful perturbation. in Proceedings of the IEEE International Conference on Computer Vision 3429–3437 (IEEE, 2017).

  208. Olah, C., Mordvintsev, A. & Schubert, L. Feature visualization. Distill 2, e7 (2017).

    Article  Google Scholar 

  209. Hendricks, L. A. et al. Generating visual explanations. in European Conference on Computer Vision 3–19 (Springer, 2016).

  210. Herzog, M. H. & Manassi, M. Uncorking the bottleneck of crowding: a fresh look at object recognition. Curr. Opin. Behav. Sci. 1, 86–93 (2015).

    Article  Google Scholar 

  211. Doerig, A. et al. Beyond Bouma’s window: how to explain global aspects of crowding? PLOS Comput. Biol. 15, e1006580 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  212. Herzog, M. H., Sayim, B., Chicherov, V. & Manassi, M. Crowding, grouping, and object recognition: a matter of appearance. J. Vis. 15, 5–5 (2015).

    Article  PubMed  PubMed Central  Google Scholar 

  213. Sabour, S., Frosst, N. & Hinton, G. E. Dynamic routing between capsules. in Advances in Neural Information Processing Systems 3856–3866 (NIPS, 2017).

  214. Bornet, A., Doerig, A., Herzog, M. H., Francis, G. & Van der Burg, E. Shrinking Bouma’s window: how to model crowding in dense displays. PLoS Comput. Biol. 17, e1009187 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  215. Choung, O.-H., Bornet, A., Doerig, A. & Herzog, M. H. Dissecting (un) crowding. J. Vis. 21, 10 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  216. Spoerer, C. J., McClure, P. & Kriegeskorte, N. Recurrent convolutional neural networks: a better model of biological object recognition. Front. Psychol. 8, 1551 (2017).

    Article  PubMed  PubMed Central  Google Scholar 

  217. Kar, K., Kubilius, J., Schmidt, K., Issa, E. B. & DiCarlo, J. J. Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behaviour. Nat. Neurosci. 22, 974 (2019).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  218. van Bergen, R. S. & Kriegeskorte, N. Going in circles is the way forward: the role of recurrence in visual inference. Curr. Opin. Neurobiol. 65, 176–193 (2020).

    Article  PubMed  Google Scholar 

  219. Kreiman, G. & Serre, T. Beyond the feedforward sweep: feedback computations in the visual cortex. Primates 9, 16 (2019).

    Google Scholar 

  220. Nayebi, A. et al. Recurrent connections in the primate ventral visual stream mediate a trade-off between task performance and network size during core object recognition. Neural Comput. 34, 1652–1675 (2022).

    Article  PubMed  Google Scholar 

  221. Sullivan, J., Mei, M., Perfors, A., Wojcik, E. & Frank, M. C. SAYCam: a large, longitudinal audiovisual dataset recorded from the infant’s perspective. Open. Mind 5, 20–29 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  222. Clay, V., König, P., Kühnberger, K.-U. & Pipa, G. Learning sparse and meaningful representations through embodiment. Neural Netw. 134, 23–41 (2021).

    Article  PubMed  Google Scholar 

  223. Gan, C. et al. The threeDworld transport challenge: a visually guided task-and-motion planning benchmark for physically realistic embodied AI. Preprint at arXiv https://doi.org/10.48550/arXiv.2103.14025 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  224. Chen, Y. et al. COCO-Search18 fixation dataset for predicting goal-directed attention control. Sci. Rep. 11, 8776 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  225. Zhuang, C. et al. Unsupervised neural network models of the ventral visual stream. Proc. Natl Acad. Sci. USA 118, e2014196118 (2021).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  226. Konkle, T. & Alvarez, G. A. A self-supervised domain-general learning framework for human ventral stream representation. Nat. Commun. 13, 491 (2022).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  227. Bakhtiari, S., Mineault, P., Lillicrap, T., Pack, C. & Richards, B. The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning. in Advances in Neural Information Processing Systems Vol. 34 (NIPS, 2021).

  228. Nayebi, A. et al. Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation. Preprint at bioRxiv https://doi.org/10.1101/2021.06.16.448730 (2022).

    Article  Google Scholar 

  229. Mineault, P., Bakhtiari, S., Richards, B. & Pack, C. Your head is there to move you around: goal-driven models of the primate dorsal pathway. in Advances in Neural Information Processing Systems Vol. 34 (NIPS, 2021).

  230. Stringer, S. M., Rolls, E. T. & Trappenberg, T. P. Self-organizing continuous attractor network models of hippocampal spatial view cells. Neurobiol. Learn. Mem. 83, 79–92 (2005).

    Article  CAS  PubMed  Google Scholar 

  231. Tsodyks, M. Attractor neural network models of spatial maps in hippocampus. Hippocampus 9, 481–489 (1999).

    Article  CAS  PubMed  Google Scholar 

  232. Uria, B. et al. The spatial memory pipeline: a model of egocentric to allocentric understanding in mammalian brains. Preprint at bioRxiv https://doi.org/10.1101/2020.11.11.378141 (2020).

    Article  Google Scholar 

  233. Whittington, J. C. et al. The Tolman–Eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation. Cell 183, 1249–1263.e23 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  234. Whittington, J. C., Warren, J. & Behrens, T. E. Relating transformers to models and neural representations of the hippocampal formation. Preprint at arXiv https://doi.org/10.48550/arXiv.2112.04035 (2021).

    Article  Google Scholar 

  235. Acunzo, D. J., Low, D. M. & Fairhall, S. L. Deep neural networks reveal topic-level representations of sentences in medial prefrontal cortex, lateral anterior temporal lobe, precuneus, and angular gyrus. NeuroImage 251, 119005 (2022).

    Article  PubMed  Google Scholar 

  236. Riveland, R. & Pouget, A. A neural model of task compositionality with natural language instructions. Preprint at bioRxiv https://doi.org/10.1101/2022.02.22.481293 (2022).

    Article  Google Scholar 

  237. Xu, P., Zhu, X. & Clifton, D. A. Multimodal learning with transformers: a survey. Preprint at arXiv https://doi.org/10.48550/arXiv.2206.06488 (2022).

    Article  Google Scholar 

  238. Ivanova, A. A. et al. Beyond linear regression: mapping models in cognitive neuroscience should align with research goals. Preprint at arXiv https://doi.org/10.48550/arXiv.2208.10668 (2022).

    Article  Google Scholar 

  239. Peterson, J. C., Abbott, J. T. & Griffiths, T. L. Evaluating (and improving) the correspondence between deep neural networks and human representations. Cogn. Sci. 42, 2648–2669 (2018).

    Article  PubMed  Google Scholar 

  240. Golan, T., Raju, P. C. & Kriegeskorte, N. Controversial stimuli: pitting neural networks against each other as models of human cognition. Proc. Natl Acad. Sci. USA 117, 29330–29337 (2020).

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  241. Geirhos, R., Meding, K. & Wichmann, F. A. Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency. Adv. Neural Inf. Process. Syst. 33, 13890–13902 (2020).

    Google Scholar 

  242. Biscione, V. & Bowers, J. S. Do DNNs trained on natural images acquire Gestalt properties? Preprint at arXiv https://doi.org/10.48550/arXiv.2203.07302 (2022).

    Article  Google Scholar 

  243. Feather, J., Durango, A., Gonzalez, R. & McDermott, J. Metamers of neural networks reveal divergence from human perceptual systems. Advances in Neural Information Processing Systems Vol. 32 (NIPS, 2019).

  244. Mastrogiuseppe, F. & Ostojic, S. Linking connectivity, dynamics, and computations in low-rank recurrent neural networks. Neuron 99, 609–623.e29 (2018).

    Article  CAS  PubMed  Google Scholar 

  245. Dujmović, M., Bowers, J., Adolfi, F. & Malhotra, G. The pitfalls of measuring representational similarity using representational similarity analysis. Preprint at bioRxiv https://doi.org/10.1101/2022.04.05.487135 (2022).

    Article  Google Scholar 

  246. Elmoznino, E. & Bonner, M. F. High-performing neural network models of visual cortex benefit from high latent dimensionality. Preprint at bioRxiv https://doi.org/10.1101/2022.07.13.499969 (2022).

    Article  Google Scholar 

  247. Schaeffer, R., Khona, M. & Fiete, I. R. No free lunch from deep learning in neuroscience: a case study through models of the entorhinal-hippocampal circuit. in ICML 2022 2nd AI for Science Workshop (ICML, 2022).

  248. Crick, F. The recent excitement about neural networks. Nature 337, 129–132 (1989).

    Article  CAS  PubMed  Google Scholar 

  249. Szegedy, C. et al. Intriguing properties of neural networks. in 2nd International Conference on Learning Representations, ICLR 2014 (ICLR, 2014).

  250. Moosavi-Dezfooli, S.-M., Fawzi, A. & Frossard, P. Deepfool: a simple and accurate method to fool deep neural networks. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2574–2582 (IEEE, 2016).

  251. Nguyen, A., Yosinski, J. & Clune, J. Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 427–436 (IEEE, 2015).

  252. Baker, N., Lu, H., Erlikhman, G. & Kellman, P. J. Deep convolutional networks do not classify based on global object shape. PLoS Comput. Biol. 14, e1006613 (2018).

    Article  PubMed  PubMed Central  Google Scholar 

  253. Heinke, D., Wachman, P., van Zoest, W. & Leek, E. C. A failure to learn object shape geometry: implications for convolutional neural networks as plausible models of biological vision. Vis. Res. 189, 81–92 (2021).

    Article  PubMed  Google Scholar 

  254. Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. Preprint at arXiv https://doi.org/10.48550/arXiv.1412.6572 (2014).

    Article  Google Scholar 

  255. Bai, T., Luo, J., Zhao, J., Wen, B. & Wang, Q. Recent advances in adversarial training for adversarial robustness. Preprint at arXiv https://doi.org/10.48550/arXiv.2102.01356 (2021).

    Article  PubMed  PubMed Central  Google Scholar 

  256. Dapello, J. et al. Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. Adv. Neural Inf. Process. Syst. 33, 13073–13087 (2020).

    Google Scholar 

  257. Malhotra, G., Evans, B. D. & Bowers, J. S. Hiding a plane with a pixel: examining shape-bias in CNNs and the benefit of building in biological constraints. Vis. Res. 174, 57–68 (2020).

    Article  PubMed  Google Scholar 

  258. Machiraju, H., Choung, O.-H., Herzog, M. H. & Frossard, P. Empirical advocacy of bio-inspired models for robust image recognition. Preprint at arXiv https://doi.org/10.48550/arXiv.2205.09037 (2022).

    Article  Google Scholar 

  259. Ilyas, A. et al. Adversarial examples are not bugs, they are features. Preprint at arXiv https://doi.org/10.48550/arXiv.1905.02175 (2019).

    Article  Google Scholar 

  260. Geirhos, R. et al. Shortcut learning in deep neural networks. Nat. Mach. Intell. 2, 665–673 (2020).

    Article  Google Scholar 

  261. Elsayed, G. et al. Adversarial examples that fool both computer vision and time-limited humans. in Advances in Neural Information Processing Systems 3910–3920 (NIPS, 2018).

  262. Guo, C. et al. Adversarially trained neural representations are already as robust as biological neural representations. in International Conference on Machine Learning 8072–8081 (PMLR, 2022).

  263. Zhou, Z. & Firestone, C. Humans can decipher adversarial images. Nat. Commun. 10, 1334 (2019).

  264. Hermann, K., Chen, T. & Kornblith, S. The origins and prevalence of texture bias in convolutional neural networks. Adv. Neural Inf. Process. Syst. 33, 19000–19015 (2020).

  265. Evans, B. D., Malhotra, G. & Bowers, J. S. Biological convolutions improve DNN robustness to noise and generalisation. Neural Netw. 148, 96–110 (2022).

  266. Geirhos, R. et al. Partial success in closing the gap between human and machine vision. in Advances in Neural Information Processing Systems Vol. 34 (NIPS, 2021).

  267. Jagadeesh, A. V. & Gardner, J. L. Texture-like representation of objects in human visual cortex. Proc. Natl Acad. Sci. USA 119, e2115302119 (2022).

  268. Fodor, J. A. & Pylyshyn, Z. W. Connectionism and cognitive architecture: a critical analysis. Cognition 28, 3–71 (1988).

  269. Jackendoff, R. Précis of Foundations of Language: brain, meaning, grammar, evolution. Behav. Brain Sci. 26, 651–665 (2003).

  270. Marcus, G. F. The Algebraic Mind: Integrating Connectionism and Cognitive Science (MIT Press, 2003).

  271. Quilty-Dunn, J., Porot, N. & Mandelbaum, E. The best game in town: the re-emergence of the language of thought hypothesis across the cognitive sciences. Behav. Brain Sci. https://doi.org/10.1017/S0140525X22002849 (2022).

  272. Chomsky, N. Language and Mind (Cambridge Univ. Press, 2006).

  273. Frankland, S. M. & Greene, J. D. Concepts and compositionality: in search of the brain’s language of thought. Annu. Rev. Psychol. 71, 273–303 (2020).

  274. Pinker, S. & Prince, A. On language and connectionism: analysis of a parallel distributed processing model of language acquisition. Cognition 28, 73–193 (1988).

  275. Hornik, K., Stinchcombe, M. & White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366 (1989).

  276. Santoro, A., Lampinen, A., Mathewson, K., Lillicrap, T. & Raposo, D. Symbolic behaviour in artificial intelligence. Preprint at arXiv https://doi.org/10.48550/arXiv.2102.03406 (2021).

  277. Mul, M., Bouchacourt, D. & Bruni, E. Mastering emergent language: learning to guide in simulated navigation. Preprint at arXiv https://doi.org/10.48550/arXiv.1908.05135 (2019).

  278. ChatGPT: optimizing language models for dialogue. OpenAI https://openai.com/blog/chatgpt/ (2022).

  279. Shahriar, S. & Hayawi, K. Let’s have a chat! A conversation with ChatGPT: technology, applications, and limitations. Preprint at arXiv https://doi.org/10.48550/arXiv.2302.13817 (2023).

  280. OpenAI. GPT-4 technical report. Preprint at arXiv https://doi.org/10.48550/arXiv.2303.08774 (2023).

  281. Hinton, G. How to represent part-whole hierarchies in a neural network. Preprint at arXiv https://doi.org/10.48550/arXiv.2102.12627 (2021).

  282. Higgins, I. et al. β-VAE: learning basic visual concepts with a constrained variational framework. in International Conference on Learning Representations https://openreview.net/forum?id=Sy2fzU9gl (2017).

  283. Higgins, I. et al. Towards a definition of disentangled representations. Preprint at arXiv https://doi.org/10.48550/arXiv.1812.02230 (2018).

  284. Eslami, S. A. et al. Neural scene representation and rendering. Science 360, 1204–1210 (2018).

  285. Graves, A., Wayne, G. & Danihelka, I. Neural Turing machines. Preprint at arXiv https://doi.org/10.48550/arXiv.1410.5401 (2014).

  286. Garnelo, M., Arulkumaran, K. & Shanahan, M. Towards deep symbolic reinforcement learning. Preprint at arXiv https://doi.org/10.48550/arXiv.1609.05518 (2016).

  287. Holyoak, K. J. The proper treatment of symbols. in Cognitive Dynamics: Conceptual and Representational Change in Humans and Machines Vol. 229 (Psychology Press, 2000).

  288. Smolensky, P., McCoy, R. T., Fernandez, R., Goldrick, M. & Gao, J. Neurocompositional computing: from the central paradox of cognition to a new generation of AI systems. Preprint at arXiv https://doi.org/10.48550/arXiv.2205.01128 (2022).

  289. Hummel, J. E. Getting symbols out of a neural architecture. Connect. Sci. 23, 109–118 (2011).

  290. Smolensky, P. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artif. Intell. 46, 159–216 (1990).

  291. Eliasmith, C. How to Build a Brain: A Neural Architecture for Biological Cognition (Oxford Univ. Press, 2013).

  292. Flesch, T., Juechems, K., Dumbalska, T., Saxe, A. & Summerfield, C. Orthogonal representations for robust context-dependent task performance in brains and neural networks. Neuron 110, 1258–1270 (2022).

  293. Molano-Mazón, M. et al. NeuroGym: an open resource for developing and sharing neuroscience tasks. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/aqc9n (2022).

  294. Koulakov, A., Shuvaev, S., Lachi, D. & Zador, A. Encoding innate ability through a genomic bottleneck. Preprint at bioRxiv https://doi.org/10.1101/2021.03.16.435261 (2022).

  295. Heinke, D. Computational modelling in behavioural neuroscience: methodologies and approaches (minutes of discussions at the workshop in Birmingham, UK, in May 2007). in Computational Modelling in Behavioural Neuroscience 346–352 (Psychology Press, 2009).

  296. Hubel, D. H. & Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160, 106–154 (1962).

  297. Riesenhuber, M. & Poggio, T. Hierarchical models of object recognition in cortex. Nat. Neurosci. 2, 1019–1025 (1999).

  298. Wen, H. et al. Neural encoding and decoding with deep learning for dynamic natural vision. Cereb. Cortex 28, 4136–4160 (2018).

  299. Popper, K. The Logic of Scientific Discovery (Routledge, 2005).

  300. Duhem, P. M. M. The Aim and Structure of Physical Theory Vol. 13 (Princeton Univ. Press, 1991).

  301. Duhem, P. Physical theory and experiment. in Can Theories Be Refuted? 1–40 (Springer, 1976).

  302. Gillies, D. Philosophy of science in the twentieth century: four central themes. Br. J. Philos. Sci. 45, 1066–1069 (1994).

  303. Quine, W. V. O. Two dogmas of empiricism. in Can Theories Be Refuted? 41–64 (Springer, 1976).

  304. Kuhn, T. S. The Structure of Scientific Revolutions (Univ. Chicago Press, 2012).

Acknowledgements

The authors acknowledge support from SNF grant 203018 (A.D.), ERC Starting Grant 101039524 TIME (T.C.K.) and the Max Planck Research Group grant of Martin N. Hebart (K.S.).

Author information

Contributions

A.D., R.P.S., K.S. and T.C.K. initiated the project and wrote the first draft of the article. A.D., R.P.S., K.S., B.R., J.I., G.W.L., T.K., M.A.J.v.G. and T.C.K. contributed significantly to subsequent versions of this manuscript. All authors researched data for the article and contributed substantially to the conceptualization of the research programme.

Corresponding author

Correspondence to Adrien Doerig.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Neuroscience thanks Gemma Roig, who co-reviewed with Martina Vilas; Benjamin Cowley; Dietmar Heinke; and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Doerig, A., Sommers, R.P., Seeliger, K. et al. The neuroconnectionist research programme. Nat Rev Neurosci 24, 431–450 (2023). https://doi.org/10.1038/s41583-023-00705-w
