
A neuro-vector-symbolic architecture for solving Raven’s progressive matrices

A preprint version of the article is available at arXiv.


Neither deep neural networks nor symbolic artificial intelligence (AI) alone has approached the kind of intelligence expressed in humans. This is mainly because neural networks are unable to decompose joint representations into distinct objects (the so-called binding problem), while symbolic AI suffers from exhaustive rule searches, among other problems. These two problems are still pronounced in neuro-symbolic AI, which aims to combine the best of the two paradigms. Here we show that both problems can be addressed with our proposed neuro-vector-symbolic architecture (NVSA) by exploiting its powerful operators on high-dimensional distributed representations that serve as a common language between neural networks and symbolic AI. The efficacy of NVSA is demonstrated by solving Raven's progressive matrices datasets. Compared with state-of-the-art deep neural network and neuro-symbolic approaches, end-to-end training of NVSA achieves a new record average accuracy of 87.7% on the RAVEN dataset and 88.1% on I-RAVEN. Moreover, compared with the symbolic reasoning within the neuro-symbolic approaches, the probabilistic reasoning of NVSA, which uses less expensive operations on the distributed representations, is two orders of magnitude faster.
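The vector-symbolic operators mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; the attribute names, dimensionality and choice of bipolar vectors are illustrative assumptions. It shows the core idea: binding two random hypervectors by element-wise multiplication produces a dissimilar joint representation, and unbinding with one constituent recovers the other by similarity search, addressing the binding problem without an exhaustive symbolic search.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative)

# Random bipolar hypervectors act as atomic symbols; high dimensionality
# makes independently drawn vectors quasi-orthogonal.
shape = {n: rng.choice([-1, 1], size=D) for n in ["circle", "triangle"]}
color = {n: rng.choice([-1, 1], size=D) for n in ["red", "blue"]}

def bind(a, b):
    # Element-wise multiplication binds two hypervectors; for bipolar
    # vectors it is its own inverse (b * b is the all-ones vector).
    return a * b

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# An object is represented as the binding of its attribute hypervectors.
obj = bind(shape["circle"], color["red"])

# Unbinding with a known attribute recovers the other attribute,
# identified by a similarity search over the codebook.
recovered = bind(obj, color["red"])
best = max(shape, key=lambda n: cosine(recovered, shape[n]))
```

Here `best` resolves to `"circle"`: the recovered vector matches its codebook entry exactly, while its similarity to unrelated symbols stays near zero.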


Fig. 1: Illustration of the binding problem in neural networks and our solution.
Fig. 2: Proposed NVSA.
Fig. 3: NVSA backend.

Data availability

All three datasets used in this work are openly available. RAVEN and PGM are accessible from their original repositories, and I-RAVEN can be generated using the code provided by its authors.

Code availability

The code used to generate the results of this study is available in the accompanying GitHub repository (ref. 57).


  1. Raven, J., Court, J. & Raven, J. Raven’s Progressive Matrices (Oxford Psychologists Press, 1938).

  2. Carpenter, P. A., Just, M. A. & Shell, P. What one intelligence test measures: a theoretical account of the processing in the Raven progressive matrices test. Psychol. Rev. 97, 404–431 (1990).


  3. Bilker, W. B. et al. Development of abbreviated nine-item forms of the Raven’s standard progressive matrices test. Assessment (2012).

  4. Barrett, D. G. T., Hill, F., Santoro, A., Morcos, A. S. & Lillicrap, T. Measuring abstract reasoning in neural networks. In Proc. International Conference on Machine Learning (ICML) (eds Dy, J. & Krause, A.) (PMLR, 2018).

  5. Zheng, K., Zha, Z.-J. & Wei, W. Abstract reasoning with distracting features. In Advances in Neural Information Processing Systems (NeurIPS) (eds Wallach, H. et al.) (Curran Associates Inc., 2019).

  6. Zhang, C. et al. Learning perceptual inference by contrasting. In Advances in Neural Information Processing Systems (NeurIPS) (eds Wallach, H. et al.) (Curran Associates Inc., 2019).

  7. Zhang, C., Gao, F., Jia, B., Zhu, Y. & Zhu, S.-C. RAVEN: a dataset for relational and analogical visual reasoning. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2019).

  8. Hu, S., Ma, Y., Liu, X., Wei, Y. & Bai, S. Stratified rule-aware network for abstract visual reasoning. In Proc. AAAI Conference on Artificial Intelligence (AAAI) (AAAI Press, 2021).

  9. Jahrens, M. & Martinetz, T. Solving Raven’s progressive matrices with multi-layer relation networks. In 2020 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2020).

  10. Benny, Y., Pekar, N. & Wolf, L. Scale-localized abstract reasoning. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2021).

  11. Zhuo, T. & Kankanhalli, M. Solving Raven’s progressive matrices with neural networks. Preprint at arXiv (2020).

  12. Zhuo, T., Huang, Q. & Kankanhalli, M. Unsupervised abstract reasoning for Raven’s problem matrices. IEEE Trans. Image Process. 30, 8332–8341 (2021).


  13. Chalmers, D. J., French, R. M. & Hofstadter, D. R. High-level perception, representation, and analogy: a critique of artificial intelligence methodology. J. Exp. Theor. Artif. Intell. 4, 185–211 (1992).


  14. Fodor, J. A. & Pylyshyn, Z. W. Connectionism and cognitive architecture: a critical analysis. Cognition 28, 3–71 (1988).


  15. d’Avila Garcez, A., Broda, K. B. & Gabbay, D. M. Neural-Symbolic Learning System: Foundations and Applications (Springer, 2002).

  16. Marcus, G. F. The Algebraic Mind: Integrating Connectionism and Cognitive Science (MIT Press, 2001).

  17. Marcus, G. & Davis, E. Insights for AI from the human mind. Commun. ACM 64, 38–41 (2020).


  18. Yi, K. et al. Neural-symbolic VQA: disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems (NeurIPS) (eds Bengio, S. et al.) (Curran Associates Inc., 2018).

  19. Mao, J., Gan, C., Kohli, P., Tenenbaum, J. B. & Wu, J. The neuro-symbolic concept learner: interpreting scenes, words, and sentences from natural supervision. In International Conference on Learning Representations (ICLR) (2019).

  20. Han, C., Mao, J., Gan, C., Tenenbaum, J. & Wu, J. Visual concept–metaconcept learning. In Advances in Neural Information Processing Systems (NeurIPS) (eds Wallach, H. et al.) (Curran Associates Inc., 2019).

  21. Mei, L., Mao, J., Wang, Z., Gan, C. & Tenenbaum, J. B. FALCON: fast visual concept learning by integrating images, linguistic descriptions, and conceptual relations. In International Conference on Learning Representations (ICLR) (2022).

  22. Yi, K. et al. CLEVRER: collision events for video representation and reasoning. In International Conference on Learning Representations (ICLR) (2020).

  23. Zhang, C., Jia, B., Zhu, S.-C. & Zhu, Y. Abstract spatial–temporal reasoning via probabilistic abduction and execution. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2021).

  24. Shah, V. et al. Knowledge-based analogical reasoning in neuro-symbolic latent spaces. In Proc. 16th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy) (eds d'Avila Garcez, A. & Jiménez-Ruiz, E.) (2022).

  25. Rosenblatt, F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Spartan Books, 1961).

  26. Rachkovskij, D. A. & Kussul, E. M. Binding and normalization of binary sparse distributed representations by context-dependent thinning. Neural Comput. 13, 411–452 (2001).


  27. Malsburg, C. V. D. in Brain Theory (eds Palm, G. & Aertsen, A.) 161–176 (Springer, 1986).

  28. Malsburg, C. V. D. The what and why of binding: the modeler’s perspective. Neuron 24, 95–104 (1999).


  29. Gayler, R. W. in Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences (eds Holyoak, K. et al.), 405 (1998).

  30. Gayler, R. W. Vector symbolic architectures answer Jackendoff’s challenges for cognitive neuroscience. In Joint International Conference on Cognitive Science (ICCS/ASCS) (Springer, 2003).

  31. Plate, T. A. Holographic reduced representations. IEEE Trans. Neural Netw. 6, 623–641 (1995).


  32. Plate, T. A. Holographic Reduced Representations: Distributed Representation for Cognitive Structures (Center for the Study of Language and Information, Stanford, 2003).

  33. Kanerva, P. Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cogn. Comput. 1, 139–159 (2009).


  34. Kanerva, P. Large patterns make great symbols: an example of learning from example. In Proc. International Workshop on Hybrid Neural Systems (Springer, 1998).

  35. Plate, T. A. Analogy retrieval and processing with distributed vector representations. Expert Syst. (2000).

  36. Gayler, R. W. & Levy, S. D. A distributed basis for analogical mapping. In New Frontiers in Analogy Research: Proc. Second International Analogy Conference-Analogy (eds Kokinov, B. et al.) (New Bulgarian University Press, 2009).

  37. Rasmussen, D. & Eliasmith, C. A neural model of rule generation in inductive reasoning. Top. Cogn. Sci. 3, 140–153 (2011).


  38. Emruli, B., Gayler, R. W. & Sandin, F. Analogical mapping and inference with binary spatter codes and sparse distributed memory. In International Joint Conference on Neural Networks (IJCNN) (IEEE, 2013).

  39. Laiho, M., Poikonen, J. H., Kanerva, P. & Lehtonen, E. High-dimensional computing with sparse vectors. In 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS) (IEEE, 2015).

  40. Frady, E. P., Kleyko, D., Kymn, C. J., Olshausen, B. A. & Sommer, F. T. Computing on functions using randomized vector representations. Preprint at arXiv (2021).

  41. Wu, Y., Dong, H., Grosse, R. & Ba, J. The scattering compositional learner: discovering objects, attributes, relationships in analogical reasoning. Preprint at arXiv (2020).

  42. Małkiński, M. & Mańdziuk, J. Deep learning methods for abstract visual reasoning: a survey on Raven’s progressive matrices. Preprint at arXiv (2022).

  43. Mitchell, M. Abstraction and analogy-making in artificial intelligence. Ann. N. Y. Acad. Sci. 1505, 79–101 (2021).


  44. Zhuo, T. & Kankanhalli, M. Effective abstract reasoning with dual-contrast network. In International Conference on Learning Representations (ICLR) (2021).

  45. Frady, E. P., Kent, S. J., Olshausen, B. A. & Sommer, F. T. Resonator networks, 1: an efficient solution for factoring high-dimensional, distributed representations of data structures. Neural Comput. 32, 2311–2331 (2020).


  46. Kent, S. J., Frady, E. P., Sommer, F. T. & Olshausen, B. A. Resonator networks, 2: factorization performance and capacity compared to optimization-based methods. Neural Comput. 32, 2332–2388 (2020).


  47. Langenegger, J. et al. In-memory factorization of holographic perceptual representations. Nat. Nanotechnol. (2023, in the press).

  48. Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. & Eleftheriou, E. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 15, 529–544 (2020).


  49. Karunaratne, G. et al. In-memory hyperdimensional computing. Nat. Electron. 3, 327–337 (2020).


  50. Karunaratne, G. et al. Robust high-dimensional memory-augmented neural networks. Nat. Commun. 12, 2468 (2021).

  51. Lin, H. et al. Implementation of highly reliable and energy efficient in-memory hamming distance computations in 1 kb 1-transistor-1-memristor arrays. Adv. Mater. Technol. 6, 2100745 (2021).

  52. Li, H. et al. Memristive crossbar arrays for storage and computing applications. Adv. Intell. Syst. 3, 2100017 (2021).

  53. Serb, A., Kobyzev, I., Wang, J. & Prodromakis, T. A semi-holographic hyperdimensional representation system for hardware-friendly cognitive computing. Philos. Trans. R. Soc. A 378, 20190162 (2020).

  54. Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. A survey on hyperdimensional computing aka vector symbolic architectures, part I: models and data transformations. ACM Comput. Surv. 55, 130 (2022).

  55. Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. A survey on hyperdimensional computing aka vector symbolic architectures, part II: applications, cognitive models, and challenges. ACM Comput. Surv. 55, 175 (2022).

  56. Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8, 229–256 (1992).


  57. Hersche, M., Zeqiri, M., Benini, L., Sebastian, A. & Rahimi, A. IBM/neuro-vector-symbolic-architectures. Zenodo (2023).



This work is supported by the Swiss National Science Foundation (SNF), grant 200800. We thank S. El Messoussi for helping with the generalization experiments, R. W. Gayler for insightful comments that contributed to the final shape of the manuscript, and L. Rudin for the careful proofreading. We also thank A. Gray, L. Horesh, K. Clarkson, I. Y. Akhalwaya and M. Ernoult for fruitful discussions, and C. Apte and R. Haas for managerial support.

Author information

Contributions
A.R. defined the research question and direction. M.H. and M.Z. conceived the methodology and performed the experiments. L.B., A.S. and A.R. supervised the project. M.H. and A.R. wrote the manuscript with input from all authors.

Corresponding authors

Correspondence to Abu Sebastian or Abbas Rahimi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Yixin Zhu, Tao Zhuo, Ari Morcos and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Fig. 1, Notes 1–5 and references.

A detailed illustration of NVSA and its real-time demo.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hersche, M., Zeqiri, M., Benini, L. et al. A neuro-vector-symbolic architecture for solving Raven's progressive matrices. Nat. Mach. Intell. (2023).


