
Explainable neural networks that simulate reasoning


The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.


Fig. 1: Connectivity principles of ENNs.
Fig. 2: The explainability of ENN neural structure and firing.
Fig. 3: Structural analysis and flexibility of ENNs.
Fig. 4: Transferring algorithms from simple to complex problems.
Fig. 5: Decision boundary robustness to noise and adversarial attacks.

Data availability

The datasets used in this work are included with the code (ref. 59). The MNIST and CIFAR-10 datasets are publicly available (refs. 5,45). Source data are provided with this paper.

Code availability

The code used to build, train and analyze ENNs, together with the various training and test sets, has been deposited in Code Ocean (ref. 59).


  1. Minsky, M. Logical vs. analogical or symbolic vs. connectionist or neat vs. scruffy. AI Magazine 12, 34–51 (1991).


  2. Gardenfors, P. Conceptual spaces as a framework for knowledge representation. Mind Matter 2, 9–27 (2004).


  3. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).


  4. McClelland, J. L. & Rogers, T. T. The parallel distributed processing approach to semantic cognition. Nat. Rev. Neurosci. 4, 310–322 (2003).


  5. LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).


  6. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems 25 1097–1105 (NIPS, 2012).

  7. Shwartz-Ziv, R. & Tishby, N. Opening the black box of deep neural networks via information. Preprint at arXiv (2017).

  8. Szegedy, C. et al. Intriguing properties of neural networks. Preprint at arXiv (2013).

  9. Marcus, G. Deep learning: a critical appraisal. Preprint at arXiv (2018).

  10. Kindermans, P.-J. et al. The (un)reliability of saliency methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning 267–280 (Springer, 2019).

  11. Olah, C. et al. The building blocks of interpretability. Distill 3, e10 (2018).


  12. Lake, B. M., Salakhutdinov, R. & Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science 350, 1332–1338 (2015).


  13. Wang, M. & Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 312, 135–153 (2018).


  14. McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943).


  15. Han, S., Pool, J., Tran, J. & Dally, W. Learning both weights and connections for efficient neural network. In Proc. Advances in Neural Information Processing Systems 28 1135–1143 (NIPS, 2015).

  16. Happel, B. L. & Murre, J. M. Design and evolution of modular neural network architectures. Neural Netw. 7, 985–1004 (1994).


  17. Roy, A. A theory of the brain: localist representation is used widely in the brain. Front. Psychol. 3, 551 (2012).


  18. Quiroga, R. Q. & Kreiman, G. Measuring sparseness in the brain: comment on Bowers (2009). Psychol. Rev. 117, 291–297 (2010).


  19. Roy, A. A theory of the brain—the brain uses both distributed and localist (symbolic) representation. In Proc. 2011 International Joint Conference on Neural Networks 215–221 (IEEE, 2011).

  20. Tolhurst, D. J., Smyth, D. & Thompson, I. D. The sparseness of neuronal responses in ferret primary visual cortex. J. Neurosci. 29, 2355–2370 (2009).


  21. Yen, S.-C., Baker, J. & Gray, C. M. Heterogeneity in the responses of adjacent neurons to natural stimuli in cat striate cortex. J. Neurophysiol. 97, 1326–1341 (2007).


  22. Richards, B. A. & Lillicrap, T. P. Dendritic solutions to the credit assignment problem. Curr. Opin. Neurobiol. 54, 28–36 (2019).


  23. Bengio, Y., Lee, D.-H., Bornschein, J., Mesnard, T. & Lin, Z. Towards biologically plausible deep learning. Preprint at arXiv (2016).

  24. Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T. & Maida, A. Deep learning in spiking neural networks. Neural Netw. 111, 47–63 (2019).


  25. Wittgenstein, L. Philosophical Investigations 3rd edn (Basil Blackwell, 1968).

  26. Maddox, W. & Ashby, F. Comparing decision bound and exemplar models of categorization. Percept. Psychophys. 53, 49–70 (1993).


  27. Ashby, F. & Maddox, W. Human category learning. Annu. Rev. Psychol. 56, 149–178 (2005).


  28. Aristotle. Book III. In De Anima (ed. Ross, W. D.) (Clarendon Press, 1931).

  29. Aquinas, T. Prima Pars. In Summa Theologiae q. 78 (Ave Maria Press, 2000).

  30. Aquinas, T. Question 1: Truth. In Quaestiones Disputatae de Veritate (ed. Mulligan, R.) (Henry Regnery Company, 1952).

  31. Hume, D. Book 1: Of the Understanding. In A Treatise of Human Nature (ed. Selby-Bigge, L. A.) (Clarendon Press, 1896).

  32. Kant, I. Introduction. In Critique of Pure Reason (eds Guyer, P. & Wood, A.) (Cambridge Univ. Press, 1998).

  33. Aristotle. Book VII. In Metaphysics (ed. Ross, W. D.) (Clarendon Press, 1924).

  34. Aristotle. Section I. In Categories (ed. Ross, W. D.) (Clarendon Press, 1928).

  35. Bonaventure. Chapter III. In Itinerarium Mentis in Deum (ed. Cousins, E.) (Paulist Press, 1978).

  36. Aquinas, T. De Ente et Essentia (ed. Bobik, J.) (University of Notre Dame Press, 1965).

  37. Aristotle. Posterior Analytics (ed. Ross, W. D.) (Clarendon Press, 1925).

  38. Aquinas, T. Prima Pars. In Summa Theologiae q. 79 (Ave Maria Press, 2000).

  39. Hobhouse, L. T. The Theory of Knowledge: A Contribution to Some Problems of Logic and Metaphysics 3rd edn (Methuen & Co., 1921).

  40. Gurney, K. in An Introduction to Neural Networks Ch. 3 (Taylor & Francis, 1997).

  41. Bellmund, J. L. S., Gardenfors, P., Moser, E. I. & Doeller, C. F. Navigating cognition: spatial codes for human thinking. Science 362, eaat6766 (2018).


  42. Reich, D. S., Lucchinetti, C. F. & Calabresi, P. A. Multiple sclerosis. N. Engl. J. Med. 378, 169–180 (2018).


  43. Korczyn, A. D., Vakhapova, V. & Grinberg, L. T. Vascular dementia. J. Neurol. Sci. 322, 2–10 (2012).


  44. Evans, J. S. B. T. Dual-processing accounts of reasoning, judgment and social cognition. Annu. Rev. Psychol. 59, 255–278 (2008).


  45. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images, Technical Report TR-2009 (Univ. Toronto, 2012).

  46. Jha, D. et al. Lightlayers: parameter efficient dense and convolutional layers for image classification. In Parallel and Distributed Computing: Applications and Technologies (eds Zhang, Y., Xu, Y. & Tian, H.) 285–296 (Springer, 2021).

  47. Brette, R. Philosophy of the spike: rate-based vs. spike-based theories of the brain. Front. Syst. Neurosci. 9, 151 (2015).


  48. Qiao, F., Zhao, L. & Peng, X. Learning to learn single domain generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 12553–12562 (IEEE Computer Society, 2020).

  49. Arnold, A., Nallapati, R. & Cohen, W. W. A comparative study of methods for transductive transfer learning. In Proc. Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007) 77–82 (IEEE, 2007).

  50. Hu, S., Zhang, K., Chen, Z. & Chan, L. Domain generalization via multidomain discriminant analysis. In Proc. Machine Learning Research Vol. 115 (eds Adams, R. & Gogate, V.) 292–302 (PMLR, 2020).

  51. Karp, R. Reducibility among combinatorial problems. Complexity Comput. Comput. 40, 85–103 (1972).


  52. Johnson, D. S. Approximation algorithms for combinatorial problems. J. Comput. Syst. Sci. 9, 256–278 (1974).


  53. Poloczek, M., Schnitger, G., Williamson, D. & Zuylen, A. Greedy algorithms for the maximum satisfiability problem: simple algorithms and inapproximability bounds. SIAM J. Comput. 46, 1029–1061 (2017).


  54. Hyafil, L. & Rivest, R. L. Constructing optimal binary decision trees is NP-complete. Inf. Process. Lett. 5, 15–17 (1976).


  55. Bose, N. K. & Garga, A. K. Neural network design using Voronoi diagrams. IEEE Trans. Neural Netw. 4, 778–787 (1993).


  56. Riesenhuber, M. & Poggio, T. Hierarchical models of object recognition in cortex. Nat. Neurosci. 2, 1019–1025 (1999).


  57. Cooper, L. N. & Bear, M. F. The BCM theory of synapse modification at 30: interaction of theory with experiment. Nat. Rev. Neurosci. 13, 798–810 (2012).


  58. Bergstra, J. & Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012).


  59. Blazek, P. J. & Lin, M. M. Essence neural networks. Code Ocean (2021).



Acknowledgements

We acknowledge the Cecil H. and Ida Green Foundation, the Welch Foundation (grant no. I-1958-20180324) and the anonymous-donor-supported UTSW High Risk/High Impact grant for funding this research.

Author information

Contributions

P.J.B. and M.M.L. designed the research. P.J.B. performed the research, contributed new analytical tools and analyzed data. P.J.B. and M.M.L. wrote the manuscript.

Corresponding author

Correspondence to Milo M. Lin.

Ethics declarations

Competing interests

The authors have filed an international patent related to this work (PCT/US2021/019470).

Additional information

Peer review information Nature Computational Science thanks the anonymous reviewers for their contribution to the peer review of this work. Handling editor: Ananya Rastogi, in collaboration with the Nature Computational Science team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–10, text, Table 1 and references.

Source data

Source Data Fig. 2

Images, connectivity matrices and neuron activity level matrices.

Source Data Fig. 3

Images and network performance graphical data.

Source Data Fig. 4

Network performance data used to generate graphs.

Source Data Fig. 5

Images and network performance data used to generate graphs.


About this article


Cite this article

Blazek, P.J., Lin, M.M. Explainable neural networks that simulate reasoning. Nat Comput Sci 1, 607–618 (2021).


