Abstract
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.
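The claim that neural circuits can directly encode cognitive processes has roots in the logical neurons of McCulloch and Pitts (ref. 14). As a minimal sketch of that underlying intuition, assuming nothing about the ENN construction itself, the following hypothetical Python example hand-wires two threshold neurons so that each unit's weights read as an explicit, human-readable rule:

```python
# A minimal, hypothetical sketch, not the published ENN training algorithm:
# hand-set McCulloch-Pitts threshold neurons (cf. ref. 14) whose weights
# encode legible rules, so each unit's firing can be explained directly.
import numpy as np

def threshold_neuron(x, w, b):
    """Fire (return 1) if and only if the weighted evidence w.x + b exceeds 0."""
    return int(np.dot(w, x) + b > 0)

def classify(x):
    """x = [has_stripes, has_four_legs], binary features of a toy task."""
    # "zebra" neuron: stripes AND four legs (an AND gate: both inputs required)
    zebra = threshold_neuron(x, w=np.array([1.0, 1.0]), b=-1.5)
    # "bee" neuron: stripes AND NOT four legs (inhibitory weight on four_legs)
    bee = threshold_neuron(x, w=np.array([1.0, -1.0]), b=-0.5)
    return {"zebra": zebra, "bee": bee}

print(classify([1, 1]))  # -> {'zebra': 1, 'bee': 0}
print(classify([1, 0]))  # -> {'zebra': 0, 'bee': 1}
```

Because each weight here is fixed by an explicit rule rather than learned by gradient descent, the network's decision can be read directly off its wiring; the paper reports that ENNs obtain this kind of neuron-level interpretability at scale through their own non-gradient training procedure.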
Code availability
The code used to build, train and analyze ENNs, as well as the various training and test sets, has been deposited in Code Ocean (ref. 59).
References
Minsky, M. Logical vs. analogical or symbolic vs. connectionist or neat vs. scruffy. AI Magazine 12, 34–51 (1991).
Gärdenfors, P. Conceptual spaces as a framework for knowledge representation. Mind Matter 2, 9–27 (2004).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
McClelland, J. L. & Rogers, T. T. The parallel distributed processing approach to semantic cognition. Nat. Rev. Neurosci. 4, 310–322 (2003).
LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems 25 1097–1105 (NIPS, 2012).
Shwartz-Ziv, R. & Tishby, N. Opening the black box of deep neural networks via information. Preprint at https://arxiv.org/pdf/1703.00810.pdf (2017).
Szegedy, C. et al. Intriguing properties of neural networks. Preprint at https://arxiv.org/pdf/1312.6199.pdf (2013).
Marcus, G. Deep learning: a critical appraisal. Preprint at https://arxiv.org/pdf/1801.00631.pdf (2018).
Kindermans, P.-J. et al. The (un)reliability of saliency methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning 267–280 (Springer, 2019).
Olah, C. et al. The building blocks of interpretability. Distill 3, e10 (2018).
Lake, B. M., Salakhutdinov, R. & Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science 350, 1332–1338 (2015).
Wang, M. & Deng, W. Deep visual domain adaptation: a survey. Neurocomputing 312, 135–153 (2018).
McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943).
Han, S., Pool, J., Tran, J. & Dally, W. Learning both weights and connections for efficient neural network. In Proc. Neural Information Processing Systems 28 1135–1143 (NIPS, 2015).
Happel, B. L. & Murre, J. M. Design and evolution of modular neural network architectures. Neural Netw. 7, 985–1004 (1994).
Roy, A. A theory of the brain: localist representation is used widely in the brain. Front. Psychol. 3, 551 (2012).
Quiroga, R. Q. & Kreiman, G. Measuring sparseness in the brain: comment on Bowers (2009). Psychol. Rev. 117, 291–297 (2010).
Roy, A. A theory of the brain—the brain uses both distributed and localist (symbolic) representation. In Proc. 2011 International Joint Conference on Neural Networks 215–221 (IEEE, 2011).
Tolhurst, D. J., Smyth, D. & Thompson, I. D. The sparseness of neuronal responses in ferret primary visual cortex. J. Neurosci. 29, 2355–2370 (2009).
Yen, S.-C., Baker, J. & Gray, C. M. Heterogeneity in the responses of adjacent neurons to natural stimuli in cat striate cortex. J. Neurophysiol. 97, 1326–1341 (2007).
Richards, B. A. & Lillicrap, T. P. Dendritic solutions to the credit assignment problem. Curr. Opin. Neurobiol. 54, 28–36 (2019).
Bengio, Y., Lee, D.-H., Bornschein, J., Mesnard, T. & Lin, Z. Towards biologically plausible deep learning. Preprint at https://arxiv.org/pdf/1502.04156.pdf (2016).
Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T. & Maida, A. Deep learning in spiking neural networks. Neural Netw. 111, 47–63 (2019).
Wittgenstein, L. Philosophical Investigations 3rd edn (Basil Blackwell, 1968).
Maddox, W. & Ashby, F. Comparing decision bound and exemplar models of categorization. Percept. Psychophys. 53, 49–70 (1993).
Ashby, F. & Maddox, W. Human category learning. Annu. Rev. Psychol. 56, 149–178 (2005).
Aristotle. Book III. In De Anima (ed. Ross, W. D.) (Clarendon Press, 1931).
Aquinas, T. Prima Pars. In Summa Theologiae q. 78 (Ave Maria Press, 2000).
Aquinas, T. Question 1: Truth. In Quaestiones disputatae de Veritate (ed. Mulligan, R.) (Henry Regnery Company, 1952).
Hume, D. Book 1: Of the Understanding. In A Treatise of Human Nature (ed. Selby-Bigge, L. A.) (Clarendon Press, 1896).
Kant, I. Introduction. In Critique of Pure Reason (eds Guyer, P. & Wood, A.) (Cambridge Univ. Press, 1998).
Aristotle. Book VII. In Metaphysics (ed. Ross, W. D.) (Clarendon Press, 1924).
Aristotle. Section I. In Categories (ed. Ross, W. D.) (Clarendon Press, 1928).
Bonaventure. Chapter III. In Itinerarium Mentis in Deum (ed. Cousins, E.) (Paulist Press, 1978).
Aquinas, T. De Ente et Essentia (ed. Bobik, J.) (University of Notre Dame Press, 1965).
Aristotle. Posterior Analytics (ed. Ross, W. D.) (Clarendon Press, 1925).
Aquinas, T. Prima Pars. In Summa Theologiae q. 79 (Ave Maria Press, 2000).
Hobhouse, L. T. The Theory of Knowledge: a Contribution to Some Problems of Logic and Metaphysics 3rd edn (Methuen & Co., 1921).
Gurney, K. in An Introduction to Neural Networks Ch. 3 (Taylor & Francis, 1997).
Bellmund, J. L. S., Gärdenfors, P., Moser, E. I. & Doeller, C. F. Navigating cognition: spatial codes for human thinking. Science 362, eaat6766 (2018).
Reich, D. S., Lucchinetti, C. F. & Calabresi, P. A. Multiple sclerosis. N. Engl. J. Med. 378, 169–180 (2018).
Korczyn, A. D., Vakhapova, V. & Grinberg, L. T. Vascular dementia. J. Neurol. Sci. 322, 2–10 (2012).
Evans, J. S. B. T. Dual-processing accounts of reasoning, judgment and social cognition. Annu. Rev. Psychol. 59, 255–278 (2008).
Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images, Technical Report TR-2009 (Univ. Toronto, 2009).
Jha, D. et al. LightLayers: parameter efficient dense and convolutional layers for image classification. In Parallel and Distributed Computing: Applications and Technologies (eds Zhang, Y., Xu, Y. & Tian, H.) 285–296 (Springer, 2021).
Brette, R. Philosophy of the spike: rate-based vs. spike-based theories of the brain. Front. Syst. Neurosci. 9, 151 (2015).
Qiao, F., Zhao, L. & Peng, X. Learning to learn single domain generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 12553–12562 (IEEE Computer Society, 2020).
Arnold, A., Nallapati, R. & Cohen, W. W. A comparative study of methods for transductive transfer learning. In Proc. Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007) 77–82 (IEEE, 2007).
Hu, S., Zhang, K., Chen, Z. & Chan, L. Domain generalization via multidomain discriminant analysis. In Proc. Machine Learning Research Vol. 115 (eds Adams, R. & Gogate, V.) 292–302 (PMLR, 2020).
Karp, R. M. Reducibility among combinatorial problems. In Complexity of Computer Computations 85–103 (Plenum Press, 1972).
Johnson, D. S. Approximation algorithms for combinatorial problems. J. Comput. Syst. Sci. 9, 256–278 (1974).
Poloczek, M., Schnitger, G., Williamson, D. P. & van Zuylen, A. Greedy algorithms for the maximum satisfiability problem: simple algorithms and inapproximability bounds. SIAM J. Comput. 46, 1029–1061 (2017).
Hyafil, L. & Rivest, R. L. Constructing optimal binary decision trees is NP-complete. Inf. Process. Lett. 5, 15–17 (1976).
Bose, N. K. & Garga, A. K. Neural network design using Voronoi diagrams. IEEE Trans. Neural Netw. 4, 778–787 (1993).
Riesenhuber, M. & Poggio, T. Hierarchical models of object recognition in cortex. Nat. Neurosci. 2, 1019–1025 (1999).
Cooper, L. N. & Bear, M. F. The BCM theory of synapse modification at 30: interaction of theory with experiment. Nat. Rev. Neurosci. 13, 798–810 (2012).
Bergstra, J. & Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012).
Blazek, P. J. & Lin, M. M. Essence neural networks. Code Ocean https://doi.org/10.24433/CO.7389497.v1 (2021).
Acknowledgements
We acknowledge the Cecil H. and Ida Green Foundation, the Welch Foundation (grant no. I-1958-20180324) and the anonymous-donor-supported UTSW High Risk/High Impact grant for funding this research.
Author information
Authors and Affiliations
Contributions
P.J.B. and M.M.L. designed the research. P.J.B. performed the research, contributed new analytical tools and analyzed data. P.J.B. and M.M.L. wrote the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors have filed an international patent related to this work (PCT/US2021/019470).
Additional information
Peer review information Nature Computational Science thanks the anonymous reviewers for their contribution to the peer review of this work. Handling editor: Ananya Rastogi, in collaboration with the Nature Computational Science team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Figs. 1–10, text, Table 1 and references.
Source data
Source Data Fig. 2
Images, connectivity matrices and neuron activity level matrices.
Source Data Fig. 3
Images and network performance graphical data.
Source Data Fig. 4
Network performance data used to generate graphs.
Source Data Fig. 5
Images and network performance data used to generate graphs.
Rights and permissions
About this article
Cite this article
Blazek, P.J., Lin, M.M. Explainable neural networks that simulate reasoning. Nat Comput Sci 1, 607–618 (2021). https://doi.org/10.1038/s43588-021-00132-w