
Network model with internal complexity bridges artificial intelligence and neuroscience

Abstract

Artificial intelligence (AI) researchers currently believe that the main approach to building more general AI models is the big AI model, where existing neural networks become ever deeper, larger and wider. We term this the big model with external complexity approach. In this work we argue that there is another approach, called small model with internal complexity, which can be used to find a suitable path for incorporating rich properties into neurons to construct larger and more efficient AI models. We uncover that when the internal complexity of neurons is reduced, the scale of the network has to be increased externally to reproduce the same dynamical properties. To illustrate this, we build a Hodgkin–Huxley (HH) network with rich internal complexity, where each neuron is an HH model, and prove that the dynamical properties and performance of the HH network can be equivalent to those of a bigger leaky integrate-and-fire (LIF) network, where each neuron is a LIF neuron with simple internal complexity.
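
To make the comparison concrete, the sketch below simulates a single LIF neuron and a single HH neuron under constant current drive with forward Euler integration. It is a minimal illustration with textbook parameter values (the function names, parameters and current amplitudes are choices made for this sketch), not the network construction, training or equivalence code of the paper, which is linked under 'Code availability'.

    # Minimal, illustrative single-neuron simulations: a leaky integrate-and-fire
    # (LIF) neuron with simple internal dynamics and a Hodgkin-Huxley (HH) neuron
    # with rich internal dynamics, both driven by a constant current and
    # integrated with the forward Euler method. Parameter values are textbook
    # defaults chosen for this sketch, not taken from the paper.
    import numpy as np

    def simulate_lif(i_ext=2.0, t_max=200.0, dt=0.01):
        """LIF: tau dV/dt = -(V - V_rest) + R*I, with spike-and-reset at threshold."""
        tau, v_rest, v_reset, v_th, r = 10.0, -65.0, -65.0, -50.0, 10.0  # ms, mV, MOhm
        v, spikes = v_rest, 0
        for _ in range(int(t_max / dt)):
            v += dt * (-(v - v_rest) + r * i_ext) / tau
            if v >= v_th:
                v, spikes = v_reset, spikes + 1
        return spikes

    def simulate_hh(i_ext=10.0, t_max=200.0, dt=0.01):
        """HH: four coupled ODEs (V, m, h, n) with standard squid-axon parameters."""
        c, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3           # uF/cm^2, mS/cm^2
        e_na, e_k, e_l = 50.0, -77.0, -54.387               # mV
        a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = lambda v: 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = lambda v: 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = lambda v: 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = lambda v: 0.125 * np.exp(-(v + 65.0) / 80.0)
        v = -65.0
        m = a_m(v) / (a_m(v) + b_m(v))   # gating variables start at their
        h = a_h(v) / (a_h(v) + b_h(v))   # steady-state values at rest
        n = a_n(v) / (a_n(v) + b_n(v))
        spikes, above = 0, False
        for _ in range(int(t_max / dt)):
            i_ion = (g_na * m**3 * h * (v - e_na)
                     + g_k * n**4 * (v - e_k)
                     + g_l * (v - e_l))
            v += dt * (i_ext - i_ion) / c
            m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
            h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
            n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
            if v > 0.0 and not above:    # count upward zero crossings as spikes
                spikes += 1
            above = v > 0.0
        return spikes

    print("LIF spikes:", simulate_lif(), "HH spikes:", simulate_hh())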

Fig. 1: The internal and external complexity of neurons and networks.
Fig. 2: A method for transitioning from tv-LIF to HH that condenses the complexity of the external connections into the interior of a single neuron.
Fig. 3: Equivalence diagram in high-precision simulation cases.
Fig. 4: Comparative results for learning tasks.
Fig. 5: Analysis of computing resources and statistical indicators.

Data availability

The MultiMNIST dataset can be found at https://drive.google.com/open?id=1VnmCmBAVh8f_BKJg1KYx-E137gBLXbGG or in the GitHub public repository at https://github.com/Xi-L/ParetoMTL/tree/master/multiMNIST/data. The data used in the deep reinforcement learning experiment are generated from the ‘InvertedDoublePendulum-v4’ and ‘InvertedPendulum-v4’ simulation environments in the gym library (https://gym.openai.com). Source data for Figs. 3–5 can be accessed via the following Zenodo repository: https://doi.org/10.5281/zenodo.12531887 (ref. 55). Source data are provided with this paper.
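
For orientation, the sketch below shows one way to roll out a random policy in one of the cited environments and collect transitions. It assumes a recent gym release with the five-value step API and a working MuJoCo installation, and it is not the authors' deep reinforcement learning pipeline (which uses the SAC algorithm listed under 'Code availability').

    # Minimal sketch of collecting random-policy transitions from one of the
    # cited simulation environments. Assumes gym >= 0.26 (reset returns
    # (obs, info); step returns a 5-tuple) and a working MuJoCo backend; this is
    # not the authors' experiment code.
    import gym

    env = gym.make("InvertedPendulum-v4")
    obs, info = env.reset(seed=0)
    transitions = []
    for _ in range(1000):
        action = env.action_space.sample()                       # random policy
        next_obs, reward, terminated, truncated, info = env.step(action)
        transitions.append((obs, action, reward, next_obs, terminated))
        obs = next_obs
        if terminated or truncated:
            obs, info = env.reset()
    env.close()
    print(f"collected {len(transitions)} transitions")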

Code availability

All of the source code for reproducing the results in this paper is available at https://github.com/helx-20/complexity (ref. 55). We use Python v.3.8.12 (https://www.python.org/), NumPy v.1.21.2 (https://github.com/numpy/numpy), SciPy v.1.7.3 (https://www.scipy.org/), Matplotlib v.3.5.1 (https://github.com/matplotlib/matplotlib), Pandas v.1.4.1 (https://github.com/pandas-dev/pandas), Pillow v.8.4.0 (https://pypi.org/project/Pillow), MATLAB R2021a software and the SAC algorithm (https://github.com/haarnoja/sac).
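
The snippet below is an optional, assumed helper for checking that a local environment matches the package versions listed above; the version pins are taken from this section, while the check itself is not part of the released code.

    # Hypothetical convenience check (not part of the released code): compare
    # locally installed package versions with those listed above, using only the
    # Python 3.8+ standard library.
    from importlib.metadata import PackageNotFoundError, version

    expected = {"numpy": "1.21.2", "scipy": "1.7.3", "matplotlib": "3.5.1",
                "pandas": "1.4.1", "Pillow": "8.4.0"}
    for package, wanted in expected.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            print(f"{package}: not installed (expected {wanted})")
            continue
        note = "matches" if installed == wanted else f"differs from expected {wanted}"
        print(f"{package}: {installed} ({note})")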

References

  1. Ouyang, L. et al. Training language models to follow instructions with human feedback. in Advances in Neural Information Processing Systems Vol. 35 27730–27744 (NeurIPS, 2022).

  2. Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 5485–5551 (2020).

  3. Bommasani, R. et al. On the opportunities and risks of foundation models. Preprint at https://arxiv.org/abs/2108.07258 (2021).

  4. Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386 (1958).

  5. LeCun, Y. et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1, 541–551 (1989).

  6. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2017).

  7. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. in Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).

  8. Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982).

  9. Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).

  10. Cho, K. et al. Learning phrase representations using RNN encoder–decoder for statistical machine translation. Preprint at https://arxiv.org/abs/1406.1078 (2014).

  11. Vaswani, A. et al. Attention is all you need. in 31st Conference on Neural Information Processing Systems (NIPS, 2017).

  12. Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. in Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) 4171–4186 (Association for Computational Linguistics, 2019).

  13. Dosovitskiy, A. et al. An image is worth 16 × 16 words: transformers for image recognition at scale. in International Conference on Learning Representations (2020).

  14. Liu, Z. et al. Swin transformer: hierarchical vision transformer using shifted windows. in Proc. IEEE/CVF International Conference on Computer Vision 10012–10022 (2021).

  15. Li, Y. et al. Competition-level code generation with AlphaCode. Science 378, 1092–1097 (2022).

  16. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C. & Chen, M. Hierarchical text-conditional image generation with CLIP latents. Preprint at https://arxiv.org/abs/2204.06125 (2022).

  17. Dauparas, J. et al. Robust deep learning-based protein sequence design using ProteinMPNN. Science 378, 49–56 (2022).

  18. Dayan, P. & Abbott, L. F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (MIT Press, 2005).

  19. Markram, H. The blue brain project. Nat. Rev. Neurosci. 7, 153–160 (2006).

  20. Izhikevich, E. M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572 (2003).

  21. Eliasmith, C. et al. A large-scale model of the functioning brain. Science 338, 1202–1205 (2012).

  22. Wilson, H. R. & Cowan, J. D. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24 (1972).

  23. FitzHugh, R. Mathematical models of threshold phenomena in the nerve membrane. Bull. Math. Biophys. 17, 257–278 (1955).

  24. Nagumo, J., Arimoto, S. & Yoshizawa, S. An active pulse transmission line simulating nerve axon. Proc. IRE 50, 2061–2070 (1962).

  25. Lapicque, L. Recherches quantitatives sur l’excitation electrique des nerfs traitee comme une polarization. J. Physiol. Pathol. Générale 9, 620–635 (1907).

  26. Ermentrout, G. B. & Kopell, N. Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253 (1986).

  27. Fourcaud-Trocmé, N., Hansel, D., Van Vreeswijk, C. & Brunel, N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 23, 11628–11640 (2003).

  28. Teeter, C. et al. Generalized leaky integrate-and-fire models classify multiple neuron types. Nat. Commun. 9, 709 (2018).

  29. Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500 (1952).

  30. Connor, J. & Stevens, C. Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J. Physiol. 213, 31–53 (1971).

  31. Hindmarsh, J. L. & Rose, R. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. Lond. B 221, 87–102 (1984).

  32. de Menezes, M. A. & Barabási, A.-L. Separating internal and external dynamics of complex systems. Phys. Rev. Lett. 93, 068701 (2004).

  33. Ko, K.-I. On the computational complexity of ordinary differential equations. Inf. Control 58, 157–194 (1983).

  34. Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. & Lang, K. J. Phoneme recognition using time-delay neural networks. IEEE Trans. Acoust. Speech Signal Process. 37, 328–339 (1989).

  35. Roy, K., Jaiswal, A. & Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019).

  36. Pei, J. et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature 572, 106–111 (2019).

  37. Davies, M. et al. Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).

  38. Zhou, P., Choi, D.-U., Lu, W. D., Kang, S.-M. & Eshraghian, J. K. Gradient-based neuromorphic learning on dynamical RRAM arrays. IEEE J. Emerg. Sel. Top. Circuits Syst. 12, 888–897 (2022).

  39. Wu, Y., Deng, L., Li, G., Zhu, J. & Shi, L. Spatio-temporal backpropagation for training high-performance spiking neural networks. Front. Neurosci. 12, 331 (2018).

  40. Haarnoja, T., Zhou, A., Abbeel, P. & Levine, S. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. in International Conference on Machine Learning 1861–1870 (PMLR, 2018).

  41. Tishby, N., Pereira, F. C. & Bialek, W. The information bottleneck method. Preprint at https://arxiv.org/abs/physics/0004057 (2000).

  42. Johnson, M. H. Functional brain development in humans. Nat. Rev. Neurosci. 2, 475–483 (2001).

  43. Rakic, P. Evolution of the neocortex: a perspective from developmental biology. Nat. Rev. Neurosci. 10, 724–735 (2009).

  44. Kandel, E. R. et al. Principles of Neural Science Vol. 4 (McGraw-Hill, 2000).

  45. Stelzer, F., Röhm, A., Vicente, R., Fischer, I. & Yanchuk, S. Deep neural networks using a single neuron: folded-in-time architecture using feedback-modulated delay loops. Nat. Commun. 12, 5164 (2021).

  46. Adeli, H. & Park, H. S. Optimization of space structures by neural dynamics. Neural Netw. 8, 769–781 (1995).

  47. Dubreuil, A., Valente, A., Beiran, M., Mastrogiuseppe, F. & Ostojic, S. The role of population structure in computations through neural dynamics. Nat. Neurosci. 25, 783–794 (2022).

  48. Tian, Y. et al. Theoretical foundations of studying criticality in the brain. Netw. Neurosci. 6, 1148–1185 (2022).

  49. Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83–87 (2020).

  50. Koch, C., Bernander, Ö. & Douglas, R. J. Do neurons have a voltage or a current threshold for action potential initiation? J. Comput. Neurosci. 2, 63–82 (1995).

  51. Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T. & Maida, A. Deep learning in spiking neural networks. Neural Netw. 111, 47–63 (2019).

  52. Lin, X., Zhen, H.-L., Li, Z., Zhang, Q.-F. & Kwong, S. Pareto multi-task learning. in 33rd Conference on Neural Information Processing Systems (NeurIPS, 2019).

  53. Molchanov, P., Tyree, S., Karras, T., Aila, T. & Kautz, J. Pruning convolutional neural networks for resource efficient inference. in International Conference on Learning Representations (2017).

  54. Alemi, A. A., Fischer, I., Dillon, J. V. & Murphy, K. Deep variational information bottleneck. in International Conference on Learning Representations (2017).

  55. He, L. Network model with internal complexity bridges artificial intelligence and neuroscience. Zenodo https://doi.org/10.5281/zenodo.12531887 (2024).

Acknowledgements

This work was partially supported by the National Science Foundation for Distinguished Young Scholars (grant no. 62325603), the National Natural Science Foundation of China (grant nos. 62236009, U22A20103, 62441606, 62332002, 62027804, 62425101 and 62088102), the Beijing Natural Science Foundation for Distinguished Young Scholars (grant no. JQ21015), the Hong Kong Polytechnic University under project P0050631 and the CAAI-MindSpore Open Fund, developed on the OpenI Community.

Author information

Authors and Affiliations

Authors

Contributions

G.L. proposed the initial idea and supervised the whole project. L.H. led the experiments, whereas Y.X. led the theoretical derivation. Y.X. took part in writing the code concerning the computational efficiency measurement and mutual information analysis. L.H., Y.X., W.H. and Y.L. took part in modifying the neuron models. W.H. and Y.L. took part in the design of the simulation and deep learning experiments, the computational efficiency measurement and the mutual information analysis; they also wrote the code concerning the network models and deep learning experiments. Yang Tian contributed to the design of the mutual information analysis. Y.W. contributed to writing the code concerning neuron models and HH network training methods. W.W. and Z.Z. contributed to the design of the deep learning experiments. J.H., Yonghong Tian and B.X. provided guidance for this work. G.L. led the writing of this paper, with all authors assisting in writing and reviewing the paper.

Corresponding authors

Correspondence to Yonghong Tian, Bo Xu or Guoqi Li.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Computational Science thanks Jason K. Eshraghian, Nicolas Fourcaud-Trocmé and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Primary Handling Editor: Ananya Rastogi, in collaboration with the Nature Computational Science team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Proof of Theorem 1, supporting experiments on network equivalence, and Supplementary Figs. 1–9 and Tables 1–10.

Reporting Summary

Peer Review File

Supplementary Data 1

Data for Supplementary Fig. 1.

Supplementary Data 3

Data for Supplementary Fig. 3.

Supplementary Data 8

Data for Supplementary Fig. 8.

Supplementary Data 9

Data for Supplementary Fig. 9.

Source data

Source Data Fig. 3

Source data for Fig. 3.

Source Data Fig. 4

Source data for Fig. 4.

Source Data Fig. 5

Source data for Fig. 5.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

He, L., Xu, Y., He, W. et al. Network model with internal complexity bridges artificial intelligence and neuroscience. Nat Comput Sci 4, 584–599 (2024). https://doi.org/10.1038/s43588-024-00674-9
