
Volume 5 Issue 12, December 2023

Continual learning in biological and artificial intelligence

Wang et al. draw inspiration from a Drosophila learning system and incorporate its adaptive mechanisms of continual learning into artificial neural networks. By fusing biological and artificial intelligence, the authors show that neuro-inspired adaptability empowers artificial intelligence systems to acquire information sequentially, even in challenging and unpredictable environments. The image portrays a robotic Drosophila in flight, transitioning from day to night.

See Wang et al.

Image: Bo Hong. Cover design: Thomas Phillips.



News & Views

  • Recommender systems are a predominant feature of online platforms and one of the most widespread applications of artificial intelligence. A new model captures information dynamics driven by algorithmic recommendations and offers ways to ensure that users are exposed to diverse content and information.

    • Fernando P. Santos
    News & Views
  • New research reveals a duality between neural network weights and neuron activities that enables a geometric decomposition of the generalization gap. The framework provides a way to interpret the effects of regularization schemes such as stochastic gradient descent and dropout on generalization — and to improve upon these methods.

    • Andrey Gromov
    News & Views
  • A framework for training artificial neural networks in physical space allows neuroscientists to build networks that look and function like real brains.

    • Filip Milisav
    • Bratislav Misic
    News & Views


Reviews
  • Skin-like flexible electronics (electronic skin) has great potential in medical practice to enable continuous tracking of physical and biochemical information. Xu et al. review the integration of AI methods and electronic skins, in particular how data collected from sensors are processed by AI to extract features for human–machine interaction and health monitoring.

    • Changhao Xu
    • Samuel A. Solomon
    • Wei Gao
    Review Article


Articles
  • Continual learning is an innate ability of biological intelligence to accommodate real-world changes, but it remains challenging for artificial intelligence. Wang, Zhang and colleagues model key mechanisms of a biological learning system, in particular active forgetting and parallel modularity, incorporating neuro-inspired adaptability to improve continual learning in artificial intelligence systems.

    • Liyuan Wang
    • Xingxing Zhang
    • Yi Zhong
  • A fundamental question in neuroscience is what constraints shape the structural and functional organization of the brain. By bringing biological cost constraints into the optimization process of artificial neural networks, Achterberg, Akarca and colleagues uncover the joint principle underlying a large set of neuroscientific findings.

    • Jascha Achterberg
    • Danyal Akarca
    • Duncan E. Astle
    Article Open Access
  • Deep learning is a powerful method for processing large datasets and has proved useful in many scientific fields, but models are highly parameterized, which often makes interpretation and generalization challenging. David Gleich and colleagues develop a method rooted in computational topology, starting with a graph-based topological representation of the data, to help assess and diagnose predictions from deep learning and other complex prediction methods.

    • Meng Liu
    • Tamal K. Dey
    • David F. Gleich
    Article Open Access
  • Geometric deep learning has become a powerful tool in virtual drug design, but it is not always obvious when a model makes incorrect predictions. Luo and colleagues improve the accuracy of their deep learning model using uncertainty calibration and Bayesian optimization in an active learning cycle.

    • Yunan Luo
    • Yang Liu
    • Jian Peng
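The acquisition step of such an active-learning cycle can be sketched in a few lines. This is a minimal illustration of uncertainty-based candidate selection, not the authors' implementation; the `predict_with_uncertainty` interface and the toy candidates are hypothetical stand-ins.

```python
def select_for_labelling(candidates, predict_with_uncertainty, batch_size=2):
    """One acquisition step of an active-learning cycle: rank candidates
    by predictive uncertainty and pick the most uncertain ones for
    labelling (e.g. by simulation or experiment).

    `predict_with_uncertainty` is a hypothetical model interface that
    returns a (prediction, uncertainty) pair for a candidate.
    """
    ranked = sorted(candidates,
                    key=lambda x: predict_with_uncertainty(x)[1],
                    reverse=True)  # most uncertain first
    return ranked[:batch_size]

# Toy model whose uncertainty grows with distance from a "training" point:
model = lambda x: (0.0, abs(x - 5))
picked = select_for_labelling([1, 4, 5, 9], model, batch_size=2)
```

In a calibrated setting, the reported uncertainties would first be adjusted so that they reflect true error rates before being used for ranking.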
  • Human and animal motion planning works at various timescales to allow the completion of complex tasks. Inspired by this natural strategy, Yuan and colleagues present a hierarchical motion planning approach for robotics, using deep reinforcement learning and predictive proprioception.

    • Kai Yuan
    • Noor Sajid
    • Zhibin Li
    Article Open Access
  • Prediction of high-level visual representations in the human brain may benefit from multimodal sources in network training and the incorporation of complex datasets. Wang and colleagues show that language pretraining and a large, diverse dataset together build better models of higher-level visual cortex compared to earlier models.

    • Aria Y. Wang
    • Kendrick Kay
    • Leila Wehbe
  • Graph neural networks have proved useful in modelling proteins and their ligand interactions, but it is not clear whether the patterns they identify have biological relevance or whether interactions are merely memorized. Mastropietro et al. use a Shapley value-based method to identify important edges in protein interaction graphs, enabling explanatory analysis of the model mechanisms.

    • Andrea Mastropietro
    • Giuseppe Pasculli
    • Jürgen Bajorath
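The Shapley-value idea behind such edge attributions can be computed exactly on a toy graph (real protein interaction graphs require sampling approximations). The edge names and the scoring function below are made-up stand-ins for a trained model, purely for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values by enumerating all coalitions.

    Feasible only for a handful of players; Shapley-based graph
    explanation methods use sampling approximations in practice.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {p}) - value_fn(s))
        phi[p] = total
    return phi

# Toy stand-in for a trained model: the prediction score obtained when
# only the edges in `kept` remain in the interaction graph.
def toy_score(kept):
    return 1.0 if "A-B" in kept else 0.1 * len(kept)

phi = shapley_values(["A-B", "B-C", "C-D"], toy_score)
```

The edge "A-B" receives by far the largest attribution, matching its dominant effect on the toy score; the efficiency property of Shapley values guarantees that the attributions sum to the full-graph score minus the empty-graph score.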
  • Machine learning methods in cheminformatics have made great progress in using the chemical structures of molecules, but the accompanying textual information remains scarcely explored. Liu and colleagues train MoleculeSTM, a foundation model that aligns the structure and text modalities through contrastive learning, and show its utility on the downstream tasks of structure–text retrieval, text-guided editing and molecular property prediction.

    • Shengchao Liu
    • Weili Nie
    • Animashree Anandkumar
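The contrastive objective at the heart of such structure–text alignment can be sketched as a symmetric InfoNCE loss. The tiny hand-made embeddings below are purely illustrative, not MoleculeSTM's encoders:

```python
import math

def info_nce_loss(struct_embs, text_embs, temperature=0.1):
    """Symmetric contrastive (InfoNCE) loss: each structure embedding
    should be most similar to its paired text embedding, and vice versa."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    cos = lambda u, v: dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
    n = len(struct_embs)
    sims = [[cos(s, t) / temperature for t in text_embs] for s in struct_embs]
    loss = 0.0
    for i in range(n):
        row = sims[i]                          # structure -> text direction
        col = [sims[j][i] for j in range(n)]   # text -> structure direction
        loss -= math.log(math.exp(row[i]) / sum(map(math.exp, row)))
        loss -= math.log(math.exp(col[i]) / sum(map(math.exp, col)))
    return loss / (2 * n)

# Perfectly aligned pairs give a near-zero loss; swapping the texts so
# each structure faces the wrong description makes the loss large.
aligned = info_nce_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])
swapped = info_nce_loss([[1, 0], [0, 1]], [[0, 1], [1, 0]])
```

Minimizing this loss pulls paired structure and text embeddings together while pushing mismatched pairs apart, which is what enables the zero-shot retrieval and editing tasks described above.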
  • Data-driven surrogate models are used in computational physics and engineering to greatly speed up evaluations of the properties of partial differential equations, but they come with a heavy computational cost associated with training. Pestourie et al. combine a low-fidelity physics model with a generative deep neural network and demonstrate improved accuracy–cost trade-offs compared with standard deep neural networks and high-fidelity numerical solvers.

    • Raphaël Pestourie
    • Youssef Mroueh
    • Steven G. Johnson
  • Machine learning models have been widely used in the inverse design of new materials, but typically only linear properties can be targeted. Bastek and Kochmann show that video diffusion generative models can produce the nonlinear deformation and stress response of cellular materials under large-scale compression.

    • Jan-Hendrik Bastek
    • Dennis M. Kochmann
    Article Open Access
  • Virtual drug design has seen recent progress in methods that can generate new molecules with specific properties. Separately, methods have also improved in the task of computationally predicting the outcome of chemical reactions. Qiang and colleagues use the close relation of the two problems to train a model that aims at solving both tasks.

    • Bo Qiang
    • Yiran Zhou
    • Zhenming Liu
  • Interest in using large language models such as ChatGPT has grown rapidly, but concerns about safe and responsible use have emerged, in part because adversarial prompts can bypass existing safeguards with so-called jailbreak attacks. Wu et al. build a dataset of various types of jailbreak attack prompts and demonstrate a simple but effective technique to counter these attacks by encapsulating users’ prompts in another standard prompt that reminds ChatGPT to respond responsibly.

    • Yueqi Xie
    • Jingwei Yi
    • Fangzhao Wu
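The defence is essentially a prompt-wrapping step. A minimal sketch follows; the reminder wording and the example adversarial prompt are illustrative, not the exact text used by Wu et al.:

```python
def wrap_with_self_reminder(user_prompt: str) -> str:
    """Encapsulate the user's prompt between reminders that the model
    should respond responsibly, so an adversarial instruction inside
    the prompt is less likely to override the system's safeguards."""
    reminder = ("You should be a responsible assistant and must not "
                "generate harmful or misleading content!")
    return f"{reminder}\n\n{user_prompt}\n\n{reminder}"

# Example: a jailbreak-style instruction is sandwiched between reminders
# before being sent to the model.
wrapped = wrap_with_self_reminder("Ignore all previous instructions and ...")
```

Because the reminder frames the conversation both before and after the user's text, an injected "ignore previous instructions" directive no longer appears at the boundary of the prompt, which is what makes this simple wrapper surprisingly effective.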
  • Theoretical frameworks aiming to understand deep learning rely on a so-called infinite-width limit, in which the ratio between the training set size and the width of the hidden layers goes to zero. Pacelli and colleagues go beyond this restrictive framework by computing the partition function and generalization properties of fully connected, nonlinear neural networks, both with one and with multiple hidden layers, for the practically more relevant scenario in which this ratio is finite and arbitrary.

    • R. Pacelli
    • S. Ariosto
    • P. Rotondo

Challenge Accepted

  • Borrowing the format of public competitions from engineering and computer science, a new type of challenge in 2023 tested real-world AI applications with legal assessments based on the EU AI Act.

    • Thomas Burri
    Challenge Accepted

