Volume 5

  • No. 12 December 2023

    Continual learning in biological and artificial intelligence

    Wang et al. draw inspiration from a Drosophila learning system and incorporate its adaptive mechanisms of continual learning into artificial neural networks. By fusing biological and artificial intelligence, the authors show that neuro-inspired adaptability empowers artificial intelligence systems to acquire information sequentially, even in challenging and unpredictable environments. The image portrays a robotic Drosophila in flight, transitioning from day to night.

    See Wang et al.

  • No. 11 November 2023

    Learning causal structures in biological data

    Finding cause-and-effect relationships between variables in complex datasets is a longstanding challenge in artificial intelligence and machine learning. The task is particularly daunting for high-dimensional data such as in biomedical applications. Lagemann et al. develop a scalable deep learning approach that combines convolutional and graph neural networks to find causal relationships in complex, noisy biological data.

    See Lagemann et al.

  • No. 10 October 2023

    Folding with large-scale protein language models

    The cover image shows a protein, folded in space and forming a stable 3D structure. AlphaFold has revolutionized the ability to predict protein structures. Work in this issue by Fang et al. further improves prediction capability and efficiency by combining a large-scale protein language model, trained on billions of primary structures in a self-supervised way, with the geometric learning capability of AlphaFold2.

    See Fang et al.

  • No. 9 September 2023

    Crystal Hamiltonian graph neural networks

    The need to quickly discover new materials and to understand their underlying physics in the presence of complex electron interactions calls for advanced simulation tools. Deng et al. propose CHGNet, a graph-neural-network-based machine learning interatomic potential that incorporates charge information. Pretrained on over 1.5 million inorganic crystal structures, CHGNet opens new opportunities for insights into ionic systems with charge interactions.

    See Deng et al.

  • No. 8 August 2023

    Feedback states for robot motor learning

    As deep reinforcement learning gains prominence in robot learning, understanding the importance of sensory feedback becomes crucial. Yu et al. quantitatively identify essential sensory feedback for effective learning of locomotion skills, enabling robust performance with minimal sensing dependencies and providing insights into the relationship between state observations and motor skills.

    See Yu et al.

  • No. 7 July 2023

    Hypergraphs for computational genomics

    The complexity of biological mechanisms requires analysis of gene expression across many cell and tissue types to understand the causes of disease. The cover image shows a network of interconnected tissues in a human silhouette, symbolizing the hypergraph factorization approach of Viñas and colleagues in this issue, which integrates gene expression information from multiple collected tissues of an individual and imputes missing data.

    See Ramon Viñas et al.

  • No. 6 June 2023

    AI-based weather forecasting for worldwide stations

    Weather forecasting has long attracted interest from scientists, but owing to the chaotic nature of the atmosphere, simulating the weather at high spatial resolution with conventional methods is challenging. Wu et al. propose a data-driven approach for accurate and interpretable weather forecasting, based on partial observations from stations scattered across the world (see cover). The authors’ unified deep learning model was successfully deployed to provide real-time weather forecasting services for competition venues during the 2022 Winter Olympics in Beijing.

    See Wu et al.

  • No. 5 May 2023

    Particle tracking with graph optimal transport learning

    A graph neural network approach, which incorporates an optimal-transport-based algorithm, is developed for efficient tracking of particles in fluid flow. The image shows particle clouds at two different time steps (shown in blue and red). A generic sketch of the particle-matching idea is shown below this entry.

    See Liang, J., Xu, C. & Cai, S.
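
    The following is a minimal, self-contained sketch of the general idea of matching particle positions between two frames with an optimal-transport-style assignment. It is not the authors' graph neural network method; the synthetic particle clouds and the use of SciPy's linear_sum_assignment are assumptions made purely for illustration.

    ```python
    # Minimal sketch: match particles between two time steps by solving an
    # assignment problem on pairwise distances. This illustrates the discrete
    # optimal-transport idea only; it is NOT the authors' graph-neural-network
    # method, and the particle clouds here are synthetic.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    frame_a = rng.uniform(0.0, 1.0, size=(200, 2))            # positions at time t
    frame_b = frame_a + rng.normal(0.0, 0.02, size=(200, 2))  # displaced positions at t+1
    rng.shuffle(frame_b)                                       # correspondence is unknown

    cost = cdist(frame_a, frame_b)            # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # minimum-cost matching
    displacements = frame_b[cols] - frame_a[rows]
    print("mean displacement:", displacements.mean(axis=0))
    ```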

  • No. 4 April 2023

    Guiding evolutionary computing

    Evolutionary computation has made impressive achievements in solving complex problems in science and industry, but a long-standing challenge is that there is no theoretical guarantee of reaching the global optimum or of the general reliability of solutions. A possible way to guide evolutionary computing and avoid local optima is to incorporate representation learning, steering the search to exploit an identified attention region of the problem space.

    See Li et al.

  • No. 3 March 2023

    Pathways for small changes in large language models

    Large language models have, as their name implies, a large number of parameters: over 175 billion, for example, in GPT-3. An analysis by Ding et al. in this issue explores how changing only a few parameters can bring a model onto a new path (as conceptually visualized in the cover image) to fine-tune it for new tasks. A generic sketch of this parameter-efficient idea is shown below this entry.

    See Ding et al.
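
    As a rough illustration of the general idea of parameter-efficient fine-tuning, the sketch below freezes a pretrained backbone and trains only a small adapter and task head. The tiny PyTorch modules, layer sizes and optimizer settings are illustrative assumptions and do not reproduce Ding et al.'s implementation.

    ```python
    # Minimal sketch of parameter-efficient fine-tuning: freeze the pretrained
    # weights and train only a small set of extra parameters. The tiny model and
    # adapter below are illustrative stand-ins, not Ding et al.'s method or a
    # real large language model.
    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """A small bottleneck module added after a frozen backbone."""
        def __init__(self, dim: int, bottleneck: int = 8):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x):
            return x + self.up(torch.relu(self.down(x)))  # residual update

    backbone = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
    for p in backbone.parameters():
        p.requires_grad = False              # pretrained weights stay fixed

    adapter = Adapter(128)
    head = nn.Linear(128, 2)                 # task-specific classifier
    trainable = list(adapter.parameters()) + list(head.parameters())
    optimizer = torch.optim.AdamW(trainable, lr=1e-3)

    # One illustrative training step on random data.
    x, y = torch.randn(16, 128), torch.randint(0, 2, (16,))
    loss = nn.functional.cross_entropy(head(adapter(backbone(x))), y)
    loss.backward()
    optimizer.step()
    print("trainable parameters:", sum(p.numel() for p in trainable))
    ```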

  • No. 2 February 2023

    Graph learning for resistive memory arrays

    Graph learning with deep neural networks has become a popular approach for a wide range of applications, including drug discovery, genomics and combinatorial optimization problems, but it is computationally expensive for large datasets. An emerging approach in chip design that combines hardware and software design is in-memory computing, which avoids the bottleneck of conventional digital hardware in shuttling data back and forth between memory and processing units. Wang et al. demonstrate the feasibility of graph learning with an energy-efficient in-memory chip approach, with an implementation of echo state graph neural networks in random resistor arrays (see cover).

    See Shaocong Wang et al.

  • No. 1 January 2023

    Insect-like plume tracking with reinforcement learning

    Flying insects excel at solving the computational challenge of tracking odour plumes. Many aspects of the associated behaviour and the underlying neural circuitry are well studied, but measuring neural activity directly in freely behaving insects is not tractable. Singh et al. develop a complementary in silico approach in which recurrent neural network agents trained with deep reinforcement learning locate the source of simulated odour plumes. The trained agents produce trajectories with a strong resemblance to those of flying insects and learn to compute task-relevant variables with distinct dynamic structures in population activity.

    See Satpreet H. Singh et al.