Articles in 2023

Filter By:

  • The implementation of particle-tracking techniques with deep neural networks is a promising way to determine particle motion within complex flow structures. A graph neural network-enhanced method enables accurate particle tracking by significantly reducing the number of lost trajectories.

    • Séverine Atis
    • Lionel Agostini
    News & Views
  • New research reveals a duality between neural network weights and neuron activities that enables a geometric decomposition of the generalization gap. The framework provides a way to interpret the effects of regularization schemes such as stochastic gradient descent and dropout on generalization — and to improve upon these methods.

    • Andrey Gromov
    News & Views
  • A framework for training artificial neural networks in physical space allows neuroscientists to build networks that look and function like real brains.

    • Filip Milisav
    • Bratislav Misic
    News & Views
  • Borrowing the format of public competitions from engineering and computer science, a new type of challenge in 2023 tested real-world AI applications with legal assessments based on the EU AI Act.

    • Thomas Burri
    Challenge Accepted
  • Machine learning methods in cheminformatics have made great progress in using chemical structures of molecules, but a large portion of textual information remains scarcely explored. Liu and colleagues trained MoleculeSTM, a foundation model that aligns the structure and text modalities through contrastive learning, and show its utility on the downstream tasks of structure–text retrieval, text-guided editing and molecular property prediction.

    • Shengchao Liu
    • Weili Nie
    • Animashree Anandkumar
    Article
  • Theoretical frameworks aiming to understand deep learning rely on a so-called infinite-width limit, in which the ratio between the width of hidden layers and the training set size goes to zero. Pacelli and colleagues go beyond this restrictive framework by computing the partition function and generalization properties of fully connected, nonlinear neural networks, both with one and with multiple hidden layers, for the practically more relevant scenario in which the above ratio is finite and arbitrary.

    • R. Pacelli
    • S. Ariosto
    • P. Rotondo
    Article
  • Skin-like flexible electronics (electronic skin) has great potential in medical practices to enable continuous tracking of physical and biochemical information. Xu et al. review the integration of AI methods and electronic skins, especially how data collected from sensors are processed by AI to extract features for human–machine interactions and health monitoring purposes.

    • Changhao Xu
    • Samuel A. Solomon
    • Wei Gao
    Review Article
  • Interest in using large language models such as ChatGPT has grown rapidly, but concerns about safe and responsible use have emerged, in part because adversarial prompts can bypass existing safeguards with so-called jailbreak attacks. Wu et al. build a dataset of various types of jailbreak attack prompt and demonstrate a simple but effective technique to counter these attacks by encapsulating users’ prompts in another standard prompt that reminds ChatGPT to respond responsibly.

    • Yueqi Xie
    • Jingwei Yi
    • Fangzhao Wu
    Article
  • Machine learning models have been widely used in the inverse design of new materials, but typically only linear properties could be targeted. Bastek and Kochmann show that video diffusion generative models can produce the nonlinear deformation and stress response of cellular materials under large-scale compression.

    • Jan-Hendrik Bastek
    • Dennis M. Kochmann
    ArticleOpen Access
  • Virtual drug design has seen recent progress in methods that can generate new molecules with specific properties. Separately, methods have also improved in the task of computationally predicting the outcome of chemical reactions. Qiang and colleagues use the close relation of the two problems to train a model that aims at solving both tasks.

    • Bo Qiang
    • Yiran Zhou
    • Zhenming Liu
    Article
  • Data-driven surrogate models are used in computational physics and engineering to greatly speed up evaluations of the properties of partial differential equations, but they come with a heavy computational cost associated with training. Pestourie et al. combine a low-fidelity physics model with a generative deep neural network and demonstrate improved accuracy–cost trade-offs compared with standard deep neural networks and high-fidelity numerical solvers.

    • Raphaël Pestourie
    • Youssef Mroueh
    • Steven G. Johnson
    Article
  • Single-cell transcriptomics has provided a powerful approach to investigate cellular properties at unprecedented resolution. Sha et al. have developed an optimal transport-based algorithm called TIGON that can connect transcriptomic snapshots from different time points to obtain collective dynamical information, including cell population growth and the underlying gene regulatory network.

    • Yutong Sha
    • Yuchi Qiu
    • Qing Nie
    ArticleOpen Access
  • A fundamental question in neuroscience is what are the constraints that shape the structural and functional organization of the brain. By bringing biological cost constraints into the optimization process of artificial neural networks, Achterberg, Akarca and colleagues uncover the joint principle underlying a large set of neuroscientific findings.

    • Jascha Achterberg
    • Danyal Akarca
    • Duncan E. Astle
    ArticleOpen Access
  • Further progress in AI may require learning algorithms to generate their own data rather than assimilate static datasets. A Perspective in this issue proposes that they could do so by interacting with other learning agents in a socially structured way.

    Editorial
  • Advances in machine intelligence often depend on data assimilation, but data generation has been neglected. The authors discuss mechanisms that might achieve continuous novel data generation and the creation of intelligent systems that are capable of human-like innovation, focusing on social aspects of intelligence.

    • Edgar A. Duéñez-Guzmán
    • Suzanne Sadedin
    • Joel Z. Leibo
    Perspective
  • Traditionally, 3D graphics involves numerical methods for physical and virtual simulations of real-world scenes. Spielberg et al. review how deep learning enables differentiable visual computing, which determines how graphics outputs change when the environment changes, with applications in areas such as computer-aided design, manufacturing and robotics.

    • Andrew Spielberg
    • Fangcheng Zhong
    • Derek Nowrouzezahrai
    Review Article
  • Deep learning is a powerful method to process large datasets, and shown to be useful in many scientific fields, but models are highly parameterized and there are often challenges in interpretation and generalization. David Gleich and colleagues develop a method rooted in computational topology, starting with a graph-based topological representation of the data, to help assess and diagnose predictions from deep learning and other complex prediction methods.

    • Meng Liu
    • Tamal K. Dey
    • David F. Gleich
    ArticleOpen Access
  • Continual learning is an innate ability in biological intelligence to accommodate real-world changes, but it remains challenging for artificial intelligence. Wang, Zhang and colleagues model key mechanisms of a biological learning system, in particular active forgetting and parallel modularity, to incorporate neuro-inspired adaptability to improve continual learning in artificial intelligence systems.

    • Liyuan Wang
    • Xingxing Zhang
    • Yi Zhong
    Article