September 2023 Issue

Deng, B., Zhong, P., Jun, K. et al. CHGNet as a pretrained universal neural network potential for charge-informed atomistic modelling

  • Identifying interventions that induce a desired effect is challenging owing to the combinatorially large design space of possible choices. Zhang and colleagues propose an active learning approach with theoretical guarantees for discovering optimal interventions in causal models, and demonstrate the framework on genetic perturbation design using single-cell transcriptomic data.

    • Jiaqi Zhang
    • Louis Cammarata
    • Caroline Uhler
  • The recent accessibility of large language models has brought them into contact with a large number of users, and, given the social nature of language, it is hard to avoid ascribing human characteristics such as intentions to a chatbot. Pataranutaporn and colleagues investigated how framing a bot as helpful or manipulative can influence this perception and the behaviour of the humans who interact with it.

    • Pat Pataranutaporn
    • Ruby Liu
    • Pattie Maes
  • Despite their efficiency advantages, the performance of photonic neural networks is hampered by the accumulation of inherent systematic errors. Zheng et al. propose a dual backpropagation training approach, which allows the network to adapt to systematic errors, thus outperforming state-of-the-art in situ training approaches.

    • Ziyang Zheng
    • Zhengyang Duan
    • Xing Lin
  • Local methods of explainable artificial intelligence identify where important features or inputs occur, while global methods try to understand what features or concepts a model has learned. The authors propose a concept-level explanation method that bridges the local and global perspectives, enabling more comprehensive and human-understandable explanations.

    • Reduan Achtibat
    • Maximilian Dreyer
    • Sebastian Lapuschkin
    Article | Open Access