May 2024 Issue

Fürrutter, F., Muñoz-Gil, G. & Briegel, H.J. Quantum circuit synthesis with diffusion models.



  • Neural operators are powerful neural networks that approximate nonlinear dynamical systems and their responses. Cao and colleagues introduce the Laplace neural operator, a scalable approach that can effectively deal with non-periodic signals and transient responses and can outperform existing neural operators on certain classes of ODE and PDE problems.

    • Qianying Cao
    • Somdatta Goswami
    • George Em Karniadakis
  • The intersection of genomics and deep learning shows promise for real impact on healthcare and biological research, but the lack of interpretability in terms of biological mechanisms is limiting utility and further development. As a potential solution, Koo et al. present SQUID, an interpretability framework built using domain-specific genomic surrogate models.

    • Evan E. Seitz
    • David M. McCandlish
    • Peter K. Koo
  • As the number of AI models has rapidly grown, there is an increased focus on improving their documentation through model cards. Liang et al. explore questions around adoption practices and the type of information provided in model cards through a large-scale analysis of the 32,111 model cards found among 74,970 models.

    • Weixin Liang
    • Nazneen Rajani
    • James Zou
  • Transformers show much promise for applications in computational biology, but they rely on sequences, and a challenge is to incorporate 3D structural information. TopoFormer, proposed by Dong Chen et al., combines transformers with a mathematical multiscale topology technique to model 3D protein–ligand complexes, substantially enhancing performance in a range of prediction tasks of interest to drug discovery.

    • Dong Chen
    • Jian Liu
    • Guo-Wei Wei
  • Ziller and colleagues present a balanced investigation of the trade-off between privacy and performance when training artificial intelligence models for medical imaging analysis tasks. The authors evaluate the use of differential privacy in realistic threat scenarios, concluding that differential privacy should be used, but implemented in a manner that also retains performance.

    • Alexander Ziller
    • Tamara T. Mueller
    • Georgios Kaissis
  • Personalized LLMs built with the capacity for emulating empathy are right around the corner. The effects on individual users need careful consideration.

  • Most research efforts in machine learning focus on performance and are detached from an explanation of the behaviour of the model. The authors call for a return to the basics of machine learning methods, with more focus on developing a fundamental understanding grounded in statistical theory.

    • Diego Marcondes
    • Adilson Simonis
    • Junior Barrera
  • Research papers can make a long-lasting impact when the code and software tools supporting the findings are made readily available and can be reused and built on. Our reusability reports explore and highlight examples of good code sharing practices.

  • Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.

    • Mike Horia Mihail Teodorescu
    • Mingang K. Geiger
    • Lily Morse
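
The privacy–performance trade-off examined by Ziller and colleagues arises from training with differential privacy. As a rough illustration only, and not the authors' implementation, the core of a DP-SGD-style update (per-example gradient clipping plus calibrated Gaussian noise) can be sketched as follows; the function name and parameters are hypothetical:

```python
import math
import random

def dp_sgd_step(weights, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One differentially private SGD step: clip each per-example gradient
    to clip_norm, average the clipped gradients, add Gaussian noise scaled
    to the clipping bound, then apply the update to the weights."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # bound each example's influence
        clipped.append([x * scale for x in g])
    n = len(clipped)
    avg = [sum(g[i] for g in clipped) / n for i in range(len(weights))]
    sigma = noise_multiplier * clip_norm / n  # noise calibrated to the clip bound
    noisy = [a + rng.gauss(0.0, sigma) for a in avg]
    return [w - lr * g for w, g in zip(weights, noisy)]

# Example: two large per-example gradients are clipped to norm 1 before averaging.
rng = random.Random(0)
new_w = dp_sgd_step([0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]],
                    clip_norm=1.0, noise_multiplier=0.0, lr=1.0, rng=rng)
```

Clipping caps each example's influence on the update, and the added noise masks what remains; larger `noise_multiplier` values give stronger privacy at the cost of accuracy, which is the trade-off the paper investigates.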