News & Comment

  • Research papers can make a long-lasting impact when the code and software tools supporting the findings are made readily available and can be reused and built on. Our reusability reports explore and highlight examples of good code sharing practices.

    Editorial
  • Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.

    • Mike Horia Mihail Teodorescu
    • Mingang K. Geiger
    • Lily Morse
    Comment
  • The area under the receiver operating characteristic curve (AUROC) of the test set is used throughout machine learning (ML) to assess a model’s performance. However, when concordance is not the only ambition, this metric offers only partial insight into performance, masking distribution shifts of model outputs and model instability (see the sketch after this item).

    • Michael Roberts
    • Alon Hazan
    • Carola-Bibiane Schönlieb
    Comment
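As a minimal illustration of the point above (not drawn from the Comment itself), the sketch below uses synthetic labels and scores, with NumPy and scikit-learn assumed as dependencies, to show that AUROC depends only on the ranking of model outputs: shifting or monotonically transforming the scores changes their distribution and any fixed decision threshold, yet leaves the test-set AUROC unchanged.

```python
# Minimal sketch with synthetic data: AUROC is rank-based, so it is unchanged
# by any monotonic change to a model's output scores, even though the score
# distribution itself has shifted. Requires numpy and scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)        # hypothetical binary labels
scores = rng.normal(loc=y_true, scale=1.0)    # hypothetical model outputs

shifted = scores + 0.5                        # distribution shift of the outputs
squashed = 1 / (1 + np.exp(-scores))          # monotonic (sigmoid) transform

print(roc_auc_score(y_true, scores))          # e.g. ~0.76
print(roc_auc_score(y_true, shifted))         # identical AUROC
print(roc_auc_score(y_true, squashed))        # identical AUROC
# The test-set AUROC cannot distinguish these outputs, although their
# distributions (and the behaviour at any fixed threshold) differ.
```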
  • After several decades of developments in AI, has the inspiration that can be drawn from neuroscience been exhausted? Recent initiatives make the case for taking a fresh look at the intersection between the two fields.

    Editorial
  • Although federated learning is often seen as a promising solution to allow AI innovation while addressing privacy concerns, we argue that this technology does not fix all underlying data ethics concerns. Benefiting from federated learning in digital health requires acknowledgement of its limitations.

    • Marieke Bak
    • Vince I. Madai
    • Stuart McLennan
    Comment
  • Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.

    • Eva Erman
    • Markus Furendal
    Comment
  • One of the most successful areas for deep learning in scientific discovery has been protein prediction and engineering. We take a closer look at four studies in this issue that advance protein science with innovative deep learning approaches.

    Editorial
  • We reflect on five years of Nature Machine Intelligence and on providing a venue for discussions in AI.

    Editorial
  • For our fifth anniversary, we reconnected with authors of recent Comments and Perspectives in Nature Machine Intelligence and asked them how the topics they wrote about have developed. We also wanted to know which other topics in AI they find exciting, surprising or worrying, and what their hopes and expectations are for AI in 2024 and the next five years. Recurring themes are the ongoing development of large language models and generative AI, their transformative effect on the scientific process, and concerns about their ethical implications.

    • Noelia Ferruz
    • Marinka Zitnik
    • Francesco Stella
    Feature