News & Comment

  • In the current wave of excitement about applying large vision–language models and generative AI to robotics, expectations are running high, but conquering real-world complexities remains challenging for robots.

    Editorial
  • Personalized LLMs built with the capacity to emulate empathy are right around the corner. The effects on individual users need careful consideration.

    Editorial
  • Most research efforts in machine learning focus on performance and are detached from an explanation of the model's behaviour. We call for a return to the basics of machine learning methods, with greater focus on developing a fundamental understanding grounded in statistical theory.

    • Diego Marcondes
    • Adilson Simonis
    • Junior Barrera
    Comment
  • Research papers can make a long-lasting impact when the code and software tools supporting the findings are made readily available and can be reused and built on. Our reusability reports explore and highlight examples of good code sharing practices.

    Editorial
  • Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.

    • Mike Horia Mihail Teodorescu
    • Mingang K. Geiger
    • Lily Morse
    Comment
  • The area under the receiver operating characteristic curve (AUROC) of the test set is used throughout machine learning (ML) for assessing a model’s performance. However, when concordance is not the only ambition, this gives only a partial insight into performance, masking distribution shifts of model outputs and model instability.

    • Michael Roberts
    • Alon Hazan
    • Carola-Bibiane Schönlieb
    Comment
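    The point about AUROC masking distribution shifts can be seen with a small sketch: because AUROC depends only on the ranking of scores, any monotone transform of a model's outputs (the toy scores below are hypothetical) leaves it unchanged, even when the output distribution has shifted substantially.

    ```python
    def auroc(pos_scores, neg_scores):
        """AUROC as the probability that a randomly chosen positive
        score exceeds a randomly chosen negative score (ties count half)."""
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in pos_scores for n in neg_scores)
        return wins / (len(pos_scores) * len(neg_scores))

    # Hypothetical outputs from a model before (A) and after (B) some change:
    # B's scores are A's shifted down by 0.3 -- a monotone transform, so the
    # ranking, and hence the AUROC, is identical despite the shift.
    pos_a, neg_a = [0.9, 0.8, 0.4], [0.5, 0.3, 0.2]
    pos_b = [s - 0.3 for s in pos_a]
    neg_b = [s - 0.3 for s in neg_a]

    print(auroc(pos_a, neg_a))   # ~0.889
    print(auroc(pos_b, neg_b))   # identical AUROC
    print(sum(pos_a + neg_a) / 6 - sum(pos_b + neg_b) / 6)  # mean output shifted by 0.3
    ```

    A test-set AUROC comparison alone would report the two models as equivalent, while calibration-sensitive checks on the raw outputs would not.
    
    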
  • After several decades of developments in AI, has the inspiration that can be drawn from neuroscience been exhausted? Recent initiatives make the case for taking a fresh look at the intersection between the two fields.

    Editorial
  • Although federated learning is often seen as a promising solution to allow AI innovation while addressing privacy concerns, we argue that this technology does not fix all underlying data ethics concerns. Benefiting from federated learning in digital health requires acknowledgement of its limitations.

    • Marieke Bak
    • Vince I. Madai
    • Stuart McLennan
    Comment
  • Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.

    • Eva Erman
    • Markus Furendal
    Comment
  • One of the most successful areas for deep learning in scientific discovery has been protein prediction and engineering. We take a closer look at four studies in this issue that advance protein science with innovative deep learning approaches.

    Editorial
  • We reflect on five years of Nature Machine Intelligence and on providing a venue for discussions in AI.

    Editorial
  • For our fifth anniversary, we reconnected with authors of recent Comments and Perspectives in Nature Machine Intelligence and asked them how the topic they wrote about developed. We also wanted to know what other topics in AI they found exciting, surprising or worrying, and what their hopes and expectations are for AI in 2024—and the next five years. A recurring theme is the ongoing developments in large language models and generative AI, their transformative effect on the scientific process and concerns about ethical implications.

    • Noelia Ferruz
    • Marinka Zitnik
    • Francesco Stella
    Feature
  • Borrowing the format of public competitions from engineering and computer science, a new type of challenge in 2023 tested real-world AI applications with legal assessments based on the EU AI Act.

    • Thomas Burri
    Challenge Accepted
  • Further progress in AI may require learning algorithms to generate their own data rather than assimilate static datasets. A Perspective in this issue proposes that they could do so by interacting with other learning agents in a socially structured way.

    Editorial