News & Comment

  • Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.

    • Carina Prunkl
    Comment
  • Growing criticisms of datasets that were built from user-generated data scraped from the web have led to the retirement or redaction of many popular benchmarks. Their afterlife, as copies or subsets that continue to be used, is a cause for concern.

    Editorial
  • For the third year in a row, we followed up with authors of several recent Comments and Perspectives in Nature Machine Intelligence about what happened after their articles were published: how did the topics they wrote about develop, did they gain new insights, and what are their hopes and expectations for AI in 2022?

    • Cameron Buckner
    • Risto Miikkulainen
    • Vidushi Marda
    Feature
  • A well-known internet truth is that if the product is free, you are the product being sold. But with a growing range of regulations and web content tools, users can gain more control over the data they interact with.

    Editorial
  • Although the initial inspiration for neural networks came from biology, insights from physics have helped make them practical. New connections between physics and machine learning are producing powerful computational methods.

    Editorial
  • Can the human brain cope with controlling an extra robotic arm or digit added to the body?

    Editorial
  • In the AlphaPilot Challenge, teams compete to fly autonomous drones through an obstacle course as fast as possible. The 2019 winning team MAVLab reflects on the challenge of beating human pilots.

    • C. De Wagter
    • F. Paredes-Vallés
    • G. de Croon
    Challenge Accepted
  • Very large neural network models such as GPT-3, which have many billions of parameters, are on the rise, but so far only big tech has the resources to train, deploy and study such models. This needs to change, say Stanford AI researchers, who call for an investment in academic collaborations to build and study large neural networks.

    Editorial
  • The regulatory landscape for artificial intelligence (AI), urgently awaited by the scientific and industrial communities, is taking shape on both sides of the Atlantic. Commonalities and differences are starting to crystallize in the approaches to AI in medicine.

    • Kerstin N. Vokinger
    • Urs Gasser
    Comment
  • Health disparities need to be addressed so that the benefits of medical progress are not limited to select groups. Big data and machine learning approaches are transformative tools for public and population health, but they need to be continually informed by insights from algorithmic fairness research.

    Editorial
  • The COVID-19 pandemic is not over and the future is uncertain, but there has lately been a semblance of what life was like before. As thoughts turn to the possibility of a summer holiday, we offer suggestions for books and podcasts on AI to refresh the mind.

    Editorial
  • A new international competition aims to speed up the development of AI models that can assist radiologists in detecting suspicious lesions from hundreds of millions of pixels in 3D mammograms. The top three winning teams compare notes.

    • Jungkyu Park
    • Yoel Shoshan
    • Krzysztof J. Geras
    Challenge Accepted
  • Accurate and fair medical machine learning requires large amounts of diverse training data. Privacy-preserving methods such as federated learning can help improve machine learning models by drawing on datasets held at different hospitals and institutes while the data stays where it was collected.

    Editorial
  • Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.

    • Abubakar Abid
    • Maheen Farooqi
    • James Zou
    Comment