Comment

  • Indigenous peoples are under-represented in genomic datasets, which can limit the accuracy and utility of machine learning models in precision health. While open data sharing undermines the rights of Indigenous communities to govern decisions about their data, federated learning may facilitate secure and community-consented data sharing.

    • Nima Boscarino
    • Reed A. Cartwright
    • Krystal S. Tsosie
    Comment
  • To deliver value in healthcare, artificial intelligence and machine learning models must be integrated not only into technology platforms but also into local human and organizational ecosystems and workflows. To realize the promised benefits of applying these models at scale, a roadmap of the challenges and potential solutions to sociotechnical transferability is needed.

    • Batia Mishan Wiesenfeld
    • Yin Aphinyanaphongs
    • Oded Nov
    Comment
  • There is a tendency among AI researchers to use the concepts of democracy and democratization in ways that are only loosely connected to their political and historical meanings. We argue that it is important to take the concept more seriously in AI research by engaging with political philosophy.

    • Henrik Skaug Sætra
    • Harald Borgebund
    • Mark Coeckelbergh
    Comment
  • China is pushing ahead of the European Union and the United States with its new synthetic content regulations. New draft provisions would place more responsibility on platforms to preserve social stability, with potential costs for online freedoms. They show that the Chinese Communist Party is prepared to protect itself against the unique threats of emerging technologies.

    • Emmie Hine
    • Luciano Floridi
    Comment
  • Artificial intelligence (AI) can support managers through the effective delegation of management decisions to AI. There are, however, many organizational and technical hurdles to overcome, and we offer a first step on this journey by unpacking the core factors that may hinder or foster effective decision delegation to AI.

    • Stefan Feuerriegel
    • Yash Raj Shrestha
    • Ce Zhang
    Comment
  • Common-sense reasoning has recently emerged as an important test for artificial general intelligence, especially given the much-publicized successes of language representation models such as T5, BERT and GPT-3. Currently, typical benchmarks involve question answering tasks, but to test the full complexity of common-sense reasoning, more comprehensive evaluation methods that are grounded in theory should be developed.

    • Mayank Kejriwal
    • Henrique Santos
    • Deborah L. McGuinness
    Comment
  • An international security conference explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. A thought experiment evolved into a computational proof.

    • Fabio Urbina
    • Filippa Lentzos
    • Sean Ekins
    Comment
  • Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.

    • Carina Prunkl
    Comment
  • The regulatory landscape for artificial intelligence (AI) is shaping up on both sides of the Atlantic, urgently awaited by the scientific and industrial community. Commonalities and differences start to crystallize in the approaches to AI in medicine.

    • Kerstin N. Vokinger
    • Urs Gasser
    Comment
  • Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.

    • Abubakar Abid
    • Maheen Farooqi
    • James Zou
    Comment
  • The COVID-19 pandemic has highlighted key challenges for patient care and health provider safety. Adaptable robotic systems with enhanced sensing, manipulation and autonomy capabilities could help address these challenges in future infectious disease outbreaks.

    • Hao Su
    • Antonio Di Lallo
    • Axel Krieger
    Comment
  • To truly understand the societal impact of AI, we need to look beyond an exclusive focus on quantitative methods and turn to qualitative methods such as ethnography, which shed light on the actors and institutions that wield power through the use of these technologies.

    • Vidushi Marda
    • Shivangi Narayan
    Comment
  • Synthesizing robots via physical artificial intelligence is a multidisciplinary challenge for future robotics research. An education methodology is needed for researchers to develop a combination of skills in physical artificial intelligence.

    • Aslan Miriyev
    • Mirko Kovač
    Comment
  • Addressing the problems caused by AI applications in society with ethics frameworks is futile until we confront the political structure of such applications.

    • Jathan Sadowski
    • Mark Andrejevic
    Comment
  • For machine learning developers, the use of prediction tools in real-world clinical settings can be a distant goal. Recently published guidelines for reporting clinical research that involves machine learning will help connect clinical and computer science communities, and realize the full potential of machine learning tools.

    • Bilal A. Mateen
    • James Liley
    • Sebastian J. Vollmer
    Comment
  • There is a need to consider how AI developers can be practically assisted in identifying and addressing ethical issues. In this Comment, a group of AI engineers, ethicists and social scientists suggest embedding ethicists into the development team as one way of improving the consideration of ethical issues during AI development.

    • Stuart McLennan
    • Amelia Fiske
    • Alena Buyx
    Comment
  • As robot swarms move from the laboratory to real-world applications, a routine checklist of questions could help ensure their safe operation.

    • Edmund R. Hunt
    • Sabine Hauert
    Comment