Comment

  • Metaverse-enabled healthcare is no longer hypothetical. Developers must now contend with ethical, legal and social hazards if they are to overcome the systemic inefficiencies and inequities that exist for patients who seek care in the real world.

    • Kristin Kostick-Quenet
    • Vasiliki Rahimzadeh
    Comment
  • Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.

    • Sebastian Porsdam Mann
    • Brian D. Earp
    • Julian Savulescu
    Comment
  • Fairness approaches in machine learning should involve more than an assessment of performance metrics across groups. Shifting the focus away from model metrics, we reframe fairness through the lens of intersectionality, a Black feminist theoretical framework that contextualizes individuals in interacting systems of power and oppression (a brief illustrative sketch of group versus intersectional metric audits follows this entry).

    • Elle Lett
    • William G. La Cava
    Comment
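As a small illustration of the contrast drawn in the entry above, the sketch below computes a single performance metric (true-positive rate) first along each demographic axis separately and then for intersectional subgroups. The data, column names and choice of metric are hypothetical and are not taken from the article.

```python
# Minimal sketch (hypothetical data): single-axis group metrics versus
# metrics over intersectional subgroups.
import pandas as pd

# Toy predictions with two demographic axes.
df = pd.DataFrame({
    "race":   ["Black", "Black", "white", "white", "Black", "white"],
    "gender": ["woman", "man",   "woman", "man",   "woman", "man"],
    "y_true": [1, 1, 1, 0, 1, 1],
    "y_pred": [0, 1, 1, 0, 0, 1],
})

def true_positive_rate(g: pd.DataFrame) -> float:
    """Share of actual positives that the model flags as positive."""
    positives = g[g["y_true"] == 1]
    return float("nan") if positives.empty else (positives["y_pred"] == 1).mean()

# Single-axis audit: one metric per race group, another per gender group.
print(df.groupby("race").apply(true_positive_rate))
print(df.groupby("gender").apply(true_positive_rate))

# Intersectional audit: the same metric for each race x gender subgroup,
# which can reveal disparities (for Black women in this toy data)
# that the single-axis summaries average away.
print(df.groupby(["race", "gender"]).apply(true_positive_rate))
```

Even a fully intersectional audit of this kind would not, on the authors' account, be sufficient: intersectionality also asks how interacting systems of power shape the data and the context in which a model is deployed.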
  • We explore the intersection between algorithms and the State from the perspectives of legislative action, public perception and the use of AI in public administration. Taking India as a case study, we discuss the potential fallout from the absence of rigorous scholarship on such questions for countries in the Global South.

    • Nandana Sengupta
    • Vidya Subramanian
    • Arul George Scaria
    Comment
  • Despite the promise of medical artificial intelligence applications, their acceptance in real-world clinical settings is low, with lack of transparency and trust being barriers that need to be overcome. We discuss the importance of the collaborative process in medical artificial intelligence, whereby experts from various fields work together to tackle transparency issues and build trust over time.

    • Annamaria Carusi
    • Peter D. Winter
    • Andy Swift
    Comment
  • Big data can only be fully leveraged when data are shared across institutions in a manner compliant with privacy considerations and the EU General Data Protection Regulation (GDPR). Federated machine learning is a promising option; a minimal sketch of the idea follows this entry.

    • Alissa Brauneck
    • Louisa Schmalhorst
    • Gabriele Buchholtz
    Comment
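The entry above refers to federated machine learning. The following sketch shows the basic idea under simplifying assumptions: a linear model, simulated "institutions" and plain federated averaging. The function names and the model are illustrative only and do not reflect the authors' implementation.

```python
# Minimal sketch (illustrative only): federated averaging for a linear model.
# Raw records stay at the simulated institutions; only parameters are shared.
import numpy as np

rng = np.random.default_rng(0)

def make_site(n=200, d=5):
    """Simulate one institution's private dataset (it never leaves the site)."""
    X = rng.normal(size=(n, d))
    w_true = np.arange(1, d + 1, dtype=float)
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=20):
    """A few gradient-descent steps on local data; only the weights are returned."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

sites = [make_site() for _ in range(3)]        # three institutions
w_global = np.zeros(5)

for _ in range(10):                            # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    # The coordinator averages parameters, weighted by local sample counts;
    # patient-level records are never pooled centrally.
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("federated estimate of the coefficients:", np.round(w_global, 2))
```

Sharing parameters alone is not a complete privacy guarantee, which is why federated setups are usually discussed alongside additional safeguards (for example, secure aggregation or differential privacy) and the GDPR considerations raised in the entry.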
  • The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists as opposed to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.

    • Kohitij Kar
    • Simon Kornblith
    • Evelina Fedorenko
    Comment
  • The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort to support experimentation with different ethics review processes, to study their effects, and to provide opportunities for diverse voices from the community to share insights and foster norms.

    • Madhulika Srikumar
    • Rebecca Finlay
    • Joelle Pineau
    Comment
  • Artificial intelligence systems are used for an increasing range of intellectual tasks, but can they invent, or will they be able to do so soon? A recent series of patent applications for two inventions claimed to have been made by an artificial intelligence program is bringing these questions to the fore.

    • Alexandra George
    • Toby Walsh
    Comment
  • The use of decision-support systems based on artificial intelligence approaches in antimicrobial prescribing raises important moral questions. Adopting ethical frameworks alongside such systems can aid the consideration of infection-specific complexities and support moral decision-making to tackle antimicrobial resistance.

    • William J. Bolton
    • Cosmin Badea
    • Timothy M. Rawson
    Comment
  • Indigenous peoples are under-represented in genomic datasets, which can limit the accuracy and utility of machine learning models in precision health. While open data sharing undermines the rights of Indigenous communities to govern data decisions, federated learning may facilitate secure and community-consented data sharing.

    • Nima Boscarino
    • Reed A. Cartwright
    • Krystal S. Tsosie
    Comment
  • To deliver value in healthcare, artificial intelligence and machine learning models must be integrated not only into technology platforms but also into local human and organizational ecosystems and workflows. To realize the promised benefits of applying these models at scale, a roadmap of the challenges and potential solutions to sociotechnical transferability is needed.

    • Batia Mishan Wiesenfeld
    • Yin Aphinyanaphongs
    • Oded Nov
    Comment
  • There is a tendency among AI researchers to use the concepts of democracy and democratization in ways that are only loosely connected to their political and historical meanings. We argue that it is important to take the concept more seriously in AI research by engaging with political philosophy.

    • Henrik Skaug Sætra
    • Harald Borgebund
    • Mark Coeckelbergh
    Comment
  • China is pushing ahead of the European Union and the United States with its new synthetic content regulations. New draft provisions would place more responsibility on platforms to preserve social stability, with potential costs for online freedoms. They show that the Chinese Communist Party is prepared to protect itself against the unique threats of emerging technologies.

    • Emmie Hine
    • Luciano Floridi
    Comment
  • Artificial intelligence (AI) can support managers when management decisions are effectively delegated to AI systems. There are, however, many organizational and technical hurdles to overcome, and we offer a first step on this journey by unpacking the core factors that may hinder or foster effective decision delegation to AI.

    • Stefan Feuerriegel
    • Yash Raj Shrestha
    • Ce Zhang
    Comment
  • Common-sense reasoning has recently emerged as an important test for artificial general intelligence, especially given the much-publicized successes of language representation models such as T5, BERT and GPT-3. Currently, typical benchmarks involve question-answering tasks, but to test the full complexity of common-sense reasoning, more comprehensive evaluation methods that are grounded in theory should be developed (a toy example of the question-answering format follows this entry).

    • Mayank Kejriwal
    • Henrique Santos
    • Deborah L. McGuinness
    Comment
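For readers unfamiliar with the question-answering format mentioned in the entry above, the toy snippet below shows what a multiple-choice common-sense benchmark item and an accuracy-based evaluation typically look like. The items, the QAItem structure and the baseline "model" are invented for illustration and are not drawn from any named benchmark.

```python
# Toy illustration (invented items): the multiple-choice QA format used by many
# common-sense reasoning benchmarks, scored by simple accuracy.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QAItem:
    question: str
    choices: List[str]
    answer: int            # index of the correct choice

benchmark = [
    QAItem("Where would you most likely keep milk to stop it spoiling?",
           ["in a cupboard", "in a refrigerator", "on a windowsill"], 1),
    QAItem("If a glass falls onto a tiled floor, what is most likely to happen?",
           ["it bounces away unharmed", "it shatters", "it melts"], 1),
]

def accuracy(model: Callable[[str, List[str]], int], items: List[QAItem]) -> float:
    """Score a model that maps (question, choices) to a predicted choice index."""
    correct = sum(model(item.question, item.choices) == item.answer for item in items)
    return correct / len(items)

# A trivial baseline that always picks the first option.
print(accuracy(lambda question, choices: 0, benchmark))   # 0.0 on this toy set
```

As the entry argues, strong accuracy on items of this kind is not by itself evidence of common-sense competence, hence the call for more comprehensive, theoretically grounded evaluation.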
  • An international security conference explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. A thought experiment evolved into a computational proof.

    • Fabio Urbina
    • Filippa Lentzos
    • Sean Ekins
    Comment
  • Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.

    • Carina Prunkl
    Comment