Comment

  • Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.

    • Mike Horia Mihail Teodorescu
    • Mingang K. Geiger
    • Lily Morse
  • The area under the receiver operating characteristic curve (AUROC) of the test set is used throughout machine learning (ML) for assessing a model’s performance. However, when concordance is not the only goal, AUROC gives only partial insight into performance, masking distribution shifts of model outputs and model instability; a toy illustration follows this entry.

    • Michael Roberts
    • Alon Hazan
    • Carola-Bibiane Schönlieb
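
A toy illustration of the point above, on synthetic data: AUROC depends only on how a model's scores rank positive cases against negative ones, so two models whose outputs differ by a monotone shift receive identical AUROC even though their output distributions, and therefore their decisions at any fixed threshold, differ sharply. Everything below is an illustrative assumption, not material from the article.

```python
# Two models with identical AUROC but shifted output distributions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)         # synthetic binary labels
z = 2.0 * y + rng.standard_normal(1000)   # latent score, higher for positives

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

p_a = sigmoid(z)        # "model A" probabilities
p_b = sigmoid(z - 1.5)  # "model B": same ranking, outputs shifted towards 0

# Both are strictly increasing transforms of z, so the case ranking is
# identical and the two AUROC values match exactly.
print(roc_auc_score(y, p_a), roc_auc_score(y, p_b))

# At a fixed operating threshold, however, behaviour diverges: model B
# flags far fewer cases as positive despite its "equal" AUROC.
print((p_a >= 0.5).mean(), (p_b >= 0.5).mean())
```
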
  • Although federated learning is often seen as a promising solution to allow AI innovation while addressing privacy concerns, we argue that this technology does not fix all underlying data ethics concerns. Benefiting from federated learning in digital health requires acknowledgement of its limitations.

    • Marieke Bak
    • Vince I. Madai
    • Stuart McLennan
  • Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.

    • Eva Erman
    • Markus Furendal
  • The rise of artificial intelligence (AI) has relied on an increasing demand for energy, which threatens to outweigh its promised positive effects. To steer AI onto a more sustainable path, quantifying and comparing its energy consumption is key; a minimal measurement sketch follows this entry.

    • Charlotte Debus
    • Marie Piraud
    • Markus Götz
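
As one concrete starting point for the quantification called for above, the sketch below uses the third-party codecarbon package, which samples hardware power draw during a run and converts it to energy and CO2-equivalent estimates. The package choice and the stand-in workload are assumptions for illustration, not the authors' method.

```python
# Sketch: estimating the energy and carbon footprint of a training run.
from codecarbon import EmissionsTracker

def train_model():
    # Stand-in for a real training loop.
    total = 0
    for i in range(10_000_000):
        total += i * i
    return total

tracker = EmissionsTracker(project_name="demo-run")  # logs to emissions.csv by default
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the tracked span
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Repeatable measurements like this are a precondition for the cross-model energy comparisons the authors argue for.
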
  • Medical artificial intelligence needs governance to ensure safety and effectiveness, not just centrally (for example, by the US Food and Drug Administration) but also locally to account for differences in care, patients and system performance. Practical collaborative governance will enable health systems to carry out these challenging governance tasks, supported by central regulators.

    • W. Nicholson Price II
    • Mark Sendak
    • Karandeep Singh
  • To protect the integrity of knowledge production, the training procedures of foundation models such as GPT-4 need to be made accessible to regulators and researchers. Foundation models must become both open and public; the two are not the same thing.

    • Fabian Ferrari
    • José van Dijck
    • Antal van den Bosch
  • There are repeated calls in the AI community to prioritize data work — collecting, curating, analysing and otherwise considering the quality of data. But this is not practised as much as advocates would like, often because of a lack of institutional and cultural incentives. One way to encourage data work would be to reframe it as more technically rigorous, and thereby integrate it into more-valued lines of research such as model innovation.

    • Katy Ilonka Gero
    • Payel Das
    • Kush R. Varshney
  • We show that large language models (LLMs), such as ChatGPT, can guide the robot design process, on both the conceptual and technical level, and we propose new human–AI co-design strategies and discuss their societal implications.

    • Francesco Stella
    • Cosimo Della Santina
    • Josie Hughes
  • Metaverse-enabled healthcare is no longer hypothetical. Developers must now contend with ethical, legal and social hazards if they are to overcome the systematic inefficiencies and inequities that exist for patients who seek care in the real world.

    • Kristin Kostick-Quenet
    • Vasiliki Rahimzadeh
  • Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.

    • Sebastian Porsdam Mann
    • Brian D. Earp
    • Julian Savulescu
  • Fairness approaches in machine learning should involve more than an assessment of performance metrics across groups. Shifting the focus away from model metrics, we reframe fairness through the lens of intersectionality, a Black feminist theoretical framework that contextualizes individuals in interacting systems of power and oppression. A sketch of the metrics-only baseline that this perspective moves beyond follows this entry.

    • Elle Lett
    • William G. La Cava
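
For context, the sketch below shows the metrics-only baseline that the comment above argues is insufficient on its own: a performance metric disaggregated by intersecting attributes rather than by single attributes. The data is synthetic and the attribute names are placeholders.

```python
# Conventional group-fairness audit: true positive rate (TPR) per
# intersectional subgroup. Single-attribute audits can average away
# disparities that only appear at the intersections.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "attr_1": rng.choice(["a1", "a2"], size=n),   # placeholder attribute
    "attr_2": rng.choice(["b1", "b2"], size=n),   # placeholder attribute
    "y_true": rng.integers(0, 2, size=n),
    "y_pred": rng.integers(0, 2, size=n),
})

# TPR within each (attr_1, attr_2) intersection.
positives = df[df["y_true"] == 1]
print(positives.groupby(["attr_1", "attr_2"])["y_pred"].mean())
```
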
  • We explore the intersection between algorithms and the State from the perspectives of legislative action, public perception and the use of AI in public administration. Taking India as a case study, we discuss the potential fallout from the absence of rigorous scholarship on such questions for countries in the Global South.

    • Nandana Sengupta
    • Vidya Subramanian
    • Arul George Scaria
  • Despite the promise of medical artificial intelligence applications, their acceptance in real-world clinical settings is low, with lack of transparency and trust being barriers that need to be overcome. We discuss the importance of the collaborative process in medical artificial intelligence, whereby experts from various fields work together to tackle transparency issues and build trust over time.

    • Annamaria Carusi
    • Peter D. Winter
    • Andy Swift
  • To fully leverage big data, they need to be shared across institutions in a manner compliant with privacy considerations and the EU General Data Protection Regulation (GDPR). Federated machine learning is a promising option; a toy sketch of parameter averaging follows this entry.

    • Alissa Brauneck
    • Louisa Schmalhorst
    • Gabriele Buchholtz
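
To make the option named above concrete, the sketch below runs a toy version of federated parameter averaging (in the spirit of the FedAvg algorithm): each institution fits an update on its own data, and only model parameters, never raw records, leave the site. The linear model and synthetic data are assumptions for illustration.

```python
# Toy federated averaging: sites train locally on private data and share
# only parameter vectors, which a coordinator averages each round.
import numpy as np

rng = np.random.default_rng(2)

def local_update(w, X, y, lr=0.1, steps=50):
    """A few gradient-descent steps of linear regression on one site's data."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Private datasets that never leave their institutions (synthetic here).
w_true = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.standard_normal((100, 2))
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    sites.append((X, y))

w_global = np.zeros(2)
for _round in range(10):
    # Each site starts from the current global model; only weights return.
    local_weights = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)  # equal-size sites: plain mean

print(w_global)  # approaches w_true without pooling any raw data
```
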
  • The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists as opposed to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.

    • Kohitij Kar
    • Simon Kornblith
    • Evelina Fedorenko
  • The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort to support experimentation with different ethics review processes, study their effects, and provide opportunities for diverse voices from the community to share insights and foster norms.

    • Madhulika Srikumar
    • Rebecca Finlay
    • Joelle Pineau
  • Artificial intelligence systems are used for an increasing range of intellectual tasks, but can they invent, or will they be able to do so soon? A recent series of patent applications for two inventions that are claimed to have been made by an artificial intelligence program is bringing these questions to the fore.

    • Alexandra George
    • Toby Walsh
  • The use of decision-support systems based on artificial intelligence approaches in antimicrobial prescribing raises important moral questions. Adopting ethical frameworks alongside such systems can aid the consideration of infection-specific complexities and support moral decision-making to tackle antimicrobial resistance.

    • William J. Bolton
    • Cosmin Badea
    • Timothy M. Rawson