Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.
The area under the receiver operating characteristic curve (AUROC) of the test set is used throughout machine learning (ML) to assess a model’s performance. However, when concordance is not the only ambition, it provides only partial insight into performance, masking distribution shifts of model outputs and model instability.
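As a minimal illustration of this point (a sketch not taken from the article, using synthetic data), AUROC depends only on the ranking of scores: any monotonic transformation of a model’s outputs leaves AUROC unchanged while the output distribution, and hence calibration, can shift substantially.

```python
# Sketch: two sets of model outputs with identical AUROC but very different
# output distributions (synthetic data; illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)                          # synthetic labels
scores = np.clip(rng.normal(0.3 + 0.3 * y, 0.15), 0, 1)    # synthetic model outputs

shifted = scores ** 3   # monotonic transform: same ranking, shifted distribution

print(roc_auc_score(y, scores), roc_auc_score(y, shifted))  # identical AUROC
print(scores.mean(), shifted.mean())                        # clearly different output means
```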
Although federated learning is often seen as a promising solution to allow AI innovation while addressing privacy concerns, we argue that this technology does not fix all underlying data ethics concerns. Benefiting from federated learning in digital health requires acknowledgement of its limitations.
Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.
The rise of artificial intelligence (AI) has relied on increasing amounts of energy, which threatens to outweigh its promised positive effects. To steer AI onto a more sustainable path, quantifying and comparing its energy consumption is key.
Medical artificial intelligence needs governance to ensure safety and effectiveness, not just centrally (for example, by the US Food and Drug Administration) but also locally to account for differences in care, patients and system performance. Practical collaborative governance will enable health systems to carry out these challenging governance tasks, supported by central regulators.
To protect the integrity of knowledge production, the training procedures of foundation models such as GPT-4 need to be made accessible to regulators and researchers. Foundation models must become open and public, and those are not the same thing.
There are repeated calls in the AI community to prioritize data work — collecting, curating, analysing and otherwise considering the quality of data. But this is not practised as much as advocates would like, often because of a lack of institutional and cultural incentives. One way to encourage data work would be to reframe it as more technically rigorous, and thereby integrate it into more-valued lines of research such as model innovation.
We show that large language models (LLMs), such as ChatGPT, can guide the robot design process at both the conceptual and technical levels, and we propose new human–AI co-design strategies and discuss their societal implications.
Metaverse-enabled healthcare is no longer hypothetical. Developers must now contend with ethical, legal and social hazards if they are to overcome the systematic inefficiencies and inequities that exist for patients who seek care in the real world.
Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
Fairness approaches in machine learning should involve more than an assessment of performance metrics across groups. Shifting the focus away from model metrics, we reframe fairness through the lens of intersectionality, a Black feminist theoretical framework that contextualizes individuals in interacting systems of power and oppression.
We explore the intersection between algorithms and the State from the perspectives of legislative action, public perception and the use of AI in public administration. Taking India as a case study, we discuss the potential fallout from the absence of rigorous scholarship on such questions for countries in the Global South.
Despite the promise of medical artificial intelligence applications, their acceptance in real-world clinical settings is low, with lack of transparency and trust being barriers that need to be overcome. We discuss the importance of the collaborative process in medical artificial intelligence, whereby experts from various fields work together to tackle transparency issues and build trust over time.
To fully leverage big data, they need to be shared across institutions in a manner compliant with privacy considerations and the EU General Data Protection Regulation (GDPR). Federated machine learning is a promising option.
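For readers unfamiliar with the mechanism, the sketch below shows federated averaging in its simplest form (an illustrative assumption, not the specific approach described in the article): each institution trains on its own data and only model parameters, never raw records, are shared and averaged.

```python
# Minimal federated-averaging sketch on synthetic data (illustrative only).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution's local logistic-regression training on private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # local predictions
        w -= lr * X.T @ (p - y) / len(y)        # local gradient step
    return w

def federated_average(global_w, institutions):
    """Average locally trained weights, weighted by local sample counts."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in institutions]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Three 'institutions' with private synthetic datasets over a shared feature space.
rng = np.random.default_rng(1)
institutions = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):                              # communication rounds
    w = federated_average(w, institutions)
```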
A recent case of a flawed medical AI system that was backed by public funding provides an opportunity to discuss the impact of government policies and regulation in AI.
The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists as opposed to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.
The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort, to support experimentation with different ethics review processes, to study their effect, and to provide opportunities for diverse voices from the community to share insights and foster norms.
Artificial intelligence systems are used for an increasing range of intellectual tasks, but can they invent, or will they be able to do so soon? A recent series of patent applications for two inventions claimed to have been made by an artificial intelligence program is bringing these questions to the fore.
The use of decision-support systems based on artificial intelligence approaches in antimicrobial prescribing raises important moral questions. Adopting ethical frameworks alongside such systems can aid the consideration of infection-specific complexities and support moral decision-making to tackle antimicrobial resistance.