Indigenous peoples are under-represented in genomic datasets, which can limit the accuracy and utility of machine learning models in precision health. While open data sharing can undermine the rights of Indigenous communities to govern decisions about their data, federated learning may facilitate secure and community-consented data sharing.
To deliver value in healthcare, artificial intelligence and machine learning models must be integrated not only into technology platforms but also into local human and organizational ecosystems and workflows. To realize the promised benefits of applying these models at scale, a roadmap of the challenges and potential solutions to sociotechnical transferability is needed.
There is a tendency among AI researchers to use the concepts of democracy and democratization in ways that are only loosely connected to their political and historical meanings. We argue that it is important to take the concept more seriously in AI research by engaging with political philosophy.
Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.
China is pushing ahead of the European Union and the United States with its new synthetic content regulations. New draft provisions would place more responsibility on platforms to preserve social stability, with potential costs for online freedoms. They show that the Chinese Communist Party is prepared to protect itself against the unique threats of emerging technologies.
Artificial intelligence (AI) can support managers when management decisions are effectively delegated to it. There are, however, many organizational and technical hurdles to overcome, and we offer a first step on this journey by unpacking the core factors that may hinder or foster effective decision delegation to AI.
Common-sense reasoning has recently emerged as an important test for artificial general intelligence, especially given the much-publicized successes of language representation models such as T5, BERT and GPT-3. Currently, typical benchmarks involve question answering tasks, but to test the full complexity of common-sense reasoning, more comprehensive evaluation methods that are grounded in theory should be developed.
An international security conference explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. A thought experiment evolved into a computational proof.
Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.
The provision of information about product attributes in e-commerce environments is today left entirely to the owners of online platforms. Product transparency in online stores can be increased by client-side enrichment of retailer web pages.
As service and industrial robots enter our lives, new types of cybersecurity issues emerge that involve the manipulation of a robot’s behaviour. Now is the time to develop countermeasures.
The regulatory landscape for artificial intelligence (AI), urgently awaited by the scientific and industrial communities, is taking shape on both sides of the Atlantic. Commonalities and differences are starting to crystallize in the approaches to AI in medicine.
Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.
The COVID-19 pandemic has highlighted key challenges for patient care and health provider safety. Adaptable robotic systems, with enhanced sensing, manipulation and autonomy capabilities, could help address these challenges in future infectious disease outbreaks.
To truly understand the societal impact of AI, we need to look beyond an exclusive focus on quantitative methods and turn to qualitative methods such as ethnography, which shed light on the actors and institutions that wield power through the use of these technologies.
Synthesizing robots via physical artificial intelligence is a multidisciplinary challenge for future robotics research. An education methodology is needed for researchers to develop a combination of skills in physical artificial intelligence.
Addressing the problems caused by AI applications in society with ethics frameworks is futile until we confront the political structure of such applications.
For machine learning developers, the use of prediction tools in real-world clinical settings can be a distant goal. Recently published guidelines for reporting clinical research that involves machine learning will help connect clinical and computer science communities, and realize the full potential of machine learning tools.
There is a need to consider how AI developers can be practically assisted in identifying and addressing ethical issues. In this Comment, a group of AI engineers, ethicists and social scientists suggest embedding ethicists into the development team as one way of improving the consideration of ethical issues during AI development.