2022 has seen eye-catching developments in AI applications. Work is needed to ensure that ethical reflection and responsible publication practices are keeping pace.
The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in both neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists and to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.
The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Their long-term success, however, requires a coordinated community effort to support experimentation with different ethics review processes, to study their effects, and to provide opportunities for diverse voices from the community to share insights and foster norms.
Artificial intelligence systems are used for an increasing range of intellectual tasks, but can they invent, or will they be able to do so soon? A recent series of patent applications for two inventions claimed to have been made by an artificial intelligence program is bringing these questions to the fore.
AI promises to bring many benefits to healthcare and research, but mistrust has built up owing to many instances of harm to under-represented communities. To remedy this, participatory approaches can directly involve communities in AI research that will impact them. An important element of such approaches is ensuring that communities can take control over their own data and how those data are shared.
The use of decision-support systems based on artificial intelligence approaches in antimicrobial prescribing raises important moral questions. Adopting ethical frameworks alongside such systems can aid the consideration of infection-specific complexities and support moral decision-making to tackle antimicrobial resistance.
Indigenous peoples are under-represented in genomic datasets, which can limit the accuracy and utility of machine learning models in precision health. While open data sharing undermines the rights of Indigenous communities to govern decisions about their data, federated learning may facilitate secure and community-consented data sharing.
We introduced reusability reports, an article type to highlight code reusability, almost two years ago. On the basis of the results and positive feedback from authors and referees, we remain enthusiastic about the format.
To deliver value in healthcare, artificial intelligence and machine learning models must be integrated not only into technology platforms but also into local human and organizational ecosystems and workflows. To realize the promised benefits of applying these models at scale, a roadmap of the challenges and potential solutions to sociotechnical transferability is needed.
There is a tendency among AI researchers to use the concepts of democracy and democratization in ways that are only loosely connected to their political and historical meanings. We argue that it is important to take the concept more seriously in AI research by engaging with political philosophy.
The public release of ‘Stable Diffusion’, a high-quality image generation tool, sets new standards in open-source AI development and raises new questions.
Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.
There is growing interest in using machine learning to mitigate climate change. But as avoiding catastrophic temperature rises becomes more urgent, action is also needed to understand the environmental impact of machine learning research.
As was the case last summer, COVID-19 is still with us, but there is a semblance of what life was like before the pandemic. Here, we recommend AI podcasts from the past year that may inform, inspire or entertain, as we get an opportunity to travel or take time away from regular activities.
China is pushing ahead of the European Union and the United States with its new synthetic content regulations. New draft provisions would place more responsibility on platforms to preserve social stability, with potential costs for online freedoms. They show that the Chinese Communist Party is prepared to protect itself against the unique threats of emerging technologies.
Artificial intelligence (AI) can support managers who delegate management decisions to it. There are, however, many organizational and technical hurdles that need to be overcome, and we offer a first step on this journey by unpacking the core factors that may hinder or foster effective decision delegation to AI.
Soon into the COVID-19 pandemic, civil-rights groups raised the alarm over the increase in digital surveillance infringing on individual rights. But there are other potential harms as tech companies accelerate their expansion into new areas essential to public-service provision.