The current national cybersecurity and defence strategies of several governments explicitly mention the use of AI. However, it will be important to develop standards and certification procedures, which involve continuous monitoring and assessment of threats. The focus should be on the reliability of AI-based systems rather than on eliciting users’ trust in AI.
AI ethics initiatives have seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite this, Brent Mittelstadt highlights important differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.
Robots and machines are generally designed to perform specific tasks. Unlike humans, they lack the ability to generate feelings based on interactions with the world. The authors propose a new class of machines with evaluation processes akin to feelings, based on the principles of homeostasis and developments in soft robotics and multisensory integration.
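To make the homeostasis idea concrete, here is a minimal, hypothetical sketch (not the authors’ implementation): a machine scores its internal state by deviation from set points, yielding a feeling-like valence signal that could guide behaviour. All variables, weights and set points below are invented for illustration.

```python
# Minimal, hypothetical sketch of a homeostasis-based evaluation signal.
# Variable names, weights and set points are illustrative, not from the article.

SET_POINTS = {"energy": 1.0, "temperature": 0.5, "integrity": 1.0}
WEIGHTS = {"energy": 1.0, "temperature": 0.5, "integrity": 2.0}

def valence(state: dict) -> float:
    """Return a feeling-like score: 0 at perfect homeostasis,
    increasingly negative as internal variables drift from set points."""
    error = sum(
        WEIGHTS[k] * (state[k] - SET_POINTS[k]) ** 2 for k in SET_POINTS
    )
    return -error

# An agent could prefer actions predicted to raise this valence,
# e.g. seeking charge when 'energy' falls below its set point.
print(valence({"energy": 0.4, "temperature": 0.5, "integrity": 1.0}))
```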
As AI technology develops rapidly, it is widely recognized that ethical guidelines are required for safe and fair implementation in society. But is it possible to agree on what is ‘ethical AI’? A detailed analysis of 84 AI ethics reports around the world, from national and international organizations, companies and institutes, explores this question, finding a convergence around core principles but substantial divergence on practical implementation.
Traditional robotic grasping focuses on manipulating an object, often without considering the goal or task involved in the movement. The authors propose a new metric for success in manipulation that is based on the task itself.
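A schematic sketch of the distinction, with an invented toy simulator standing in for real trials: the traditional score rewards a firm hold, while a task-based score rewards grasps from which the task (say, pouring) can actually be completed.

```python
import random

# Schematic sketch: score a grasp by downstream task success rather than by
# stability alone. The Grasp class and simulate_task() are hypothetical
# stand-ins for a real simulator or physical trial protocol.

class Grasp:
    def __init__(self, wrench_resistance: float, tool_clearance: float):
        self.wrench_resistance = wrench_resistance  # resistance to disturbance
        self.tool_clearance = tool_clearance        # task-relevant part left free

def simulate_task(grasp: Grasp, task: str) -> bool:
    # Toy stand-in: a pouring task needs the opening clear, so success
    # depends on clearance as well as on holding the object firmly.
    p = min(1.0, grasp.wrench_resistance * grasp.tool_clearance)
    return random.random() < p

def stability_score(grasp: Grasp) -> float:
    # Traditional metric: how firmly is the object held?
    return grasp.wrench_resistance

def task_score(grasp: Grasp, task: str, trials: int = 100) -> float:
    # Task-based metric: fraction of trials in which the task succeeds.
    return sum(simulate_task(grasp, task) for _ in range(trials)) / trials

firm_but_blocking = Grasp(wrench_resistance=0.9, tool_clearance=0.2)
print(stability_score(firm_but_blocking))     # high stability...
print(task_score(firm_but_blocking, "pour"))  # ...but poor task success
```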
There has been a recent rise of interest in developing methods for ‘explainable AI’, where a second model is created to explain how a ‘black box’ machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, particularly in domains that directly affect human lives, such as healthcare and criminal justice.
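The contrast can be sketched with scikit-learn on a standard toy dataset (an illustration of the two approaches, not the article’s experiments): a post-hoc surrogate only approximates the black box, whereas an inherently interpretable model is its own explanation.

```python
# Minimal sketch contrasting post-hoc explanation with an inherently
# interpretable model, using scikit-learn on a standard toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# 'Explainable AI' pattern: fit a black box, then fit a second model to
# mimic its outputs; the explanation may be unfaithful to the black box.
black_box = RandomForestClassifier(n_estimators=200).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Alternative argued for here: train an interpretable model directly,
# so its coefficients *are* the decision logic, not an approximation.
interpretable = LogisticRegression(max_iter=5000).fit(X, y)
print(dict(zip(load_breast_cancer().feature_names[:3],
               interpretable.coef_[0][:3])))
```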
Artificial intelligence and machine learning systems may reproduce or amplify biases. The authors discuss the literature on biases in human learning and decision-making, and propose that researchers, policymakers and the public should be aware of such biases when evaluating the output and decisions made by machines.
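As one concrete illustration of auditing machine decisions for bias (a generic fairness check, not the authors’ method), the demographic parity difference measures the gap in positive-decision rates between groups:

```python
import numpy as np

# Illustrative check for one simple form of bias: the demographic parity
# difference, i.e. the gap in positive-decision rates between two groups.
# Decisions and group labels below are made up for the example.

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """decisions: 0/1 model outputs; group: 0/1 group membership."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

decisions = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(decisions, group))  # 0.5: a large disparity
```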
A bibliometric analysis of the past and present of AI research suggests a consolidation of research influence. This may present challenges for the exchange of ideas between AI and the social sciences.
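One generic way to quantify such consolidation (not necessarily the paper’s method) is a concentration index such as the Gini coefficient over, say, per-institution citation counts:

```python
# Generic illustration of measuring consolidation: a Gini coefficient over
# per-institution citation counts (0 = influence evenly spread, values near
# 1 = highly concentrated). The numbers are made up, not the paper's data.

def gini(values):
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

print(gini([5, 5, 5, 5]))     # 0.0: influence evenly spread
print(gini([1, 2, 10, 400]))  # ~0.73: highly consolidated
```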
A survey of 300 fictional and non-fictional works featuring artificial intelligence reveals that imaginings of intelligent machines may be grouped into four categories, each comprising a hope and a parallel fear. These perceptions are decoupled from what is realistically possible with current technology, yet they influence scientific goals, public understanding and regulation of AI.
Arguably, one of the most promising as well as most critical applications of deep learning is in supporting medical science and decision making. It is time to develop methods for systematically quantifying the uncertainty underlying deep learning predictions, which would increase confidence in the practical applicability of these approaches.
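One widely used recipe for this is Monte Carlo dropout: keep dropout active at test time and read the spread of repeated stochastic predictions as approximate uncertainty. A minimal PyTorch sketch, with a placeholder network and input (not the architecture discussed in the article):

```python
import torch
import torch.nn as nn

# One common recipe for uncertainty in deep learning: Monte Carlo dropout.
# Keep dropout active at test time, run several stochastic forward passes,
# and read the spread of the predictions as approximate uncertainty.
# The network and input below are placeholders, not from the article.

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1)
)

def mc_dropout_predict(model, x, passes: int = 50):
    model.train()  # leave dropout on (normally disabled by model.eval())
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty

x = torch.randn(1, 16)
mean, std = mc_dropout_predict(model, x)
print(f"prediction {mean.item():.3f} ± {std.item():.3f}")
```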
A new vision for robot engineering, building on advances in computational materials techniques, additive and subtractive manufacturing, and evolutionary computing, describes how to design a range of specialized robots uniquely suited to specific tasks and environmental conditions.
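As a toy illustration of the evolutionary-computing ingredient (the design parameters and fitness function are invented; a real pipeline would evaluate candidate designs in simulation or fabrication):

```python
import random

# Toy evolutionary loop over robot design parameters (leg length, body
# stiffness), both normalized to [0, 1]. The fitness function is invented
# for illustration only.

def fitness(design):
    leg, stiffness = design
    # Hypothetical task: favour long legs and moderate stiffness.
    return leg - (stiffness - 0.5) ** 2

def mutate(design, scale=0.05):
    return tuple(min(1.0, max(0.0, g + random.gauss(0, scale))) for g in design)

population = [(random.random(), random.random()) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                        # select the fittest
    population = [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(f"best design: leg={best[0]:.2f}, stiffness={best[1]:.2f}")
```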