Human–robot interaction studies are essential for understanding the complexities of human responses to robots and our willingness to cooperate with robots in daily activities. In a paper in this issue, Fatimah Ishowo-Oloko et al. report results from a behavioural experiment: they asked participants to play cooperative games against opponents that were either human or algorithmic, and found that humans tend to trust algorithms less. The reinforcement learning algorithm, designed to learn effective behaviour within a few rounds, outperformed human opponents in inducing cooperation, but this advantage was lost once it disclosed its non-human nature. The question of transparency in human–robot interaction is explored further in a News & Views and the Editorial.
Robots are making a transition into human environments, where they can directly interact with us, in shops, hospitals, schools and more. Transparency about robots’ capabilities and level of autonomy should be integrated into the design from the start.
Artificial intelligence systems copy and amplify existing societal biases, a problem that by now is widely acknowledged and studied. But is current research of gender bias in natural language processing actually moving towards a resolution, asks Marta R. Costa-jussà.
In cooperative games, humans are biased against AI systems even when such systems behave better than human counterparts do. This raises a question: should AI systems ever be allowed to conceal their true nature and lie to us for our own benefit?
Adversarial attacks make imperceptible changes to a neural network’s inputs so that the network recognizes them as something entirely different. This flaw can give us insight into how these networks work and how to make them more robust.
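The core mechanism behind many such attacks can be sketched in a few lines: nudge the input in the direction that most increases the model's loss. Below is a minimal illustration using the fast gradient sign method on a toy linear classifier; the weights, input and step size are invented for illustration and are not drawn from any paper in this issue.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method for logistic loss.

    The gradient of the loss with respect to the input is (p - y) * w,
    so the attack shifts x by eps in the sign of that gradient."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A fixed toy classifier and an input it labels positive (y = 1).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.6, -0.4, 0.2])
p_clean = sigmoid(w @ x + b)   # about 0.83: confidently positive

# A bounded perturbation flips the prediction below 0.5.
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)
```

In deep networks the same step is computed by backpropagating to the input pixels, and the perturbation that flips the label can be far too small for a human to notice.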
AI ethics initiatives have seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite this, Brent Mittelstadt highlights important differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.
Deep neural networks can be led to misclassify an image when minute changes, imperceptible to humans, are introduced. While this vulnerability can cast doubt on the reliability of a model, for networks trained with more robust regularization it can also offer a route to explainability.
Algorithms and bots are capable of performing some behaviours at human or super-human levels. Humans, however, tend to trust algorithms less than they trust other humans. The authors find that bots do better than humans at inducing cooperation in certain human–machine interactions, but only if the bots do not disclose their true nature as artificial.
Human face recognition is robust to changes in viewpoint, illumination, facial expression and appearance. The authors investigated face recognition in deep convolutional neural networks by manipulating the strength of identity information in a face by caricaturing. They found that networks create a highly organized face similarity structure in which identities and images coexist.
Photonic computing devices have been proposed as a high-speed and energy-efficient approach to implementing neural networks. Using off-the-shelf components, Antonik et al. demonstrate a reservoir computer that recognizes different forms of human action from video streams using photonic neural networks.
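Reservoir computing is attractive for photonic hardware precisely because most of its weights never need training. A minimal software sketch of the idea, an echo state network with a fixed random reservoir and a trained linear readout, is shown below on an invented short-term-memory task; Antonik et al. implement the reservoir optically, and none of the sizes or data here come from their paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: reproduce the input from one step in the past, which
# requires the reservoir to retain short-term memory of its drive.
u = rng.uniform(-1, 1, (500, n_in))
X = run_reservoir(u)[50:]        # drop the initial transient
y = u[49:-1, 0]                  # target: the previous input value
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # train the linear readout
pred = X @ W_out
```

Because training reduces to one least-squares solve, the expensive recurrent dynamics can be delegated to fast physical substrates such as the photonic setup described above.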
Deep learning is currently transforming digital pathology, helping to make more reliable and faster clinical diagnoses. A promising application is in the recognition of malignant white blood cells—an essential step for detecting acute myeloid leukaemia that is challenging even for trained human examiners. An annotated image dataset of over 18,000 white blood cells is compiled and used to train a convolutional neural network for leukocyte classification. The network classifies the most important cell types with high accuracy and can answer clinically relevant binary questions with human-level performance.
The first Smart Cities Robotics Challenge, organized by the European Robotics League, took place from 18 to 21 September at the Centre:MK shopping centre in Milton Keynes. The competition tested the ability of robots to interact with humans in everyday tasks, as well as with the digital infrastructure of a smart city.