
Volume 1 Issue 11, November 2019

Cooperative attitudes

Human–robot interaction studies are essential to understand the complexities of human responses to robots and our willingness to cooperate with robots in daily activities. In a paper in this issue, Fatimah Ishowo-Oloko et al. report results from a behavioural experiment in which participants played cooperative games against opponents that were either humans or algorithms, and found that humans tend to trust algorithms less. A reinforcement learning algorithm, designed to learn effective behaviour within a few rounds, outperformed human opponents at inducing cooperation, but this advantage was lost once it disclosed its non-human nature. The question of transparency in human–robot interaction is further explored in a News & Views article and the Editorial.

See Ishowo-Oloko et al., Rovatsos and Editorial

Image adapted from Fanatic Studio / Gary Waters / Alamy Stock Photo. Cover design: Karen Moore


  • Robots are making a transition into human environments, where they can directly interact with us, in shops, hospitals, schools and more. Transparency about robots’ capabilities and level of autonomy should be integrated into the design from the start.




Comment & Opinion

  • Artificial intelligence systems copy and amplify existing societal biases, a problem that is by now widely acknowledged and studied. But is current research on gender bias in natural language processing actually moving towards a resolution, asks Marta R. Costa-jussà.

    • Marta R. Costa-jussà

News & Views

  • In cooperative games, humans are biased against AI systems, even when such systems behave better than human counterparts would. This raises a question: should AI systems ever be allowed to conceal their true nature and lie to us for our own benefit?

    • Michael Rovatsos
  • Adversarial attacks make imperceptible changes to a neural network’s inputs so that the network recognizes them as something entirely different. This flaw can give us insight into how these networks work and how to make them more robust.

    • Gean T. Pereira
    • André C. P. L. F. de Carvalho


  • AI ethics initiatives have seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite this, Brent Mittelstadt highlights important differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.

    • Brent Mittelstadt


  • Algorithms and bots are capable of performing some behaviours at human or super-human levels. Humans, however, tend to trust algorithms less than they trust other humans. The authors find that bots do better than humans at inducing cooperation in certain human–machine interactions, but only if the bots do not disclose their true nature as artificial.

    • Fatimah Ishowo-Oloko
    • Jean-François Bonnefon
    • Talal Rahwan
  • Human face recognition is robust to changes in viewpoint, illumination, facial expression and appearance. The authors investigated face recognition in deep convolutional neural networks by manipulating the strength of identity information in a face by caricaturing. They found that networks create a highly organized face similarity structure in which identities and images coexist.

    • Matthew Q. Hill
    • Connor J. Parde
    • Alice J. O’Toole
  • Photonic computing devices have been proposed as a high-speed and energy-efficient approach to implementing neural networks. Using off-the-shelf components, Antonik et al. demonstrate a reservoir computer that recognizes different forms of human action from video streams using photonic neural networks.

    • Piotr Antonik
    • Nicolas Marsal
    • Damien Rontani
  • Deep learning is currently transforming digital pathology, helping to make more reliable and faster clinical diagnoses. A promising application is in the recognition of malignant white blood cells—an essential step for detecting acute myeloid leukaemia that is challenging even for trained human examiners. An annotated image dataset of over 18,000 white blood cells is compiled and used to train a convolutional neural network for leukocyte classification. The network classifies the most important cell types with high accuracy and can answer clinically relevant binary questions with human-level performance.

    • Christian Matek
    • Simone Schwarz
    • Carsten Marr

Challenge Accepted

  • The first Smart Cities Robotics Challenge, organized by the European Robotics League, took place on 18–21 September at the Centre:MK shopping centre in Milton Keynes. The competition tested the ability of robots to interact with humans in everyday tasks, as well as with the digital infrastructure of a smart city.

    • Jacob Huth
