Volume 1

  • No. 12 December 2019

    Portable haptics with foldable robotics

    Robotic devices can be made more compact and lightweight by using origami techniques. In this issue, Stefano Mintchev et al. present an origami device that serves as a portable haptic interface, sensing and initiating motion in three dimensions. Named Foldaway, the device folds flat and can be used as an intuitive haptic 3D joystick in various virtual-reality and augmented-reality applications.

    See Mintchev et al.

  • No. 11 November 2019

    Cooperative attitudes

    Human–robot interaction studies are essential for understanding the complexities of human responses to robots and our willingness to cooperate with robots in daily activities. In a paper in this issue, Fatimah Ishowo-Oloko et al. report results from a behavioural experiment: they asked participants to play repeated cooperation games with opponents that were either human or algorithmic, and found that humans tend to trust algorithms less. The reinforcement algorithm, designed to learn effective behaviour within a few rounds, outperformed human opponents in inducing cooperation, but this advantage was lost once its non-human nature was disclosed (a toy sketch of such a repeated game follows this entry). The question of transparency in human–robot interaction is further explored in a News & Views and the Editorial.

    See Ishowo-Oloko et al., Rovatsos and Editorial
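
    For intuition, the setup resembles a repeated cooperation game in which a learning agent adapts to its opponent's play. Below is a minimal Python sketch of such a game with a toy epsilon-greedy bot; the payoff values, the bot's update rule and all names are illustrative assumptions and do not reproduce the algorithm used in the study.

    ```python
    import random

    # Illustrative payoff matrix for a repeated cooperation game
    # (values are hypothetical, not those used in the study).
    # Each entry: (row player's payoff, column player's payoff).
    PAYOFFS = {
        ("C", "C"): (3, 3),   # mutual cooperation
        ("C", "D"): (0, 5),   # cooperator exploited
        ("D", "C"): (5, 0),   # defector exploits
        ("D", "D"): (1, 1),   # mutual defection
    }

    class BanditBot:
        """Toy epsilon-greedy agent that tracks the average payoff of each move."""

        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            self.totals = {"C": 0.0, "D": 0.0}
            self.counts = {"C": 0, "D": 0}

        def choose(self):
            # Explore at random until both moves have been tried at least once.
            if random.random() < self.epsilon or 0 in self.counts.values():
                return random.choice(["C", "D"])
            return max(self.totals, key=lambda a: self.totals[a] / self.counts[a])

        def update(self, action, payoff):
            self.totals[action] += payoff
            self.counts[action] += 1

    def play_rounds(bot, opponent_policy, n_rounds=20):
        """Play n_rounds between the bot (row player) and an opponent policy."""
        history = []
        for _ in range(n_rounds):
            bot_move = bot.choose()
            opp_move = opponent_policy(history)
            bot_payoff, _ = PAYOFFS[(bot_move, opp_move)]
            bot.update(bot_move, bot_payoff)
            history.append((bot_move, opp_move))
        return history

    # Example opponent: a forgiving policy that mirrors the bot's last move.
    tit_for_tat = lambda history: "C" if not history or history[-1][0] == "C" else "D"
    print(play_rounds(BanditBot(), tit_for_tat))
    ```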

  • No. 10 October 2019

    Designing machines with feeling analogues

    Living organisms evaluate their own goals and behaviour in a dynamic world through homeostasis: the regulation of internal body states. Man and Damasio propose designing machines with something akin to this physiological process, so that they have internal guidance for making decisions and controlling behaviour. The authors consider the possibility of constructing robots with bodies that, in a process that mimics homeostasis, need to be maintained within a narrow range of viability states. Examining advances in soft robotics, the authors raise the possibility of building machines with sensors and effectors that provide them with multimodal homeostatic data, or feeling analogues.

    See Man and Damasio

  • No. 9 September 2019

    Balancing user and robotic control

    Brain–machine interfaces can augment human capabilities and restore functions. Over the past decade, advances in materials engineering, robotics and machine learning have opened up new possibilities in this area. Katy Z. Zhuang et al. develop a robotic hand prosthesis that allows not only user-controlled movement but also assisted grasping in a shared control scheme. This is accomplished by first decoding myoelectric signals with a machine learning method to control individual fingers; this proportional control of fine movements is then combined with an algorithmic controller that assists stable grasping by maximizing the area of contact between the prosthetic hand and an object (a simplified sketch of such a shared control scheme follows this entry). Elsewhere in this issue, Musa Mahmood et al. demonstrate a portable, wireless, flexible scalp electroencephalography system that combines state-of-the-art flexible electronics with convolutional neural networks for real-time neural signal classification. In our Editorial, we look at some of the history of brain–machine interfaces, going back to Norbert Wiener’s cybernetics.

    See Zhuang et al., Mahmood et al. and Editorial
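
    The shared control idea can be pictured as blending decoded user intent with an automatic grasp-assist term. The Python sketch below is a minimal illustration under assumed interfaces; the linear 'decoder', the contact-area assist term and the blending weight are hypothetical simplifications, not the controller described in the paper.

    ```python
    import numpy as np

    def decode_finger_commands(emg_window, decoder_weights):
        """Toy linear 'decoder': map a window of myoelectric signals to
        proportional commands for each finger (values in [0, 1]).
        A real system would use a trained machine learning model."""
        features = np.abs(emg_window).mean(axis=0)        # crude amplitude features
        commands = features @ decoder_weights             # assumed linear mapping
        return np.clip(commands, 0.0, 1.0)

    def grasp_assist(contact_areas, target=1.0):
        """Toy assist term: close each finger further the smaller its current
        contact area with the object, nudging the hand towards a stable grasp."""
        return np.clip(target - contact_areas, 0.0, 1.0)

    def shared_control(user_cmd, assist_cmd, alpha=0.6):
        """Blend user intent and automatic assistance; alpha is an assumed
        arbitration weight (1.0 = fully user-driven, 0.0 = fully automatic)."""
        return alpha * user_cmd + (1.0 - alpha) * assist_cmd

    # Example with 4 fingers and fabricated sensor readings.
    rng = np.random.default_rng(0)
    emg_window = rng.normal(size=(200, 8))            # 200 samples x 8 EMG channels
    decoder_weights = rng.uniform(0, 0.3, size=(8, 4))
    contact_areas = np.array([0.2, 0.6, 0.1, 0.9])    # normalised contact per finger

    user_cmd = decode_finger_commands(emg_window, decoder_weights)
    finger_cmd = shared_control(user_cmd, grasp_assist(contact_areas))
    print(finger_cmd)
    ```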

  • No. 8 August 2019

    A metric for task-oriented grasping

    Robots that work safely alongside humans in unstructured environments need to be able to manipulate objects effectively for a variety of tasks. Much attention has been focused on the challenge of grasping different objects with robotic hands. However, Ortenzi et al. argue that grasping depends on the task. They develop a new metric that measures the success of robot manipulation in terms of the goal of the task itself, for instance handing over an object, rather than the action of grasping.

    See Ortenzi et al.

  • No. 7 July 2019

    Feature activation maps for digital pathology

    Machine learning is often described as a tool that sacrifices transparency and simplicity in return for higher predictive power. Our cover image features progress by Faust et al. towards more transparent machine-learned models for digital pathology: they demonstrate how learned features in histopathologic images correlate with human-understandable patterns and groupings (a generic sketch of mapping learned features back onto image regions follows this entry).

    See Faust et al.
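
    The general idea of relating learned features to image regions can be illustrated with a class-activation-map-style visualisation, in which convolutional feature maps are weighted by their contribution to a class of interest. The sketch below is a generic numpy illustration, not the authors' method; all shapes and values are fabricated.

    ```python
    import numpy as np

    def class_activation_map(feature_maps, class_weights):
        """Weight each convolutional feature map by its contribution to one
        output class and sum them into a single spatial heat map.

        feature_maps: array of shape (channels, height, width) from the last
                      convolutional layer of a trained CNN.
        class_weights: array of shape (channels,) linking those channels to
                       the class of interest (e.g. one tissue type).
        """
        cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
        cam -= cam.min()
        if cam.max() > 0:
            cam /= cam.max()                                     # normalise to [0, 1]
        return cam

    # Example with fabricated activations: 64 channels over a 14x14 grid.
    rng = np.random.default_rng(1)
    feature_maps = rng.random((64, 14, 14))
    class_weights = rng.random(64)
    heat_map = class_activation_map(feature_maps, class_weights)
    print(heat_map.shape, heat_map.max())
    ```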

  • No. 6 June 2019

    Learning an atlas for the brain

    Deep learning methods provide a powerful tool for processing biological and medical images. In this month’s issue, a deep neural network is used by Iqbal et al. for robust registration of brain images across different stages of brain development, and by Shan et al. for accurate reconstruction of computed tomography scans acquired at low radiation doses. This issue also features an interview with Effy Vayena, who discusses the UK National Health Service’s recent code of conduct for using such AI-based systems in healthcare.

    See Iqbal et al., Shan et al. and Q&A with Vayena

  • No. 5 May 2019

    Going against the current

    An autonomous underwater vehicle is designed with a distributed sensory system over its surface, inspired by the lateral line sensing of fish, so that it can measure hydrodynamic forces and compensate for disturbances immediately. The vehicle manoeuvres using jet thrusters, a mechanism inspired by the jet propulsion of squid. Such underwater robots can feel and respond to disturbances instantly and show improved position tracking.

    See Michael Krieg, Kevin Nelson and Kamran Mohseni

  • No. 4 April 2019

    Learning from human decision making

    Artificial intelligence and machine learning systems may surpass human performance on a variety of tasks, but they may also mimic or amplify human errors or biases. This issue of Nature Machine Intelligence features a Perspective describing decades of research by psychologists on the development and prevention of errors and biases in human judgment and decision making. The authors provide connections between the psychology and machine learning literatures, and offer guideposts for the development and improvement of machine learning algorithms.

    See Alexander S. Rich and Todd M. Gureckis

  • No. 3 March 2019

    Walk this way

    A tendon-driven robotic limb learns movements autonomously from sparse experience, through a short period of ‘motor babbling’ (that is, repeated exploratory movements) followed by a phase of reinforcement learning (a toy illustration of this two-stage approach follows this entry). In the photo, the limb is learning to make cyclic movements to propel a treadmill. The approach is a step towards designing robots with the versatility and robustness of vertebrates, robots that can adapt quickly to everyday environments.

    See Ali Marjaninejad et al.
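
    The two-stage idea, random exploratory babbling followed by learning from the collected experience, can be sketched in a toy setting. Everything below (the one-joint 'limb', the least-squares model fit and the greedy command selection) is a hypothetical simplification for illustration, not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def limb(motor_commands):
        """Toy stand-in for tendon-driven limb dynamics: an unknown linear map
        from three motor tensions to joint displacement, plus noise."""
        true_gains = np.array([0.8, -0.3, 0.5])
        return motor_commands @ true_gains + rng.normal(scale=0.05)

    # Phase 1: motor babbling -- issue random commands and record what happens.
    babble_cmds = rng.uniform(0, 1, size=(30, 3))
    babble_disp = np.array([limb(c) for c in babble_cmds])

    # Phase 2: learn a rough model from the sparse babbling data
    # (here a simple least-squares fit; the paper's method is more elaborate).
    gains_hat, *_ = np.linalg.lstsq(babble_cmds, babble_disp, rcond=None)

    def command_for(target_disp, n_candidates=200):
        """Pick the candidate command whose predicted displacement is closest
        to the target (a crude stand-in for refinement by reinforcement)."""
        candidates = rng.uniform(0, 1, size=(n_candidates, 3))
        predicted = candidates @ gains_hat
        return candidates[np.argmin(np.abs(predicted - target_disp))]

    # Drive the limb through a cyclic target trajectory.
    for target in np.sin(np.linspace(0, 2 * np.pi, 8)):
        cmd = command_for(0.5 * target)
        print(f"target {0.5 * target:+.2f} -> achieved {limb(cmd):+.2f}")
    ```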

  • No. 2 February 2019

    Looking for the right questions

    Machine learning offers scientists a powerful tool for probing data, but these tools must be developed with the right questions in mind. This issue features a Perspective exploring the challenges the social sciences face in connecting to AI research, a Comment from conservation ecologists urging a focus on the right metrics and an ethical approach for applying machine learning ‘in the wild’, and the next instalment of our Challenge Accepted series, which highlights the challenge of finding the right question, and the right prize, when organizing data science competitions.

  • No. 1 January 2019

    Intelligent collaboration within reach

    As robots become more skilled at performing complex tasks, the next step is to enable useful and safe interactions with humans. To collaborate with and assist us effectively, robots need to be able to understand human actions and intent. This issue of Nature Machine Intelligence features an Article describing a game-theoretic approach to adaptive human–robot collaboration, as well as a Comment that considers how several trends in robotics and AI research are converging towards a fresh take on collaborative robotics.

    See Li et al., News & Views by Drnach & Ting and Comment by Goldberg