Artificial intelligence

Robots with instincts

An evolutionary algorithm has been developed that allows robots to adapt to unforeseen change. The robots learn behaviours quickly and instinctively by mining the memory of their past achievements. See Letter p.503

Intelligence, by some accounts, is synonymous with the ability to predict the future. Because doing so quickly can often mean the difference between life and death, our brains have evolved to be able to search the vast number of potential futures easily. How is such a feat accomplished? On page 503 of this issue, Cully et al.1 attempt to answer this question by demonstrating that robots can learn to recover quickly and robustly from physical damage — a sudden event that requires them to adopt a new behavioural strategy to continue functioning. The robots (a six-legged mobile robot and a robotic arm; Fig. 1) use a trial-and-error algorithm that lets them tap into the experiences they have accumulated over a simulated lifetime, to quickly find optimal compensating behaviours as if by instinct (see Supplementary Video 1 in ref. 1).

Figure 1: Adaptive machines.

Antoine Cully/UPMC

Cully et al.1 have designed an algorithm that allows robots to develop strategies for overcoming the effects of damaged limbs. Two robots were used: a, a hexapod (width 50 centimetres); b, a robotic arm (length 62 cm).

Accurate prediction of events in complex environments requires experience, an understanding of 'how the world works', and the capacity to evaluate one's own actions in the context of those of others. It can be argued that the further out in time an organism or a machine can make accurate predictions of the future, the more intelligent it is. Using this definition, even simple organisms have some intelligence: microbes such as Escherichia coli, for example, make predictions about where they must move to find higher concentrations of sugars, and squirrels anticipate the winter by stashing away nuts.

Among animals, humans have arguably the highest level of intelligence, because we can anticipate events hundreds, thousands or even millions of years in the future — albeit largely in domains that do not involve the actions of people, such as planetary orbital dynamics. How can we begin to understand the cognitive underpinnings of a predictive capability that is, to a greater or lesser extent, inherent in all forms of life on Earth? One way, following Richard Feynman's dictum “What I cannot create I do not understand”, is to recreate intelligence in a machine or robot.

Attempts to create robot intelligence have come and gone with limited success over the past half-century. Notwithstanding the successes of chess-playing programs, IBM's artificially intelligent computer Watson, and the advent of algorithms for self-driving cars, a machine with human-like intelligence still eludes us, even as great strides are being made.

Previous studies in the field of robot cognition2,3 have suggested that the ability to plan future actions hinges on the ability to recreate a model of the world inside the robot brain — an abstract version, but one that is accurate enough for mental trials and errors to quickly reveal the best strategy to adopt. But even supposing that these model representations4 can be generated, how can the vast 'space' of likely future actions be searched quickly and efficiently?

Cully et al. subjected their robots to several different unforeseen changes in the machines' morphology (akin to damage), and then asked them to find movement strategies that would compensate for the injury. Before being injured, the robots used an algorithm to establish a baseline of possible actions, which they used after injury to try out moves that were likely to be successful before deciding on any particular compensatory behaviour. Even though the range of possible behaviours (the behaviour space) for a robot might theoretically be infinite, this baseline can be established because, in reality, a robot's actions are constrained by its morphology.
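The post-damage phase can be sketched as follows, assuming the pre-damage experience is stored as a map from behaviour niches to (predicted fitness, controller parameters) pairs. This greedy version is a deliberate simplification of the authors' search, which also updates its predictions for untested behaviours after each physical trial; the function names and the scalar 'good enough' threshold here are illustrative assumptions, not the paper's interface.

```python
def adapt(behaviour_map, try_on_robot, good_enough):
    """Greedy trial-and-error sketch: test stored behaviours in order of
    predicted fitness until one performs well enough on the damaged robot."""
    # Each map entry is (predicted_fitness, controller_parameters).
    candidates = sorted(behaviour_map.values(), key=lambda fc: fc[0], reverse=True)
    best_params, best_observed = None, float("-inf")
    for predicted, params in candidates:
        observed = try_on_robot(params)  # one real trial on the damaged robot
        if observed > best_observed:
            best_params, best_observed = params, observed
        if observed >= good_enough:      # stop as soon as a compensating gait works
            break
    return best_params, best_observed
```

Because the map was built before the injury, the predicted fitness values only rank which behaviours are worth trying first; each candidate must still be verified on the now-damaged body.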

A hexapod robot such as that studied by the authors is controlled by 36 parameters, but most of the strategies (sequences of motor activations in a 36-dimensional space) make no sense. Within the robot's 'embodiment'5 — the way in which the robot's body is realized — only a small subset of activations can follow any particular prior activation. In other words, the robot's embodiment dramatically reduces the number of potential strategies, so that sensible actions occupy a severely reduced behaviour space (think of a line instead of a sphere). This reduced space is actually searchable in real time.

The authors created the set of all possible behaviours by having each robot perform many thousands of motions (sequences of motor activations) and recording the 'fitness values' of each sequence. The fitness could be as simple as the distance travelled by the robot. Collating this database is time-intensive, but it is analogous to what happens in the natural world, in which living organisms have a lifetime to acquire such information. The robots synthesized new behaviours from this data set using a set of special-purpose machine-learning algorithms that assume that — even in changed circumstances — the actions that are most likely to succeed are 'close' to other such actions in a suitably defined behaviour space.
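The map-building step described above can be sketched as an evolutionary loop that keeps, for each region of a low-dimensional behaviour descriptor, the best-performing controller found so far. This is an illustrative reconstruction, not the authors' code: the one-dimensional descriptor, the toy fitness function standing in for distance travelled, and all parameter choices below are assumptions.

```python
import random

N_PARAMS = 36   # controller parameters, as for the hexapod
N_BINS = 10     # discretization of a toy 1-D behaviour descriptor

def random_controller():
    return [random.uniform(0.0, 1.0) for _ in range(N_PARAMS)]

def mutate(params, sigma=0.1):
    # Replication with variation: a small Gaussian perturbation, clamped to [0, 1].
    return [min(1.0, max(0.0, p + random.gauss(0.0, sigma))) for p in params]

def simulate(params):
    """Stand-in for a physics simulation: returns (behaviour_descriptor, fitness).
    Here the descriptor is the mean activation and the fitness a toy function."""
    descriptor = sum(params) / len(params)
    fitness = sum(p * (1.0 - p) for p in params)  # placeholder for distance travelled
    return descriptor, fitness

def build_map(evaluations=20000):
    archive = {}  # niche index -> (fitness, params): best controller per niche
    for _ in range(evaluations):
        if archive and random.random() < 0.9:
            # Usually vary an existing elite rather than start from scratch.
            _, parent = random.choice(list(archive.values()))
            candidate = mutate(parent)
        else:
            candidate = random_controller()
        descriptor, fitness = simulate(candidate)
        niche = min(int(descriptor * N_BINS), N_BINS - 1)
        # Selection: a candidate displaces the incumbent only if it is fitter.
        if niche not in archive or fitness > archive[niche][0]:
            archive[niche] = (fitness, candidate)
    return archive

behaviour_map = build_map()
```

The many thousands of simulated motions correspond to the `evaluations` budget; what survives is not the raw database but one elite behaviour per niche, which is what makes the later post-damage search so small.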

Although these machine-learning algorithms are unlikely to be similar to those used, for example, by mammalian cognitive systems, they share a common premise: that a behaviour space that is dramatically reduced through embodiment, and that is learned from experience, can be searched quickly through trial and error. If we return to the analogy of a one-dimensional line as opposed to a sphere of possible strategies, only two directions have to be attempted on the line before the preferred direction is clear, whereas in a sphere six directions (two along each of three axes) must be sampled. Given that the robot's behaviour space is 36-dimensional, it is clear that this 'flattening' of the space of options can have dramatic effects.
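The scale of this flattening can be illustrated with a back-of-the-envelope count. The three-settings-per-parameter discretization and the two-dimensional, ten-bin descriptor below are toy assumptions, not figures from the paper:

```python
# Exhaustive search in the raw 36-dimensional controller space,
# assuming (for illustration) just 3 settings per parameter:
raw_space = 3 ** 36            # about 1.5 x 10^17 candidate strategies

# Search over a behaviour map that keeps one elite per niche of a
# low-dimensional descriptor, e.g. 10 bins along each of 2 axes:
flattened_space = 10 ** 2      # 100 niches to try on the damaged robot

print(raw_space // flattened_space)  # the reduction factor
```

Even with these crude numbers, the number of trials a damaged robot would need drops by some fifteen orders of magnitude, which is what makes real-time recovery plausible.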

Could these intuitive trial-and-error strategies be used to discover more-general problem-solving methodologies, of the kind that require planning in uncertain environments? It is difficult to imagine that the method could easily be scaled up to such a level; this particular algorithm was hand-designed by the authors, whereas the 'algorithm' our brains use is the result of millions of years of Darwinian tinkering and pruning.

Given the failure of past efforts to design robots that display the quick, intuitive and situation-appropriate behaviour of even the smallest rodents, perhaps it is time to give up on the idea that we can design brains, and instead place our hopes in the power of adaptive and evolutionary algorithms. Indeed, the core algorithm that generates the map of possible high-performance behaviours in Cully and colleagues' study is inherently evolutionary, because good strategies are improved on by replication with variation, and selection.

We may never understand our brains in terms of information-processing concepts, but we do understand how to harness the power of evolution. We should therefore let evolution create for us what we do not understand, one more time.


References

1. Cully, A., Clune, J., Tarapore, D. & Mouret, J.-B. Nature 521, 503–507 (2015).

2. Bongard, J., Zykov, V. & Lipson, H. Science 314, 1118–1121 (2006).

3. Adami, C. Science 314, 1093–1094 (2006).

4. Marstaller, L., Hintze, A. & Adami, C. Neural Comput. 25, 2079–2107 (2013).

5. Pfeifer, R. & Bongard, J. How the Body Shapes the Way We Think: A New View of Intelligence (MIT Press, 2007).


Author information


Correspondence to Christoph Adami.


Cite this article

Adami, C. Robots with instincts. Nature 521, 426–427 (2015). https://doi.org/10.1038/521426a
