Perspective

Evolving embodied intelligence from materials to machines


Natural lifeforms specialize to their environmental niches across many levels, from low-level features such as DNA and proteins, through to higher-level artefacts including eyes, limbs and overarching body plans. We propose ‘multi-level evolution’, a bottom-up automatic process that designs robots across multiple levels and niches them to tasks and environmental conditions. Multi-level evolution concurrently explores constituent molecular and material building blocks, as well as their possible assemblies into specialized morphological and sensorimotor configurations. Multi-level evolution provides a route to fully harness a recent explosion in available candidate materials and ongoing advances in rapid manufacturing processes. We outline a feasible architecture that realizes this vision, highlight the main roadblocks and how they may be overcome, and show robotic applications to which multi-level evolution is particularly suited. By forming a research agenda to stimulate discussion between researchers in related fields, we hope to inspire the pursuit of multi-level robotic design all the way from material to machine.

Robots are on the rise, and are increasingly ubiquitous in what are known as ‘structured’ environments. Pick-and-place machines are a good example—their interactions with the environment are predictable and easily controllable. As a counterpoint, robots consistently struggle in complex, unpredictable ‘unstructured’ environments1,2. Cataloguing biodiversity in remote areas, searching destroyed buildings for survivors following an earthquake, and exploring labyrinthine cave systems are good examples.

The challenge of embodied intelligence

Natural life thrives in unstructured environments through a specific brand of intelligence known as ‘embodied cognition’3. Intelligent behaviour emerges from tight coupling between an agent’s body and brain and the environment, not solely from the brain. In the taxonomy of philosophy, it opposes the ‘I think therefore I am’ of Descartes, and a plethora of research to date has shown that the form and function of an agent's physical presence play an important role in learning, development and the generation of suitable in-environment behaviour4.

Life’s ability to produce useful embodiments comes from a free-form evolutionary process where variance occurs across multiple levels5. Generally speaking, mutations in low-level DNA lead to changes in protein expression, facilitating an emergent process defining the structure and composition of higher-level features including eyes, hands and limbs, and also their placement in body plans. Making robot design similarly free-form and level-based might herald a new wave of capable embodiments to finally tackle challenging unstructured environments.

Research in embodied cognition and its artificial analogue, embodied artificial intelligence (AI)4,6, has long established that complex environments can only be tackled by sufficiently capable combinations of body and brain. In robots, body design has lagged behind due to inherent manufacturing complexities (through cell division, nature gets this ability ‘for free’). As such, incorporating a variety of materials into robot design has been a long-standing ‘holy grail’ for embodied AI4 and robotics in general. We present a straightforward, reasonably scalable path towards incorporating material search and selection into robot design: a more free-form and specializing process than any currently available method.

We call this algorithmic framework ‘multi-level evolution’, or MLE. Here we consider a three-level architecture, although as many levels as necessary may be instantiated depending on problem demands. Three levels is a natural split, based on long-standing delineations across the large, active and well-established fields of materials science, robotics and component design.

At the lowest level, materials are discovered. Components are created by combining one or more materials within a geometry. Finally, robots are created by integrating components into template ‘body plans’, and evaluating them on a task in an environment. New candidate materials and components are discovered during the process, continually increasing the range of possible robot designs.

A key takeaway is that MLE directly contrasts with conventional engineering approaches, which, because of time and cost constraints, often search for versatile generalist robots that do a bit of everything at a reasonable level of performance. MLE is a universal designer (across a wider design space, and for any task–environment niche) that generates specialists by harnessing diversity across all levels, and the emergence of useful artefacts and artefact combinations.

In this Perspective, we outline how new types of artificial evolution, which are ready to exploit the same wave of ubiquitous computing resources that powered the rise of deep learning, and have been shown to achieve a corresponding leap in performance7, can harness the recent explosion of available materials and manufacturing techniques (see section ‘Enabling technologies’) to create powerfully embodied robots. We review the main roadblocks to the realization of this vision. We define the key features of MLE architectures, and, using examples of cutting-edge evolutionary algorithms, propose a simple implementation. We highlight use cases to which MLE is particularly suited. Finally, we sketch out a path towards increasingly capable MLE implementations, and discuss the implications of MLE for the field of robotics.

Enabling technologies

MLE is underpinned by a rapidly expanding range of materials and increasingly capable advanced manufacturing technology.


According to the laws of chemistry, the number of materials available for search is ~10^100 (ref. 8; for comparison, there are ‘only’ 10^82 atoms in the universe). This provides an almost infinite toolbox for designing bespoke, functional materials for robots—including new sensing, actuation and power materials9—that we are increasingly able to design, characterize and synthesize. Accelerating the development cycle of bespoke materials is key as the space of possibilities is so vast (using, for example, robotic materials synthesis10 or combinatorial materials libraries11).

Semi- or fully autonomous closed-loop systems use robots to efficiently perform experiments with reduced reliance on humans, and this naturally couples with techniques that automatically plan optimal sets of experiments12,13 to reach desired material properties14. Robots can perform multiple simultaneous experiments, vastly reducing time and human effort while increasing the provenance of experimental data for computational, AI and machine learning methods. These methods are now mature enough to reliably predict properties of new materials, allowing a vast set of previously physical experiments to be conducted (cheaper and faster) virtually8. High-throughput computational techniques exploit massive datasets and advanced modelling to the same effect15,16.

In combination, these advances provide unprecedented opportunities to design and manufacture ‘smarter’ and more specialized robotic materials that integrate sensing, actuation and other properties to create powerful embodiment options17.

Advanced manufacturing

MLE is poised to exploit additive18 and subtractive manufacturing of free-form structures and complex geometries19, printing intricate multi-part components from multiple materials in situ.

Behavioural diversity can be embedded during the manufacturing process through functional gradation20,21 to vary material properties (for example, stiffness and elasticity). Voxel blending gives fine-resolution, gradual tuning of build properties through continuous mixing of multiple materials. Production devices tuned for an ever-expanding range of feedstocks increase the diversity observed in recent composite and multi-material robots, for example22.

MLE is iterative, so streamlining construction and reducing reliance on humans is a priority. Sensors, actuators and power systems are readily printable in various configurations23, and continue to close the performance gap with their traditional counterparts while being increasingly integrated into multi-function materials during construction17. MLE is poised to benefit from (semi-)autonomous robot construction, including prototype generate-and-test systems24,25, culminating with whole robots constructed without human intervention26.

Inspiration and characteristics of multi-level architectures

MLE is a natural extension of evolutionary robotics; a field that harnesses iterative, population-based algorithms to generate robot bodies, brains or both27. A typical evolutionary robotics experiment defines a representation—how the genotype (string of numbers) maps to a phenotype (physical robot). To capture sufficient complexity, these representations are typically indirect. Simple genotypes define more complex robot phenotypes, potentially incorporating naturally observed features including gene reuse (for example, to encode two identical eyes), radial and bilateral symmetry (seen across nature in body plans) and scaling factors (across the five fingers of a hand)28. The experiment randomly initializes a population using the representation, and tests each robot's task–environment performance against a user-defined ‘fitness function’. Analogies of genetic mutation and recombination induce variance in the genotypes to create a new generation, with a preference to select high-fitness parents. This process iterates until some acceptable level of performance is met.
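The initialize–evaluate–select–vary loop described above can be sketched as a minimal, generic evolutionary algorithm. The genome length, mutation scale and toy fitness function below are illustrative assumptions, not those of any specific evolutionary robotics system.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Minimal evolutionary loop: initialize, evaluate, select, vary, iterate."""
    rng = random.Random(seed)
    # Randomly initialize a population of real-valued genotypes.
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # prefer high-fitness parents
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)            # recombination of two parents
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)            # point mutation
            child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children                     # elitism: parents survive
    return max(pop, key=fitness)

# Toy fitness standing in for task performance: match a target 'phenotype' of 0.5s.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
```

In a real evolutionary robotics experiment the fitness call would involve simulating or physically evaluating the robot, which is why the number of evaluations matters so much in practice.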

Evolutionary robotics provides environmental adaptation29, and explores a wider design space than other approaches, locating unconventional short-cut designs that may otherwise be missed30. Owing to a dearth of versatile, affordable manufacturing processes, evolutionary robotics has traditionally focused on controller generation for fixed morphologies31. Signalled by the first three-dimensional (3D) printed evolved robot in 200032, we now find ourselves in the era of the ‘evolution of things’33,34, where complex physical artefacts are evolved and physically instantiated35.

From the MLE perspective, classic evolutionary robotics is the top level that finds environment- and task-specific controllers and body plans—arrangements of structure, sensing and actuation that together comprise a robot. Materials are not typically considered as part of the robot’s ‘genome’ (although idealized materials properties appear sporadically in simulation36). We posit that the missing link to unlocking richer embodiments is to discover, model and select real (and newly discovered) materials, and make them available in a holistic design process. MLE architectures are characterized by the following:

  1. Three vertically stacked levels (robot, component, material). Robots are arrangements of components, where a component is a combination of a geometry and one or more materials that occupy sections of the geometry.

  2. At least one search process per level, which is responsible for finding new artefacts within a given level. In the component level, we could run search processes for actuators, sensors and structural elements.

  3. Hybridization, a novel concept that requires real and virtual genomes to be identical, whether instantiated physically or virtually. This means we can easily ‘cross-breed’ between simulated and real artefacts.

We suggest evolutionary algorithms as bias-free and domain-agnostic37 default algorithms, with a track record of success in discovering molecules and materials8, components and structures38, and robots39, while being relatively efficient across all of these levels40. As each level is independent, we can use domain-specific algorithms/representations as required. For example, the materials level may benefit from capturing the underlying phenomena relating a material's structure to its behaviour41.

With the grand vision sketched out, let us now consider how emerging technologies can build simple prototype MLE architectures in the near future.

A conceptual MLE architecture

Our conceptual prototype uses evolutionary illumination algorithms42 (also called ‘quality diversity’ algorithms43) to produce diverse libraries of high-performing potential solutions across three levels (Fig. 1a). Libraries are n-dimensional grids of possible combinations of physical properties, discretized into bins44. For example, all actuators transmit force and consume energy, and all materials possess weight, rigidity, elasticity and compliance that can be exploited to generate robots adapted to specific environments. By measuring these properties we can assign each solution to the appropriate bin.

Fig. 1: Sample MLE architecture, creation of solutions, creation of a robot and hierarchical genotype.

a, A sample MLE architecture incorporating a single robot search process, three component search processes (for example, sensors, actuators and body segments), and four materials search processes (for example, polymers). For clarity only two dimensions of each process are shown, discretized into bins. Colour indicates the highest fitness solution found per bin; darker represents a fitter solution and white squares indicate no current solution. Over time, more bins are filled, and bin fitness is improved, which improves the quality and diversity of options available to the upper levels. An asterisk denotes an individual created physically; other individuals are virtual. b, Creating a diversity of high-quality solutions. At every iteration, the illumination algorithm (1) randomly selects an occupied bin, (2) adds random mutations to the current best solution of the bin, (3) evaluates the quality (fitness) and the features of the newly generated solution, (4) compares the quality of the newly generated solution with the current best of the bin that corresponds to its features, and keeps the best solution. These four steps are repeated until all the bins are occupied with satisfying solutions. c, Creating a robot. At the top level, a compositional pattern producing network (CPPN) defines the body plan. Once generated, an appropriate number of pointers into the components layer are set based on the number of component slots generated, and the corresponding components fused via post-processing to create the final robot. Pointers address a specific member of the library, in this case two integers for a two-dimensional library. In this case components are segregated into actuation (red diamonds), body structure (gold squares) and sensing (grey circles). A further CPPN per component defines component geometry, and subsequently the number of material pointers required for that component. 
Coloured grids (red, green, purple) abstractly represent the larger libraries seen in a and b. d, A hierarchical genotype that uses pointers to fully and efficiently define the robot. The robot controller (‘Ctrl’) is defined at the top level. Background colours in a, c and d signify different levels.

For clarity, Fig. 1 visualizes two properties per process, resulting in 2D grids. Practically, there will be many more properties (Table 1). It is critical that MLE generates a diverse set of solutions, rather than a single optimal solution as in traditional optimization and classic evolutionary robotics. Libraries allow each level to be explored independently and provide diversity to upper levels. Each level may be subdivided; for example, components can be subdivided into sensors, actuators and body structure. Each search process can also have its own solution representation, feature dimensions and search operators.
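The library-and-bins scheme above, driven by the four-step loop of Fig. 1b, can be sketched as a minimal MAP-Elites-style archive. The two feature dimensions, bin counts and toy evaluation below are assumptions for illustration only.

```python
import random

def illuminate(evaluate, features, n_bins=10, iters=2000, genome_len=4, seed=1):
    """Illumination loop: pick an elite, mutate it, evaluate, keep the best per bin."""
    rng = random.Random(seed)
    archive = {}  # (bin_x, bin_y) -> (fitness, genome)

    def to_bin(feats):
        # Discretize each measured feature (assumed in [0, 1]) into n_bins bins.
        return tuple(min(int(f * n_bins), n_bins - 1) for f in feats)

    def try_insert(genome):
        fit, feats = evaluate(genome), features(genome)
        key = to_bin(feats)
        if key not in archive or fit > archive[key][0]:   # step (4): keep best per bin
            archive[key] = (fit, genome)

    # Bootstrap the archive with a few random genomes.
    for _ in range(20):
        try_insert([rng.random() for _ in range(genome_len)])
    for _ in range(iters):
        _, elite = archive[rng.choice(list(archive))]      # step (1): random occupied bin
        child = [min(1.0, max(0.0, g + rng.gauss(0, 0.1))) for g in elite]  # step (2)
        try_insert(child)                                  # steps (3) and (4)
    return archive

# Toy domain: fitness is the genome mean; the features are the first two genes.
archive = illuminate(lambda g: sum(g) / len(g), lambda g: (g[0], g[1]))
```

Note that a new bin is always filled when first reached, so coverage (diversity) grows even while per-bin fitness improves—exactly the behaviour the levels rely on.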

Table 1 Sample search processes, tunable variables and desired properties that may be found in each level (non-exhaustive)

To begin an experiment, we bootstrap the lowest level with known materials, either from the literature or from previous MLE experiments. Each material is placed in a bin based on its physical properties. The components level then defines geometries and selects appropriate materials to fill those geometries, creating body segments, sensors and actuators and filling some component bins. At the highest level, we search for controllers and body plans—templates that define arrangements of components. Here, geometries and body plan layouts are defined using compositional pattern producing networks (CPPNs), specialized neural networks evolved to output geometric patterns displaying modularity, regularity and symmetry (see, for example, ref. 45).
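As a rough illustration of the CPPN idea (not a full implementation with evolved topology), a tiny fixed network mapping spatial coordinates to a material-presence decision could look like the following; the weights and activation choices are arbitrary assumptions.

```python
import math

def cppn(x, y, weights):
    """Tiny fixed-topology CPPN: spatial coordinates in, pattern value out.
    Feeding |x| and the radius r into the hidden units biases outputs towards
    bilateral and radial symmetry; the sine unit encourages repetition."""
    r = math.sqrt(x * x + y * y)
    h1 = math.sin(weights[0] * abs(x) + weights[1] * r)
    h2 = math.tanh(weights[2] * abs(x) + weights[3] * y)
    return math.tanh(weights[4] * h1 + weights[5] * h2)

# Query the network over a 9x9 grid: positive output = material present at that voxel.
weights = [3.0, 1.5, 2.0, -1.0, 1.0, 1.0]
pattern = [[cppn((i - 4) / 4, (j - 4) / 4, weights) > 0 for i in range(9)]
           for j in range(9)]
```

Because the inputs are symmetric in x, every pattern this particular network produces is bilaterally symmetric, regardless of the weights—an example of how representation choice bakes in the regularities mentioned above.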

As well as belonging to a bin, each material, component and robot has an associated fitness. For materials and components, we suggest fitness based on the universally beneficial property of cost; therefore the cheapest example that fulfils certain physical property requirements will be passed to the next level to reduce the manufacturing burden. More specific fitness measures, for example efficiency for an actuator or signal-to-noise ratio for a sensor, will be mediated by the environment, so we lose transferability for potential gains in performance. Robot fitness is based on its behaviour—how well it completes the task.

The grids at each level progressively fill out as new material, component and robot designs are discovered (Fig. 1b). An illumination search specifically encourages diversity42 through pressure to discover new combinations of physical properties, providing larger libraries and thus more opportunities to exploit materials and components in interesting ways, facilitating emergent behaviour. As lower levels focus on creating a diversity of options, significant opportunities arise for the spontaneous emergence of component–material combinations that facilitate useful behaviour, which will probably result in a good fitness score for the robot, with no constraints on exactly how that behaviour emerges. These behaviours are a holistic combination of the search efforts at every level and the in-environment performance of the resulting robot. Counterintuitively, illumination search is known to discover more ‘optimal’ outcomes than pure optimization approaches42; hence we expect high-performance artefacts. Cascading improvements may percolate through the levels; when a new material is found it could improve the fitness of the solution in a populated bin (replacing the previous best), or it may expand the number of filled bins in its level, and potentially the number of reachable bins at any level above it.

To instantiate a robot (Fig. 1c), we query the corresponding CPPN and seamlessly integrate the relevant components into the resulting body plan using post-processing. Accompanying the CPPN are a number of ‘pointers’ to bins in the components level, which selects specific components into the body plan. Each pointer addresses a specific bin in the level below. Similarly, a component consists of a geometry-defining CPPN and pointers to materials. Either the CPPN or the indices of the pointers may be altered during the evolutionary search, which changes the shape or composition of the affected artefacts. Materials may be represented and searched in a similar way. Once instantiated, the robot is evaluated based on desired mission performance to ascertain its fitness.

Unlike natural genotypes, which are defined at the DNA level, the genotype of a robot produced by MLE can be thought of as a hierarchy (Fig. 1d), where the robot body plan and controller are defined at the top level. Following the pointers from robot to components allows us to fully define the components used, and following each component’s pointers allows us to fully define the materials that comprise each component.
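A minimal data-structure sketch of this hierarchical, pointer-based genotype follows; the field names and the serialized-CPPN placeholders are our own illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass
from typing import List, Tuple

Pointer = Tuple[int, int]  # bin coordinates in a 2D library one level down

@dataclass
class Component:
    geometry_cppn: bytes            # serialized geometry-defining CPPN
    material_ptrs: List[Pointer]    # which material bins fill the geometry

@dataclass
class RobotGenome:
    body_plan_cppn: bytes           # serialized body-plan CPPN
    controller: bytes               # 'Ctrl' in Fig. 1d
    component_ptrs: List[Pointer]   # which component bins fill the body plan

def flatten(robot: RobotGenome, component_lib) -> List[Pointer]:
    """Follow pointers downwards to list every material bin the robot uses."""
    materials = []
    for ptr in robot.component_ptrs:
        materials.extend(component_lib[ptr].material_ptrs)
    return materials

# Toy library: one component occupying bin (0, 1), built from two material bins.
lib = {(0, 1): Component(b"cppn", [(2, 3), (4, 4)])}
robot = RobotGenome(b"cppn", b"ctrl", [(0, 1), (0, 1)])
```

Mutating a pointer swaps in a different component or material; mutating a CPPN changes shape or layout—the two variation routes described above.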

The only necessary inter-level communication is the passing of candidate solutions upwards for use by the next level. For efficiency, and for ease of integration into higher-level simulators/models, only phenotypic properties (that is, of the physical solution created) are passed between levels. The representation of a solution, plus details of experimental procedures, models/simulators used, learning algorithms and evaluation tests are stored in a database by the relevant layer as required, so results are repeatable.

Not all physical properties will be relevant in all situations, and can be safely ignored. Sparse feature selection methods46,47, applied as automatic dimensionality filters, give more weight to the features most relevant for fitness and function within the niche, and minimize combinatorial issues. In our example, this may select the physical properties in which we encourage diversity. Similarly, not all options will be required from lower levels. Each level can also design its own library from lower-level libraries according to its own objectives48. Combined, these processes reduce the required computational effort and the extent of physical characterization required.
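A crude stand-in for the sparse feature-selection methods cited above is a simple correlation filter: rank each physical property by how strongly it predicts fitness and keep only the top few. Real sparse methods (for example, Lasso) are more principled; this sketch only illustrates the dimensionality-filtering role.

```python
def select_features(samples, fitnesses, k=2):
    """Rank feature dimensions by |correlation| with fitness and keep the top k.
    A crude illustrative filter, not a replacement for proper sparse methods."""
    n, d = len(samples), len(samples[0])

    def corr(j):
        xs = [s[j] for s in samples]
        mx, mf = sum(xs) / n, sum(fitnesses) / n
        cov = sum((x - mx) * (f - mf) for x, f in zip(xs, fitnesses))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vf = sum((f - mf) ** 2 for f in fitnesses) ** 0.5
        return abs(cov / (vx * vf)) if vx and vf else 0.0

    return sorted(range(d), key=corr, reverse=True)[:k]

# Toy data: fitness depends only on features 0 and 2; features 1 and 3 are noise.
samples = [[i, (i * 7) % 5, 2 * i, (i * 3) % 4] for i in range(10)]
fitnesses = [s[0] + s[2] for s in samples]
kept = select_features(samples, fitnesses)
```

Downstream searches would then encourage diversity only along the kept dimensions, reducing both computational effort and physical characterization.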

Learning and behaviour

The focus of this Perspective is on improving the bodies of embodied robots. However, we need some way of generating useful behaviour from these bodies. In some cases, this can result solely from the interactions of materials and components in the robot’s body36, or through an automatic response to stresses experienced between the body and environment49. This morphological computing offloads the computation of behaviour from a controller onto the robot’s body50, reducing the required controller complexity. We expect MLE to greatly benefit from morphological computing, owing to the vast range of physical behavioural responses it can instantiate.

For more complex tasks, learning will be required to overtly direct the body–environment interactions our robots produce51, creating a controller (‘Ctrl’ in the robot genome in Fig. 1d). Software provides a wealth of options including neural networks, central pattern generators, behaviour trees and modular architectures27, which can be optimized through reinforcement learning, evolutionary algorithms and imitation. Post-deployment online learning offers the possibility to adapt controllers following hardware failures52 or in response to changing environmental conditions. Ultimately, the choice of controller and learning is a design decision; key requirements are that the body is controllable, its material and morphological composition properly exploited, and its behaviour suitable for the task.

Physical and virtual testing provides the best of both worlds

Manufacturing and testing each new material, component and robot in reality would be prohibitively expensive in terms of time and cost. The success of MLE hinges on the effective use of simulation and modelling, and blurring the lines between real and virtual.

MLE introduces the novel concept of ‘hybridization’, such that the representation describing a material (or component or robot) is identical, regardless of whether it is real or virtual. Bins in each level may be filled through physical experimentation or through the results of a simulation or predictive model. Simulated evolution runs concurrently with physical experimentation, and cross-breeding allows physical or virtual materials (or components or robots) to parent a child that may exist in the real world, in the virtual world or in both. The advantages of hybridization are significant; physical evolution is accelerated by the virtual component, which can run faster to find good robot features with less time and fewer resources, whereas simulated evolution benefits from the influx of ‘genes’ that are tested favourably in the real world.
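Because hybridization fixes a single representation for real and virtual artefacts, cross-breeding reduces to ordinary genetic operators plus a tag recording how each candidate was realized. The sketch below is illustrative; the tag values and genome encoding are assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    genome: list      # identical representation whether realized physically or virtually
    realized: str     # 'physical', 'virtual' or 'both'

def crossbreed(a: Candidate, b: Candidate, rng=None):
    """One-point crossover between candidates, regardless of how each parent
    was realized; the child starts virtual and may later be built physically."""
    rng = rng or random.Random(2)
    cut = rng.randrange(1, len(a.genome))
    return Candidate(a.genome[:cut] + b.genome[cut:], realized="virtual")

# A physically tested parent and a simulated parent produce a valid child.
physical = Candidate([0.1, 0.2, 0.3, 0.4], realized="physical")
virtual = Candidate([0.9, 0.8, 0.7, 0.6], realized="virtual")
child = crossbreed(physical, virtual)
```

The key point is that nothing in the operator needs to know which world either parent came from: that is what makes the influx of physically validated ‘genes’ into simulated evolution seamless.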

Physical experimentation provides necessary ‘ground truth’ data, the burden of which may be reduced through smart algorithmic design53. Evaluating robot performance in reality is particularly difficult (repeatability and physical damage are key problems), but increasingly feasible thanks to custom-designed test arenas54,55 and proof-of-concept ‘generate and test’ facilities25.

We must be able to simulate the performance of constituent materials and components in the top-level robot. Where possible, conducting all evaluations in the same simulator guarantees interoperability; nearly all simulators allow various materials properties to be defined and directly specified from lower levels. Multiscale modelling can enhance fidelity. Some properties, for example hyperelasticity, can be tricky to model and may require specialist tools. In this case, co-simulation can be used to link specialist simulators together, allowing materials to be simulated, and their results shared with a dedicated component simulator to determine overall performance. Such approaches integrate with techniques that automatically validate the material models for use in simulators56. Directly representing complex micro-level behaviours in higher-level models/simulators is difficult, but increasingly feasible as it receives ongoing research attention in multiple fields in materials science.

A key issue is the reality gap, where necessary abstractions lead to degraded performance when simulator-designed artefacts are transferred to reality. MLE heavily exploits techniques to reduce the gap. Gathering data on real designs and using a learning algorithm (for example, a neural network57 or Gaussian processes58) to create surrogate models of the performance59 improves accuracy and speed, and has been successful at the material level60, the design level61 and for robot controllers62. Physical testing can improve an existing simulator to more closely match reality, either tuning simulator parameters63,64 and/or combining the predictions of the simulator with those of a data-driven model65. In between these two ideas it is possible to learn a ‘transferability function’ that predicts the accuracy of the simulator for a given design66.
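The surrogate-model idea can be illustrated with a deliberately simple nearest-neighbour predictor: estimate an untested design's fitness from the most similar physically evaluated designs. This is a lightweight stand-in for the Gaussian-process and neural-network surrogates cited above, with made-up design vectors.

```python
def surrogate_predict(design, evaluated, k=3):
    """k-nearest-neighbour surrogate: predict the fitness of an untested design
    as the mean fitness of the k most similar evaluated designs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(evaluated, key=lambda df: dist(design, df[0]))[:k]
    return sum(f for _, f in nearest) / len(nearest)

# Ground-truth evaluations of a few designs (design vector -> measured fitness).
evaluated = [([0.0, 0.0], 1.0), ([1.0, 0.0], 2.0),
             ([0.0, 1.0], 2.0), ([1.0, 1.0], 3.0)]
estimate = surrogate_predict([0.9, 0.9], evaluated)
```

A real surrogate would also report its uncertainty (as a Gaussian process does), which is what allows physical experiments to be targeted where the model is least trustworthy.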

The benefits of MLE

MLE is primarily designed to harness materials to provide diverse, specialized robot designs. Other main benefits include the following:

  • Scalability: promoting scalability is necessary due to combinatorial issues67 brought about by embedding multiple search processes across three levels. Distributing the ‘full genome’ of a robot across multiple independent automatic design processes allows each level to be searched in parallel, using the specialist tools of each field where applicable to improve efficiency. Hybridization shifts the majority of the search effort into relatively cheap, parallelizable simulations and models.

  • Self-optimization: although the early stages of MLE are likely to be slow, with few options available, we envisage the system as somewhat self-optimizing; the longer it runs, the better our models become, and the more options are discovered in every layer.

  • Re-use: focusing on physical properties allows materials and components to transfer between MLE architectures. Processes can be swapped in or out of a level with relative ease.

  • Collaboration: MLE architectures will probably be distributed across multiple institutions depending on the availability of hardware and specialists, leading to an inherently collaborative effort integrating multiple research groups and the architecture itself, which promotes standardized, readily available experimental information and the cross-fertilization of ideas68. We may look to the Materials Genome Initiative for inspiration on standardizing MLE, encouraging collaboration and reducing barriers to entry69.

Opportunities for MLE architectures

As a new paradigm for designing robots, MLE will naturally gravitate towards certain applications. Consider the rapidly advancing field of soft robotics70,71: compliant, deformable robots that survive crushing, burning and other hazards that are characteristic of the unstructured environments into which we want to put robots. Integration of sensing, actuation and deformation are fundamental to soft robotics, and MLE most simply permits this using a single library of multi-function components, rather than dedicated sensing, actuation and so on.

Soft robotics currently lacks a codified design methodology, as deformable soft materials are not amenable to conventional approaches72. Designers often settle on a (frequently bio-inspired) preconceived design, for example an octopus or a jellyfish73, which instantly places heavy limitations on the designs considered. Rather than design a fish, MLE lets us ask a different question—what ‘creatures’ might evolution devise if its building blocks were not protein, muscle and bone, but rather polymers and composites? MLE is a perfect fit for the role of soft robot designer, harnessing diversity to comprehensively explore soft robot design spaces.

Soft robots have particularly interesting and powerful embodiments, which emerge through interacting arrangements of morphological and material properties74,75. MLE provides a continuous stream of new materials and increasingly capable componentry76,77, offering a pathway towards designing for embodiment: discovering specialized soft materials, fully leveraging those materials through the emergent generation of components and bodies, and creating controllers to strongly couple the resulting embodiments with the environment.

For our second design opportunity, let us cast our minds forward 20 to 30 years. Imagine that we want to perform basic environment monitoring with robots: to traverse terrain in a zone, gather some data and fully degrade after a while. This might sound simple at first, but critically depends on the environment. The Sahara is very hot, dry and sunny (during the day), but Antarctica is cold and icy. Creepers and other low-lying foliage in the Amazon present a markedly different challenge to rolling desert sand dunes.

Designing robots for each niche with classic engineering would require an army of engineers for each environment, and the engineering cost would sky-rocket; this is why most engineering effort goes into standardization rather than specialization. The alternative is MLE (Fig. 2), which could automatically design suitable robots (unique combinations of materials, morphology and behaviour) for each environment. They might resemble insects: relatively simple, small, highly integrated, highly specialized and fit for function. Note that the same MLE architecture, with shared materials and identical objectives, could adapt robots to account for seasonal differences within a biome or could design for each of the following environments, and provide the following features:

  • Antarctica: wind-powered, sliding locomotion, water-resistant, degrades with heat (in the summer)

  • The Amazon: crawling locomotion, degrades with humidity, biomass powered

  • Sahara: solar-powered, sliding locomotion, heat-resistant, degrades with ultraviolet light

Fig. 2: How MLE can provide a diversity of robots for a diversity of environmental niches.

Christopher Michel (Antarctica); Jean-Baptiste Mouret (Amazon); and Dimitry B. (Sahara)

In this alternative MLE architecture, each level consists of only one heterogeneous library of solutions. Such architectures are likely to be possible in the further future, where boundaries between sensing, actuation and structure are collapsed to promote emergence and integration, at the cost of increased computational costs and combinatorial effort.

Towards a new era of embodied intelligence

The convergence in materials, manufacturing and design paves the way towards radically new ways of producing robots. The main thesis of this Perspective is that MLE architectures can integrate different technologies and levels under an evolutionary umbrella. Considering that natural evolution succeeded in filling practically all environmental niches on Earth with highly adapted lifeforms, this approach holds great promise as a robotic design technique. MLE is admittedly ambitious, and as such we have identified four key challenges to be overcome during an MLE research programme:

  1. Initial designs will be constrained to materials that are easily created, characterized and modelled. MLE materials search, together with high-throughput efforts globally and advances in materials modelling, will gradually alleviate this issue.

  2. In an attempt to create emergent embodiments by filling as many bins as possible, our conceptual prototype trades efficiency for diversity. To counter this, imagine an ‘overseer’ program that greedily searches for good embodiments, allowing embodiments to reference desired material properties rather than only experimentally confirmed or modelled ones, and then skewing the materials search towards those properties whenever a promising embodiment is found. This raises a further issue: balancing the tension between such top-down influence and a bottom-up robot design process.

  3. Resources must be allocated across levels. This may be achieved, for example, by including enough physical experimentation to keep simulations close to reality, which may be quantified and managed using Gaussian processes to identify areas of model uncertainty. Bottlenecks are another issue: insufficient resource allocation, for example to actuators, may limit the range of final robot designs. Using discretized libraries lets us estimate coverage as the percentage of filled bins, and allocate more resources to searches lacking coverage.

  4. Ideally, MLE would be fully autonomous. However, the human designer will play a significant role for the foreseeable future, both setting up experiments (designing suitable measures of robot fitness, suitably discretizing libraries, identifying candidate materials and so on) and running them (characterizing materials, assembling and evaluating robots). Ongoing developments in automated characterization, testing and construction facilities will reduce this burden.
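The coverage-driven allocation of point 3 can be sketched concretely: treat each level's discretized library as a grid of bins, measure coverage as the fraction of filled bins, and redistribute the evaluation budget towards the least-covered level. The names and the allocation rule below are illustrative assumptions, not part of the MLE proposal itself.

```python
# Illustrative sketch: allocate an evaluation budget across MLE levels
# in proportion to each level's coverage gap (fraction of empty bins).
def coverage(library):
    """Fraction of bins in a discretized library that hold a solution."""
    filled = sum(1 for entry in library.values() if entry is not None)
    return filled / len(library)

def allocate_budget(libraries, total_evals):
    """Give more evaluations to levels whose libraries lack coverage."""
    gaps = {name: 1.0 - coverage(lib) for name, lib in libraries.items()}
    total_gap = sum(gaps.values())
    if total_gap == 0:  # every bin filled: split the budget evenly
        share = total_evals // len(libraries)
        return {name: share for name in libraries}
    return {name: round(total_evals * gap / total_gap)
            for name, gap in gaps.items()}

# Toy example: the materials search is nearly complete, actuators lag behind.
libraries = {
    "materials": {i: "solution" for i in range(9)} | {9: None},   # 90% full
    "actuators": {i: None for i in range(8)} | {8: "a", 9: "b"},  # 20% full
}
budget = allocate_budget(libraries, total_evals=1000)
```

Under this rule the under-covered actuator search receives the bulk of the budget, directly addressing the bottleneck problem described above.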

As well as challenges, we wish to highlight a significant opportunity: representations (recall that representations are mappings from genotype to phenotype). A key historical shift in evolutionary robotics was the move from direct (one-to-one) mappings to indirect representations, in response to the increased phenotypic complexity required of real-world artefacts. Hierarchical representation has received scant consideration to date; we see a huge opportunity for intelligent, level-spanning representations that describe complex artefacts built from other artefacts and their interactions.
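The direct/indirect distinction can be made concrete with a minimal sketch: a direct representation stores one gene per phenotypic element, whereas an indirect representation stores a few generative parameters that are expanded into a much larger, regular phenotype. The functions and the sinusoidal density pattern below are hypothetical illustrations, not the encodings proposed for MLE.

```python
import math

# Direct representation: one gene per phenotypic element (1-to-1 mapping).
def develop_direct(genotype):
    return list(genotype)  # the genotype *is* the phenotype

# Indirect representation: three generative parameters are expanded into a
# larger phenotype (here, a sinusoidal material-density pattern over n voxels).
def develop_indirect(genotype, n=100):
    amplitude, frequency, phase = genotype
    return [amplitude * math.sin(frequency * i + phase) for i in range(n)]

direct_pheno = develop_direct([0.1, 0.5, 0.9])      # 3 genes -> 3 values
indirect_pheno = develop_indirect([1.0, 0.3, 0.0])  # 3 genes -> 100 values
```

A level-spanning representation would extend this idea hierarchically, with one level's phenotype (for example, a material) serving as a building block in the genotype of the next (for example, a limb).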

Looking to the future, we envision a staged development of MLE systems:

  • Stage 1: within 5 years, prototype MLE systems will come online, spread across multiple research institutions. They will produce evolved robots in controlled laboratory settings, with heavy human intervention.

  • Stage 2: in about a decade, MLE systems will be able to generate robots for a suitably constrained real-world mission. Increasingly integrated construction techniques will speed up evolution and reduce the amount of human intervention required.

  • Stage 3: after around 20–30 years, we may see deployments in real-world environments. As models become more sophisticated and computing power more available, the currently segregated search processes will begin to merge into more monolithic ones, heightening the interplay between material and morphology and encouraging emergence (for example, Fig. 2).

Somewhat counterintuitively for an architecture based on segregated levels, MLE is about collapsing boundaries: between research institutions, between scientific disciplines, between reality and virtuality, and between robots and their constituent materials. In doing so, we hope to create a holistic design process for a new type of robot, specialized all the way from material to machine.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


References

1. Carlson, J. & Murphy, R. R. How UGVs physically fail in the field. IEEE Trans. Robot. 21, 423–437 (2005).
2. Atkeson, C. G. et al. in The DARPA Robotics Challenge Finals: Humanoid Robots to the Rescue (eds Spenko, M., Buerger, S. & Iagnemma, K.) 667–684 (Springer Tracts in Advanced Robotics, Springer, Basel, 2018).
3. Barrett, L. Beyond the Brain: How Body and Environment Shape Animal and Human Minds (Princeton Univ. Press, Princeton, 2011).
4. Pfeifer, R. & Bongard, J. How the Body Shapes the Way We Think: A New View of Intelligence (MIT Press, Cambridge, 2006).
5. Carroll, S. B., Grenier, J. K. & Weatherbee, S. D. From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design (Wiley, New York, 2013).
6. Brooks, R. A. Intelligence without representation. Artif. Intell. 47, 139–159 (1991).
7. Salimans, T., Ho, J., Chen, X., Sidor, S. & Sutskever, I. Evolution strategies as a scalable alternative to reinforcement learning. Preprint at (2017).
8. Le, T. C. & Winkler, D. A. Discovery and optimization of materials using evolutionary approaches. Chem. Rev. 116, 6107–6132 (2016).
9. Fischer, P., Nelson, B. & Yang, G.-Z. New materials for next-generation robots. Sci. Robot. 3, eaau0448 (2018).
10. Soldatova, L. N., Clare, A., Sparkes, A. & King, R. D. An ontology for a robot scientist. Bioinformatics 22, E464–E471 (2006).
11. Maier, W. F., Stoewe, K. & Sieg, S. Combinatorial and high-throughput materials science. Angew. Chem. Int. Ed. 46, 6016–6067 (2007).
12. Sans, V. & Cronin, L. Towards dial-a-molecule by integrating continuous flow, analytics and self-optimisation. Chem. Soc. Rev. 45, 2032–2043 (2016).
13. King, R. D. in KI 2015: Advances in Artificial Intelligence (eds Hölldobler, S., Krötzsch, M., Peñaloza, R. & Rudolph, S.) xiv–xv (Springer, Cham, 2015).
14. Granda, J. M., Donina, L., Dragone, V., Long, D. L. & Cronin, L. Controlling an organic synthesis robot with machine learning to search for new reactivity. Nature 559, 377–381 (2018).
15. Curtarolo, S. et al. The high-throughput highway to computational materials design. Nat. Mater. 12, 191–201 (2013).
16. Pyzer-Knapp, E. O., Suh, C., Gomez-Bombarelli, R., Aguilera-Iparraguirre, J. & Aspuru-Guzik, A. What is high-throughput virtual screening? A perspective from organic materials discovery. Annu. Rev. Mater. Res. 45, 195–216 (2015).
17. Meng, Y., Correll, N., Kramer, R. & Paik, J. Will robots be bodies with brains or brains with bodies? Sci. Robot. 2, eaar4527 (2017).
18. Calignano, F. et al. Overview on additive manufacturing technologies. Proc. IEEE 105, 593–612 (2017).
19. Li, L., Haghighi, A. & Yang, Y. A novel 6-axis hybrid additive-subtractive manufacturing process: design and case studies. J. Manuf. Process. 33, 150–160 (2018).
20. Eujin, P. et al. A study of 4D printing and functionally graded additive manufacturing. Assem. Autom. 37, 147–153 (2017).
21. Martínez, J., Hornus, S., Song, H. & Lefebvre, S. Polyhedral Voronoi diagrams for additive manufacturing. ACM Trans. Graph. 37, 129 (2018).
22. Chen, T., Mueller, J. & Shea, K. Integrated design and simulation of tunable, multi-state structures fabricated monolithically with multi-material 3D printing. Sci. Rep. 7, 45671 (2017).
23. Haghiashtiani, G. et al. 3D printed electrically-driven soft actuators. Extreme Mech. Lett. 21, 1–8 (2018).
24. Rosendo, A., Von Atzigen, M. & Iida, F. The trade-off between morphology and control in the co-optimized design of robots. PLoS One 12, e0186107 (2017).
25. Brodbeck, L., Hauser, S. & Iida, F. Morphological evolution of physical robots through model-free phenotype development. PLoS One 10, e0128444 (2015).
26. Wehner, M. et al. An integrated design and fabrication strategy for entirely soft, autonomous robots. Nature 536, 451–455 (2016).
27. Silva, F., Duarte, M., Correia, L., Oliveira, S. M. & Christensen, A. L. Open issues in evolutionary robotics. Evol. Comput. 24, 205–236 (2016).
28. Hornby, G. S., Lohn, J. D. & Linden, D. S. Computer-automated evolution of an X-band antenna for NASA’s Space Technology 5 mission. Evol. Comput. 19, 1–23 (2011).
29. Auerbach, J. E. & Bongard, J. C. Environmental influence on the evolution of morphological complexity in machines. PLoS Comput. Biol. 10, e1003399 (2014).
30. Lehman, J. et al. The surprising creativity of digital evolution: a collection of anecdotes from the evolutionary computation and artificial life research communities. Preprint at (2018).
31. Nolfi, S. & Floreano, D. Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines (MIT Press, Cambridge, 2000).
32. Lipson, H. & Pollack, J. B. Automatic design and manufacture of robotic lifeforms. Nature 406, 974–978 (2000).
33. Eiben, A. E., Kernbach, S. & Haasdijk, E. Embodied artificial evolution—artificial evolutionary systems in the 21st century. Evolut. Intell. 5, 261–272 (2012).
34. Eiben, A. E. & Smith, J. From evolutionary computation to the evolution of things. Nature 521, 476–482 (2015).
35. Rieffel, J., Mouret, J.-B., Bredeche, N. & Haasdijk, E. Introduction to the evolution of physical systems special issue. Artif. Life 23, 119–123 (2017).
36. Cheney, N., MacCurdey, R., Clune, J. & Lipson, H. Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding. In Proc. GECCO’13 (ed. Blum, C.) 167–174 (ACM, 2013).
37. Eiben, A. E. & Smith, J. E. Introduction to Evolutionary Computing (Springer, Berlin, 2003).
38. Stanley, K. O. Compositional pattern producing networks: a novel abstraction of development. Genet. Program. Evolv. Mach. 8, 131–162 (2007).
39. Doncieux, S., Bredeche, N., Mouret, J.-B. & Eiben, A. E. G. Evolutionary robotics: what, why, and where to. Front. Robot. AI 2, 4 (2015).
40. Rabitz, H. Control in the sciences over vast length and time scales. Quant. Phys. Lett. 1, 1–19 (2012).
41. Hansch, C. et al. Exploring QSAR: Fundamentals and Applications in Chemistry and Biology (American Chemical Society, Washington DC, 1995).
42. Mouret, J.-B. & Clune, J. Illuminating search spaces by mapping elites. Preprint at (2015).
43. Pugh, J. K., Soros, L. B. & Stanley, K. O. Quality diversity: a new frontier for evolutionary computation. Front. Robot. AI 3, 40 (2016).
44. Vassiliades, V., Chatzilygeroudis, K. & Mouret, J.-B. Using centroidal Voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm. IEEE Trans. Evolut. Comput. 22, 623–630 (2018).
45. Stanley, K. O., Clune, J., Lehman, J. & Miikkulainen, R. Designing neural networks through neuroevolution. Nat. Mach. Intell. (2019).
46. Figueiredo, M. A. et al. Adaptive sparseness for supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 25, 1150–1159 (2003).
47. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B 58, 267–288 (1996).
48. Cully, A. & Demiris, Y. Hierarchical behavioral repertoires with unsupervised descriptors. In Proc. GECCO'18 (ed. Aguirre, H.) 69–76 (ACM, 2018).
49. Kriegman, S., Cheney, N., Corucci, F. & Bongard, J. C. Interoceptive robustness through environment-mediated morphological development. Preprint at (2018).
50. Hauser, H., Ijspeert, A. J., Füchslin, R. M., Pfeifer, R. & Maass, W. Towards a theoretical foundation for morphological computation with compliant bodies. Biol. Cybern. 105, 355–370 (2011).
51. Eiben, A. E. et al. The triangle of life: evolving robots in real-time and real-space. In Proc. ECAL 2013 (eds Liò, P., Miglino, O., Nicosia, G., Nolfi, S. & Pavone, M.) 1056–1063 (MIT Press, Cambridge, 2013).
52. Cully, A., Clune, J., Tarapore, D. & Mouret, J.-B. Robots that can adapt like animals. Nature 521, 503–507 (2015).
53. Bhattacharya, M. Evolutionary approaches to expensive optimisation. Preprint at (2013).
54. Howard, D. A platform that directly evolves multirotor controllers. IEEE Trans. Evol. Comput. 21, 943–955 (2017).
55. Heijnen, H., Howard, D. & Kottege, N. A testbed that evolves hexapod controllers in hardware. In 2017 IEEE International Conference on Robotics and Automation (ICRA) 1065–1071 (IEEE, 2017).
56. Bäck, T., Keßler, L. & Heinle, I. Evolutionary strategies for identification and validation of material model parameters for forming simulations. In Proc. GECCO'11 (ed. Lanzi, P. L.) 1779–1786 (ACM, 2011).
57. Hüsken, M., Jin, Y. & Sendhoff, B. Structure optimization of neural networks for evolutionary design optimization. Soft Comput. 9, 21–28 (2005).
58. Rasmussen, C. E. & Williams, C. K. I. Gaussian Processes for Machine Learning (MIT Press, Cambridge, 2006).
59. Jin, Y. Surrogate-assisted evolutionary computation: recent advances and future challenges. Swarm Evolut. Comput. 1, 61–70 (2011).
60. Winkler, D. A. & Le, T. C. Performance of deep and shallow neural networks, the universal approximation theorem, activity cliffs, and QSAR. Mol. Inform. 36, 1600118 (2017).
61. Gaier, A., Asteroth, A. & Mouret, J.-B. Data-efficient design exploration through surrogate-assisted illumination. Evol. Comput. 26, 381–410 (2018).
62. Chatzilygeroudis, K. et al. Black-box data-efficient policy search for robotics. In Proc. IROS 2017 51–58 (IEEE, 2017).
63. Bongard, J., Zykov, V. & Lipson, H. Resilient machines through continuous self-modeling. Science 314, 1118–1121 (2006).
64. Zagal, J. C. & Ruiz-Del-Solar, J. Combining simulation and reality in evolutionary robotics. J. Intell. Robot. Syst. 50, 19–39 (2007).
65. Chatzilygeroudis, K. & Mouret, J.-B. Using parameterized black-box priors to scale up model-based policy search for robotics. In Proc. ICRA 2018 1–9 (IEEE, 2018).
66. Koos, S., Mouret, J.-B. & Doncieux, S. The transferability approach: crossing the reality gap in evolutionary robotics. IEEE Trans. Evolut. Comput. 17, 122–145 (2013).
67. Donoho, D. L. High-dimensional data analysis: the curses and blessings of dimensionality. AMS Math. Chall. Lect. 1, 32 (2000).
68. Wagy, M. D. & Bongard, J. C. Combining computational and social effort for collaborative problem solving. PLoS One 10, e0142524 (2015).
69. Jain, A. et al. Commentary: the materials project: a materials genome approach to accelerating materials innovation. APL Mater. 1, 011002 (2013).
70. Tolley, M. T. et al. A resilient, untethered soft robot. Soft Robot. 1, 213–223 (2014).
71. Rus, D. & Tolley, M. T. Design, fabrication and control of soft robots. Nature 521, 467–475 (2015).
72. Lipson, H. Challenges and opportunities for design, simulation, and fabrication of soft robots. Soft Robot. 1, 21–27 (2014).
73. Yeom, S.-W. & Oh, I.-K. A biomimetic jellyfish robot based on ionic polymer metal composite actuators. Smart Mater. Struct. 18, 085002 (2009).
74. Manti, M., Cacucciolo, V. & Cianchetti, M. Stiffening in soft robotics: a review of the state of the art. IEEE Robot. Autom. Mag. 23, 93–106 (2016).
75. Bauer, S. et al. 25th anniversary article: A soft future: from robots and sensor skin to energy harvesters. Adv. Mater. 26, 149–162 (2014).
76. Han, S.-T. et al. An overview of the development of flexible sensors. Adv. Mater. 29, 1700375 (2017).
77. Miriyev, A., Stack, K. & Lipson, H. Soft material for soft actuators. Nat. Commun. 8, 596 (2017).



Acknowledgements

D.H., D.F.K., P.V. and D.W. would like to acknowledge Active Integrated Matter, one of CSIRO’s Future Science Platforms, for funding this research. J.-B.M. is funded by the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 637972, project ‘ResiBots’).

Author information

All authors contributed to forming the concept of multi-level evolution and to the writing of the Perspective. D.H., J.-B.M., P.V., D.W. and A.E. contributed on the evolutionary side, and D.F.K. and D.W. on the materials side.

Competing interests

The authors declare no competing interests.

Correspondence to David Howard.
