Homeostasis and soft robotics in the design of feeling machines

Abstract

Attempts to create machines that behave intelligently often conceptualize intelligence as the ability to achieve goals, leaving unanswered a crucial question: whose goals? In a dynamic and unpredictable world, an intelligent agent should hold its own meta-goal of self-preservation, like living organisms whose survival relies on homeostasis: the regulation of body states aimed at maintaining conditions compatible with life. In organisms capable of mental states, feelings are a mental expression of the state of life in the body and play a critical role in regulating behaviour. Our goal here is to inquire about conditions that would potentially allow machines to care about what they do or think. Under certain conditions, machines capable of implementing a process resembling homeostasis might also acquire a source of motivation and a new means to evaluate behaviour, akin to that of feelings in living organisms. Drawing on recent developments in soft robotics and multisensory abstraction, we propose a new class of machines inspired by the principles of homeostasis. The resulting machines would (1) exhibit equivalents to feeling; (2) improve their functionality across a range of environments; and (3) constitute a platform for investigating consciousness, intelligence and the feeling process itself.

Main

We propose the design and construction of a new class of machines organized according to the principles of life regulation, or homeostasis. These machines have physical constructions—bodies—that must be maintained within a narrow range of viability states and thus share some essential traits with all living systems. The fundamental innovation of these machines is the introduction of risk-to-self. Rather than up-armouring or adding raw processing power to achieve resilience, we begin the design of these robots by, paradoxically, introducing vulnerability.

Living organisms capable of mentation are fragile vessels of pain, pleasure and points in between. It is by virtue of that fragility that they gain access to the realm of feeling. The main motivation for this project is a set of theoretical contributions to the understanding of biological systems endowed with feeling. Damasio1 has provided a rationale for the emergence of feelings from the physiology of life regulation. Feelings are intrinsically about something: making it possible for an organism to gravitate towards states of at least good and preferably optimal life regulation, thus maintaining life and extending it into the future. We must add that, in our conceptualization, feelings are of necessity conscious, and play a critical role in the machinery of consciousness.

For the homeostatic machines we envision, behaviours can carry real consequences. The world affords risks and opportunities, not in relation to an arbitrary reward or loss function, but in relation to the continued existence of the machine itself and, more to the point, to the quality of feeling that is the harbinger of the good or bad outcome relative to survival. Rewards are not rewarding and losses do not hurt unless they are rooted in life and death. True agency arises when the machine can take a side in this dichotomy, when it acts with a preference for (or, seen from a different angle, makes a reliable prediction of2) existence over dissolution. A robot engineered to participate in its own homeostasis would become its own locus of concern. This elementary concern would infuse meaning into its particular information processing3. A robot operating on intrinsically meaningful representations might seek especially intelligent solutions to the tasks set before it—that is, augment the reach of its cognitive skills.

We bring a biological perspective to the effort to produce machines with an artificial equivalent of feeling. For this effort, we will rely on recent developments from two fields: materials science and computer science. Turning first to new materials, we note that the past decade has witnessed the birth of a sub-discipline, soft robotics. This was enabled by new discoveries in the design and construction of soft ‘tissues’ embedded with electronics, sensors and actuators. These artificial tissues are flexible, stretchable and compressible, and they bounce back resiliently—in short, they are naturally compliant with their environments. Combined with conventional parameters such as temperature and energy level, soft materials potentially provide a rich source of information on body and environment.

The second development concerns statistical machine learning algorithms for the creation of abstract representations. New computational techniques may allow us to bring maps of the inner and outer worlds into register. There has been enormous attention paid to the capabilities of deep learning, but here we focus on one particular application of the technology: its ability to bridge across sensory modalities, including not only exteroception but also the modalities concerned with internal organism states—interoception and proprioception. This advance provides a crucial piece of the puzzle of how to intertwine a system’s internal homeostatic states with its external perceptions and behaviour.

Self-interest as fount of creativity

Today’s robots lack feelings. They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space. They also lack selfhood and ‘aboutness’. All these shortcomings are related. It is true that present-day intelligent machines perform extremely well in narrow domains, but they fare poorly in unconstrained interactions with the real world. Our approach diverges from traditional conceptions of intelligence that emphasize outward-directed perception and abstract problem solving. We regard high-level cognition as an outgrowth of resources that originated to solve the ancient biological problem of homeostasis. Homeostasis manifests as self-interest and inspires creative behaviour in complex environments, natural and social. Current machines exhibit some intelligence but no sense-making, defined as an agent’s “meaningful relation to the environment”4. We propose that meaning begins to emerge when information processing carries homeostatic consequences.

In Shannon’s5 original formalization of information as reduction in uncertainty about the contents of a message, the problem of the message’s meaning was neatly set aside. Recently, Kolchinsky and Wolpert3 have proposed a formal definition of semantic, or meaningful, information as that subset of Shannon information that is related to a system’s future viability states. They take a causal-counterfactual approach to identify semantic information by calculating how the system’s future viability would have been affected, had that information been different. As the authors note, the success or failure of their definition hinges on the selected measure of viability—which in their case is negative entropy, chosen for its thermodynamic, if not necessarily biological, interpretability.
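
To make the causal-counterfactual recipe concrete, consider a minimal sketch (our illustration, not Kolchinsky and Wolpert’s code; the one-dimensional forager is hypothetical, and survival probability stands in for their negative-entropy viability measure): a bit of sensory information is scored by how much the agent’s future viability drops when that information is scrambled.

```python
import random

def episode(scramble, steps=30):
    """One life of a one-dimensional forager. Its sensor reports where food
    lies; scrambling the sensor is the counterfactual intervention."""
    energy, pos, food = 5.0, 0, random.randint(-5, 5)
    for _ in range(steps):
        obs = random.randint(-5, 5) if scramble else food  # intervene on the channel
        pos += (obs > pos) - (obs < pos)                   # move toward reported food
        energy -= 0.2                                      # metabolic cost per step
        if pos == food:                                    # eat, then food reappears
            energy += 1.0
            food = random.randint(-5, 5)
        if energy <= 0:
            return 0.0                                     # dissolution: not viable
    return 1.0                                             # survived the horizon

def viability(scramble, trials=2000):
    return sum(episode(scramble) for _ in range(trials)) / trials

# Semantic information ~ the drop in future viability under the intervention.
print(viability(scramble=False) - viability(scramble=True))
```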

Unlike other physical systems, living bodies are subject to perennial risk and decay, resulting from their own regular operations of life. Nothing equivalent holds for a disembodied algorithm or for current robots, whose physical existence is a given and, for practical purposes, guaranteed. In our view, sense data become meaningful when they can be connected to the maintenance and integrity of the sensing agent—that is, to the organism’s package of regulatory operations that contributes to homeostasis. Outside that biological framework, sensory processing unattached to a vulnerable body ‘makes no sense’.

Moving beyond embodiment

Our approach to building a machine with something akin to feeling takes place in a historical context of autonomous embodied systems (reviewed in refs. 6,7). Norbert Wiener’s cybernetics placed great emphasis on feedback-based control to produce and maintain states within a desired range. W. Ross Ashby’s felicitously named homeostat demonstrated the emergence of self-restoring stability. Ashby’s device coupled electrical and magnetic sensors and effectors in such a way that, when disturbed, it executed a random parameter search until equilibrium was restored (Fig. 1; see refs. 8,9). Behaviour-based robots, perhaps originating with Grey Walter’s tortoises10,11 and brought to the fore by Rodney Brooks’s subsumption architecture12, relied on the embodiment of the agents—the fact that the AI had a physical body in continuous interaction with the environment—as a crucial source of their ability to behave intelligently. Lipson, Bongard and colleagues have since extended this line of research into the evolution of robots that model their own morphology in order to execute behavioural goals13. Other work has produced agents that can regulate their own susceptibility to environmental cues based on abstract internal variables14. The Cyber Rodent project15 has explored the evolution of neural network controllers to support ‘mating’ and ‘foraging’ behaviour of robots that seek out conspecifics and battery packs in the environment. In simulation experiments making explicit reference to homeostasis, phototactic robots used ‘neural plasticity’ to restore adaptive behaviour following visual field inversion16.

Fig. 1: Ashby’s homeostat of 1948 exhibited self-restoring stability.

Composed of four identical electrical-magnetic modules, each exerting effects on the others, the system executed a search for a globally stable state when the voltage (V) of one module exceeded some critical value of error (e) from the null state. Reproduced from ref. 9, Taylor & Francis.
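
The principle is easy to state computationally. The following toy sketch is our own, with arbitrary constants; a uniform random re-draw of the coupling matrix stands in for Ashby’s stepping uniselector switches. Whenever any unit’s output leaves its viable band, the parameters are blindly re-drawn, and the search stops when the dynamics settle near the null state.

```python
import random

N, BAND = 4, 1.0  # four coupled modules; each output must stay within +/- BAND

def step(x, W, damping=0.9):
    """One update of the linearly coupled units."""
    return [damping * sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]

def reselect():
    """Random parameter search: draw a fresh coupling matrix."""
    return [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

x = [random.uniform(-1, 1) for _ in range(N)]
W = reselect()
for t in range(100000):
    x = step(x, W)
    if any(abs(v) > BAND for v in x):       # essential variable out of range:
        W = reselect()                      # the 'uniselector' trips
        x = [random.uniform(-1, 1) for _ in range(N)]
    elif all(abs(v) < 1e-3 for v in x):     # settled near the null state
        print(f"globally stable configuration found at step {t}")
        break
```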

Some preliminary steps have been taken to develop ‘emotional circuits’ to influence robotic goal selection17. Others have constructed robots capable of emotional expressions (typically through facial movements) to facilitate human–robot interaction18, but these motivation schedules and emotional performances have not been rooted in the machine’s own welfare, let alone well-being. Homeostatic-like features, if at all present, were implemented from the outside in: agents were instructed to maximize, or keep within a set range, certain arbitrary values. Unappreciated by the robot itself was the fact that if these values veered to an extreme then its own existence would be jeopardized. The ‘emotions’ and ‘values’ underlying behaviours were not relevant to the continuance of the system itself. Ultimately, these systems lacked a viability constraint. All robots of this class would be described by the philosopher Hans Jonas as biologically indifferent19 and, correspondingly, affectless. Despite behaving with seeming purpose, what these machines did—and this is of the essence—did not matter to the systems themselves.

In brief, the presence of a body serving as an aid or scaffold to problem-solving does not suffice to generate meaning. Nor does calculating an abstract internal parameter and labelling it ‘emotion’ elevate the parameter to this suggestive title. Di Paolo20 has criticized the programme of embodied robotics as still missing an organism-level logic: “emotions don’t come in boxes.” That is why we advocate a transition from ‘embodied artificial intelligence’ to ‘homeostatically motivated artificial intelligence’. Intelligence has been defined as21 “an agent’s ability to achieve goals in a wide range of environments.” But this definition prompts a follow-up question: whose goals? Does an agent that myopically follows orders to the extent that it endangers itself and compromises its ability to carry out future orders deserve to be called intelligent?

Living systems, on the other hand, have the property of selfhood. They continuously construct and maintain themselves against the natural tendency toward dissolution and decay. “This world is at once inviting and threatening”, as Jonas puts it19. “Appetition is the form which the basic self-concern of all life assumes”. Selves, as a condition of existence, must continuously enforce and mend the boundary between self and environment. In the closely related concept of autopoiesis22, systems continuously construct themselves and define their own relations to the environment. Damasio has traced a gradual progression of self-processes, from protoself to core self to autobiographical self, with each advancing stage explained by specific brain–body architectures and the processes they execute. Running throughout the progression of self-processes is the theme of homeostatic life regulation (see, for example, ref. 1).

Ultimately, we aim to produce machines that make decisions and control behaviours under the guidance of feeling equivalents. We envision these machines achieving a level of adaptiveness and resilience beyond today’s ‘autonomous’ robots. Rather than having to hard-code a robot for every eventuality or equip it with a limited set of behavioural policies, a robot concerned with its own survival might creatively solve the challenges that it encounters. These robots would interact with and learn about the environment from an internally generated perspective of self-preservation. Basic goals and values would be organically discovered, rather than being extrinsically designed. These survival-related behaviours could then be yoked to useful human purposes.

Enabling technologies

In order to realize this proposal for a new class of homeostatic machines endowed with equivalents to feelings, we will integrate recent developments from two fields: soft robotics and multisensory abstraction.

Soft robotics

The component structures of living organisms are themselves living and carry their own homeostatic imperatives. Living organisms are composed of living organs and tissues that are in turn composed of living cells. Each level participates in its own self-maintenance, sensing and signalling the state of its life process. These nested levels of material self-concern have not yet found expression in machines. At the intermediate level of tissues, however, the latest developments in soft materials should allow us to design, to some degree, ‘imitations’ of nature.

Why soft materials? The majority of today’s robots are constructed with materials of convenience. What is so different about blob-bot as compared to metal-bot? Consider the ‘life’ of a piece of sheet metal bent into a boxy robot. In general, metal is so durable in comparison to its niche—our human environment—that its integrity and viability are not a consideration. This is why metals and hard plastics are ubiquitous in robotics: precisely so that, in most cases, material integrity can be safely assumed.

Yet durability comes at a cost. An invulnerable material has nothing to say about its well-being. It rarely encounters existential threats. If we imagine strain gauges embedded throughout a hard surface, they would spend most of their time reporting ‘no change’. The hard knocks of life accumulate until finally a catastrophic failure occurs, and the sensors cry out in unison. The rigid robot presents a monolithic and implacable face to the world, unfeeling by design, its function decoupled from its constitution.

Soft robots, on the other hand, more readily enter into a graceful and sensitive coupling with the environment (Fig. 2a; see reviews23,24,25,26,27,28). Beginning with vulnerability as a design principle for robots, we propose to extend it down to the very stuff out of which the robot is made. Continuing the example we advanced earlier, the same strain gauges embedded in the volume of a soft material can localize forces and signal graded disruptions in body surface continuity, such as those caused by punctures and tears. As a realized example, Markvicka et al.29 fabricated a soft electronic ‘skin’ that localizes and can trigger responses to damage. They impregnated an elastomer base with droplets of liquid metal that, on rupture, cause changes in electrical conductivity across the damaged surface.
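
As a toy version of such damage signalling (ours, not the circuit of ref. 29; the grid size and threshold are arbitrary assumptions), a map of local conductivity readings can be compared against an intact baseline to localize a rupture and grade its severity:

```python
import numpy as np

def localize_damage(reading, baseline, tol=0.15):
    """Return (cell, severity) for grid cells whose conductivity departs
    from the intact baseline by more than tol (fractional change)."""
    change = np.abs(reading - baseline) / baseline
    return [((int(i), int(j)), float(change[i, j]))
            for i, j in np.argwhere(change > tol)]

baseline = np.ones((4, 4))                 # intact skin: uniform conductivity
reading = baseline.copy()
reading[2, 1] = 0.3                        # a puncture disrupts the local traces
print(localize_damage(reading, baseline))  # -> [((2, 1), 0.7)]
```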

Fig. 2: Artificial and natural soft materials.

a, Soft electronics can be embedded on flexible and stretchable substrates. LM, liquid metal. b, Soft robotic effectors grip by conforming to the object. c, Human skin contains dense embeddings of sensors and effectors for the maintenance of its own integrity. Reproduced from ref. 23, AAAS (a, top three rows); ref. 29, Wiley (a, bottom row); and ref. 24, Elsevier (b). Credit: National Cancer Institute (c).

This is not to say that soft materials are necessarily weaker or less resistant to mechanical damage than hard materials30. Soft matter admits of greater complexity and of more ways to regulate and be regulated. Soft materials accommodate themselves to objects rather than shoving objects aside. Under stress, they deform without breaking, then enter dysfunction or gradual decline instead of suffering sudden catastrophic failures. In many cases, soft materials can self-heal, regaining much, if not all, of their pre-injury structural and electrical properties (reviewed in refs. 31,32). In a coup of engineering, Cao et al.33 demonstrated an electronic gel skin that can self-heal after a cut (via ionic interactions), can sense touch, pressure and strain, and can function in wet or dry conditions.

Soft materials have continuously varying morphology, with more points of contact, control and force dispersion. They densely sample the environment across multiple modalities, including pressure, stretch, temperature and energy level, and return rich information about the evolving interaction. While not sufficient to generate feeling on its own, soft matter is more likely to naturally create the kind of relationship that, we expect, admits of an approximation to feeling.

As an example, consider the robotic octopus arm constructed of silicone and actuated by tendons34 (Fig. 2b). It executes an enveloping, coiled grip on an irregular surface not by calculating an analytical solution of applied torques to each of its microscopic ‘joints’, but rather by conforming itself to the object. To some extent, it holds by allowing itself to be held. Its grip on the world is achieved not by high-level object cognition but by its own material properties. To a considerable extent, so is feeling in biological creatures.

The sensors of a living organism are themselves alive and vulnerable to the conditions they are sensing. The retina is not an indifferent piece of silicon, but a curved sheet of photoreceptor cells resting on a bed of capillaries, all bathed in a saline jelly, surrounded by pain sensors and defended by an ultra-rapidly deploying physical barrier: the eyelid.

To take another example from biology, consider the near-miraculous material called skin (Fig. 2c). In its totality, the skin is the largest of the viscera in the human organism, and it contains dense embeddings of sensory, motor, nutrient exchange and self-repair systems. Furthermore, the individual cells comprising the skin not only contain their own life-maintenance systems, as the cells of all other viscera do, but also register itch, pain, temperature, stretch, vibration and pressure, thus constituting the interface between self and world. And yet the skin is exquisitely vulnerable. A tiny insect’s jaws can breach the skin and create a large disturbance to the organism. In fact, this literal hair-trigger sensitivity is one of the ‘purposes’ of skin. The skin’s registration and amplification of signals, whether of attack, a loved one’s caresses or the sun’s rays, provides critical information regarding the ongoing governance of life. Things can go well or very badly for soft materials, in more ways, and in more interesting ways, than for hard matter.

Softness can be computationally modelled as a mesh or lattice of sufficiently small components interacting in sufficiently large numbers. The main challenge is to simulate the material at a resolution high enough for softness to emerge while remaining computationally tractable. Efficient algorithms have been developed to model the dynamics of soft robots35,36, even when they are composed of heterogeneous materials37. Evolutionary algorithms have also been applied to the problem of generating soft robot morphologies and the corresponding motor patterns38. The vast expansion of degrees of freedom of movement in soft robots parallels a vast expansion of sensitivity and control. The ensuing complexity of simulation is well worth it, because soft materials present a larger opportunity space for maintenance and upkeep—a larger stage for a model of homeostasis and feelings to play out on.
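
A minimal sketch of that lattice idea, under our own simplifying assumptions (point masses of unit mass, damped Hookean springs, explicit Euler integration; this is a toy, not the solvers of refs. 35,36,37):

```python
import numpy as np

def step(pos, vel, pairs, rest, k=50.0, damping=2.0, dt=0.01, gravity=-9.8):
    """Advance a 2D mass-spring lattice by one explicit-Euler step.
    pos, vel: (n, 2) arrays; pairs: spring endpoint indices; rest: rest lengths."""
    force = np.zeros_like(pos)
    force[:, 1] += gravity                       # gravity on every unit mass
    for (i, j), r0 in zip(pairs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-9
        f = k * (length - r0) * d / length       # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = (vel + dt * force) * (1.0 - damping * dt)  # integrate, then damp
    pos = pos + dt * vel
    pos[:, 1] = np.maximum(pos[:, 1], 0.0)       # crude floor contact
    return pos, vel

# A small block with cross-bracing springs, dropped onto the floor:
pos = np.array([[0, 1], [1, 1], [0, 2], [1, 2]], dtype=float)
vel = np.zeros_like(pos)
pairs = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]
rest = [float(np.linalg.norm(pos[j] - pos[i])) for i, j in pairs]
for _ in range(500):
    pos, vel = step(pos, vel, pairs, rest)
print(pos)  # the block has settled, deformed slightly under its own weight
```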

We would also call attention to the interesting case of biohybrid systems, which integrate conventional or soft robotic materials with engineered cells (reviewed in ref. 39). Muscle tissues may be integrated into miniature robots for small-scale actuation, or bacterial cells into hydrogels for long-term sensing and computing40. The caveat, however, is that adding living cells or tissues to more conventional materials may muddy the waters when the goal is to understand the principles behind feeling machines. Putting chunks of biological matter in machines may very well get us some softness and feeling ‘for free’, without our explicitly modelling them.

Computing cross-modal associations

The dream of building a robot with a homeostatic self-representation would present a complex exercise in machine learning, but it could draw on neuroscience facts and theory—for example, on a neuro-architectural framework originally proposed in 198941,42. According to this framework, sensory inputs coalesce into abstract concepts by being progressively remapped in a neural hierarchical fashion, with each higher level registering more complex features. Nodes in each level also re-instantiate their lower-level constituent features by top-down projections. This convergence–divergence architecture can form representations that bridge across the sensory modalities43,44.

There is an intriguing correspondence between the biologically implemented convergence–divergence architecture and some variants of deep neural networks. Deep Boltzmann machines45 (DBMs; introduced in refs. 46,47), for example, learn hierarchical representations of sensory inputs in a stepwise manner, with increasingly complex internal features acquired as the hierarchy is climbed. DBMs are also generative in that they attempt to reconstruct, at each level back down the hierarchy, the learned features and ultimately the original pattern of sensory energy corresponding to the stimulus.

Visual recognition of written digits was one of the earliest practical uses of neural networks48, with auditory speech recognition of digits following somewhat later49. Today’s networks can learn representations that bridge across the auditory and visual modalities to perform cross-modal recognition. Ngiam et al.50 used DBMs and autoencoders to perform audiovisual speech recognition, training on auditory data to recognize the corresponding videos of lip movements. Recognition of objects across the auditory and visual modalities followed soon after51. Curiously, audiovisual-invariant object representations were discovered in the human brain around the same period52. Other modalities have since joined the algorithmic fray, including the significant combination of vision with motor and touch modalities53. Keeping pace with the algorithms, the human brain has been mapped for cross-modal correspondences among vision and touch54, and among vision, hearing and touch55.
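
A minimal sketch of such cross-modal bridging, under our own assumptions (a shared-bottleneck autoencoder in PyTorch rather than the DBMs of ref. 50; the layer sizes and the random stand-in data are arbitrary): two modality-specific encoders converge on one code, which then diverges back out through two decoders, echoing the convergence–divergence architecture.

```python
import torch
import torch.nn as nn

class CrossModalAE(nn.Module):
    """Encoders converge on a shared code; decoders diverge from it to
    reconstruct both modalities."""
    def __init__(self, dim_a=40, dim_v=64, dim_shared=16):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 32), nn.ReLU(), nn.Linear(32, dim_shared))
        self.enc_v = nn.Sequential(nn.Linear(dim_v, 32), nn.ReLU(), nn.Linear(32, dim_shared))
        self.dec_a = nn.Sequential(nn.Linear(dim_shared, 32), nn.ReLU(), nn.Linear(32, dim_a))
        self.dec_v = nn.Sequential(nn.Linear(dim_shared, 32), nn.ReLU(), nn.Linear(32, dim_v))

    def forward(self, audio, video):
        z = self.enc_a(audio) + self.enc_v(video)  # convergence zone
        return self.dec_a(z), self.dec_v(z)        # divergent reconstruction

model = CrossModalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
audio, video = torch.randn(128, 40), torch.randn(128, 64)  # stand-in data
for _ in range(200):
    rec_a, rec_v = model(audio, video)
    loss = nn.functional.mse_loss(rec_a, audio) + nn.functional.mse_loss(rec_v, video)
    opt.zero_grad()
    loss.backward()
    opt.step()
# Zeroing one modality's input during training (not shown) pushes the shared
# code to support cross-modal recall: reconstructing video from audio alone.
```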

Crucially, cross-modal processing is not limited to combinations of external (exteroceptive) sensory modalities, but can also accommodate the internal (proprioceptive and interoceptive) modalities. Damasio56 has proposed that a key to generating a self-perspective is the integration of exteroceptive information (from visual cortices, for example) with information from sensory portals (such as the frontal eye fields) and the musculoskeletal frame, which provides a stable anchor for the evolving sensory processing.

We suggest that deep neural networks are poised to tackle the next great challenge: building correspondences between inner space and outer space, between internal homeostatic data and external sense data. A machine constructed of soft and sensitive tissues and in charge of its own self-regulation will have a wealth of internal data on which to draw to inform its plans and perceptions. It has been proposed that the feeling of existence itself, or of conscious presence, may be due to predictive coding of internal sensations57. The homeostatic robot will process information with the aid of something akin to feeling. How do the colour, taste and texture of, say, an apple systematically associate with changes in the ongoing management of life? All of which is to say that the question ‘how does this make you feel?’ might be asked of machines.

Questions and objections

We already have our hands full teaching robots to drive our cars and sweep our floors. Why add new failure modes? Why worry about the Roomba catching a cold? The prospect of adding vulnerability and self-interest to robots provokes a set of common concerns. We attempt to address them here.

Reward, reinforcement and overhead costs of homeostasis

The addition of physical vulnerability opens the robot’s behaviour to new richness. We use the body to implicitly compute a high-dimensional reward function, for which an explicit analytical solution is out of reach. Reward functions are employed in reinforcement learning (RL), a computational framework that originated from the behaviourist tradition of psychology. In computational studies, RL can be used to train a system to perform a complex, multi-step behaviour by designing an appropriate reward function for it to maximize. A chief difficulty is to define the reward function with enough specificity to bring about the intended goal state. Another difficulty is the requirement of enormous amounts of experiential data—on the order of millions of trials and errors—to learn complex behaviour. Machines can acquire vast experience in accelerated computer simulations, but this is not possible for organisms constrained by the material and temporal limits of physical reality.

We regard RL as a powerful tool for certain classes of problems, but RL in general should not be confused or identified with a homeostatic system architecture. We specify a particular target for optimization (homeostatic well-being) and build in a necessary linkage to the physical integrity of the body. In so doing, we hope to reframe terms used by RL practitioners such as reward, punishment and motivation, which, for the most part, lack grounding in biological and phenomenological reality.

On this point, we are encouraged by a strain of computational work58,59,60 that builds bridges between organism homeostasis, emotions and RL (see ref. 61). Keramati and Gutkin60 mathematically model RL as the traversal of a multidimensional homeostatic space, in which each dimension corresponds to a physiological parameter and has an optimal value. Reward, in this perspective, is not identified with bonbons or dollars or videogame points, but rather with anything that moves the agent towards the location in homeostatic space that minimizes distance to the various optima. As a reviewer of Keramati and Gutkin’s work put it, this makes “reinforcement learning accountable to homeostatic imperatives.” Recent work has extended this homeostatic RL framework to address high-level cognitive, social and economic behaviours62, and considered its relation to active inference63.
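
A minimal sketch of that formulation, with hypothetical setpoints and the exponents left as free parameters (after the drive function of ref. 60, not their full model): reward is computed as drive reduction, so the same outcome is rewarding or punishing depending on the agent’s internal state.

```python
import numpy as np

SETPOINT = np.array([37.0, 0.8, 0.5])  # illustrative optima: temperature, energy, hydration

def drive(state, n=4, m=3):
    """Distance from the homeostatic optimum: D(H) = (sum_i |h*_i - h_i|^n)^(1/m)."""
    return np.sum(np.abs(SETPOINT - state) ** n) ** (1.0 / m)

def reward(state, next_state):
    """An outcome is rewarding exactly insofar as it reduces the drive."""
    return drive(state) - drive(next_state)

meal = np.array([0.0, 0.3, 0.0])        # the same food in two internal contexts
hungry = np.array([37.0, 0.2, 0.5])
print(reward(hungry, hungry + meal))    # > 0: moves energy toward its setpoint
sated = np.array([37.0, 0.8, 0.5])
print(reward(sated, sated + meal))      # < 0: overshooting is punished
```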

The well-behaved robot

We aim to build robots with a sense of self-preservation. What could possibly go wrong with such an endeavour? Stories about robots often end poorly for their human creators. We seem to be unsettled by the moral status of artefacts imbued with lifelike qualities. If a genuinely feeling machine’s existence were threatened by humans, would it necessarily respond in violent self-defence? We suggest not, provided that, in addition to having access to its own feelings, it were able to know about the feelings of others—that is, provided it were endowed with empathy. Some of the problems that arise from giving robots feelings might be solved by giving robots more feelings, rather than by suppressing the ones they have.

We subscribe to a naturalistic account of morality in which behaviours are guided by moral deliberation. Johnson64 argues that moral deliberation involves imagining the consequences of our actions on ourselves and others, and consciously feeling those consequences. Levy65 goes further, arguing that consciousness is required for moral responsibility. A necessary precondition for the attribution of moral responsibility is awareness of the facts and conditions giving our actions their moral significance. Feelings are responsible for introducing in the mind the relevant facts of the body. Assuming a robot already capable of genuine feeling, an obligatory link between its feelings and those of others would result in its ethical and sociable behaviour. As a starting point, we propose two provisional rules for a well-behaved robot: (1) feel good; (2) feel empathy.

The machine is capable of feeling well or ill, but in accordance with rule 1 pursues homeostatic well-being—that is, feeling good. The second rule, enforcing empathy, would make a robot feel the pleasure and pain of others as its own (though not necessarily at full strength). The two rules cycle into and reinforce each other. Empathy acts as a governor on self-interest and as a reinforcer of pro-social behaviour. Actions that harm others will be felt as if harm occurred to the self, whereas actions that improve the well-being of others will benefit the self.
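
One way to make the interlocking concrete is a toy valence function (entirely our illustration; the empathy weight is a hypothetical free parameter) in which the machine’s felt state folds in the attenuated valences of others:

```python
def felt_valence(own, others, empathy=0.5):
    """Rule 1: the agent acts to maximize this quantity. Rule 2: the valence
    of others enters it with the same sign, attenuated by the empathy weight."""
    return own + empathy * sum(others)

# Harming another for a small private gain feels bad on balance...
print(felt_valence(own=+0.2, others=[-1.0]))   # approx. -0.3
# ...while helping at a small private cost feels good.
print(felt_valence(own=-0.2, others=[+1.0]))   # approx. +0.3
```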

We are certain that our two rules would eventually get caught in unexpected tangles, but such is the nature of moral decision-making, which is characterized by difficult choices, contingency on circumstances, resistance to rational maximization and guidance by strong yet inarticulable or hidden feelings. What people do not typically do is get stuck in infinite loops of ‘moral’ reasoning, or cause havoc by following strict rules to a bloody end.

Sufficiently advanced AIs have been characterized, unfairly we think, as necessarily devious, paranoid and acquisitive. Their “basic AI drives”66 of self-protection would entail a will to power. But this is not an empirically justified entailment of intelligence. We can just as easily imagine a fully enlightened and withdrawn ‘ascetic’ AI, as we can a demonic maximizer. Our present understanding of the relationship between intelligence and personality cannot yet adjudicate these positions67.

We hope for this section to serve as a preliminary prompt to necessary future discussions on robot ethics. Feeling robots should implement moral deliberation in the way that humans do: as feeling-oriented problem-solving, taking into account the feelings of others. Robots might have even fewer impediments to moral behaviour than humans do—for example, those impediments we inherit from history, culture or biological impulses, which can be engineered out of a robot. Morally perfect behaviour, in robots or otherwise, is likely to be incoherent or unattainable, but moral progress is possible.

But is it the real thing?

Would a homeostatic robot be, at best, a simulacrum, replicating some of the behaviours and mechanisms of living organisms but missing a key ingredient of the real thing? Is the ‘wet’ biochemistry of cellular tissue required for authentic homeostasis and for the mental experience we call feeling? These are important and open questions (Box 1). Can all mental phenomena be reduced to information processing, implementable on any arbitrary computing medium? A computer simulation of a hurricane won’t get us wet68, but might simulation of thought, itself being information processing, result in real thinking?

Here we must entertain the possibility that true feeling—the sort of mental state that humans experience when we feel—may indeed be restricted to wet biological tissue and may not be realizable on non-living artefacts. The wetness hypothesis predicts that the potentially crucial mechanisms behind feeling are impossible to realize in alternative materials, due to their lack of the physicochemical properties necessary to replicate the causal chain of events. Models and simulations of the crucial mechanism would be useful maps of the territory, but would not usually replicate the causal structure in a way that is grounded in reality.

Just as importantly, the possession of genuine feeling, however defined or verified, may not be necessary to the practical goal of enhancing robot behaviour. At present, the wetness hypothesis for genuine feelings is untested and untestable. The realness of a thing is tricky to establish when the thing is subjective. Testability may ultimately be a distraction from the task currently at hand, which is to model the thing better and gain an additional understanding of it. As models continue to improve, it is conceivable that they would become instantiations of the modelled phenomena. At some level of detail the map might become indistinguishable from the territory.

Conclusion

We suggest that the artificial agent’s survival should be implemented as a problem to be solved in its own right. The machine’s constitution and viability states have not yet been exploited as an internal source of rich and highly relevant data. The incorporation of soft tissues embedded with sensors and effectors into robots will provide a source of multimodal homeostatic data. Cross-modal algorithms will build abstract associations between the objects of the world and their multidimensional effects on homeostasis. Homeostatic robots might reap behavioural benefits by acting as if they have feeling. Even if they would never achieve full-blown inner experience in the human sense, their properly motivated behaviour would result in expanded intelligence and better-behaved autonomy.

References

1. Damasio, A. The Strange Order of Things: Life, Feeling, and the Making of Cultures (Pantheon, 2018).
2. Friston, K. The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138 (2010).
3. Kolchinsky, A. & Wolpert, D. H. Semantic information, autonomous agency and non-equilibrium statistical physics. Interface Focus 8, 20180041 (2018).
4. Kiverstein, J. D. & Rietveld, E. Reconceiving representation-hungry cognition: an ecological-enactive proposal. Adapt. Behav. 26, 147–163 (2018).
5. Shannon, C. E. The mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948).
6. Anderson, M. L. Embodied cognition: a field guide. Artif. Intell. 149, 91–130 (2003).
7. Froese, T. & Ziemke, T. Enactive artificial intelligence: investigating the systemic organization of life and mind. Artif. Intell. 173, 466–500 (2009).
8. Seth, A. K. & Tsakiris, M. Being a beast machine: the somatic basis of selfhood. Trends Cogn. Sci. 22, 969–981 (2018).
9. Cariani, P. A. The homeostat as embodiment of adaptive control. Int. J. Gen. Syst. 38, 139–154 (2009).
10. Walter, W. G. An imitation of life. Sci. Am. 182, 42–45 (1950).
11. Holland, O. E. in Artificial Life V: Proceedings of the 5th International Workshop on the Synthesis and Simulation of Living Systems (eds Langton, C. G. & Shimohara, K.) 34–44 (MIT Press, 1997).
12. Brooks, R. A. New approaches to robotics. Science 253, 1227–1232 (1991).
13. Bongard, J. & Lipson, H. Evolved machines shed light on robustness and resilience. Proc. IEEE 102, 899–914 (2014).
14. Parisi, D. Internal robotics. Conn. Sci. 16, 325–338 (2004).
15. Doya, K. & Uchibe, E. The Cyber Rodent project: exploration of adaptive mechanisms for self-preservation and self-reproduction. Adapt. Behav. 13, 149–160 (2005).
16. Di Paolo, E. Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions. Proc. Simul. Adapt. Behav. 440–449 (2000).
17. Parisi, D. & Petrosino, G. Robots that have emotions. Adapt. Behav. 18, 453–469 (2010).
18. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 59, 119–155 (2003).
19. Jonas, H. The Phenomenon of Life: Toward a Philosophical Biology (Northwestern Univ. Press, 1966).
20. Di Paolo, E. in Dynamical Systems Approach to Embodiment and Sociality (eds Murase, K. & Asakura, T.) 19–42 (Advanced Knowledge International, 2003).
21. Legg, S. & Hutter, M. Universal intelligence: a definition of machine intelligence. Minds Mach. 17, 391–444 (2007).
22. Maturana, H. R. & Varela, F. J. Autopoiesis and Cognition: The Realization of the Living (Springer, 1991).
23. Rogers, J. A., Someya, T. & Huang, Y. Materials and mechanics for stretchable electronics. Science 327, 1603–1607 (2010).
24. Kim, S., Laschi, C. & Trimmer, B. Soft robotics: a bioinspired evolution in robotics. Trends Biotechnol. 31, 287–294 (2013).
25. Majidi, C. Soft robotics: a perspective—current trends and prospects for the future. Soft Robot. 1, 5–11 (2014).
26. Lu, N. & Kim, D.-H. Flexible and stretchable electronics paving the way for soft robotics. Soft Robot. 1, 53–62 (2014).
27. Pfeifer, R., Iida, F. & Lungarella, M. Cognition from the bottom up: on biological inspiration, body morphology, and soft materials. Trends Cogn. Sci. 18, 404–413 (2014).
28. Rus, D. & Tolley, M. T. Design, fabrication and control of soft robots. Nature 521, 467–475 (2015).
29. Markvicka, E. J., Tutika, R., Bartlett, M. D. & Majidi, C. Soft electronic skin for multi-site damage detection and localization. Adv. Funct. Mater. 29, 1900160 (2019).
30. Martinez, R. V., Glavan, A. C., Keplinger, C., Oyetibo, A. I. & Whitesides, G. M. Soft actuators and robots that are resistant to mechanical damage. Adv. Funct. Mater. 24, 3003–3010 (2014).
31. Kang, J., Tok, J. B. H. & Bao, Z. Self-healing soft electronics. Nat. Electron. 2, 144–150 (2019).
32. Bartlett, M. D., Dickey, M. D. & Majidi, C. Self-healing materials for soft-matter machines and electronics. NPG Asia Mater. 11, 19–22 (2019).
33. Cao, Y. et al. Self-healing electronic skins for aquatic environments. Nat. Electron. 2, 75–82 (2019).
34. Laschi, C. et al. Soft robot arm inspired by the octopus. Adv. Robot. 26, 709–727 (2012).
35. Duriez, C. in Proc. IEEE International Conference on Robotics and Automation 3982–3987 (IEEE, 2013).
36. Goldberg, N. N. et al. On planar discrete elastic rod models for the locomotion of soft robots. Soft Robot. https://doi.org/10.1089/soro.2018.0104 (2019).
37. Hiller, J. & Lipson, H. Dynamic simulation of soft multimaterial 3D-printed objects. Soft Robot. 1, 88–101 (2014).
38. Rieffel, J., Knox, D., Smith, S. & Trimmer, B. Growing and evolving soft robots. Artif. Life 20, 143–162 (2014).
39. Ricotti, L. et al. Biohybrid actuators for robotics: a review of devices actuated by living cells. Sci. Robot. 2, eaaq0495 (2017).
40. Liu, X. et al. Stretchable living materials and devices with hydrogel–elastomer hybrids hosting programmed cells. Proc. Natl Acad. Sci. USA 114, 2200–2205 (2017).
41. Damasio, A. The brain binds entities and events by multiregional activation from convergence zones. Neural Comput. 1, 123–132 (1989).
42. Damasio, A. Time-locked multiregional retroactivation: a systems-level proposal for the neural substrates of recall and recognition. Cognition 33, 25–62 (1989).
43. Meyer, K. & Damasio, A. Convergence and divergence in a neural architecture for recognition and memory. Trends Neurosci. 32, 376–382 (2009).
44. Man, K., Kaplan, J., Damasio, H. & Damasio, A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J. Comp. Neurol. 521, 4097–4111 (2013).
45. Salakhutdinov, R. & Hinton, G. Deep Boltzmann machines. Artif. Intell. Stat. 5, 448–455 (2009).
46. Hinton, G. E. & Sejnowski, T. J. Optimal perceptual inference. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 448–453 (IEEE, 1983).
47. Ackley, D., Hinton, G. & Sejnowski, T. A learning algorithm for Boltzmann machines. Cogn. Sci. 9, 147–169 (1985).
48. LeCun, Y. et al. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1, 541–551 (1989).
49. Graves, A., Eck, D., Beringer, N. & Schmidhuber, J. in Biologically Inspired Approaches to Advanced Information Technology (eds Ijspeert, A. J., Murata, M. & Wakamiya, N.) 127–136 (Springer, 2003).
50. Ngiam, J., Khosla, A. & Kim, M. Multimodal deep learning. In Proc. 28th International Conference on Machine Learning (eds Getoor, L. & Scheffer, T.) 689–696 (2011).
51. Aytar, Y., Vondrick, C. & Torralba, A. SoundNet: learning sound representations from unlabeled video. In Proc. 30th International Conference on Neural Information Processing Systems 892–900 (NIPS, 2016).
52. Man, K., Kaplan, J. T., Damasio, A. & Meyer, K. Sight and sound converge to form modality-invariant representations in temporoparietal cortex. J. Neurosci. 32, 16629–16636 (2012).
53. Lenz, I., Lee, H. & Saxena, A. Deep learning for detecting robotic grasps. Int. J. Rob. Res. 34, 705–724 (2015).
54. Oosterhof, N. N., Wiggett, A. J., Diedrichsen, J., Tipper, S. P. & Downing, P. E. Surface-based information mapping reveals crossmodal vision–action representations in human parietal and occipitotemporal cortex. J. Neurophysiol. 104, 1077–1089 (2010).
55. Man, K., Damasio, A., Meyer, K. & Kaplan, J. T. Convergent and invariant object representations for sight, sound, and touch. Hum. Brain Mapp. 36, 3629–3640 (2015).
56. Damasio, A. Self Comes to Mind (Pantheon, 2010).
57. Seth, A. K., Suzuki, K. & Critchley, H. D. An interoceptive predictive coding model of conscious presence. Front. Psychol. 2, 395 (2012).
58. Bersini, H. in Proc. Third International Conference on Simulation of Adaptive Behaviour 325–333 (MIT Press-Bradford Books, 1994).
59. Konidaris, G. & Barto, A. in From Animals to Animats 9 (ed. Nolfi, S.) 346–356 (Springer, 2006).
60. Keramati, M. & Gutkin, B. Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife 3, e04811 (2014).
61. Moerland, T. M., Broekens, J. & Jonker, C. M. Emotion in reinforcement learning agents and robots: a survey. Mach. Learn. 107, 443–480 (2018).
62. Juechems, K. & Summerfield, C. Where does value come from? Preprint at https://doi.org/10.31234/osf.io/rxf7e (2019).
63. Morville, T., Friston, K., Burdakov, D., Siebner, H. R. & Hulme, O. J. The homeostatic logic of reward. Preprint at https://doi.org/10.1101/242974 (2018).
64. Johnson, M. Morality for Humans (Univ. Chicago Press, 2014).
65. Levy, N. Consciousness and Moral Responsibility (Oxford Univ. Press, 2014).
66. Omohundro, S. M. The basic AI drives. In Proc. 2008 Conference on Artificial General Intelligence 483–492 (IOS Press, 2008).
67. DeYoung, C. G. in The Cambridge Handbook of Intelligence 711–737 (Cambridge Univ. Press, 2012).
68. Searle, J. R. Minds, brains and programs. Behav. Brain Sci. 3, 417–457 (1980).


Acknowledgements

We are grateful to H. Damasio for comments on this Perspective. This work was supported by grants from the Berggruen Foundation and the Templeton World Charity Foundation to A.D.

Author information


Corresponding authors

Correspondence to Kingson Man or Antonio Damasio.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Cite this article

Man, K., Damasio, A. Homeostasis and soft robotics in the design of feeling machines. Nat Mach Intell 1, 446–452 (2019). https://doi.org/10.1038/s42256-019-0103-7
