A growing group of researchers believes that the ‘intelligence’ in what we currently call artificial intelligence (AI) is limited: it is on the one hand too attached to data-driven trends in deep learning and on the other hand too beholden to what humans think of as intelligence, which often reflects loose thinking about human cognitive capacities. They promote a different direction, taking inspiration from the complex behaviours and capabilities of biological organisms and focusing on how those organisms interact with the world. Aslan Miriyev and Mirko Kovač describe this view and the need for a new interdisciplinary approach to enable what they call physical artificial intelligence (PAI) in a Comment in this issue.

There are many possible definitions of AI. The authors of the 1955 Proposal for the Dartmouth Summer Research Project on Artificial Intelligence1 set out to find how to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” These complex and mainly human forms of intelligence were initially tackled by methods based on the manipulation of symbols, sometimes referred to as ‘good old-fashioned artificial intelligence’ (GOFAI). In contrast, neuroscience-inspired approaches such as connectionism and neural networks focused on learning and representations, which led to the data-driven deep learning methods that are prominent today.

Indeed, many people now consider AI to be synonymous with deep learning, given its impact on applications such as image recognition, language processing and speech recognition, and its utility for recognizing patterns in data from science and industry. However, deep learning in its current state has limitations: it tends to be data hungry and compute heavy, is largely confined to pattern recognition over sensory data, is prone to unexpected mistakes, as shown by adversarial examples, and falls short of much of cognition and behaviour. It has turned out to be possible to beat world champions at Go and chess, but much more challenging to learn the basic cognitive and motor skills of a toddler.

An observation sometimes referred to as Moravec’s paradox, articulated in the 1980s by researchers in AI and robotics, holds that phenomena considered to be high-level intelligence, such as reasoning, arose late in evolution and require relatively little computation. By contrast, sensorimotor skills and forms of body regulation such as homeostasis, which are generally considered less intelligent or even unintelligent, are highly evolved, often unconscious, and require much greater computational resources. Moravec’s paradox may explain why it is easier to design an AI system that finds the best move in a game of chess than to create a dexterous robotic hand that can pick up the pieces and place them on the board.

In the 1980s, the GOFAI approach was not only overtaken by neural networks; it also came under fire from another movement, one that pointed to the importance of physical grounding. According to this approach, an intelligent system should have its representations grounded in the physical world: rather than maintaining an internal model of the world, a robot should use its body and sensors to update its control systems and goal-directed behaviours. These views were famously described by Rodney Brooks in his paper ‘Elephants don’t play chess’2, in which he pointed out that the world is its own best model and that “the trick is to sense it appropriately and often enough”.

Since then, many advances have been made in robotics to address this challenge. In the past decade, several directions have come together: bio-inspired design, materials, actuation and sensing, and control, as well as data-driven approaches. This convergence has led Miriyev and Kovač to propose PAI as a new interdisciplinary approach, which they define as “the theory and practice of creating physical systems capable of performing tasks that are typically associated with intelligent organisms.” It is noteworthy that the authors write ‘performing tasks’ and ‘intelligent organisms’ (rather than humans), referring to the multitude of examples from nature of complex problem-solving behaviour, such as honey bees that use optic flow and stereovision to approach landing surfaces, and octopuses that demonstrate extraordinary ingenuity in manoeuvring through challenging spaces.

The PAI approach offers the opportunity to incorporate homeostasis, which is seen as an essential process by which organisms regulate themselves and adapt to different environments. In a Perspective last year, Man and Damasio observed that the field of soft robotics has advanced to a stage where a process resembling homeostasis could be integrated with intelligent machines3. This integration of a body, internal regulatory mechanisms and control could lead to a new class of machines with intrinsic goals.

Given the multidisciplinary nature of PAI, the authors propose that a structure for education and training is required so that researchers can develop the necessary skills. In particular, they describe PAI as drawing on five main disciplines: mechanical engineering, computer science, biology, chemistry and materials science. They further discuss changes that need to be made at the institutional and community levels in order to encourage and support PAI research.

We can expect that what is generally considered to be intelligence, and ‘artificial intelligence’, will remain in flux. Integrating advances from different disciplines offers an opportunity to create intelligent machines of ever greater complexity.