Artificial intelligence can be manifested in corporeal and non-corporeal forms. In this issue, Miriyev and Kovač introduce the concept of physical artificial intelligence, which refers to the emerging trend in robotics of creating physical systems by co-evolving body, control, morphology, actuation and sensing. To support their vision, the authors provide a blueprint for training researchers and establishing institutional environments. In our Editorial, we take a closer look at the history and promise of physical artificial intelligence.
Artificial intelligence can be defined as intelligence demonstrated by machines. But what counts as intelligence, and how intelligence is implemented in different kinds of machines, robots and software varies across disciplines and over time.
Addressing the problems caused by AI applications in society with ethics frameworks is futile until we confront the political structure of such applications.
Synthesizing robots via physical artificial intelligence is a multidisciplinary challenge for future robotics research. An education methodology is needed for researchers to develop a combination of skills in physical artificial intelligence.
Autonomous driving technology is improving, although doubts about its reliability remain. Controllers based on compact neural architectures could help improve its interpretability and robustness.
Microrobots can interact intelligently with their environment and complete specific tasks by well-designed incorporation of responsive materials. Recent work demonstrates how swarms of microrobots with specifically tuned surface chemistry can remove a hormone pollutant from a solution by coalescing it into a web.
Deep learning has resulted in impressive achievements, but under what circumstances does it fail, and why? The authors propose that its failures are a consequence of shortcut learning, a common characteristic across biological and artificial systems in which strategies that appear to have solved a problem fail unexpectedly under different circumstances.
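Shortcut learning can be illustrated with a toy experiment (a hypothetical sketch, not from the paper): a simple classifier is trained on data in which a spurious cue happens to correlate perfectly with the label, so it latches onto the shortcut instead of the weaker but reliable signal. When the correlation breaks at test time, accuracy collapses. All names and parameters below are illustrative assumptions.

```python
# Toy demonstration of shortcut learning with a nearest-centroid classifier.
# Feature 0 carries a weak but reliable signal; feature 1 is a strong
# spurious cue that tracks the label only in the training distribution.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_aligned):
    y = rng.integers(0, 2, n)
    true_feat = y + rng.normal(0.0, 2.0, n)          # weak, reliable signal
    cue = 5.0 * (y if shortcut_aligned else 1 - y)   # strong spurious cue
    X = np.column_stack([true_feat, cue + rng.normal(0.0, 0.1, n)])
    return X, y

def fit_centroids(X, y):
    # One centroid per class; prediction picks the nearer centroid.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(centroids, X):
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in (0, 1)])
    return d.argmin(axis=0)

X_tr, y_tr = make_data(500, shortcut_aligned=True)
X_te, y_te = make_data(500, shortcut_aligned=False)  # shortcut reversed

centroids = fit_centroids(X_tr, y_tr)
acc_train = (predict(centroids, X_tr) == y_tr).mean()
acc_test = (predict(centroids, X_te) == y_te).mean()
print(f"train accuracy: {acc_train:.2f}, test accuracy: {acc_test:.2f}")
```

The classifier appears to have solved the task on the training distribution, but because its decision rests on the spurious cue, performance drops far below chance once that cue no longer tracks the label, mirroring the failure mode the authors describe in both artificial and biological learners.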
Across disciplines, there is a rising interest in interpreting machine learning models to derive scientific knowledge from data. Genkin and Engel show that models optimized for predicting data can disagree with the ground truth and propose a new model selection principle to prioritize accurate interpretation.
Neural network models can predict the socioeconomic wealth of an area from aerial views, but fall short of explaining how visual features trigger a given prediction. The authors develop a pipeline for projecting class activation maps onto the underlying urban topology, to help interpret such predictions.
The wealth of data gathered from single-cell RNA sequencing can be processed with deep learning techniques, but often those methods are too opaque to reveal why a single cell is labelled as a certain cell type. Lifei Wang and colleagues present an RNA-sequencing analysis method that uses capsule networks and is interpretable enough to allow identification of cell-type-specific genes.
Metal–organic frameworks (MOFs) are attractive materials for gas capture, separation, sensing and catalysis. Determining their water stability is important, but time-intensive. Batra et al. use machine learning to screen water-stable MOFs and identify chemical features supporting their stability.
Microrobots are usually too small to contain traditional computing substrates that could control their behaviour. Dekanovsky and colleagues have developed a microrobot swarm that removes hormonal pollutants when it senses a chemical signal in its environment.
The thickness of the retina is an important medical indicator for diabetic retinopathy. Holmberg and colleagues present a self-supervised deep-learning method that uses cross-modal data to predict retinal thickness maps from easily obtainable fundus images.