Bridging the gap between artificial vision and touch
Object manipulation using an innovative glove allows large databases of detailed pressure maps to be obtained. Such data could lead to advances in robotic sensing and in our understanding of the role of touch in manipulation.
The study and replication of human sensory abilities, such as visual, auditory and tactile (touch-based) perception, depend on the availability of suitable data. Generally, the larger and richer the data set, the more closely models can mimic these functions. Advances in artificial visual and speech systems rely on powerful models, known as deep-learning models, and have been fuelled by the ubiquity of databases of digital images and spoken audio (see, for example, go.nature.com/2w7nc0q). By contrast, progress in the development of tactile sensors — devices that convert a stimulus of physical contact into a measurable signal — has been limited, mainly because of the difficulty of integrating electronics into flexible materials [1]. In a paper in Nature, Sundaram et al. [2] report their use of a low-cost tactile glove that addresses this issue.