
Getting a sense of touch

Wearable sensors and machine learning can be a powerful combination for capturing human gestures and tactile interactions.

Gesture recognition is important for both the study of human behaviour and the development of applications in healthcare and robotics. It is typically achieved with the help of images or videos, but radar is another option (see our Research Highlight in this issue of Nature Electronics on the latest device from Google and Infineon Technologies), and the increasing sophistication of flexible devices is also creating new approaches and possibilities. Last year, for instance, it was shown that yarn-based stretchable sensors worn over the hand can — with the help of machine learning — recognize various hand gestures [1]. Similarly, it was reported that data from skin-like stretchable strain sensors based on single-walled carbon nanotubes can be combined with visual data to achieve — again with the help of machine learning — accurate gesture recognition [2].

Photograph of functional fibres that can be used to create tactile textiles (described in the Article by Luo and colleagues), embedded in conventional yarn. Credit: Wan Shou

Our sense of touch is crucial to how we understand and interact with the physical world, and being able to capture such tactile interactions is, like gesture recognition, important in behavioural studies, as well as in healthcare and robotics applications. Ideally, large-area — and even full-body — sensory interfaces would be used to collect the necessary data. But conventional flexible and wearable electronics struggle to provide such capabilities. In an Article in this issue, Yiyue Luo and colleagues at the Massachusetts Institute of Technology show that textiles made from functional fibres can be used to monitor and recognize tactile interactions.

The fibres have a coaxial structure in which conductive stainless-steel threads are coated with a piezoresistive nanocomposite; when orthogonally overlapped, a pair of these fibres can act as a sensing unit that translates pressure stimuli into electrical signals. The fibres are converted into wearable sensing garments of different sizes — including gloves, vests and socks — using digital machine knitting. Fabricating such large-scale systems with a flawless array of consistent sensors is problematic. Thus, the researchers employ machine learning — and a self-supervised learning approach, in particular — for sensor calibration and correction. This relaxes the required fabrication precision while still allowing accurate sensing to be achieved.
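The sensing principle and the calibration step can be illustrated with a short sketch. Everything numeric below is hypothetical — the resistance values, sensitivity and divider circuit are illustrative assumptions, not parameters from the Article — and the min–max normalization is only a crude stand-in for the self-supervised calibration the researchers actually use.

```python
import numpy as np

# Illustrative model of one crossbar sensing unit: two orthogonally
# overlapped coated fibres whose contact resistance falls as pressure
# rises. All constants here are assumed, not taken from the Article.
R0 = 50_000.0      # unloaded contact resistance (ohm), assumed
K = 400.0          # pressure sensitivity (ohm per kPa), assumed
R_REF = 10_000.0   # series reference resistor in the divider (ohm)
V_SUPPLY = 3.3     # supply voltage (V)

def unit_resistance(pressure_kpa: float) -> float:
    """Piezoresistive response: resistance drops with pressure (floored)."""
    return max(R0 - K * pressure_kpa, 1_000.0)

def divider_voltage(pressure_kpa: float) -> float:
    """Voltage read across the reference resistor in a simple divider."""
    r = unit_resistance(pressure_kpa)
    return V_SUPPLY * R_REF / (R_REF + r)

def calibrate(readings: np.ndarray) -> np.ndarray:
    """Per-sensor min-max normalization over recorded frames: a crude
    stand-in for the self-supervised calibration that evens out
    sensor-to-sensor fabrication variation."""
    lo = readings.min(axis=0)
    hi = readings.max(axis=0)
    return (readings - lo) / np.where(hi > lo, hi - lo, 1.0)
```

In this toy model, pressing harder lowers the contact resistance and so raises the divider voltage, while the calibration step maps each sensor's raw range onto a common scale — which is why inconsistent sensors can still yield usable pressure maps.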

The team use the garments to capture large datasets of various human activities. They show, for instance, that the sensing vests — together with a neural-network-based classifier — can distinguish between different postures (sitting, lying down, leaning), as well as the texture of the contacted surface. And they show that the sensing socks can generate pressure maps of the feet that — together with a convolutional neural network — can be used to estimate a person's precise pose, a result that suggests such socks could potentially replace traditional full-body motion-capture systems.
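The shape of this classification pipeline can be sketched as follows. The Article uses neural networks trained on real tactile frames; here a nearest-centroid rule on synthetic 4 × 4 pressure maps stands in for it, and the posture-dependent load patterns are invented for illustration.

```python
import numpy as np

# Toy stand-in for the vest's posture classifier. The posture labels
# match the editorial; the pressure patterns below are hypothetical.
POSTURES = ["sitting", "lying", "leaning"]

def make_frame(posture: str, rng: np.random.Generator) -> np.ndarray:
    """Synthetic pressure maps: each posture loads a different region."""
    frame = rng.normal(0.0, 0.05, size=(4, 4))
    if posture == "sitting":
        frame[2:, :] += 1.0   # load concentrated on the lower rows
    elif posture == "lying":
        frame[:, :] += 0.5    # diffuse load over the whole map
    else:                     # leaning
        frame[:, 2:] += 1.0   # load shifted to one side
    return frame

def fit_centroids(n: int = 50) -> dict:
    """Average n noisy frames per posture into one template each."""
    rng = np.random.default_rng(0)
    return {p: np.mean([make_frame(p, rng).ravel() for _ in range(n)], axis=0)
            for p in POSTURES}

def classify(frame: np.ndarray, centroids: dict) -> str:
    """Assign a frame to the posture with the nearest template."""
    flat = frame.ravel()
    return min(centroids, key=lambda p: np.linalg.norm(flat - centroids[p]))
```

The real systems replace the hand-built templates with learned features, but the data flow is the same: tactile frames in, activity or pose labels out.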

As Jun Chen and colleagues at the University of California, Los Angeles point out in an accompanying News & Views article, the use of machine learning is increasingly popular in the analysis of the massive datasets that are generated with wearable sensing devices. However, they also note that "such systems typically rely on outsourced algorithms on physically separated computing systems or cloud servers". This can lead to problems related to energy consumption, time latency, network congestion and security, and systems that can implement machine learning locally would be preferable.

Such capabilities are, though, emerging. Ali Moin and colleagues at the University of California at Berkeley, ETH Zürich and the University of Bologna have, for instance, recently reported a flexible device that can measure muscle contraction patterns and is equipped with in-sensor adaptive machine learning [3]. The system, which is worn on the forearm, can be used to recognize up to 21 different hand gestures in real time and with an accuracy of around 93%.

References

1. Zhou, Z. et al. Nat. Electron. 3, 571–578 (2020).

2. Wang, M. et al. Nat. Electron. 3, 563–570 (2020).

3. Moin, A. et al. Nat. Electron. 4, 54–63 (2021).


Cite this article

Getting a sense of touch. Nat Electron 4, 171 (2021). https://doi.org/10.1038/s41928-021-00567-z
