
Stretchable e-skin and transformer enable high-resolution morphological reconstruction for soft robots


Many robotic tasks require knowledge of the exact 3D robot geometry. However, this remains extremely challenging in soft robotics because of the infinite degrees of freedom of soft bodies arising from their continuum nature. Previous studies have achieved only low proprioceptive geometry resolution (PGR) and therefore suffer from loss of geometric detail (for example, local deformation and surface information) and limited applicability. Here we report an intelligent stretchable capacitive e-skin that endows soft robots with high PGR (3,900) bodily awareness. We demonstrate that the proposed e-skin can finely capture a wide range of complex 3D deformations across the entire soft body through multi-position capacitance measurements. The e-skin signals can be directly translated into high-density point clouds portraying the complete geometry via a transformer-based deep architecture. This high PGR proprioception system, which provides millimetre-scale local and global geometry reconstruction (2.322 ± 0.687 mm error on a 20 × 20 × 200 mm soft manipulator), can assist in solving fundamental problems in soft robotics, such as precise closed-loop control and digital twin modelling.
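The pipeline described above can be illustrated with a minimal sketch: a single self-attention layer (the core transformer operation) that maps a vector of multi-position capacitance readings to a dense 3D point cloud, followed by the mean per-point Euclidean error used to judge reconstruction quality. This is not the authors' released C2DT implementation; all dimensions, weight matrices and names here are illustrative assumptions with random stand-in values.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Scaled dot-product self-attention over capacitance tokens."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

def mean_point_error(pred, gt):
    """Mean Euclidean distance between corresponding points."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Illustrative sizes: 120 capacitance measurements, 3,900 output points.
n_meas, d_model, n_points = 120, 32, 3900

# Embed each scalar capacitance reading as a token plus a positional code
# (random stand-ins for learned parameters).
cap = rng.normal(size=n_meas)                      # one capacitance frame
embed = rng.normal(size=(1, d_model))
pos = rng.normal(size=(n_meas, d_model))
tokens = cap[:, None] @ embed + pos

wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
attended = self_attention(tokens, wq, wk, wv)

# Pool the attended tokens and decode into n_points xyz coordinates.
head = rng.normal(size=(d_model, n_points * 3))
cloud = (attended.mean(axis=0) @ head).reshape(n_points, 3)
print(cloud.shape)  # (3900, 3)
```

In the real system the embedding, attention and decoding weights are trained end to end on paired capacitance/point-cloud data, so the decoded cloud converges towards the robot's true geometry rather than the random output of this sketch.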


Fig. 1: Design of the SCAS and the pipeline for full-geometry, high PGR 3D deformation reconstruction of soft robots.
Fig. 2: High PGR 3D deformation reconstruction based on the virtual dataset.
Fig. 3: Characterization of the SCAS.
Fig. 4: Real-world high PGR proprioception.

Data availability

All data are publicly available in Edinburgh DataShare with the identifier

Code availability

Code for the implementation of the C2DT is available in Edinburgh DataShare with the identifier




Acknowledgements

Y.Y. and F.G.-S. acknowledge the support of the Data Driven Innovation Chancellor’s Fellowship at The University of Edinburgh. D.H. acknowledges the support of the studentship from the School of Engineering, The University of Edinburgh. S.Z. acknowledges the support of the Seed Funding for Strategic Interdisciplinary Research Scheme (SIRS) from The University of Hong Kong and the Germany/Hong Kong Joint Research Scheme (G-HKU707/22) from the Research Grants Council.

Author information

Contributions
Y.Y., F.G.-S., D.H. and S.Z. conceived the concept. Y.Y. and F.G.-S. supervised the project and acquired funding. D.H. and Y.Y. carried out the simulation, fabrication and experiments. S.Z. guided the material fabrication and characterization. Y.Y. designed the measurement electronics and software. D.H. designed the machine learning algorithm. D.H. and Y.Y. analysed the data. D.H., F.G.-S. and Y.Y. wrote the manuscript. All authors reviewed and revised the manuscript.

Corresponding author

Correspondence to Yunjie Yang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–11 and Tables 1–4.

Reporting Summary

Examples of high PGR geometry reconstruction on the virtual proprioception dataset.

Examples of high PGR geometry reconstruction on the real-world dataset.

Observation of examples of high PGR geometry reconstruction on the real-world dataset from multiple viewpoints.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hu, D., Giorgio-Serchi, F., Zhang, S. et al. Stretchable e-skin and transformer enable high-resolution morphological reconstruction for soft robots. Nat Mach Intell 5, 261–272 (2023).


