Abstract
Many robotic tasks require knowledge of the exact 3D robot geometry. However, this remains extremely challenging in soft robotics because of the infinite degrees of freedom that soft bodies derive from their continuum nature. Previous studies have achieved only low proprioceptive geometry resolution (PGR), and thus suffer from loss of geometric detail (for example, local deformation and surface information) and limited applicability. Here we report an intelligent stretchable capacitive e-skin that endows soft robots with high-PGR (3,900) bodily awareness. We demonstrate that the proposed e-skin can finely capture a wide range of complex 3D deformations across the entire soft body through multi-position capacitance measurements. The e-skin signals can be directly translated into high-density point clouds portraying the complete geometry via a transformer-based deep architecture. This high-PGR proprioception system provides millimetre-scale local and global geometry reconstruction (2.322 ± 0.687 mm error on a 20 × 20 × 200 mm soft manipulator) and can assist in solving fundamental problems in soft robotics, such as precise closed-loop control and digital twin modelling.
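The signal-to-geometry mapping described above — multi-position capacitance readings attended over by a transformer-style network and decoded into a dense 3D point cloud — can be sketched in highly simplified form as a single self-attention layer followed by a point-generation head. All dimensions, weights and the attention layout below are illustrative placeholders, not the authors' C2DT architecture; random weights stand in for trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Scaled dot-product self-attention over capacitance "tokens",
    # letting each electrode-pair reading attend to all the others.
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
n_readings, d = 16, 32            # hypothetical measurement/channel counts
n_points = 3900                   # PGR: one 3D point per geometry element

# One scalar capacitance per electrode pairing, embedded into d channels.
caps = rng.standard_normal((n_readings, 1))
tokens = caps @ rng.standard_normal((1, d))

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
feat = self_attention(tokens, Wq, Wk, Wv)        # (n_readings, d)

# Point-generation head: flatten attended features and project each
# geometry element to an (x, y, z) coordinate.
W_out = rng.standard_normal((n_readings * d, n_points * 3))
cloud = (feat.reshape(-1) @ W_out).reshape(n_points, 3)
print(cloud.shape)  # (3900, 3) — one point per unit of geometry resolution
```

In the trained system the weights would be learned end to end from paired capacitance readings and ground-truth point clouds, with a point-set loss (for example, nearest-neighbour distance) yielding the millimetre-scale reconstruction error quoted above.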
Data availability
All data are publicly available in Edinburgh DataShare with the identifier https://doi.org/10.7488/ds/3773.
Code availability
Code for the implementation of the C2DT is available in Edinburgh DataShare with the identifier https://doi.org/10.7488/ds/3773.
Acknowledgements
Y.Y. and F.G.-S. acknowledge the support of the Data Driven Innovation Chancellor’s Fellowship at The University of Edinburgh. D.H. acknowledges the support of the studentship from the School of Engineering, The University of Edinburgh. S.Z. acknowledges the support of the Seed Funding for Strategic Interdisciplinary Research Scheme (SIRS) from The University of Hong Kong and the Germany/Hong Kong Joint Research Scheme (G-HKU707/22) from the Research Grants Council.
Author information
Authors and Affiliations
Contributions
Y.Y., F.G.-S., D.H. and S.Z. conceived the concept. Y.Y. and F.G.-S. supervised the project and acquired funding. D.H. and Y.Y. carried out the simulation, fabrication and experiments. S.Z. guided the material fabrication and characterization. Y.Y. designed the measurement electronics and software. D.H. designed the machine learning algorithm. D.H. and Y.Y. analysed the data. D.H., F.G.-S. and Y.Y. wrote the manuscript. All authors reviewed and revised the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Peer review
Peer review information
Nature Machine Intelligence thanks the anonymous reviewers for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Figs. 1–11 and Tables 1–4.
Examples of high PGR geometry reconstruction on the virtual proprioception dataset.
Examples of high PGR geometry reconstruction on the real-world dataset.
Observation of examples of high PGR geometry reconstruction on the real-world dataset from multiple viewpoints.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Hu, D., Giorgio-Serchi, F., Zhang, S. et al. Stretchable e-skin and transformer enable high-resolution morphological reconstruction for soft robots. Nat Mach Intell 5, 261–272 (2023). https://doi.org/10.1038/s42256-023-00622-8