
  • News & Views

EDGE COMPUTING

Efficient deep learning

The computational complexity of deep neural networks is a major obstacle in many application scenarios driven by low-power devices, including federated learning. A recent finding shows that random sketches can substantially reduce model complexity without sacrificing prediction accuracy.
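To illustrate the general idea, a random sketch confines the trainable parameters to a small subspace: a fixed random projection maps a low-dimensional coordinate vector to the full weight vector, so only the low-dimensional coordinates need to be trained, stored, or transmitted. The snippet below is a minimal NumPy sketch of this principle; the dimensions, the Gaussian projection, and all variable names are illustrative assumptions, not the specific construction of the work discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000  # full parameter count (hypothetical)
d = 100     # sketch dimension, d << D

# Fixed random projection (the "sketch"); it is never trained and can be
# regenerated from the PRNG seed alone.
P = rng.standard_normal((D, d)) / np.sqrt(d)

theta0 = rng.standard_normal(D) * 0.01  # frozen random initialization
z = np.zeros(d)                          # the only trainable parameters

def full_params(z):
    # Full model weights are reconstructed on the fly from the sketch.
    return theta0 + P @ z

theta = full_params(z)
# Only d numbers (plus the seed for P) must be stored or communicated,
# a D/d = 100x reduction in this toy setting.
```

Training would then optimize `z` by backpropagating through `full_params`, while the memory and communication cost scales with `d` rather than `D`.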


Fig. 1: Illustrative example of model compression and its application to federated learning.
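In a federated setting such as the one depicted in Fig. 1, each client can transmit a compressed version of its model update, and the server can average in the compressed domain before reconstructing. The following is a hypothetical NumPy sketch using a shared random projection; the function names, dimensions, and compression operator are illustrative assumptions rather than the scheme of the work under discussion.

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, K = 1000, 50, 4  # full dimension, sketch dimension, number of clients

# Shared random projection; clients and server derive it from a common seed.
P = rng.standard_normal((D, d)) / np.sqrt(d)

def sketch(update):
    # Client side: compress a full-dimensional update to d numbers.
    return P.T @ update

def unsketch(z):
    # Server side: approximate reconstruction of the full update.
    return P @ z

client_updates = [rng.standard_normal(D) for _ in range(K)]

# Each client transmits only a d-dimensional sketch; averaging commutes
# with the linear sketch, so the server averages in the compressed domain.
avg_sketch = np.mean([sketch(u) for u in client_updates], axis=0)
server_update = unsketch(avg_sketch)
```

Because the sketch is linear, averaging compressed updates and then reconstructing gives the same result as reconstructing each update first, which is what makes compressed aggregation compatible with federated averaging.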


Author information

Corresponding author

Correspondence to Shiqiang Wang.

Ethics declarations

Competing interests

The author declares no competing interests.


About this article


Cite this article

Wang, S. Efficient deep learning. Nat Comput Sci 1, 181–182 (2021). https://doi.org/10.1038/s43588-021-00042-x

