

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

A preprint version of the article is available at arXiv.

Abstract

Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton–proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of O(1) μs is required. When implemented on field-programmable gate array hardware, the method achieves nanosecond inference and reduces resource consumption by a factor of 50.
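As a rough illustration of what "using fewer bits to represent weights and biases" means, the sketch below implements a generic signed fixed-point quantizer in plain Python. This is a simplified stand-in, not the QKeras implementation used in the paper; the function name and the round-to-nearest, saturate-on-overflow convention are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's QKeras code): a signed fixed-point
# quantizer with `bits` total bits, of which `integer` are integer bits.
# Values are rounded to the nearest multiple of 2**(integer - bits + 1)
# and clipped (saturated) to the representable range.

def quantize(x: float, bits: int, integer: int) -> float:
    """Round x onto a signed fixed-point grid with `bits` total bits."""
    step = 2.0 ** (integer - bits + 1)          # value of the least-significant bit
    lo = -2.0 ** integer                        # most negative representable value
    hi = 2.0 ** integer - step                  # most positive representable value
    return min(max(round(x / step) * step, lo), hi)

# A 6-bit quantizer with 0 integer bits covers [-1, 31/32] in steps of 1/32:
quantize(0.4, bits=6, integer=0)   # -> 0.40625 (= 13/32)
quantize(2.5, bits=6, integer=0)   # -> 0.96875 (saturated to 31/32)
```

Replacing 32-bit floating-point weights with such 6-bit values is what shrinks model size; the paper's contribution is choosing a (possibly different) quantizer like this per layer and per parameter type automatically.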


Fig. 1: Quantized ReLU function in QKeras.
Fig. 2: The QKeras and hls4ml workflow.
Fig. 3: Performance on a Xilinx VU9P FPGA.


Data availability

The data used in this study are openly available at Zenodo (ref. 58) from https://doi.org/10.5281/zenodo.3602260.

Code availability

The QKeras library, which also includes AutoQKeras and QTools, is available from https://github.com/google/qkeras (the work presented here uses QKeras version 0.7.4). Examples of how to run the library are available in the notebook subdirectory. The hls4ml library is available at https://github.com/fastmachinelearning/hls4ml and all versions ≥0.2.1 support QKeras models (the work presented here is based on version 0.2.1). For examples of how to use QKeras models in hls4ml, the notebook part4_quantization at https://github.com/fastmachinelearning/hls4ml-tutorial serves as a general introduction.

References

  1. Lin, S.-C. et al. The architectural implications of autonomous driving: constraints and acceleration. ACM SIGPLAN Notices 53, 751–766 (2018).

  2. Ignatov, A. et al. AI benchmark: running deep neural networks on Android smartphones. In Computer Vision – ECCV 2018 Workshops. ECCV 2018 Lecture Notes in Computer Science Vol. 11133 (eds Leal-Taixé, L. & Roth, S.) 288–314 (Springer, 2018); https://doi.org/10.1007/978-3-030-11021-5_19

  3. Leber, C., Geib, B. & Litz, H. High frequency trading acceleration using FPGAs. In 2011 21st International Conference on Field Programmable Logic and Applications 317–322 (IEEE, 2011).

  4. The LHC Study Group. The Large Hadron Collider, Conceptual Design. Technical Report CERN/AC/95-05 (CERN, 1995).

  5. Apollinari, G., Béjar Alonso, I., Brüning, O., Lamont, M. & Rossi, L. High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report. Technical Report (Fermi National Accelerator Laboratory, 2015).

  6. Iandola, F. N. et al. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5-MB model size. Preprint at https://arxiv.org/pdf/1602.07360.pdf (2016).

  7. Howard, A. G. et al. MobileNets: efficient convolutional neural networks for mobile vision applications. Preprint at https://arxiv.org/pdf/1704.04861.pdf (2017).

  8. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. & Chen, L.-C. MobileNetV2: inverted residuals and linear bottlenecks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4510–4520 (IEEE, 2018).

  9. Ma, N., Zhang, X., Zheng, H.-T. & Sun, J. ShuffleNet V2: practical guidelines for efficient CNN architecture design. In Proc. European Conference on Computer Vision (ECCV) Lecture Notes in Computer Science 116–131 (Springer, 2018).

  10. Howard, A. et al. Searching for MobileNetV3. In Proc. IEEE International Conference on Computer Vision 1314–1324 (IEEE, 2019).

  11. Ding, X. et al. Global sparse momentum SGD for pruning very deep neural networks. In Advances in Neural Information Processing Systems (eds Wallach, H. et al.) 6382–6394 (NIPS, 2019).

  12. He, Y., Zhang, X. & Sun, J. Channel pruning for accelerating very deep neural networks. In Proc. IEEE International Conference on Computer Vision 1389–1397 (IEEE, 2017).

  13. Duarte, J. et al. Fast inference of deep neural networks in FPGAs for particle physics. J. Instrum. 13, P07027 (2018).


  14. Nagel, M., van Baalen, M., Blankevoort, T. & Welling, M. Data-free quantization through weight equalization and bias correction. In Proc. IEEE International Conference on Computer Vision 1325–1334 (IEEE, 2019).

  15. Meller, E., Finkelstein, A., Almog, U. & Grobman, M. Same, same but different: recovering neural network quantization error through weight factorization. In Proc. 36th International Conference on Machine Learning (eds Chaudhuri, K. & Salakhutdinov, R.) 4486–4495 (PMLR, 2019).

  16. Zhao, R., Hu, Y., Dotzel, J., De Sa, C. & Zhang, Z. Improving neural network quantization without retraining using outlier channel splitting. Preprint at https://arxiv.org/pdf/1901.09504.pdf (2019).

  17. Banner, R., Nahshan, Y. & Soudry, D. Post training 4-bit quantization of convolutional networks for rapid-deployment. In Advances in Neural Information Processing Systems (eds Wallach, H. et al.) 7950–7958 (NIPS, 2019).

  18. Moons, B., Goetschalckx, K., Van Berckelaer, N. & Verhelst, M. Minimum energy quantized neural networks. In 51st Asilomar Conference on Signals, Systems, and Computers 1921–1925 (ACSSC, 2017).

  19. Courbariaux, M., Bengio, Y. & David, J.-P. BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems 28 (eds Cortes, C. et al.) 3123–3131 (Curran Associates, 2015).

  20. Zhang, D., Yang, J., Ye, D. & Hua, G. LQ-Nets: learned quantization for highly accurate and compact deep neural networks. In Proc. European Conference on Computer Vision (ECCV) 365–382 (Springer, 2018).

  21. Li, F. & Liu, B. Ternary weight networks. Preprint at https://arxiv.org/pdf/1605.04711.pdf (2016).

  22. Zhou, S. et al. DoReFa-Net: training low bitwidth convolutional neural networks with low bitwidth gradients. Preprint at https://arxiv.org/pdf/1606.06160.pdf (2016).

  23. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R. & Bengio, Y. Quantized neural networks: training neural networks with low precision weights and activations. J. Mach. Learn. Res. 18, 6869–6898 (2017).


  24. Rastegari, M., Ordonez, V., Redmon, J. & Farhadi, A. XNOR-Net: ImageNet classification using binary convolutional neural networks. In Computer Vision – ECCV 2016. Lecture Notes in Computer Science Vol. 9908 (eds Leibe, B. et al.) 525–542 (Springer, 2016).

  25. Micikevicius, P. et al. Mixed precision training. In International Conference on Learning Representations (ICLR, 2018).

  26. Zhuang, B., Shen, C., Tan, M., Liu, L. & Reid, I. Towards effective low-bitwidth convolutional neural networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 7920–7928 (IEEE, 2018).

  27. Wang, N., Choi, J., Brand, D., Chen, C.-Y. & Gopalakrishnan, K. Training deep neural networks with 8-bit floating point numbers. In Advances in Neural Information Processing Systems 7675–7684 (NIPS, 2018).

  28. Wang, K., Liu, Z., Lin, Y., Lin, J. & Han, S. HAQ: hardware-aware automated quantization with mixed precision. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2019).

  29. Dong, Z., Yao, Z., Gholami, A., Mahoney, M. & Keutzer, K. HAWQ: Hessian AWare Quantization of neural networks with mixed-precision. In Proc. 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 293–302 (IEEE, 2019).

  30. Dong, Z. et al. HAWQ-V2: Hessian AWare trace-weighted Quantization of neural networks. In Advances in Neural Information Processing Systems Vol. 33 (eds Larochelle, H. et al.) 18518–18529 (Curran Associates, 2020).

  31. Wu, B. et al. Mixed precision quantization of ConvNets via differentiable neural architecture search. Preprint at https://arxiv.org/pdf/1812.00090.pdf (2018).

  32. Chollet, F. et al. Keras https://github.com/fchollet/keras (2015).

  33. Abadi, M. et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems http://tensorflow.org/ (2015).

  34. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (eds Wallach, H. et al.) 8024 (Curran Associates, 2019); https://arxiv.org/pdf/1912.01703.pdf

  35. Open Neural Network Exchange Collaboration https://onnx.ai/ (2017).

  36. Venieris, S. I., Kouris, A. & Bouganis, C.-S. Toolflows for mapping convolutional neural networks on FPGAs: a survey and future directions. ACM Comput. Surv. 51, 56 (2018).


  37. Guo, K., Zeng, S., Yu, J., Wang, Y. & Yang, H. A survey of FPGA-based neural network inference accelerators. ACM Trans. Reconfigurable Technol. Syst. 12, 2 (2018).


  38. Shawahna, A., Sait, S. M. & El-Maleh, A. FPGA-based accelerators of deep learning networks for learning and classification: a review. IEEE Access 7, 7823–7859 (2019).


  39. Abdelouahab, K., Pelcat, M., Serot, J. & Berry, F. Accelerating CNN inference on FPGAs: a survey. Preprint at https://arxiv.org/pdf/1806.01683.pdf (2018).

  40. Intel. Intel High Level Synthesis Compiler https://www.intel.com/content/www/us/en/software/programmable/quartus-prime/hls-compiler.html (2020).

  41. Mentor/Siemens. Catapult High-Level Synthesis https://www.mentor.com/hls-lp/catapult-high-level-synthesis (2020).

  42. Iiyama, Y. et al. Distance-weighted graph neural networks on FPGAs for real-time particle reconstruction in high energy physics. Front. Big Data 3, 598927 (2021).


  43. Ngadiuba, J. et al. Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml. Mach. Learn. Sci. Technol. 2, 015001 (2020).


  44. Umuroglu, Y. et al. FINN: a framework for fast, scalable binarized neural network inference. In Proc. 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays 65–74 (ACM, 2017).

  45. Blott, M. et al. FINN-R: an end-to-end deep-learning framework for fast exploration of quantized neural networks. ACM Trans. Reconfigurable Technol. Syst. 11, 16 (2018).


  46. Pappalardo, A. et al. Xilinx/brevitas: Release version 0.2.1 https://doi.org/10.5281/zenodo.4507794 (2021).

  47. Umuroglu, Y., Akhauri, Y., Fraser, N. J. & Blott, M. LogicNets: co-designed neural networks and circuits for extreme-throughput applications. In 30th International Conference on Field-Programmable Logic and Applications 291–297 (IEEE, 2020).

  48. Guan, Y. et al. FP-DNN: an automated framework for mapping deep neural networks onto FPGAs with RTL-HLS hybrid templates. In 2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) 152 (IEEE, 2017).

  49. Sharma, H. et al. From high-level deep neural models to FPGAs. In Proc. 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture 1 (IEEE, 2016); https://doi.org/10.1109/MICRO.2016.7783720

  50. Gokhale, V., Zaidy, A., Chang, A. X. M. & Culurciello, E. Snowflake: an efficient hardware accelerator for convolutional neural networks. In Proc. 2017 IEEE International Symposium on Circuits and Systems (ISCAS) 1–4 (IEEE, 2017).

  51. Venieris, S. I. & Bouganis, C.-S. fpgaConvNet: a toolflow for mapping diverse convolutional neural networks on embedded FPGAs. In Proc. NIPS 2017 Workshop on Machine Learning on the Phone and other Consumer Devices (NIPS, 2017); https://arxiv.org/pdf/1711.08740.pdf

  52. Venieris, S. I. & Bouganis, C.-S. fpgaConvNet: automated mapping of convolutional neural networks on FPGAs. In Proc. 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays 291 (ACM, 2017).

  53. Venieris, S. I. & Bouganis, C.-S. fpgaConvNet: a framework for mapping convolutional neural networks on FPGAs. In Proc. 2016 IEEE 24th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) 40 (IEEE, 2016).

  54. Li, H. et al. A high performance FPGA-based accelerator for large-scale convolutional neural networks. In Proc. 2016 26th International Conference on Field Programmable Logic and Applications (FPL) 1–9 (IEEE, 2016).

  55. Zhao, R. et al. Hardware compilation of deep neural networks: an overview. In 2018 IEEE 29th International Conference on Application-Specific Systems, Architectures and Processors (ASAP) 1–8 (IEEE, 2018).

  56. Google. TensorFlow Lite https://www.tensorflow.org/lite (2020).

  57. Moreno, E. A. et al. JEDI-net: a jet identification algorithm based on interaction networks. Eur. Phys. J. C 80, 58 (2019).


  58. Pierini, M., Duarte, J. M., Tran, N. & Freytsis, M. HLS4ML LHC Jet Dataset (150 particles) https://doi.org/10.5281/zenodo.3602260 (2020).

  59. Zhu, M. & Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. Preprint at https://arxiv.org/pdf/1710.01878.pdf (2017).

  60. Coelho, C. Qkeras https://github.com/google/qkeras (2019).

  61. Nair, V. & Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th International Conference on Machine Learning 807–814 (ICML, 2010).

  62. Hennessy, J. L. & Patterson, D. A. Computer Architecture: a Quantitative Approach 6th edn (Morgan Kaufmann, 2016).

  63. Horowitz, M. Computing’s energy problem (and what we can do about it). In Proc. 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC) 10–14 (IEEE, 2014).

  64. O’Malley, T. et al. Keras Tuner https://github.com/keras-team/keras-tuner (2019).

  65. Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A. & Talwalkar, A. Hyperband: a novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. 18, 6765–6816 (2017).


  66. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (IEEE, 2016).

  67. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

  68. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR) 2015, Conference Track Proceedings (eds Bengio, Y. & LeCun, Y.) (ICLR, 2015); https://arxiv.org/pdf/1412.6980.pdf

  69. Aarrestad, T. et al. Fast convolutional neural networks on FPGAs with hls4ml. Preprint at https://arxiv.org/pdf/2101.05108.pdf (2021).

  70. Netzer, Y. et al. Reading digits in natural images with unsupervised feature learning. In Proc. NIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning (NIPS, 2011); https://deeplearningworkshopnips2011.files.wordpress.com/2011/12/12.pdf

  71. Gupta, S., Agrawal, A., Gopalakrishnan, K. & Narayanan, P. Deep learning with limited numerical precision. In Proc. 32nd International Conference on Machine Learning 1737–1746 (PMLR, 2015).

  72. Kwan, H. K. & Tang, C. Z. A design method for multilayer feedforward neural networks for simple hardware implementation. In Proc. 1993 IEEE International Symposium on Circuits and Systems Vol. 4, 2363–2366 (IEEE, 1993).

  73. Howard, A. G. et al. MobileNets: efficient convolutional neural networks for mobile vision applications. Preprint at https://arxiv.org/pdf/1704.04861.pdf (2017).

  74. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (eds Bengio, Y. & LeCun, Y.) (ICLR, 2015); https://arxiv.org/pdf/1409.1556.pdf

  75. Das, D. et al. Mixed precision training of convolutional neural networks using integer operations. In International Conference on Learning Representations (ICLR, 2018).

  76. Hwang, K. & Sung, W. Fixed-point feedforward deep neural network design using weights +1, 0, and −1. In Proc. 2014 IEEE Workshop on Signal Processing Systems (SiPS) 1–6 (IEEE, 2014).

  77. Li, F., Zhang, B. & Liu, B. Ternary weight networks. Preprint at https://arxiv.org/pdf/1605.04711.pdf (2016).


Acknowledgements

M.P. and S.S. are supported by, and V.L. and A.A.P. are partially supported by, the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant no. 772369). V.L. is supported by Zenseact under the CERN Knowledge Transfer Group. A.A.P. is supported by CEVA under the CERN Knowledge Transfer Group. We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community was important for the development of this project.

Author information


Contributions

C.N.C., A.K., S.L. and H.Z. conceived and designed the QKeras, AutoQKeras and QTools software libraries. T.A., V.L., M.P., A.A.P., S.S. and J.N. designed and implemented support for QKeras in hls4ml. S.S. conducted the experiments. T.A., A.A.P. and S.S. wrote the manuscript.

Corresponding author

Correspondence to Thea Klaeboe Aarrestad.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Machine Intelligence thanks Jose Nunez-Yanez, Stylianos Venieris and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Model architecture and quantization.

Model architecture for the fully-connected NN architecture under study. The numbers in brackets are the precisions used for each layer, quoted as 〈B, I〉, where B is the precision in bits and I the number of integer bits. When different precision is used for weights and biases, the quantization is listed as w and b, respectively. These have been obtained using the per-layer, per-parameter type automatic quantization procedure described in Section VI.
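To make the 〈B, I〉 notation concrete, the sketch below decodes a precision specification into its resolution and representable range, assuming the usual signed two's-complement fixed-point convention. The helper name is hypothetical and the exact QKeras saturation behaviour may differ in detail.

```python
# Hypothetical helper: decode a <B, I> fixed-point specification, where
# B is the total number of bits and I the number of integer bits, into
# the step size (LSB value) and the representable value range.

def fixed_point_range(bits: int, integer: int):
    step = 2.0 ** (integer - bits + 1)   # value of the least-significant bit
    lo = -2.0 ** integer                 # most negative representable value
    hi = 2.0 ** integer - step           # most positive representable value
    return step, lo, hi

# <16, 6>: 16 bits total, 6 of them integer bits
# -> resolution 2^-9, range [-64, 64 - 2^-9]
step, lo, hi = fixed_point_range(16, 6)

# <6, 0>: the 6-bit case -> resolution 1/32, range [-1, 31/32]
fixed_point_range(6, 0)
```

Narrower 〈B, I〉 settings per layer, as listed in the figure, trade this resolution and range against hardware cost.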

Extended Data Fig. 2 Variance shift.

Variance shift and the effect of initialization in gradient descent.

Extended Data Fig. 3 Layers and quantizers in QKeras.

List of available layers and quantizers in QKeras.

Extended Data Fig. 4 ROC curves for the models under study.

ROC curves of false positive rate (FPR) versus true positive rate (TPR) for the Baseline Full (BF), quantized 6-bit (Q6), AutoQKeras Energy Optimized (QE) and AutoQKeras Bits Optimized (QB) models.


About this article


Cite this article

Coelho, C.N., Kuusela, A., Li, S. et al. Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors. Nat Mach Intell 3, 675–686 (2021). https://doi.org/10.1038/s42256-021-00356-5

