Human action recognition with a large-scale brain-inspired photonic computer


The recognition of human actions in video streams is a challenging task in computer vision, with cardinal applications in brain–computer interfaces and surveillance, for example. Recently, deep learning has produced remarkable results, but it can be hard to use in practice, as its training requires large datasets and special-purpose, energy-consuming hardware. In this work, we propose a photonic hardware approach. Our experimental set-up comprises off-the-shelf components and implements an easy-to-train recurrent neural network with 16,384 nodes, scalable to hundreds of thousands of nodes. The system, based on the reservoir computing paradigm, is trained to recognize six human actions from the KTH video database using either raw frames as inputs or a set of features extracted with the histogram of oriented gradients (HOG) algorithm. We report a classification accuracy of 91.3%, comparable to state-of-the-art digital implementations, while promising a higher processing speed than existing hardware approaches. Because of the massively parallel processing capabilities offered by photonic architectures, we anticipate that this work will pave the way towards easily reconfigurable and energy-efficient solutions for real-time video processing.
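The reservoir computing paradigm mentioned above keeps the input and recurrent weights fixed and random, and trains only a linear readout. The following NumPy sketch illustrates that general scheme on toy data; it is an illustrative digital toy, not the photonic set-up or the KTH experiment, and the dimensions, tanh nonlinearity, time-averaged readout features and ridge regularization are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (the photonic reservoir in the paper has 16,384 nodes).
n_in, n_res, n_classes, n_samples, T = 20, 200, 6, 120, 50

# Fixed random input and recurrent weights: never trained.
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius below 1

def reservoir_features(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in)
    and return time-averaged node states as readout features."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        # tanh stands in for the nonlinearity of the optical system.
        x = np.tanh(W_res @ x + W_in @ u_t)
        states.append(x)
    return np.mean(states, axis=0)

# Random toy "videos" with random class labels, in place of KTH sequences.
X = np.stack([reservoir_features(rng.normal(size=(T, n_in)))
              for _ in range(n_samples)])
y = rng.integers(0, n_classes, size=n_samples)
Y = np.eye(n_classes)[y]  # one-hot targets

# Only the linear readout is trained, via ridge regression.
reg = 1e-3
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ Y)

pred = np.argmax(X @ W_out, axis=1)
print("training accuracy on toy data:", np.mean(pred == y))
```

Because only `W_out` is learned, and in closed form, training is far cheaper than backpropagation through time; this is what makes the architecture "easy to train" and well suited to a fixed physical substrate such as an optical cavity.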


Fig. 1: Examples of KTH frames and HOG features.
Fig. 2: Principle of how our reservoir computer solves the human action classification task.
Fig. 3: Illustration of the experimental set-up, composed of an optical arm connected to a computer.
Fig. 4: Performance of our photonic neuro-inspired architecture on the human action classification task.
Fig. 5: Confusion matrices with the best performance.

Data availability

The KTH dataset can be downloaded from its website. The numerical and experimental data can be downloaded from the data folder in our GitHub repository.

Code availability

The code used in this study can be downloaded from the scripts folder in our GitHub repository.




The authors thank the creators of the KTH dataset for making the videos publicly available. This work was supported by AFOSR (grants nos. FA-9550-15-1-0279 and FA-9550-17-1-0072), Région Grand-Est and the Volkswagen Foundation via the NeuroQNet Project.

Author information




D.B., N.M. and D.R. designed and managed the study. P.A., N.M. and D.R. realized the experimental set-up. P.A. performed the numerical simulations and the experimental campaigns. P.A., N.M. and D.R. prepared the manuscript. All authors discussed the results and reviewed the manuscript.

Corresponding authors

Correspondence to Piotr Antonik or Damien Rontani.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Antonik, P., Marsal, N., Brunner, D. et al. Human action recognition with a large-scale brain-inspired photonic computer. Nat Mach Intell 1, 530–537 (2019).
