

Segregating and Recognizing Human Actions from Video Footages Using LRCN Technique

  • Meet Pandya
  • Abhishek Pillai
  • Himanshu Rupani
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1141)

Abstract

Computer vision is a vast area of research that involves extracting useful information from images or sequences of images. Human activity recognition is one such field undergoing extensive research, and the practical applications of such a model span a wide range of research areas as well as real-world practice. This paper proposes a two-model approach combining a convolutional neural network (CNN) built with transfer learning and a long short-term memory (LSTM) model. The CNN is applied to gather the feature vectors for each video, and the LSTM network is used to classify the video activity. Standard activities include bench press, horse riding, basketball dunk, etc. The proposed algorithm achieved a high accuracy of 94.2%.
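The two-stage design described above (a pre-trained CNN producing per-frame feature vectors, followed by an LSTM that classifies the resulting sequence) can be sketched roughly as follows. This is a minimal illustration only, assuming Keras/TensorFlow, an ImageNet-pre-trained InceptionV3 backbone, 40 frames per clip, and UCF-101's 101 classes; the paper's exact backbone, frame count, and hyperparameters are not given here and may differ.

```python
# Minimal LRCN-style sketch (assumptions: Keras/TensorFlow, InceptionV3 backbone
# pre-trained on ImageNet, 40 frames per clip, UCF-101's 101 classes; the paper's
# actual backbone, frame count, and hyperparameters may differ).
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_FRAMES, FRAME_SIZE, NUM_CLASSES = 40, 299, 101

# Stage 1: frozen CNN feature extractor (transfer learning).
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(FRAME_SIZE, FRAME_SIZE, 3))
backbone.trainable = False

def extract_clip_features(frames):
    """frames: (NUM_FRAMES, H, W, 3) preprocessed frames -> (NUM_FRAMES, 2048) features."""
    return backbone.predict(frames, verbose=0)

# Stage 2: LSTM classifier over the per-frame feature sequence.
classifier = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, 2048)),
    layers.LSTM(256),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
classifier.compile(optimizer="adam",
                   loss="categorical_crossentropy",
                   metrics=["accuracy"])

# Usage sketch: per-clip feature sequences are fed to the LSTM for classification.
dummy_clip = np.random.rand(NUM_FRAMES, FRAME_SIZE, FRAME_SIZE, 3).astype("float32")
features = extract_clip_features(dummy_clip)            # shape (40, 2048)
probs = classifier.predict(features[np.newaxis, ...])   # shape (1, 101) class probabilities
```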

Keywords

Human activity recognition · LRCN · UCF-101 · Computer vision · Video analysis · Image processing


Copyright information

© Springer Nature Singapore Pte Ltd. 2021

Authors and Affiliations

  1. U.V. Patel College of Engineering, Mehsana, India
  2. L.D. College of Engineering, Ahmedabad, India
