This repository was made as part of ACM Month of Code and contains fight detection algorithms.
Fight detection is an action recognition task, one of the hardest problems in computer vision: the model must recognize an activity from a series of frames, not just a single frame, so training an action recognition model is itself a challenge.
In this project we used two approaches to solve the problem:
- Using a pretrained CNN to extract features from video frames and then passing the extracted feature vectors to an RNN to get the prediction.
- Using PoseNet.
1) We used a pretrained CNN to encode a predefined number of frames of a video into a feature map, so the CNN acts as the encoder network in the architecture. We used the ResNet-152 architecture pretrained on the ImageNet dataset as the encoder to generate the feature vectors.
2) We trained an RNN on the feature vectors to get the prediction. The natural choices here are deep unidirectional or bidirectional LSTM/GRU layers; since bidirectional LSTM/GRU layers are computationally expensive compared to unidirectional ones and did not give a significant accuracy boost, we used a unidirectional LSTM architecture with 2 hidden layers and two dense layers. Here the RNN acts as a decoder, mapping the feature maps of the different frames to the binary classes fight and non-fight.
Training this model on 300 fight and 300 non-fight videos, we achieved 95% accuracy on the test dataset.
- First we extracted frames from the videos using frame_extraction.py.
- Then we selected 40 frames from the total frames of each video and passed them through the pretrained ResNet-152 model to extract feature vectors. The whole procedure is in feature_extraction.py.
- Then we trained the RNN network, which uses unidirectional LSTM layers, to predict fight or non-fight from these feature maps. This can be done with rnn_training.py.
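The scripts themselves are in the repository; as one illustration of the 40-frame selection step, here is a hedged sketch (the actual sampling strategy in feature_extraction.py may differ) that picks 40 indices spread uniformly across a video:

```python
import numpy as np

def select_frame_indices(total_frames, num_frames=40):
    """Pick num_frames indices spread uniformly across the video.
    Short videos are padded by repeating the last frame index."""
    if total_frames < num_frames:
        idx = list(range(total_frames)) + [total_frames - 1] * (num_frames - total_frames)
        return np.array(idx)
    return np.linspace(0, total_frames - 1, num_frames).astype(int)

print(select_frame_indices(400)[:5])  # [ 0 10 20 30 40]
```

The selected indices can then be used to read just those frames from the extracted-frame directory before feeding them to the encoder.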
- First clone the repository on your local machine (make sure you have all the requirements given in requirement.txt).
- Then run test.py from the command line as `python test.py -m 'path_to_model'` (a model is given in the model folder of fight_detection_using_crnn):
https://drive.google.com/file/d/1EovOeSgtOsyhsiSE1K91q0ddIgYLmdbx/view?usp=sharing
Note: there are some issues with the model weights given in the fight_detection_using_crnn folder, so if they don't work, download the weight files from this link: https://drive.google.com/file/d/1S_bBYflp1bFBM1EtLxk4nhQ7adn1UjW_/view?usp=sharing
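The `-m` flag above is the only documented option; a minimal sketch of the argparse interface test.py presumably exposes (the model file name below is hypothetical):

```python
import argparse

# Hypothetical sketch of test.py's command-line interface:
# only the -m / model-path flag is documented in this README.
parser = argparse.ArgumentParser(description="Run fight detection on a video")
parser.add_argument("-m", "--model", required=True,
                    help="path to the trained model weights")
args = parser.parse_args(["-m", "model/fight_model.h5"])  # example invocation
print(args.model)  # model/fight_model.h5
```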
PoseNet can be used to estimate either a single pose or multiple poses. The model overlays keypoints over the input image.
Removing the background from this gives a much simpler output that can be fed to a CNN for prediction:
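One way to realize this "background removed" input is to rasterize only the PoseNet keypoints onto a blank canvas. This is a hedged sketch under assumptions (17 keypoints with normalized coordinates, a 64x64 canvas), not the project's exact preprocessing:

```python
import numpy as np

def keypoints_to_image(keypoints, size=64):
    """Rasterize normalized (x, y) keypoints onto a blank canvas,
    discarding the original background, to use as CNN input."""
    canvas = np.zeros((size, size), dtype=np.float32)
    for x, y in keypoints:                       # x, y assumed in [0, 1]
        col = min(int(x * (size - 1)), size - 1)
        row = min(int(y * (size - 1)), size - 1)
        canvas[row, col] = 1.0
    return canvas

# 17 PoseNet keypoints, random normalized coordinates for illustration
pts = np.random.rand(17, 2)
img = keypoints_to_image(pts)
print(img.shape, img.max())  # (64, 64) 1.0
```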
- Sign up on https://www.sms4india.com/.
- Get the API and secret keys.
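With the keys in hand, an alert SMS can be sent via an HTTP POST. The endpoint and parameter names below are assumptions based on sms4india's published sample code; verify them against the docs in your dashboard after signing up.

```python
# import requests  # needed to actually send the request

# Assumed endpoint from sms4india's sample API; confirm in the dashboard docs.
URL = "https://www.sms4india.com/api/v1/sendCampaign"

def build_sms_payload(api_key, secret, phone, message, sender_id="FGTALR"):
    """Assemble the request parameters for a fight-alert SMS
    (parameter names are assumptions from sms4india's samples)."""
    return {
        "apikey": api_key,
        "secret": secret,
        "usetype": "stage",   # "stage" for test credits, "prod" for live
        "phone": phone,
        "message": message,
        "senderid": sender_id,
    }

payload = build_sms_payload("API_KEY", "SECRET_KEY", "9999999999",
                            "Fight detected on camera 1")
# response = requests.post(URL, payload)   # uncomment to actually send
print(sorted(payload))
```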