
Fight-Detection

This repository was made as part of the ACM Month of Code at NIT Surat. It contains fight detection algorithms.

Fight detection is an action recognition task, one of the harder problems in computer vision: the model must recognize activity from a series of frames rather than a single frame, so training an action recognition model is a challenge in itself.

In this project we use two approaches to solve the problem:

  1. Using a pretrained CNN to extract features from the video frames and then passing the extracted feature vectors to an RNN to get the prediction (CRNN).
  2. Using PoseNet.

USING CRNN

CNNs are good at recognizing both basic and high-level features of an image, while recurrent networks work well with time-dependent or sequential data, so we leverage the strengths of both networks to predict fights in video.

The basic workflow is as follows:


1) We use a pretrained CNN model to encode a predefined number of frames from each video into a feature map, so the CNN acts as the encoder network in the architecture. We use the ResNet152 architecture, pretrained on the ImageNet dataset, as the encoder to generate the feature vectors.
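A minimal sketch of this encoding step, assuming PyTorch and torchvision (the repository's feature_extraction.py may differ in details such as input size and batching):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Load ResNet152 pretrained on ImageNet and drop the final
# classification layer so the network outputs 2048-d features.
resnet = models.resnet152(pretrained=True)
encoder = torch.nn.Sequential(*list(resnet.children())[:-1])
encoder.eval()

# Standard ImageNet preprocessing for each extracted frame.
preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

def encode_frames(frames):
    """frames: list of HxWx3 uint8 RGB arrays -> (num_frames, 2048) tensor."""
    batch = torch.stack([preprocess(f) for f in frames])
    with torch.no_grad():
        feats = encoder(batch)   # (N, 2048, 1, 1)
    return feats.flatten(1)      # (N, 2048)
```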

2) We train an RNN on the feature vectors to get the prediction. The natural choices here are deep unidirectional or bidirectional LSTM/GRU layers. Since bidirectional LSTM/GRU is computationally expensive compared to unidirectional LSTM/GRU and did not give a significant accuracy boost, we use a unidirectional LSTM architecture with 2 hidden layers and two dense layers. Here the RNN acts as a decoder, mapping each video's feature map to the binary classes fight and non-fight.
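A sketch of such a decoder in PyTorch; the hidden and dense layer sizes here are illustrative assumptions, not necessarily the repository's exact configuration:

```python
import torch
import torch.nn as nn

class FightLSTM(nn.Module):
    """Unidirectional 2-layer LSTM over per-frame features,
    followed by two dense layers for fight / non-fight."""
    def __init__(self, feat_dim=2048, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.fc1 = nn.Linear(hidden, 64)
        self.fc2 = nn.Linear(64, 2)   # fight / non-fight

    def forward(self, x):             # x: (batch, num_frames, feat_dim)
        _, (h, _) = self.lstm(x)      # h: (num_layers, batch, hidden)
        out = torch.relu(self.fc1(h[-1]))
        return self.fc2(out)          # logits over the two classes
```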

General Architecture

Training this model on 300 fight and 300 non-fight videos, we achieved 95% accuracy on the test dataset.


Steps taken to train the model

  • First we extract frames from the videos using frame_extraction.py.
  • Then we select 40 frames from the total frames of each video and pass them through the pretrained ResNet152 model to extract feature vectors; the whole procedure is in feature_extraction.py (a sketch of the sampling step appears after this list).
  • Then we train the RNN network, which uses unidirectional LSTM layers, to predict fight or non-fight from these feature maps. This can be found in rnn_training.py.
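As a rough illustration of the extraction and 40-frame sampling described above, a sketch using OpenCV (frame_extraction.py and feature_extraction.py may implement this differently):

```python
import cv2
import numpy as np

def sample_frames(video_path, num_frames=40):
    """Read a video and return num_frames evenly spaced RGB frames."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the whole clip.
    indices = set(np.linspace(0, total - 1, num_frames, dtype=int))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in indices:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames
```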

To test the fight detection model using CRNN, follow the steps below.

  • First, clone the repository on your local machine (make sure you have all the requirements given in requirement.txt).
  • Then run test.py from the command line as "python test.py -m 'path_to_model'" (the model is given in the model folder of fight_detection_using_crnn).

Project demo video

https://drive.google.com/file/d/1EovOeSgtOsyhsiSE1K91q0ddIgYLmdbx/view?usp=sharing



Note: there are some issues with the model weights given in the fight_detection_using_crnn folder, so if they don't work, visit the link https://drive.google.com/file/d/1S_bBYflp1bFBM1EtLxk4nhQ7adn1UjW_/view?usp=sharing to download the weight files.

USING POSENET

PoseNet can be used to estimate either a single pose or multiple poses. The model overlays keypoints on the input image.


Removing the background from this result gives a much simpler output that can be fed to a CNN for prediction:
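A minimal sketch of this background-removal idea: draw the estimated keypoints on a black canvas so the downstream CNN sees only the pose. The (x, y) keypoint list here is a hypothetical input format; PoseNet's actual output structure depends on the implementation used:

```python
import cv2
import numpy as np

def pose_only_image(keypoints, height, width):
    """Draw pose keypoints on a black canvas, discarding the background.

    keypoints: list of (x, y) pixel coordinates from a pose estimator.
    Returns an HxWx3 image containing only the pose.
    """
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for x, y in keypoints:
        cv2.circle(canvas, (int(x), int(y)), 4, (0, 255, 0), -1)
    return canvas
```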

Set-Up
