
PYSKL


PYSKL is a toolbox focusing on action recognition based on SKeLeton data with PYTorch. Various algorithms will be supported for skeleton-based action recognition. We build this project on top of the open-source project MMAction2.

This repo is the official implementation of PoseConv3D and STGCN++.


Skeleton-based Action Recognition Results on NTU-RGB+D-120

News

  • Support the skeleton-based action recognition demo with GCN algorithms (2022-05-03).
  • Release the skeleton annotations (generated by HRNet), config files, and pre-trained checkpoints for Kinetics-400. Note that Kinetics-400 is a large-scale dataset (even for skeletons), and you should have memcached and pymemcache installed for efficient training and testing on Kinetics-400 (2022-05-01).
  • Provide an example of processing a custom video dataset (we use diving48), generating 2D skeleton annotations, and using PoseC3D for skeleton-based action recognition. The tutorial for the skeleton extraction part is available in diving48_example (2022-04-15).

Supported Algorithms

Supported Skeleton Datasets

For data pre-processing, we estimate 2D skeletons with a two-stage pose estimator (Faster-RCNN + HRNet). For 3D skeletons, we follow the pre-processing procedure of CTR-GCN. Currently, we do not provide the pre-processing scripts. Instead, we directly provide the processed skeleton data as pickle files, which can be used directly for training and evaluation. You can use vis_skeleton to visualize the provided skeleton data.
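
As a rough illustration of what the provided pickle files contain, the sketch below loads one and inspects a single annotation. The file path and the field names (e.g. split, annotations, keypoint) are assumptions based on common skeleton annotation layouts, not guarantees from this README; check the pickle you download for the exact keys.

# Minimal sketch for inspecting a downloaded skeleton annotation file.
# The path and the dictionary layout below are assumptions.
import pickle

with open('data/nturgbd/ntu120_hrnet.pkl', 'rb') as f:  # hypothetical path
    anno = pickle.load(f)

print(anno.keys())                       # e.g. dict_keys(['split', 'annotations'])
sample = anno['annotations'][0]
print(sample['frame_dir'], sample['label'])
print(sample['keypoint'].shape)          # e.g. (num_person, num_frame, num_joint, 2)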

Installation

git clone https://github.com/kennymckormick/pyskl.git
cd pyskl
# Please first install PyTorch following the instructions on the official website: https://pytorch.org/get-started/locally/. Use a PyTorch version >= 1.5.0 and < 1.11.0.
pip install -r requirements.txt
pip install -e .
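
To confirm that the editable install succeeded and that your PyTorch version falls in the supported range, a quick check like the following can help. This is a convenience sketch, not an official installation step.

# Optional sanity check: import the installed packages and print the torch version.
# The version bounds mirror the note above (>= 1.5.0, < 1.11.0).
import torch
import pyskl  # succeeds only if `pip install -e .` worked

print(torch.__version__)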

Demonstration

# You should run the following scripts under the directory `$PYSKL`
# Running the demo with PoseC3D trained on NTURGB+D 120 (Joint Modality), which is the default option. The input file is demo/ntu_sample.avi, the output file is demo/demo.mp4
python demo/demo_skeleton.py demo/ntu_sample.avi demo/demo.mp4
# Running the demo with STGCN++ trained on NTURGB+D 120 (Joint Modality). The input file is demo/ntu_sample.avi, the output file is demo/demo.mp4
python demo/demo_skeleton.py demo/ntu_sample.avi demo/demo.mp4 --config configs/stgcn++/stgcn++_ntu120_xsub_hrnet/j.py --ckpt http://download.openmmlab.com/mmaction/pyskl/ckpt/stgcnpp/stgcnpp_ntu120_xsub_hrnet/j.pth

Note that to run the demo on an arbitrary input video, you need a tracker to group the per-frame pose estimation results into multiple skeleton sequences. Currently, we use a naive tracker based on inter-frame pose similarities; you can also write your own tracker. A rough sketch of the idea is shown below.
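
The following sketch illustrates a naive inter-frame matching strategy of the kind described above: each frame's detected poses are greedily linked to the closest active track by mean keypoint distance. It is an illustration of the idea, not the tracker used by demo_skeleton.py; the function name and the distance threshold are hypothetical.

# Sketch of a naive pose tracker based on inter-frame pose similarity.
import numpy as np

def track_poses(frames, dist_thr=60.0):
    """frames: list of [num_person, num_joint, 2] keypoint arrays, one per frame."""
    tracks = []  # each track is a list of (frame_idx, pose)
    for t, poses in enumerate(frames):
        for pose in poses:
            best, best_d = None, dist_thr
            for track in tracks:
                last_t, last_pose = track[-1]
                if last_t != t - 1:
                    continue  # only link to tracks that were active in the previous frame
                d = np.linalg.norm(pose - last_pose, axis=-1).mean()
                if d < best_d:
                    best, best_d = track, d
            if best is None:
                tracks.append([(t, pose)])   # start a new skeleton sequence
            else:
                best.append((t, pose))
    return tracks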

Training & Testing

You can use the following commands for training and testing. We support distributed training on a single server with multiple GPUs.

# Training
bash tools/dist_train.sh {config_name} {num_gpus} {other_options}
# Testing
bash tools/dist_test.sh {config_name} {checkpoint} {num_gpus} --out {output_file} --eval top_k_accuracy mean_class_accuracy

For specific examples, please refer to the README of each algorithm we support.

Citation

If you use PYSKL in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry, along with the BibTeX entry corresponding to the specific algorithm you used.

% Tech Report Coming Soon!
@misc{duan2022pyskl,
    title={PYSKL: a toolbox for skeleton-based video understanding},
    author={PYSKL Contributors},
    howpublished = {\url{https://github.com/kennymckormick/pyskl}},
    year={2022}
}

Contact

For any questions, feel free to contact: [email protected]