[Doc] Update tech report link (kennymckormick#30)
kennymckormick authored May 20, 2022
1 parent 1b3f668 commit 9fddb67
Showing 3 changed files with 17 additions and 19 deletions.
4 changes: 0 additions & 4 deletions .github/workflows/lint.yml
@@ -17,10 +17,6 @@ jobs:
        python-version: 3.7
    - name: Install pre-commit hook
      run: |
-       # markdownlint requires ruby >= 2.7
-       sudo apt-add-repository ppa:brightbox/ruby-ng -y
-       sudo apt-get update
-       sudo apt-get install -y ruby2.7
        pip install pre-commit
        pre-commit install
    - name: Linting
19 changes: 10 additions & 9 deletions README.md
@@ -14,15 +14,16 @@ This repo is the official implementation of [PoseConv3D](https://arxiv.org/abs/2104.13586)

## News

+- Release a [tech report](https://arxiv.org/abs/2205.09443) about this repository (**2022-05-20**).
- Support spatial augmentations and provide a benchmark on ST-GCN++ (**2022-05-12**).
- Support skeleton action recognition demo with GCN algorithms (**2022-05-03**).
-- Release the skeleton annotations (generated by HRNet), config files, and pre-trained checkpoints for Kinetics-400. Note that Kinetics-400 is a large-scale dataset (even for skeleton) and you should have `memcached` and `pymemcache` installed for efficient training and testing on Kinetics-400 (**2022-05-01**).
-- Provide an example for processing a custom video dataset (we use diving48), generating 2D skeleton annotations, and using PoseC3D for skeleton-based action recognition. The tutorial for skeleton extraction part is available in [diving48_example](/examples/extract_diving48_skeleton/diving48_example.ipynb) (**2022-04-15**).
+- Release the skeleton annotations (HRNet 2D Pose), config files, and pre-trained ckpts for Kinetics-400. K400 is a large-scale dataset (even for skeleton), so you should have `memcached` and `pymemcache` installed for efficient training & testing on K400 (**2022-05-01**).
+- Provide an example (diving48) for processing a custom video dataset, generating 2D skeleton annotations, and using PoseC3D for skeleton-based action recognition. The tutorial for the skeleton extraction part is available in [diving48_example](/examples/extract_diving48_skeleton/diving48_example.ipynb) (**2022-04-15**).

## Supported Algorithms

- [x] ST-GCN (AAAI 2018): https://arxiv.org/abs/1801.07455 [[MODELZOO](/configs/stgcn/README.md)]
-- [x] ST-GCN++ (PYSKL): [Tech Report Coming Soon](https://github.com/kennymckormick/pyskl/tree/main/configs/stgcn%2B%2B) [[MODELZOO](/configs/stgcn++/README.md)]
+- [x] ST-GCN++ (PYSKL, Tech Report): https://arxiv.org/abs/2205.09443 [[MODELZOO](/configs/stgcn++/README.md)]
- [x] PoseConv3D (CVPR 2022 Oral): https://arxiv.org/abs/2104.13586 [[MODELZOO](/configs/posec3d/README.md)]
- [x] AAGCN (TIP): https://arxiv.org/abs/1912.06971 [[MODELZOO](/configs/aagcn/README.md)]
- [x] MS-G3D (CVPR 2020 Oral): https://arxiv.org/abs/2003.14111 [[MODELZOO](/configs/msg3d/README.md)]
@@ -77,12 +78,12 @@ For specific examples, please go to the README for each specific algorithm we support.
If you use PYSKL in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry and the BibTeX entry corresponding to the specific algorithm you used.

```BibTeX
-% Tech Report Coming Soon!
-@misc{duan2022pyskl,
-      title={PYSKL: a toolbox for skeleton-based video understanding},
-      author={PYSKL Contributors},
-      howpublished = {\url{https://github.com/kennymckormick/pyskl}},
-      year={2022}
+@misc{duan2022PYSKL,
+      url = {https://arxiv.org/abs/2205.09443},
+      author = {Duan, Haodong and Wang, Jiaqi and Chen, Kai and Lin, Dahua},
+      title = {PYSKL: Towards Good Practices for Skeleton Action Recognition},
+      publisher = {arXiv},
+      year = {2022}
}
```

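The K400 news entries above note that `memcached` and `pymemcache` should be installed for efficient training and testing. As a rough illustration of why, here is a minimal sketch (not pyskl's actual data-loading code): workers fetch annotations from a local memcached server instead of re-reading them from disk on every access. The `get_annotation` helper, key scheme, and `load_from_disk` callback are hypothetical.

```python
# Minimal sketch (not pyskl's actual loader): fetch a skeleton annotation
# from a local memcached server, falling back to disk on a cache miss.
import pickle

from pymemcache.client.base import Client

client = Client(('localhost', 11211))  # assumes memcached is running locally


def get_annotation(key, load_from_disk):
    """Return the annotation for `key`, filling the cache on a miss."""
    cached = client.get(key)  # returns None if the key is absent
    if cached is not None:
        return pickle.loads(cached)
    ann = load_from_disk(key)  # hypothetical per-sample disk loader
    client.set(key, pickle.dumps(ann))  # cache for other workers / epochs
    return ann
```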
13 changes: 7 additions & 6 deletions configs/stgcn++/README.md
@@ -2,16 +2,17 @@

## Introduction

-STGCN++ is a variant of STGCN we developed in PYSKL with some modifications in the architecture of the spatial module and the temporal module. We provide STGCN++ trained on NTURGB+D with 2D skeletons (HRNet) and 3D skeletons with **PYSKL** training setting. We provide checkpoints for four modalities: Joint, Bone, Joint Motion, and Bone Motion. We will describe the architecture of STGCN++ in the upcoming tech report.
+STGCN++ is a variant of STGCN, developed in PYSKL, with modifications to the architecture of the spatial and temporal modules. We provide STGCN++ trained on NTURGB+D with 2D skeletons (HRNet) and 3D skeletons under the **PYSKL** training setting. We provide checkpoints for four modalities: Joint, Bone, Joint Motion, and Bone Motion. The architecture of STGCN++ is described in the PYSKL [tech report](https://arxiv.org/abs/2205.09443).

## Citation

```BibTeX
-@misc{duan2022pyskl,
-      title={PYSKL: a toolbox for skeleton-based video understanding},
-      author={PYSKL Contributors},
-      howpublished = {\url{https://github.com/kennymckormick/pyskl}},
-      year={2022}
+@misc{duan2022PYSKL,
+      url = {https://arxiv.org/abs/2205.09443},
+      author = {Duan, Haodong and Wang, Jiaqi and Chen, Kai and Lin, Dahua},
+      title = {PYSKL: Towards Good Practices for Skeleton Action Recognition},
+      publisher = {arXiv},
+      year = {2022}
}
```

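The four modalities named in the stgcn++ README above (Joint, Bone, Joint Motion, Bone Motion) are standard derivations from raw joint coordinates in the skeleton action recognition literature. A generic sketch of how they are commonly computed follows; the toy edge list and array shapes are assumptions for illustration, not pyskl's actual pipeline.

```python
# Generic sketch of the four skeleton modalities, derived from joint
# coordinates of shape (T, V, C): T frames, V joints, C channels.
import numpy as np

# Toy (parent, child) pairs; real skeletons (e.g. NTURGB+D) define their own edges.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]


def to_modalities(joint: np.ndarray) -> dict:
    bone = np.zeros_like(joint)
    for parent, child in EDGES:
        bone[:, child] = joint[:, child] - joint[:, parent]  # bone vectors
    joint_motion = np.zeros_like(joint)
    joint_motion[:-1] = joint[1:] - joint[:-1]  # frame-to-frame differences
    bone_motion = np.zeros_like(bone)
    bone_motion[:-1] = bone[1:] - bone[:-1]
    return {'joint': joint, 'bone': bone,
            'joint_motion': joint_motion, 'bone_motion': bone_motion}


mods = to_modalities(np.random.randn(32, 5, 3))  # 32 frames, 5 joints, x/y/z
```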