diff --git a/README.md b/README.md
index 453b2e59f0..3e1ba85712 100644
--- a/README.md
+++ b/README.md
@@ -44,6 +44,7 @@ The master branch works with **PyTorch 1.3+**.
 
 ## Updates
 
+- (2021-11-24) We support **2s-AGCN** on NTU60 XSub, achieving 86.82% Top-1 accuracy on the joint stream and 87.91% Top-1 accuracy on the bone stream.
 - (2021-10-29) We provide a demo for skeleton-based and rgb-based spatio-temporal detection and action recognition (demo/demo_video_structuralize.py).
 - (2021-10-26) We train and test **ST-GCN** on NTU60 with 3D keypoint annotations, achieve 84.61% Top-1 accuracy (higher than 81.5% in the [paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/17135)).
 - (2021-10-25) We provide a script(tools/data/skeleton/gen_ntu_rgbd_raw.py) to convert the NTU60 and NTU120 3D raw skeleton data to our format.
diff --git a/README_zh-CN.md b/README_zh-CN.md
index e9ee522b9d..05550aae20 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -43,6 +43,7 @@ MMAction2 is an open-source video understanding toolbox based on PyTorch, part of the [OpenMMLa
 
 ## Updates
 
+- (2021-11-24) Support **2s-AGCN** on NTU60 XSub, achieving 86.82% and 87.91% Top-1 accuracy on the joint and bone streams, respectively.
 - (2021-10-29) Support a demo for skeleton-based and rgb-based spatio-temporal action detection and action recognition (demo/demo_video_structuralize.py).
 - (2021-10-26) Train and test **STGCN** on NTU60 with 3D keypoint annotations, achieving 84.61% Top-1 accuracy (higher than the 81.5% reported in the [paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/17135)).
 - (2021-10-25) Provide a script (tools/data/skeleton/gen_ntu_rgbd_raw.py) to convert the NTU60 and NTU120 3D raw skeleton data to our format.