Learning Facial Action Unit Recognition through a general action recognition algorithm

Table of Contents
  1. General Information
  2. Installation
  3. Data and preprocessing
  4. Configuration
  5. Training and Testing
  6. Main Results

General Information

The repository contains the code for the second part of the Facial Action Unit Recognition project. In particular, it adapts PoseConv3D, a general action recognition algorithm taken from PYSKL, to the facial action unit recognition task.

PYSKL is a toolbox focusing on action recognition based on SKeLeton data with PYTorch. It supports various algorithms for skeleton-based action recognition and is built on the open-source project MMAction2.

Installation

git clone https://github.com/kennymckormick/pyskl.git
cd pyskl
# This command works with conda 22.9.0; if you are running an earlier conda version and encounter errors, try updating conda first
conda env create -f pyskl.yaml
conda activate pyskl
pip install -e .
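
To confirm the editable install worked, a quick sanity check (this assumes nothing beyond the steps above):

# Run from a directory outside the repo root
import pyskl

# With an editable install (pip install -e .), this prints a path inside
# the cloned source tree rather than a site-packages copy
print(pyskl.__file__)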

Data and preprocessing

The dataset we used is Aff-Wild2.

To obtain the facial skeleton annotations, you can:

  1. Use our pre-processed skeleton annotations: we directly provide the processed skeleton data as pickle files (which can be used directly for training and testing); check the Data Doc for the download links and a description of the annotation format (see the sketch after this list).
  2. Alternatively, use our provided script to generate the processed pickle files. The generated file is identical to the provided AffWild_train_full.pkl. For detailed instructions, follow the Data Doc.
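
As a quick orientation, here is a minimal sketch of how to inspect one of the pickle files. The key names below ('split', 'annotations', 'keypoint', ...) follow the usual PYSKL annotation conventions and are assumptions; the Data Doc is the authoritative reference.

import pickle

# Path to one of the provided annotation files (see the Data Doc)
with open('AffWild_train_full.pkl', 'rb') as f:
    data = pickle.load(f)

# PYSKL-style annotation files are typically a dict with two keys:
#   'split'       -> mapping from split name to a list of clip identifiers
#   'annotations' -> list of per-clip dicts
print(data.keys())

sample = data['annotations'][0]
# Typical per-clip fields: 'frame_dir', 'label', 'total_frames', 'img_shape',
# and 'keypoint' with shape (num_person, num_frame, num_keypoint, 2)
print(sample['frame_dir'], sample['keypoint'].shape)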

You can modify vis_skeleton to visualize the skeleton data.

Configuration

Before running, please modify the configuration file with your own paths (and preferences).
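
For orientation, a minimal sketch of the kind of fields you will likely need to change. The variable names below follow common MMAction2/PYSKL config conventions and are assumptions; check joint.py for the actual names:

# Excerpt-style sketch of configs/posec3d/slowonly_r50_affwild_xsub/joint.py
ann_file = '/path/to/AffWild_train_full.pkl'              # your local annotation file
work_dir = './work_dirs/slowonly_r50_affwild_xsub/joint'  # where logs and checkpoints are written

data = dict(
    videos_per_gpu=16,   # batch size per GPU; lower it if you hit out-of-memory errors
    workers_per_gpu=2,   # dataloader workers per GPU
)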

Training and Testing

You can use the following commands for training and testing. We support distributed training on a single server with multiple GPUs.

# Training
bash tools/dist_train.sh configs/posec3d/slowonly_r50_affwild_xsub/joint.py ${NUM_GPUS} --validate --test-last --test-best
# Testing
bash tools/dist_test.sh configs/posec3d/slowonly_r50_affwild_xsub/joint.py ${CHECKPOINT_FILE} ${NUM_GPUS} --eval top_k_accuracy mean_class_accuracy --out result.pkl
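
Once testing finishes, result.pkl can be inspected directly. A minimal sketch, assuming the usual PYSKL/MMAction2 output format of one score array per test clip:

import pickle

import numpy as np

# result.pkl is written by dist_test.sh via the --out flag
with open('result.pkl', 'rb') as f:
    results = pickle.load(f)

# PYSKL/MMAction2 typically dump one score array per test clip
scores = np.stack(results)   # (num_clips, num_classes)
print(scores.shape)

# Illustrative multi-label decision for action units: threshold each AU
# score independently (0.5 is an assumed cutoff, not the project's choice)
preds = (scores > 0.5).astype(int)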

We provide a release of our trained model. If you want to use it, download it and run the commands above, passing the path to the downloaded checkpoint after --resume.

Our trained models

AffWild2

arch_type        | GoogleDrive link | Average F1-score
Ours (ResNet-50) | link             | 48.28

Main Results

As a final result, we obtained an average F1-score of 48.23 on the test set.

AffWild2

Method           | AU1   | AU2   | AU4   | AU6   | AU7   | AU10  | AU12  | AU15  | AU23  | AU24 | AU25  | AU26  | Avg.
Ours (ResNet-50) | 56.35 | 35.54 | 49.45 | 58.97 | 73.61 | 74.03 | 69.59 | 32.47 | 14.76 | 8.77 | 84.09 | 23.97 | 48.47

(Back to top)
