This repository reproduces the paper *Skeleton-Aware Networks for Deep Motion Retargeting* by Kfir Aberman and Peizhuo Li. Most of the pre-processing and post-processing code that deals with motion capture files is borrowed from the authors' GitHub repository for Skeleton-Aware Networks for Deep Motion Retargeting.
PyTorch >= 1.3.1
from left to right: input, target, output
First, download the test motion data from Google Drive or Baidu Disk; the pass code is (ye1q).
Place the Mixamo directory within ./datasets/
Then run
python inference.py
The output bvh files will be saved in the folder
./pretrained/result/
You need to download data from Mixamo for training; here is a convenient introduction to using a script to download FBX data from Mixamo. After downloading, set the "data_path" variable in the script:
./datasets/fbx2bvh.py
Then run:
python preprocessing.py
to convert the FBX files to BVH files and generate the train and validation file lists.
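The train/validation list generation can be sketched roughly as below. This is a simplified illustration, not the actual logic in preprocessing.py (which may split per character or per clip); the function name and split ratio are assumptions.

```python
import os
import random

def make_file_lists(bvh_dir, train_ratio=0.9, seed=0):
    """Split the BVH files in `bvh_dir` into train and validation lists.

    Hypothetical sketch: the real preprocessing.py may split differently.
    """
    files = sorted(f for f in os.listdir(bvh_dir) if f.endswith(".bvh"))
    random.Random(seed).shuffle(files)
    n_train = int(len(files) * train_ratio)
    return files[:n_train], files[n_train:]
```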
At last, run
python train.py
It will train a network that retargets motion from "Aj" to "BigVegas", two different characters from the Mixamo dataset. Please note: the architectures of the neural networks are based on the topologies of the skeletons. So if you'd like to train a model with your own skeletons, you may need to change some topology-related variables defined in
./model/skeleton.py
as well as the joint names defined in
./datasets/bvh_parser.py
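A skeleton topology is typically described by a joint-name list plus a parent index per joint. The sketch below shows what such a definition might look like when adapting the code to a new character; the joint names and helper functions here are illustrative, not the actual variables expected by bvh_parser.py or skeleton.py.

```python
# Illustrative topology: joint names plus one parent index per joint
# (-1 marks the root). These are NOT the repo's actual variable names.
JOINT_NAMES = ["Hips", "Spine", "Neck", "Head",
               "LeftUpLeg", "LeftLeg", "LeftFoot",
               "RightUpLeg", "RightLeg", "RightFoot"]
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8]

def edges(parents):
    """Return the (child, parent) bone edges implied by the parent list."""
    return [(j, p) for j, p in enumerate(parents) if p >= 0]

def check_topology(parents):
    # Exactly one root, and every parent index precedes its child,
    # so joints can be processed in list order.
    assert parents.count(-1) == 1
    assert all(p < j for j, p in enumerate(parents))
```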
An automatic way to generate the skeleton structure after skeleton pooling might be released in the future.
The learning curves are shown below:
Here are some changes compared with the code released by the authors:
- I re-implemented the architecture of the neural network, using nn.Conv2d instead of nn.Conv1d for the skeleton convolution, because 2D convolution seems more in line with the paper's description of skeleton convolution. However, 2D convolution does not appear to be as efficient as the original 1D convolution.
- The GAN part has been removed (bad memories with it; training GANs needs lots of tricks, and you should learn to walk before you run). So, technically, my training process runs in a "paired" (supervised) mode.
- As for loss design, I have not added the end-effector loss to the training pipeline; multi-task loss training needs tricks, too.
- IK optimization has not been implemented.
- More data is needed to evaluate the generalization ability.
You need to install Blender to visualize the resulting BVH files. Just import a BVH file into Blender and press the Space key to play the animation.
If you use this code for your research, please cite their fancy paper:
@article{aberman2020skeleton,
author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
title = {Skeleton-Aware Networks for Deep Motion Retargeting},
journal = {ACM Transactions on Graphics (TOG)},
volume = {39},
number = {4},
pages = {62},
year = {2020},
publisher = {ACM}
}