[Project Page] [Paper] [Supp. Mat.]
- License
- Description
- Dependencies
- News
- Installation
- Downloading the model
- Loading SMPL-X, SMPL+H and SMPL
- MANO and FLAME correspondences
- Example
- Modifying the global pose of the model
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SMPL-X/SMPLify-X model, data and software, (the "Model & Software"), including 3D meshes, blend weights, blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
The original images used for figures 1 and 2 of the paper can be found at this link. The images in the paper are used under license from gettyimages.com. We have acquired the right to use them in the publication, but redistribution is not allowed. Please follow the instructions on the given link to acquire the right of usage. Our results are obtained on the 483 × 724 pixel resolution of the original images.
This repository contains the fitting code used for the experiments in Expressive Body Capture: 3D Hands, Face, and Body from a Single Image.
Run the following command to execute the code:
```
python smplifyx/main.py --config cfg_files/fit_smplx.yaml \
    --data_folder DATA_FOLDER \
    --output_folder OUTPUT_FOLDER \
    --visualize="True/False" \
    --model_folder MODEL_FOLDER \
    --vposer_ckpt VPOSER_FOLDER \
    --part_segm_fn smplx_parts_segm.pkl
```
where DATA_FOLDER should contain two subfolders: images, where the images are located, and keypoints, where the OpenPose output should be stored.
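For illustration, a minimal data folder could look like this (the file names are hypothetical; OpenPose writes one `<name>_keypoints.json` file per image when run with `--write_json`):
```
DATA_FOLDER
├── images
│   ├── img_0001.jpg
│   └── img_0002.jpg
└── keypoints
    ├── img_0001_keypoints.json
    └── img_0002_keypoints.json
```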
To fit SMPL or SMPL+H, replace the YAML configuration file with fit_smpl.yaml or fit_smplh.yaml respectively, i.e.:
- for SMPL:
```
python smplifyx/main.py --config cfg_files/fit_smpl.yaml \
    --data_folder DATA_FOLDER \
    --output_folder OUTPUT_FOLDER \
    --visualize="True/False" \
    --model_folder MODEL_FOLDER \
    --vposer_ckpt VPOSER_FOLDER
```
- for SMPL+H:
```
python smplifyx/main.py --config cfg_files/fit_smplh.yaml \
    --data_folder DATA_FOLDER \
    --output_folder OUTPUT_FOLDER \
    --visualize="True/False" \
    --model_folder MODEL_FOLDER \
    --vposer_ckpt VPOSER_FOLDER
```
To visualize the results produced by the method you can run the following script:
```
python smplifyx/render_results.py --mesh_fns OUTPUT_MESH_FOLDER
```
where OUTPUT_MESH_FOLDER is the folder that contains the resulting meshes.
Follow the installation instructions for each of the following before using the fitting code.
- PyTorch Mesh self-intersection for the interpenetration penalty
  - Download the per-triangle part segmentation: smplx_parts_segm.pkl
- Trimesh for loading triangular meshes
- Pyrender for visualization
The code has been tested with Python 3.6, CUDA 10.0, cuDNN 7.3 and PyTorch 1.0 on Ubuntu 18.04.
If you find this Model & Software useful in your research we would kindly ask you to cite:
```
@inproceedings{SMPL-X:2019,
  title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
  author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year = {2019}
}
```
The LBFGS optimizer with Strong Wolfe line search is taken from this PyTorch pull request. Special thanks to Du Phan for implementing this. We will update the repository once the pull request is merged.
The code of this repository was implemented by Vassilis Choutas and Georgios Pavlakos.
For questions, please contact [email protected].
For commercial licensing (and all related questions for business applications), please contact [email protected].
SMPL-X (SMPL eXpressive) is a unified body model with shape parameters trained jointly for the face, hands and body. SMPL-X uses standard vertex-based linear blend skinning with learned corrective blend shapes, and has N = 10,475 vertices and K = 54 joints, including joints for the neck, jaw, eyeballs and fingers. SMPL-X is defined by a function M(θ, β, ψ), where θ are the pose parameters, β the shape parameters and ψ the facial expression parameters.
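As a concrete illustration of M(θ, β, ψ), here is a minimal sketch of a forward pass with the smplx package; the 'models' path refers to the directory layout shown in the loading section below, and the zero-valued parameters are placeholders (see examples/demo.py for the full version):
```python
import torch
import smplx

# Load the neutral SMPL-X model from the models directory.
model = smplx.create('models', model_type='smplx', gender='neutral')

betas = torch.zeros(1, 10)       # shape parameters, beta
expression = torch.zeros(1, 10)  # facial expression parameters, psi
body_pose = torch.zeros(1, 63)   # pose parameters, theta: 21 body joints x 3 (axis-angle)

output = model(betas=betas, expression=expression, body_pose=body_pose,
               return_verts=True)
vertices = output.vertices  # (1, 10475, 3)
joints = output.joints      # 3D joint positions, pelvis first
```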
- 3 November 2020: We release the code to transfer between the models in the SMPL family. For more details on the code, go to this readme file. A detailed explanation on how the mappings were extracted can be found here.
- 23 September 2020: A UV map is now available for SMPL-X, please check the Downloads section of the website.
- 20 August 2020: The full shape and expression space of SMPL-X are now available.
To install the model, use one of the following options:
- To install from PyPI simply run:
```
pip install smplx[all]
```
- Alternatively, clone this repository and install it using the setup.py script:
```
git clone https://github.com/vchoutas/smplx
cd smplx
python setup.py install
```
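As a quick sanity check (not part of the official instructions), the package should then import cleanly:
```
python -c "import smplx"
```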
To download the SMPL-X model go to this project website and register to get access to the downloads section.
To download the SMPL+H model go to this project website and register to get access to the downloads section.
To download the SMPL model go to this (male and female models) and this (gender neutral model) project website and register to get access to the downloads section.
The loader gives the option to use any of the SMPL-X, SMPL+H, SMPL, and MANO models. Depending on the model you want to use, please follow the respective download instructions. To switch between MANO, SMPL, SMPL+H and SMPL-X just change the model_path or model_type parameters. For more details please check the docs of the model classes. Before using SMPL and SMPL+H you should follow the instructions in tools/README.md to remove the Chumpy objects from both model pkls, as well as merge the MANO parameters with SMPL+H.
You can either use the create function from body_models or directly call the constructor for the SMPL, SMPL+H and SMPL-X model. The path to the model can either be the path to the file with the parameters or a directory with the following structure:
```
models
├── smpl
│   ├── SMPL_FEMALE.pkl
│   ├── SMPL_MALE.pkl
│   └── SMPL_NEUTRAL.pkl
├── smplh
│   ├── SMPLH_FEMALE.pkl
│   └── SMPLH_MALE.pkl
├── mano
│   ├── MANO_RIGHT.pkl
│   └── MANO_LEFT.pkl
└── smplx
    ├── SMPLX_FEMALE.npz
    ├── SMPLX_FEMALE.pkl
    ├── SMPLX_MALE.npz
    ├── SMPLX_MALE.pkl
    ├── SMPLX_NEUTRAL.npz
    └── SMPLX_NEUTRAL.pkl
```
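For example, both of the following load the same male SMPL+H model (a sketch; adjust the paths to your setup):
```python
import smplx

# Option 1: the create factory function, selecting the model via model_type.
model = smplx.create('models', model_type='smplh', gender='male', ext='pkl')

# Option 2: the constructor, pointed directly at the parameter file.
model = smplx.SMPLH('models/smplh/SMPLH_MALE.pkl')
```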
The vertex correspondences between SMPL-X and MANO, FLAME can be downloaded from the project website. If you have extracted the correspondence data in the folder correspondences, then use the following scripts to visualize them:
- To view MANO correspondences run the following command:
```
python examples/vis_mano_vertices.py --model-folder $SMPLX_FOLDER --corr-fname correspondences/MANO_SMPLX_vertex_ids.pkl
```
- To view FLAME correspondences run the following command:
```
python examples/vis_flame_vertices.py --model-folder $SMPLX_FOLDER --corr-fname correspondences/SMPL-X__FLAME_vertex_ids.npy
```
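Beyond visualization, the correspondence files can be used to index SMPL-X vertices directly. A minimal sketch, assuming MANO_SMPLX_vertex_ids.pkl stores a dict with 'left_hand' and 'right_hand' index arrays (as used in examples/vis_mano_vertices.py):
```python
import pickle
import numpy as np

with open('correspondences/MANO_SMPLX_vertex_ids.pkl', 'rb') as f:
    idxs = pickle.load(f)

# smplx_vertices: (10475, 3) array, e.g. output.vertices[0] from a forward pass.
smplx_vertices = np.zeros((10475, 3))
right_hand_vertices = smplx_vertices[idxs['right_hand']]
left_hand_vertices = smplx_vertices[idxs['left_hand']]
```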
After installing the smplx package and downloading the model parameters you should be able to run the demo.py script to visualize the results. For this step you have to install the pyrender and trimesh packages.
```
python examples/demo.py --model-folder $SMPLX_FOLDER --plot-joints=True --gender="neutral"
```
If you want to modify the global pose of the model, i.e. the root rotation and translation, for example to transform it into a new coordinate system, you need to take into account that the model uses the pelvis as the center of rotation. A more detailed description can be found in the following link. If something is not clear, please let me know so that I can update the description.
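As a hedged sketch of the bookkeeping involved, assuming the usual convention v_world = R_root (v − pelvis) + pelvis + transl: applying an extra world rotation R about the origin means composing it into global_orient and correcting transl for the pelvis offset (the pelvis position depends on β). The parameter values below are placeholders:
```python
import numpy as np
import torch
import smplx
from scipy.spatial.transform import Rotation

model = smplx.create('models', model_type='smplx')
betas = torch.zeros(1, 10)

# Pelvis position for this shape: joint 0 of a forward pass
# with zero pose and zero translation.
pelvis = model(betas=betas).joints.detach().numpy()[0, 0]

# Current global pose (placeholder values): axis-angle root rotation + translation.
global_orient = np.array([np.pi, 0.0, 0.0])
transl = np.array([0.0, 0.2, 0.0])

# Extra rotation to apply in world coordinates, about the origin.
R = Rotation.from_euler('z', 90, degrees=True)

# R (R_root (v - pelvis) + pelvis + transl) must equal
# R_new (v - pelvis) + pelvis + transl_new, hence:
new_global_orient = (R * Rotation.from_rotvec(global_orient)).as_rotvec()
new_transl = R.apply(pelvis + transl) - pelvis
```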
Depending on which model is loaded for your project, i.e. SMPL-X or SMPL+H or SMPL, please cite the most relevant work below, listed in the same order:
```
@inproceedings{SMPL-X:2019,
  title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
  author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year = {2019}
}

@article{MANO:SIGGRAPHASIA:2017,
  title = {Embodied Hands: Modeling and Capturing Hands and Bodies Together},
  author = {Romero, Javier and Tzionas, Dimitrios and Black, Michael J.},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
  volume = {36},
  number = {6},
  pages = {245:1--245:17},
  month = nov,
  year = {2017}
}

@article{SMPL:2015,
  author = {Loper, Matthew and Mahmood, Naureen and Romero, Javier and Pons-Moll, Gerard and Black, Michael J.},
  title = {{SMPL}: A Skinned Multi-Person Linear Model},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
  volume = {34},
  number = {6},
  pages = {248:1--248:16},
  month = oct,
  publisher = {ACM},
  year = {2015}
}
```
This repository was originally developed for SMPL-X / SMPLify-X (CVPR 2019), you might be interested in having a look: https://smpl-x.is.tue.mpg.de.
Special thanks to Soubhik Sanyal for sharing the Tensorflow code used for the facial landmarks.
The code of this repository was implemented by Vassilis Choutas.
For questions, please contact [email protected].
For commercial licensing (and all related questions for business applications), please contact [email protected].