- Python 3.5+
- Cython
- PyTorch 1.1+. For users with PyTorch 1.5 or later, please merge pull request #592 first by running:
  git pull origin pull/592/head
- torchvision 0.3.0+
- Linux (Windows users, check here)
Install conda from here.
# 1. Create a conda virtual environment.
conda create -n alphapose python=3.6 -y
conda activate alphapose
# 2. Install PyTorch
conda install pytorch==1.1.0 torchvision==0.3.0
# 3. Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
# git pull origin pull/592/head if you use PyTorch>=1.5
cd AlphaPose
# 4. install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
python -m pip install cython
sudo apt-get install libyaml-dev
################Only For Ubuntu 18.04#################
locale-gen C.UTF-8
# if locale-gen not found
sudo apt-get install locales
export LANG=C.UTF-8
######################################################
python setup.py build develop
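After the build finishes, an optional sanity check can confirm the environment is usable. This is only a sketch, not part of the official steps; it assumes the alphapose conda env is active and prints a hint instead of failing:

```shell
# Optional sanity check: print the installed PyTorch / torchvision versions
# (assumes the alphapose env is active; falls back to a hint on failure).
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)" \
    2>/dev/null || echo "PyTorch/torchvision not importable - re-check step 2"
```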
# 1. Install PyTorch
pip3 install torch==1.1.0 torchvision==0.3.0
# 2. Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
# git pull origin pull/592/head if you use PyTorch>=1.5
cd AlphaPose
# 3. install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop --user
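Optionally, a quick check that the editable install registered correctly (not part of the official steps; it only prints a hint if the import fails):

```shell
# Optional: confirm the package is importable after 'build develop --user'
# (prints its location, or a hint if the install did not register).
python -c "import alphapose; print(alphapose.__file__)" 2>/dev/null \
    || echo "alphapose not importable - re-run the install step"
```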
The installation process is the same as above. Note, however, that Windows users may face problems when installing the CUDA extension, so we disable the CUDA extension in setup.py by default. The effect is that models ending with "-dcn" are not supported. If you force building the CUDA extension by modifying this line to True, you should install Visual Studio due to the problem mentioned here. We recommend that Windows users run models like FastPose and FastPose-duc, as they also provide good accuracy and speed.
For Windows users: if you meet an error with PyYAML, you can download and install it manually from https://pyyaml.org/wiki/PyYAML.
If your OS platform is Windows, make sure that a Windows C++ build tool such as Visual Studio 15+ or Visual C++ 2015+ is installed for training.
- Download the object detection model manually: yolov3-spp.weights (Google Drive | Baidu pan). Place it into `detector/yolo/data`.
- Download our pose models. Place them into `pretrained_models`. All models and details are available in our Model Zoo.
- For pose tracking, please refer to our tracking documents for model download.
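The two target folders above can be created ahead of time; this is only a convenience sketch (the weight files themselves must be downloaded from the links above):

```shell
# Create the folders the downloaded weights are expected in.
mkdir -p detector/yolo/data pretrained_models
# After downloading (links above), move the files into place, e.g.:
#   mv yolov3-spp.weights detector/yolo/data/
ls -d detector/yolo/data pretrained_models
```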
If you want to train the model yourself, please download data from MSCOCO (train2017 and val2017). Download and extract them under ./data, and make them look like this:
|-- json
|-- exp
|-- alphapose
|-- configs
|-- test
|-- data
`-- |-- coco
    `-- |-- annotations
        |   |-- person_keypoints_train2017.json
        |   `-- person_keypoints_val2017.json
        |-- train2017
        |   |-- 000000000009.jpg
        |   |-- 000000000025.jpg
        |   |-- 000000000030.jpg
        |   |-- ...
        `-- val2017
            |-- 000000000139.jpg
            |-- 000000000285.jpg
            |-- 000000000632.jpg
            |-- ...
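The layout above can be pre-created before extracting the archives. A small sketch (directory names taken from the tree above; the annotations and images must come from the MSCOCO downloads):

```shell
# Create the expected MSCOCO layout under ./data; extract the downloaded
# archives so the annotations and images land in these folders.
mkdir -p data/coco/annotations data/coco/train2017 data/coco/val2017
find data -type d | sort
```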
Please download images from MPII. We also provide the annotations in json format [annot_mpii.zip].
Download and extract them under ./data, and make them look like this:
|-- data
`-- |-- mpii
    `-- |-- annot_mpii.json
        `-- images
            |-- 027457270.jpg
            |-- 036645665.jpg
            |-- 045572740.jpg
            |-- ...
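As with MSCOCO, the MPII skeleton can be created in advance; a sketch only (annot_mpii.json comes from the provided annot_mpii.zip, the images from the MPII download):

```shell
# Create the expected MPII layout under ./data; extract annot_mpii.zip into
# data/mpii/ and the MPII image archive into data/mpii/images/.
mkdir -p data/mpii/images
find data/mpii -type d | sort
```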