This repository is the official implementation of DisPose.
[Demo videos: case1.mp4, case2.mp4, case3.mp4, case4.mp4, case5.mp4]
We present DisPose, which mines more generalizable and effective control signals without requiring additional dense inputs by disentangling the sparse skeleton pose in human image animation into motion field guidance and keypoint correspondence.
The code requires `python>=3.10`, as well as `torch>=2.0.1` and `torchvision>=0.15.2`. Please follow the instructions here to install both PyTorch and TorchVision dependencies. The demo has been tested with CUDA 12.4.
```bash
conda create -n dispose python==3.10
conda activate dispose
pip install -r requirements.txt
```
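To confirm the environment meets the version requirements above, a quick check:

```python
# Quick environment check against the stated requirements.
import sys
import torch
import torchvision

assert sys.version_info >= (3, 10), sys.version
print("torch:", torch.__version__)               # expect >= 2.0.1
print("torchvision:", torchvision.__version__)   # expect >= 0.15.2
print("CUDA available:", torch.cuda.is_available(), "| built for CUDA", torch.version.cuda)
```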
- Download the weights of DisPose and put `DisPose.pth` into `./pretrained_weights/`.
- Download the weights of the other components (MimicMotion, DWPose, Stable Diffusion v1-5, and Stable Video Diffusion img2vid-xt-1-1; see the tree below) and put them into `./pretrained_weights/`.
- Download the weights of CMP and put them into `./mimicmotion/modules/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`. A hedged download sketch for the Hugging Face-hosted components follows this list.
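For the components hosted on the Hugging Face Hub, something like the sketch below can fetch them. The repository IDs here are assumptions based on the upstream projects, not pinned by this README, so prefer the official download links:

```python
# Sketch only: the repo IDs below are assumptions, not confirmed by this README.
from huggingface_hub import hf_hub_download, snapshot_download

# MimicMotion checkpoint (assumed repo id).
hf_hub_download("tencent/MimicMotion", "MimicMotion_1-1.pth",
                local_dir="./pretrained_weights")

# DWPose ONNX models (assumed repo id).
for f in ["dw-ll_ucoco_384.onnx", "yolox_l.onnx"]:
    hf_hub_download("yzd-v/DWPose", f, local_dir="./pretrained_weights/dwpose")

# Diffusion backbones (SVD-xt-1-1 is gated; accept the license on the Hub first).
snapshot_download("stabilityai/stable-video-diffusion-img2vid-xt-1-1",
                  local_dir="./pretrained_weights/stable-video-diffusion-img2vid-xt-1-1")
snapshot_download("stable-diffusion-v1-5/stable-diffusion-v1-5",
                  local_dir="./pretrained_weights/stable-diffusion-v1-5")
```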
Finally, the weights should be organized under `./pretrained_weights/` as follows:
```text
./pretrained_weights/
|-- MimicMotion_1-1.pth
|-- DisPose.pth
|-- dwpose
|   |-- dw-ll_ucoco_384.onnx
|   `-- yolox_l.onnx
|-- stable-diffusion-v1-5
`-- stable-video-diffusion-img2vid-xt-1-1
```
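A quick sanity check that the layout matches the tree above (the paths are taken directly from it):

```python
# Minimal sanity check for the expected checkpoint layout.
from pathlib import Path

expected = [
    "pretrained_weights/MimicMotion_1-1.pth",
    "pretrained_weights/DisPose.pth",
    "pretrained_weights/dwpose/dw-ll_ucoco_384.onnx",
    "pretrained_weights/dwpose/yolox_l.onnx",
    "pretrained_weights/stable-diffusion-v1-5",
    "pretrained_weights/stable-video-diffusion-img2vid-xt-1-1",
]
missing = [p for p in expected if not Path(p).exists()]
print("All weights in place." if not missing else f"Missing: {missing}")
```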
A sample configuration for testing is provided in `test.yaml`. You can easily modify the configuration to suit your needs.
```bash
bash scripts/test.sh
```
- If your GPU memory is limited, try setting `decode_chunk_size` in `test.yaml` to 1 (see the sketch after this list for what this knob does).
- If you want to enhance the quality of the generated video, try post-processing such as face swapping (insightface) or frame interpolation (IFRNet).
This is the official code of DisPose. The copyrights of the demo images and videos belong to the community users who created them. Feel free to contact us if you would like them removed.
We sincerely appreciate the code releases of the following projects: MimicMotion, Moore-AnimateAnyone, and CMP.