
Latent Image Animator: Learning to Animate Images via Latent Space Navigation

This is the official PyTorch implementation of the ICLR 2022 paper "Latent Image Animator: Learning to Animate Images via Latent Space Navigation".

Requirements

  • Python 3.7
  • PyTorch 1.5+
  • tensorboard
  • moviepy
  • av
  • tqdm
  • lpips

1. Animation demo

Download the pre-trained checkpoints from here and put the models under ./checkpoints. We have provided several demo source images and driving videos in ./data. To obtain demos, run the following commands; generated results will be saved under ./res.

python run_demo.py --model vox --source_path ./data/vox/macron.png --driving_path ./data/vox/driving1.mp4 # using vox model
python run_demo.py --model taichi --source_path ./data/taichi/subject1.png --driving_path ./data/taichi/driving1.mp4 # using taichi model
python run_demo.py --model ted --source_path ./data/ted/subject1.png --driving_path ./data/ted/driving1.mp4 # using ted model

If you would like to use your own image and video, specify <SOURCE_PATH> (source image), <DRIVING_PATH> (driving video), and <DATASET>, then run

python run_demo.py --model <DATASET> --source_path <SOURCE_PATH> --driving_path <DRIVING_PATH>
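If you have several source images to animate with the same driving video, you can simply loop over the command line above. The following is a minimal Python sketch that relies only on the documented flags; the file names are placeholders.

# Sketch: animate several source images with one driving video by looping
# over run_demo.py. Only the documented flags are used; file names are placeholders.
import subprocess

sources = ["./data/vox/macron.png"]   # replace with your own source images
driving = "./data/vox/driving1.mp4"   # replace with your own driving video

for src in sources:
    subprocess.run([
        "python", "run_demo.py",
        "--model", "vox",
        "--source_path", src,
        "--driving_path", driving,
    ], check=True)
# Results are written under ./res, as described above.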

2. Datasets

Please follow the instructions in FOMM and MRAA to download and preprocess the VoxCeleb, Taichi and TED datasets. Put the datasets under ./datasets and organize them as follows:

Vox (Taichi, Ted)

Video Dataset (vox, taichi, ted)
|-- train
    |-- video1
        |-- frame1.png
        |-- frame2.png
        |-- ...
    |-- video2
        |-- frame1.png
        |-- frame2.png
        |-- ...
    |-- ...
|-- test
    |-- video1
        |-- frame1.png
        |-- frame2.png
        |-- ...
    |-- video2
        |-- frame1.png
        |-- frame2.png
        |-- ...
    |-- ...
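Before training, it can help to sanity-check that your directory layout matches the tree above. The snippet below is a minimal sketch that only assumes the frame-folder structure shown here; the dataset root is a placeholder.

# Sketch: verify the expected layout (train/test -> video folders -> frame images).
# The dataset root is a placeholder; adjust it to your setup.
from pathlib import Path

root = Path("./datasets/vox")

for split in ("train", "test"):
    videos = [d for d in (root / split).iterdir() if d.is_dir()]
    n_frames = sum(len(list(v.glob("*.png"))) for v in videos)
    print(f"{split}: {len(videos)} videos, {n_frames} frames")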

3. Training

By default, we use DistributedDataParallel on 8 V100 GPUs for all datasets. To train the network, run

python train.py --dataset <DATASET> --exp_path <EXP_PATH> --exp_name <EXP_NAME>

The dataset options are <DATASET>: {vox, taichi, ted}. TensorBoard logs and checkpoints will be saved in <EXP_PATH>/<EXP_NAME>/log and <EXP_PATH>/<EXP_NAME>/checkpoints respectively.

To resume training from a checkpoint, run

python train.py --dataset <DATASET> --exp_path <EXP_PATH> --exp_name <EXP_NAME> --resume_ckpt <CHECKPOINT_PATH>
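For readers unfamiliar with DistributedDataParallel, the pattern below is a generic PyTorch sketch of multi-GPU wrapping; it is not the repository's train.py, the model is a placeholder, and launching with torchrun is an assumption about how the processes are started.

# Generic DistributedDataParallel pattern (illustration only, not the repo's train.py).
# Typically launched with: torchrun --nproc_per_node=8 this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 10).cuda(local_rank) # placeholder model
    model = DDP(model, device_ids=[local_rank])

    # ... build DistributedSampler-based loaders and run the usual training loop ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()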

4. Evaluation

To obtain reconstruction and LPIPS results, put checkpoints under ./checkpoints and run

python evaluation.py --dataset <DATASET> --save_path <SAVE_PATH>

Generated videos will be saved under <SAVE_PATH>. For other evaluation metrics, we use the code from here.
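LPIPS itself comes from the lpips package listed in the requirements. As a standalone reference, the snippet below shows how a perceptual distance between two frames is typically computed with that package; the tensors are random placeholders and inputs are expected in [-1, 1].

# Standalone LPIPS illustration using the `lpips` package from the requirements.
import lpips
import torch

loss_fn = lpips.LPIPS(net="alex")   # AlexNet backbone, the common default

# Placeholder frames; in practice these are generated vs. ground-truth frames.
img0 = torch.rand(1, 3, 256, 256) * 2 - 1
img1 = torch.rand(1, 3, 256, 256) * 2 - 1

with torch.no_grad():
    dist = loss_fn(img0, img1)
print(dist.item())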

5. Linear manipulation

To obtain linear manipulation results of a single image, run

python linear_manipulation.py --model <DATASET> --img_path <IMAGE_PATH> --save_folder <RESULTS_PATH>

By default, results will be saved under ./res_manipulation.
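Conceptually, linear manipulation steps a latent code along a learned direction with varying magnitude. The sketch below only illustrates that idea; the names (z, direction, decoder) are hypothetical and this is not the repository's linear_manipulation.py.

# Conceptual sketch of linear latent manipulation: step a latent code along a
# direction with increasing magnitude. Names are hypothetical.
import torch

z = torch.randn(1, 512)                   # placeholder latent code of a source image
direction = torch.randn(1, 512)
direction = direction / direction.norm()  # unit-norm direction in latent space

for alpha in torch.linspace(-3.0, 3.0, 7):
    z_edit = z + alpha * direction        # linear walk along the direction
    # frame = decoder(z_edit)             # a generator/decoder would render each edited code
    print(f"alpha={alpha.item():+.1f}, |z_edit|={z_edit.norm().item():.2f}")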

Citation

If you find this code useful for your research, please consider citing our paper:

@inproceedings{wang2022latent,
  title={Latent Image Animator: Learning to Animate Images via Latent Space Navigation},
  author={Yaohui Wang and Di Yang and Francois Bremond and Antitza Dantcheva},
  booktitle={International Conference on Learning Representations},
  year={2022}
}

Acknowledgement

Part of the code is adapted from FOMM and MRAA. We thank the authors for their contributions to the community.
