Commit

Update README.md
ShenhaoZhu authored Mar 26, 2024
1 parent 9304755 commit 41f2654
Showing 1 changed file with 3 additions and 1 deletion.
README.md: 4 changes (3 additions, 1 deletion)
@@ -31,7 +31,7 @@ https://github.com/fudan-generative-vision/champ/assets/82803297/b4571be6-dfb0-4

# Installation
- System requirement: Ubuntu 20.04
- Tested GPUs: A100, RTX3090 (RTX3090 newly added in this commit; see the quick environment check below)
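To quickly confirm that a machine matches this tested setup, standard Ubuntu and NVIDIA tools can report the OS release and the installed GPUs (generic commands, not project-specific scripts):

```bash
# Print the OS release; the tested system is Ubuntu 20.04.
lsb_release -d

# List the installed NVIDIA GPUs (tested: A100, RTX3090).
nvidia-smi -L
```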

Create conda environment:
```bash
# (conda environment setup commands are collapsed in this diff view)
```
@@ -92,6 +92,8 @@ Animation results will be saved in `results` folder. You can change the referenc

You can also extract the driving motion from any video and then render it with Blender. Instructions and scripts for this will be provided later.
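Since those scripts are not released yet, the sketch below is only a generic illustration of how headless Blender rendering is typically driven; the script name and its arguments are hypothetical placeholders, not part of this repository:

```bash
# Run Blender without a UI and execute a Python script.
# Everything after "--" is ignored by Blender itself and left in sys.argv
# for the script to parse. "render_motion.py" and its flags are placeholders
# for the scripts to be released later.
blender --background --python render_motion.py -- \
    --motion extracted_motion.npz \
    --output renders/
```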

Note: The default motion-01 in `inference.yaml` has more than 500 frames and requires about 36 GB of VRAM. If you run into VRAM issues, consider switching to other example data with fewer frames.
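Before launching the default example, it can help to check how much VRAM is actually free. The query below uses standard `nvidia-smi` options and is not specific to this project:

```bash
# Show total and currently free memory per GPU;
# the default motion-01 example needs roughly 36 GB free.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```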

# Acknowledgements
We thank the authors of [MagicAnimate](https://github.com/magic-research/magic-animate), [Animate Anyone](https://github.com/HumanAIGC/AnimateAnyone), and [AnimateDiff](https://github.com/guoyww/AnimateDiff) for their excellent work. Our project is built upon [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone), and we are grateful for their open-source contributions.

