DiffSynth Studio is a diffusion engine. We have restructured architectures, including the Text Encoder, UNet, and VAE, among others, maintaining compatibility with models from the open-source community while improving computational performance. We offer many interesting features. Enjoy the magic of diffusion models!
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
The Python examples are in `examples/`. We provide an overview here.
We trained an extended video synthesis model that can generate up to 128 frames. See `examples/ExVideo`.
(Video: github_title.mp4)
Generate high-resolution images by breaking the resolution limitation of diffusion models! See `examples/image_synthesis`.
| 512×512 | 1024×1024 | 2048×2048 | 4096×4096 |
|---|---|---|---|

| 1024×1024 | 2048×2048 |
|---|---|
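The resolutions above go well beyond what a diffusion model sees during training. A common way to break that limit is tiled processing: the canvas is split into overlapping tiles, each tile is processed at the model's native resolution, and the overlaps are blended. The sketch below illustrates only that tiling/blending idea in plain NumPy; the function names are hypothetical and are not DiffSynth-Studio's actual API.

```python
# Minimal sketch of overlapping-tile processing (hypothetical helper names,
# not DiffSynth-Studio's API): split a large canvas into tiles no bigger
# than the model's training resolution, process each, blend by averaging.
import numpy as np

def tile_coords(size, tile, overlap):
    """Yield (start, end) spans covering `size` with overlapping tiles."""
    if size <= tile:
        return [(0, size)]
    stride = tile - overlap
    starts = list(range(0, size - tile + 1, stride))
    if starts[-1] + tile < size:           # make sure the edge is covered
        starts.append(size - tile)
    return [(s, s + tile) for s in starts]

def process_tiled(image, tile=512, overlap=64, fn=lambda t: t):
    """Apply `fn` to each overlapping tile and average the overlaps."""
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros((h, w) + (1,) * (image.ndim - 2))
    for y0, y1 in tile_coords(h, tile, overlap):
        for x0, x1 in tile_coords(w, tile, overlap):
            out[y0:y1, x0:x1] += fn(image[y0:y1, x0:x1])
            weight[y0:y1, x0:x1] += 1
    return out / weight

img = np.random.rand(2048, 2048, 3)        # a 2048×2048 RGB canvas
result = process_tiled(img, tile=512, overlap=64)
assert np.allclose(result, img)            # identity fn reconstructs the input
```

In a real pipeline `fn` would be the per-tile denoising step; the averaging of overlaps is what hides seams between tiles.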
Render realistic videos in a flat, cartoon-like style and enable video editing features. See `examples/Diffutoon`.
(Video: Diffutoon.mp4)

(Video: Diffutoon_edit.mp4)
Video stylization without video models. See `examples/diffsynth`.
(Video: winter_stone.mp4)
Use Hunyuan-DiT to generate images from Chinese prompts. We also support LoRA fine-tuning of this model. See `examples/hunyuan_dit`.
Prompt: 少女手捧鲜花,坐在公园的长椅上,夕阳的余晖洒在少女的脸庞,整个画面充满诗意的美感 (A girl holding flowers, sitting on a park bench; the glow of the setting sun falls on her face, and the whole scene is full of poetic beauty.)
| 1024×1024 | 2048×2048 (highres-fix) |
|---|---|
Prompt: 一只小狗蹦蹦跳跳,周围是姹紫嫣红的鲜花,远处是山脉 (A little dog bouncing around, surrounded by colorful blooming flowers, with mountains in the distance.)
| Without LoRA | With LoRA |
|---|---|
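The LoRA fine-tuning mentioned above adapts a frozen pretrained weight W with a trainable low-rank update B·A, so only r·(m+n) parameters are trained instead of m·n. This is a generic sketch of that idea in NumPy, not DiffSynth-Studio's training code; all names here are illustrative.

```python
# Minimal sketch of the LoRA idea (illustrative names, not DiffSynth-Studio
# code): the frozen weight W is adapted by a low-rank product B @ A.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4
W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, rank))                 # trainable up-projection, init 0

def lora_forward(x, scale=1.0):
    """Forward pass: frozen path plus scaled low-rank adaptation."""
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# With B initialised to zero, training starts as an exact no-op:
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only A and B are updated, a LoRA checkpoint is tiny compared to the base model and can be merged into W (or toggled off) at inference time.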
To launch the web UI, run:

python -m streamlit run DiffSynth_Studio.py