
CogVideoX Support #6

Closed

kursatdinc opened this issue Dec 5, 2024 · 4 comments

Comments

@kursatdinc

Will you add CogVideoX support?

@kinam0252
Collaborator

Hello, thank you for your question. Yes, we’re planning to implement support for CogVideoX in the near future. Please stay tuned for updates!

@kinam0252
Collaborator

kinam0252 commented Dec 19, 2024

> Will you add CogVideoX support?

Thanks for waiting! 🙌 We’ve added STG support to the CogVideoX pipeline; it’s available as CogVideoXSTGPipeline. Feel free to check it out! 🚀
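
For anyone looking for a starting point, a minimal usage sketch might look like the one below. This assumes CogVideoXSTGPipeline keeps the standard diffusers CogVideoXPipeline call signature; the import path and checkpoint name are placeholders, and any STG-specific arguments are documented in the repo README.

import torch
from diffusers.utils import export_to_video
# Placeholder import path — adjust to wherever this repo defines the pipeline.
from pipeline_stg_cogvideox import CogVideoXSTGPipeline

# Checkpoint name is illustrative; any CogVideoX checkpoint should work.
pipe = CogVideoXSTGPipeline.from_pretrained(
    "THUDM/CogVideoX-2b", torch_dtype=torch.float16
).to("cuda")

video = pipe(
    prompt="A panda playing guitar in a bamboo forest",
    num_inference_steps=50,
    guidance_scale=6.0,
    num_frames=49,
).frames[0]

export_to_video(video, "output.mp4", fps=8)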

@kursatdinc
Author

When I try to run inference on an A100 40GB, I get a CUDA OOM error. Did you face this problem?

@kinam0252
Collaborator

kinam0252 commented Dec 20, 2024

In our experiments there was no OOM on 80GB, but if you hit OOM on 40GB you can try the tips from the Memory Optimization section of the diffusers CogVideoX docs.

You can try removing .to("cuda") in

pipe = CogVideoXSTGPipeline.from_pretrained(...).to("cuda")

and enabling some of the following options (use either model offload or sequential offload, not both — sequential is slower but saves more memory):

pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()
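
Putting those tips together, a memory-lean setup might look roughly like the sketch below. This again assumes the pipeline keeps the standard diffusers interface; the import path and checkpoint name are placeholders.

import torch
from pipeline_stg_cogvideox import CogVideoXSTGPipeline  # placeholder import path

# Illustrative checkpoint; substitute the one you are using.
pipe = CogVideoXSTGPipeline.from_pretrained(
    "THUDM/CogVideoX-2b", torch_dtype=torch.float16
)
# Note: no .to("cuda") here — offloading manages device placement itself.

# Pick ONE offload strategy; sequential offload saves the most memory.
pipe.enable_sequential_cpu_offload()

# VAE tiling/slicing reduce peak memory during decoding.
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

video = pipe(prompt="...", num_inference_steps=50).frames[0]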

Hope this works!
