torch.OutOfMemoryError: CUDA out of memory #82

Open
noskill opened this issue Nov 18, 2024 · 9 comments

@noskill

noskill commented Nov 18, 2024

Hi!
I have 4x RTX 3090, but generation fails with an out-of-memory error on 256x512 videos.
Full stack trace:

$ time python3 ./demos/cli.py --model_dir weights/ --width=512 --height=256                            
running                                                                                                                                  
Launching with 4 GPUs. If you want to force single GPU mode use CUDA_VISIBLE_DEVICES=0.                                                  
Attention mode: sdpa                                                                                                                     
2024-11-18 07:11:28,684 INFO worker.py:1819 -- Started a local Ray instance.                                                             
(MultiGPUContext pid=1171333) Initializing rank 2/4                                                                                      
(MultiGPUContext pid=1171333) Timing init_process_group                                                                                  
(MultiGPUContext pid=1171345) Timing load_text_encoder                                                                                   
(MultiGPUContext pid=1171345) Timing load_dit                                                                                            
(MultiGPUContext pid=1171345) Initializing rank 4/4 [repeated 3x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(MultiGPUContext pid=1171345) Timing init_process_group [repeated 3x across cluster]                                                     
(MultiGPUContext pid=1171339) Timing load_text_encoder [repeated 3x across cluster]                                                      
(MultiGPUContext pid=1171345) Timing load_vae                                                                                            
(MultiGPUContext pid=1171336) Timing load_dit [repeated 3x across cluster]                                                               
(MultiGPUContext pid=1171345) Stage                   Time(s)    Percent                                                                 
(MultiGPUContext pid=1171345) init_process_group         1.25      2.80%                                                                 
(MultiGPUContext pid=1171345) load_text_encoder          9.70     21.74%                                                                 
(MultiGPUContext pid=1171345) load_dit                  30.95     69.37%                                                                 
(MultiGPUContext pid=1171345) load_vae                   2.72      6.09%                                                                 
(MultiGPUContext pid=1171336) Timing load_vae [repeated 3x across cluster]                                                               
(MultiGPUContext pid=1171339) Stage                   Time(s)    Percent [repeated 3x across cluster]                                    
(MultiGPUContext pid=1171339) init_process_group         1.33      2.99% [repeated 3x across cluster]                                    
(MultiGPUContext pid=1171339) load_text_encoder          9.47     21.21% [repeated 3x across cluster]                                    
(MultiGPUContext pid=1171339) load_dit                  30.95     69.35% [repeated 3x across cluster]                                    
(MultiGPUContext pid=1171339) load_vae                   2.88      6.45% [repeated 3x across cluster]                                    
(pid=1171336) Sampling 0: 100%|███████████████████████████████████████████████████████████████████████| 64.0/64.0 [10:37<00:00, 10.1s/it]
Traceback (most recent call last):             
  File "/home/imgen/projects/genmoai/./demos/cli.py", line 155, in <module>                                                                                                                                            
    generate_cli()                                                                                                                                                                                                     
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 1157, in __call__                                                                                                           
    return self.main(*args, **kwargs)                                                                                                                                                                                  
           ^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                                  
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 1078, in main                                                                                                               
    rv = self.invoke(ctx)                                                                                                                                                                                              
         ^^^^^^^^^^^^^^^^                                                                                                                                                                                              
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 1434, in invoke                                                                                                             
    return ctx.invoke(self.callback, **ctx.params)                                                                                                                                                                     
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                     
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 783, in invoke                                                                                                              
    return __callback(*args, **kwargs)                                                                                                                                                                                 
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                                 
  File "/home/imgen/projects/genmoai/./demos/cli.py", line 141, in generate_cli                                                                                                                                        
    output = generate_video(                                                                                                                                                                                           
             ^^^^^^^^^^^^^^^                                                                                                                                                                                           
  File "/home/imgen/projects/genmoai/./demos/cli.py", line 96, in generate_video                                                                                                                                       
    final_frames = pipeline(**args)                                                                                                                                                                                    
                   ^^^^^^^^^^^^^^^^                                                                                                                                                                                    
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/pipelines.py", line 544, in __call__                                                                                         
    return ray.get([ctx.run.remote(fn=sample, **kwargs, show_progress=i == 0) for i, ctx in enumerate(self.ctxs)])[                                                                                                    
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                     
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper                                                                                   
    return fn(*args, **kwargs)                                                                                                                                                                                         
           ^^^^^^^^^^^^^^^^^^^                                                                                                                                                                                         
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper                                                                                          
    return func(*args, **kwargs)                                                                                                                                                                                       
           ^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                                       
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/ray/_private/worker.py", line 2753, in get                                                                                                       
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)                                                                                                                                     
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                     
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/ray/_private/worker.py", line 904, in get_objects                                                                                                
    raise value.as_instanceof_cause()                                                                                                                                                                                  
ray.exceptions.RayTaskError(OutOfMemoryError): ray::MultiGPUContext.run() (pid=1171345, ip=192.168.1.66, actor_id=c0ae50f4dcb700a6726dfbc301000000, repr=<genmo.mochi_preview.pipelines.MultiGPUContext object at 0x7f5c3ffe67d0>)
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/pipelines.py", line 499, in run                                                                                              
    return fn(self, **kwargs)                                                                                                                                                                                          
           ^^^^^^^^^^^^^^^^^^                                                                                                                                                                                          
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/pipelines.py", line 541, in sample                                                                                           
    frames = decode_latents(ctx.decoder, latents)                                                                                                                                                                      
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                                      
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context                                        
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/vae/models.py", line 1015, in decode_latents                                                                                 
    samples = decoder(z)                             
              ^^^^^^^^^^                             
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl                                                                                    
    return self._call_impl(*args, **kwargs)                                                                
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl                                                                                            
    return forward_call(*args, **kwargs)                                                                   
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                   
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/vae/models.py", line 598, in forward                                                                                         
    x = block(x)                                     
        ^^^^^^^^                                     
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl                                                                                    
    return self._call_impl(*args, **kwargs)                                                                
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl                                                                                            
    return forward_call(*args, **kwargs)                                                                   
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                   
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/container.py", line 250, in forward                                                                                             
    input = module(input)                            
            ^^^^^^^^^^^^^                            
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl                                                                                    
    return self._call_impl(*args, **kwargs)                                                                
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl                                                                                            
    return forward_call(*args, **kwargs)                                                                   
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                   
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/vae/models.py", line 292, in forward                                                                                         
    x = self.stack(x)                                
        ^^^^^^^^^^^^^                                
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl                                                                                    
    return self._call_impl(*args, **kwargs)                                                                
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl                                                                                            
    return forward_call(*args, **kwargs)                                                                   
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                   
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/container.py", line 250, in forward                                                                                             
    input = module(input)                            
            ^^^^^^^^^^^^^                            
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl                                                                                    
    return self._call_impl(*args, **kwargs)                                                                
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl                                                                                            
    return forward_call(*args, **kwargs)                                                                   
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                   
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/vae/models.py", line 159, in forward                                                                                         
    return super().forward(x)                        
           ^^^^^^^^^^^^^^^^^^                        
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/genmo/mochi_preview/vae/models.py", line 77, in forward                                                                                          
    return super(SafeConv3d, self).forward(input)                                                          
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                          
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 725, in forward                                                                                                  
    return self._conv_forward(input, self.weight, self.bias)                                               
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                               
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 709, in _conv_forward                                                                                            
    return F.conv3d(                                 
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.31 GiB. GPU 0 has a total capacity of 23.68 GiB of which 419.06 MiB is free. Including non-PyTorch memory, this process has 23.27 GiB memory in use. Of the allocated memory 21.88 GiB is allocated by PyTorch, and 955.94 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(pid=1171336) Sampling 0: 100%|██████████| 64.0/64.0 [10:54<00:00, 10.2s/it]                               

real    11m53.248s                                   
user    0m39.424s                                    
sys     0m26.755s                                                             

nvidia-smi output:

Mon Nov 18 10:22:35 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120                Driver Version: 550.120        CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        Off |   00000000:19:00.0 Off |                  N/A |
| 41%   69C    P2            237W /  370W |   21033MiB /  24576MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        Off |   00000000:1A:00.0 Off |                  N/A |
| 75%   67C    P2            254W /  370W |   21032MiB /  24576MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA GeForce RTX 3090        Off |   00000000:67:00.0 Off |                  N/A |
| 72%   65C    P2            244W /  370W |   21032MiB /  24576MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA GeForce RTX 3090        Off |   00000000:68:00.0 Off |                  N/A |
| 74%   66C    P2            254W /  370W |   21032MiB /  24576MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   1201659      C   ray::MultiGPUContext.run                    21024MiB |
|    1   N/A  N/A   1201658      C   ray::MultiGPUContext.run                    21024MiB |
|    2   N/A  N/A   1201660      C   ray::MultiGPUContext.run                    21024MiB |
|    3   N/A  N/A   1201667      C   ray::MultiGPUContext.run                    21024MiB |
+-----------------------------------------------------------------------------------------+
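
As an aside, the allocator hint at the end of the OOM message (PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True) is worth trying; a minimal sketch of setting it from Python, assuming it runs before the first CUDA allocation (exporting the variable in the shell before launching is equivalent):

import os

# Must take effect before PyTorch's CUDA caching allocator initializes,
# i.e. before the first tensor is placed on a GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # the allocator reads the variable lazily, on first GPU use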
@konkura

konkura commented Nov 19, 2024

I have the same problem with 6x RTX 4090 when trying to run with the default settings.

@paras-genmo
Contributor

@noskill @konkura Try adding --cpu_offload to the cli.py command to save memory.

@noskill
Author

noskill commented Nov 30, 2024

@paras-genmo I tried, but it fails with "CPU offload not supported in multi-GPU mode".

@paras-genmo paras-genmo reopened this Dec 1, 2024
@paras-genmo
Contributor

paras-genmo commented Dec 1, 2024

I see. We haven't tested multi-GPU where each GPU has < 80 GB of memory, as we don't have a system to test that setting.

The multi-GPU codepath is slightly different from the single-GPU one, but it should theoretically be possible to make it work with code changes, since 6x 4090 is definitely better than 1x 4090.

Were you able to get the code to work on a single GPU?

CUDA_VISIBLE_DEVICES=0 python3 ./demos/cli.py --model_dir weights/ --width=512 --height=256 --num_frames=31 --cpu_offload

You can adjust --num_frames to save memory: 31, 37, 43, etc., in increments of 6.
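
Those values all follow 6*k + 1 (presumably from the VAE's 6x temporal compression), as this quick Python sketch of the pattern shows (the helper name is made up):

def valid_num_frames(max_frames=163):
    # Frame counts suggested above: 6*k + 1 in increments of 6,
    # starting from 31 and going up to max_frames.
    return [6 * k + 1 for k in range(5, (max_frames - 1) // 6 + 1)]

print(valid_num_frames(61))  # [31, 37, 43, 49, 55, 61]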

If that works, a smaller number of frames may work on multi-GPU even without CPU offloading:

python3 ./demos/cli.py --model_dir weights/ --width=512 --height=256 --num_frames=31

If you find a way to make CPU offload work on multi-GPU, we would be happy to review a pull request.

@noskill
Author

noskill commented Dec 2, 2024

It seems that CPU offload doesn't work either:

$ CUDA_VISIBLE_DEVICES=0 python3 ./demos/cli.py --model_dir weights/ --width=512 --height=256 --num_frames=31 --cpu_offload
Launching with 1 GPUs. If you want to force single GPU mode use CUDA_VISIBLE_DEVICES=0.                                                               
Attention mode: sdpa                                                                                                                                  
Timing load_text_encoder                                                                                                                              
Timing load_dit                                                                                                                                       
Timing load_vae                                                                                                                                       
Stage                   Time(s)    Percent                                                                                                            
load_text_encoder          0.64     12.45%                                                                                                            
load_dit                   2.76     53.34%                                                                                                            
load_vae                   1.77     34.21%                                                                                                            
Max memory reserved: 0.00 GB                                                                                                                          
moving model from cpu -> cuda:0                                                                                                                       
moving model from cuda:0 -> cpu                                                                                                                       
Max memory reserved: 17.85 GB                                                                                                                         
moving model from cpu -> cuda:0                                                                                                                       
Traceback (most recent call last):                                                                                                                    
  File "/home/imgen/projects/genmoai/./demos/cli.py", line 163, in <module>                                                                           
    generate_cli()                                                                                                                                    
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 1157, in __call__                                          
    return self.main(*args, **kwargs)                                                                                                                 
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 1078, in main                                              
    rv = self.invoke(ctx)                                                                                                                             
         ^^^^^^^^^^^^^^^^                                                                                                                             
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 1434, in invoke                                            
    return ctx.invoke(self.callback, **ctx.params)                                                                                                    
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                    
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imgen/projects/genmoai/./demos/cli.py", line 149, in generate_cli
    output = generate_video(
             ^^^^^^^^^^^^^^^
  File "/home/imgen/projects/genmoai/./demos/cli.py", line 103, in generate_video
    final_frames = pipeline(**args)
                   ^^^^^^^^^^^^^^^^
  File "/home/imgen/projects/genmoai/src/genmo/mochi_preview/pipelines.py", line 560, in __call__
    with move_to_device(self.dit, self.device):
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/home/imgen/projects/genmoai/src/genmo/mochi_preview/pipelines.py", line 499, in move_to_device
    model.to(target_device)
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1340, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 927, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/home/imgen/miniconda3/envs/py32/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1326, in convert
    return t.to(
           ^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 MiB. GPU 0 has a total capacity of 23.68 GiB of which 2.88 MiB is free. Process 179526 has 254.00 MiB memory in use. Including non-PyTorch memory, this process has 23.42 GiB memory in use. Of the allocated memory 21.57 GiB is allocated by PyTorch, and 1.56 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

It fails at pipelines.py line 560:

with move_to_device(self.dit, self.device):

Also, the DiT is loaded in float32.
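
For reference, the offload context manager presumably looks something like this minimal sketch (not the actual pipelines.py code; the torch.cuda.empty_cache() call is an addition that might help release the 1.56 GiB reserved-but-unallocated memory seen above):

import contextlib
import torch

@contextlib.contextmanager
def move_to_device(model, target_device):
    # Move the model onto the GPU for one stage, then park it back on
    # its original device and release cached allocator blocks so the
    # next stage (e.g. the VAE decoder) has room.
    og_device = next(model.parameters()).device
    print(f"moving model from {og_device} -> {target_device}")
    model.to(target_device)
    try:
        yield
    finally:
        print(f"moving model from {target_device} -> {og_device}")
        model.to(og_device)
        torch.cuda.empty_cache()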

@saimeur

saimeur commented Dec 13, 2024

Hi,

I have exactly the same issue regarding the OOM error. I'm working with several GPUs, each with less than 46 GB of memory, and offloading to the CPU doesn't seem to work. The model is moved multiple times between the CPU and the GPU, but it ends up on the GPU.

Did you solve this OOM problem @noskill ?

@noskill
Author

noskill commented Dec 14, 2024

@saimeur Not yet; to be honest, I am using Mochi 1 repackaged by the ComfyUI team: https://huggingface.co/Comfy-Org/mochi_preview_repackaged/tree/main/split_files/diffusion_models?ref=blog.comfy.org
The fp16 version works fine on an RTX 3090.

https://blog.comfy.org/p/mochi-1

@saimeur

saimeur commented Dec 16, 2024

Thank you for your answer @noskill.

I'm not very familiar with ComfyUI, but I took a quick look at the workflow, and the user interface seems great for generating some outputs while having control over the parameters. Is it possible to use ComfyUI to generate videos in bulk, like what can be done with a .yml file, or is it limited to one at a time?

@noskill
Author

noskill commented Dec 18, 2024

@saimeur Yes, there are some examples of using the API in the ComfyUI repo. I also made a PR moving all parameters to bfloat16: #120

With that change, this works with CPU offload enabled:

CUDA_VISIBLE_DEVICES=3 python3 ./demos/cli.py --model_dir weights/ --width=512 --height=256 --num_frames=31 --cpu_offload
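
For anyone patching locally before #120 lands, the change amounts to casting the float32 weights down to bfloat16; a minimal sketch of the idea (the pipeline attribute name is illustrative):

import torch

def cast_to_bfloat16(model):
    # Halves the weight footprint; bfloat16 keeps float32's exponent
    # range, so activations are unlikely to overflow after the cast.
    return model.to(dtype=torch.bfloat16)

# e.g. for the DiT that was loaded in float32:
# pipeline.dit = cast_to_bfloat16(pipeline.dit)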
