fix-format (huggingface#4458)
make style

Co-authored-by: yiyixuxu <yixu310@gmail.com>
yiyixuxu authored Aug 4, 2023
1 parent 29ece0d commit 1edd0de
Showing 1 changed file with 4 additions and 4 deletions.
@@ -203,10 +203,10 @@ def enable_model_cpu_offload(self, gpu_id=0):

  def enable_sequential_cpu_offload(self, gpu_id=0):
      r"""
-         Offloads all models (`unet`, `text_encoder`, `vae`, and `safety checker` state dicts) to CPU using 🤗 Accelerate, significantly reducing memory usage. Models are moved to a
-         `torch.device('meta')` and loaded on a GPU only when their specific submodule's `forward` method is called.
-         Offloading happens on a submodule basis. Memory savings are higher than using
-         `enable_model_cpu_offload`, but performance is lower.
+         Offloads all models (`unet`, `text_encoder`, `vae`, and `safety checker` state dicts) to CPU using 🤗
+         Accelerate, significantly reducing memory usage. Models are moved to a `torch.device('meta')` and loaded on a
+         GPU only when their specific submodule's `forward` method is called. Offloading happens on a submodule basis.
+         Memory savings are higher than using `enable_model_cpu_offload`, but performance is lower.
      """
      self.prior_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id)
      self.decoder_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id)
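The reflowed docstring describes offloading at submodule granularity: only the submodule whose `forward` is currently running occupies the GPU. A minimal, library-free sketch of that pattern (all class names here are illustrative stand-ins, not the real diffusers/Accelerate API, which implements this with hooks and `torch.device('meta')`):

```python
# Illustrative sketch of sequential CPU offload: each submodule starts
# on the CPU and is moved to the GPU only for its own forward call,
# then offloaded again before the next submodule runs. Names like
# Submodule and SequentialOffloader are hypothetical.

class Submodule:
    def __init__(self, name):
        self.name = name
        self.device = "cpu"   # weights start offloaded

    def forward(self, x):
        return f"{self.name}({x})"


class SequentialOffloader:
    """Runs submodules one at a time, never keeping two on the GPU."""

    def __init__(self, submodules, gpu="cuda:0"):
        self.submodules = submodules
        self.gpu = gpu
        self.trace = []       # record of device moves, for illustration

    def run(self, x):
        for m in self.submodules:
            m.device = self.gpu              # load just this submodule
            self.trace.append((m.name, m.device))
            x = m.forward(x)
            m.device = "cpu"                 # offload immediately after
        return x


pipe = SequentialOffloader(
    [Submodule("text_encoder"), Submodule("unet"), Submodule("vae")]
)
out = pipe.run("prompt")
# After run() returns, every submodule is back on the CPU.
```

This is why the docstring notes that memory savings beat `enable_model_cpu_offload` (which keeps a whole model resident per step) at the cost of throughput: every submodule call pays a CPU-to-GPU transfer.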
