
Commit b52119a

suzukimain and stevhliu authored
[docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8 (huggingface#9428)
* [docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8
  Updated documentation as runwayml/stable-diffusion-v1-5 has been removed from Hugging Face.
* Update docs/source/en/using-diffusers/inpaint.md
* Replace with stable-diffusion-v1-5/stable-diffusion-v1-5
* Update inpaint.md

Co-authored-by: Steven Liu <[email protected]>
1 parent 8336405 commit b52119a
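In practice, the change amounts to swapping the repository id passed to `from_pretrained`. A minimal before/after sketch, mirroring the README example updated below:

```python
import torch
from diffusers import DiffusionPipeline

# Before (repository removed from the Hub):
#   pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# After, using the mirror this commit switches the docs to:
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipeline.to("cuda")
image = pipeline("An image of a squirrel in Picasso style").images[0]
```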

File tree: 95 files changed (+316, -315 lines)

Note: large commits hide some content by default, so only a subset of the 95 changed files is shown below.

PHILOSOPHY.md (+1, -1)

@@ -65,7 +65,7 @@ Pipelines are designed to be easy to use (therefore do not follow [*Simple over
 The following design principles are followed:
 - Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [# Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
 - Pipelines all inherit from [`DiffusionPipeline`].
-- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
+- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
 - Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
 - Pipelines should be used **only** for inference.
 - Pipelines should be very readable, self-explanatory, and easy to tweak.

README.md (+3, -3)

@@ -73,7 +73,7 @@ Generating outputs is super easy with 🤗 Diffusers. To generate an image from
 from diffusers import DiffusionPipeline
 import torch
 
-pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
+pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16)
 pipeline.to("cuda")
 pipeline("An image of a squirrel in Picasso style").images[0]
 ```
@@ -144,7 +144,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
 <tr style="border-top: 2px solid black">
   <td>Text-to-Image</td>
   <td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img">Stable Diffusion Text-to-Image</a></td>
-  <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
+  <td><a href="https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5"> stable-diffusion-v1-5/stable-diffusion-v1-5 </a></td>
 </tr>
 <tr>
   <td>Text-to-Image</td>
@@ -174,7 +174,7 @@ Also, say 👋 in our public Discord channel <a href="https://discord.gg/G7tWnz9
 <tr>
   <td>Text-guided Image-to-Image</td>
   <td><a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/img2img">Stable Diffusion Image-to-Image</a></td>
-  <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"> runwayml/stable-diffusion-v1-5 </a></td>
+  <td><a href="https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5"> stable-diffusion-v1-5/stable-diffusion-v1-5 </a></td>
 </tr>
 <tr style="border-top: 2px solid black">
   <td>Text-guided Image Inpainting</td>

docs/source/en/api/models/controlnet.md (+1, -1)

@@ -29,7 +29,7 @@ from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
 url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path
 controlnet = ControlNetModel.from_single_file(url)
 
-url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
+url = "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path
 pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
 ```

docs/source/en/api/pipelines/stable_diffusion/inpaint.md (+1, -1)

@@ -19,7 +19,7 @@ The Stable Diffusion model can also be applied to inpainting which lets you edit
 It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such
 as [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting). Default
 text-to-image Stable Diffusion checkpoints, such as
-[runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) are also compatible but they might be less performant.
+[stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) are also compatible but they might be less performant.
 
 <Tip>
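For orientation, a minimal sketch of the inpainting-specific usage the doc recommends; the image URLs and prompt here are placeholders, not taken from this commit:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Placeholder inputs: any RGB image plus a white-on-black mask of the region to repaint.
init_image = load_image("https://example.com/photo.png")
mask_image = load_image("https://example.com/mask.png")

image = pipe(
    prompt="a white cat sitting on a bench",
    image=init_image,
    mask_image=mask_image,
).images[0]
```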

docs/source/en/api/pipelines/stable_diffusion/overview.md (+1, -1)

@@ -203,7 +203,7 @@ from diffusers import StableDiffusionImg2ImgPipeline
 import gradio as gr
 
 
-pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
 
 gr.Interface.from_pipeline(pipe).launch()
 ```

docs/source/en/api/pipelines/text_to_video_zero.md (+3, -3)

@@ -41,7 +41,7 @@ To generate a video from prompt, run the following Python code:
 import torch
 from diffusers import TextToVideoZeroPipeline
 
-model_id = "runwayml/stable-diffusion-v1-5"
+model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
 
 prompt = "A panda is playing guitar on times square"
@@ -63,7 +63,7 @@ import torch
 from diffusers import TextToVideoZeroPipeline
 import numpy as np
 
-model_id = "runwayml/stable-diffusion-v1-5"
+model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
 seed = 0
 video_length = 24 #24 ÷ 4fps = 6 seconds
@@ -137,7 +137,7 @@ To generate a video from prompt with additional pose control
 from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
 from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
 
-model_id = "runwayml/stable-diffusion-v1-5"
+model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
 pipe = StableDiffusionControlNetPipeline.from_pretrained(
     model_id, controlnet=controlnet, torch_dtype=torch.float16
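The third hunk is cut off mid-call by the diff context. For orientation, the documented pose-control example continues roughly like this (a sketch; only the `CrossFrameAttnProcessor` import above is actually part of this diff):

```python
# Continuation sketch of the pose-control example (not shown in the diff):
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Route attention through the first frame so generated frames stay temporally consistent.
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
```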

docs/source/en/conceptual/evaluation.md (+4, -4)

@@ -92,7 +92,7 @@ images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generato
 
 ![parti-prompts-14](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-14.png)
 
-We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)), yields:
+We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5)), yields:
 
 ![parti-prompts-15](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-15.png)
 
@@ -177,10 +177,10 @@ generator = torch.manual_seed(seed)
 images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
 ```
 
-Then we load the [v1-5 checkpoint](https://huggingface.co/runwayml/stable-diffusion-v1-5) to generate images:
+Then we load the [v1-5 checkpoint](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) to generate images:
 
 ```python
-model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5"
+model_ckpt_1_5 = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device)
 
 images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
@@ -198,7 +198,7 @@ print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}")
 # CLIP Score with v-1-5: 36.2137
 ```
 
-It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse.
+It seems like the [v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse.
 
 <Tip warning={true}>
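The `sd_clip_score_1_5` value in the last hunk comes from a CLIP-score helper defined earlier in evaluation.md, outside this diff's context. Roughly, it is computed like this (a sketch assuming `torchmetrics` is installed; the helper name is an assumption based on the surrounding doc):

```python
from functools import partial

import torch
from torchmetrics.functional.multimodal import clip_score

clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16")

def calculate_clip_score(images, prompts):
    # images: numpy array of shape (N, H, W, 3) in [0, 1], as returned with output_type="np"
    images_int = (images * 255).astype("uint8")
    score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach()
    return round(float(score), 4)

sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts)
```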

docs/source/en/conceptual/philosophy.md (+1, -1)

@@ -65,7 +65,7 @@ Pipelines are designed to be easy to use (therefore do not follow [*Simple over
 The following design principles are followed:
 - Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable-diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [# Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
 - Pipelines all inherit from [`DiffusionPipeline`].
-- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
+- Every pipeline consists of different model and scheduler components, that are documented in the [`model_index.json` file](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline and can be shared between pipelines with [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function.
 - Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
 - Pipelines should be used **only** for inference.
 - Pipelines should be very readable, self-explanatory, and easy to tweak.

docs/source/en/optimization/coreml.md (+2, -2)

@@ -102,10 +102,10 @@ Pass the path of the downloaded checkpoint with `-i` flag to the script. `--comp
 
 The inference script assumes you're using the original version of the Stable Diffusion model, `CompVis/stable-diffusion-v1-4`. If you use another model, you *have* to specify its Hub id in the inference command line, using the `--model-version` option. This works for models already supported and custom models you trained or fine-tuned yourself.
 
-For example, if you want to use [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5):
+For example, if you want to use [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5):
 
 ```shell
-python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5
+python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version stable-diffusion-v1-5/stable-diffusion-v1-5
 ```
 
 ## Core ML inference in Swift

docs/source/en/optimization/deepcache.md (+1, -1)

@@ -23,7 +23,7 @@ Then load and enable the [`DeepCacheSDHelper`](https://github.com/horseee/DeepCa
 ```diff
 import torch
 from diffusers import StableDiffusionPipeline
-pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda")
+pipe = StableDiffusionPipeline.from_pretrained('stable-diffusion-v1-5/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda")
 
 + from DeepCache import DeepCacheSDHelper
 + helper = DeepCacheSDHelper(pipe=pipe)
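The diff context ends mid-snippet; for reference, the DeepCache README continues by configuring and enabling the helper roughly as follows (a sketch; the interval and branch values are illustrative, not taken from this commit):

```python
# Continuation sketch (parameter values illustrative):
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

helper.disable()  # restores the original UNet forward pass
```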

docs/source/en/optimization/fp16.md (+1, -1)

@@ -47,7 +47,7 @@ import torch
 from diffusers import DiffusionPipeline
 
 pipe = DiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 )

docs/source/en/optimization/habana.md (+1, -1)

@@ -61,7 +61,7 @@ For more information, check out 🤗 Optimum Habana's [documentation](https://hu
 
 We benchmarked Habana's first-generation Gaudi and Gaudi2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) and [Habana/stable-diffusion-2](https://huggingface.co/Habana/stable-diffusion-2) Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance.
 
-For [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on 512x512 images:
+For [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) on 512x512 images:
 
 |                         | Latency (batch size = 1) | Throughput |
 | ---------------------- |:------------------------:|:---------------------------:|

docs/source/en/optimization/memory.md (+7, -7)

@@ -41,7 +41,7 @@ import torch
 from diffusers import StableDiffusionPipeline
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 )
@@ -66,7 +66,7 @@ import torch
 from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 )
@@ -92,7 +92,7 @@ import torch
 from diffusers import StableDiffusionPipeline
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 )
@@ -140,7 +140,7 @@ import torch
 from diffusers import StableDiffusionPipeline
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 )
@@ -201,7 +201,7 @@ def generate_inputs():
 
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 ).to("cuda")
@@ -265,7 +265,7 @@ class UNet2DConditionOutput:
 
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 ).to("cuda")
@@ -315,7 +315,7 @@ from diffusers import DiffusionPipeline
 import torch
 
 pipe = DiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5",
+    "stable-diffusion-v1-5/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
     use_safetensors=True,
 ).to("cuda")
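Each of the seven hunks above sits inside a different memory-saving recipe in memory.md; the recipes themselves fall outside the diff context. As one representative example, CPU offloading is enabled on such a pipeline like this (a sketch using diffusers' standard API, not text from this commit):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
# Keep submodules on the CPU and stream them to the GPU only when needed;
# trades speed for a much smaller peak VRAM footprint. Note: no .to("cuda")
# call is needed (or wanted) when offloading this way.
pipe.enable_sequential_cpu_offload()
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```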

docs/source/en/optimization/mps.md (+3, -3)

@@ -24,7 +24,7 @@ The `mps` backend uses PyTorch's `.to()` interface to move the Stable Diffusion
 ```python
 from diffusers import DiffusionPipeline
 
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
 pipe = pipe.to("mps")
 
 # Recommended if your computer has < 64 GB of RAM
@@ -46,7 +46,7 @@ If you're using **PyTorch 1.13**, you need to "prime" the pipeline with an addit
 ```diff
 from diffusers import DiffusionPipeline
 
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")
+pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5").to("mps")
 pipe.enable_attention_slicing()
 
 prompt = "a photo of an astronaut riding a horse on mars"
@@ -67,7 +67,7 @@ To prevent this from happening, we recommend *attention slicing* to reduce memor
 from diffusers import DiffusionPipeline
 import torch
 
-pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps")
+pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps")
 pipeline.enable_attention_slicing()
 ```
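The PyTorch 1.13 "priming" snippet in the second hunk is truncated by the diff context; per the doc's own description, it finishes by running one throwaway inference step before the real generation (a sketch, not part of this diff):

```python
# Continuation sketch of the priming pattern:
# one-step warmup pass, required once on PyTorch 1.13 with the mps backend
_ = pipe(prompt, num_inference_steps=1)

# subsequent calls generate normally
image = pipe(prompt).images[0]
```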

docs/source/en/optimization/onnx.md (+2, -2)

@@ -27,7 +27,7 @@ To load and run inference, use the [`~optimum.onnxruntime.ORTStableDiffusionPipe
 ```python
 from optimum.onnxruntime import ORTStableDiffusionPipeline
 
-model_id = "runwayml/stable-diffusion-v1-5"
+model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
 prompt = "sailing ship in storm by Leonardo da Vinci"
 image = pipeline(prompt).images[0]
@@ -44,7 +44,7 @@ To export the pipeline in the ONNX format offline and use it later for inference
 use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:
 
 ```bash
-optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
+optimum-cli export onnx --model stable-diffusion-v1-5/stable-diffusion-v1-5 sd_v15_onnx/
 ```
 
 Then to perform inference (you don't have to specify `export=True` again):
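The code following "Then to perform inference..." is outside the diff context; based on the optimum documentation pattern, it loads the exported folder directly (a sketch):

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Load the ONNX files exported by the CLI command above; no export=True needed.
pipeline = ORTStableDiffusionPipeline.from_pretrained("sd_v15_onnx")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```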

docs/source/en/optimization/open_vino.md (+1, -1)

@@ -29,7 +29,7 @@ To load and run inference, use the [`~optimum.intel.OVStableDiffusionPipeline`].
 ```python
 from optimum.intel import OVStableDiffusionPipeline
 
-model_id = "runwayml/stable-diffusion-v1-5"
+model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
 prompt = "sailing ship in storm by Rembrandt"
 image = pipeline(prompt).images[0]
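open_vino.md goes on to recommend statically reshaping the pipeline for faster inference; that part is not in this diff, but it looks roughly like this (a sketch based on optimum-intel's OV pipeline API; shapes are illustrative):

```python
# Fix the input shapes so OpenVINO can fully optimize the compiled model.
batch_size, num_images, height, width = 1, 1, 512, 512
pipeline.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)
image = pipeline(prompt, height=height, width=width, num_images_per_prompt=num_images).images[0]
```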

docs/source/en/optimization/tome.md (+1, -1)

@@ -28,7 +28,7 @@ You can use ToMe from the [`tomesd`](https://github.com/dbolya/tomesd) library w
 import tomesd
 
 pipeline = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
+    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
 ).to("cuda")
 + tomesd.apply_patch(pipeline, ratio=0.5)
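After patching, the pipeline is used as normal, and the patch is reversible via the same library (a sketch, not part of this diff):

```python
# Generate with the ToMe-patched pipeline, then undo the patch.
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
tomesd.remove_patch(pipeline)  # restores the original attention modules
```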
