I was trying to merge a LoRA into a checkpoint. It did not matter whether I used an SDXL or a 1.5 model; both produced the same error, shown below.

I have Auto1111 installed in a Miniconda environment. Here is the info about my installation:
version: v1.10.1 • python: 3.10.6 • torch: 2.1.2+cu121 • xformers: N/A • gradio: 3.41.2 • checkpoint: 676f0d60c8
```
Traceback (most recent call last):
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\pfawkes\ai_programs\auto1111\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\pfawkes\ai_programs\auto1111\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 745, in pluslora
theta_0 = read_model_state_dict(checkpoint_info, device)
File "C:\Users\pfawkes\ai_programs\auto1111\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 1549, in read_model_state_dict
from backend.utils import load_torch_file
ModuleNotFoundError: No module named 'backend'
```
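For context, the failing import (`from backend.utils import load_torch_file` in `pluslora.py`) appears to target the `backend` package that ships with SD WebUI Forge, which does not exist in a stock Auto1111 install. A minimal sketch of a guarded import with a stand-in loader (this is my own illustration, not the extension's actual fix; the fallback function and its behavior are assumptions):

```python
# Hedged sketch: guard the Forge-only import and fall back to a plain
# safetensors/torch state-dict loader on stock Auto1111 installs.
try:
    from backend.utils import load_torch_file  # present in Forge, absent in stock Auto1111
except ModuleNotFoundError:
    def load_torch_file(path, device="cpu"):
        """Minimal stand-in: load a .safetensors or .ckpt checkpoint as a state dict."""
        if path.lower().endswith(".safetensors"):
            import safetensors.torch
            return safetensors.torch.load_file(path, device=device)
        import torch
        pl = torch.load(path, map_location=device)
        # Auto1111 .ckpt files usually nest the weights under "state_dict"
        return pl.get("state_dict", pl) if isinstance(pl, dict) else pl
```

With a guard like this, `read_model_state_dict` would at least resolve `load_torch_file` on both Forge and stock Auto1111, though the real fix belongs in the Supermerger extension itself.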
The Supermerger version is e4df29bc.