merger does not work anymore #415
Comments
Yup, me too.
File "G:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
Same issue, not sure what changed. It just started doing this one day.
UGH, wtf lol -- that's the exact same one I got. I can deal with the diffusers one, but... bleh, this is gonna kill me XD
I'mma see if a temporary fix works by using the older pluslora:
https://github.com/hako-mikan/sd-webui-supermerger/blob/ver20/scripts/mergers/pluslora.py
It's an issue in that file directly, and it's because they're working on FORGE and I'm not using Forge yet... because I'm lazy.
Edit: If you use the pluslora.py from the above and restart your instance, it actually works as far as I can see; I didn't get a CUDA error after I restarted. So yeah, just literally replace the pluslora.py with the older one and it works.
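As a rough convenience sketch (my own, not part of the extension), that replacement can be scripted in Python. The raw URL below is just the raw.githubusercontent.com form of the blob link above, and the extension path is a guess you will need to adjust to your own install; back up the original file first.

```python
# Rough helper: fetch the ver20 pluslora.py and overwrite the local copy.
import shutil
import urllib.request
from pathlib import Path

RAW_URL = ("https://raw.githubusercontent.com/hako-mikan/sd-webui-supermerger/"
           "ver20/scripts/mergers/pluslora.py")
# Adjust this to your own webui install location.
target = Path("extensions/sd-webui-supermerger/scripts/mergers/pluslora.py")

shutil.copy(target, target.with_suffix(".bak"))  # keep a backup of the current file
with urllib.request.urlopen(RAW_URL) as resp:
    target.write_bytes(resp.read())
print(f"Replaced {target}; restart the webui afterwards.")
```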
Still not working for me with the alternate pluslora.py.
Traceback (most recent call last):
I have an issue where it manages to generate a model using the basic method, but it couldn't generate anything and kept stating CLIP issues.
Look, I used WebUI 1.8 or something - not Forge - and I was on a Vast instance. It did throw key errors, but I don't know how to fix it beyond that at the moment. The main issue I had was the scripts.a1111 module that doesn't even exist.
I tried that and it didn't work for me.
OK, well, what I did was use V20 for the LoRA merging, and then used a different merger for the main merging. I'm aware these are Flux/Forge additions; there's a way to do it with git checkout (checking out the older ver20 version of the extension) that I don't understand.
With the help of GPT o1-preview, which I find the best model for coding and general reasoning tasks, I was able to fix the issue. I take no credit. Here is the solution. I downloaded the v20 pluslora.py file, and here are the changes it suggested (there is a sketch of the mismatch just below this comment):

What's happening:
- Function call: blockfromkey(key, keylist, isv2) is called with the parameters key, keylist, and isv2.
- Definition of LBLCOKS26 / BLOCKID26: the LBLCOKS26 list contains 28 items, whereas the ratios list you provide contains 26 elements.

Adjust the LBLCOKS26 list:
- Since your ratios list has 26 elements, modify the LBLCOKS26 list to contain only 26 items. This means removing the last two items: "embedders" and "transformer_resblocks" (the full modified list is in my reply below).
- Notice that the comma after "output_blocks_11_" is also removed. Why? Because the ratios list is intended to correspond one-to-one with each block in LBLCOKS26.

Check for other references:
- Make sure the rest of the script does not rely on "embedders" and "transformer_resblocks" being part of LBLCOKS26. Removing these items should not affect other functionality.

Verify the LoRA model structure:
- Ensure that the LoRA models you're attempting to merge do not require "embedders" and "transformer_resblocks". If they do, you'll need to provide corresponding ratios or adjust the script to handle these additional blocks.
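A minimal, self-contained Python sketch of the mismatch described above (illustrative only, not the actual pluslora.py code): the block names are taken from the modified list in the next comment, with the two removed entries added back to show where the 28-vs-26 discrepancy comes from.

```python
# Illustrative only: why a 26-element ratios list cannot be paired
# positionally with a 28-entry block-name list.
LBLCOKS26 = (
    ["encoder"]
    + [f"diffusion_model_input_blocks_{i}_" for i in range(12)]
    + ["diffusion_model_middle_block_"]
    + [f"diffusion_model_output_blocks_{i}_" for i in range(12)]
    + ["embedders", "transformer_resblocks"]  # the two entries the fix removes
)
ratios = [1.0] * 26  # one weight per block, as supplied in the UI

print(len(LBLCOKS26), len(ratios))  # 28 vs 26

# Pairing by position either silently drops the last two blocks or raises
# IndexError, depending on how the lookup is written:
block_ratio = dict(zip(LBLCOKS26, ratios))          # drops the last 2 blocks
# ratios[LBLCOKS26.index("transformer_resblocks")]  # would raise IndexError
```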
Any recommendations for how we can do a temporary fix until there's an update?
Yes. Modified LBLCOKS26:

LBLCOKS26 = [
    "encoder",
    "diffusion_model_input_blocks_0_",
    "diffusion_model_input_blocks_1_",
    "diffusion_model_input_blocks_2_",
    "diffusion_model_input_blocks_3_",
    "diffusion_model_input_blocks_4_",
    "diffusion_model_input_blocks_5_",
    "diffusion_model_input_blocks_6_",
    "diffusion_model_input_blocks_7_",
    "diffusion_model_input_blocks_8_",
    "diffusion_model_input_blocks_9_",
    "diffusion_model_input_blocks_10_",
    "diffusion_model_input_blocks_11_",
    "diffusion_model_middle_block_",
    "diffusion_model_output_blocks_0_",
    "diffusion_model_output_blocks_1_",
    "diffusion_model_output_blocks_2_",
    "diffusion_model_output_blocks_3_",
    "diffusion_model_output_blocks_4_",
    "diffusion_model_output_blocks_5_",
    "diffusion_model_output_blocks_6_",
    "diffusion_model_output_blocks_7_",
    "diffusion_model_output_blocks_8_",
    "diffusion_model_output_blocks_9_",
    "diffusion_model_output_blocks_10_",
    "diffusion_model_output_blocks_11_"  # removed "embedders" and "transformer_resblocks"
]

i.e. you just delete those two strings.
That seems to fix a different issue regarding index out of bounds, which occurred regularly too. But this issue is about missing modules and isn't fixed by this.
-> deleted "embedders" and "transformer_resblocks"

I choose two models, but when I press merge, nothing happens.
If anyone is still having problems with the buttons not doing anything, it's a bug with Gradio; you have to disable this: Settings -> "Automatically open webui in browser on startup".
And then? Should I manually paste local-address:port into my browser? And that's it?
Or Ctrl-click the link in the console, so it doesn't auto-launch and cause a conflict with another extension.
I don't really understand... I don't start Forge or A1111?
Thx... so the merge works at least without VAE:
Stage 0/2: 100%|###################################################################| 2515/2515 [00:31<00:00, 79.66it/s]
And if I use it I get a black image, so supermerger is done for me, sorry :D
If I press the red merge button, nothing happens and there is no error in the CMD window (it worked until last month). And if I press save fp16 (the grey button):
Traceback (most recent call last):
File "e:\WebUI_Forge\system\python\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "e:\WebUI_Forge\system\python\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "e:\WebUI_Forge\system\python\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "e:\WebUI_Forge\system\python\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "e:\WebUI_Forge\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "e:\WebUI_Forge\system\python\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "e:\WebUI_Forge\system\python\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "e:\WebUI_Forge\system\python\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "E:\WebUI_Forge\webui\extensions\sd-webui-supermerger\scripts\supermerger.py", line 573, in save_current_merge
msg = savemodel(None,None,custom_name,save_settings)
File "E:\WebUI_Forge\webui\extensions\sd-webui-supermerger\scripts\mergers\model_util.py", line 61, in savemodel
for name,module in shared.sd_model.named_modules():
AttributeError: 'StableDiffusionXL' object has no attribute 'named_modules'
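A minimal defensive sketch of how that failing call could be guarded (an illustration under the assumption that Forge's model wrapper simply lacks the torch nn.Module API, not the extension's actual fix; the helper name iter_named_modules is made up):

```python
# Hypothetical guard around the failing call in mergers/model_util.py:
#     for name, module in shared.sd_model.named_modules(): ...
def iter_named_modules(sd_model):
    """Yield (name, module) pairs if the model exposes them, else explain why not."""
    if hasattr(sd_model, "named_modules"):
        yield from sd_model.named_modules()
        return
    raise TypeError(
        f"{type(sd_model).__name__} does not expose named_modules(); "
        "this webui build wraps the model differently, so saving needs a "
        "backend-specific code path."
    )

# savemodel() would then iterate the wrapper instead of the raw attribute:
#     for name, module in iter_named_modules(shared.sd_model):
#         ...
```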