Hi crystal-text-llm team,

Thanks for this interesting work! I was going through the code base and noticed that in the `prepare_model_and_tokenizer` function in the `llama-sample.py` script, the LoRA weights are not merged back into the base model. Is there any reason for not doing so? The function returns the adapter-wrapped model directly:
```python
model = PeftModel.from_pretrained(model, args.model_path, device_map="auto")
return model, tokenizer
```