LocalAI version:
latest-aio-gpu-nvidia-cuda-12
Environment, CPU architecture, OS, and Version:
docker-compose.yml
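Since the compose file is attached rather than inlined, here is a minimal sketch of the kind of setup implied, assuming the documented LocalAI AIO image named above, the default port 8080, and the /models mount mentioned under "Additional context"; the actual attached file may differ:

```yaml
services:
  api:
    image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    ports:
      - "8080:8080"
    environment:
      - DEBUG=true          # assumption: verbose logging, useful for tracing backend selection
    volumes:
      - ./models:/models    # assumption: matches the /models directory wiped before the log was taken
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```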
Describe the bug
Every model that uses llama.cpp as its backend fails: chatting produces no response in the webui.
To Reproduce
1. Replicate my setup (see the docker-compose.yml above).
2. Chat with the pre-installed llava model from the webui.
3. Observe that no response appears in the webui.
4. Observe the errors in the logs.
Expected behavior
I should've received a response
Logs
Here's LocalAI running from start to finish (with me running llava from the webui): localai-log.txt
Additional context
I wiped /models and ran LocalAI once before recording the log.
From what I can see, the model loads successfully in llama.cpp, but LocalAI doesn't recognize this and tries a series of other backends, ultimately falling through to stablediffusion.
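Since LocalAI appears to be cycling through backends via auto-detection, one mitigation worth noting is pinning the backend explicitly in the model's YAML config so it never falls through to stablediffusion. A minimal sketch, with placeholder file names; the exact backend identifier (e.g. llama-cpp vs llama) depends on the LocalAI version:

```yaml
# models/llava.yaml — hypothetical per-model config; field names follow the
# LocalAI model-definition format, file names are placeholders.
name: llava
backend: llama-cpp          # assumption: the backend string varies across LocalAI versions
parameters:
  model: llava-model.gguf   # placeholder for the actual GGUF file in /models
```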