Inspired by an earlier issue: the pace of development in this space is overwhelming, and I'm sure this project will gain traction. At that point it wouldn't be feasible for you to manually add support for everything users request.
Hence, would it be possible to let users choose their own models, probably locally downloaded in GGML or GPTQ format, as is popular with Oobabooga? This would also give users flexibility to match their hardware (e.g. scaling down to 1.3B models or up to 30+B models).
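One way this could work is a small registry that maps user-registered local model files to rough size budgets, so the app picks the largest model that fits the device. A minimal sketch (all class, field, and file names here are hypothetical, not part of this project):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class ModelSpec:
    name: str
    path: Path          # user-supplied local GGML/GPTQ file (hypothetical paths below)
    n_params_b: float   # rough parameter count, in billions


class ModelRegistry:
    """Hypothetical registry letting users plug in their own local models."""

    def __init__(self) -> None:
        self._models: dict[str, ModelSpec] = {}

    def register(self, spec: ModelSpec) -> None:
        self._models[spec.name] = spec

    def select(self, max_params_b: float) -> ModelSpec:
        """Pick the largest registered model that fits the device budget."""
        candidates = [m for m in self._models.values()
                      if m.n_params_b <= max_params_b]
        if not candidates:
            raise ValueError("no registered model fits the budget")
        return max(candidates, key=lambda m: m.n_params_b)


registry = ModelRegistry()
registry.register(ModelSpec("tiny-code", Path("models/tiny-1.3b.ggml"), 1.3))
registry.register(ModelSpec("big-general", Path("models/big-30b.ggml"), 30.0))

# A device with a ~13B budget falls back to the 1.3B model.
print(registry.select(max_params_b=13.0).name)
```

The point is that the project only needs to define the selection interface; which models exist, and at what sizes, stays entirely in the user's hands.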
Where in the code is model selection? Can I alter it myself to use my own models? This matters especially for code-related tasks, where having a model specialized for code-based questions would help.