Tags: mfreed420/LocalAI
⬆️ Update ggerganov/llama.cpp (mudler#1655)
feat(grpc): backend SPI pluggable in embedding mode (mudler#1621)
* run server
* grpc backend embedded support
* backend providable
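The "backend providable" item points at a provider-style registration (an SPI) for gRPC backends. A minimal sketch of that registration pattern in Go; the names (Backend, Register, Provide) and the embeddings-only interface are illustrative assumptions, not code from the LocalAI repository:

```go
package backend

import "fmt"

// Backend is a hypothetical interface a pluggable backend would satisfy;
// it only exposes an embeddings call for illustration.
type Backend interface {
	Embeddings(text string) ([]float32, error)
}

// registry maps backend names to constructors so callers can provide
// their own implementation instead of relying on a built-in one.
var registry = map[string]func() (Backend, error){}

// Register makes a backend constructor available under a name.
func Register(name string, ctor func() (Backend, error)) {
	registry[name] = ctor
}

// Provide returns a registered backend by name, or an error if none exists.
func Provide(name string) (Backend, error) {
	ctor, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("backend %q not registered", name)
	}
	return ctor()
}
```

A caller would then invoke Register at startup and Provide when serving a request; both names are hypothetical here.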
feat(extra-backends): Improvements, adding mamba example (mudler#1618)
* vllm: add max_tokens, wire up stream events
* mamba: fixups, add examples for mamba-chat
* examples(mamba-chat): add
* docs: update
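To exercise the new max_tokens and streaming options, a request against LocalAI's OpenAI-compatible chat completions endpoint would look roughly like the sketch below; the local address, port, and the "mamba-chat" model name are assumptions for illustration:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Request body follows the OpenAI chat-completions schema that LocalAI mirrors;
	// the model name and the local port are assumptions.
	body, err := json.Marshal(map[string]any{
		"model":      "mamba-chat",
		"max_tokens": 128,
		"stream":     true,
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// With stream set to true the server sends chunked, server-sent-event style
	// output; print each line as it arrives.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```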
Update Dockerfile
⬆️ Update ggerganov/llama.cpp (mudler#1558)
docs: improve getting started (mudler#1553)
* cleanups
* use Docker Hub links
* shrink command to minimum
ci(dockerhub): push images also to dockerhub (mudler#1542)
fix(download): correctly check for not found error (mudler#1514)
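The fix title suggests the downloader now distinguishes a genuine "not found" from other failures. A generic Go sketch of that kind of check, using a sentinel error and errors.Is; this is an illustration of the pattern, not the actual LocalAI downloader code:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// ErrNotFound is a sentinel for a missing remote file, so callers can
// test for it with errors.Is instead of matching on error strings.
var ErrNotFound = errors.New("remote file not found")

func download(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Only a 404 maps to ErrNotFound; other non-200 statuses are distinct errors.
	if resp.StatusCode == http.StatusNotFound {
		return fmt.Errorf("%w: %s", ErrNotFound, url)
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s for %s", resp.Status, url)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// URL and destination path are placeholders.
	if err := download("http://localhost:8080/models/example.bin", "/tmp/example.bin"); err != nil {
		if errors.Is(err, ErrNotFound) {
			log.Printf("not found: %v", err)
			return
		}
		log.Fatal(err)
	}
}
```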
Update README.md
pin go-rwkv