Tags: lemanschik/node-llama-cpp
fix: bump llama.cpp release used in prebuilt binaries (withcatai#247)
fix: remove CUDA binary compression for Windows (withcatai#243)
* fix: remove CUDA binary compression for Windows
* fix: improve `inspect gpu` command output
fix: bugs (withcatai#241)
* fix: avoid duplicate context shifts
* fix: `onProgress` on `ModelDownloader`
* fix: re-enable CUDA binary compression
* fix: more thorough tests before loading a binary
* fix: increase compatibility of prebuilt binaries
fix: remove CUDA binary compression for now (withcatai#238)
feat: compress CUDA prebuilt binaries (withcatai#236)
* feat: compress CUDA prebuilt binaries
* feat: automatically solve more CUDA compilation errors
feat: render markdown in the Electron example (withcatai#234)
* feat: render markdown in the Electron example
* fix: only build practical targets by default in the Electron example
fix: async GPU info getters (withcatai#232)
* fix: make GPU info getters async
* fix: Electron example build
fix: Electron example build (withcatai#228)
* fix: Electron example build
* refactor: rename `llamaBins` to `bins`