Insights: ggml-org/llama.vscode
Overview
- 4 Merged pull requests
- 0 Open pull requests
- 1 Closed issue
- 2 New issues
4 Pull requests merged by 3 people
- core : fix requests (#26, merged Feb 8, 2025)
- Adds OpenAI compatible endpoint option (#16, merged Feb 8, 2025)
- Fix manual trigger without cache + accept always on pressing a Tab (#25, merged Feb 7, 2025)
- Fix the problem with cutting the lines of a suggestion after (#22, merged Feb 4, 2025)
1 Issue closed by 1 person
- [Bug] Ctrl+L doesn't toggle, only adds suggestion (#24, closed Feb 8, 2025)
2 Issues opened by 2 people
- Tab-jumping (#27, opened Feb 8, 2025)
- Offline Model Loading When Previously Downloaded via llama-server fails (#23, opened Feb 4, 2025)
1 Unresolved conversation
Conversations sometimes continue on older items that are not yet closed. Below is a list of the Issues and Pull Requests with unresolved conversations.
- ability to auto start a cmd (e.g. llama-server) with vscode (#15, commented on Feb 4, 2025, 0 new comments)