Merge pull request karpathy#733 from zhangpiu/feature/llm.cpp
Add llm.cpp (a port of this project using the Eigen library, supporting CPU/CUDA); link to notable forks in README
karpathy authored Aug 26, 2024
2 parents ebc28b9 + 25a302f commit a2bdae2
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions README.md
@@ -213,6 +213,9 @@ Lastly, I will be a lot more sensitive to complexity in the root folder of the p
- [llm.cpp](https://github.com/gevtushenko/llm.c) by @[gevtushenko](https://github.com/gevtushenko): a port of this project using the [CUDA C++ Core Libraries](https://github.com/NVIDIA/cccl)
  - This fork was covered in [this lecture](https://www.youtube.com/watch?v=WiB_3Csfj_Q) in the [CUDA MODE Discord Server](https://discord.gg/cudamode)

- C++/CUDA
  - [llm.cpp](https://github.com/zhangpiu/llm.cpp/tree/master/llmcpp) by @[zhangpiu](https://github.com/zhangpiu): a port of this project using the [Eigen](https://gitlab.com/libeigen/eigen) library, supporting CPU/CUDA.

- WebGPU C++
- [gpu.cpp](https://github.com/AnswerDotAI/gpu.cpp) by @[austinvhuang](https://github.com/austinvhuang): a library for portable GPU compute in C++ using native WebGPU. Aims to be a general-purpose library, but also porting llm.c kernels to WGSL.

