Have you ever wanted to inference a baby Llama 2 model in pure C? No? Well, now you can!
With this code you can train the Llama 2 LLM architecture from scratch in PyTorch, save the weights to a raw binary file, and then load that into one ~simple 500-line C file (run.c) that inferences the model, in fp32 only for now. On my cloud Linux devbox a dim 288, 6-layer, 6-head model (~15M params) inferences at ~18 tok/s in fp32, and about the same on my M1 MacBook Air. I was somewhat pleasantly surprised that one can run reasonably sized models (a few tens of millions of params) at interactive rates with an approach this simple.
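The idea is that the checkpoint on disk is nothing fancy, just the model configuration followed by the raw fp32 weights, so run.c can fread everything into flat arrays. Below is a minimal sketch of what such an export can look like in PyTorch; the header fields, their order, and the weight ordering are assumptions for illustration, not necessarily the exact format of model.bin:

```python
# Hypothetical export of a PyTorch model to a raw binary the C code can fread().
# The header layout and the weight order here are assumptions for illustration.
import struct
import torch

def export_raw(model, config, path="out/model.bin"):
    with open(path, "wb") as f:
        # assumed header: a handful of int32 config values
        f.write(struct.pack("iiiiiii",
                            config.dim, config.hidden_dim, config.n_layers,
                            config.n_heads, config.n_kv_heads,
                            config.vocab_size, config.max_seq_len))
        # then every weight tensor, flattened to float32, in a fixed order
        # that the C code's struct of pointers mirrors
        for name, w in model.state_dict().items():
            f.write(w.detach().cpu().to(torch.float32).numpy().tobytes())
```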
Please note that this is just a weekend project: I took nanoGPT, tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in run.c. As such, this is not really meant to be a production-grade library right now.
Hat tip to llama.cpp for inspiring this project. I wanted something super minimal so I chose to hard-code the llama-2 architecture, stick to fp32, and just roll one inference file of pure C with no dependencies.
Let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the TinyStories dataset (~58MB download) and place it into the default checkpoint directory out:
wget https://karpathy.ai/llama2c/model.bin -P out
(If that doesn't work, try Google Drive.) Compile and run the C code:
gcc -o run run.c -lm
./run out/model.bin
You'll notice that this just streams the raw token ids. Unless you can read those directly, you'll want to translate them into text. For now, sadly, we have to run the C code through a simple Python wrapper that does the translation (see the file, it's just 30 lines):
pip install sentencepiece
python run_wrap.py
You'll see text stream by, but with weird extra spaces in it (sorry). After that, the whole sample is printed properly. (Call for help: help me fix the sentencepiece streaming decoding, or better yet, delete this wrapper entirely.) On my M1 MacBook Air this runs at ~18 tokens/s, not bad for super naive fp32 single-threaded C code.
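For reference, the core of such a wrapper is tiny. Here is a rough sketch of the idea, not the actual run_wrap.py; the tokenizer path and the way token ids are passed from the C program are assumptions:

```python
# Rough sketch of a decode wrapper (not the actual run_wrap.py): run the C
# program, collect the raw token ids it prints, decode them with sentencepiece.
import subprocess
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")  # Llama 2 tokenizer

proc = subprocess.run(["./run", "out/model.bin"], capture_output=True, text=True)
token_ids = [int(t) for t in proc.stdout.split()]

# decoding the whole sequence at once is easy; decoding id-by-id while
# streaming is what currently produces the weird extra spaces
print(sp.decode(token_ids))
```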
It should be possible to load the weights released by Meta, but I haven't tried because the inference speed, even of the 7B model, would probably not be great with this baby single-threaded C program. So in this repo we focus on narrower applications and train the same architecture from scratch, in this case on the TinyStories dataset, for fun.
First, let's download and pretokenize a source dataset. I like TinyStories, so that's the only example currently available in this repo, but it should be very easy to add other datasets; see the code (and the rough sketch after the commands below).
python tinystories.py download
python tinystories.py pretokenize
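Conceptually, pretokenizing a dataset just means running every example through the Llama 2 sentencepiece tokenizer and writing the resulting token ids out as one flat binary stream that training can sample windows from. Here is a rough sketch for a hypothetical custom dataset; the file names, the BOS handling, and the uint16 dtype are assumptions rather than the exact layout tinystories.py uses:

```python
# Rough sketch of pretokenizing a custom dataset into a flat binary stream of
# token ids; file names and dtype are assumptions, see tinystories.py for the
# real thing.
import numpy as np
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

all_tokens = []
with open("data/my_dataset.txt") as f:        # hypothetical: one example per line
    for line in f:
        all_tokens.append(sp.bos_id())        # prepend BOS, Llama style
        all_tokens.extend(sp.encode(line.strip()))

# train.py can then sample fixed-length windows out of this flat array
np.array(all_tokens, dtype=np.uint16).tofile("data/my_dataset.bin")
```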
Then train our model:
python train.py
See the train.py script for more exotic launches and hyperparameter overrides. I didn't tune the hyperparameters; I expect simple hyperparameter exploration should give better models. Totally understand if you want to skip model training; for a simple demo just download my pretrained model and save it into the directory out:
wget https://karpathy.ai/llama2c/model.bin -P out
Once we have the model.bin file, we can inference it in C. Compile the C code first:
gcc -o run run.c -lm
You can now run it simply as:
./run out/model.bin
But note that this only emits the raw SentencePiece tokens. To decode the tokens into text too, use the simple wrapper:
python run_wrap.py
Watch the tokens stream by, fun! Help me fix the weird spaces. We can also run the PyTorch inference script for comparison:
python sample.py
Which gives the same results. More detailed testing will be done in test_all.py, run as:
$ pytest
Currently you will need two files to run the test: the model.bin file and the model.ckpt file from PyTorch training I ran earlier. I have to think through running the tests without having to download 200MB of data.
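For a flavor of what can be checked cheaply once both files are on disk, here is a hypothetical consistency test (not the actual test_all.py): it assumes the header layout sketched earlier, assumes the checkpoint lives at out/model.ckpt, and assumes a model_args dict inside it, any of which may differ from the real files:

```python
# Hypothetical sanity check, not the actual test_all.py: verify that the raw
# model.bin header agrees with the config stored in the PyTorch checkpoint.
import struct
import torch

def test_header_matches_checkpoint():
    with open("out/model.bin", "rb") as f:
        dim, hidden_dim, n_layers, n_heads, n_kv_heads, vocab_size, max_seq_len = \
            struct.unpack("iiiiiii", f.read(7 * 4))
    ckpt = torch.load("out/model.ckpt", map_location="cpu")  # assumed path
    args = ckpt["model_args"]  # assumed key, following nanoGPT-style checkpoints
    assert (dim, n_layers, n_heads) == (args["dim"], args["n_layers"], args["n_heads"])
    assert vocab_size == args["vocab_size"]
```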
- why can't SentencePiece decode iteratively (streaming) properly?
- would love to delete run_wrap.py and decode tokens to strings directly in the C code
- todo multiquery support? doesn't seem as useful for smaller models that run on CPU (?)
- todo support inferencing beyond max_seq_len steps, have to think through the kv cache
- why is MFU so low (~10%) on my A100 40GB for training?
- weird errors with torch.compile and wandb when using DDP
- make more better tests to decrease yolo
- add a requirements.txt
MIT