Summary

Overview

This file provides instructions for running LLAMA models with different parameters via the Qualcomm HTP backend. We currently support the following models:

  1. LLAMA2 Stories 110M
  2. LLAMA3.2 1B
  3. LLAMA3.2 3B (WIP)

We offer the following modes to execute the model:

KV Cache Mode: In KV Cache mode, the model takes in a single previous token and generates the next predicted token along with its KV cache. It is efficient for generating subsequent tokens after the initial prompt.

Hybrid Mode: Hybrid mode leverages the strengths of both the AR-N model and KV cache mode to optimize token generation speed. Initially, it uses the AR-N model to efficiently generate the prompt's key-value (KV) cache; it then switches to KV cache mode, which excels at generating subsequent tokens.

  • AR-N model: The auto-regression (AR) length N determines the number of tokens consumed and the number of logits produced per step. The AR-N model processes the prompt and generates the key-value (KV) cache, acting as the prompt processor in hybrid mode.
  • Prompt processing with AR-N model:

[Figure: Prompt Processing With AR-N Model]

Prompt processing is done using a for-loop: an N-token block is taken and the KV cache is updated for that block. This is repeated until all tokens are consumed, with the last block padded if needed. For flexibility, the AR-N model can handle any input length less than the maximum sequence length. For TTFT (time to first token), the input length (and thus the number of blocks) varies with the actual prompt length rather than always being the same.
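The loop below is a minimal Python sketch of this block-wise prefill followed by KV cache decoding. The names model_prefill, model_decode, argmax, and PAD_TOKEN are hypothetical stand-ins for the compiled prefill/decode graphs, the sampling step, and the padding id; they are not the actual ExecuTorch runner API.

# Minimal sketch of hybrid mode: AR-N prefill over the prompt, then KV cache decode.
# model_prefill / model_decode / argmax and PAD_TOKEN are hypothetical stand-ins.

PAD_TOKEN = 0  # assumed padding token id

def prefill_prompt(prompt_tokens, prefill_ar_len, kv_cache, model_prefill):
    """Consume the prompt in N-token blocks, updating the KV cache per block."""
    logits = None
    for start in range(0, len(prompt_tokens), prefill_ar_len):
        block = prompt_tokens[start:start + prefill_ar_len]
        n_valid = len(block)
        # The last block may be shorter than prefill_ar_len and is padded.
        block = block + [PAD_TOKEN] * (prefill_ar_len - n_valid)
        logits, kv_cache = model_prefill(block, kv_cache, n_valid=n_valid)
    return logits, kv_cache

def generate(prompt_tokens, max_new_tokens, prefill_ar_len, kv_cache,
             model_prefill, model_decode, argmax):
    """Hybrid mode: AR-N model for the prompt, KV cache mode for new tokens."""
    logits, kv_cache = prefill_prompt(prompt_tokens, prefill_ar_len,
                                      kv_cache, model_prefill)
    token = argmax(logits)  # next token from the last valid position's logits
    output = [token]
    for _ in range(max_new_tokens - 1):
        logits, kv_cache = model_decode(token, kv_cache)  # one token per step
        token = argmax(logits)
        output.append(token)
    return output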

Instructions

Note

  1. For hybrid mode, the export time is longer and can take 1-4 hours to complete, depending on the specific model being exported.
  2. When exporting a hybrid mode model, memory consumption is higher. Taking LLAMA3.2 1B as an example, please ensure the device has at least 80 GB of memory and swap space.

Step 1: Setup

  1. Follow the tutorial to set up ExecuTorch.
  2. Follow the tutorial to build the Qualcomm AI Engine Direct Backend.

Step 2: Prepare Model

LLAMA2

Download and prepare the stories110M model

# tokenizer.model & stories110M.pt:
wget "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.pt"
wget "https://raw.githubusercontent.com/karpathy/llama2.c/master/tokenizer.model"

# tokenizer.bin:
python -m extension.llm.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin

# params.json:
echo '{"dim": 768, "multiple_of": 32, "n_heads": 12, "n_layers": 12, "norm_eps": 1e-05, "vocab_size": 32000}' > params.json

LLAMA3.2

Follow the instructions to download models. At the end of this step, users should have the following files ready: consolidated.00.pth, params.json, and tokenizer.model.

Step 3: Run default examples using hybrid mode

LLAMA2

python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --ptq 16a4w --checkpoint stories110M.pt --params params.json --tokenizer_model tokenizer.model --tokenizer_bin tokenizer.bin --llama_model stories110m --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "Once upon a time"

LLAMA3.2

Default example using hybrid mode.

python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "what is 1+1"

KV Cache update mechanism

We have two distinct mechanisms for updating the key-value (KV) cache, which can be selected at runtime: Shift Pointer and Smart Mask.

Shift Pointer mechanism

[Figure: Shift Pointer mechanism]

The figure illustrates the process of updating the key and value caches during each inference step. In the key cache update process, we initially allocate, for each layer, num_head buffers of size (head_dim + 1) * (seq_len - 1). After a single inference, the new key cache is copied from the key output pointer k_out and appended to the key cache. The buffer start pointer of the key cache, k_in, then moves to the next token, leaving its previous position unused. This process is repeated for each subsequent inference step. For the value cache update process, we first allocate a contiguous memory block of size (num_head + 1) * head_dim * (seq_len - 1) for each layer, with the last head reserved for I/O shifting. After the first inference, the cache is updated by simply shifting the pointers of all heads to the next token position, so that only the previous head_dim * 1 section at the buffer start pointer v_in of the first head becomes unused. This process is repeated for each subsequent inference step.
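Below is a conceptual NumPy sketch of the Shift Pointer update for one layer, not the actual shared-buffer runtime code; the window length, buffer over-allocation, and the assumption that the model writes its value output in place are illustrative simplifications of the description above.

import numpy as np

# Conceptual sketch of the Shift Pointer update for one layer (NumPy stand-in).
# The model always attends over a fixed-length window of W = seq_len - 1 past
# positions starting at the shift pointer. Each step, the new key is copied just
# past the window end and the key pointer advances; the new value is assumed to be
# written there in place by the model (shared buffer), so the value update is only
# a pointer shift.

num_head, head_dim, seq_len = 4, 8, 16
W = seq_len - 1  # fixed window of past positions visible to the model

# Over-allocated backing buffers so the window can slide W times without wrapping.
k_buf = np.zeros((num_head, head_dim, 2 * W))
v_buf = np.zeros((num_head, 2 * W, head_dim))

def shift_pointer_step(k_out, v_out, k_in, v_in):
    """k_out / v_out: the new token's key/value, each shaped [num_head, head_dim]."""
    # Key: copy the new column just past the current window, then shift the window
    # start. Cost: num_head * head_dim copies.
    k_buf[:, :, k_in + W] = k_out
    k_in += 1
    # Value: with a shared buffer the model already wrote v_out at v_in + W, so the
    # real update is only the pointer shift (O(1)); the write here just simulates
    # that in-place output for this standalone sketch.
    v_buf[:, v_in + W, :] = v_out
    v_in += 1
    return k_in, v_in  # positions [k_in, k_in + W) form the new visible window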

Smart Mask mechanism

[Figure: Smart Mask mechanism]

The Smart Mask mechanism streamlines the process of updating tokens in the cache. Unlike the Shift Pointer mechanism, which requires moving the buffer start pointer k_in/v_in of the cache, the Smart Mask mechanism updates only the new token at the specified position. This approach eliminates the need to adjust the buffer start pointer. This mechanism is beneficial for shared buffers but requires CPU memory copying.
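For comparison, here is a conceptual NumPy sketch of the Smart Mask update for one layer; the shapes and mask handling are illustrative, not the actual runtime code.

import numpy as np

# Conceptual sketch of the Smart Mask update for one layer (NumPy stand-in).
# The cache buffers never move: the new token's key/value are written into the
# slot for the current position and the attention mask is opened at that position,
# so no buffer start pointers need to shift.

num_head, head_dim, seq_len = 4, 8, 16

k_cache = np.zeros((num_head, head_dim, seq_len - 1))
v_cache = np.zeros((num_head, seq_len - 1, head_dim))
attn_mask = np.full(seq_len - 1, -np.inf)  # all past positions masked initially

def smart_mask_step(k_out, v_out, pos):
    """Write the new token's key/value at `pos` and unmask that position."""
    k_cache[:, :, pos] = k_out   # num_head * head_dim copies
    v_cache[:, pos, :] = v_out   # num_head * head_dim copies
    attn_mask[pos] = 0.0         # position `pos` is now visible to attention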

Analysis of KV Cache Update Mechanisms per Layer per Inference

| Mechanism | Time Complexity (K) | Time Complexity (V) | Space Complexity (K) | Space Complexity (V) |
| --- | --- | --- | --- | --- |
| Shift Pointer | num_head * head_dim | 1 | num_head * (head_dim + 1) * seq_len | (num_head + 1) * head_dim * (seq_len - 1) |
| Smart Mask | num_head * head_dim | num_head * head_dim | num_head * seq_len * head_dim | num_head * seq_len * head_dim |
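For a rough sense of scale, the snippet below plugs illustrative dimensions (num_head = 32, head_dim = 64, seq_len = 128; example values only, not the exact shapes exported by the script) into the per-layer space formulas above.

# Per-layer KV cache space from the table above, using example dimensions only.
num_head, head_dim, seq_len = 32, 64, 128

shift_pointer_k = num_head * (head_dim + 1) * seq_len            # 266,240 elements
shift_pointer_v = (num_head + 1) * head_dim * (seq_len - 1)      # 268,224 elements
smart_mask_k = smart_mask_v = num_head * seq_len * head_dim      # 262,144 elements

print(shift_pointer_k, shift_pointer_v, smart_mask_k)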

Additional Configs when running the script

If you would like to compile the model only, we have provided the flag --compile_only. Taking LLAMA3.2 as an example:

python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "what is 1+1" --compile_only

On the other hand, if you already have a pre-compiled .pte model, you can perform inference by providing the flag --pre_gen_pte and specifying the folder that contains the .pte model. Taking LLAMA3.2 as an example:

python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "what is 1+1" --pre_gen_pte ${FOLDER_TO_PRE_GEN_PTE}

You can select the KV cache update mechanism at runtime by setting the KV_UPDATER variable to either "shift_pointer" or "smart_mask". By default, it is set to "smart_mask".

KV_UPDATER="shift_pointer"

python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "what is 1+1" --kv_updator ${KV_UPDATER}