This repository contains the code for our paper, "Token-Budget-Aware LLM Reasoning".
Reasoning is crucial for LLMs to perform complex tasks, but methods like Chain-of-Thought (CoT) reasoning often lead to significant token overhead and increased costs. We identify substantial token redundancy in the reasoning process of state-of-the-art LLMs and propose a token-budget-aware reasoning framework. This approach dynamically allocates token budgets based on problem complexity to guide the reasoning process. Experiments demonstrate that our method reduces token usage in CoT reasoning with minimal performance trade-offs, striking a practical balance between efficiency and accuracy.
For the required dependencies, please see requirements.txt.
We provide implementations of both Directly Answering and Vanilla CoT.
# Directly Answering
python -u inference.py --data_name GSM8K-Zero --model gpt-4o-mini

# Vanilla CoT
python -u inference.py --data_name GSM8K-Zero --model gpt-4o-mini --reasoning
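For intuition, the sketch below shows what the two prompting modes look like with the OpenAI chat API. The prompt wording, the sample question, and the choice of gpt-4o-mini are illustrative assumptions; the exact prompts live in inference.py.

# Illustrative sketch of the two prompting modes (not the exact prompts used by inference.py).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Josh has 5 apples and buys 3 more. How many apples does he have?"
prompts = {
    "Directly Answering": f"{question}\nAnswer the question directly without explanation.",
    "Vanilla CoT": f"{question}\nLet's think step by step.",
}

for name, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # CoT typically spends far more output tokens than direct answering.
    print(name, "output tokens:", response.usage.completion_tokens)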
The output token costs of Directly Answering and Vanilla CoT are compared as follows:
# Search for the optimal (minimal) token budget
python -u search_budget.py --do_search --data_name GSM8K-Zero
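Conceptually, the budget search shrinks the token budget, binary-search style, for as long as the budget-constrained answer stays correct. The sketch below is only an illustration under assumed helpers (answer_with_budget, is_correct); the actual procedure in search_budget.py may differ in detail.

def search_minimal_budget(question, ground_truth, initial_budget,
                          answer_with_budget, is_correct):
    """Shrink the token budget while budget-constrained CoT still answers correctly."""
    best_budget = initial_budget
    lo, hi = 1, initial_budget
    while lo <= hi:
        mid = (lo + hi) // 2
        answer = answer_with_budget(question, mid)  # hypothetical helper
        if is_correct(answer, ground_truth):        # hypothetical helper
            best_budget = mid   # still correct: try an even smaller budget
            hi = mid - 1
        else:
            lo = mid + 1        # too tight: allow a larger budget
    return best_budget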
The output token costs of Vanilla CoT and CoT with the optimal searched budget are compared as follows:
We provide two implementations of TALE: TALE-EP and TALE-PT.
TALE-EP (TALE with a zero-shot budget estimator):
python -u TALE-EP.py --data_name GSM8K-Zero --model gpt-4o-mini
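At a high level, TALE-EP first asks the model to estimate a token budget for the question and then answers with a budget-constrained CoT prompt. The sketch below illustrates this two-step idea; the estimation prompt, the fallback budget of 100, and the helper ask are assumptions, not the exact implementation in TALE-EP.py.

import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(prompt, model="gpt-4o-mini"):
    """Hypothetical helper: send one prompt and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def tale_ep(question):
    # Step 1: zero-shot budget estimation by the LLM itself.
    estimate = ask(
        "Estimate how many output tokens are needed to reason through and answer "
        f"the following question. Reply with a single number.\n{question}"
    )
    match = re.search(r"\d+", estimate)
    budget = int(match.group()) if match else 100  # fallback budget (assumption)

    # Step 2: budget-constrained CoT prompting.
    return ask(f"{question}\nLet's think step by step and use less than {budget} tokens.")

print(tale_ep("A train travels 60 km in 1.5 hours. What is its average speed in km/h?"))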
TALE-PT with LoRA fine-tuning:
# for training
python -u TALE-PT.py --strategy lora --model_name llama-3.1-8B-Instruct --data_path <your_training_data_path> --output_dir <your_output_dir> --batch_size 2 --save
# for eval
python -u TALE-PT.py --eval --strategy lora --model_name llama-3.1-8B-Instruct --data_path <your_eval_data_path> --output_dir <your_output_dir> --batch_size 2 --save
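As a rough sketch of what the LoRA strategy involves, the snippet below attaches LoRA adapters to the base model with the peft library; the adapted model is then fine-tuned on budget-constrained CoT outputs so the budget behavior is internalized without an explicit budget in the prompt. The rank, alpha, dropout, and target modules here are illustrative defaults, not necessarily the hyperparameters used inside TALE-PT.py.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed default)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only adapter weights are trainable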
TALE-PT with DPO:
# for training
python -u TALE-PT.py --strategy dpo --model_name llama-3.1-8B-Instruct --data_path <your_training_data_path> --output_dir <your_output_dir> --batch_size 2 --save
# for eval
python -u TALE-PT.py --eval --strategy dpo --model_name llama-3.1-8B-Instruct --data_path <your_eval_data_path> --output_dir <your_output_dir> --batch_size 2 --save
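For the DPO strategy, training revolves around preference pairs in which a concise, budget-aware chain of thought is preferred over a verbose vanilla one. The example below only illustrates the data shape (standard prompt/chosen/rejected triples, as consumed by trainers such as trl's DPOTrainer); the actual pair construction in TALE-PT.py may differ.

# Illustrative preference pair: the concise, budget-aware CoT is "chosen",
# the verbose vanilla CoT is "rejected". (Example data, not from the repo.)
preference_example = {
    "prompt": "Josh has 5 apples and buys 3 more. How many apples does he have?",
    "chosen": "5 + 3 = 8. The answer is 8.",
    "rejected": (
        "Josh starts with 5 apples. He then buys 3 more apples. To find the total, "
        "we add the apples he started with to the apples he bought: 5 + 3 = 8. "
        "Therefore, Josh has 8 apples. The answer is 8."
    ),
}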
@article{han2024token,
title={Token-Budget-Aware LLM Reasoning},
author={Han, Tingxu and Wang, Zhenting and Fang, Chunrong and Zhao, Shiyu and Ma, Shiqing and Chen, Zhenyu},
journal={arXiv preprint arXiv:2412.18547},
year={2024}
}