Cheer AI up with the "let's think step by step" prompt? More plz. Let’s think not just step by step, but also one by one.
Auto-CoT uses more cheers & diversity to SAVE huge manual efforts in chain-of-thought prompt design, matching or even exceeding the performance of manual design with GPT-3.
Check out our 25-page paper for more information.
Python>=3.8

```
pip install torch==1.8.2+cu111 torchtext==0.9.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
pip install -r requirements.txt
```
- Clone the repository:

  ```
  git clone https://github.com/your-repo/auto-cot.git
  cd auto-cot
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Install MCP with CLI extras using uv:

  ```
  uv add 'mcp[cli]'
  ```

- Set up your OpenAI API key:

  ```
  export OPENAI_API_KEY='your-api-key'
  ```
The `server.py` script can be used in four different ways:
Run an interactive chat session:

```
python server.py --chat
```
Use as a Python module:

```python
from server import CoTServer

server = CoTServer()
result = server.process_question("auto_cot", "your question")
print(result)
```
Run as a web server:

```
python server.py --server
```

The server will be available at http://localhost:5000.
- POST `/api/cot`

  Request body (`method` is optional and defaults to `auto_cot`):

  ```json
  {
    "question": "your question",
    "method": "auto_cot"
  }
  ```

  Response:

  ```json
  {
    "question": "your question",
    "response": "model response",
    "method": "auto_cot"
  }
  ```
Run as a Model Context Protocol server:

```
python server.py --mcp
```

The MCP server exposes a `process_question` tool that can be used via the MCP Python SDK.
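As an illustration, a client could connect over stdio with the MCP Python SDK and call the tool; this is a sketch, assuming the server speaks stdio and that the tool takes `method` and `question` arguments mirroring the Python module API above:

```python
# Sketch: call the process_question tool via the MCP Python SDK over stdio.
# The argument names ("method", "question") are assumptions, not confirmed API.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py", "--mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "process_question",
                arguments={"method": "auto_cot", "question": "your question"},
            )
            print(result)

asyncio.run(main())
```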
You can configure the server using command-line arguments:

```
python server.py --chat --model gpt-4 --temperature 0.7
```

Available options:

- `--model`: Model to use (default: `gpt-4o-mini`)
- `--method`: CoT method (default: `auto_cot`)
- `--temperature`: Sampling temperature (default: 0)
- `--max_length_cot`: Max tokens for CoT (default: 256)
- `--max_length_direct`: Max tokens for a direct answer (default: 32)
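For intuition, these options map naturally onto an OpenAI chat completion call. The sketch below is purely illustrative of that mapping and is not the repository's actual code:

```python
# Hypothetical mapping of the CLI options onto an OpenAI API call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # --model
    temperature=0,        # --temperature
    max_tokens=256,       # --max_length_cot (use --max_length_direct for direct answers)
    messages=[{"role": "user", "content": "your question Let's think step by step."}],
)
print(response.choices[0].message.content)
```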
Download the datasets and logs from the following:

https://github.com/kojima-takeshi188/zero_shot_cot/tree/main/dataset

https://github.com/kojima-takeshi188/zero_shot_cot/tree/main/log
```bibtex
@inproceedings{zhang2023automatic,
  title={Automatic Chain of Thought Prompting in Large Language Models},
  author={Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Smola, Alex},
  booktitle={The Eleventh International Conference on Learning Representations (ICLR 2023)},
  year={2023}
}
```
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.