RationaLlama is a Llama 2 model fine-tuned for logical reasoning on the LogiQA dataset.
Medium Article: RationaLlama: Fine-tuning an LLM for Logical Reasoning, and Why it's Hard...
- Create a conda environment and install the required dependencies:
conda env create -f environment.yaml
- Activate the environment:
conda activate rationallama
- Install the bitsandbytes package from source to enable quantization:
git clone https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install .
- Log in to Hugging Face from the terminal (the Llama 2 weights are gated, so you will need an access token):
huggingface-cli login
The datasets used in the article can be found here:
Training Dataset: LogiQA
Baseline Datasets: ReClor, LogiQA 2.0
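LogiQA items are multiple-choice: a short context, a question, and four answer options. A minimal sketch of rendering one record as a prompt (the field names and example wording below are assumptions for illustration; the actual dataset schema may use different keys):

```python
# A single LogiQA-style record (fields and wording are illustrative,
# not taken from the real dataset)
example = {
    "context": "All squares are rectangles. All rectangles have four sides.",
    "query": "Which statement must be true?",
    "options": [
        "All squares have four sides.",
        "All rectangles are squares.",
        "Some squares have three sides.",
        "No rectangles have four sides.",
    ],
    "correct_option": 0,
}

def format_prompt(record):
    """Render a multiple-choice record as a single prompt string."""
    letters = "ABCD"
    lines = [record["context"], record["query"]]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(record["options"])]
    lines.append("Answer:")
    return "\n".join(lines)

print(format_prompt(example))
```

A formatting function like this keeps the prompt template in one place, so training and evaluation prompts stay consistent.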