
RationaLlama

RationaLlama is a Llama 2 model fine-tuned to solve logical reasoning tasks on the LogiQA dataset.

Medium Article: RationaLlama: Fine-tuning an LLM for Logical Reasoning, and Why it's Hard...

Environment Installation

  1. Create a conda environment and install the required dependencies:
conda env create -f environment.yaml
  2. Activate the environment:
conda activate rationallama
  3. Install the bitsandbytes package from source to enable quantization (a quick sanity check follows below):
git clone https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install .
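
A minimal sanity-check sketch (not part of the original instructions) to confirm the source build is usable: it only verifies that PyTorch sees a CUDA device and that bitsandbytes imports cleanly.

# Sanity-check sketch: verify a GPU is visible to PyTorch and that
# bitsandbytes imports cleanly after the source build.
import torch
import bitsandbytes as bnb

assert torch.cuda.is_available(), "CUDA device not visible to PyTorch"
print("bitsandbytes version:", bnb.__version__)
print("GPU:", torch.cuda.get_device_name(0))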

🤗 Hugging Face CLI

Log in to Hugging Face from the terminal (you will be prompted for an access token):

huggingface-cli login
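
Once logged in, the Llama 2 base weights can be pulled from the Hub. The snippet below is a sketch only: it assumes access to meta-llama/Llama-2-7b-hf has been granted on Hugging Face, and the 7B size and 4-bit settings are assumptions rather than the exact configuration used by RationaLlama.

# Sketch: load a Llama 2 base model in 4-bit after logging in.
# Assumptions: access to meta-llama/Llama-2-7b-hf has been granted, and the
# 7B size / 4-bit settings stand in for whatever RationaLlama actually uses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=quant_config,
    device_map="auto",
)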

Datasets Used for RationaLlama

The datasets used in the article can be found here:

Training Dataset: LogiQA

Baseline Datasets: ReClor, LogiQA 2.0
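
For reference, a sketch of pulling LogiQA with the Hugging Face datasets library; the Hub dataset ID shown is an assumption, and the links above remain the authoritative sources for the exact versions used in the article.

# Sketch: load LogiQA from the Hugging Face Hub. The dataset ID
# "lucasmccabe/logiqa" is an assumption; prefer the links above if the Hub
# copy differs from the version used in the article.
from datasets import load_dataset

logiqa = load_dataset("lucasmccabe/logiqa")
print(logiqa["train"][0])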