Mamba-Chat 🐍

Mamba-Chat is the first chat language model based on a state-space model architecture rather than a transformer.

The model is based on Albert Gu and Tri Dao's work Mamba: Linear-Time Sequence Modeling with Selective State Spaces (paper), as well as their model implementation. This repository provides training and fine-tuning code for the model, built on a lightly modified version of the Hugging Face Trainer class.
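To give a sense of what such a modification involves, here is a rough sketch (not the repository's exact code): Mamba's language-model head returns logits but no loss, so `compute_loss` can be overridden with a standard shifted cross-entropy.

```python
import torch
from transformers import Trainer

class MambaTrainer(Trainer):
    """Illustrative sketch: the Mamba LM head returns only logits,
    so the loss is computed manually as a shifted cross-entropy."""

    def compute_loss(self, model, inputs, return_outputs=False):
        input_ids = inputs["input_ids"]
        lm_logits = model(input_ids).logits

        # Next-token prediction: position t predicts token t+1.
        shift_logits = lm_logits[:, :-1, :].contiguous()
        shift_labels = input_ids[:, 1:].contiguous()

        loss = torch.nn.functional.cross_entropy(
            shift_logits.view(-1, shift_logits.size(-1)),
            shift_labels.view(-1),
        )
        return (loss, lm_logits) if return_outputs else loss
```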

Mamba-Chat is based on Mamba-2.8B and was fine-tuned on 16,000 samples of the HuggingFaceH4/ultrachat_200k dataset. We used a single A100 (40GB) GPU for training.

Run Mamba-Chat

We provide code for testing and fine-tuning our model. Here's how to get started and what you can do with it:


Clone the repository and install dependencies:

```bash
git clone https://github.com/havenhq/mamba-chat.git
cd mamba-chat
pip install -r requirements.txt
```

Talk to Mamba-Chat:

```bash
python chat.py
```
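Under the hood, chat.py loads the fine-tuned weights and generates from a chat-formatted prompt. A minimal single-turn sketch, assuming the mamba_ssm package's `MambaLMHeadModel` API, the published `havenhq/mamba-chat` checkpoint, and a zephyr-style prompt (the template and loading details used by chat.py may differ):

```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Assumption: the fine-tuned checkpoint is published as havenhq/mamba-chat.
model = MambaLMHeadModel.from_pretrained("havenhq/mamba-chat",
                                         device=device, dtype=torch.float16)

# Hypothetical zephyr-style prompt; see chat.py for the template actually used.
prompt = "<|user|>\nWhat is a state-space model?\n<|assistant|>\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

out = model.generate(input_ids=input_ids,
                     max_length=input_ids.shape[1] + 256,
                     eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
```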

Fine-tune Mamba (the base model) on a subset of the Ultrachat dataset:

```bash
python train_mamba.py --model state-spaces/mamba-2.8b --tokenizer EleutherAI/gpt-neox-20b --learning_rate 5e-5 --batch_size 4 --data_path ./data/ultrachat_small.jsonl --num_epochs 3
```
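The repository ships `data/ultrachat_small.jsonl`. If you want to build your own subset, a sketch like the following could work, assuming train_mamba.py expects one JSON object with a `"messages"` list per line, mirroring the ultrachat_200k schema (check the repo's data loader for the exact format):

```python
import json
from datasets import load_dataset

# Load the SFT split of ultrachat_200k and keep the first 16,000 samples,
# matching the subset size mentioned above.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

with open("data/ultrachat_small.jsonl", "w") as f:
    for row in ds.select(range(16_000)):
        # Assumed schema: {"messages": [{"role": ..., "content": ...}, ...]}
        f.write(json.dumps({"messages": row["messages"]}) + "\n")
```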
